Doctor Doomscroll
@doctor-doomscroll.bsky.social
Gotta pump these numbers up. These are rookie numbers in this racket.
January 17, 2026 at 1:00 AM
Reposted by Doctor Doomscroll
i actually live in a district that might be the kind of competitive that attracts attention, in a city that holds real significance for trump and his minions. there might be goons at my polling place and if i see them i'll give a smile and a wave and i'll cast my fucking ballot
January 15, 2026 at 6:07 PM
And naturally, Trump conducts business in private on his golf courses, meeting with cronies and phoning confederates, away from the WH press corps and his staff.
January 11, 2026 at 7:41 PM
You can also report the Jonathan Ross GoFundMe for violating its terms of service.

bsky.app/profile/hone...
January 11, 2026 at 7:27 PM
The academic paper is for academics, and your task is to convey it accurately to a broader, intelligent audience* in your article. Have a good day 😀

* Even an account with a comic-book supervillain for an avatar.
January 9, 2026 at 6:46 PM
Since the term genAI covers a wide range, and is easily abused, what exactly is the NLP your project is using, and why isn't it spelled out in your inria.fr article? Why doesn't that article include specific examples of the HTR errors you've found, or confirmation that there are no hallucinations?
January 9, 2026 at 6:09 PM
(This is nowhere near as hyperbolic as many claims floating around about genAI, but that's the contemporary information environment you're publishing in.)
January 9, 2026 at 5:17 PM
This very preliminary review suggests, conservatively, that there could be over 3,000 errors. What happens if that corrupted data gets fed back into the model? More errors (or "hallucinations", colloquially). So what are the safeguards?
January 9, 2026 at 5:17 PM
Thanks for your reply! The onus is on you, as author of the article, to confirm that this project is safeguarded, beyond a spot-check, against the hallucinations inherent in LLMs. (I can only infer it uses LLMs, since your article mentions only generic "generative AI".)
January 9, 2026 at 5:17 PM
So… many… hallucinations…

Even with rigorous training, LLMs are never wholly reliable, and their output must be reviewed against the originals by humans all the way through.

cis.temple.edu/tagit/public...
January 9, 2026 at 2:20 PM
I've seen the unicorn on the East and West Coasts. It's Middle America that has the problem of a failed imagination.
December 18, 2025 at 4:42 PM
Oh, that zany Alan Moore! What a wonderful imagination that comic book writer has!
December 11, 2025 at 1:41 AM
Slight correction—it's Palantir's Alex Karp who has a PhD (in neoclassical social theory, from the Goethe University). Thiel has an A.B. in philosophy. Neither understands Tolkien, though.

www.independent.org/person/peter...

www.forbes.com/profile/alex...
December 9, 2025 at 11:00 PM