#Llm
I genuinely believe this is a huge part of the LLM worship; half of these people are functionally illiterate
Putting aside all the horrible shit in them: I cannot believe the global elite composes emails like fucking horses who somehow learned to crudely type. My emails read like Shakespeare compared to the lowercased, unpunctuated screeds written to the world's biggest pedophile. Fucking write like adults!
November 12, 2025 at 6:40 PM
new theory is that elon musk is actually a low parameter llm
November 11, 2025 at 1:28 AM
thread. interesting research

but also, they named their french LLM Baguettotron

i hope it sticks. i hope Baguettotron is the best LLM out there and everyone has to say its name
Breaking: we release a fully synthetic generalist dataset for pretraining, SYNTH, and two new SOTA reasoning models exclusively trained on it. Despite having seen only 200 billion tokens, Baguettotron is currently best-in-class in its size range. pleias.fr/blog/blogsyn...
November 11, 2025 at 2:57 AM
All those moments will be lost in time

Like data in an LLM

Time to dAI
November 12, 2025 at 12:33 PM
expansion of LLM chatbots and the like is incompatible with human survival
November 12, 2025 at 12:10 PM
seriously though, I've often wondered how much of LLM "hallucination" is just shitposting in the training data bubbling up
November 12, 2025 at 10:59 PM
Grokpedia is not an encyclopedia, it's an environmental disaster.

It wastes energy and water to churn out denialist ideological slop spewed out by the LLM that calls itself MechaHitler.

What's the #1 flaw of an LLM? It's not reliable at citing sources.

Without reliable citations, there is no encyclopedia.

It's toxic garbage
November 10, 2025 at 9:25 PM
Demonstrating that an LLM will only ever be a bland paraphrase of human craft...
November 11, 2025 at 6:56 AM
The dream of private-college owners is to sell LLM-generated distance-learning (EAD) courses.
November 9, 2025 at 4:05 PM
I think Musk is like an LLM and doesn't really "know" anything. He just reproduces content.
Do you think Elon Musk knows that Tolkien and Hitler fought on opposing sides at the Battle of the Somme? That Tolkien much later called Hitler a “ruddy little ignoramus” while refusing to provide a German publisher with proof of his non-Jewish ancestry in order to be published in Germany?
November 11, 2025 at 7:12 PM
I made a fake LLM to jingle keys in front of my day job’s leadership
November 9, 2025 at 11:57 AM
Doctors owe patients duties. Lawyers owe clients duties. Clergy owe penitents duties. At a minimum, all have a duty to act in the best interests of the patient/client/penitent.

If you want to privilege communications with an LLM, you need to explain what duty the LLM owes the babbler.
More and more people these days are interacting with LLMs as legal, psychological, and spiritual confidants. The government should not be able to have access to those thoughts willy-nilly. My latest: www.nytimes.com/2025/11/10/o...
Opinion | Doctors, Lawyers and Priests Keep Secrets. Why Not Your Chatbot?
www.nytimes.com
November 10, 2025 at 6:46 PM
No, it's an LLM wrapper around Perplexity, and it's not fact-checking anything; it's just extruding statistically likely text responses, bullshit words that accidentally portray factual reality instead of nonsense.
November 11, 2025 at 7:44 PM
i am seeing people retweet @aigeneral.bsky.social who I know are not here for llm slop so just please be advised there is an llm slop generator posting under the hypnovember tag lol
November 12, 2025 at 3:37 PM
One of the most fascinating aspects of this mess that threatens to - quite literally - kill me? The Schumer-stans in my replies. Are they real? Are they a clever LLM op? who can say
He voted no. You keep telling yourself that he voted yes.
November 10, 2025 at 11:36 PM
Every single time an LLM hallucinates, I am grateful:

Grateful that I spotted it, and thus reminded that any and all LLM output needs to be validated. You can never trust these things 100%, unless you have additional validation in place that is 100% reliable.
November 9, 2025 at 2:20 PM
i regret the error. some smaller companies sell them specifically as therapists. i sincerely and unironically hope their underlying llm is claude. bsky.app/profile/tech...
Not true! Nomi.ai actually advertises one of their product personas as an unlicensed, cheap therapist!
November 11, 2025 at 1:12 AM
It's why they were so excited about outsourcing writing them to an LLM
Content aside, it's always jarring to see highly educated people writing emails like teenagers
November 12, 2025 at 5:08 PM
My first piece of advice to junior contributors is to STOP using vibe coding for PRs. OSS is always more about people than about code. We don't need more LLM-generated code; we need more people who care.
November 10, 2025 at 11:47 AM
I went to the bad place and read the original threads. There are tons of similar posts. Elon has created the first tin-foil-hat-wearing LLM.
High end wealth inequality allows for shit like this
November 12, 2025 at 7:01 AM
This notification indicates that a Wikipedia editor has identified or suspects that LLM text has been added to a page. It’s a warning while editors sort out a page update to remove the suspected LLM text.

It does NOT mean that Wikipedia as an org is using LLM text. Hope this helps readers.
November 10, 2025 at 5:20 PM
asking any new LLM i test "how to set up wifi on OpenBSD" and immediately discarding it if it mentions wpa_supplicant
November 11, 2025 at 1:43 AM
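For context on why wpa_supplicant is the tell: OpenBSD handles WPA in the base system, through ifconfig and a hostname.if(5) file, with no separate supplicant daemon. A minimal sketch, assuming a hypothetical iwm0 interface and placeholder credentials (the actual interface name depends on the driver: iwm0, iwx0, athn0, and so on):

    # contents of /etc/hostname.iwm0
    join "homenet" wpakey "example-passphrase"
    inet autoconf

    # apply it without rebooting
    doas sh /etc/netstart iwm0

On older releases the last config line was "dhcp" rather than "inet autoconf", but either way the point stands: no wpa_supplicant anywhere.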
A lil' look at what's in the kitchen at the moment. Spent the day making this very high level visual of LLM inference.
November 10, 2025 at 6:13 PM
i think “ai data centers in space makes sense”
is the peak of the bubble. this is the beanie baby happy meal toy of the LLM era.
November 9, 2025 at 2:24 AM
more like slopaganda. this is llm generated.
November 9, 2025 at 3:12 PM