Computational Cosmetologist
@dferrer.bsky.social
ML Scientist (derogatory), Ex-Cosmologist, Post Large Scale Structuralist

I worked on the Great Problems in AI: Useless Facts about Dark Energy, the difference between a bed and a sofa, and now facilitating bank-on-bank violence. Frequentists DNI.
Qualia have that seductive mystique to them. Our very special minds interact specially with the immaterial. Scientists hate this one trick but they can’t stop you.

And honestly, they *are* more fun to talk about
February 13, 2026 at 1:15 AM
3.12 generics / type params are so much better to look at
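For anyone who hasn't tried them yet, a minimal before/after sketch (the Stack/first/Pair names are just illustrative):

    # Pre-3.12: explicit TypeVar plumbing
    from typing import TypeVar, Generic

    T = TypeVar("T")

    class OldStack(Generic[T]):
        def push(self, item: T) -> None: ...

    # 3.12+ (PEP 695): type parameters declared inline
    class Stack[T]:
        def push(self, item: T) -> None: ...

    # Works for functions and type aliases too
    def first[T](items: list[T]) -> T:
        return items[0]

    type Pair[T] = tuple[T, T]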
February 13, 2026 at 1:03 AM
His books are great! If you haven’t seen him in interviews, I suggest just letting the books stay your memory and ignoring me
February 12, 2026 at 11:02 PM
He’s not Dawkins-level bad, but none of the four horsemen came out of the 2010s in glory
February 12, 2026 at 11:00 PM
If you haven’t encountered his views on feminism, it’s probably better to not. Also has some Islam takes that have not aged well at all.
February 12, 2026 at 10:59 PM
Except gender, and sometimes Islam. Otherwise, yeah
February 12, 2026 at 10:57 PM
This is the first wave of better pretraining losses, ones that penalize hallucination, making it to production. I think we’re gonna see a lot of movement across the board on this in the next few months
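Rough intuition for why the objective matters, as a toy expected-score calculation (my own illustrative sketch, not any lab's actual loss): under accuracy-only grading, guessing weakly dominates abstaining, so models learn to confabulate; penalize wrong answers and abstaining wins whenever confidence is low.

    # Toy sketch (illustrative only, not any lab's actual objective):
    # expected score for guessing vs. abstaining when a wrong answer
    # costs `wrong_penalty` and saying "I don't know" scores zero.
    def expected_scores(p_correct: float, wrong_penalty: float) -> tuple[float, float]:
        guess = p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty
        abstain = 0.0
        return guess, abstain

    # No penalty: guessing pays even at 20% confidence.
    print(expected_scores(0.2, 0.0))  # (0.2, 0.0)
    # Penalty of 1: guessing only pays above 50% confidence.
    print(expected_scores(0.2, 1.0))  # approximately (-0.6, 0.0)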
February 12, 2026 at 12:33 PM
Dennett was a garbage person in a lot of ways so apologies for the association—but a lot of his work against anti-materialism still holds up
February 12, 2026 at 11:34 AM
This is a similar point to Dennett’s objection to Swamp Man (somehow even worse than Searle’s Chinese Room), the Knowledge Argument, etc.: a thought experiment that asks us to imagine something that is “possible” only in the loosest sense of the word and then trust our intuition about it is dishonest.
February 12, 2026 at 11:34 AM
“I can install linux as a dual boot on mom’s computer and she won’t care because it won’t change anything in windows”—last words of a boy about to get a hard lesson about GRUB, the MBR, and how to fix it
too many kids today have never destroyed the family PC with sketchy downloads

you must learn to fear Computer before you can properly wield Computer
folks today don't know the abject fear of inadvertently downloading, for the first (and only lol) time, a .exe file from kazaa or limewire and it shows

def one of those "it only takes one time to learn" but oooo boy at what cost for that single instance
February 12, 2026 at 3:32 AM
Genuinely curious how this goes without tool use. The whole agent trend means a lot of models are post-trained with it allowed. It’s a fun puzzle finding eval problems where you can leave it on (and see their full abilities) but that aren’t trivialized by search.
February 12, 2026 at 3:24 AM
I love this idea. Not because it’s good or because I’ll use it—it is transcendentally bad. I am filled with joy at the thought of the people who *are* going to use it. I hope they post their stories. It’s like fireworks you have to be blackout drunk to light.
February 12, 2026 at 12:13 AM
Authorial intent is often for something that would ruin good art. Nabokov’s intention for Pale Fire, from interviews, was for it to be a dumb satire of how badly critics / editors mangle books. It’s the worst serious take on the book I’ve ever read. His work is so much better if you ignore him
February 11, 2026 at 11:56 PM
In retrospect, I way over-fixated on GPT-3’s failure to do logic better than chance. Turns out that *was* just something you could train out. No need for any of the big advances in understanding internal representations I thought it would require. We didn’t even need to scale up any further. Oops.
February 11, 2026 at 11:26 PM
Getting to the former absolutely doesn’t mean the latter will happen. We *should* doubt that. But I have to stop myself from acting like my skepticism is the same as it was when GPT-3 came out. It’s oriented the same way, but my previous thresholds for success were met. I need to account for that.
February 11, 2026 at 11:20 PM
I think skepticism is totally warranted! We just have to have meta-skepticism of our own stance there too—it’s easy to catch yourself steadily moving the goalposts down the field from “not going to write code I won’t laugh at” to “not going to write code I can maintain in a few years”
February 11, 2026 at 11:20 PM
Even more than code, math has become the thing high-end LLMs excel at. Being able to directly attribute every step in a proof to its success or failure has brought Math from “lol how can you not count to 3” to “one of the first jobs that could be totally changed by this technology”
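A minimal Lean 4 sketch of what that per-step attribution looks like (my example, not from the thread): each tactic line either elaborates or fails with an error, so credit assignment is exact rather than end-to-end.

    -- Each step either checks or fails, so the grading signal is per-step.
    theorem add_comm_example (a b : Nat) : a + b = b + a := by
      exact Nat.add_comm a b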
February 11, 2026 at 10:59 PM
I had to rethink a lot of priors during the 6 months last year when LLMs went from “can’t add” to “can’t do undergrad hw math” to “can’t find group isomorphisms it takes me an hour to do by hand” to “can’t reinvent obscure parts of my thesis from scratch reliably given only a vague description”
Unpopular opinion here (and I’m not even an AI booster, I’m mostly a skeptic), but I think a whole lot of people are overindexed on AI being the too-many-fingers-and-weird-teeth machine and aren’t accepting or even understanding that its output has generally been continuously improving.
I understand why people are exhausted by AI hype, and why those of us squarely in the corner of "human dignity uber alles" see AI doomerism as self-serving hype, but I *really* think people on the left broadly need to start thinking seriously about the possibility of the hype being...true.
February 11, 2026 at 10:54 PM
Meanwhile my most optimistic prediction from a year ago—that we’d see a journal-published academic math paper by Q3 2026 where the main contribution was from AI—is on track to happen. So on track that it feels like I was too pessimistic.
February 11, 2026 at 6:44 PM
This absolutely includes me! Any pessimistic opinion about language models I’ve given has ended up being wrong.

Except about recurrence. My half-joking disdain of RNN derivatives is still going strong. I’m hoping if I keep doubling down we’ll finally get long-horizon learned memory that isn’t shit.
February 11, 2026 at 6:40 PM
When Altman lies about his product, it’s usually to an audience that expects him to lie in a setting where lies are common and expected. That doesn’t make his statements anything other than lies, but it’s not interesting to argue against them.
February 11, 2026 at 6:31 PM
Many hype-people also seem completely aware they are saying insane things in a way the deniers aren’t. Sometimes in a bad-faith way, and sometimes in a way that’s just advertising.

“GPT-5.2 isn’t actually a machine god” feels as necessary to say as “Steve’s new subs aren’t actually the best ever”
February 11, 2026 at 6:31 PM
Arguing with the boosters also usually feels like arguing with advertisement puffery. “Downy isn’t actually the best possible laundry detergent conceivable across all possible worlds” feels pointless to say.
February 11, 2026 at 6:20 PM
But also it’s hard to find academic linguists or philosophers of mind who actually agree on what “intelligence” means either.

I’ll grant the industry hype is more vacuous, but it’s not like there’s a non-controversial definition they could use
February 11, 2026 at 6:14 PM