Computational Cosmetologist
@dferrer.bsky.social
ML Scientist (derogatory), Ex-Cosmologist, Post Large Scale Structuralist

I worked on the Great Problems in AI: Useless Facts about Dark Energy, the difference between a bed and a sofa, and now facilitating bank-on-bank violence. Frequentists DNI.
Struggling to reconcile my sneering dismissal of Prompt “Engineering” as a discipline with my just having written a 20k-word document for work on how to do it effectively
November 18, 2025 at 3:00 PM
Dennett had plenty of problems, but his dismissal of the argument by intuition pump has always stuck with me. If your “thought experiment” involves pushing the subjects beyond the bounds of anything we’ve observed, it’s a bad argument. Usually it amounts to a bare assertion of the thesis.
"Working in isolation"

Wow, she switches from "AGI is impossible" to "but otherwise it'd be in NC and easy" on a dime

(Hint: algorithms have implied dependency graphs and data flows)

Maybe she just meant Jay and Silent Bob do it all...
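Re the hint: a trivial sketch of the difference. Some computations have no dependencies between elements and parallelize freely; others form a chain where each step needs the previous one, and that implied dependency graph is exactly what keeps them out of the "easy to parallelize" bucket. (Toy example, not anyone's real workload.)

```python
data = [1.0, 2.0, 3.0, 4.0]

# No edges between iterations: embarrassingly parallel.
squares = [x * x for x in data]

# Each iteration depends on the previous one: a serial dependency chain.
acc = 0.0
running = []
for x in data:
    acc = acc * 0.5 + x
    running.append(acc)
```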
November 18, 2025 at 2:54 PM
Reposted by Computational Cosmetologist
“No way to prevent this” says only language where this regularly happens
November 17, 2025 at 11:11 PM
Brings to mind Sagan's warning not to be "so open-minded your brains fall out." There is a lot of value in the critique of positivism, de-centering one's own viewpoint, etc., but we have to stop before sophistic absolute relativism. Or, if you want, Nothing is True isn't My Truth.
it would be good if we progressives would stop being parodies of ourselves. sometimes
November 11, 2025 at 10:38 PM
Even the idea that we need to explain the previous movies at all (“007 is a code multiple people have held”) is too much for me. New works can have references or homages to old ones, but the idea that we need a Bond continuity is absurd.
James Bond’s death in No Time to Die is causing a nightmare for the next film. Writers are stuck because Bond “was blown to pieces.”

Anthony Horowitz, author of three 007 novels, says:

“You can't have him wake up in the shower and say it was all a dream.”

radaronline.com/p/james-bond...
November 11, 2025 at 2:41 PM
There's the overwritten AI style--which can mostly be prompted away--but the larger weakness of AI for this kind of project is that it's bad at coming up with a new, coherent world. It struggles with things like having a unifying theme or a connected story and setting.
like, I'm going to pop three examples here, two written by people and one AI slop, and I will bet everyone can instantly tell which is which.
November 7, 2025 at 3:42 PM
Beyond the race “science” bigotry here, this is a classic failure mode of ML. Even if there were somehow a real effect here, it would be much smaller than the race / sex biases in the training set by virtue of coming from our society. Models will always learn the easiest signal they can.
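A toy sketch of that failure mode, with synthetic data and made-up effect sizes: when a biased attribute predicts the labels better than the weak "real" signal, the model leans on the bias.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # spurious demographic proxy
real_signal = rng.normal(0, 1, n)    # the weak "true" effect

# Labels mostly reflect the societal bias, only faintly the real signal.
logits = 2.0 * group + 0.2 * real_signal
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression().fit(np.column_stack([group, real_signal]), y)
print(model.coef_)  # the coefficient on `group` dwarfs the real signal's
```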
November 7, 2025 at 12:17 AM
“It’s bad ontological hygiene to block people just because you disagree” is maybe a defensible position—in the sense of “you should choose not to do it”. The Special Block Police reviewing your blocks for wrongthink is such a bad idea that it feels like a refutation of the original position.
November 5, 2025 at 2:00 AM
I was willing to entertain the idea that so many anti-AI people blocking her was, if not quite bad, at least a symptom of something… not ideal maybe. I was willing to listen. This is pathetic, though. No one is owed an audience. Policing blocks would be an insane choice.
I'm sorry, but this is the most incredibly entitled issue I've seen in a long time. Not only does this fatally compromise critical safety features, but it undermines the idea that users have choice over what they see on their TL and who can see their posts*.

*yes, I know the firehose.
Mitigate discrimination/bullying against others via Block List erasure · Issue #9077 · bluesky-social/social-app
Describe the Feature From my own experience: It appears I am currently snowballed-upon-snowballed into Block Lists that mark me as a Spammer, Crypto Bro, etc., amounting to almost a quarter million...
github.com
November 5, 2025 at 1:47 AM
Not to be fully conspiracy-brained, but are we sure the AEI didn’t cook this up to be as maddening and disheartening as possible? There’s no claim these people are randomly selected. This feels more like a push-poll than a genuine exercise.
October 29, 2025 at 3:56 PM
Betting against LLMs having something worth calling "understanding" is like betting against Bell Inequality Violations. It was always a little dubious, but not entirely out-of-hand dismissible at first. As evidence has mounted, there's still some little path for proponents to cling to.
October 28, 2025 at 1:34 AM
There's a seductive idea that's steadily become more popular from ~ the 19th century up through today. You see it behind the fascination with the Nazi military, the Cult of the Operator, Post-liberalism, the "Dark Enlightenment", etc.: the chains of morality, empathy, humanity are holding us back.
October 22, 2025 at 9:31 PM
Apropos of no other discussions of unstable loci of meaning and the size of numbers, here's my first test chat with an old RAG demo I made that uses tools exclusively to find relevant passages from philosophy ebooks I pirated for class in my undergrad.

Its takes are much hotter now.
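For flavor, a minimal sketch of the kind of passage-search tool a demo like that might expose to the model. (The corpus, names, and TF-IDF choice here are all hypothetical; the actual demo's code isn't in the post.)

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

PASSAGES = [
    "The unexamined life is not worth living.",
    "Man is condemned to be free.",
    "Whereof one cannot speak, thereof one must be silent.",
]

vectorizer = TfidfVectorizer()
passage_vecs = vectorizer.fit_transform(PASSAGES)

def search_passages(query: str, k: int = 2) -> list[str]:
    """Tool the model calls to fetch the k passages most similar to `query`."""
    scores = cosine_similarity(vectorizer.transform([query]), passage_vecs)[0]
    return [PASSAGES[i] for i in scores.argsort()[::-1][:k]]

print(search_passages("freedom and existentialism"))
```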
October 16, 2025 at 7:26 PM
There's this amazing Stancil phenomenon where he makes a post that maybe kinda is a Motte / Bailey ("'Love' is a four letter word"). He almost never says the Bailey part explicitly (here it would be something like "Love is pointless shit"). I make no claim as to what Will's intended reading is.
October 16, 2025 at 7:10 PM
In general, this kind of Post-Truth "Deep Battle" doctrine is a loser for us. For Trump specifically, though, he is both broadly culturally relevant and has no fixed beliefs. Do you think my AI Trump is wrong to endorse Medicaid for All? Here's a video of him saying it's great from 2016. Is it real?
There's also no need to let the Republicans own AI Trump. It disgusts me on a deep level, but there's nothing stopping anyone anywhere from "discovering" they too have the Trump endorsement. Use AI Trump to sell dish detergent. Have an AI Trump on SNL. Let a hundred thousand AI Trumps bloom.
October 16, 2025 at 3:58 PM
It's surreal that this has such different reaction here to the recent worries about people "befriending" or "loving" chatbots. Like---in that case people here can understand "LLMs are dangerously good at mimicking human interactions and many people struggle with emotional distance with them."
"this is a bad thing that you should not be promoting because it is actually based on realworld racism and is used like realworld racism" is apparently the anti-woke position on bluesky right now, and everyone is rushing to show that they don't agree with it
October 8, 2025 at 4:32 AM
This may be the most fun I’ve had making diagrams for a presentation
September 18, 2025 at 12:53 PM
As someone who is (sadly) not a layman at this, this is a good description of the problem. It's a design-level failure if your "do I bomb the hospital?" system has to roll a die to decide between "yes" and "no", even if "yes" is *really* unlikely.
LLMs are unbounded but weighted string generators. The universe of possible strings is infinite, and thus the number of *possible* errors is also infinite; adjusting weights or pre-prompting the model simply makes some errors more or less likely.
I find this essay compelling, and one big takeaway is that all LLM production is hallucination: understanding the risk of error is essentially impossible because the set of errors is unbounded.
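A toy illustration of the "weighted string generator" point, with made-up logits: softmax never assigns a token probability of exactly zero, so no output string, and no error, is strictly impossible; tuning only reshapes the distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["yes", "no", "maybe"]
logits = np.array([4.0, -2.0, 0.0])            # hypothetical model scores

probs = np.exp(logits) / np.exp(logits).sum()  # softmax: all entries > 0
print(dict(zip(vocab, probs.round(4))))        # "no" is unlikely, never impossible
print(rng.choice(vocab, p=probs))              # sampling can still produce it
```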
August 26, 2025 at 11:40 PM
The precise way in which LLMs model high-level concepts is not well understood, but the low-level way they model structures like Language *is*. The most intuitive starting point for the Transformer--if you already have a strong ML / math foundation--is that a Transformer is a "Dynamic Graph Perceptron."
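A bare-bones numpy sketch of that reading (single head, random placeholder weights, no residuals or LayerNorm): attention builds an input-dependent weighted graph over the tokens, then one shared MLP, the "perceptron", runs at every node.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 4, 8                                   # tokens, model width
x = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)
graph = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
# `graph` is a T x T matrix of edge weights that depends on the input:
# the "dynamic graph". Mixing values over it is message passing.
mixed = graph @ v

W1, W2 = rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d))
out = np.maximum(mixed @ W1, 0.0) @ W2        # same MLP applied at every node
```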
August 14, 2025 at 12:25 PM
This is a great overview of the interpretation (and non-interpretation) of LLMs. There's an orthogonal point I think is important, though: LLMs are built in a way we *expect* to produce an effective model of language that is hard to interpret. Our current situation is not a surprise.
There is some confusion about whether or not we understand LLMs. The answer is yes and no, but mostly no. It's a complicated enough question that it seemed like it needed an article.

www.verysane.ai/p/do-we-unde...
Do we understand how neural networks work?
Yes and no.
www.verysane.ai
August 14, 2025 at 11:10 AM
Somehow GAN mode collapse returned
August 8, 2025 at 8:42 PM
This is both true and grading the model on too much of a curve. The model clearly has memorized the token-to-letter correspondence—for other models where you can get the reasoning trace, models explicitly separate the letters and count them “manually”. GPT-5 is just still shockingly bad at counting.
We have a lot of fun tripping up AI with this, but asking it to parse a word by individual letters is kind of a nonsensical question given how tokenizers operate. It's like asking a Chinese speaker how many G's are in 中国; that's not how they process language.
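To see what the model actually "sees", here's a quick check with OpenAI's tiktoken (the exact split depends on the encoding and may differ from any particular model's tokenizer):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("strawberry")
# Prints multi-letter chunks, not individual letters -- the R's the model
# is asked to count are buried inside opaque token IDs.
print([enc.decode([i]) for i in ids])
```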
August 8, 2025 at 1:35 PM
All of the big models released this year can answer this question well without tool calling. We can try to tell ourselves that this is just because the model has memorized every possible description of geographic relations between states, though this isn't a common benchmark or a thing people discuss.
July 30, 2025 at 8:39 PM
Heard someone say (again) today that "Idiocracy was a documentary," and I thought back to the excellent @sarahz.bsky.social's video on it: www.youtube.com/watch?v=o52z...
I don't disagree with her. I think her points hold up. But the current moment and the movie still seem to have "resonance".
No, Idiocracy Is Not A Documentary
YouTube video by Sarah Z
www.youtube.com
June 11, 2025 at 2:35 AM
There’s this incredible, persistent innumeracy in the replies to this and the original, where people say “but I can eat beef! LLMs are useless!” or “as if we shouldn’t just stop eating beef.” *Training* one of the most costly models to produce to date used maybe as much water as *1* entire cow.
large cohort of people on here categorically refuse to entertain this point but Mark is correct. AI water use is a rounding error compared to cattle feed and it will likely fall as models become more efficient. the problem with AI is what it does
A cheeseburger uses a lot more water than a ChatGPT request 🍔

Actual farms, not the data center variety, are sucking up groundwater more quickly than surface water, explains @markgongloff.bsky.social 🎥
June 4, 2025 at 12:56 PM