Computational Cosmetologist
@dferrer.bsky.social
ML Scientist (derogatory), Ex-Cosmologist, Post Large Scale Structuralist

I worked on the Great Problems in AI: Useless Facts about Dark Energy, the difference between a bed and a sofa, and now facilitating bank-on-bank violence. Frequentists DNI.
Some of them are so well known that we don't consider them flaws. There is an image that looks *more* like a smiling human face than any real face, despite objectively having little in common with one. We could imagine aliens or AIs without our visual system finding it strange that we see this way.
November 7, 2025 at 3:42 PM
My completely unserious personal benchmark for new LLMs is how much joy asking basic SWE questions to this pipeline brings me. We've come a long way in the last year.

Here is the triumphal conclusion to "The Linterphallus: Symbolic Castration of the Pythonic *Real*" from DeepSeek 3.1
October 16, 2025 at 7:59 PM
Apropos of no other discussions of unstable loci of meaning and the size of numbers, here's my first test chat with an old RAG demo I made that uses tools exclusively to find relevant passages from philosophy ebooks I pirated for class in my undergrad.

Its takes are much hotter now.
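
A minimal sketch of the kind of passage-search tool such a demo might expose (the function name, corpus, and scoring here are hypothetical stand-ins, not the demo's actual code):

```python
# Hypothetical tool the model can call to fetch passages; scoring is crude
# keyword overlap, which is enough to illustrate the tool-calling shape.
import string

def _words(text: str) -> set[str]:
    # lowercase and strip punctuation so "language." matches "language"
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def search_passages(query: str, corpus: dict[str, str], top_k: int = 3) -> list[tuple[str, str]]:
    q = _words(query)
    scored = sorted(corpus.items(), key=lambda kv: len(q & _words(kv[1])), reverse=True)
    return scored[:top_k]

corpus = {
    "ecrits-1": "The unconscious is structured like a language.",
    "being-time-4": "The question of the meaning of Being must be posed anew.",
}
print(search_passages("meaning and language", corpus, top_k=1))
```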
October 16, 2025 at 7:26 PM
This may be the most fun I’ve had making diagrams for a presentation
September 18, 2025 at 12:53 PM
No more Vichy+ here
September 18, 2025 at 1:01 AM
This was great for early modeling of smaller graphs like social media and molecular structures. But it requires us to *know* the graph in advance. We can combine these two ideas: use a bilinear form to compute the edges from each node's vector embedding, then pass messages on them.
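
A minimal NumPy sketch of that combination (this is single-head attention in miniature; the variable names and scaling are my reading, not from the thread):

```python
# Compute edge weights from the node embeddings with a bilinear form,
# then message-pass on the resulting graph.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 5, 8
X = rng.normal(size=(n_nodes, dim))     # node embeddings (e.g. token vectors)
W_q, W_k, W_v = (rng.normal(size=(dim, dim)) for _ in range(3))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Bilinear form on pairs of nodes: scores[i, j] = x_i @ W_q @ W_k.T @ x_j
scores = (X @ W_q) @ (X @ W_k).T / np.sqrt(dim)
A = softmax(scores)                     # the edges are *computed*, not known in advance

X_next = A @ (X @ W_v)                  # message passing on the computed edges
print(X_next.shape)                     # (5, 8)
```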
August 14, 2025 at 12:25 PM
Now, let's say I want to *do something* with a graph. Pre-2017, the way to model interactions on graphs was "message passing on graphs". Essentially, each node is given a vector. We combine vectors along weighted edges, and then mix the results using a standard perceptron.
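
A minimal sketch of that older scheme, assuming the weighted adjacency matrix is known and fixed (all names here are mine):

```python
# Classic message passing on a *known* graph: average neighbor vectors along
# weighted edges, then mix the result with a one-layer perceptron.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 5, 8
X = rng.normal(size=(n_nodes, dim))     # one embedding vector per node
A = rng.random((n_nodes, n_nodes))      # fixed edge weights, given in advance
A /= A.sum(axis=1, keepdims=True)       # normalize so each node averages its neighbors
W = rng.normal(size=(dim, dim))         # perceptron weights

def message_pass(X, A, W):
    messages = A @ X                    # combine vectors along weighted edges
    return np.maximum(messages @ W, 0)  # mix with a perceptron (ReLU)

print(message_pass(X, A, W).shape)      # (5, 8): updated node embeddings
```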
August 14, 2025 at 12:25 PM
But there are other, less regular graphs on language as well. Here we have phrases linked to their referent phrases across sentences. These kinds of relationships are much harder to compute with a formal procedure. These semantic graphs are derived from the syntactic graph (the sentence diagram)
August 14, 2025 at 12:25 PM
The familiar sentence diagram is a graphical representation of the relationship between words. Traditional language AI often tried to algorithmically construct structures like this. At the single sentence level this is tractable. Grammar has a strong, regular structure.
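
For a concrete taste of that procedure, here is a dependency parse with spaCy (my choice of library; the post doesn't name a tool):

```python
# The parser constructs the sentence diagram algorithmically:
# every word gets exactly one edge to its syntactic head.
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm
doc = nlp("The parser builds a tree from this sentence.")

for token in doc:
    print(f"{token.text:10} --{token.dep_}--> {token.head.text}")
```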
August 14, 2025 at 12:25 PM
Language has many parallel graphical structures. The first is the chain-graph of adjacency. Words are made up of letters arranged in a particular order. For LLMs, the equivalent of this is the sequence of tokens.
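
As a toy sketch, here is that chain graph built explicitly, with whitespace splitting standing in for a real tokenizer:

```python
# The adjacency chain: the only edges connect each token to its neighbors.
import numpy as np

tokens = "words are made up of letters".split()  # toy stand-in for tokenization
n = len(tokens)
A = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1  # undirected edge between adjacent positions

print(tokens)
print(A)
```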
August 14, 2025 at 12:25 PM
Somehow GAN mode collapse returned
August 8, 2025 at 8:42 PM
All of the big models released this year can answer this question well without tool calling. We can try to tell ourselves that this is just because the model has memorized every possible description of geographic relations between states, though this isn't a common benchmark or a topic people often discuss.
July 30, 2025 at 8:39 PM
A fun hobby project I’ve been working on is seeing how much (with a mixture of RAG and prompting) I can force a model out of the groveling obsequiousness Instruct-tuning tends to give it. I present “The Lacanalyzer”, a Qwen3-based chatbot with a condescending obsession with critical theory.
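
A minimal sketch of the prompting half, assuming a chat-completions style interface (the persona text and retrieved passage are hypothetical stand-ins, not the actual Lacanalyzer prompt):

```python
# Prepend a persona system prompt and a retrieved passage before the user turn.
SYSTEM_PROMPT = (
    "You are a condescending critical theorist. Never apologize, never flatter, "
    "and read every question through Lacan."
)

def build_messages(user_msg: str, retrieved_passage: str) -> list[dict]:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "system", "content": f"Relevant passage:\n{retrieved_passage}"},
        {"role": "user", "content": user_msg},
    ]

messages = build_messages(
    "Why does my code have so many linter errors?",
    "The letter always arrives at its destination.",
)
print(messages)  # pass to any chat API serving a Qwen3 instruct model
```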
May 28, 2025 at 9:43 PM