Vlad Chituc
@vladchituc.bsky.social
Cognitive scientist studying how morality, happiness, and other subjective magnitudes can be quantified.

Postdoctoral fellow at Yale University

https://www.vladchituc.com/
So while self-report ratings only show that a war crime is worse than a prank when scenarios are presented together (or within-subjects), you get the obvious difference with magnitude estimation whether you're rating the scenarios together or apart (between-subjects).

5/7
September 28, 2025 at 9:45 PM
This is obvious for everyday adjectives—if I say I have a small house and a big dog, you know I'm not saying that the latter is larger than the former.

And the same holds true for immorality.

(Also: this project has my favorite joke that I've ever snuck into a paper).

3/7
September 28, 2025 at 9:45 PM
Thrilled to announce a new paper out this weekend in
@cognitionjournal.bsky.social.

Moral psychologists almost always use self-report scales to study moral judgment. But there's a problem: the meaning of these scales is inherently relative.

A 2 min demo (and a short thread):

1/7
September 28, 2025 at 9:45 PM
Or 2) vaguely gesturing toward algorithmic improvements, as if those can play any meaningful role here at all.

"Modern chess algorithms are better than Deep Blue with less computational power! That's exponential growth! Hard takeoff baby!!"

This is how fucking dumb you sound:
September 16, 2025 at 10:43 PM
At best, you get people 1) gesturing toward evolution or growth over time, as if these are anything more than an indirect proxy for the stuff we actually care about (which, again and without exception, grows sublinearly), and as if they're not in fact prone to be extremely misleading on their own...
September 16, 2025 at 10:43 PM
Every complex system we could think to care about scales sublinearly. I can think of no exception. It's how AI works (see above), it's how chess works (exponent of ~.35; a linear increase in search creates a linear increase in Elo but a geometric increase in board states)...
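To make the chess point concrete, here's a toy sketch. The branching factor (~35 legal moves per position) is the standard rough figure; the Elo-per-ply number is a made-up placeholder for illustration, not a measured value:

```python
BRANCHING = 35      # rough average legal moves per chess position
ELO_PER_PLY = 100   # hypothetical illustrative value, not a measured figure

for depth in range(1, 6):
    states = BRANCHING ** depth   # board states grow geometrically with depth
    elo = ELO_PER_PLY * depth     # rating grows only linearly with depth
    print(f"depth {depth}: ~{states:.1e} states, +{elo} Elo")
```

Linear gains in rating, geometric costs in states searched: that's the sublinear scaling in one loop.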
September 16, 2025 at 10:43 PM
But if you actually look at how e.g. model size scales with benchmark performance: it's ALSO diminishing returns! You have to keep doubling your model size for a tiny improvement. Formally, we'd describe these all as power laws with a sublinear (< 1) exponent (alpha below)
September 16, 2025 at 10:43 PM
Now a 30% decrease in error can be a lot, but each 100x increase in parameter count decreases a smaller and smaller error by 30% each time — classic diminishing returns. As an analogy, the inner circle is getting 30% smaller each step.
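A quick numeric sketch of this compounding, using the −.08 exponent from the scaling-laws paper mentioned in this thread (starting error normalized to 1.0 for illustration):

```python
# Each 100x jump in parameter count multiplies error by 100**(-0.08) ≈ 0.69,
# so the *absolute* improvement shrinks every step (the inner-circle analogy).
errors = [1.0]                       # normalized starting error
for _ in range(4):                   # four successive 100x scale-ups
    errors.append(errors[-1] * 100 ** (-0.08))

gains = [a - b for a, b in zip(errors, errors[1:])]
print([round(e, 2) for e in errors])  # [1.0, 0.69, 0.48, 0.33, 0.23]
print([round(g, 2) for g in gains])   # [0.31, 0.21, 0.15, 0.1]
```

Each step really does shave 30% off, but 30% of less and less.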
September 16, 2025 at 10:43 PM
For example, the first big scaling laws paper described model error as a function of parameter count raised to the power of -.08. Concretely, that means if you make a model 100x bigger (approximately the difference between each GPT release), its error decreases by... 30%.
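The arithmetic checks out as a one-liner (assuming the error ∝ N^(−.08) form described here; the constant prefactor cancels out of the ratio):

```python
ALPHA = 0.08  # exponent magnitude from the scaling-laws framing in the post

def error_ratio(scale: float) -> float:
    """By what factor error shrinks when parameter count grows by `scale`."""
    return scale ** (-ALPHA)

print(round(error_ratio(100), 2))  # 0.69 -> roughly a 30% drop per 100x
```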
September 16, 2025 at 10:43 PM
The most obvious example is how people discuss LLM scaling laws. As you commonly see it, AI trajectories are framed as either "will scaling laws hold?" or "will AI hit a wall?"

But reader—this is dumb. A distinction without a difference.

Scaling laws ARE the wall.
September 16, 2025 at 10:43 PM
This is the passage I’d show people when they didn’t believe me that Moby Dick was an insanely good novel, actually, and in fact very, very gay.

This is fully and entirely in context. The book was very gay.
September 7, 2025 at 4:50 PM
There’s a recent meta-analysis claiming that Tylenol use during pregnancy increases the risk of ADHD. I’ve yet to see anyone in this literature consider a very obvious confound: ADHD is heritable, and people with ADHD are more likely to get injured (from impulsivity, inattentiveness, etc.), and so more likely to be taking painkillers like Tylenol in the first place.
August 15, 2025 at 2:32 AM
PoVerTy oF tHe sTiMuLuS Is aS WrOnG As PhLoGiStoN
August 9, 2025 at 8:07 PM
Not to be all “Gary Marcus was right about everything” but it really does seem like LLMs have predictably gone the way of video game graphics and literally every other technological advance. They hit the wall of diminishing returns HARD after round 3.
August 7, 2025 at 10:29 PM
Though we replicated the purported sex difference in anger using the standard 10-point scale (twice, actually), we found that it reliably disappeared when tested using the gLMS (even though we had a sample so large that it could detect an effect of only d=.14 with 99% power).

(12/14)
July 5, 2025 at 12:25 AM
Even though these real differences are massive (much larger than any putative group difference in emotion), they're still reliably hidden by 10-point scales. To find these differences, taste researchers developed entirely new measures (like the general Labeled Magnitude Scale, or gLMS).

(10/14)
July 5, 2025 at 12:25 AM
What's so interesting about supertasters is that they differ in more than just their experience of bitterness: they experience ALL tastes more intensely. Salt is saltier, sugar is sweeter, lemons are lemonier, and the meaning of "very strong" *itself* is very... stronger.

(9/14)
July 5, 2025 at 12:25 AM
This mistake (called the El Greco fallacy) is just as much a problem for our 10-point scales. If a person (or a group) feels anger more intensely, then our scales would never show as much, since that very same difference in anger changes what "very intense" means in the context of anger.

(7/14)
July 5, 2025 at 12:25 AM
This is what El Greco's painting *actually* looks like: strangely and unusually elongated. Art historians puzzled over why El Greco painted this way, and one art historian provided a simple answer: astigmatism. El Greco saw the world as elongated, and he simply painted what he saw.

(5/14)
July 5, 2025 at 12:25 AM
But there's a problem: that can't actually work. And to understand why, it'd be useful to start with the following painting of Saint John the Baptist by the Spanish Renaissance painter El Greco. At first glance, this painting seems totally normal, but that's only because I lied to you.

(4/14)
July 5, 2025 at 12:25 AM
I love philosophy.
June 21, 2025 at 1:22 PM
Wish my brain was this smooth.
June 20, 2025 at 1:43 PM
I was lied to in research methods, and you probably were too. You don't have to say "participant."

vladchituc.substack.com/p/you-dont-h...
May 27, 2025 at 12:06 AM
No one hated psychophysics quite as much as William James, and I think that's beautiful.

(I actually spent like 5 hours typesetting this quote for the beginning of my dissertation about adapting the methods of sensory psychophysics, and it's genuinely one of my favorite parts of my dissertation).
March 18, 2025 at 2:05 AM
As a total aside, I had a lot of fun fucking around with midjourney to make the AI art for this piece
March 15, 2025 at 5:31 PM