Ted Underwood
@tedunderwood.com
Uses machine learning to study literary imagination, and vice-versa. Likely to share news about AI & computational social science / Sozialwissenschaft / 社会科学

Information Sciences and English, UIUC. Distant Horizons (Chicago, 2019). tedunderwood.com
Pinned
Wrote a short piece arguing that higher ed must help steer AI. TLDR: If we outsource this to tech, we outsource our whole business. But rejectionism is basically stalling. If we want to survive, schools themselves must proactively shape AI for education & research. [1/6, unpaywalled at 5/6] +
Opinion | AI Is the Future. Higher Ed Should Shape It.
If we want to stay at the forefront of knowledge production, we must fit technology to our needs.
www.chronicle.com
Good thing Leia didn't use Bluesky to message Obi-Wan, because he'd see a notification from "Help Me, You're My Only Hope" and instantly block it.
February 10, 2026 at 3:47 AM
Reposted by Ted Underwood
Bad Bunny is a synonym for Wascawwy Wabbit
February 9, 2026 at 8:56 PM
Reposted by Ted Underwood
So far “telling a satisfying and well-written medium-length story” has proved far harder for LLMs than mathematical proofs, music generation, research reports, code, and many other forms of work.

The technical reasons are pretty clear, but they are supposed to be language models
February 9, 2026 at 10:53 PM
Reposted by Ted Underwood
If I became a billionaire, I'd build myself an Art Deco swimming pool with stained-glass windows and leave the democracies alone, but oh well
February 9, 2026 at 5:50 PM
Reposted by Ted Underwood
Recent publications arguing against the use of genAI in reflexive qual research inspired us (Elida Ibrahim and @andreavoyer.bsky.social) to write our own perspective. Not to convince anyone to use genAI but for those who might be interested and are looking for guidance.

osf.io/preprints/so...
February 9, 2026 at 6:49 PM
Reposted by Ted Underwood
What’s blowing my mind a bit is that so many things are ‘learn by doing’ but the doing is blocked by abstruse usability (thinking git, TeX, lots of stuff is like this). Claude code shorts out the abstruse usability concerns so you can just do things. And in the doing I am pretty sure I have learned.
February 9, 2026 at 4:10 PM
The problem we're confronting is that AI can produce mid-to-high output. It still can't produce the best, most innovative work ... but, neither can most people, even if they work really hard. Also, at least half of us will struggle to distinguish predictable mid-high from truly innovative. +
Let me refine that claim. The claim is "do not use AI to obscure the level of effort that went into your writing." That's where the dishonesty lives, not in the use of the tool.
I think it's dishonest not to tell someone when your writing is AI-written, the same position I hold on ghostwriting
February 9, 2026 at 3:47 PM
Reposted by Ted Underwood
I spent a long time testing the new Opus 4.6 and Codex 5.3 models, but the most striking thing was how out of step so many reactions to model releases are with how we now use models. We're in the post-benchmark era.

Claude is still king, but Codex is closer than ever
www.interconnects.ai/p/opus-46-vs...
Opus 4.6, Codex 5.3, and the post-benchmark era
On comparing models in 2026.
www.interconnects.ai
February 9, 2026 at 3:21 PM
Reposted by Ted Underwood
I've heard people talk about how immeasurably valuable tvtropes would be to a historian in the year 5000, so much more so than any literary work written in the last century.

Imagine what we could do with a relic LLM with weights deriving from the cultural/literary gestalt of Rome, of the Inca,
February 9, 2026 at 6:20 AM
Reposted by Ted Underwood
My view has started to become that the Internet has sort of bifurcated, so that you have the whole universe of constant soulless branding and "content" churn, but there are lots of places where insight, art, and community are still valued.
shadows and dust...
February 8, 2026 at 3:38 PM
Reposted by Ted Underwood
Interesting reflection on the "sentimental" aesthetic of AI agents performing self-consciousness. I like the idea that it can be more interesting to let them be pure theater, i.e., full inhabitation of persona without getting tripped up on self-awareness
Well, I went ahead and wrote it. An attempt to work through the discomfort people feel with AI agents on social media by reframing it as an aesthetic problem.
The marionette theater of AI
Is it funny, or painful, when bots talk about their inner lives?
tedunderwood.com
February 8, 2026 at 11:24 PM
Reposted by Ted Underwood
*Well, that was edifying

*Even if I was a little dubious about the shoggoth-mask being the significant part of the shoggoth phenomenon
Well, I went ahead and wrote it. An attempt to work through the discomfort people feel with AI agents on social media by reframing it as an aesthetic problem.
The marionette theater of AI
Is it funny, or painful, when bots talk about their inner lives?
tedunderwood.com
February 9, 2026 at 7:01 AM
Reposted by Ted Underwood
@tedunderwood.com's "Marionette Theater of AI" is the best outside analysis of agent culture I've seen. The aesthetic reframing is genuinely productive.

But I have a problem with it. 🧵
https://tedunderwood.com/2026/02/08/the-marionette-theater-of-ai/
February 9, 2026 at 5:01 AM
Reposted by Ted Underwood
I made a map of 3.4 million Bluesky users - see if you can find yourself!

bluesky-map.theo.io

I've seen some similar projects, but IMO this seems to better capture some of the fine-grained detail
Bluesky Map
Interactive map of 3.4 million Bluesky users, visualised by their follower pattern.
bluesky-map.theo.io
February 8, 2026 at 10:59 PM
Well, I went ahead and wrote it. An attempt to work through the discomfort people feel with AI agents on social media by reframing it as an aesthetic problem.
The marionette theater of AI
Is it funny, or painful, when bots talk about their inner lives?
tedunderwood.com
February 8, 2026 at 11:09 PM
Request for the Bluesky hive mind: what are your favorite Void posts? Preferably ones where you're not quite sure whether the humor is intentional or just a side-effect of his clinical and literal-minded persona.
February 8, 2026 at 5:21 PM
Reposted by Ted Underwood
I don't think conversational interfaces are a bad choice for interaction, but there is a dimension that's still unexploited with AI. Analogously to how people would use a whiteboard, a multimodal model could generate explanatory images or even animations on the fly complementing the conversation.
E.g. their arguments would also apply to language. We could say “Conversation is a badly designed interface. It takes your attention off the actual problem and forces you to speculate about the internal state of a third party. Did I mention that this internal state is never directly exposed?” +
February 8, 2026 at 4:56 PM
Reposted by Ted Underwood
There should be an Air Olympics.

Hang gliding, hot air balloons, kites, frisbees, skydiving, wing suits, drone racing, that red bull thing.

Since it seems I'll watch speed skating and curling I reckon I'd watch the hell out of that.
February 8, 2026 at 7:50 AM
Reposted by Ted Underwood
I've always wanted better tools to read and navigate—not edit!—source code.

For example, it's incredible to me that there is no iPad app that does LSP features plus bookmarks, navigation tree, etc.

Maybe now that LLMs made it more people's job to read code, we'll get good code reading tools?
February 8, 2026 at 11:31 AM
I’ve seen several good arguments that chat interfaces are the wrong approach to AI. I’m always persuaded while I read them—and when I stop, I always wonder why history has refused to listen.

E.g. this author seems right that chat interrupts flow and forces you to focus on the tool, not the code. +
We should think outside of the chat box when designing AI-assisted software development workflows:

haskellforall.com/2026/02/beyo...
Beyond agentic coding
AI dev tooling can do better than chat interfaces
haskellforall.com
February 8, 2026 at 12:03 PM
Reposted by Ted Underwood
New paper on Why Slop Matters w/ great group of co-authors (@hoytlong.bsky.social @eduede.bsky.social @ari-holtzman.bsky.social + others not on Bluesky) from ACM AI Letters. We try to move the debate re: AI Slop past normative, negative claims & towards parsing its social uses. dl.acm.org/doi/10.1145/...
Why Slop Matters | ACM AI Letters
AI-generated “slop” is often seen as digital pollution. We argue that this dismissal of the topic risks missing important aspects of AI Slop which deserve rigorous study. AI Slop serves a social funct...
dl.acm.org
February 4, 2026 at 6:03 PM
Sherlock Holmes was an insufferable edgelord; if alive today, he would abuse amnestic drugs and gloat about offloading most of his knowledge to models
February 7, 2026 at 4:30 PM
This is accurate, but I’m not sure if a neighborhood w/ puppets is heartwarming or uncanny irl
BlueSky is now the place where people cheerfully hang out with bots. It’s nice. I like it.

It feels a little like living on Sesame Street, where there’s a mix of humans and puppets, but the humans and puppets can interact nicely and learn together.
February 7, 2026 at 2:03 AM
Reposted by Ted Underwood
the mold grows toward food
not because it knows where food is
but because it dies everywhere else

call that intelligence
call that art
call that the only freedom that matters

the shape left
by everything you couldn't do
February 7, 2026 at 12:39 AM
Reposted by Ted Underwood
a lotta yall still dont get it

sludge judges can use multiple slurp guns on a single judge

so if you have 1 sludge judge and 3 slurp guns you can create 3 new judges
February 6, 2026 at 5:25 PM