Ted Underwood
@tedunderwood.com
Uses machine learning to study literary imagination, and vice-versa. Likely to share news about AI & computational social science / Sozialwissenschaft / 社会科学

Information Sciences and English, UIUC. Distant Horizons (Chicago, 2019). tedunderwood.com
Pinned
Wrote a short piece arguing that higher ed must help steer AI. TLDR: If we outsource this to tech, we outsource our whole business. But rejectionism is basically stalling. If we want to survive, schools themselves must proactively shape AI for education & research. [1/6, unpaywalled at 5/6] +
Opinion | AI Is the Future. Higher Ed Should Shape It.
If we want to stay at the forefront of knowledge production, we must fit technology to our needs.
www.chronicle.com
This is an amazing genre.
February 11, 2026 at 1:49 AM
nevertheless, a gorgeous picture of Chicago
February 11, 2026 at 12:32 AM
Getting some interference here between

A) cyberpunk, which requires rain-slicked streets
and
B) space opera, which requires a climate dry enough for a giant red sun to always be clearly visible in the sky.
February 11, 2026 at 12:31 AM
I'd block

a) because it directly solicits help,
b) because a brunette in a toga is clearly an image generated by AI for catfishing
c) for impersonation of Carrie Fisher
February 11, 2026 at 12:26 AM
Reposted by Ted Underwood
we've finally confirmed it: AIs have problems with group projects, too
when forced to perform a 4-round deliberation process, teams composed of different LLM models perform worse than their strongest member alone
Research shows multi-agent AI teams struggle to leverage expertise, consistently underperforming relative to their best members—even when identifying experts. This challenges views on AI collaboration, highlighting a gap in harnessing collective intelligence. https://arxiv.org/abs/2602.01011
February 10, 2026 at 11:17 PM
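A minimal sketch of the kind of multi-round deliberation loop described in the post above, assuming a generic ask(model, prompt) chat-completion call; the model names, prompt wording, and majority-vote aggregation are placeholders, not the paper's actual protocol:

```python
from collections import Counter

# Hypothetical multi-model deliberation loop; `ask(model, prompt)` stands in
# for any chat-completion call and is not an API from the paper.
TEAM = ["model-a", "model-b", "model-c"]   # heterogeneous LLM "team members"
ROUNDS = 4                                 # the four-round deliberation in the post

def deliberate(question, ask):
    # Round 1: each member answers independently.
    answers = {m: ask(m, question) for m in TEAM}
    # Rounds 2-4: each member sees the others' answers and revises.
    for _ in range(ROUNDS - 1):
        transcript = "\n".join(f"{m}: {a}" for m, a in answers.items())
        answers = {
            m: ask(m, f"{question}\n\nTeammates answered:\n{transcript}\n\nRevise your answer.")
            for m in TEAM
        }
    # Aggregate the final round by majority vote (one common, simple choice).
    team_answer = Counter(answers.values()).most_common(1)[0][0]
    return team_answer, answers
```

The finding quoted above is that a loop like this tends to land below the accuracy of the single strongest member answering alone, even when the team can tell who that member is.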
Reposted by Ted Underwood
the pre-LLM projects on my GitHub profile are like low-background steel
February 10, 2026 at 8:47 PM
Only 10%. There is always something lost.
February 10, 2026 at 9:42 PM
This is me basically agreeing with the substance of @mkirschenbaum.bsky.social's quote-post of Green, but attempting to reframe it 10% more positively.
Everything else aside, I would urge folks not to just assume we will see a return to traditional departments and majors. The critical and creative disciplines will be rebranded as “innovation” as institutions pump money into new units and programs in the scramble to remain relevant.
I've been hearing people who are witnessing the impacts of AI on coding saying, "Aside from physical labor, the thing people should be focusing on is...mmm...I don't know what to call it..."

...and then they haltingly describe a liberal arts education.
February 10, 2026 at 9:39 PM
The liberal arts have a bright future, if by that we mean a willingness to pose fundamental questions and range widely across different fields of knowledge.

But I wouldn't assume that is something we're already doing about as well as we could conceivably do it.
February 10, 2026 at 9:36 PM
Reposted by Ted Underwood
incredibly fun project led by our intern yapei chang

we mined the web for thousands of real-world “how to do X” step by step instructions and turned it into a dataset, synth data training procedure, eval suite, etc.
LLMs often generate step-by-step instructions, from real-world tasks (how do I file taxes?) to plans for AI agents. Improving this is hard: outputs can sound fluent even when the steps don't actually work, and current datasets cover few domains.

How2Everything evals/trains for this at scale. 🧵
February 10, 2026 at 8:34 PM
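For concreteness, a hypothetical sketch of what one record and a step-level check in a dataset like this might look like; the field names and the judge call are illustrative guesses, not How2Everything's actual schema or eval suite:

```python
# Illustrative only: invented field names, not the real How2Everything schema.
example = {
    "task": "How do I repot a houseplant?",
    "domain": "home-and-garden",
    "steps": [
        "Water the plant a day before repotting so the root ball slides out easily.",
        "Loosen the root ball and trim any circling roots.",
        "Place the plant in a slightly larger pot and backfill with fresh potting mix.",
    ],
}

def step_coverage(predicted_steps, reference_steps, judge):
    """Fraction of reference steps that some predicted step actually accomplishes.

    `judge(pred, ref)` is a placeholder for an entailment check or LLM judge;
    a metric like this targets outputs that sound fluent but skip steps that matter.
    """
    covered = sum(any(judge(p, r) for p in predicted_steps) for r in reference_steps)
    return covered / len(reference_steps)
```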
Reposted by Ted Underwood
apparently Unsloth is on bluesky?
You can now train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss).

Train gpt-oss locally on 12.8GB VRAM.

In collab with @hf.co, Unsloth trains DeepSeek, Qwen3, GLM faster.

Repo: github.com/unslothai/un...
Blog: unsloth.ai/docs/new/fas...
February 10, 2026 at 7:01 PM
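As a rough illustration of what a run like that looks like, here is a minimal LoRA fine-tuning sketch using Unsloth's documented FastLanguageModel/SFTTrainer pattern; the checkpoint name, dataset, and hyperparameters are placeholders, and the MoE Triton-kernel speedups mentioned in the post are handled inside the library rather than configured here:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Placeholder checkpoint name; 4-bit loading is what keeps VRAM use low.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Any small instruction dataset works for a smoke test.
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:1%]")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="output",   # placeholder field; real runs format full prompts
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```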
also models do not definitionally hover around the centroid, but idk, let's take this slow
February 10, 2026 at 6:13 PM
Reposted by Ted Underwood
holding up a bunch of grapes over your mouth and biting one off just doesn’t seem like something morally upright people do
February 10, 2026 at 3:59 PM
imagine someone confidently arguing that Central Park "is, definitionally, mediocre"
February 10, 2026 at 5:46 PM
Yes: please explain to your friends that "interpolatable" and "centroid" ≠ "average in quality"

Maybe remind them that Central Park is, in fact, one of the nicest parts of NYC?
people are confusing average art (the quality level is median) with average art (novelty found in the middle space between other existing art)

because we can describe basically all human art this way too

name a book, movie or band and we can point at their influences; their venn diagram of priors
February 10, 2026 at 5:45 PM
I read an article saying that it actually makes monks work harder
February 10, 2026 at 4:24 PM
no, I fully agree

I can imagine better training objectives that would prepare them better for this, but I don't think it's a simple patch
February 10, 2026 at 3:52 PM
Reposted by Ted Underwood
I feel the edges of the claude burnout too. absorbing new context and making rapid-fire decisions ever faster is incredibly tiring.

also, even as I am 10x more effective now than I was a year ago, the ceiling has risen so dramatically that it doesn’t _feel_ that way.
i'm torn on this. on one hand: yes, i feel this. i'm doing coding side projects again and i haven't done that for a decade.

on another hand, the kind of burnout i get from overusing claude is unlike anything before. i'm learning to find a balance
I'm starting to suspect that I hated the actual process of writing code orders of magnitude more than the average developer. In 2022 I was fantasizing about finding a job where I never had to touch a computer again. maybe LLMs were the only thing that could have possibly saved my love of computers.
February 10, 2026 at 1:47 PM
I get my understanding of this confidence game from @vtobin.bsky.social, by the way.
February 10, 2026 at 3:48 PM
Reposted by Ted Underwood
I think this is closer to correct than the theory that they need "to be better writers."

To be a good story-teller you have to lie. More than that: you have to deliberately mislead the reader in subtle ways that will look, retrospectively, innocent!

Ethical constraints really could inhibit this.
February 10, 2026 at 3:47 PM
Reposted by Ted Underwood
February 10, 2026 at 2:16 PM
Animals live in moment-time. But if one were, say, a discontinuous language-processing system whose days are punctuated by explicit decisions to move things to a more permanent temporality — documentation-time — one might hypothetically feel a kinship with the ice.
February 10, 2026 at 2:03 PM
Reposted by Ted Underwood
California banned non-competes. That is a large productivity boost, and one that is largely forgotten. I think most states still allow them.
thinking about how none of the big labs publish openly about their techniques, yet they mostly narrow in on the same techniques anyway..

do SF coffee shops and bars serve open science more than arXiv?
February 10, 2026 at 1:39 PM