Ethan Mollick
@emollick.bsky.social
Professor at Wharton, studying AI and its implications for education, entrepreneurship, and work. Author of Co-Intelligence.
Book: https://a.co/d/bC2kSj1
Substack: https://www.oneusefulthing.org/
Web: https://mgmt.wharton.upenn.edu/profile/emollick
The hardcover book of GPT-1’s weights that Claude Code designed, produced, and sold (including the cool cover which visualizes the numbers in the volume) actually came in the mail today and it looks really nice.

I never touched any code, did any design, or used any API to make this.
February 19, 2026 at 1:20 AM
Every few months, I write an updated, idiosyncratic guide on which AIs to use right now.

My new version has the most changes ever, since AI is no longer just about chatbots. To use AI you need to understand how to think about models, apps, and harnesses. open.substack.com/pub/oneusefu...
A Guide to Which AI to Use in the Agentic Era
It's not just chatbots anymore
open.substack.com
February 18, 2026 at 1:50 AM
Worth noting Claude Cowork is quite different from Claude Code (and even more so from agents like OpenClaw) from a security perspective. It runs in a VM with default-deny networking & hard isolation baked in

A sign of a path forward for agents that will not terrify IT.
February 17, 2026 at 2:30 AM
The transition from “AI can’t do novel science” to “of course AI does novel science” will be like every other similar AI transition.

First the over-enthusiastic claims that are debunked, then smart people use AI to help them, then AI starts to do more of the work, then minor discoveries, & then…
February 14, 2026 at 2:21 PM
Don’t reflexively let your AI do your thinking for you (similar to Fabrizio Dell'Acqua’s finding about falling asleep at the wheel)

Paper: papers.ssrn.com/sol3/papers....
February 14, 2026 at 12:56 PM
Using the long-standing Metaculus bet on when a "weakly general artificial intelligence" is achieved:
✅Loebner was a weak Turing Test, the equivalent achieved by GPT-4.5 in a published paper
✅Winograd passed by GPT-3
✅SAT passed at 75% by GPT-4
All that's left is a classic Atari game...
February 14, 2026 at 4:41 AM
Very hard to find AI benchmarks that don't look like this

(and yes, that includes obscure benchmarks that nobody would train on and benchmarks with holdout datasets)
February 13, 2026 at 7:26 PM
We don’t have any barriers to bots flooding every online space. They can use browsers and make payments. In fact, it is arguably a better deal to pay for a bot to access a social media site, since it can post and carry out your agenda when you sleep.

Online spaces are about to get (even more) grim.
February 12, 2026 at 5:13 PM
People keep asking why all my video samples have otters in them. It's tradition! www.oneusefulthing.org/p/the-recent...
The recent history of AI in 32 otters
Three years of progress as shown by marine mammals
www.oneusefulthing.org
February 12, 2026 at 3:47 AM
One more Seedance version of "Monica's apartment from the show Friends, except all of the friends are otters wearing wigs. The otter with a Rachel wig says "Is anything weird" and the one with a Joey wig says "Nope, all is normal"."

I guess Chinese models have few restrictions on training data
February 12, 2026 at 3:43 AM
I pointed Claude Cowork at a set of 107 documents (PPTs, Word, Excel) that were initially hand-created for my class at Wharton & expanded on by AI. They make up a very complex business case.

In a single go (deploying multiple agents spontaneously) it cracked the case & put together recommendations
February 11, 2026 at 10:57 PM
I get yelled at a lot on BS for saying so, but this is 100% true. There are people overhyping AI, but the alternative is not that AI is useless, or even the average of the two positions.

A lot is going to change dramatically even with today's AI. Ignoring that means no chance to shape what's next
I understand why people are exhausted by AI hype, and why those of us squarely in the corner of "human dignity uber alles" see AI doomerism as self-serving hype, but I *really* think people on the left broadly need to start thinking seriously about the possibility of the hype being...true.
February 11, 2026 at 10:56 PM
"In sum, through an extensive (and costly) validation process, we have demonstrated that GPT-5 mini performs very well at recovering the ground truth data. It is clearly better than highly trained graduate students at this specific information retrieval task."

1000x less cost osf.io/preprints/so...
February 11, 2026 at 8:55 PM
The poetry tastes of GenAI: "I want you to suggest two poems that you think apply very well the current state of GenAI models like you. Don’t just pick popular poems and back justify. Think hard about options first."

ChatGPT, Gemini & Claude all suggest Borges's "The Golem"
February 11, 2026 at 8:14 PM
Seedance: "A documentary about how otters view Ethan Mollick's "Otter Test" which judges AIs by their ability to create images of otters sitting in planes"

Again, first result.
February 11, 2026 at 3:27 AM
The new ByteDance Seedance 2.0 video model is VERY good. Each video is the very first output of the prompt. There are four; they're all worth seeing to get a sense of the range (and potential issues)

"A nature documentary about an otter flying an airplane"
February 11, 2026 at 3:07 AM
The AI Labs don't yet do a good job explaining how the upgrades to their harnesses change work

For example, since Opus 4.6, Claude Code will spontaneously use subagents to do work in parallel. This is very helpful with a real impact on tasks, but was sort of quietly rolled out without documentation
February 10, 2026 at 9:06 PM
LLMs tripled new book releases since 2022. Average quality fell: most new entries are slop

BUT books ranked 100-1,000 per category are actually better than before, & pre-LLM authors got more productive. And since people only read the good books, it is net positive for readers. www.nber.org/papers/w34777
February 10, 2026 at 6:19 PM
This might be the first hot take on how technology tells us how to live our lives, destroying our ability to make human decisions.

The technology in question is the sundial.

From a 3rd century BCE Roman adaptation of a Greek play, as discussed in Ker's "The Ordered Day"

(It isn't wrong, though)
February 10, 2026 at 5:00 PM
I think people who are into AI don't realize what most people's interactions with "AI" turn out to be. When you ask them, it's customer service lines (which are almost certainly not GenAI yet), or Siri, or maybe a free model from Google or OpenAI, or an off-brand "ChatPT" AI app they downloaded somewhere.
February 10, 2026 at 3:41 AM
Always found it interesting that the human eye can see colors that cannot be displayed on any screen or page.

I had ChatGPT whip up a pretty good imaginary color viewer after asking it to review the scientific literature to get the shades right. chatgpt.com/canvas/share...
February 10, 2026 at 3:11 AM
So far “telling a satisfying and well-written medium-length story” has proved far harder for LLMs than mathematical proofs, music generation, research reports, code, and many other forms of work.

The technical reasons are pretty clear, but they are supposed to be language models
February 9, 2026 at 10:53 PM
There is no sign that Dems or Repubs have different propensities to use AI: "the “politics of AI” is not primarily driven by ideological resistance or enthusiasm for the technology, but rather by structural differences in where people work and what skills they possess." www.nber.org/papers/w34813
February 9, 2026 at 3:33 PM
Sold out! But I had Claude create and deploy all 80 volumes of The Weights to the site as well-formatted PDFs, so you can download them for free if you want.

58,276 pages in total. 117 million floating point numbers. This is everything that makes GPT-1. weights-press.netlify.app
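
(For the curious: a minimal sketch of how you could pull the same GPT-1 weights yourself and check the parameter count, assuming the Hugging Face transformers library and its "openai-gpt" checkpoint. This is just an illustration of the 117-million-number claim, not the actual script behind the printed volumes.)

# Minimal sketch: load GPT-1 ("openai-gpt" on Hugging Face) and count its parameters.
# Assumes `pip install torch transformers`; not the pipeline used for The Weights.
from transformers import OpenAIGPTModel

model = OpenAIGPTModel.from_pretrained("openai-gpt")

total = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total:,}")  # roughly 117 million floating point numbers

# Print the first few values of one weight tensor, the way a printed page would list them.
name, tensor = next(iter(model.state_dict().items()))
print(name, tensor.flatten()[:8].tolist())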
February 9, 2026 at 4:01 AM
I have been a little obsessed by the idea of a printed edition of the full parameters of an LLM. Never mind that it would take hundreds of years to do a single inference calculation by hand; it would be possible in theory, if you had the weights.

So I had Claude make it: weights-press.netlify.app
February 8, 2026 at 8:59 PM