Tom Cunningham
@testingham.bsky.social
Economics Research at OpenAI.
How does AI change the balance of power in content moderation & communication?

I wrote something on this (pre-OpenAI) with a simple prediction:

1. Where the ground truth is human judgment, AI favors defense.
2. Where the ground truth is facts in the world, AI favors offense.
The Influence of AI on Content Moderation and Communication | Tom Cunningham
tecunningham.github.io
March 3, 2025 at 4:42 PM
A new post: On Deriving Things

(about the time spent back and forth between clipboard, whiteboard, blackboard & keyboard)

tecunningham.github.io/posts/2020-1...
January 31, 2025 at 7:14 PM
When too much good news is bad news:

1. If an AB test shows an effect of +2% (±1%), it’s very persuasive, but if it shows an effect of +50% (±1%) then the experiment was probably misconfigured, and it’s not at all persuasive.
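A toy calculation makes the point concrete (the prior and noise numbers below are purely illustrative, not from any real experiment): under a sensible prior over real effect sizes, a +50% reading mostly updates you toward "the experiment is broken", not toward "the treatment is great".

```python
# Two-hypothesis Bayesian reading of an A/B test result.
# Under "valid experiment" real effects are small; under "misconfigured"
# the measured effect can be almost anything. All numbers are made up.
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def prob_misconfigured(observed_effect,
                       noise_sd=0.01,        # the +/-1% measurement noise
                       true_effect_sd=0.02,  # plausible sizes of real effects
                       bug_sd=0.50,          # sizes a misconfiguration could produce
                       prior_bug=0.05):      # prior chance the setup is broken
    # observed = true effect + noise, so variances add under each hypothesis.
    like_valid = normal_pdf(observed_effect, 0.0, sqrt(true_effect_sd**2 + noise_sd**2))
    like_bug = normal_pdf(observed_effect, 0.0, sqrt(bug_sd**2 + noise_sd**2))
    numerator = prior_bug * like_bug
    return numerator / (numerator + (1 - prior_bug) * like_valid)

print(prob_misconfigured(0.02))  # ~0.003 -- small effect, experiment looks fine
print(prob_misconfigured(0.50))  # ~1.0   -- huge effect, almost certainly a bug
```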
December 28, 2024 at 12:31 AM
New post: Thinking about tradeoffs? Draw an ellipse.

With applications to (1) experiment launch rules; (2) ranking weights in a recommender; and (3) allocating headcount in a company.
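A rough sketch of the tangency calculation (my own toy parameterization, not the post's examples): with an elliptical frontier over two metrics, the point that maximizes a linear objective follows from a Lagrange-multiplier formula, and changing the weights slides you along the frontier.

```python
from math import sqrt

def best_point_on_ellipse(a, b, w_x, w_y):
    """Maximize w_x*x + w_y*y subject to (x/a)**2 + (y/b)**2 = 1 (tangency point)."""
    norm = sqrt((w_x * a) ** 2 + (w_y * b) ** 2)
    return w_x * a ** 2 / norm, w_y * b ** 2 / norm

# Example: metric x can move by at most 1 unit, metric y by at most 3 units,
# and a unit of x is valued twice as much as a unit of y.
x_star, y_star = best_point_on_ellipse(a=1.0, b=3.0, w_x=2.0, w_y=1.0)
print(round(x_star, 3), round(y_star, 3))  # 0.555 2.496
# Reweighting the objective trades some of one metric for more of the other
# along the same frontier.
```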
October 25, 2023 at 3:47 PM
My daughter and I have an arrangement: She does whatever I ask her to do. In return I only ask her to do things that I know she's going to do anyway.
October 23, 2023 at 2:16 PM
A long and partisan note about experiment interpretation based on experience at Meta and Twitter.

The common thread is that people are pretty good intuitive Bayesian reasoners, so just summarize the relevant evidence and let a human be the judge:

tecunningham.github.io/posts/2023-0...
Experimentation Interpretation and Extrapolation | Tom Cunningham
tecunningham.github.io
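One way to make "summarize the evidence" concrete (my own illustration, not code from the note): report the estimate, the interval, and a likelihood ratio against the null, and let the reader supply their own prior.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

def summarize(estimate, std_error, benchmark_effect=0.01):
    """Return the numbers a reader needs to make their own Bayesian update."""
    return {
        "estimate": estimate,
        "ci95": (estimate - 1.96 * std_error, estimate + 1.96 * std_error),
        # How much more likely is this reading under a real effect of
        # `benchmark_effect` than under no effect at all?
        "likelihood_ratio_vs_null": normal_pdf(estimate, benchmark_effect, std_error)
                                     / normal_pdf(estimate, 0.0, std_error),
    }

print(summarize(estimate=0.012, std_error=0.005))  # likelihood ratio ~16 here
```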
October 17, 2023 at 7:07 PM
I wrote a long blog post working through the different ways in which an AI trained to *imitate* humans could *outperform* humans: tecunningham.github.io/posts/2023-0...
An AI Which Imitates Humans Can Beat Humans | Tom Cunningham
tecunningham.github.io
October 6, 2023 at 8:04 PM