Tom Cunningham
@testingham.bsky.social
Economics Research at OpenAI.
Thanks!
March 3, 2025 at 6:11 PM
I think the effect of AI on the offense-defense balance across many domains is a really important topic & I'm surprised there aren't more people working on it; it seems a perfect fit for econ theorists.
The Influence of AI on Content Moderation and Communication | Tom Cunningham
tecunningham.github.io
March 3, 2025 at 4:42 PM
Thank you!
January 31, 2025 at 8:51 PM
I also talk about the related point that outliers typically have one big cause rather than many small causes.
December 28, 2024 at 12:31 AM
Some more formalization of the argument here:

tecunningham.github.io/posts/2024-1...
Too Much Good News is Bad News | Tom Cunningham
tecunningham.github.io
December 28, 2024 at 12:31 AM
2. If a drug is associated with a 5% higher rate of birth defects it’s probably a selection effect, if it’s associated with a 500% higher rate of birth defects it’s probably causal.
December 28, 2024 at 12:31 AM
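A minimal Monte Carlo sketch of the argument in the post above (my own illustration, not from the original posts): if confounding and selection typically produce only small associations while genuine causal effects are occasionally large, then the probability that an observed association is causal rises sharply with its size. All distributions and parameters below are assumptions chosen for illustration.

```python
# Illustrative Monte Carlo: large observed associations are more likely to be causal
# when confounding is typically small. All parameter choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assume 20% of drug-defect associations have a real causal component.
causal = rng.random(n) < 0.20

# Confounding / selection effects: small, centered near zero (log relative risk).
log_rr_confound = rng.normal(0.0, 0.05, n)

# Causal effects, when present: occasionally large (heavier tail on the log scale).
log_rr_causal = np.where(causal, rng.exponential(0.7, n), 0.0)

observed_rr = np.exp(log_rr_confound + log_rr_causal)

def p_causal_given(rr, tol):
    """P(causal | observed relative risk within `tol` of rr)."""
    near = np.abs(observed_rr - rr) < tol
    return causal[near].mean()

print("P(causal | RR ~ 1.05):", round(p_causal_given(1.05, tol=0.15), 2))
print("P(causal | RR ~ 6.00):", round(p_causal_given(6.00, tol=1.00), 2))
```

Under these made-up parameters the small association is almost always confounding while the large one is almost always causal, which is the "too much good news" logic in miniature.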
Choosing headcount? Increasing headcount on a team will shift out that team's Pareto frontier, and so you can sketch out the *combined* Pareto frontier across metrics as you reallocate headcount between teams.
October 25, 2023 at 3:48 PM
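A toy sketch of the headcount point above (my own illustration; the quarter-circle frontiers and the sqrt-of-headcount scaling are assumptions): each team's frontier in (metric X, metric Y) expands with its headcount, and Pareto-filtering the summed outputs over all allocations traces the combined frontier.

```python
# Illustrative sketch: two teams, each with a quarter-circle Pareto frontier whose
# radius grows with sqrt(headcount). Reallocating a fixed total headcount and summing
# the teams' outputs traces a combined Pareto frontier. Functional forms are assumed.
import numpy as np

TOTAL_HEADCOUNT = 10

def team_frontier(headcount, x_skill, y_skill, n_points=25):
    """Points on one team's frontier: a quarter-ellipse scaled by sqrt(headcount)."""
    theta = np.linspace(0, np.pi / 2, n_points)
    radius = np.sqrt(headcount)
    return np.column_stack([x_skill * radius * np.cos(theta),
                            y_skill * radius * np.sin(theta)])

def pareto_filter(points):
    """Keep the points that are not dominated on both metrics."""
    keep = []
    for p in points:
        dominated = np.any(np.all(points >= p, axis=1) & np.any(points > p, axis=1))
        if not dominated:
            keep.append(p)
    return np.array(keep)

candidates = []
for h1 in range(TOTAL_HEADCOUNT + 1):                        # give h1 people to team 1
    f1 = team_frontier(h1, x_skill=2.0, y_skill=0.5)         # team 1 is better at metric X
    f2 = team_frontier(TOTAL_HEADCOUNT - h1, x_skill=0.5, y_skill=2.0)  # team 2 at metric Y
    combined = f1[:, None, :] + f2[None, :, :]               # sum every pairing of outputs
    candidates.append(combined.reshape(-1, 2))

frontier = pareto_filter(np.concatenate(candidates))
print(len(frontier), "points on the combined frontier")
print("best-for-X point:", np.round(frontier[frontier[:, 0].argmax()], 2))
print("best-for-Y point:", np.round(frontier[frontier[:, 1].argmax()], 2))
```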
Choosing ranking weights? You can think of the set of classifier scores (pClick,pReport) as drawn from a distribution; if the ranking score is additive in them it's easy to calculate the Pareto frontier, and if the scores are Gaussian then the Pareto frontier is an ellipse.
October 25, 2023 at 3:48 PM
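A sketch of the ranking-weights claim above under the stated Gaussian assumption (the means, covariance, and top-10% cutoff are made-up numbers, and it uses scipy for the normal quantile): ranking items by an additive score w·(pClick, pReport) and keeping the top fraction gives a selected-item mean that, as the weight vector rotates, traces an ellipse.

```python
# Closed-form sketch: jointly Gaussian scores, keep the top fraction ranked by w·scores.
# The selected-item mean follows the standard truncated-Gaussian selection formula.
import numpy as np
from scipy.stats import norm

mu = np.array([0.10, 0.05])                  # assumed mean (pClick, pReport)
Sigma = np.array([[0.0025, 0.0006],          # assumed covariance of the two scores
                  [0.0006, 0.0004]])
top_q = 0.10                                 # keep the top 10% of items

def selected_mean(w):
    """Mean (pClick, pReport) among the top-q items ranked by the additive score w·scores."""
    score_sd = np.sqrt(w @ Sigma @ w)
    cutoff = norm.ppf(1 - top_q)             # standardized selection threshold
    mills = norm.pdf(cutoff) / top_q         # E[standardized score | selected]
    return mu + Sigma @ w / score_sd * mills

# As the weight vector rotates, the selected-item mean traces the ellipse
#   mu + mills * Sigma^(1/2) * (unit circle),
# and the relevant arc of it is the Pareto frontier between clicks and reports.
for angle in np.linspace(0, np.pi / 2, 5):
    w = np.array([np.cos(angle), -np.sin(angle)])   # reward pClick, penalize pReport
    print("w =", np.round(w, 2), "-> selected mean:", np.round(selected_mean(w), 4))
```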
Choosing launch criteria? You can think of the set of experiments as pairs (ΔX,ΔY) drawn from some joint distribution; if the launch rule is additive in (ΔX,ΔY) it's easy to calculate the Pareto frontier, and if (ΔX,ΔY) are Gaussian then the Pareto frontier is an ellipse.
October 25, 2023 at 3:47 PM
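And a Monte Carlo check of the same ellipse claim for launch decisions (the covariance and the launch rule w·(ΔX,ΔY) > 0 are illustrative assumptions): sweeping the weight vector traces out an ellipse, whose upper-right arc is the Pareto frontier.

```python
# Monte Carlo sketch: experiments have effects (dX, dY) ~ N(0, Sigma); launch those with
# positive weighted effect and record the average launched effect for each weight vector.
import numpy as np

rng = np.random.default_rng(1)
Sigma = np.array([[1.0, -0.3],
                  [-0.3, 0.5]])                   # assumed covariance of (dX, dY)
effects = rng.multivariate_normal([0.0, 0.0], Sigma, size=200_000)

traced = []
for angle in np.linspace(0, 2 * np.pi, 12, endpoint=False):
    w = np.array([np.cos(angle), np.sin(angle)])
    launched = effects[effects @ w > 0]           # launch rule: w·(dX, dY) > 0
    traced.append(launched.mean(axis=0))          # average launched effect
traced = np.array(traced)

# Theory: the mean launched effect is sqrt(2/pi) * Sigma w / sqrt(w' Sigma w), the image
# of the unit circle under Sigma^(1/2) -- an ellipse whose upper-right arc is the Pareto
# frontier. Mapping back through Sigma^(-1/2) should therefore give unit-length vectors:
eigval, eigvec = np.linalg.eigh(Sigma)
Sigma_inv_half = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
radii = np.linalg.norm(traced @ Sigma_inv_half, axis=1) / np.sqrt(2 / np.pi)
print("normalized radii (all ~1):", np.round(radii, 3))
```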
6. To extrapolate the effect on metric B from the effect on metric A, it's best to run a cross-experiment regression, but be careful about bias.
October 17, 2023 at 7:09 PM
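A sketch of the bias warning in claim 6 above (made-up numbers): sampling noise in metric A's estimates attenuates a naive cross-experiment regression toward zero, and dividing by the reliability ratio is one standard correction.

```python
# Errors-in-variables in a cross-experiment regression of metric B's effect on metric A's.
import numpy as np

rng = np.random.default_rng(2)
n_experiments = 2_000
TRUE_SLOPE = 0.5                                        # assumed true B-per-A conversion factor

true_a = rng.normal(0, 1.0, n_experiments)              # true effects on metric A
true_b = TRUE_SLOPE * true_a + rng.normal(0, 0.2, n_experiments)

se_a, se_b = 0.8, 0.8                                   # per-experiment sampling noise
est_a = true_a + rng.normal(0, se_a, n_experiments)     # what the experiments actually report
est_b = true_b + rng.normal(0, se_b, n_experiments)

ols_slope = np.cov(est_a, est_b, ddof=1)[0, 1] / np.var(est_a, ddof=1)

# Noise in est_a attenuates the slope toward zero. The reliability ratio can be
# estimated because the per-experiment standard errors are known.
reliability = (np.var(est_a, ddof=1) - se_a**2) / np.var(est_a, ddof=1)

print("true slope:     ", TRUE_SLOPE)
print("naive OLS slope:", round(ols_slope, 3))          # biased toward zero
print("corrected slope:", round(ols_slope / reliability, 3))
```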
4. Empirical Bayes estimates are useful but only incorporate some information, so they shouldn't be treated as the best guess of the true causal effect (a shrinkage sketch follows below).

5. Launch criteria should identify "final" metrics with conversion factors from "proximal" metrics. Don't make decisions on statistical significance alone.
October 17, 2023 at 7:09 PM
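A sketch of the empirical-Bayes shrinkage referenced in claim 4 above (illustrative numbers, with a common known standard error assumed): the prior variance is estimated from the cross-experiment spread of estimates, and each estimate is shrunk accordingly. Note it uses only this one metric's history, which is the sense in which it "only incorporates some information".

```python
# Empirical-Bayes shrinkage of experiment estimates toward zero.
import numpy as np

rng = np.random.default_rng(3)
n = 500
se = 1.0                                   # assumed common standard error

true_effects = rng.normal(0, 0.5, n)       # most true effects are small
estimates = true_effects + rng.normal(0, se, n)

# Method-of-moments estimate of the prior variance tau^2:
tau2 = max(np.var(estimates, ddof=1) - se**2, 0.0)

shrinkage = tau2 / (tau2 + se**2)          # weight placed on the noisy estimate
eb_estimates = shrinkage * estimates

print("estimated tau^2:", round(tau2, 3), "(true value 0.25)")
print("a raw estimate of", round(estimates[0], 2),
      "is shrunk to", round(eb_estimates[0], 2))
print("RMSE raw:   ", round(np.sqrt(np.mean((estimates - true_effects) ** 2)), 3))
print("RMSE shrunk:", round(np.sqrt(np.mean((eb_estimates - true_effects) ** 2)), 3))
```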
Claims:

1. Experiments are not the primary way we learn causal relationships.

2. A simple Gaussian model gives you a robust way of thinking about challenging cases.

3. The Bayesian approach makes it easy to think about things that are otherwise confusing (multiple testing, peeking, selective reporting); a small sketch follows below.
October 17, 2023 at 7:08 PM
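A sketch of claims 2-3 above (my own illustration, with made-up parameters): under a Gaussian prior the posterior mean is the same shrinkage formula however many experiments were screened, so selective reporting is handled automatically. The raw estimate of the best-looking experiment is badly biased (winner's curse); the shrunk posterior mean is not.

```python
# Gaussian model: picking the best-looking of 20 experiments biases the raw estimate,
# while the shrunk posterior-mean estimate stays honest on average.
import numpy as np

rng = np.random.default_rng(4)
n_sims, n_experiments = 20_000, 20
tau, sigma = 0.5, 1.0                       # prior sd of true effects, standard error
shrink = tau**2 / (tau**2 + sigma**2)       # Gaussian posterior-mean weight

true = rng.normal(0, tau, (n_sims, n_experiments))
est = true + rng.normal(0, sigma, (n_sims, n_experiments))

winner = est.argmax(axis=1)                 # report only the best-looking experiment
rows = np.arange(n_sims)
raw_bias = (est[rows, winner] - true[rows, winner]).mean()
bayes_bias = (shrink * est[rows, winner] - true[rows, winner]).mean()

print("bias of raw estimate for the selected winner:", round(raw_bias, 3))
print("bias of shrunk (posterior-mean) estimate:    ", round(bayes_bias, 3))
```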
The most interesting mechanisms: (1) AI can find patterns which humans didn't know about; (2) AI can draw on human tacit knowledge that isn't available to our conscious minds.
October 6, 2023 at 8:07 PM
It requires formalizing the relationship between the AI, the human, and the world. Interestingly, there are a number of reasons why the AI, which only encounters the real world by mimicking human responses, can nevertheless have a superior understanding of that world. This can be described visually:
October 6, 2023 at 8:06 PM