David Duvenaud
@davidduvenaud.bsky.social
Machine learning prof at U Toronto. Working on evals and AGI governance.
That's great. But do you have any idea of the magnitude of change in odds in various circumstances? Surely this was examined by public health people?
November 4, 2025 at 3:34 PM
I'm happy for you. How much difference do you think it makes in your reinfection odds whether other people mask?
November 4, 2025 at 3:19 PM
This workshop follows one we ran in July, adding optional specialized talks and light moderation in the breakout sessions. To see how that one went, and for videos of the talks, see this thread:

www.lesswrong.com/posts/csdn3e...
Summary of our Workshop on Post-AGI Outcomes — LessWrong
Last month we held a workshop on Post-AGI outcomes.  This post is a list of all the talks, with short summaries, as well as my personal takeaways. …
www.lesswrong.com
October 28, 2025 at 10:06 PM
It’ll be co-located with NeurIPS. Our workshop is a separate event, so no need to register for NeurIPS to attend ours! Ours is free but invite-only; please apply here:

forms.gle/xcfgBNmaP7Wk...

Co-organized with @kulveit.bsky.social @scasper.bsky.social Raymond Douglas, and Maria Kostylew
Expression of Interest: Post-AGI Workshop: Economics, Culture, and Governance
This is a non-binding form to express your interest for the second Post-AGI Workshop. It will be held concurrently with NeurIPS in San Diego on December 3, 2025. For more details, see the workshop w...
forms.gle
October 28, 2025 at 10:06 PM
Iason Gabriel of Google DeepMind on Resisting Disempowerment

Atoosa Kasirzadeh of CMU on "Taking post-AGI human power seriously"

Deger Turan, CEO of Metaculus, on "Concrete Mechanisms for Slow Loss of Control"
October 28, 2025 at 10:06 PM
Beren Millidge of Zyphra on "When does competition lead to recognisable values?"

Anna Yelizarova of Windfall Trust on "What would UBI actually entail?"

Ivan Vendrov of Midjourney on "Supercooperation as an alternative to Superintelligence"
October 28, 2025 at 10:06 PM
The draft program features:

Anton Korinek on the Economics of Transformative AI

Alex Tamkin of Anthropic on "The fractal nature of automation vs. augmentation"

Anders Sandberg on "Cyborg Leviathans and Human Niche Construction"
October 28, 2025 at 10:06 PM
What's the difference, in your view?
October 25, 2025 at 5:37 PM
More generally, we worry that liberalism itself is under threat - that the positive-sum-ness of laissez-faire governance won’t hold when citizens are mostly fighting over UBI. We hope we’re wrong!
September 19, 2025 at 9:04 PM
“So far, we humans have been steering our civilisation on easy mode—wherever people went, they were indispensable. Now we have to hit a dauntingly narrow target: to create a civilisation that will care for us indefinitely—even when it doesn’t need us.”
September 19, 2025 at 9:04 PM
“The average North Korean farmer has almost no power over the state, but they are still useful. The state can’t function unless it feeds its citizens. In an era of general automation, even this minimal duty of care will go.”
September 19, 2025 at 9:04 PM
“The right to vote is the most visible sign of human influence over the state. But consider all the other levers of influence that come from economic power, such as lobbying, protesting and striking, which would also be eroded by mass automation.”
September 19, 2025 at 9:04 PM
Some highlights:

“Democracies are still quite young, and were made possible only by technologies that made liberal, pluralistic societies globally competitive. We’re fortunate to have lived through this great confluence of human flourishing and state power, but we can’t take it for granted.”
September 19, 2025 at 9:04 PM
It's fair to say that people have predicted massive permanent unemployment before and been wrong. But our piece is asking what happens when everyone actually does become permanently unemployable.
September 19, 2025 at 9:02 PM
I agree. I was just reading a LessWrong comment making a similar point:

"Liberalism's goal is to avoid the value alignment question, and to mostly avoid the question of who should control society, but AGI/ASI makes the question unavoidable for your basic life."

www.lesswrong.com/posts/onsZ4J...
www.lesswrong.com
July 10, 2025 at 3:09 PM
It’ll be co-located with ICML. Our workshop is a separate event, so no need to register for ICML to attend ours! Ours is free but invite-only; please apply on our site:

www.post-agi.org

Co-organized with Raymond Douglas, Nora Ammann,
@kulveit.bsky.social, and @davidskrueger.bsky.social
June 18, 2025 at 6:12 PM
- Are there multiple, qualitatively different basins of attraction of future civilizations?

- Do Malthusian conditions necessarily make it hard to preserve uncompetitive, idiosyncratic values?

- What empirical evidence could help us tell which trajectory we’re on?
June 18, 2025 at 6:12 PM
Some empirical questions we hope to discuss:

- Could alignment of single AIs to single humans be sufficient to solve global coordination problems?

- Will agency tend to operate at ever-larger scales, multiple scales, or something else?
June 18, 2025 at 6:12 PM
Some concrete topics we hope to address:

- What future trajectories are plausible?
- What mechanisms could support long-term legacies?
- New theories of agency, power, and social dynamics.
- AI representatives and new coordination mechanisms.
- How will AI alter cultural evolution?
June 18, 2025 at 6:12 PM
And Anna Yelizarova, @fbarez.bsky.social, @scasper.bsky.social, Beatrice Erkers, among others.

We'll draw from political theory, cooperative AI, economics, mechanism design, history, and hierarchical agency.
June 18, 2025 at 6:12 PM
Thanks for explaining, but I'm still confused. LLMs succeed regularly at following complex natural-language instructions without examples - it's their bread and butter. I agree they sometimes have problems executing algorithms consistently (unless fine-tuned to do so), but so do untrained humans.
June 17, 2025 at 6:39 PM
"only those individuals who explicitly understood a task (via a natural language explanation) reached a correct solution whereas implicit trial and error reinforcement failed to converge. This ... has yet to be demonstrated in an LLM."

Is this claiming LLMs haven't been shown to benefit from hints?
June 16, 2025 at 5:58 PM