Helen Toner
@hlntnr.bsky.social
AI, national security, China. Part of the founding team at @csetgeorgetown.bsky.social (opinions my own). Author of Rising Tide on Substack: helentoner.substack.com
AI companies are starting to build more and more personalization into their products, but there's a huge personalization-sized hole in conversations about AI safety/trust/impacts.

Delighted to feature @mbogen.bsky.social on Rising Tide today, on what's being built and why we should care:
July 22, 2025 at 12:49 AM
Been thinking recently about how central "AI is just a tool" is to disagreements about the future of AI. Is it? Will it continue to be?

Just posted a transcript from a talk where I go into this + a couple other key open qs/disagreements (not p(doom)!).

🔗 below, preview here:
June 30, 2025 at 8:40 PM
💡Funding opportunity—share with your AI research networks💡

Internal deployments of frontier AI models are an underexplored source of risk. My program at @csetgeorgetown.bsky.social just opened a call for research ideas—EOIs due Jun 30.

Full details ➡️ cset.georgetown.edu/wp-content/u...

Summary ⬇️
May 19, 2025 at 4:59 PM
Criticizing the AI safety community as anti-tech or anti-risk-taking has always seemed off to me. But there *is* plenty to critique. My latest on Rising Tide (xposted with @aifrontiers.bsky.social!) is on the 1998 book that helped me put it into words.

In short: it's about dynamism vs stasis.
May 12, 2025 at 6:21 PM
New on Rising Tide, I break down 2 factors that will play a huge role in how much AI progress we see over the next couple years: verification & generalization.

How well these go will determine if AI just gets super good at math & coding vs. mastering many domains. Post excerpts:
April 23, 2025 at 3:46 PM
Cognitive Revolution (🇺🇸): More insidery chat with @nathanlabenz.bsky.social getting into why nonproliferation is the wrong way to manage AI misuse; AI in military decision support systems; and a bunch of other stuff.

Clip on my beef with talk about the "offense-defense" balance in AI:
April 22, 2025 at 1:27 AM
Stop the World (🇦🇺): Fun, wide-ranging conversation with David Wroe of @aspi-org.bsky.social on where we're at with AI, reasoning models, DeepSeek, scaling laws, etc etc.

Excerpt on whether we can "just" keep scaling language models:
April 22, 2025 at 1:27 AM
2 new podcast interviews out in the last couple weeks—one for more of a general audience, one more inside baseball.

You can also pick your accent (I'm from Australia and sound that way when I talk to other Aussies, but mostly in professional settings I sound ~American)
April 22, 2025 at 1:27 AM
What to do instead? IMO the best option is to think in terms of "adaptation buffers," the gap between when we know a new misusable capability is coming and when it's actually widespread.

During that time, we need massive efforts to build as much societal resilience as we can.
April 5, 2025 at 6:09 PM
The basic problem is that the kind of AI that's relevant here (for "misuse" risks) is going to get way cheaper & more accessible over time. This means that to indefinitely prevent/control its spread, your nonprolif regime will get more & more invasive and less & less effective.
April 5, 2025 at 6:09 PM
Seems likely that at some point AI will make it much easier to hack critical infrastructure, create bioweapons, etc etc. Many argue that if so, a hardcore nonproliferation strategy is our only option.

Rising Tide launch week post 3/3 is on why I disagree 🧵

helentoner.substack.com/p/nonprolife...
April 5, 2025 at 6:09 PM
Lately it sometimes feels like there are only 2 AI futures on the table—insanely fast progress or total stagnation.

Talked with @axios.com's Alison Snyder at SXSW about the many in-between worlds, and all the things we can be doing now to help things go better in those worlds.
March 12, 2025 at 3:13 PM
Due in 2 days (!): comments on v2 of the EU's draft Code of Practice for general-purpose AI.

For anyone else who'd find it helpful, here's a side-by-side comparison with v1:
draftable.com/compare/NNBy...

One snap take: the rewritten taxonomy of systemic risks (p29) is much stronger.
January 13, 2025 at 8:09 PM
Positively heartwarming NYT story on the Little Taipei springing up near TSMC's new plant in Arizona. Beef noodle soup, English lessons at church, high-skill immigrant visas... everything a progress studies nerd's heart could desire.

www.nytimes.com/2024/12/29/b...
December 31, 2024 at 11:36 AM
Fun @jascha.sohldickstein.com post that straightforwardly connects 2 ways of thinking about the perils of optimization (Goodhart's Law and overfitting), then suggests some ways to apply anti-overfitting measures to societal problems. Thought-provoking!

sohl-dickstein.github.io/2022/11/06/s...
December 31, 2024 at 11:36 AM
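(Not from the post above, just a minimal sketch of the connection it draws: if you treat training loss as the proxy metric being optimized and test loss as the true objective, overfitting is a small-scale instance of Goodhart's Law. The toy polynomial-fitting setup and all names here are my own illustration, not Sohl-Dickstein's.)

```python
# Illustrative sketch: over-optimizing a proxy (training error) eventually
# hurts the true objective (test error) -- Goodhart's Law as overfitting.
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # The "true objective" we actually care about.
    return np.sin(2 * np.pi * x)

# Small noisy sample: the only thing the optimizer gets to see.
x_train = rng.uniform(0, 1, 15)
y_train = true_fn(x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = true_fn(x_test)

for degree in [1, 3, 9, 14]:
    coeffs = np.polyfit(x_train, y_train, degree)  # optimize the proxy harder
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # The proxy keeps improving with more capacity; past a point the true
    # objective gets worse -- the measure stops tracking the target.
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```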
By Anton Leicht: how well did AI safety advocates make use of the post-ChatGPT policy window? Some good nuance re: how hard it is to regulate tech that doesn't exist yet, and where the AI safety crowd made mistakes vs. incurred hard-to-avoid costs.

www.techpolicy.press/the-end-of-t...
December 31, 2024 at 11:36 AM