David Manheim
@davidmanheim.alter.org.il
Humanity's future can be amazing - let's make sure it is.

Visiting lecturer at the Technion, founder of https://alter.org.il, Superforecaster, Pardee RAND graduate.
Definitely worth a reminder, even though I certainly should already know to do it!
October 23, 2025 at 6:21 PM
And I find myself strongly agreeing with almost everything Emma Ruttkamp-Bloem is saying in her #AIES2025 keynote about the future of AI ethics - despite continuing to worry that the vision for how to make AI systems more ethical does not sufficiently address future risks.
October 22, 2025 at 11:32 AM
Many things are explainable, but not understood by us. You were asserting that LLMs are understood, not that they are theoretically understandable.
October 13, 2025 at 12:14 PM
Good to see you at least read the abstract.

Now try the paper, especially the part about how the behaviors of attention heads evolve with respect to the training data distribution, and then try telling me again that language models are directly coded.
October 5, 2025 at 6:21 PM
...what makes someone qualify as a Zionist, in your view?

Because if you mean supporting genocide and/or ethnic cleansing in Gaza, sure, that's obviously horrific. But if you mean wanting a 2-state solution instead of wanting all of Israel wiped off the map, I'm much more concerned.
October 5, 2025 at 6:19 PM
I'm very unsure if you're shockingly ignorant for someone so confident, or shockingly confident for someone so ignorant, but I'm not sure it matters.

Anyways, here's an expert obviously disagreeing with you for you to ignore: arxiv.org/abs/2504.18274
Structural Inference: Interpreting Small Language Models with Susceptibilities
We develop a linear response framework for interpretability that treats a neural network as a Bayesian statistical mechanical system. A small perturbation of the data distribution, for example shiftin...
arxiv.org
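To give a flavor of the linear-response idea the abstract gestures at: perturb the data distribution slightly, refit, and measure how the learned parameters respond. The toy regression, the fit helper, and eps below are all illustrative stand-ins I've invented, not the paper's actual statistical-mechanics method or code:

```python
# Minimal sketch of a linear-response "susceptibility": reweight the data
# distribution by a small epsilon, re-estimate, and measure the shift in
# the fitted parameters. Toy weighted least squares, NOT the paper's method.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

def fit(weights):
    # Weighted least squares: the "training" step under a reweighted data distribution.
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

base = np.ones(200)
theta0 = fit(base)

# Perturb the distribution: slightly up-weight examples where feature 0 is large.
eps = 1e-3
theta_eps = fit(base + eps * (X[:, 0] > 1.0))

# Susceptibility: the linear response of the learned parameters to the perturbation.
print((theta_eps - theta0) / eps)
```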
October 5, 2025 at 6:15 PM
Bottom line: Treat AI-enabled bio risk as rising but still governable, and aim for clearer threat models, empirical monitoring, and adaptive policies.
(12/12)

And if you don't want to read the 100+ page report, read the 6-page @rand.org brief for details:
www.rand.org/pubs/researc...
When Should We Worry About AI Being Used to Design a Pathogen?
Concerns that artificial intelligence (AI) might enable pathogen design are increasing, but risks and timelines remain unclear. This brief describes a Delphi study of biology and AI experts who debate...
www.rand.org
October 5, 2025 at 3:54 PM
And back to the risk: norms matter, but they aren't enough. Self-governance (reviews, responsible disclosure) helps, yet it can't reliably constrain determined actors or novel misuse. We'll need coordinated regulatory and institutional guardrails, though they don't need to be intrusive. (11/12)
October 5, 2025 at 3:53 PM
But even with all the risk, we can invest in pandemic readiness that pays off regardless of origin—rapid diagnostics, scalable vaccines, surge capacity. These reduce incentives and impact even if controls are bypassed.

Andrew Snyder-Beattie's recent interview discussed this:
www.youtube.com/watch?v=pnfT...
(10/12)
AI-designed diseases are coming. Here's the defence plan. | Andrew Snyder-Beattie
YouTube video by 80,000 Hours
www.youtube.com
October 5, 2025 at 3:52 PM
And to prepare, there are some concrete safeguards to build:
– Strengthen global gene-synthesis screening (toy sketch below).
– Add identity checks, experiment pre-screens, and audit trails for cloud/automated labs.
– Improve data governance for genomic/experimental datasets (quality + access control).
(9/12)
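As a toy illustration of the first item, the sketch below flags synthesis orders that closely resemble a controlled-sequence list. The placeholder sequences, the threshold, and the fuzzy matcher are all invented for illustration; no real screening standard works this way:

```python
# Toy sketch of automated synthesis-order screening: flag orders whose
# sequences closely match entries on a controlled-sequence list.
from difflib import SequenceMatcher

CONTROLLED_SEQUENCES = {  # placeholder entries, not real sequences of concern
    "fragment_A": "ATGGCTAGCTAGGACTTACGGATT",
    "fragment_B": "TTGACCGGTAACCTAGGCATTACG",
}

def screen_order(order_seq: str, threshold: float = 0.8) -> list[str]:
    """Return the names of controlled sequences the order resembles."""
    hits = []
    for name, ref in CONTROLLED_SEQUENCES.items():
        if SequenceMatcher(None, order_seq, ref).ratio() >= threshold:
            hits.append(name)
    return hits

order = "ATGGCTAGCTAGGACTTACGGAAT"  # one base off from fragment_A
flagged = screen_order(order)
if flagged:
    print("Hold for human review; audit-log the order:", flagged)
else:
    print("Clear to synthesize (per this toy check).")
```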
October 5, 2025 at 3:51 PM
Policy implications (pragmatic):
– Focus mitigations on plausible, actionable risks and misuse pathways now.
– Risk will increase if barriers fall, so we should monitor four vectors: clinical bioengineering, lab automation, high-fidelity simulations, and generally capable AI.
(8/12)
October 5, 2025 at 3:50 PM
Where expert views diverge: the speed of capability gains. Some expect steady, marginal increases; others worry about threshold effects where capabilities jump quickly. Both camps agree that monitoring will be essential soon, but there is a lot of genuine uncertainty about timelines. (7/12)
October 5, 2025 at 3:50 PM
Biology still pushes back: transmissibility has physical and biological ceilings; environmental stability trades off against other fitness traits; and so on. Many constraints interact, so these limits (which we explain at length in the report) need to be understood in concert. (6/12)
October 5, 2025 at 3:50 PM
What AI helps with today: pattern-finding, hypothesis generation, identifying gene targets, and speeding iterative design.

Today, these are force multipliers for sophisticated or state actors, not push-button bioweapons - but we found no fundamental limits that would prevent that in the future.
(5/12)
October 5, 2025 at 3:50 PM
The experts see risk expanding after 2027 as models, automation, and simulation improve.
Until then:
– Data limits: models are only as good as their (noisy, incomplete, and biased) training data.
– Complex biology: host–pathogen dynamics are hard to predict.
– Wet-lab bottlenecks: validation is slow and expensive.
(4/12)
October 5, 2025 at 3:48 PM
I should also clarify: we are explicitly not discussing terrorist misuse of language models to spread extant pathogens. That's unfortunately already possible, especially with jailbroken frontier models - but it is not the biggest risk.

So why the caution (not panic) right now?
(3/12)
October 5, 2025 at 3:46 PM
Bottom line (near term): Through ~2027, AI is mostly an accelerator for already-skilled actors—not an autonomous designer of novel pathogens. Useful, yes. Independent, no. And significant uncertainty remains about the slope of progress.
(2/12)
October 5, 2025 at 3:46 PM
"Circular financing" and "new revenue helps support NVIDIA’s valuation" is only true if investors think it should be.

So this is actually just saying "NVIDIA investors are happy to fund this deal."
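To make the circularity concrete, here's the arithmetic with invented numbers (the figures and the multiple are illustrative, not from any actual deal):

```python
# Illustrative arithmetic for "circular financing" (all numbers invented):
# a vendor investment that comes back as revenue only boosts the vendor's
# market cap if investors keep applying their usual revenue multiple.
investment = 10e9       # hypothetical: NVIDIA invests $10B in a customer
chip_purchases = 10e9   # the customer spends it back on NVIDIA hardware
ps_multiple = 25        # hypothetical price-to-sales multiple

revenue_gain = chip_purchases
implied_cap_gain = ps_multiple * revenue_gain  # only if investors play along

print(f"Cash out: ${investment / 1e9:.0f}B; revenue in: ${revenue_gain / 1e9:.0f}B")
print(f"Implied valuation bump at {ps_multiple}x P/S: ${implied_cap_gain / 1e9:.0f}B")
# The bump exists only if the market treats vendor-financed revenue like
# organic revenue - which is exactly the investor-belief condition above.
```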
September 26, 2025 at 10:09 AM
If true, the Dems are so screwed, and that means the US is so screwed.

If they can't go moderate, they will continue to lose elections to populists determined to undermine US democracy generally.
September 25, 2025 at 8:42 AM
Link to original: x.com/goodalexande...

Follow-up tweet:
September 7, 2025 at 9:30 AM