Dr Heidy Khlaaf (هايدي خلاف)
@heidykhlaaf.bsky.social
Climber 🇪🇬 | Chief AI Scientist at @ainowinstitute.bsky.social | Safety engineer (nuclear, software & AI/ML) | TIME 100 AI | MIT 35 U 35
Ex-Trail of Bits, OpenAI, Microsoft Research
https://www.heidyk.com/
Pinned
New paper with @smw.bsky.social & @meredithmeredith.bsky.social. We challenge the narrative emphasising AI bioweapons risks, and bring attention to the covert proliferation of military intelligence, surveillance, targeting, and reconnaissance (ISTAR) already occurring via foundation models. 1/5
People really lack the capacity to understand that Israel using Palestine as a lab for lethal AI surveillance means the rest of the world is the intended target for prime deployment. This is the plan for our future, one we must vehemently oppose.
nymag.com/intelligence...
Watched, Tracked, and Targeted in Gaza
Life under Israel’s all-encompassing surveillance regime.
nymag.com
December 6, 2025 at 12:17 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
The EBU dropped plans for a vote on excluding Israel because of the ceasefire

This is the point of the pretend ceasefire. Not to stop the genocide but to allow countries & organisations to more easily resist calls to put pressure on Israel & to reverse actions already taken
December 5, 2025 at 8:22 AM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
A presentation at the International Atomic Energy Agency unveiled Big Tech’s vision of an AI and nuclear fueled future.
‘Atoms for Algorithms:’ The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants
www.404media.co
December 4, 2025 at 3:15 PM
I spoke to @mjgault.bsky.social about why it was concerning that the US DOE took the position of using generative AI for safety cases and licensing at the IAEA symposium, while also advocating for eliminating human intervention during normal nuclear operations.
December 4, 2025 at 3:35 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
I have a fun presentation for you to watch, all about the AI-nuclear powered future Trump and Tech would love to deliver unto us all
‘Atoms for Algorithms:’ The Trump Administration’s Top Nuclear Scientists Think AI Can Replace Humans in Power Plants
A presentation at the International Atomic Energy Agency unveiled Big Tech’s vision of an AI and nuclear fueled future.
www.404media.co
December 4, 2025 at 3:19 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
This will end well 😩

New report from @ainowinstitute.bsky.social, written by Dr. Sofia Guerra and @heidykhlaaf.bsky.social.

Link: ainowinstitute.org/publications...
December 3, 2025 at 1:10 AM
As AI continues to be adopted in national security and defence contexts, the rise of gen AI agents poses questions regarding both their cyber capabilities and the novel attack vectors inherent to their use that may impede military operations. Excited to work with Boyan to assess exactly this!
We’re expanding our national security and defense work, and welcoming Boyan Milanov to the team. Boyan is a research scientist evaluating cybersecurity risks in agentic AI systems related to national security, defense, and safety-critical infrastructure.

ainowinstitute.org/contributor/...
December 2, 2025 at 8:14 PM
We don't talk enough about how our governments are captured by a bunch of X shitposters with substacks who "prove" things by pointing to cherry-picked or disproven corporate claims while yelling "abundance" and "build more!" with not an ounce of expertise. Especially with ...
December 1, 2025 at 7:03 PM
If you've spoken to any western military personnel, this has been known for quite some time. Unsurprising given the track record of Oculus within the military. This is the outcome when defense contractors, especially those selling "AI", grade their own homework.
Alternate headline: major Republican donor, who has received bipartisan billion-dollar contracts to create autonomous weapons and surveillance systems to satisfy an obsession with increasing the lethality and efficiency of warfare, has faulty products.
Anduril's autonomous weapons stumble in tests and combat, WSJ reports | TechCrunch
Defense tech startup Anduril Industries has faced numerous setbacks during testing of its autonomous weapons systems, according to new reporting by the WSJ.
techcrunch.com
December 1, 2025 at 4:08 PM
And this is also exactly why the deferral of nuclear regulation and oversight from the NRC to the DOD is particularly dangerous. These are political and partial actors who do not have public safety in mind.

www.theguardian.com/us-news/2025...
US navy accused of cover-up over dangerous plutonium in San Francisco
Advocates allege navy knew levels of airborne plutonium at Hunters Point shipyard were high before it alerted officials
www.theguardian.com
November 29, 2025 at 6:25 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
As the world shifts its gaze away from Palestine, the series stands as documentation of the continued ethnic cleansing and offers testimonies and stories from those facing displacement, homelessness and violence from settlers.
The Death that Keeps on Going
How much can one village physically take? The worst-case scenario has already happened countless times in the small West Bank community of Umm al-Khair. It happened when prominent Palestinian…
www.versobooks.com
November 27, 2025 at 6:02 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
It’s all happening in NYC! Taking real steps towards a politics of hope that's already diffusing far beyond this city and country: I couldn't be more excited to serve on Mayor-Elect Zohran Mamdani's wide-ranging and hugely inspiring transition team: www.cbsnews.com/newyork/news...
November 25, 2025 at 9:30 PM
Despite warnings in our report, today's release of the UK Nuclear Regulatory Review is littered with unsubstantiated claims and recommendations touting AI "as a powerful tool" and "cost-effective" to be used for safety and licensing without noted risks or caveats. This trend has now reached the UK.
Problem: AI needs massive amounts of power to thrive. Nuclear makes lots of power. Nuclear takes a long long time to do safely.

Proposed solution that I'm sure will have no unpleasant consequences: Use AI to speed up the construction of new nuclear plants.
Power Companies Are Using AI To Build Nuclear Power Plants
Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants.
www.404media.co
November 24, 2025 at 1:16 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
I've said it before and I will say it again: there is no way to secure a system when its potential attack surface is *all of language*.
Looks like LLMs are *very* vulnerable to attack via poetic allusion: "curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90% ..."

https://arxiv.org/html/2511.15304v1
November 20, 2025 at 5:23 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
Tech companies are betting big on nuclear energy to meet AI’s massive power demands—and Trump’s done a lot to make it easier for them. Heidy Khlaaf, the head AI scientist at the AI Now Institute, tells us why that’s dangerous.

@mjgault.bsky.social has the story:
www.404media.co/power-compan...
November 14, 2025 at 6:57 PM
Great coverage by @mjgault.bsky.social on our report, what's at stake, and what could go wrong in using AI in an attempt to accelerate nuclear development. Read our report here: ainowinstitute.org/publications...
NEW: Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants.

And despite expert concerns about potential disaster, the US government is on board.
Power Companies Are Using AI To Build Nuclear Power Plants
Tech companies are betting big on nuclear energy to meet AI's massive power demands, and they're using that AI to speed up the construction of new nuclear power plants.
www.404media.co
November 14, 2025 at 6:52 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
This fast-tracking approach comes alongside efforts from many of these AI companies themselves to apply unproven AI systems to speed the pace of licensing/regulation. It also forms the core of a new report from the @ainowinstitute.bsky.social @heidykhlaaf.bsky.social
November 14, 2025 at 3:51 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
New: I wrote about the nuclear push coming from an energy-constrained AI industry, one importantly coupled with an increasingly de-regulatory environment coming from the White House that often mirrors the language coming directly from these corporations.

puck.news/ais-nuclear-...
A.I. Goes Nuclear!
OpenAI, Google, and Microsoft are betting big on nuclear energy to power their A.I. data centers. But weakened regulations may create risks, nuclear safety experts warn.
puck.news
November 14, 2025 at 3:51 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
This week OpenAI walked back a call for the govt to backstop financing for its trillion-dollar investments in data centers. This was only the tip of the iceberg; a slow bailout for AI firms is already underway. Read more from @ambakak.bsky.social and me in @wsj.com: www.wsj.com/opinion/you-...
Opinion | You May Already Be Bailing Out the AI Business
Washington is treating the industry as if it’s too big to fail, even as the market sends lukewarm signals.
www.wsj.com
November 12, 2025 at 10:56 PM
New Report: Fission for Algorithms. We draw on our nuclear expertise to dissect the risky fast-tracking initiatives hastening nuclear development in service of AI. This includes proposals to use Gen AI for nuclear licensing, whilst lowering well-established nuclear thresholds.
Fission for Algorithms: The Undermining of Nuclear Regulation in Service of AI - AI Now Institute
A report examining nuclear “fast-tracking” initiatives on their feasibility and their impact on nuclear safety, security, and safeguards.
ainowinstitute.org
November 12, 2025 at 11:05 AM
"Rafael purchased AI technologies made available through AWS, including the state-of-the-art large language model Claude ... The materials reviewed also indicate Amazon sold cloud-computing services to Israel’s nuclear program and offices administering the West Bank"
SCOOP: I obtained internal documents showing Amazon has been selling cloud computing and AI services to the state-owned Israeli weapons manufacturers whose bombs and missiles have ravaged Gaza.

theintercept.com/2025/10/24/a...
As Israel Bombed Gaza, Amazon Did Business With Its Bomb-Makers
The Intercept has learned that Amazon sold cloud services to Israeli weapons firms at the height of Israel’s bombardment of Gaza.
theintercept.com
October 25, 2025 at 8:24 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
Anthropic’s partnership with the DOE to keep Claude from building a nuclear weapon makes for good headlines. @heidykhlaaf.bsky.social calls it security theater. The real risk is AI firms gaining access to national security data.

www.wired.com/story/anthro...
Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?
Anthropic partnered with the US government to create a filter meant to block Claude from helping someone build a nuke. Experts are divided on whether it's a necessary protection—or a protection at all.
www.wired.com
October 22, 2025 at 5:48 PM
I spoke to @mjgault.bsky.social in WIRED on what I ultimately view as safety theatre for "nuclear safeguarding" and how it distracts from the real risk of unregulated private corporations having access to incredibly sensitive nuclear secrets given their insecure AI models.
October 21, 2025 at 9:58 AM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
Anthropic says its AI won't help you build a nuclear weapon. Will it work? And can a chatbot even help build a nuke?
Anthropic Has a Plan to Keep Its AI From Building a Nuclear Weapon. Will It Work?
Anthropic partnered with the US government to create a filter meant to block Claude from helping someone build a nuke. Experts are divided on whether it's a necessary protection—or a protection at all.
www.wired.com
October 20, 2025 at 2:04 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
OpenAI, Anthropic & others have shifted from championing ethics to signing $200M+ defense contracts that embed gen AI into high-risk military systems. In @theverge.com, @heidykhlaaf.bsky.social explains why the move toward defense partnerships is a safety risk.

Listen here: shorturl.at/mTAZq
September 30, 2025 at 6:19 PM