Dr Heidy Khlaaf (هايدي خلاف)
@heidykhlaaf.bsky.social
Climber 🇪🇬 | Chief AI Scientist at @ainowinstitute.bsky.social | Safety engineer (nuclear, software & AI/ML) | TIME 100 AI | Ex: Trail of Bits, Zipline, OpenAI, Adelard, MSFT Research
https://www.heidyk.com/
Hi Nina, both Sofia and I are actually experts in nuclear safety and work on nuclear power. I recommend reading the report, as there is no fearmongering regarding nuclear power.
November 14, 2025 at 7:02 PM
Reposted by Dr Heidy Khlaaf (هايدي خلاف)
This fast-tracking approach comes alongside efforts from many of these AI companies themselves to apply unproven AI systems to speed the pace of licensing/regulation. It also forms the core of a new report from the @ainowinstitute.bsky.social @heidykhlaaf.bsky.social
November 14, 2025 at 3:51 PM
Thank you for your kind words! The irony is that they're using the Cold War analogy to roll back the very thresholds established in that period.
November 12, 2025 at 12:08 PM
Despite safety and proliferation risks, both AI labs and governments continue to pursue these initiatives by positioning nuclear infrastructure as an extension of AI infrastructure in service of the purported “AI arms race”. A risky shortcut with catastrophic consequences.
November 12, 2025 at 11:07 AM
Yes, we've actually engaged with your colleagues before! We planned to reach out at the paper's release and would be happy to include you in the thread.
October 21, 2025 at 3:47 PM
We have a paper on this coming soon and will share it once it's out!
October 21, 2025 at 2:49 PM
There's an issue with this dichotomy: it frames this "solution" as sufficient when it's far from it, and it ultimately doesn't produce anything of value. The "doing something is better than nothing" approach distracts from the risks at hand, such as AI having access to nuclear secrets.
October 21, 2025 at 10:03 AM