Dr Heidy Khlaaf (هايدي خلاف)
@heidykhlaaf.bsky.social
Climber 🇪🇬 | Chief AI Scientist at @ainowinstitute.bsky.social | Safety engineer (nuclear, software & AI/ML) | TIME 100 AI | MIT 35 U 35
Ex-Trail of Bits, OpenAI, Microsoft Research
https://www.heidyk.com/
Being recognised in MIT's 35 Under 35 just a week after being named to the TIME100 AI is such an honour! The profile gets to the heart of what motivates my work, including the use of AI in Gaza that has contributed to a devastating death toll: www.technologyreview.com/innovator/he...
September 8, 2025 at 12:49 PM
This is Anas's final will and testament: "I urge you not to let chains silence you, nor borders restrain you. Be bridges toward the liberation of the land and its people, until the sun of dignity and freedom rises over our stolen homeland. I entrust you to take care of my family."
August 10, 2025 at 11:31 PM
Was honoured to participate in this ICRC panel to discuss the launch of their recommendations for the use of AI decision support systems within military and lethal contexts. The recording of the event can be found here: vimeo.com/1094320405?s...
June 18, 2025 at 3:58 PM
I'll be joining the ICRC as a panelist for the upcoming event “AI in Military Decision Making: A Dialogue on How to Enhance IHL Compliance” to discuss AI decision support systems and their implications and risks in warfare.
May 26, 2025 at 11:48 AM
New paper! When safety-critical AI systems are unsafe, society at large is put at risk. But evaluations of AI within NatSec uses are built on co-opted "safety" terms that not only dilute traditional safety methods, but place AI technologists as arbiters of life or death🧵:
April 22, 2025 at 11:26 AM
Appalling that NATO, through its acquisition of Palantir's AI system, is boasting about using LLMs for warfighting capabilities. LLMs frequently get basic tasks like citations, historical events, or calculations wrong. This is not a technology that can be used for targeting.
April 19, 2025 at 5:49 PM
Google's AI Search is reporting April Fools' jokes as fact in its results. This one is from Climbing Magazine.
April 1, 2025 at 7:37 PM
Incredibly refreshing to see this thorough study by NASA that examines the recent trend to use LLMs to produce safety cases. More safety engineers need to come forward and challenge the questionable narratives being put forward about safety cases in AI.
ntrs.nasa.gov/api/citation...
March 26, 2025 at 10:08 PM
It's great to see others aptly pointing to the security risks that LLMs pose in gov/natsec use, but "human oversight" is simply not a sufficient mitigation strategy. We outlined recommendations in our paper that necessitate much more than simply adding another human-in-the-loop (HITL).
February 26, 2025 at 9:59 PM
I have a lot to say on this, but what I find most alarming is the use of LLMs for translation and transcription, which have extremely poor accuracy, within intelligence and targeting contexts. So here is my LinkedIn post regarding this part of the investigation:
February 18, 2025 at 5:49 PM
Our statement on the UK AI Safety Institute transition to the UK AI Security Institute:
ainowinstitute.org/general/ai-n...
February 14, 2025 at 3:38 PM
I suppose the IDF using GPT-4 for Arabic translation perfectly explains this
January 23, 2025 at 1:23 PM
You cannot prioritize "national security" without actual security. Removing CISA will have significant consequences for the cybersecurity posture of national infrastructure.
January 22, 2025 at 1:28 PM
*Taps the quote post*
January 19, 2025 at 12:04 PM
Deregulating nuclear power plants? What's the worst that could happen? It's just bureaucracy. Surely nothing bad has ever happened in one.
January 13, 2025 at 9:19 PM
This also puts in perspective the recent partnership between Anduril and OpenAI. AI companies are pooling their resources to reshape the DoD as they see fit. Let's not forget what Karp recently said.
December 7, 2024 at 2:16 PM
All that's being demonstrated at this point is that the UN, ICJ, and ICC are all theatre, quickly discarded when they no longer serve.
November 27, 2024 at 11:54 AM
Just happened upon my own post, with a reply that I did not hide myself. I had to click three separate options to see the post, and the replier's account is now labelled "Content Warning".
November 12, 2024 at 1:46 PM
Bringing Military x AI posts from other socials:
I cannot emphasize enough what a disaster this portion of Meta's announcement is for safety engineering; it will guarantee a catastrophic incident. TechCrunch covered our criticism here:
techcrunch.com/2024/11/04/m...
November 11, 2024 at 12:18 PM
I don't mean to ruin everyone's vibe on here by sharing a post from the bad place, but this is why I won't be using Bluesky as my main platform. I'm not interested in insulating myself from a genocide.
November 10, 2024 at 9:05 PM
New paper with @smw.bsky.social & @meredithmeredith.bsky.social. We challenge the narrative emphasising AI bioweapons risks, and bring attention to the covert proliferation of military intelligence, surveillance, targeting, and reconnaissance (ISTAR) already occurring via foundation models. 1/5
October 24, 2024 at 9:22 AM