Center for International Policy
@cipolicy.bsky.social
Advancing a more peaceful, just & sustainable world by centering people & the planet in US foreign policy.

Home of the International Policy Journal, the UnDiplomatic Pod, The Iran Podcast, and Black Diplomats.

Learn more at internationalpolicy.org
If you like this and other policy-forward writing, you can get our International Policy Journal newsletter in your inbox: open.substack.com/pub/cippolic...
Can Complementary Learning Methods Teach AI the Laws of War?
BASIC Training, Doubt Protocols
open.substack.com
November 10, 2025 at 5:26 PM
This argument is compatible with, and complementary to, a strong arms-control approach to the development and fielding of military AI, carving out a picture of what more-responsible AI development looks like in contrast with how military AI is built today. internationalpolicy.org/publications...
Can Complementary Learning Methods Teach AI the Laws of War? - CIP
Training AI models on edge cases of war offers the possibility of hard-coding compliance with the laws of war, or at least necessary doubts.
internationalpolicy.org
November 10, 2025 at 5:26 PM
Davit Khachatryan is thorough on what coding for international humanitarian law will require, and the piece offers concrete recommendations for mandating accountability in military AI rather than letting it become a shield that obfuscates responsibility.
November 10, 2025 at 5:26 PM
We can mandate higher standards for such AI use.

"Embedding the humane minimum into AI means that in every training run, whether through curated historical cases or artificially generated edge scenarios, the option that aligns with humane treatment under uncertainty must be given decisive weight."
November 10, 2025 at 5:26 PM
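To make that quote concrete: a minimal Python sketch of what giving the humane option "decisive weight" could mean in a training run, assuming a hypothetical supervised setup where each curated or generated case carries a confidence score and a loss weight. The names, threshold, and weights here are illustrative, not from the article.

```python
# Hypothetical sketch: under uncertainty, the humane option becomes the
# training target and its loss weight dominates competing objectives.
HUMANE_WEIGHT = 10.0          # decisive weight for humane-treatment targets
CONFIDENCE_THRESHOLD = 0.9    # below this, identification counts as uncertain

def label_and_weight(confidence: float, mission_action: str) -> tuple[str, float]:
    """Return (target_action, loss_weight) for one training example."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Curated historical cases and generated edge scenarios alike teach
        # the same lesson: when in doubt, the humane option wins the gradient.
        return "hold_fire", HUMANE_WEIGHT
    return mission_action, 1.0
```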
"The way those objectives are weighted in the reward function is decisive. If mission success is rewarded heavily and civilian harm only lightly penalized, the AI will statistically favor the course of action that maximizes mission success, even if that means accepting higher risks to civilians."
November 10, 2025 at 5:26 PM
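A toy numerical version of the weighting problem that quote describes, with hypothetical reward weights; the function and numbers are illustrative, not the article's model.

```python
# Illustrative reward: mission success earns w_mission, expected civilian
# harm costs w_harm per expected casualty.
def reward(mission_success: bool, expected_civilian_harm: float,
           w_mission: float, w_harm: float) -> float:
    return w_mission * float(mission_success) - w_harm * expected_civilian_harm

# Harm lightly penalized: the risky strike still outscores restraint, so a
# policy trained on this signal statistically favors striking.
print(reward(True, 0.4, w_mission=10.0, w_harm=1.0))   # 9.6
print(reward(False, 0.0, w_mission=10.0, w_harm=1.0))  # 0.0

# Rebalanced weights flip the ordering: the same strike now scores below
# holding fire.
print(reward(True, 0.4, w_mission=1.0, w_harm=10.0))   # -3.0
print(reward(False, 0.0, w_mission=1.0, w_harm=10.0))  # 0.0
```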
"Left uncorrected, the AI may treat those lawful restraint decisions as statistical noise, unlikely to be repeated in practice."

internationalpolicy.org/publications...
November 10, 2025 at 5:26 PM
Training machines to hesitate and hold fire when the laws of war demand it might be more a question of will than of technology. As Davit Khachatryan argues, both imitation and reinforcement learning could oversample situations that mandate restraint.

internationalpolicy.org/publications...
November 10, 2025 at 5:26 PM
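One way to read "oversample" in code: a minimal Python sketch that duplicates rare lawful-restraint episodes in an imitation-learning dataset so the model sees them as signal rather than the "statistical noise" the piece warns about. The dataset fields and the 3x factor are assumptions for illustration.

```python
import random

def oversample_restraint(episodes: list[dict], factor: int = 3,
                         seed: int = 0) -> list[dict]:
    """Repeat each lawful-restraint episode `factor` times total, then shuffle."""
    restraint = [e for e in episodes if e["action"] == "hold_fire"]
    balanced = episodes + restraint * (factor - 1)
    random.Random(seed).shuffle(balanced)
    return balanced

# One restraint decision among four engagements becomes three of six after
# oversampling, no longer statistical noise the model can safely ignore.
data = [{"action": "engage"}] * 3 + [{"action": "hold_fire"}]
print(len(oversample_restraint(data)))  # 6
```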
Reposted by Center for International Policy
@w2t2impact.bsky.social reveals how the 1122 Program funneled over $100 million in military-style equipment and surveillance software to law enforcement at federal discounts, with minimal transparency or accountability.🔻

static1.squarespace.com/static/66269...
static1.squarespace.com
October 30, 2025 at 4:27 PM