danermcshaner.bsky.social
@danermcshaner.bsky.social
A human commander who leverages the intelligence distributed across an entire force more effectively and responsively might still win out against a superhuman AI that doesn't.
August 4, 2025 at 4:20 AM
As for AI in command, my (amateur) hypothesis is that it potentially conflicts with mission command. Eventually, humans will be involved, and they need freedom to operate. Would an AI's plans still hold up if the humans in the loop deviate from them, or if communication breaks down?
August 4, 2025 at 4:20 AM
I think this comes from their citation of Suchman's "Imaginaries of Omniscience." Suchman has written critical work on human-computer interaction, but I think she and the authors of the above piece misinterpret the OODA loop as a closed system that ignores interaction with the environment.
August 4, 2025 at 4:20 AM
From a sense-making standpoint, you might find Scott Snook’s Friendly Fire and Eric-Hans Kramer’s Organizing Doubt interesting. The latter is kind of hard to find, but there’s a PDF of an earlier version available here - pure.tue.nl/ws/files/219...
July 19, 2025 at 4:03 AM
Dave Snowden’s Cynefin framework might be worth a look if you haven’t already encountered it. I think he has said something similar.
June 11, 2025 at 5:59 AM