Amos Toh
amostoh.bsky.social
Amos Toh
@amostoh.bsky.social
senior counsel, Brennan Center for Justice. interested in all the ways money in tech helps and hurts. he/him, 🇸🇬 in 🇺🇸
25/ The effort to purge “woke AI” might force the hand of tech companies, leading to AI that is even more broken. This could saddle the military with faulty intel that harms soldiers and civilians. For the rest of us, we would be stuck with technology that leaves us more divided and less informed.
July 24, 2025 at 6:43 PM
24/ OpenAI and Anthropic are selling their foundation models to the military.

Google is working with Lockheed to develop AI weapons.

Meta has bought a 49% stake in Scale AI, an up-and-coming defense contractor.

Amazon and Microsoft are major military cloud providers.

Billions are at stake.
July 24, 2025 at 6:43 PM
23/ This time, tech company leaders have tried to pander to the administration’s “anti-woke” agenda, but this strategy could put them in a bind, particularly as they seek to land lucrative defense contracts that would offset the enormous costs of their AI business.

fortune.com/2025/01/13/z...
Zuckerberg says most companies need more ‘masculine energy’
Zuckerberg, who launched his career by rating the attractiveness of women at Harvard University, lamented the rise of “culturally neutered” companies.
fortune.com
July 24, 2025 at 6:43 PM
22/ Amazon complained this was retaliation against Jeff Bezos – Trump was unhappy at the time about reporting by WaPo, which Bezos owns. The suit was later dismissed after the contract was revised, and Amazon was awarded a portion of it.

www.reuters.com/legal/govern...
U.S. judge ends Amazon challenge to $10 bln cloud contract after Pentagon cancellation
A U.S. judge on Friday dismissed Amazon.com's legal challenge to the Defense Department's2019 decision to award a $10 billion JEDI cloud-computing project to rival Microsoft Corp (MSFT.O) after the Pentagon canceled the contract.
www.reuters.com
July 24, 2025 at 6:43 PM
21/ This wouldn’t be the first time the Trump admin has tried to kill federal contracts in response to perceived slights. In 2019, Amazon sued the first Trump admin for using “improper pressure” to deprive it of a $10 billion military cloud computing contract.

www.nytimes.com/2019/12/09/t...
Amazon Accuses Trump of ‘Improper Pressure’ on JEDI Contract (Published 2019)
www.nytimes.com
July 24, 2025 at 6:43 PM
20/ But the wild card here is the administration itself – a chatbot response goes viral, Trump takes offense, and threatens to cancel the provider’s federal contracts – sounds familiar?

www.wsj.com/business/tru...
Exclusive | Trump Aides Discussed Ending Some SpaceX Contracts, but Found Most Were Vital
Fallout between the president and the rocket maker’s billionaire founder threatened the company’s multibillion-dollar agreements with the government
www.wsj.com
July 24, 2025 at 6:43 PM
19/ The order provides room for this, clarifying that agencies should “account for technical limitations” in enforcing compliance and “avoid over-prescription and afford latitude for vendors.”
July 24, 2025 at 6:43 PM
18/ Of course, tech companies could treat the order as symbolic, and placate the admin with messaging about anti-woke measures rather than meaningful changes to how their models are developed and used.
July 24, 2025 at 6:43 PM
17/ The White House guidance does not apply to national security agencies, and the order allows for exceptions “as appropriate” for nat sec uses of Large Language Models. But the intent to mold foundation models in the admin’s image of what “truths” are acceptable is clear.
July 24, 2025 at 6:43 PM
16/ This risk is not theoretical: in April, the White House updated its guidance on federal use and acquisition of AI to remove all references to bias detection and mitigation, including measures to account for the tech’s impact on underrepresented populations. www.lawfaremedia.org/article/narr...
Narrowing the National Security Exception to Federal AI Guardrails
Fostering public trust in how the government uses AI to protect national security requires robust and enforceable rules on how it is authorized, tested, disclosed, and overseen.
www.lawfaremedia.org
July 24, 2025 at 6:43 PM
15/ While the order is limited to Large Language Models, it’s easy to see how defense contractors could be pressured to strip out bias detection the admin doesn't like across foundation model training. This may undermine other military functions that increasingly rely on AI, like target recognition.
July 24, 2025 at 6:43 PM
14/ If federal providers of AI-based translation services are prevented from fine tuning their models to be more accurate for languages other than in English, or to limit the reinforcement of harmful stereotypes, this could compromise the analysis of intelligence in these languages.
July 24, 2025 at 6:43 PM
13/ The problem is that this could harm model performance. And the federal agency that this would impact the most is the government’s biggest spender on AI - the Department of Defense. www.brookings.edu/articles/the...
The evolution of artificial intelligence (AI) spending by the U.S. government | Brookings
U.S. federal government spending on AI-related contracts has massively increased in the last year, especially in the defense sector.
www.brookings.edu
July 24, 2025 at 6:43 PM
12/ Tech companies may also try to comply by rolling back internal measures to mitigate the spread of content that denigrates or erases racial and other minorities, or account for underrepresented languages in training data.

Critical studies like this could be shelved: openai.com/index/evalua...
Evaluating fairness in ChatGPT
We've analyzed how ChatGPT responds to users based on their name, using language model research assistants to protect privacy.
openai.com
July 24, 2025 at 6:43 PM
11/ But fine-tuning is more insidious than content filters, since it biases the model to favor certain outcomes. We could end up with models that systematically produce biased responses in a neutral-sounding way, e.g. that there is no scientific consensus on the effects of climate change.
July 24, 2025 at 6:43 PM
10/ This type of “fine-tuning” is apparently used to prevent chatbots from teaching users how to build bombs, conduct cyberattacks, or engage in dangerous or illegal action. Again, not foolproof - users can manipulate prompts to evade these restrictions.

www.theguardian.com/technology/a...
AI chatbots’ safeguards can be easily bypassed, say UK researchers
Five systems tested were found to be ‘highly vulnerable’ to attempts to elicit harmful responses
www.theguardian.com
July 24, 2025 at 6:43 PM