chocmilkcultleader.bsky.social
@chocmilkcultleader.bsky.social
The militarization of AI is no longer speculative.

Last week, a Meta VP, a Palantir Technologies exec, and an ex-OpenAI research head were sworn into a new Army Reserve “innovation unit.”

Before that, Anduril secured a $642M drone contract, and the Pentagon's Replicator program cleared $1B.
June 15, 2025 at 6:46 PM
preview of an article I'm working on-

What cannot be measured may still be sacred.
What cannot be owned may still be yours to protect.
April 25, 2025 at 10:45 PM
Cat people, please help your boy out.
December 15, 2024 at 3:31 AM
Recently, I’ve noticed a growing culture of rabid idol worship (both towards people and machines), sycophancy, and the devaluation of individuals (especially of those in outgroups) within the tech-finance-media landscape I’ve been hanging around in.
December 1, 2024 at 12:29 AM
Couldn’t come up with a clear outline in my head for this (too many paths to take) so I’m just going to wing it and see how it goes.

This will either be a masterpiece or crash and burn completely
November 28, 2024 at 5:26 AM
Our upcoming article on building innovative teams will feature the following masterpiece
November 25, 2024 at 5:18 AM
I've heard a lot about this platform. Looking to make friends with AI people interested in ethical tech: fighting misinformation, automated weapons systems, surveillance systems, and other forms of tech that take power away from individuals and can lead to oppression.

Say hi #ai #tech #software
November 23, 2024 at 7:38 AM
OpenAI trying to hire a Research Scientist for Health while not requiring any understanding of healthcare as a space is peak Silicon Valley Tech Bro behavior. You can't RLHF your way out of not having domain knowledge.
artificialintelligencemadesimple.substack.com/p/a-follow-u...
October 28, 2024 at 8:34 PM
One of Silicon Valley's favorite buzzwords is useless and is used as a smokescreen to hide the real issues-
artificialintelligencemadesimple.substack.com/p/why-morall...
Why Morally Aligned LLMs Solve Nothing [Thoughts]
One of AI's Biggest Buzzwords Is a Red Herring + How to Actually Solve Some of AI's Biggest Problems
artificialintelligencemadesimple.substack.com
October 17, 2024 at 5:10 PM
LLMs require human evaluations. Especially in specialized domains or for more complex tasks.

Unfortunately, this process is incredibly unreliable, with many sources of error on both the model and evaluator sides. The article below covers Amazon's framework for accounting for these biases-
How Amazon Is Rethinking Human Evaluation for Generative Large Language Models [Breakdowns]
Reducing Bias and Improving Consistency in LLM Human Evaluations
artificialintelligencemadesimple.substack.com
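One quick way to see how shaky human evaluation gets: measure how often two evaluators even agree with each other beyond chance. A minimal sketch using Cohen's kappa (the ratings below are hypothetical, and this is an illustration, not Amazon's actual framework):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: probability both raters pick the same label independently
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical quality ratings of 8 LLM outputs by two human evaluators
rater1 = ["good", "good", "bad", "good", "bad", "good", "bad", "good"]
rater2 = ["good", "bad", "bad", "good", "good", "good", "bad", "bad"]
print(cohens_kappa(rater1, rater2))  # → 0.25
```

Here the raters agree on 5 of 8 outputs (62.5%), but once you subtract chance agreement, kappa is only 0.25: weak agreement, which is exactly the kind of evaluator-side noise that makes raw human eval scores hard to trust.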
October 8, 2024 at 9:46 PM
Hey there.

I'm a writer covering the important ideas in AI and tech from multiple perspectives. My goal is to make difficult ideas more accessible, ensure that technology serves everyone (not just the technoelite), and fight misinformation.

If that sounds like your vibe, come say hi.
October 8, 2024 at 6:58 PM