Garrison Lovely
@garrisonlovely.bsky.social
Writing a book on AI+economics+geopolitics for Nation Books.
Covers: The Nation, Jacobin. Bylines: NYT, Nature, Bloomberg, BBC, Guardian, TIME, The Verge, Vox, Thomson Reuters Foundation, + others.
Pinned
Hello! I report on the intersection of capitalism, geopolitics, and artificial intelligence. If you're into that, subscribe to The Obsolete Newsletter.
The Obsolete Newsletter | Garrison Lovely | Substack
Reporting and analysis on capitalism, great power competition, and the race for machine superintelligence from journalist w/ work in NYT, BBC Future, The Verge, TIME, Vox, The Guardian, The Nation, an...
garrisonlovely.substack.com
I came out of book hibernation to give you perhaps the final piece in my series covering OpenAI's restructuring.

OpenAI is framing this as a neatly packaged fait accompli, but it's actually a tensely negotiated settlement, with lots of conditions. 🧵
x.com/OpenAI/stat...
October 28, 2025 at 10:58 PM
IMO there are 3 big problems with Dean's post:
1. I really don't think it's a reasonable prediction of how this statement would be operationalized
2. Superintelligence would inherently concentrate enormous power in whatever controls it
x.com/deanwball/s...
October 23, 2025 at 8:38 PM
Damn I love Wikipedia.
October 23, 2025 at 8:12 PM
imagine torching your decades-in-the-making rep as a lib billionaire to get some free conference security.

(TBC I don't think that's why he did it...)
October 16, 2025 at 7:14 PM
I had been following these developments but no one had put them all together. The world's largest company appears to be increasingly throwing its weight around to fight regulation.
x.com/sjgadler/st...
October 16, 2025 at 4:55 PM
Wow, OpenAI's head of mission alignment just spoke out against the way the company has been using subpoenas to intimidate and disrupt political opponents.

A surprising number of OAI rank & file have no idea what their leadership is doing to kill regulation.
x.com/jachiam0/st...
October 10, 2025 at 6:55 PM
Well, now I need to update my book.

To my knowledge, this is the first time Sam Altman hasn't downplayed or dismissed AI existential risk since early 2023.

TBC, I think it's good of Altman to say this if that's what he actually believes...
x.com/ai_ctrl/sta...
October 8, 2025 at 4:20 PM
This reminds me of arguments McKinsey would make to justify working for Gulf autocracies. But academic research has found that the opposite tends to happen: rather than exporting their values, companies abandon human rights commitments to conform to their wealthy clients.
x.com/ShakeelHash...
October 7, 2025 at 8:07 PM
Last week, I had the incredible privilege of moderating a conversation on AI+geopolitics w/ Daron Acemoglu & Sandhini Agarwal (OpenAI trustworthy AI lead) for the Nobel Foundation at the Swedish Consulate.

It was lively+substantive (one audience member was pleasantly surprised)
x.com/swedennewyo...
September 30, 2025 at 5:58 PM
Newsom just signed Sen. Scott Wiener's SB 53, exactly one year after he vetoed SB 1047. www.gov.ca.gov/2025/09/29/...
Governor Newsom signs SB 53, advancing California’s world-leading artificial intelligence industry | Governor of California
www.gov.ca.gov
September 29, 2025 at 8:48 PM
"The human"
September 25, 2025 at 7:38 PM
Newsom just strongly signaled he'll sign SB 53, Scott Wiener's pared down redux of SB 1047 (which Newsom vetoed last year).

I and many others expected him to sign this one, but now it seems like a done deal. He has to be physically in CA to sign, and this happened in NYC. 🧵
September 24, 2025 at 4:52 PM
What the hell is going on here??
September 23, 2025 at 3:57 PM
Earlier today, a source wondered why OpenAI was so quiet on export controls, despite the fact that they would help OAI keep a lead over Chinese efforts (and bc Sam Altman has adopted the China-race framing).

Well now we have a pretty good guess!
x.com/OpenAINewsr...
September 22, 2025 at 10:40 PM
Josh Hawley just demanded lots of information on chatbot policies from OpenAI, Meta, Google, CharacterAI and Snapchat.
September 18, 2025 at 6:45 PM
Thomas Friedman just soft-endorsed a campaign to ban the creation of artificial superintelligence.
x.com/tomfriedman...
September 18, 2025 at 3:08 PM
Can a set of directives like Asimov's 3 Laws of Robotics actually make AI safer? Surprisingly, seems like yes!

Apollo Research worked with OpenAI to massively reduce rates of AI scheming. Quick 🧵
x.com/apolloaieva...
September 17, 2025 at 6:37 PM
You could be forgiven for thinking that OpenAI had given up on its plans to shed its nonprofit controls, especially after it made headlines for offering its nonprofit a $100B+ stake.

But it's actually trying to dramatically weaken nonprofit controls and its promise to share its profits. 🧵
September 12, 2025 at 11:44 PM
Revisiting this banger 1969 essay by the former US NatSec advisor on the enormous gulf between nuclear planning and the political reality of nuclear weapons. A single hydrogen bomb on NYC could easily kill more Americans than every US war death in history.
September 9, 2025 at 4:00 PM
OpenAI is suggesting that it might leave California over concerns that the state might block its effort to restructure to reduce nonprofit control+kill profit caps.

I'm skeptical.

If OpenAI is in California AT ALL, the state has powers to compel them 🧵
September 9, 2025 at 3:57 AM
How am I just learning about this now???
September 8, 2025 at 9:05 PM
Anthropic settled with authors over copyright violations for $1.5B, which is $500M more than the largest jury verdict on copyright (which was later overturned).

A huge amount of money, but basically in line with my prediction (and <1% of Anthropic's new $183B valuation).
x.com/RobertFreun...
September 5, 2025 at 7:12 PM
Scorching new letter from the CA and DE AGs to OpenAI, each of whom has the power to block the company's restructuring to loosen nonprofit controls.

They are NOT happy about the recent teen suicide and murder-suicide that followed prolonged and concerning interactions with ChatGPT.
September 5, 2025 at 6:38 PM
Anthropic lives to train another day. We'll find out the damage in a few weeks.
x.com/GarrisonLov...
August 27, 2025 at 12:35 AM
This is horrific.
x.com/kashhill/st...
August 26, 2025 at 9:59 PM