Garrison Lovely
@garrisonlovely.bsky.social
Writing a book on AI+economics+geopolitics for Nation Books.
Covers: The Nation, Jacobin. Bylines: NYT, Nature, Bloomberg, BBC, Guardian, TIME, The Verge, Vox, Thomson Reuters Foundation, + others.
But the judge all but ruled that if Musk had standing, she would have blocked the restructuring. The AGs definitely have standing, but chose not to challenge it, and it now seems very unlikely that she'll undo this deal. api.omarshehata.me/substack-pr...
October 28, 2025 at 10:59 PM
But the media and public framed it as a huge win for opponents of the restructuring. Even OAI employees told me they were really happy that the nonprofit would stay in control. www.obsolete.pub/p/four-pred...
October 28, 2025 at 10:59 PM
It's easy to see these numbers and write them off as ridiculous, but I think they're in the ballpark of what OAI founders actually expected when they wrote things like "it may be difficult to know what role money will play in a post-AGI world."
October 28, 2025 at 10:59 PM
In this world, Microsoft only captures <1% of the total value, with the VAST majority going to the nonprofit.
October 28, 2025 at 10:59 PM
UVA economist Anton Korinek has used standard economic models to estimate the value of AGI at between $1.25 and $71 quadrillion, and assumes that OAI will capture 0.3 × 0.05 (i.e., 1.5%) of that, which comes out to a $30.9T valuation for OAI.

www.genaiforecon.org/ValueAGI.pdf
October 28, 2025 at 10:59 PM
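For what it's worth, the arithmetic checks out: a $30.9T valuation at a 0.3 × 0.05 = 1.5% capture share implies a total AGI value of about $2.06 quadrillion, which sits inside Korinek's range. A quick sketch (the implied total is my back-calculation, not a figure quoted from the paper):

```python
# Back-of-the-envelope check on the Korinek-derived $30.9T figure.
# The capture share and valuation are from the post; the implied total
# AGI value is a back-calculation, not a number taken from the paper.

capture_share = 0.3 * 0.05          # OAI's assumed share of AGI value: 1.5%
oai_valuation = 30.9e12             # $30.9 trillion

implied_agi_value = oai_valuation / capture_share
print(f"Implied AGI value: ${implied_agi_value / 1e15:.2f} quadrillion")

# Falls within Korinek's estimated range of $1.25-71 quadrillion
assert 1.25e15 <= implied_agi_value <= 71e15
```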
Zvi quotes Matt Levine estimating the value of the profit cap at up to $272B, back when OAI was worth only $157B (it's now $500B).

Obviously, this all depends on how well OAI actually does. But, the profit caps ONLY bite if OAI does very well.
October 28, 2025 at 10:59 PM
Rationalist blogger Zvi Mowshowitz has been consistently calling this the greatest theft in human history. He's argued that the control premium alone should give the nonprofit 20-40% of the PBC, before you even account for the value of profit beyond the caps.
x.com/TheZvi/stat...
October 28, 2025 at 10:59 PM
But it does give the nonprofit a 26% stake in the for-profit PBC, plus unspecified additional equity if OAI >10xes after 15 years.

This is better than no profit caps and just the 26% stake, but still WAY worse than having the profit caps, at least in the worlds where OAI succeeds.
October 28, 2025 at 10:59 PM
Overall, if you're mainly worried about OAI recklessly pursuing some catastrophically risky AI development, then there are at least some on-paper governance measures hedging against that. Here are details from the new articles of incorporation:
October 28, 2025 at 10:59 PM
However, Page replies that the foundation board is almost identical to the PBC board, "kinda gutting this power." CMU prof Zico Kolter is the only member of the nonprofit board who's not also on the PBC board.
October 28, 2025 at 10:59 PM
Here's Todor Markov, another ex-OAI employee, highlighting the fact that the OAI Foundation (i.e. nonprofit) can replace the directors of the for-profit, which is more than he expected to get.
x.com/todor_m_mar...
October 28, 2025 at 10:59 PM
There are some important commitments the DE AG announced; here's former OAI employee and co-organizer of the Not for Private Gain letters, Page Hedley, highlighting some of them.
x.com/michaelhpag...
October 28, 2025 at 10:59 PM
4. cont'd: OAI did announce a $50M nonprofit fund in Sept that seemed more along these lines.

The new OAI Foundation is starting with a $25B commitment focused on health and AI resilience. We'll have to wait and see on this one.
October 28, 2025 at 10:59 PM
In May, I made 4 predictions in Obsolete about how OpenAI's restructuring would go. Today, the restructuring as a for-profit public benefit corporation (PBC) is moving forward with the blessing of the DE and CA AGs. So, how'd I do?

api.omarshehata.me/substack-pr...
October 28, 2025 at 10:59 PM
I came out of book hibernation to give you perhaps the final piece in my series covering OpenAI's restructuring.

OpenAI is framing this as a neatly packaged fait accompli, but it's actually a tensely negotiated settlement, with lots of conditions. 🧵
x.com/OpenAI/stat...
October 28, 2025 at 10:58 PM
I think regular people should feel more empowered to voice their preferences on things that affect them, and ASI, if it is ever built, definitely would!

Here's a relevant sneak preview of my book (out in May!):
October 23, 2025 at 8:38 PM
IMO there are 3 big problems with Dean's post:
1. I really don't think it's a reasonable prediction of how this statement would be operationalized
2. Superintelligence would inherently concentrate enormous power in whatever controls it
x.com/deanwball/s...
October 23, 2025 at 8:38 PM
Damn I love Wikipedia.
October 23, 2025 at 8:12 PM
imagine torching your decades-in-the-making rep as a lib billionaire to get some free conference security.

(TBC I don't think that's why he did it...)
October 16, 2025 at 7:14 PM
For instance, believing that OAI really does want regulation, but just wants it to happen at the federal level — despite the fact that OAI publicly called for preemption of state-level bills with no binding replacement, something that has literally never happened before.
October 10, 2025 at 6:55 PM
Wow, OpenAI's head of mission alignment just spoke out against the way the company has been using subpoenas to intimidate and disrupt political opponents.

A surprising number of OAI rank & file have no idea what their leadership is doing to kill regulation.
x.com/jachiam0/st...
October 10, 2025 at 6:55 PM
It's also openly called for the federal govt to preempt state-level AI regulations with no binding replacement, an unprecedented move and not something you'd expect from someone who thinks that superintelligence is the biggest threat to humanity.
api.omarshehata.me/substack-pr...
October 8, 2025 at 4:20 PM
Well, now I need to update my book.

To my knowledge, this is the first time Sam Altman hasn't downplayed or dismissed AI existential risk since early 2023.

TBC, I think it's good of Altman to say this if that's what he actually believes...
x.com/ai_ctrl/sta...
October 8, 2025 at 4:20 PM
I wrote about this in my Current Affairs essay on McKinsey: www.currentaffairs.org/news/2019/0...
October 7, 2025 at 8:07 PM
This reminds me of arguments that McKinsey would make to justify working for Gulf autocracies. However, academic research has found that the opposite tends to happen: companies abandon human rights to conform to their wealthy clients.
x.com/ShakeelHash...
October 7, 2025 at 8:07 PM