Transformer
@transformernews.ai
A publication about the power and politics of transformative AI. Subscribe for free: http://transformernews.ai/subscribe
Here's a startling sci-fi conceit: what if AIs misbehave because they're trained on all our stories about misbehaving AI? What if articles on the threat of misaligned AI are teaching AI to become misaligned? And what if this becomes a severe problem for all of us pretty soon?
December 12, 2025 at 6:04 PM
"As crunch time for AI approaches, those at the wheel seem perfectly happy driving a clown car." @shakeelhashim.com the dumbest timeline for AI, plus GPT 5.2, OpenAI's Disney deal and more in this week's Transformer Weekly.
www.transformernews.ai/p/were-in-th...
We’re in the dumbest timeline
Transformer Weekly: Trump signs EO, Hochul guts the RAISE Act, and GPT-5.2 launches
www.transformernews.ai
December 12, 2025 at 4:40 PM
EXCLUSIVE: New York’s governor is trying to turn the RAISE Act into an SB 53 copycat.

Gov. Kathy Hochul is proposing to strike the entire text of the RAISE Act, replacing it with verbatim language from SB 53, sources tell Transformer.

Read more: www.transformernews.ai/p/new-york-g...
New York’s governor is trying to turn the RAISE Act into an SB 53 copycat
EXCLUSIVE: Gov. Kathy Hochul is proposing to strike the entire text of the RAISE Act, replacing it with verbatim language from SB 53, sources tell Transformer.
www.transformernews.ai
December 11, 2025 at 7:49 PM
It would be sad, and almost amusingly ironic, if the ways we express our fears about AI were the very thing that ended up making them come true.
Why AI reading science fiction could be a problem
The theory that we’re accidentally teaching AI to turn against us
www.transformernews.ai
December 9, 2025 at 6:26 PM
Sam Altman has declared a "code red" over at OpenAI.
And the Future of Life Institute has refused to give any lab a mark higher than "C+" on its latest AI Safety Index.
There's all this, and much more, in our latest weekly briefing.
December 6, 2025 at 1:33 PM
In today's Transformer Weekly: Why the apparent lack of safety testing on DeepSeek's latest model speaks to bigger problems, plus preemption’s out of the NDAA, OpenAI’s ‘code red,’ Anthropic’s IPO prep and more: www.transformernews.ai/p/the-proble...
The problem with DeepSeek
Transformer Weekly: Preemption’s out of the NDAA, OpenAI’s ‘code red,’ and Anthropic’s IPO prep
www.transformernews.ai
December 5, 2025 at 4:33 PM
By building their own intellectual ecosystem, researchers worried about existential AI risk shed academia's baggage — and, perhaps, some of its strengths
The perils of AI safety’s insularity
By building their own intellectual ecosystem, researchers worried about existential AI risk shed academia's baggage — and, perhaps, some of its strengths
www.transformernews.ai
December 4, 2025 at 6:06 PM
Well, the push for federal preemption of state AI laws has failed yet again.

And the debate around it has shown that a policy of little-to-no AI regulation is a political nonstarter: anyone who's pushed for it has triggered an uproar from across the political spectrum.
December 4, 2025 at 10:14 AM
"The faster China diffuses AI, the faster a range of risks could materialize." Beijing's plan to embed AI across society within a year could backfire spectacularly, writes @carnegieendowment.org's Scott Singer
How China’s AI diffusion plan could backfire
Opinion: Scott Singer argues that the country’s plan to embed AI across all facets of society could create huge growth — and accelerate social unrest
www.transformernews.ai
December 3, 2025 at 4:19 PM
Anthropic is set to publish its whistleblowing policy for employees imminently — likely as soon as this week — making it the second major AI company to publicly reveal how it will handle internal whistleblowing. www.transformernews.ai/can-ai-embra...
Can AI embrace whistleblowing?
As Anthropic prepares to publish its whistleblowing policy, can the industry make the most of protecting those who speak out?
www.transformernews.ai
December 2, 2025 at 4:32 PM
AI safety might just find its future as a mass political movement. Indeed, Cathy Rogers from Social Change Lab tells us that it could even take inspiration from the climate movement — both are, after all, concerned with “things which are too terrifying to contemplate.”
November 29, 2025 at 4:41 PM
A survey of frontier lab employees has found that 100% (!) of respondents are either “Not Confident At All” or “Not Very Confident” that their concerns would be “understood and acted upon by the government.”

Which isn't really good for AI safety, Abra Ganz and Karl Koch argue.
November 29, 2025 at 1:33 PM
The AI safety movement has historically worked mostly behind the scenes. But in the past two years, some have begun to adopt strategies more similar to those favored by other causes, such as climate advocacy: attempting to build a mass movement.
Will AI safety become a mass movement?
Some AI safety activists think the community should borrow from the climate playbook and build broad public appeal — but not everyone agrees
www.transformernews.ai
November 28, 2025 at 4:10 PM
"The ability to report risks should not depend on charity and individuals choosing to risk losing their jobs and livelihoods." SB 53 protects AI whistleblowers, but places the greatest burden employees, not labs write @abra_ganz and @AIWI_Official's Karl Koch
www.transformernews.ai/p/sb-53-prot...
SB 53 protects whistleblowers in AI — but asks a lot in return
Opinion: Abra Ganz and Karl Koch argue that whistleblower protections in SB 53 aren’t good enough on the face of it — but how the state chooses to interpret the law could turn that around
www.transformernews.ai
November 27, 2025 at 2:26 PM
Google’s much-anticipated Gemini 3 Pro finally appeared last week — it immediately became the world’s most powerful AI model.

It currently ranks #1 on LMArena, where it outperformed Gemini 2.5 Pro, Claude Sonnet 4.5, and GPT-5.1 on 19 math, science, multimodal, and agentic benchmarks.
November 26, 2025 at 7:04 PM
The quest for federal AI preemption – allowing the US government to override state AI laws – is back.
November 25, 2025 at 6:03 PM
Focusing on AI child safety is vital — we shouldn't even need to say why — but it could also be a fulcrum for AI safety more generally, since the risks children face simultaneously threaten us all.
November 21, 2025 at 7:23 PM
Advocates for federal preemption of state AI laws have redoubled their efforts. But this time, @shakeelhashim.com argues, their desperation is palpable.

Read more in today's Transformer Weekly: www.transformernews.ai/p/ai-preempt...
Preemption isn’t looking any better second time round
Transformer Weekly: Gemini 3 wows, GAIN AI’s not looking good, and OpenAI drops GPT-5.1-Codex-Max
www.transformernews.ai
November 21, 2025 at 6:54 PM
In certain parts of Silicon Valley, “safety” has become a dirty word, on par with “regulation” itself. Either you want to accelerate AI as quickly and carelessly as possible, or you're an enemy of progress.

But what if AI safety could be made profitable?
November 21, 2025 at 6:04 PM
A leaked Trump executive order would create an "AI Litigation Task Force" to fight state AI regulations.

Read the full EO here:
Exclusive: Here's the draft Trump executive order on AI preemption
The EO would establish an “AI Litigation Task Force" to challenge state AI laws
www.transformernews.ai
November 20, 2025 at 1:48 AM
Child safety in AI has overwhelming public support and is driving legislative action. These measures could establish broader transparency and auditing requirements that address frontier risks for everyone. Read more:
Why pressure on AI child safety could also address frontier risks
Keeping kids safe is a priority for legislators globally — and might increase attention on other risks, too
www.transformernews.ai
November 18, 2025 at 5:39 PM
Interest in AI safety has waxed and waned dramatically. Earlier this year, JD Vance told the AI Action Summit: “I’m not here to talk about AI safety.”
So how do we do a good job of AI safety policy in a world that’s just not that excited about it? www.transformernews.ai/p/doing-ai-s...
Doing AI safety policy when governments aren’t interested
Opinion: Jess Whittlestone argues that there are still ways to keep AI safety policy on the table even when governments don’t prioritize it
www.transformernews.ai
November 13, 2025 at 9:00 AM
"How do we do a good job of AI safety policy in a world that’s just not that excited about AI safety policy?" asks the Centre for Long Term Resilience's Jess Whittlestone. www.transformernews.ai/p/doing-ai-s...
Doing AI safety policy when governments aren’t interested
Opinion: Jess Whittlestone argues that there are still ways to keep AI safety policy on the table even when governments don’t prioritize it
www.transformernews.ai
November 12, 2025 at 5:28 PM
Discussions of AI safety often get too wrapped up in debates about whether "AGI" is imminent.

But that focus can lead people to ignore a fundamental issue: AI doesn’t need to be general to be dangerous.
November 11, 2025 at 9:39 PM
"How we choose to act now will determine whether our shared basis for reality — what we see and hear — remains trustworthy." Witness's @samgregory.bsky.social on Sora's threat to visual truth:
Sora Is Here. The window to save visual truth is closing
Opinion: Sam Gregory argues that generative video is undermining the notion of a shared reality, and that we need to act before it’s lost forever
www.transformernews.ai
November 5, 2025 at 5:18 PM