Seth Lazar
@sethlazar.org
Philosopher working on normative dimensions of computing and sociotechnical AI safety.

Lab: https://mintresearch.org
Self: https://sethlazar.org
Newsletter: https://philosophyofcomputing.substack.com
Pinned
News and opportunities for philosophers working on normative questions raised by computing:

philosophyofcomputing.substack.com
Normative Philosophy of Computing Newsletter | Seth Lazar | Substack
News and opportunities for anyone interested in analytic philosophy on normative questions raised by computing, from aesthetics to AI ethics.
philosophyofcomputing.substack.com
Reposted by Seth Lazar
In a new paper in our AI & Democratic Freedoms series, Rachel M. Kim, Blaine Kuehnert, @sethlazar.org, Ranjit Singh, & Hoda Heidari propose creating an AI Power Disparity Index, designed to measure and signal the changing distribution of power in the AI ecosystem. knightcolumbia.org/content/the-...
The AI Power Disparity Index: Toward a Compound Measure of AI Actors’ Power to Shape the AI Ecosystem
knightcolumbia.org
September 8, 2025 at 2:44 PM
How will AI agents impact democratic values? Democracies are, for independent reasons, already under acute pressure. Since WWII, Moore's Law and democratisation went up and to the right in lockstep. Not any more.
September 5, 2025 at 4:21 PM
Reposted by Seth Lazar
In the latest essay in our AI & Democratic Freedoms series, @sethlazar.org and Tino Cuéllar (@carnegieendowment.org) discuss how AI agents might affect the realization of democratic values. knightcolumbia.org/content/ai-a...
AI Agents and Democratic Resilience
knightcolumbia.org
September 4, 2025 at 7:35 PM
Reposted by Seth Lazar
"Democracies are weaker than they have been for decades," write Carnegie president Mariano-Florentino Cuéllar and @sethlazar.org for @knightcolumbia.org. "A great wave is coming, and they are ill-prepared."

AI agents could help or hurt. And they won't protect democratic values on their own.
September 4, 2025 at 8:14 PM
@caseynewton.bsky.social, in re an old discussion about AI denialists, hope you’ve caught knightcolumbia.org/events/artif...
Artificial Intelligence and Democratic Freedoms
knightcolumbia.org
April 11, 2025 at 3:59 PM
Reposted by Seth Lazar
🚨 UPCOMING EVENT: Artificial Intelligence and Democratic Freedoms, April 10-11 at @columbiauniversity.bsky.social & online. In collaboration with Senior AI Advisor @sethlazar.org & co-sponsored by the Knight Institute and @columbiaseas.bsky.social. RSVP: knightcolumbia.org/events/artif...
February 28, 2025 at 4:38 PM
New Philosophy of Computing newsletter: share with your philosophy friends. Lots of CFPs, events, opportunities, new papers.

philosophyofcomputing.substack.com/p/normative-...
Normative Philosophy of Computing Newsletter
Welcome to February!
philosophyofcomputing.substack.com
February 25, 2025 at 5:06 AM
Reposted by Seth Lazar
I am a bit bashful about sharing this profile www.thetimes.com/uk/technolog... of me in @thetimes.com, but will do so because it kindly refers to my new book, which is coming out in early March. www.penguin.co.uk/books/460891.... The tech titans pictured seem to be decoration (and not my co-authors).
These Strange New Minds
Stunning advances in digital technology have given us a new wave of disarmingly human-like AI systems. The march of this new technology is set to upturn our economies, challenge our democracies, and r...
www.penguin.co.uk
February 22, 2025 at 2:41 PM
Reposted by Seth Lazar
I spent a few hours with OpenAI's Operator automating expense reports. Most corporate jobs require filing expenses, so Operator could save *millions* of person-hours every year if it gets this right.

Some insights on what worked, what broke, and why this matters for the future of agents 🧵
February 3, 2025 at 6:04 PM
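A rough back-of-envelope for the "millions of person-hours" claim in the post above. Every number here is an illustrative assumption, not a figure from the post:

    # Back-of-envelope check on the "millions of person-hours" claim.
    # All inputs below are illustrative assumptions, not measured figures.
    corporate_workers = 50_000_000   # assumed number of workers who file expenses
    hours_per_year = 2               # assumed hours each spends on expense reports
    automation_share = 0.5           # assumed fraction an agent could take over

    saved_hours = corporate_workers * hours_per_year * automation_share
    print(f"{saved_hours:,.0f} person-hours saved per year")  # 50,000,000

Even with much more conservative assumptions, the total stays in the millions, which is the point of the post.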
Since agents are now on everyone's minds, do check out this tutorial on the ethics of Language Model Agents, from June last year.

Looks at what 'agent' means, how LM agents work, what kinds of impacts we should expect, and what norms (and regulations) should govern them.
LM Agents: Prospects and Impacts (FAccT tutorial)
YouTube video by Seth Lazar
www.youtube.com
January 24, 2025 at 7:29 AM
Reposted by Seth Lazar
We're excited to announce that our upcoming symposium on #AI and democracy w/ @sethlazar.org (4/10-4/11, at @columbiauniversity.bsky.social & online) will feature papers by a highly accomplished group of authors from a wide range of disciplines. Check them out: knightcolumbia.org/blog/knight-...
Knight Institute Symposium on AI and Democratic Freedoms to Feature Leading Scholars and Technologists
knightcolumbia.org
January 23, 2025 at 3:03 PM
January update from the normative philosophy of computing newsletter: new CFPs, papers, workshops, and resources for philosophers working on normative questions raised by AI and computing.
Normative Philosophy of Computing - January
Happy New Year!
mintresearch.org
January 16, 2025 at 6:48 AM
Reposted by Seth Lazar
EVENT: Artificial Intelligence and Democratic Freedoms, 4/10-11, at @columbiauniversity.bsky.social & online. We're hosting a symposium w/ @sethlazar.org exploring the risks advanced #AI systems pose to democratic freedoms and interventions to mitigate them. RSVP: knightcolumbia.org/events/artif...
Artificial Intelligence and Democratic Freedoms
knightcolumbia.org
January 9, 2025 at 9:08 PM
Reposted by Seth Lazar
📢 Excited to share: I'm again leading the efforts for the Responsible AI chapter for Stanford's 2025 AI Index, curated by @stanfordhai.bsky.social. As last year, we're asking you to submit your favorite papers on the topic for consideration (including your own!) 🧵 1/
January 5, 2025 at 5:42 PM
Reposted by Seth Lazar
Turns out we weren't done for major LLM releases in 2024 after all... Alibaba's Qwen just released QvQ, a "visual reasoning model" - the same chain-of-thought trick as OpenAI's o1 applied to running a prompt against an image

Trying it out is a lot of fun: simonwillison.net/2024/Dec/24/...
Trying out QvQ—Qwen’s new visual reasoning model
I thought we were done for major model releases in 2024, but apparently not: Alibaba’s Qwen team just dropped the Apache 2 licensed QvQ-72B-Preview, “an experimental research model focusing on …
simonwillison.net
December 24, 2024 at 8:52 PM
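For context, here is a minimal sketch of "running a prompt against an image" with QvQ, assuming it follows the standard Qwen2-VL interface in Hugging Face transformers (plus the qwen_vl_utils helper package); the image path and prompt are placeholders:

    # Minimal sketch: one image + one prompt through QvQ-72B-Preview.
    # Assumes the standard Qwen2-VL transformers interface; image path
    # and prompt are placeholders.
    from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
    from qwen_vl_utils import process_vision_info

    model_id = "Qwen/QVQ-72B-Preview"
    model = Qwen2VLForConditionalGeneration.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )
    processor = AutoProcessor.from_pretrained(model_id)

    messages = [{
        "role": "user",
        "content": [
            {"type": "image", "image": "photo.jpg"},  # placeholder image
            {"type": "text", "text": "How many birds are in this picture?"},
        ],
    }]

    # Build the chat-formatted prompt and pull out the image inputs.
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text], images=image_inputs, videos=video_inputs,
        padding=True, return_tensors="pt",
    ).to(model.device)

    # The model writes out an extended chain of thought before its answer.
    output_ids = model.generate(**inputs, max_new_tokens=1024)
    print(processor.batch_decode(
        output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
    )[0])

The generation budget matters here: the visual chain-of-thought can run long before the model commits to an answer.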
Reposted by Seth Lazar
Some of my thoughts on OpenAI's o3 and the ARC-AGI benchmark

aiguide.substack.com/p/did-openai...
Did OpenAI Just Solve Abstract Reasoning?
OpenAI’s o3 model aces the "Abstraction and Reasoning Corpus" — but what does it mean?
aiguide.substack.com
December 23, 2024 at 2:38 PM
Reposted by Seth Lazar
OpenAI skips o2, previews o3 scores, and they're truly crazy. Huge progress on the few benchmarks we think are truly hard today. Including ARC-AGI.
RIP to people who say any of "progress is done," "scale is done," or "LLMs can't reason."
2024 was awesome. I love my job.
December 20, 2024 at 6:08 PM
Reposted by Seth Lazar
OpenAI's o3: The grand finale of AI in 2024
A step change as influential as the release of GPT-4. Reasoning language models are the current and next big thing.

I explain:
* The ARC prize
* o3 model size / cost
* Dispelling training myths
* Extreme benchmark progress
o3: The grand finale of AI in 2024
A step change as influential as the release of GPT-4. Reasoning language models are the current big thing.
buff.ly
December 20, 2024 at 11:34 PM
I'm not seeing (here) much discussion of o3. If you are, point me to who's on here that I'm missing? If you're not: just registering that o3's performance on SWE-bench verified is *bananas*, and likely to have massive impacts in 2025.
December 26, 2024 at 6:04 AM
Busy shopping day in Causeway Bay (long exposures handheld with Spectre App)
December 23, 2024 at 3:04 AM
Feeling good (after o3) about some of the bets made in these papers… Human-level software agents now seem nailed on for the near term.
Two papers on anticipating and evaluating AI agent impacts now ready for (private) comments: if you're interested in how language agents might reshape democracy, or in how *platform agents* might intensify the worst features of the platform economy (but could also fix it), lmk.
December 21, 2024 at 5:30 AM
Two papers on anticipating and evaluating AI agent impacts now ready for (private) comments: if you're interested in how language agents might reshape democracy, or in how *platform agents* might intensify the worst features of the platform economy (but could also fix it), lmk.
December 20, 2024 at 8:28 AM