Vincent Conitzer
@conitzer.bsky.social
AI professor. Director, Foundations of Cooperative AI Lab at Carnegie Mellon. Head of Technical AI Engagement, Institute for Ethics in AI (Oxford). Author, "Moral AI - And How We Get There."
https://www.cs.cmu.edu/~conitzer/
Happy New Year everyone!
December 31, 2025 at 1:45 PM
"explain retirement accounts to me like I'm a kindergarten teacher" -- I'm sure kindergarten teachers love being talked to like this
December 30, 2025 at 7:06 PM
so close to an infinite money making scheme but that stupid Planck time gets in the way
December 28, 2025 at 9:46 PM
photography pro tip
December 28, 2025 at 3:00 AM
the archetype of the loud downstairs neighbor
December 23, 2025 at 11:00 PM
AI Overview, in a single response, disqualifies itself as a coach, a financial advisor, and a screenwriter.
December 21, 2025 at 12:22 AM
our upcoming AAAI paper on cooperative game theory in multi-winner voting, via an automated reasoning approach (using mixed-integer linear programming rather than satisfiability); led by Emin Berker and Emanuel Tewolde!
arxiv.org/abs/2512.16895
On the Edge of Core (Non-)Emptiness: An Automated Reasoning Approach to Approval-Based Multi-Winner Voting
Core stability is a natural and well-studied notion for group fairness in multi-winner voting, where the task is to select a committee from a pool of candidates. We study the setting where voters eith...
arxiv.org
December 19, 2025 at 10:14 PM
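[The core condition the paper studies can be stated concretely: a size-k committee W is blocked by a coalition S of voters together with a candidate set T if |T| ≤ (|S|/n)·k and every voter in S approves strictly more of T than of W; W is in the core if no such pair exists. Below is a minimal brute-force checker for tiny instances, my own illustrative sketch; the paper's actual approach encodes such questions as mixed-integer linear programs rather than enumerating coalitions.]

```python
from itertools import combinations

def in_core(approvals, candidates, committee, k):
    """Brute-force core check for approval-based multi-winner voting.

    approvals: list of sets; approvals[i] is the set of candidates
    voter i approves. A coalition S blocks with a candidate set T if
    |T| <= floor(|S| * k / n) and every voter in S approves strictly
    more candidates in T than in the committee.
    """
    n = len(approvals)
    committee = set(committee)
    for s in range(1, n + 1):              # coalition size
        t_max = (s * k) // n               # seats the coalition "deserves"
        if t_max == 0:
            continue
        for S in combinations(range(n), s):
            for t in range(1, t_max + 1):
                for T in combinations(candidates, t):
                    T = set(T)
                    if all(len(approvals[i] & T) > len(approvals[i] & committee)
                           for i in S):
                        return False, (S, tuple(sorted(T)))  # blocking pair
    return True, None

# Two disjoint voter blocs of equal size, k = 2: committee {a, c}
# represents both blocs and is in the core, while {a, b} ignores the
# second bloc and is blocked by voters 2 and 3 together with {c}.
approvals = [{"a", "b"}, {"a", "b"}, {"c", "d"}, {"c", "d"}]
print(in_core(approvals, ["a", "b", "c", "d"], {"a", "c"}, k=2))  # (True, None)
print(in_core(approvals, ["a", "b", "c", "d"], {"a", "b"}, k=2))  # (False, ((2, 3), ('c',)))
```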
The videos from this summer's Cooperative AI @coop-ai.bsky.social retreat are online now! Below is mine on "Cooperative AI via Simulations"
www.youtube.com/watch?v=I8ec...
Cooperative AI via Simulations by Vincent Conitzer
YouTube video by Cooperative AI Foundation
www.youtube.com
December 18, 2025 at 7:34 PM
family relationships logic
December 17, 2025 at 11:54 PM
"The Mechanics of Time & Self"
December 16, 2025 at 7:12 PM
The case against getting new soil for the yard.
December 15, 2025 at 1:41 AM
Somehow (Copilot?) "us" became "US" in the title in this version, resulting in something very different (and probably more clicks...).
www.msn.com/en-us/news/w...
MSN
www.msn.com
December 9, 2025 at 10:10 PM
still trying to figure out what I can actually conclude from this
December 7, 2025 at 6:39 PM
Excited to present our position paper in the 4:30pm poster session at @neuripsconf.bsky.social today! We discuss the problem that AI systems may suspect they're being tested and act differently as a result, & how to approach this with game theory.
arxiv.org/abs/2508.14927
neurips.cc/virtual/2025...
AI Testing Should Account for Sophisticated Strategic Behaviour
This position paper argues for two claims regarding AI testing and evaluation. First, to remain informative about deployment behaviour, evaluations need to account for the possibility that AI systems und...
arxiv.org
December 3, 2025 at 5:33 PM
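[As a toy illustration of why this matters (my own decision-theoretic sketch, not the model in the paper): suppose a strategic system assigns probability p to the current episode being a test, gains R from misbehaving in deployment, and loses P if caught misbehaving in a test. It then misbehaves exactly when p falls below R/(R+P), so evaluations stay informative only while the system cannot reliably tell tests from deployment.]

```python
def misbehaves(p_test: float, reward: float, penalty: float) -> bool:
    """A strategic system misbehaves iff the expected deployment gain
    outweighs the expected penalty from being caught in a test."""
    return (1 - p_test) * reward > p_test * penalty

def deterrence_threshold(reward: float, penalty: float) -> float:
    """Minimum belief 'I am being tested' that deters misbehaviour:
    solve (1 - p) * reward = p * penalty for p."""
    return reward / (reward + penalty)

# If deployment pays 1 and getting caught costs 9, a belief of just
# over 0.1 that this is a test already deters misbehaviour.
print(deterrence_threshold(1, 9))   # 0.1
print(misbehaves(0.05, 1, 9))       # True: the test looks too unlikely
print(misbehaves(0.5, 1, 9))        # False
```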
New paper from our lab, led by undergraduate Xander Heckett: a game-theoretic view of AI safety via debate, where two agents argue for different courses of action and we try to set the rules so that the one that is right will win.
www.arxiv.org/abs/2511.23454
Designing Rules for Choosing a Winner in a Debate
We consider settings where an uninformed principal must hear arguments from two better-informed agents, corresponding to two possible courses of action that they argue for. The arguments are verifiabl...
www.arxiv.org
December 2, 2025 at 6:21 AM
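[A toy version of the design problem (an illustrative sketch of mine, not the rule from the paper): the principal can only spot-check a few of each side's verifiable arguments, and a good rule should make the side whose arguments actually check out tend to win.]

```python
import random

def debate_winner(args_a, args_b, verify, budget, rng=random):
    """Toy debate rule: spot-check up to `budget` arguments per side
    and declare the side with more verified arguments the winner.

    verify(arg) -> bool is taken as ground truth for the arguments
    checked; an agent that pads its case with false claims lowers the
    chance its sampled arguments survive verification.
    """
    def score(args):
        sample = rng.sample(args, min(budget, len(args)))
        return sum(verify(a) for a in sample)
    score_a, score_b = score(args_a), score(args_b)
    if score_a == score_b:
        return "tie"
    return "A" if score_a > score_b else "B"

# Arguments tagged with their (hidden) truth value; A argues honestly,
# B pads its case with false claims, so A wins every spot-check here.
args_a = [("a1", True), ("a2", True), ("a3", True)]
args_b = [("b1", True), ("b2", False), ("b3", False)]
print(debate_winner(args_a, args_b, verify=lambda a: a[1], budget=2))  # "A"
```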
I like its confidence that I would *never* misplace my keys.
December 1, 2025 at 3:50 PM
"what if you asked me a question"
-- some confusion about 'you' and 'me', but the best part is if you recognize the video it brings up
November 30, 2025 at 2:41 PM
Hey I just posted about this earlier. Is this really based on them thinking the guardrails are good enough now?
futurism.com/artificial-i...
OpenAI Restores GPT Access for Teddy Bear That Recommended Pills and Knives
OpenAI has reinstated GPT access to FoloToy, the maker of the AI powered teddy bear which was caught having inappropriate conversations.
futurism.com
November 29, 2025 at 2:00 PM
"Respectively" is a tricky word.
November 28, 2025 at 4:48 PM
Happy Thanksgiving everyone!
November 27, 2025 at 4:31 PM
a common mistake
November 27, 2025 at 12:13 AM
(continuing) I worry too many people think of today's LLM-based chatbots as a good model for studying AGI. Even if they're on the path to AGI, they're not at all what AGI would be like. Here's a chapter I wrote about this recently that I hope is a bit entertaining.
philsci-archive.pitt.edu/26351/
What Would It Look Like to Align Humans with Ants? - PhilSci-Archive
philsci-archive.pitt.edu
November 24, 2025 at 2:08 PM
For this article I weighed in on companies using the word "superintelligence" as it suits them and how that gets in the way of having a clear conversation about benefits and risks. (no paywall) (to be continued)
www.msn.com/en-us/money/...
MSN
www.msn.com
November 23, 2025 at 2:18 PM
simplest explanation
November 22, 2025 at 6:36 PM
ad placement
November 22, 2025 at 2:44 AM