Atoosa Kasirzadeh
@atoosakz.bsky.social
societal impacts of AI | assistant professor of philosophy & software and societal systems at Carnegie Mellon University & AI2050 Schmidt Sciences early-career fellow | system engineering + AI + philosophy | https://kasirzadeh.org/
Thanks, Stephen! BTW, the latest version of the paper is here and open access: link.springer.com/article/10.1...
I recommend reading that
Two types of AI existential risk: decisive and accumulative - Philosophical Studies
October 14, 2025 at 7:10 PM
Huge congratulations, Aylin!!!
October 11, 2025 at 4:55 PM
Huge congratulations, Aylin!!!
Reposted by Atoosa Kasirzadeh
Great work led by @andyliu.bsky.social and collaborators:
@kghate.bsky.social, @monadiab77.bsky.social, @daniel-fried.bsky.social, @atoosakz.bsky.social
Preprint: www.arxiv.org/abs/2509.25369
Generative Value Conflicts Reveal LLM Priorities
October 2, 2025 at 6:37 PM
Reposted by Atoosa Kasirzadeh
Kasirzadeh also has a really insightful, heartbreaking thread about what "normal" hides here: bsky.app/profile/atoo...
Normal isn't a fact; it's a perception built from repetition & power. My brother can call a night of missile fire normal simply because he's used to it. To an outsider, the scene is clearly abnormal. Judgements of normality rest on thin, shifting ground. History shows how quickly baselines move. 5/n
September 17, 2025 at 9:30 PM
Reposted by Atoosa Kasirzadeh
Anyway, big fan of the 3rd option Sigal presents: @atoosakz.bsky.social's warning of a "gradual accumulation of smaller, seemingly non-existential, AI risks" until catastrophe.
A warning that suggests AI safetyists should take AI ethics & sociology much more seriously! arxiv.org/pdf/2401.07836
September 17, 2025 at 3:39 PM
We are very excited to have you, Bálint!
August 19, 2025 at 10:28 AM
You can read AI as normal technology here: knightcolumbia.org/content/ai-a...
June 15, 2025 at 5:59 AM
But because judgments of normality are too subjective—and because the world we live in, entangled with advanced AI systems, is anything but normal—the metaphor can conceal more than it reveals. So here is my proposal: keep the demystification project alive; drop the label “normal”. 10/n
June 15, 2025 at 5:59 AM
—I’ve pushed back against the “super-intelligence will kill us all” narrative myself for years.
(See, for example: Two types of AI existential risk: decisive and accumulative: link.springer.com/article/10.1... and AI safety for everyone: www.nature.com/articles/s42...) 9/n
June 15, 2025 at 5:58 AM
A system that decides who lives or dies is nothing like a household appliance. I admire the scholars Arvind Narayanan and Sayash Kapoor, who coined the phrase, and I share their wish to redefine the notion of “normal” to move the AI governance debate beyond extreme doom. 8/n
June 15, 2025 at 5:57 AM
The metaphor of "AI as normal technology" appeals because it demystifies in the first place. But the label "normal" collapses the moment AI directs drones, misidentifies civilians, rewrites battlefield doctrine, or nudges users toward self-harm. 7/n
June 15, 2025 at 5:56 AM
Seat belts seemed fussy in 1960; masks were not normal in 2019; both now signal basic care. Those who control information circulation or policy can freeze a narrative in place, branding a live hazard normal by sheer repetition. 6/n
June 15, 2025 at 5:55 AM
Normal isn't a fact; it's a perception built from repetition & power. My brother can call a night of missile fire normal simply because he's used to it. To an outsider, the scene is clearly abnormal. Judgements of normality rest on thin, shifting ground. History shows how quickly baselines move. 5/n
June 15, 2025 at 5:55 AM
A toaster browns bread & stops; an agentic chatbot can keep learning and might suddenly write ransomware. Consumer devices have fixed functions & well-tested safety rules. Self-modifying agentic systems don't. Using the same label for both narrows the questions we ask about oversight & liability. 4/n
June 15, 2025 at 5:53 AM