Cameron Domenico Kirk-Giannini
@cdkg.bsky.social
Language 🗣️💬, AI 🤖👾, social philosophy 🏳️‍🌈🏳️‍⚧️, and religion 😇😈 at Rutgers University.
Super excited to finally be able to share a project I've been working on for quite some time — a new paper on the Singularity Hypothesis! We argue that there are more good arguments for it and fewer good arguments against it than a lot of philosophers assume.

philpapers.org/archive/KIRR...
philpapers.org
July 16, 2025 at 3:38 PM
Philosophers and AI folks — I'm writing a paper on the singularity hypothesis, and I'm looking for some recent (i.e. since late 2024) expressions of skepticism about it from philosophers or ML folks that I can quote. The more well known the person, the better! Any ideas?
June 3, 2025 at 10:59 AM
Social philosophers! Check out this short new paper in which I revisit my dilemmatic account of gaslighting and think about what kind of evidence should lead us to doubt our epistemic competence in different domains.

philpapers.org/rec/KIRGAE
Cameron Domenico Kirk-Giannini, Gaslighting and Epistemic Competence - PhilPapers
Anti-intentionalist, purely epistemic accounts of gaslighting that center its dilemmatic structure have a range of attractive features. However, they appear to face an overgeneration problem: if there...
philpapers.org
May 4, 2025 at 2:01 PM
Excited to share a new review paper I wrote with William D'Alessandro about the range of exciting philosophical and technical work currently being done on AI safety! Forthcoming at Philosophy Compass.

philpapers.org/archive/DALA...
philpapers.org
April 30, 2025 at 2:42 PM
Third, in "AI safety: A climb to Armageddon?" Herman Cappelen, Josh Dever, and John Hawthorne ask a question that gets far too little attention in AI safety: Could the work we're doing simply be ensuring that safety failures will be worse when they occur?

link.springer.com/article/10.1...
AI safety: a climb to Armageddon? - Philosophical Studies
This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, th...
link.springer.com
March 7, 2025 at 5:26 AM
Second, in "Off-Switching Not Guaranteed," Sven Neth describes a number of important problems for Stuart Russell's idea of provably beneficial AI.

link.springer.com/article/10.1...
Off-switching not guaranteed - Philosophical Studies
Hadfield-Menell et al. (2017) propose the Off-Switch Game, a model of Human-AI cooperation in which AI agents always defer to humans because they are uncertain about our preferences. I explain two rea...
link.springer.com
March 7, 2025 at 5:25 AM
First, in "Bias, Machine Learning, and Conceptual Engineering," Rachel Rudolph and colleagues explore the connections between LLM training and conceptual engineering, with special attention to questions of bias.

link.springer.com/article/10.1...
Bias, machine learning, and conceptual engineering - Philosophical Studies
Large language models (LLMs) such as OpenAI’s ChatGPT reflect, and can potentially perpetuate, social biases in language use. Conceptual engineering aims to revise our concepts to eliminate such bias....
link.springer.com
March 7, 2025 at 5:24 AM
Excited to share *three* important new papers from the special issue on AI safety!
March 7, 2025 at 5:24 AM
It's finally out! 👉 Click to find out whether YOUR AI assistant is a moral patient!

In all seriousness, though, this is an important project and I hope it helps advance discussion of the possible moral properties of artificial systems.

link.springer.com/article/10.1...
AI wellbeing - Asian Journal of Philosophy
Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little ...
link.springer.com
February 1, 2025 at 10:03 PM
My paper "How to Solve the Gender Inclusion Problem" is now typeset and officially citable!

www.cambridge.org/core/journal...
How to Solve the Gender Inclusion Problem | Hypatia | Cambridge Core
www.cambridge.org
January 24, 2025 at 2:05 PM
Excited to share this paper by Christian Tarsney from the special issue on AI safety I'm editing. It defends a useful new account of deception and manipulation in AI systems.

link.springer.com/article/10.1...
Deception and manipulation in generative AI - Philosophical Studies
Large language models now possess human-level linguistic abilities in many contexts. This raises the concern that they can be used to deceive and manipulate on unprecedented scales, for instance sprea...
link.springer.com
January 22, 2025 at 5:37 PM
By now you've probably heard about AI safety — but have you ever wondered what AI safety actually *is*, or how it's related to AI ethics?

Well, you're in luck! Jacqueline Harding and I have a new paper answering these questions.

philpapers.org/archive/HARW...
philpapers.org
January 13, 2025 at 7:26 PM
Philosophers and AI folks — I'm excited to share a new paper on AI and catastrophic risk, coauthored with Adam Bales and Bill D'Alessandro, which is now forthcoming at Phil Compass!

philpapers.org/rec/BALAIA-5
Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini, Artificial Intelligence: Argument...
Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two...
philpapers.org
January 24, 2024 at 8:11 PM
I wrote a short explainer-type piece on the Turing Test with my colleague Simon Goldstein!
AI is closer than ever to passing the Turing test for ‘intelligence’. What happens when it does?
The Turing test, first proposed in 1950 by Alan Turing, was framed as a test that could supposedly tell us whether an AI system could ‘think’ like a human.
theconversation.com
October 16, 2023 at 10:04 PM
Hello, world!
October 11, 2023 at 8:42 PM