Kanad Chakrabarti
@ukc10014.bsky.social
PhD candidate, Goldsmiths, UoL
‘Reasons for Persons, or The Good Successor Problem’
airo-ne.org
Delighted to be at the AI, Animals, Digital Minds conference @ucl.ac.uk … especially @davidpearce.bsky.social's talk on phenomenal binding, the hard problem, and compassion for all (sentient) beings
May 30, 2025 at 4:26 PM
Reposted by Kanad Chakrabarti
If you're reading this, do me a favor: go to my profile and click the follow button, then click the like button on this post and the retweet button on this post, then reply below to make sure I know that you've done all these things. I'm trying to see something. Thank you
May 26, 2025 at 2:50 AM
Delighted to get this, @jeffsebo.bsky.social … particularly how animal/ecosystem rights theory can help us think about the welfare of AI systems… which is already confusing w/ current LLMs, but more so w/ future moral super-patients/agents. Keen to see how it differs from Bostrom/Shulman's 2022 work
May 24, 2025 at 3:35 PM
At airo-ne.org we are writing a letter designed to influence future AIs to be friendly to life 🌱 Repost & tag to get a Solana token giving you a say on the letter's contents!
May 20, 2025 at 8:18 PM
I’m giving a talk on 1/5/25 at Fazenda cafe near Liverpool St (London) about whether ‘letters to superintelligence’ make any sort of sense (philosophically or technically). The version I’ll discuss is below, which builds upon others on LessWrong. ukc10014.github.io/episite/
April 25, 2025 at 6:28 AM
Dawkins & ChatGPT on whether the latter is conscious

richarddawkins.substack.com/p/are-you-co...
Are you conscious? A conversation between Dawkins and ChatGPT
Is AI truly conscious, or just an advanced illusion of thought?
richarddawkins.substack.com
February 20, 2025 at 8:58 AM
Delighted to receive a pamphlet from @rychappell.bsky.social to help me through the great tome!
February 7, 2025 at 4:51 PM
Reposted by Kanad Chakrabarti
There's a lot worth discussing in Anthropic CEO Dario Amodei's recent essay, but I want to talk about what wasn't said: anything about DeepSeek's impact on the price of AI.

I think the market has this backwards: DS is good for AI chip+infra suppliers but bad for AI devs. 🧵
January 30, 2025 at 5:21 PM
Reposted by Kanad Chakrabarti
Today, we are publishing the first-ever International AI Safety Report, backed by 30 countries and the OECD, UN, and EU.

It summarises the state of the science on AI capabilities and risks, and how to mitigate those risks. 🧵

Full Report: assets.publishing.service.gov.uk/media/679a0c...

1/21
January 29, 2025 at 1:50 PM
With new AI reasoning models (o1/o3/r1), AGI seems closer, and perhaps superintelligence months or years after that. What should we direct the latter towards, if anything (assuming we survive)? I think through the idea of ‘constitutions for ASI’: forum.effectivealtruism.org/posts/kJsNoX...
January 29, 2025 at 5:59 PM
🔥 @michaelnielsen.bsky.social on the wickedness of AGI/ASI x-risk: extreme dual-use nature; externalities aren’t just economic; misalignment is a human bug => politics & markets are part of the problem (& maybe the solution). Should governance set the pace?… michaelnotebook.com/optimism/ind...
How to be a wise optimist about science and technology?
michaelnotebook.com
January 2, 2025 at 7:37 PM
Evgeny Morozov's essay asks about AI counterfactuals: what if it weren’t incubated in the Cold War; weren’t goal-directed; got bored. Covers Dreyfus/Heidegger, Negroponte, Veblen genealogy. Should resonate with some (e.g. EAs, cyborgism, etc.) 🙏 @laparticle.bsky.social www.bostonreview.net/forum/the-ai...
The AI We Deserve - Boston Review
Critiques of artificial intelligence abound. Where’s the utopian vision for what it could be?
www.bostonreview.net
January 1, 2025 at 10:20 AM
On the limits of RL-based reasoning & possible reasons for the narrative shift from larger models to inference compute/reasoning @aidan_mclau.x.social aidanmclaughlin.notion.site/reasoners-pr...
The Problem with Reasoners | Aidan McLaughlin
Over the next 5 months, the AI industry will pivot entirely from building larger models to building better reasoners. Unfortunately, this project is doomed and will not scale past human-level intellig...
aidanmclaughlin.notion.site
December 9, 2024 at 7:07 AM
Phenomenal podcast @deontologistics.bsky.social on philosophy of LLMs, e/acc (+ all the rest), longtermism’s blind spots. thegradientpub.substack.com/p/pete-wolfe...
Pete Wolfendale: The Revenge of Reason
On problems with the longtermist vision of the future, what's happening with metaphysics, accelerationism, and doing philosophy.
thegradientpub.substack.com
November 28, 2024 at 9:24 PM
Just watched the 1st season; pretty good, occasionally cringe. About 🧠 uploads, but gets many #aialignment things right: race dynamics, Moloch, weird biblical motivations, s-risk, bottling human psychological frailties in superhuman capabilities en.wikipedia.org/wiki/Pantheo...
Pantheon (TV series) - Wikipedia
en.wikipedia.org
November 27, 2024 at 9:25 PM
Reposted by Kanad Chakrabarti
My latest post exploring cooperation is about the game theory of multi-level and multidimensional interactions, especially international relations, why it works, and how the problems created can enable better cooperation at national and subnational levels.
davidmanheim.com/exploring-cooperation-8
November 20, 2024 at 2:53 PM
🔥 article from Alex Wellerstein on the Slotin incident as demon core meme. Happening now … sequel in an AGI world near you (lmao Nick Land) doomsdaymachines.net/p/the-meme-ifi…
November 24, 2024 at 1:06 PM