Kanad Chakrabarti
@ukc10014.bsky.social
PhD candidate, Goldsmiths, UoL
‘Reasons for Persons, or The Good Successor Problem’
airo-ne.org
Delighted to be at the AI, Animals, Digital Minds conference @ucl.ac.uk … especially @davidpearce.bsky.social's talk on phenomenal binding, the hard problem, and compassion for all (sentient) beings
May 30, 2025 at 4:26 PM
Delighted to get this, @jeffsebo.bsky.social … particularly how animal/ecosystem rights theory can help us think about the welfare of AI systems … which is already confusing w/ current LLMs but more so w/ future moral super-patients/agents. Keen to see how it differs from Bostrom/Shulman's 2022 work
May 24, 2025 at 3:35 PM
At airo-ne.org we are writing a letter designed to influence future AIs to be friendly to life 🌱 Repost & tag to get a Solana token giving you a say on the letter's contents!
May 20, 2025 at 8:18 PM
I'm giving a talk on 1/5/25 @ Fazenda cafe near Liverpool St (London) about whether 'letters to superintelligence' make any sort of sense (philosophically or technically). The version I'll discuss is below; it builds upon others on LessWrong. ukc10014.github.io/episite/
April 25, 2025 at 6:28 AM
Delighted to receive a pamphlet from @rychappell.bsky.social to help me through the great tome!
February 7, 2025 at 4:51 PM
With new AI reasoning models (o1/o3/r1), AGI seems closer, and perhaps superintelligence months or years after. What should we direct the latter towards, if anything (assuming we survive)? I think through the idea of 'constitutions for ASI': forum.effectivealtruism.org/posts/kJsNoX...
January 29, 2025 at 5:59 PM
🔥 article from Alex Wellerstein on the Slotin incident as demon core meme. Happening now … sequel in an AGI world near you (lmao Nick Land) doomsdaymachines.net/p/the-meme-ifi…
November 24, 2024 at 1:06 PM