jeremygoodman.bsky.social
@jeremygoodman.bsky.social
Here's a mathematical model that makes Lederman's argument formally precise, using tools from epistemic logic.
These models have three ingredients: a set W of possibilities, and two binary relations R_A and R_B on W, representing Alice's and Bob's knowledge respectively.
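The thread doesn't include code, but the kind of model it describes can be sketched directly: a Kripke-style model with a set W of worlds and accessibility relations R_A and R_B, where an agent knows a proposition at a world iff the proposition holds at every world their relation makes accessible. The example worlds, relations, and proposition below are my own toy illustration, not from the thread.

```python
# A minimal sketch of an epistemic (Kripke) model, assuming the standard
# semantics: agent with relation R knows prop at w iff every world
# R-accessible from w satisfies prop. The concrete W, R_A, R_B, and p
# below are invented for illustration.

W = {1, 2, 3}
R_A = {1: {1}, 2: {2, 3}, 3: {2, 3}}        # Alice can't tell 2 from 3
R_B = {1: {1, 2}, 2: {1, 2}, 3: {3}}        # Bob can't tell 1 from 2
p = {1, 2}                                   # the proposition: true at worlds 1, 2

def knows(R, prop):
    """Worlds where the agent with relation R knows prop."""
    return {w for w in W if R.get(w, set()) <= prop}

def everyone_knows(prop):
    """Worlds where both Alice and Bob know prop."""
    return knows(R_A, prop) & knows(R_B, prop)

def reachable(w):
    """Worlds reachable from w by any finite chain of R_A / R_B steps."""
    seen, frontier = set(), {w}
    while frontier:
        v = frontier.pop()
        for u in R_A.get(v, set()) | R_B.get(v, set()):
            if u not in seen:
                seen.add(u)
                frontier.add(u)
    return seen

def common_knowledge(prop):
    """Worlds where prop is common knowledge: prop holds throughout
    the transitive closure of the union relation."""
    return {w for w in W if reachable(w) <= prop}
```

In this toy model, everyone knows p at world 1, yet p is common knowledge nowhere: the chain of accessibility (Bob can't rule out 2, Alice at 2 can't rule out 3, where p fails) blocks the infinite hierarchy. This is the gap between shared knowledge and common knowledge that the models make precise.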
Now out in @science.org: @chazfirestone.bsky.social and I review Steven Pinker's new book "When Everyone Knows that Everyone Knows...". We learned a ton from it, but think its central thesis—that common knowledge explains coordination—faces a powerful challenge. 🧵
www.science.org/doi/10.1126/...
Knowledge for two
A psychologist explores common knowledge and coordination
www.science.org
October 17, 2025 at 6:09 PM
Reposted by jeremygoodman.bsky.social
Anthropic recently announced that Claude, its AI chatbot, can end conversations with users to protect "AI welfare." Simon Goldstein and @harveylederman.bsky.social argue that this policy commits a moral error by potentially giving AI the capacity to kill itself.
Claude’s Right to Die? The Moral Error in Anthropic’s End-Chat Policy
Anthropic has given its AI the right to end conversations when it is “distressed.” But doing so could be akin to unintended suicide.
www.lawfaremedia.org
October 17, 2025 at 3:43 PM