Jeff Sebo
@jeffsebo.bsky.social
Associate Professor of Environmental Studies, Director of the Center for Environmental and Animal Protection, Director of the Center for Mind, Ethics, and Policy, and Co-Director of the Wild Animal Welfare Program, New York University. jeffsebo.net
Yes, this is an important factor. As we note in the essay, risk assessments need to consider both probability and magnitude of harm. That means considering the probability that an ant can suffer *and* how much they can suffer, if at all. We focus more on probability here only for space reasons.
November 10, 2025 at 3:08 PM
Makes sense! I still see that as limiting the rights that future generations can have against us, as opposed to undermining the very possibility of such rights. (But this is not to dismiss the argument, since limiting the rights that future generations can have against us would still matter a lot!)
October 13, 2025 at 6:14 PM
I see the non-identity problem as addressing whether particular impacts count as harming future generations, not whether future generations can have rights that correspond to present duties. Do you see it as ruling out the latter?
October 13, 2025 at 4:02 PM
20/
- The Emotional Alignment Design Policy
arxiv.org/abs/2507.06263
- Is There a Tension between AI Safety and AI Welfare?
link.springer.com/article/10.1...
- What Will Society Think about AI Consciousness?
sciencedirect.com/science/arti...
- When an AI Seems Conscious
whenaiseemsconscious.org
September 19, 2025 at 5:31 PM
19/ You can find my talk on AI welfare here:
tedxnewengland.com/speakers/jef...
Hope you enjoy! For more, see:
- The Moral Circle
wwnorton.com/books/978132...
- Moral Consideration for AI Systems by 2030
link.springer.com/article/10.1...
- Taking AI Welfare Seriously
arxiv.org/abs/2411.00986
September 19, 2025 at 5:31 PM
18/ If there are risks in both directions, then we should consider them both, not consider one while neglecting the other. And even if the risk of under-attribution is low now, it may increase fast. We can, and should, address current problems while preparing for future ones.
September 19, 2025 at 5:31 PM
17/ However, Suleyman also describes our work on moral consideration for near-future AI as “premature, and frankly dangerous,” implying that we should consider and mitigate over-attribution risks but not under-attribution risks at present. Here we disagree.
September 19, 2025 at 5:31 PM