Peli Grietzer
peligrietzer.bsky.social
Mathematized philosophy of literature
peligrietzer.github.io
coolest take of a cool dad in 1992
November 14, 2025 at 6:59 PM
2. Rational deliberation with this form is 'natural' in a sense that's big-if-true in the realm of AI alignment
November 14, 2025 at 6:19 AM
1. Rational deliberations with the form 'I'm trying to be x-wise excellent, which involves promoting x-wise excellence through x-wise excellence' play an important role in the non-moral good, in morality, and in AI safety
November 14, 2025 at 6:19 AM
Overall, On Eudaimonia and Optimization is trying to make two big claims
November 14, 2025 at 6:19 AM
In part VI and the appendix I discuss corrigibility-related virtues relevant to AI safety in the narrow sense, and why the excellence-loop structure I'm describing is so helpful for thinking about them. I end by discussing some RL implementation prospects
November 14, 2025 at 6:19 AM
A key idea throughout is that I'm not just talking about quirks of some humanistic concept of rationality: it's the objective causal structure underlying mathematical insight or respectfulness that makes mathematical excellence or virtuous respectfulness workable concepts
November 14, 2025 at 6:19 AM
Part V tries to cash out the idea of a eudaimonically rational way to support eudaimonically rational beings. I develop a concept of benevolence-virtues that draws on a version of the 'x-wise excellence through x-wise excellence' loop we find in math or art or friendship
November 14, 2025 at 6:19 AM
In part IV, I argue that a eudaimonically rational way of aligning to the good of eudaimonically rational beings would be more robust, natural, and learnable than an Effective Altruism-style way of aligning to the good of eudaimonically rational beings
November 14, 2025 at 6:19 AM
In part III, I argue that some worries about AI alignment are unknowingly driven by how hard it is to interpret the good of beings who practice eudaimonic rationality as a utility function that would be legible to an Effective Altruism-style optimizer AI
November 14, 2025 at 6:19 AM
What does a mathematician try to do in math? I say she tries to be mathematically excellent, which involves promoting mathematical excellence through mathematical excellence, and that this structure is closely related to why 'mathematical excellence' can even be a concept
November 14, 2025 at 6:19 AM
I call this class of rational deliberations 'eudaimonic rationality,' and identify it with the form of (implicit or explicit) rationality that guides the efforts of a mathematician or artist or friend when they reflect on what to do in mathematics or in art or in friendship
November 14, 2025 at 6:19 AM
Parts I and II present a class of instances of rational deliberation that are very different from the kind of Effective Altruism-style optimization that many in the AI alignment world take as a paradigm of rational deliberation
November 14, 2025 at 6:19 AM
I thought it was very good!
November 13, 2025 at 2:58 PM
I take consciousness stuff of both the Chalmers and Husserl variety a lot more seriously than most people I know, to a point that really surprises people
November 12, 2025 at 11:14 PM
Yeah, or at least half of it. The thread was also trying to point out a possible Husserlian alternative that would be pro-AGI-integration in a less facetious way, because it would locate the site of labour in the joint production of intuitive mind-world relations
November 12, 2025 at 11:09 PM