Lee Elkin
@lelkin.bsky.social
AI, Decision Theory, Epistemology, Ethics
lelkin.com
My teachers in grad school left out that bit.
August 31, 2025 at 2:43 AM
Agency is all the rage at the tech companies, so that’s a strong angle. My preliminary thoughts are on how misalignment could be advanced by neglecting wellbeing, in case it turns out that wellbeing is realized by AI systems (whether genuine or simulated).
August 9, 2025 at 8:37 PM
We should chat about it once I get some ideas going.
August 9, 2025 at 11:26 AM
For sure! The algorithmic fairness stuff was some low-hanging fruit since statistical fairness criteria and ensemble learning relate to my formal work. I'm starting to get into AI welfare, more along conceptual lines rather than formal, so that might be something if you have any interest there.
August 8, 2025 at 7:03 PM
In the case of fear, maybe there is an implicit conditional/positive correlation in the background, where the antecedent/conditioning variable is the reason, i.e., q -> p or pr(p | q) such that pr(p | q) > pr(p).
June 18, 2025 at 3:49 PM
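A minimal sketch of that positive-correlation reading, using hypothetical toy numbers (the joint distribution below is illustrative, not from the post):

```python
# Toy joint distribution over p (the feared proposition) and q (the reason),
# chosen so that conditioning on q raises the probability of p: pr(p | q) > pr(p).
# The numbers are purely illustrative.
joint = {
    (True, True): 0.30,   # p and q
    (True, False): 0.10,  # p and not-q
    (False, True): 0.10,  # not-p and q
    (False, False): 0.50, # not-p and not-q
}

pr_p = sum(v for (p, q), v in joint.items() if p)   # pr(p) = 0.40
pr_q = sum(v for (p, q), v in joint.items() if q)   # pr(q) = 0.40
pr_p_given_q = joint[(True, True)] / pr_q           # pr(p | q) = 0.75

print(pr_p, pr_p_given_q, pr_p_given_q > pr_p)      # 0.4 0.75 True
```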
And a lack of engagement.
May 10, 2025 at 9:36 PM
Editors could also make positive suggestions where appropriate for desk rejects. PPA recently desk-rejected a paper of mine but said “we think this looks like a very good paper,” just a better fit for another journal like Synthese or Episteme. That was helpful and encouraging.
April 10, 2025 at 2:30 PM
I always do that, especially if it’s clear that the reviewers skimmed the paper and complain about things that have been addressed.
April 10, 2025 at 2:25 PM