Josh Fairfield
@joshfairfield.bsky.social
Legal scholar, tech law nerd, reluctant futurist
AI, crypto, digital property & community
Prof @ W&L | Author | Testifies sometimes | Consultant
Book 3 on the way. Author alignment: Lawful Thoughtful
https://joshfairfield.com/
This was a great collaboration with you @amandareillyinnz.bsky.social.
AI will do as it’s trained. If that is to squeeze more profit from workers at the cost of their health, it will. But if we train it to benefit the people whose life experiences make it work, then we will have AI for good.
May 17, 2025 at 1:30 AM
This is going to end exactly like the Google “glassholes” debacle. The crazy part is that we accept a degree of surveillance from smartphones in our own pockets that we would never accept from a surveillance device on someone else’s face.
May 16, 2025 at 8:09 PM
"How could this happen?" Say the people who trained the AI to make this happen. The deniability is the point.
April 10, 2025 at 10:04 PM
"How could this happen?" Say the people who trained the AI to make this happen. The deniability is the point.
Gives new and bleak meaning to "dehumanizing," doesn't it?
April 10, 2025 at 7:55 PM
It's not worth it. The damage to people is real, and the system won't even produce the savings they're training the AI to find.
April 10, 2025 at 3:17 PM
4/4 ...then it will find every opportunity to eliminate the benefit, even where it is wrong. AI hallucinates. When the hidden optimization goal is reducing costs, AI hallucinates that people are abusing the system when they are not.
April 10, 2025 at 12:53 PM
3/4 What's going wrong? The answer is hidden optimization goals. If an AI is told to make sure everyone who is eligible for a benefit gets it, it will do that. If it is told to kick everyone off a benefit that it can, it will do that. If an AI has the hidden optimization goal of reducing costs...
April 10, 2025 at 12:52 PM
2/4 And it worked out badly in India, on pretty much the same facts. Eligible people were cut off from benefits they needed. www.amnesty.org/en/latest/ne...
India/Global: New technologies in automated social protection systems can threaten human rights
Governments must ensure automated social protection systems are fit for purpose and do not prevent people eligible for welfare from receiving it, Amnesty International said today as it published a tec...
www.amnesty.org
April 10, 2025 at 12:49 PM
1/4 This has worked out very badly for the UK, which tried handing decisions about who gets benefits, and who should be sanctioned, over to AI. The AI denied needed benefits to thousands of people, leaving some with no recourse and showing bias against others. Example: www.theguardian.com/society/2024...
Revealed: bias found in AI system used to detect UK benefits fraud
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach
www.theguardian.com
April 10, 2025 at 12:47 PM