Njeng Pierre Yves
@njeng.bsky.social
AI enthusiast. Author, student of the future.
And you, @lemonde.fr, what exactly are you judging, my guy?
December 28, 2025 at 11:17 AM
One question.
July 29, 2025 at 7:50 PM
‘Trick pony,’ huh. Bold metaphor. Now go ahead and do the bold thing—name who you’re actually talking about. Shouldn’t be that hard if your point is solid.
April 18, 2025 at 3:22 PM
Who are you referring to as acting like a trick pony?
April 18, 2025 at 8:06 AM
Stay woke.
January 29, 2025 at 1:04 PM
Thoughts:
Fixing AI bias isn’t optional—it’s essential for fairness and functionality. Solutions include:

a. Diverse training datasets.
b. Regular audits for algorithmic fairness (see the sketch below).
c. Transparent explainability models.

The question is: Are companies willing to prioritize ethical AI over profits?
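
On point (b), here is a minimal audit sketch in Python. Everything below is invented for illustration (hypothetical decisions and groups), and the 0.8 cutoff is just the common 'four-fifths' rule of thumb:

```python
# Minimal fairness audit: compare a model's selection rates across
# groups. All data here is fabricated for illustration.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive ('select') decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Lowest selection rate divided by the highest; the common
    'four-fifths rule' flags ratios below 0.8 for review."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: 1 = selected, 0 = rejected.
preds  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
print(rates)                    # {'A': 0.6, 'B': 0.2}
print(disparate_impact(rates))  # ~0.33, well below 0.8: flag for review
```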
January 26, 2025 at 7:53 AM
5. The Cost of Ignoring Bias:
Biased AI harms individuals and exposes companies to lawsuits, reputational damage, and operational inefficiencies.
Example: #Discriminatory AI in HR may violate labor laws, leading to regulatory crackdowns.
January 26, 2025 at 7:52 AM
4. Real-World Examples:

In 2018, Amazon scrapped its hiring AI after it systematically devalued resumes with words like 'women’s.'

Facial recognition AI used in employee monitoring performs worse on non-white faces, leading to disproportionate scrutiny.
January 26, 2025 at 7:50 AM

3. Black Box Models Create Accountability Gaps:

#Deeplearning models make decisions that even their creators struggle to explain.
Example: An AI flagging candidates as 'low potential' without clear criteria leaves rejected candidates with no recourse.

Without transparency, bias thrives.
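
One way to start prying a black box open, even without access to its internals, is permutation importance: shuffle one input at a time and see how much the score moves. The scorer and feature names below are hypothetical stand-ins, not any real vendor's model.

```python
# Probe an opaque scorer with permutation importance: shuffle one
# input column at a time and measure how much the output shifts.
import numpy as np

rng = np.random.default_rng(0)

def black_box_score(X):
    """Hypothetical stand-in for an opaque 'candidate potential' model."""
    return 0.7 * X[:, 0] + 0.3 * X[:, 1] + 0.0 * X[:, 2]

X = rng.normal(size=(500, 3))   # fabricated applicant features
baseline = black_box_score(X)

for j, name in enumerate(["experience", "test_score", "zip_code"]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    shift = np.mean(np.abs(black_box_score(X_perm) - baseline))
    print(f"{name:10s} importance ~ {shift:.3f}")

# Large shifts reveal which inputs actually drive a 'low potential'
# flag, a first step toward giving rejected candidates real criteria.
```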
January 26, 2025 at 7:48 AM
2. Feedback Loops Reinforce Inequality:

#AI systems aren’t static—they adapt based on feedback. If biased decisions (e.g., over-surveilling minority employees) go uncorrected, the system 'learns' to repeat them.
Over time, AI bias gets worse, not better, unless actively mitigated.
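
A toy simulation of that loop (every number invented): two groups with the exact same true incident rate, where scrutiny is allocated in proportion to past detections. The early imbalance never washes out, and the absolute gap keeps growing.

```python
# Toy feedback loop: scrutiny follows past detections, so the group
# watched more accumulates more detections and earns more scrutiny,
# even though both groups behave identically. All numbers fabricated.
import random

random.seed(42)
TRUE_RATE = 0.1                  # identical for both groups
detections = {"A": 1, "B": 2}    # a small initial imbalance

for step in range(200):
    total = sum(detections.values())
    # A budget of 10 checks per step, split by past detections.
    budget = {g: round(10 * d / total) for g, d in detections.items()}
    for group, checks in budget.items():
        detections[group] += sum(
            random.random() < TRUE_RATE for _ in range(checks)
        )

print(detections)  # the initial 1-vs-2 gap widens, not shrinks
```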
January 26, 2025 at 7:47 AM
1. Biased Training Data:

AI learns from historical data, but what happens when that data reflects societal biases?
Hiring tools trained on company data that favors male candidates for technical roles reject equally qualified women.

Bias in, bias out. Garbage data creates discriminatory models.
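
To make 'bias in, bias out' concrete, a deliberately naive sketch with fabricated data: a 'model' that just memorizes historical hire rates per resume keyword, then scores two equally qualified applicants.

```python
# Bias in, bias out: a naive model that memorizes historical hire
# rates per resume keyword. Qualifications in this fabricated data
# are equal; only the past human decisions differ.
from collections import defaultdict

# Hypothetical historical outcomes: (resume keyword, hired?).
history = [("women's", False)] * 8 + [("women's", True)] * 2 \
        + [("neutral", False)] * 3 + [("neutral", True)] * 7

def fit_hire_rates(data):
    """'Trains' by memorizing the hire rate per keyword."""
    counts, hires = defaultdict(int), defaultdict(int)
    for keyword, hired in data:
        counts[keyword] += 1
        hires[keyword] += hired
    return {k: hires[k] / counts[k] for k in counts}

model = fit_hire_rates(history)
print(model)  # {"women's": 0.2, 'neutral': 0.7}

# Two equally qualified applicants; the 'model' sees only the keyword.
for keyword in ("women's", "neutral"):
    verdict = "advance" if model[keyword] >= 0.5 else "reject"
    print(keyword, "->", verdict)  # women's -> reject, neutral -> advance
```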
January 26, 2025 at 7:46 AM