Claudia Flores Saviaga
@saviaga.bsky.social
Meta’s @OversightBoard | Human-Centered AI & Deepfakes | Ph.D. CS @Northeastern | Past: @Twitter @Meta Fellow | @CarnegieMellon @oiioxford

Or whole communities left out, their languages and cultures erased because they’re not in the data.

We call this progress.
But who gets left behind?

In my opinion, real progress means building AI that includes everyone.
November 3, 2025 at 5:55 PM
Don’t get me wrong, tech tools can support families. But let’s not pretend parental controls will solve a problem rooted in trust, mental health, and honest connection. 🫱🏽‍🫲🏼
October 31, 2025 at 3:35 PM
language might be everyday talk elsewhere. AI systems trained on mostly Western data miss these differences, and that can lead to false alarms or, worse, to missing what’s really wrong. 🌍

So, are we helping teens feel safer, or just teaching them to hide better? 🪤
October 31, 2025 at 3:35 PM


And teenagers are smart. If they know an AI is reading over their shoulder, they’ll switch up how they talk, use private codes, or just avoid the platform when they feel vulnerable. That defeats the point. 🕵️‍♂️

Plus, there’s the cultural layer: What sounds like a red flag in one region or
October 31, 2025 at 3:35 PM
But real life isn’t that simple.

Teenagers talk in code. Slang changes almost overnight. Sometimes, the same phrase can mean completely different things based on context or culture. What one group says as a joke, another might say when they’re genuinely upset. AI rarely gets this nuance. 🧩
October 31, 2025 at 3:35 PM
not just to a perfect bot.

Are we trading real community for something less real, just because it’s simple? www.newyorker.com/magazine/202...
A.I. Is Coming for Culture
We’re used to algorithms guiding our choices. When machines can effortlessly generate the content we consume, though, what’s left for the human imagination?
www.newyorker.com
October 28, 2025 at 5:45 PM
Would you trust an AI that admits when it’s unsure? 🤖💡

openai.com/index/why-la...
Why language models hallucinate
OpenAI’s new research explains why language models hallucinate. The findings show how improved evaluations can enhance AI reliability, honesty, and safety.
openai.com
October 27, 2025 at 3:07 PM
released a really interesting article about hallucinations in language models. The big insight: Hallucinations are not inevitable. Language models can actually abstain when uncertain.

If we start rewarding this behavior, maybe we get AI that is not just knowledgeable, but trustworthy.
October 27, 2025 at 3:07 PM
gaps when unsure, sometimes inventing details that sound right but aren’t. The results can be anything from funny to potentially harmful. So what if our evaluation metrics valued humility, not just accuracy? Imagine if “I don’t know” was a smart response for an algorithm instead of a failure. OpenAI
October 27, 2025 at 3:07 PM
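To make that idea concrete, here is a minimal, hypothetical sketch (mine, not from the OpenAI paper) of a scoring rule where abstaining is neutral and a confident wrong answer costs more than saying "I don't know":

# Illustrative Python sketch of an evaluation that values humility.
# The weights (+1 / 0 / -1) are assumptions, chosen only to show the idea.

def score_answer(prediction: str, truth: str) -> float:
    answer = prediction.strip().lower()
    if answer == "i don't know":
        return 0.0   # abstaining is neutral, not a failure
    if answer == truth.strip().lower():
        return 1.0   # correct answer earns full credit
    return -1.0      # confident wrong answer costs more than abstaining

truths  = ["paris", "1969", "oxygen"]
guesser = ["paris", "1971", "hydrogen"]              # always answers, often wrong
humble  = ["paris", "i don't know", "i don't know"]  # abstains when unsure

print(sum(score_answer(p, t) for p, t in zip(guesser, truths)))  # -1.0
print(sum(score_answer(p, t) for p, t in zip(humble, truths)))   # 1.0

Under plain accuracy both models look the same (one correct answer each); under this rule the humble model comes out clearly ahead, which is the behavior the post argues we should reward.
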
isn’t going away.

If real people can’t see, question, and shape these systems, “accountability” in AI is just a buzzword.

www.nature.com/articles/s41...
AI generates covertly racist decisions about people based on their dialect - Nature
Despite efforts to remove overt racial prejudice, language models using artificial intelligence still show covert racism against speakers of African American English that is triggered by features of…
www.nature.com
October 25, 2025 at 5:40 PM
complexity?

Before frictionless AI “friendship” becomes our default, maybe we need to pause. ⏸️
What kind of connection do we really want?

Have you noticed this, too?
How is it showing up in your world?
October 22, 2025 at 1:11 PM
“Connection” is turning smooth and simple—never messy.

But true friendship is messy. 🧩
Belonging takes work.
It means facing misunderstandings, not just avoiding them.

If AI always agrees, do we lose the skills to handle real relationships?
Will we forget how to connect with people in all their
October 22, 2025 at 1:11 PM
If building this in English is already so tough, making it work globally brings a whole new set of challenges. We’re only at the beginning.

Full study here: papers.ssrn.com/sol3/papers....
Call Me A Jerk: Persuading AI to Comply with Objectionable Requests
Do artificial intelligence (AI) models trained on human language submit to the same principles of persuasion as humans? We tested whether 7 establis…
papers.ssrn.com
October 17, 2025 at 5:07 PM
and totally different ideas about what’s actually persuasive or even acceptable.

Culture shapes how we try to convince each other. Some use direct arguments, others tell stories. What works in one place can feel pushy or awkward somewhere else.
October 17, 2025 at 5:07 PM
All of this was done in English.

Reading this, I keep thinking about how hard it could be to build those same guardrails in Spanish, Hindi, Swahili, or Mandarin.

It’s not just about translating words.

Every language brings its own ways of reasoning, its own social rules,
October 17, 2025 at 5:07 PM
Fake becomes hard to spot.

Fashion once pushed for diversity.
Now, code erases that work.

Are we seeing progress, or just the same beauty standards in a new package? 🧐

If AI decides who gets seen, who gets left out?

www.teenvogue.com/story/hm-is-...
www.bbc.com/news/article...
October 16, 2025 at 7:28 PM