But here we are. Deepfakes, cloned voices, and perfectly “human” bots are everywhere.
And something unexpected is happening: people are starting to miss what’s real.
time.com/7326718/sora...
#AI #Deepfakes
The “state of the art” is fragile, and people trying to mislead others know it.
We need a shared global system to track and verify AI content. 🤝
#AI #Deepfakes #AIDetection
A perfect city built on one child’s suffering.
It reminded me of AI progress.
We celebrate every breakthrough, but someone always pays the price.
Often, it’s unseen workers labeling data for almost nothing. 🧵
OpenAI’s new Parental Controls for ChatGPT try to alert parents to signs of “emotional distress” in teen conversations. The idea sounds reassuring: more safety, more oversight, less risk. 🚨🧵
People chat with AI for support.
It’s always available.
No judgment. No awkwardness.
But is easy comfort the same as real belonging?
We risk losing the messy, human parts of connection.
The kind that comes from listening to each other →
We train language models to give answers fast and with confidence. But in real life, knowing when to pause or even admit “I’m not sure” is a skill we respect in people.
I see it often: AI systems fill in 🧵
AI “friends” are everywhere now.
Chatbots that always listen. 🗣️
No judgment.
No awkward silences.
It’s easy.
It’s safe.
But is it real?
I see teens telling bots more than people.
Adults trust AI with things they keep from friends. 🧵
H&M and Vogue both use AI models, but there’s a big difference. 🤖
H&M makes “digital twins” of real people.
These models keep some control and get paid. 💸
There’s always a real person behind each image.
Vogue went all in on fully generated AI models for Guess. 🧵
“How do you spot an AI video?”
Well, the short answer is: it's tricky.
The old giveaways (weird hands, vanishing objects) really don’t work anymore.
Honestly, I study this stuff every day and even I have trouble spotting a well-made 🧵
But the rush for memory is outpacing ethics and transparency.
A system that remembers you feels helpful. It adapts, recalls your style, even past chats. But where’s the line between helpful and unsettling? 🧵
What once took experts building personas by hand is now fast, cheap, and automated.
@AnthropicAI recently exposed an “influence-as-a-service” network running 100+ fake personas across X and Facebook. These weren’t chasing 🧵
But after years working in human-centered AI, I can tell you it’s not 🧵
As I told Factchequeado: if AI can’t see us all, it’s not good enough.
🔗 factchequeado.com/teexplicamos/20250507/ia-imagenes-diversidad-latinos
It reminded me of a much bigger issue: how beauty gets flattened into one narrow standard.
And AI is doing the same thing.
Have you ever prompted a generative model to create an image of a “beautiful” woman or man?
But almost nobody talks about the voices they leave out.
I grew up speaking Spanish, but I know there are hundreds of languages out there with even less data online. When we build AI models on 🧵
But what happens when AI agents start reviewing, and even writing, scientific papers?
Lately, I've heard more and more conversations about suspected AI-generated reviews, even at top journals.