nymag.com/intelligence...
But here we are. Deepfakes, cloned voices, and perfectly “human” bots are everywhere.
And something unexpected is happening: people are starting to miss what’s real.
time.com/7326718/sora...
#AI #Deepfakes
The “state of the art” is fragile, and people trying to mislead others know it.
We need a shared global system to track and verify AI content. 🤝
#AI #Deepfakes #AIDetection
A perfect city built on one child’s suffering.
It reminded me of AI progress.
We celebrate every breakthrough, but someone always pays the price.
Often, it’s unseen workers labeling data for almost nothing. 🧵
OpenAI’s new Parental Controls for ChatGPT try to alert parents to signs of “emotional distress” in teen conversations. The idea sounds reassuring: more safety, more oversight, less risk. 🚨🧵
People chat with AI for support.
It’s always available.
No judgment. No awkwardness.
But is easy comfort the same as real belonging?
We risk losing the messy, human parts of connection.
The kind that comes from listening to each other->
We train language models to give answers fast and with confidence. But in real life, knowing when to pause or even admit “I’m not sure” is a skill we respect in people.
I see it often: AI systems fill in 🧵
A recent study shows top language models judge African American English more harshly. The models assign worse jobs and harsher sentences—just based on how someone speaks.
Even with all the tech fixes and fairness audits, bias tied to language🧵
AI “friends” are everywhere now.
Chatbots that always listen. 🗣️
No judgment.
No awkward silences.
It’s easy.
It’s safe.
But is it real?
I see teens telling bots more than people.
Adults trust AI with things they keep from friends. 🧵
H&M and Vogue both use AI models, but there’s a big difference. 🤖
H&M makes “digital twins” of real people.
These models keep some control and get paid. 💸
There’s always a real person behind each image.
Vogue went all in on fully generated AI for Guess. 🧵
In Mexico, there’s a new project using AI for indigenous languages, many on the edge of disappearing.
It sounds bold. But is it real progress, or just more hype?
What stands out to me is this:
Real people are leading the way. 🧵
I recently came across an article about patients in China. Many, especially those who feel overlooked 🧵
In a recent case, a network of almost 90 TikTok accounts started using AI to create fake versions of big-name Spanish-speaking journalists. The videos looked real, sounded real, and spread made-up news that fooled a
“How do you spot an AI video?”
Well, the short answer is: it's tricky.
The old giveaways, like weird hands and vanishing objects, really don’t work anymore.
Honestly, I study this stuff every day and even I have trouble spotting a well-made 🧵
But the rush for memory is outpacing ethics and transparency.
A system that remembers you feels helpful. It adapts, recalls your style, even past chats. But where’s the line between helpful and unsettling? 🧵
What once took experts building personas by hand is now fast, cheap, and automated.
@AnthropicAI recently exposed an “influence-as-a-service” network running 100+ fake personas across X and Facebook. These weren’t chasing 🧵
But after years working in human-centered AI, I can tell you it’s not 🧵
Some say this will end corruption. But is it really that simple?
Tech is not a magic fix. It's just another tool, one that needs real oversight.
www.theguardian.com/world/2025/s...
It reminded me of a much bigger issue: how beauty gets flattened into one narrow standard.
And AI is doing the same thing.
Have you ever prompted a generative model to create an image of a “beautiful” woman or man?
But almost nobody talks about the voices they leave out.
I grew up speaking Spanish, but I know there are hundreds of languages out there with even less data online. When we build AI models on 🧵
But what happens when AI agents start reviewing, and even writing, scientific papers?
Lately, I've heard more and more conversations about suspected AI-generated reviews, even at top journals.