#AIharms
Does AI pose an existential risk? This new @theconversation.com article asks 5 experts, including ADM+S researchers Aaron Snoswell and Simon Coghlan.

Read all 5 expert opinions: bit.ly/48YwxR4

#AIrisks #AIharms #AI
October 13, 2025 at 3:42 AM
A Dayak activist. A Taiwanese student. A Javanese protester. A Thai man navigating HIV stigma.

Each turns to AI for support — and gets gaslit.

Fictional stories, real risks as US tech panders to the far right.

Read my dispatch:
www.margin-notes-squared.com/p/americas-w...
#AI #Meta #Woke #AIharms
America’s War with ‘Woke’
New AI Harms for the Global Majority
www.margin-notes-squared.com
August 17, 2025 at 5:58 AM
@joncounts

Hilarious here, a bit less so when wiretapped medical appointments are being transcribed like this into patients' files.

There's an excellent paper unpacking all of the harms of AI transcription, excellent in part for its title, which starts with 'Careless Whispers: ...'

#ai #llm #aiharms
August 17, 2025 at 4:21 AM
Truly frightening stuff over on LinkedIn from @wolvendamien.bsky.social regarding "people whose 'AI' generated after-visit healthcare 'summaries' included erroneous bullshit ('hallucinations')." And I totally agree that "harm will fall first on the most marginalized."
#AI #AIHarms #AIEthics #AJI
[edited] I've now heard from two separate people whose "AI" generated after-visit healthcare "summaries" included erroneous bullshit ("hallucinations"). | Damien P. W...
www.linkedin.com
July 7, 2025 at 4:53 PM
Empirical evidence of Large Language Model's influence on human spoken communication
Artificial Intelligence (AI) agents now interact with billions of humans in natural language, thanks to advances in Large Language Models (LLMs) like ChatGPT. This raises the question of whether AI has the potential to shape a fundamental aspect of human culture: the way we speak. Recent analyses revealed that scientific publications already exhibit evidence of AI-specific language. But this evidence is inconclusive, since scientists may simply be using AI to copy-edit their writing. To explore whether AI has influenced human spoken communication, we transcribed and analyzed about 280,000 English-language videos of presentations, talks, and speeches from more than 20,000 YouTube channels of academic institutions. We find a significant shift in the trend of word usage specific to words distinctively associated with ChatGPT following its release. These findings provide the first empirical evidence that humans increasingly imitate LLMs in their spoken language. Our results raise societal and policy-relevant concerns about the potential of AI to unintentionally reduce linguistic diversity, or to be deliberately misused for mass manipulation. They also highlight the need for further investigation into the feedback loops between machine behavior and human culture.
arxiv.org
June 22, 2025 at 6:15 AM
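The analysis the abstract describes boils down to tracking how often ChatGPT-associated words turn up in talk transcripts over time. Here is a minimal sketch of that idea, not the authors' code: the marker-word list and the transcripts_by_year mapping are hypothetical placeholders, and the paper derives its word list empirically and applies proper trend statistics, which this toy version does not.

```python
import re

# Hypothetical marker words for illustration; the paper builds its
# ChatGPT-associated word list empirically from model outputs.
GPT_WORDS = {"delve", "intricate", "meticulous", "boast", "realm"}

def word_rate(transcripts):
    """Marker-word occurrences per 10,000 tokens across a batch of transcripts."""
    tokens_total, hits = 0, 0
    for text in transcripts:
        tokens = re.findall(r"[a-z']+", text.lower())
        tokens_total += len(tokens)
        hits += sum(1 for t in tokens if t in GPT_WORDS)
    return 10_000 * hits / max(tokens_total, 1)

def yearly_trend(transcripts_by_year):
    """Map each year to its marker-word rate, e.g. to compare pre- vs post-ChatGPT."""
    return {year: word_rate(texts) for year, texts in sorted(transcripts_by_year.items())}

# Toy example; the paper's real input is ~280,000 YouTube talk transcripts.
if __name__ == "__main__":
    corpus = {
        2021: ["today we present our results on protein folding"],
        2024: ["today we delve into the intricate realm of protein folding"],
    }
    print(yearly_trend(corpus))
```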
This links to a number of studies worth checking out. You sound like ChatGPT | The Verge https://www.theverge.com/openai/686748/chatgpt-linguistic-impact-common-word-usage #AI #LLM #AIHarms

mastodon.acm.org
June 22, 2025 at 6:12 AM
Ample evidence in #criticalAI studies has revealed #AIharms to racialised people. AI models cause harm by transmitting discrimination, toxicity, misinformation, and negative stereotypes. What is less well known is how people make sense of and navigate these systems and harms. 5/n
November 20, 2024 at 12:28 PM