Giovanna Mascheroni
giovannamas.bsky.social

Media sociologist researching digital media, data & AI in children's lives. Dog lover (Eng. Springer Spaniel) TwitterRefugee. Projects: http://www.eukidsonline.net/, https://yskills.eu/, https://datachildfutures.it/
Also on @giovannamas@aoir.social


The "womanosphere" and “pastel QAnon” lure young women "into far-right conspiracies through content about motherhood and female-coded aesthetics. Some beauty and wellness influencers have proven to be a natural fit for this ecosystem." www.teenvogue.com/story/womano...
The 'Womanosphere' Is Coming for Teen Girls
How beauty and wellness influencers are part of a misinformation ecosystem pushing traditional values on girls.
www.teenvogue.com
I appreciate these authors' work in showing that this problem is not only still here but has grown:

www.nbcnews.com/tech/tech-ne...

But it is also quite frustrating 🧵>>
AI's capabilities may be exaggerated by flawed tests, according to new study
A study from the Oxford Internet Institute analyzed 445 tests used to evaluate AI models.
www.nbcnews.com

"the excessive visibility of a highly active minority at the tip of the iceberg can not only mislead social scientists, but also deceive social media users themselves." https://www.techpolicy.press/what-a-new-study-reveals-about-the-productionconsumption-gap-on-social-media/
X is designed to radicalise people.

The algorithm pushes Elon Musk's agenda of boosting racists and people who want violence brought, specifically, to the streets of Britain.

Members of Parliament, major institutions and the media should not be there.
news.sky.com/story/the-x-...
Elon Musk is boosting the British right - and this shows how
news.sky.com
Creepy crawlers collecting data for generative AI are making the internet work less well for everyone, write Article 19's Tanu I & Corinne Cath. AI crawlers slow sites, strain libraries, and push journalism behind paywalls, they write. But there are solutions, if AI firms choose to respect them.
Creepy AI Crawlers Are Turning the Internet into a Haunted House | TechPolicy.Press
The question is no longer whether AI crawlers are disrupting the internet, but what we can do about it, write Tanu I and Corinne Cath.
www.techpolicy.press

Evidence is piling up that LLMs are culturally biased: trained on data from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) countries, they are ill-suited to understanding people from other regions and cultures, a study finds: coevolution.fas.harvard.edu/sites/g/file...
coevolution.fas.harvard.edu

AI companies are pulling user conversations for training, posing serious privacy risks hai.stanford.edu/news/be-care...
Be Careful What You Tell Your AI Chatbot | Stanford HAI
A Stanford study reveals that leading AI companies are pulling user conversations for training, highlighting privacy risks and a need for clearer policies.
hai.stanford.edu
🎤 The first keynote speech of the day is presented by Nick Couldry and titled "Media and the Corporatization of Everything". The focus is on the question of how media and communication have become part of a comprehensive process of corporatization over the past two decades!

#ZeMKI20 #ZeMKIAnniversary
A very comprehensive critique of the AI moment by @couldrynick.bsky.social to start off Day Two of the #ZeMKI2025 20th anniversary conference.

Liveblog here:
Fighting the Colonial Extractivism of Artificial Intelligence | Snurblog — Axel Bruns
The second day at the
snurb.info

Reposted by Axel Bruns

And our paper on talking politics with Communicative AI
And here’s my liveblog of our #ZeMKI2025 session on polarisation, which also features my paper on our #practicemapping approach:
ZeMKI 2025 | Snurblog — Axel Bruns
20th anniversary conference of the Zentrum für Medien-, Kommunikations- und Informationsforschung, Bremen, 23-24 Oct. 2025.
snurb.info
Wikipedia is seeing a significant decline in human traffic because more people get the information that's on Wikipedia via generative AI chatbots trained on its articles, and via search engines that summarize them, without ever clicking through to the site

www.404media.co/wikipedia-sa...
Wikipedia Says AI Is Causing a Dangerous Decline in Human Visitors
“With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.”
www.404media.co
CA governor Gavin Newsom vetoed both of the major AI bills on his desk that Silicon Valley meaningfully opposed—one making it illegal for bosses to use AI to fire workers with no oversight, one requiring chatbot sellers to ensure their products do not harm children before marketing to them.
Silicon Valley's capture of our political institutions is all but complete
The tech lobby kills off two key California AI bills, and why it matters. Plus: How Sam Altman played Hollywood with Sora 2, organized mass social media deletions, and more.
www.bloodinthemachine.com

Newly emerging AI laws fail to provide adequate safeguards against emotional AI surveillance, only imposing narrow bans and restrictions that contain loopholes and broad exemptions. www.techpolicy.press/how-ai-power...
How AI-Powered Emotional Surveillance Can Threaten Personal Autonomy and Democracy | TechPolicy.Press
If we do not regulate emotional AI surveillance now, we might soon have to fake how we feel to protect our privacy, writes Oznur Uguz.
www.techpolicy.press

Reposted by Neil Selwyn

@neilselwyn.bsky.social “Young people spend way more time outside school, so really we should be talking about how parents and families regulate their children’s device use at home,” he says. “Unfortunately, this isn’t something that most politicians want to do." www.theguardian.com/society/2025...
Two years after school phone bans were implemented in Australia, what has changed?
Phone bans are now well-established in many Australian primary and secondary schools. Have they made a difference?
www.theguardian.com
Must-read piece in @theguardian.com (h/t @dpcarrington.bsky.social). In #ScienceUnderSiege, we describe how petrostates Saudi Arabia & Russia leveraged Musk's buyout of twitter so they could weaponize it for climate denial propaganda & attacks on renewables.
www.theguardian.com/technology/2...
Money talks: the deep ties between Twitter and Saudi Arabia
The long read: Saudi Arabia’s investment in Twitter increased its influence in Silicon Valley while being used at home to shut down critics of the regime
www.theguardian.com

"children living in areas with higher levels of societal inequality ... were linked to having a reduced surface area of the brain's cortex, & altered connections between multiple regions of the brain ... regardless of their economic background." www.theguardian.com/science/2025...
Study links greater inequality to structural changes in children’s brains
Researchers say findings show inequality creates toxic environment and reducing it is ‘a public health imperative’
www.theguardian.com

"Trump is putting Europe under pressure to water down its digital rulebook. But now more than ever, Europe should hold large US tech firms accountable for anti-competitive market rigging, snooping on Europeans, and preying on our children" www.theguardian.com/commentisfre...
The EU has a secret weapon to counter Trump’s economic bullying. It’s time to use it | Johnny Ryan
The anti-coercion instrument, or ‘trade bazooka’, is designed to shield against foreign pressure, says civil liberties campaigner Johnny Ryan
www.theguardian.com
“One of the negative consequences AI is having on students is that it is hurting their ability to develop meaningful relationships with teachers, the report finds. Half of the students agree that using AI in class makes them feel less connected to their teachers.”
Rising Use of AI in Schools Comes With Big Downsides for Students
A report by the Center for Democracy and Technology looks at teachers' and students' experiences with the technology.
www.edweek.org
The Anthropic settlement list is up. Authors, check here to see if your books are included: www.anthropiccopyrightsettlement.com
ANT Homepage | ANT
www.anthropiccopyrightsettlement.com
If your institution requires you to use Blackboard for teaching (like me), be aware its parent company is broke and it's getting new private equity owners whose plans for the platform, and how they'll capitalize on it, remain unknown (bet it includes "AI") onedtech.philhillaa.com/p/anthology-...
Social media companies are letting hate flood our timelines.

Their algorithms amplify hate to play with our emotions & biases, keeping us glued to their platforms.

Call on social media giants & lawmakers to stop the spread of online hate. Take action today ⤵️
Take Action: Stand Against Online Hate
Call on social media giants and lawmakers to stop the spread of online hate. Add your name.
act.counterhate.com

On TikTok and YouTube "children may still be exposed to gambling content, violent images, far-right material and conspiracy theories simply by not logging in to the sites… 'The law does not prevent under 16s from accessing or viewing content without an account.'" www.theguardian.com/media/2025/s...
From zero to neo-Nazis: what under-16s may see under Australia’s social media ban, simply by not logging in
Guardian Australia test finds scrolling shortform videos while logged out of YouTube and TikTok quickly leads to gambling, violent and far-right content
www.theguardian.com

Beyond proving #hallucinations were inevitable, the OpenAI research revealed that industry evaluation methods actively encouraged the problem… binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers. www.computerworld.com/article/4059...
OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limi...
www.computerworld.com
"In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits."

www.computerworld.com/article/4059...
Great 50th anniversary issue of Communications. @goranbolin.bsky.social, @giovannamas.bsky.social, @blurky.bsky.social and others revisit and assess articles and reviews published in the journal in previous decades. Very stimulating discussions!

www.degruyterbrill.com/journal/key/...
Communications Volume 50 Issue 3
Volume 50, issue 3 of the journal Communications was published in 2025.
www.degruyterbrill.com

As Australia is about to introduce a #SocialMediaBan, a trial of age verification software reveals age and racial bias: Asian & Indigenous youth are more likely to be miscategorised as above the age limit www.theguardian.com/news/2025/se... Read @eukidsonline.bsky.social statement www.lse.ac.uk/media-and-co...
Social media ban trial data reveals racial bias in age checking software: just how inaccurate is it?
Young people from Indigenous and Asian backgrounds are more likely to be miscategorised as over the age limit and older people as underaged, analysis finds
www.theguardian.com

Had fun writing a review of Sherry Turkle's Life on the Screen almost 30 years later for the Jubilee issue of Communications: The European Journal of Communication Research doi.org/10.1515/comm...
Turkle, S. (1997). Life on the screen: Identity in the age of the internet. Simon & Schuster. 352 pp.
Article Turkle, S. (1997). Life on the screen: Identity in the age of the internet. Simon & Schuster. 352 pp. was published on September 30, 2025 in the journal Communications (volume 50, issue 3).
doi.org

Actually, studies (few, initial, and certainly limited) suggest the opposite: that AI may reduce cognitive abilities