Social Computing Group - UZH
scg-uzh.bsky.social
Research group on Social Computing led by Prof. Dr. Anikó Hannák at the University of Zurich.
https://www.ifi.uzh.ch/en/scg.html
Reposted by Social Computing Group - UZH
Billions of people scroll every day. But what’s actually going viral on social media? 🤷‍♀️

We asked the platforms for one simple thing: their most-viewed posts.

They all said no 🙃

Help us demand transparency now:
mzl.la/3M8Eh9L
@algorithmwatch.org @scg-uzh.bsky.social @lessurligneurs.bsky.social
Demand the Data: What’s Really Going Viral? | Mozilla Foundation
Social media platforms aren’t sharing what goes viral on their sites. Tell YouTube, Meta, TikTok, LinkedIn, and X to disclose their most-viewed content. Sign now.
mzl.la
November 26, 2025 at 2:59 PM
Are you a pregnant woman, a young parent, or soon to become one? This paper, led by Desheng Hu (he/him), is definitely a must-read!
Google AI Overviews now reach over 2B users worldwide. But how reliable are they on high-stakes topics like pregnancy and baby care?

We have a new paper - led by Desheng Hu, now accepted at @icwsm.bsky.social - exploring that and finding many issues

Preprint: arxiv.org/abs/2511.12920
🧵👇
Auditing Google's AI Overviews and Featured Snippets: A Case Study on Baby Care and Pregnancy
Google Search increasingly surfaces AI-generated content through features like AI Overviews (AIO) and Featured Snippets (FS), which users frequently rely on despite having no control over their presen...
arxiv.org
November 20, 2025 at 10:53 AM
New paper from our group member @nicolo-pagan.bsky.social and colleagues 🤓
LLMs are now widely used in social science as stand-ins for humans—assuming they can produce realistic, human-like text

But... can they? We don’t actually know.

In our new study, we develop a Computational Turing Test.

And our findings are striking:
LLMs may be far less human-like than we think.🧵
Computational Turing Test Reveals Systematic Differences Between Human and AI Language
Large language models (LLMs) are increasingly used in the social sciences to simulate human behavior, based on the assumption that they can generate realistic, human-like text. Yet this assumption rem...
arxiv.org
November 14, 2025 at 9:12 AM
About last week's conference 😏
Elsa Lichtenegger, Desheng Hu and @aurman21.bsky.social presented their ongoing work at the Search Engines and Society Network (SEASON) conference in Hamburg last week 🔍🌐
October 6, 2025 at 9:14 AM
🚨 New Postdoc joining our group starting from today 🤠
Dr. Meirav Segal earned her PhD from the University of Oslo, where she worked on algorithmic fairness and recourse for allocation policies.
A very warm welcome to the team 😃
October 1, 2025 at 6:12 PM
About last week’s internal hackathon 😏
Last week, we, the (Amazing) Social Computing Group, held an internal hackathon to work on our informally named “Cultural Imperialism” project.
September 17, 2025 at 8:24 AM
If you thought LLMs were reliable annotators... we have bad news for you 🫣 🤷
Check out the new paper from our group members @joachimbaumann.bsky.social (freshly graduated 😜), @aurman21.bsky.social and colleagues 😎
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825
September 17, 2025 at 8:20 AM