Emilio Ferrara
@emilioferrara.bsky.social
Prof of Computer Science at USC
AI, social media, society, networks, data, and HUMANS
LABS http://www.emilio.ferrara.name
Thrilled to share that our latest paper, "Information Suppression in Large Language Models," is now published in Information Sciences!

To read more, see: www.sciencedirect.com/science/arti...

great work w/ @siyizhou.bsky.social
October 16, 2025 at 10:44 PM
How does DeepSeek censorship work?

Here is a practical example: I asked it to discuss my work (I’ve studied online censorship efforts by various countries).

At first, DeepSeek starts composing an accurate answer, even mentioning China’s online censorship efforts.
January 30, 2025 at 6:09 PM
When I was a grad student, I looked up to the giants of my discipline and never thought a day like this would come for me. Happy to celebrate a personal milestone on this holiday! Academia and research are great. Thanks all!
December 25, 2024 at 12:07 AM
*Importantly*, the same effect is evident for left-leaning users, who see overwhelmingly more like-minded voices in their timelines.

This shows the algorithm is contributing to creating homogeneous timelines.

Some call this idea an *echo chamber*. I’m less interested in the naming than in characterizing it.
November 18, 2024 at 6:35 AM
But what happens once a new user starts following a few partisan accounts?

Well, their timelines immediately become filled almost exclusively with partisan content aligned with the user’s views!

Top conservative voices are amplified upwards of 50% more frequently than baseline in right-leaning users’ timelines.
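
For concreteness, here is a toy calculation (made-up rates, not our data) of what "amplified 50% more than baseline" means relative to the neutral group:

def amplification(group_rate, baseline_rate):
    """Relative amplification of an account's timeline appearance rate
    in a group versus the neutral (baseline) group."""
    return group_rate / baseline_rate - 1.0

# Hypothetical: an account appears in 3% of right-leaning timelines
# but only 2% of neutral timelines, i.e., it is amplified by 50%.
print(f"{amplification(0.03, 0.02):.0%}")  # -> 50%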
November 18, 2024 at 6:32 AM
A natural question arises: who are these users whose content is amplified?

Let’s take a look at the accounts that pop up most frequently in the timelines of new users (neutral group):

Aside from obvious VIPs (Musk, major political figures), it’s evident that many conservative users are overrepresented.
November 18, 2024 at 6:29 AM
1. We track the diversity of accounts appearing on the timelines of our sock puppets.

High Gini inequality shows that all 4 groups see recommendations skewed toward a small set of accounts.

Right-leaning users experience the highest exposure inequality, i.e., they see content from fewer users!
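
For the curious, a minimal sketch (hypothetical counts, not our actual pipeline) of how exposure inequality can be quantified with a Gini coefficient over per-account appearance counts:

import numpy as np

def gini(counts):
    """Gini coefficient of exposure counts: 0 means every account
    appears equally often; values near 1 mean a few accounts dominate."""
    x = np.sort(np.asarray(counts, dtype=float))  # ascending order
    n = len(x)
    ranks = np.arange(1, n + 1)
    return 2 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1) / n

# Hypothetical appearance counts per unique account in a group's timelines.
balanced_counts = [120, 110, 95, 90, 80, 75, 70, 60, 55, 45]
right_counts = [900, 300, 80, 40, 20, 10, 5, 5, 3, 2]

print(f"balanced Gini: {gini(balanced_counts):.2f}")    # ~0.16, low inequality
print(f"right-leaning Gini: {gini(right_counts):.2f}")  # ~0.78, high inequality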
November 18, 2024 at 6:23 AM
Let’s understand how we set up our study!

We deployed 4 groups of sock puppet accounts: neutral (new accounts following no one), left/right-leaning (accounts initialized to follow 10 random left/right political accounts), and balanced (half and half, 5L/5R).

We collected >100k tweets per day for 3 weeks.
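
As a rough illustration of the setup (hypothetical account names, not our real seed lists), the group initialization can be sketched like this:

import random

# Hypothetical pools standing in for the real partisan seed lists.
LEFT_POOL = [f"left_user_{i}" for i in range(100)]
RIGHT_POOL = [f"right_user_{i}" for i in range(100)]

# The four sock-puppet groups and who each account starts out following.
groups = {
    "neutral": [],                           # control: follows no one
    "left": random.sample(LEFT_POOL, 10),    # 10 random left-leaning accounts
    "right": random.sample(RIGHT_POOL, 10),  # 10 random right-leaning accounts
    "balanced": random.sample(LEFT_POOL, 5) + random.sample(RIGHT_POOL, 5),  # 5L/5R
}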
November 18, 2024 at 6:17 AM
3. Additionally, when we study our control group of neutral accounts that do not follow anybody (akin to a newly registered account on the platform), we find a *default right-leaning bias* in content exposure.
November 18, 2024 at 6:06 AM
2. Both left- and right-leaning users encounter *amplified exposure* to accounts aligned with their own political stance and *reduced exposure* to opposing viewpoints.
November 18, 2024 at 6:04 AM
Let’s start by briefly summarizing the three most salient findings:

1. The X platform’s current recommendation system skews exposure toward a few high-popularity accounts for all users, with right-leaning users experiencing the most inequality.
November 18, 2024 at 6:03 AM
Finishing the year on a high note!
December 27, 2023 at 8:06 AM
Coverage of our latest work!

Online ‘likes’ for toxic social media posts prompt more – and more hateful – messages

https://theconversation.com/online-likes-for-toxic-social-media-posts-prompt-more-and-more-hateful-messages-218220
December 4, 2023 at 4:33 PM
MASSIVE day for our lab!

Julie Jiang and Herbert Chang made the Forbes 30 Under 30 list in the Science category!

I couldn't be prouder, so richly deserved!

www.forbes.com/30-under-30/...
November 28, 2023 at 6:21 PM
❤️‍🔥Here is a new paper I am really excited about🔥

Social Approval and Network Homophily as Motivators of Online Toxicity arxiv.org/abs/2310.07779

Online hate is driven by the pursuit of social approval: toxicity is homophilous and a user's propensity for it can be predicted by their social networks!
October 17, 2023 at 4:45 PM
Our latest work is out in EPJ Data Science!

Exposing influence campaigns in the age of LLMs: a behavioral-based AI approach to detecting state-sponsored trolls

epjdatascience.springeropen.com/articles/10....
October 12, 2023 at 10:57 PM
Our latest work is out in EPJ Data Science!

How does Twitter account moderation work? Dynamics of account creation and suspension on Twitter during major geopolitical events

rdcu.be/dnKVo
October 6, 2023 at 4:08 PM
🌈NEW PAPER!☄️

Start October with a scary read! 🎃

In pure Halloween 👻 spirit, dive into the darker side of Generative AI and its nefarious applications!

#GenAI #ai #LLMs

Pls reshare!
arxiv.org/abs/2310.00737
October 3, 2023 at 10:18 AM