Prosocial Design Network
@prosocialdesign.bsky.social
Bridging research and practice toward building healthy online spaces. prosocialdesign.org
New study suggests that cuing norms may not work in every context - and could backfire. Several RCTs offer strong evidence that signaling norms has prosocial outcomes, but norms-cuing may only work, e.g., w/ the right message or w/ knowledge of enforcement.

files-www.mis.mpg.de/mpi-typo3/So...
November 9, 2025 at 2:36 PM
Model science here. Finds that nudges work in a survey experiment - but then discovers mostly null effects when tested in a more ecologically valid setting.

Kudos to authors and editors for a) pipelining to ecologically valid settings and b) publishing null results.

tinyurl.com/5dmjr8v9
November 6, 2025 at 11:01 AM
Seriously impressed with the two high school students who built an ML-driven app to short-circuit teens' smartphone dopamine loop.
www.primeopenaccess.com/scholarly-ar...
October 11, 2025 at 6:53 PM
According to new Pew data, mass social media continues to be central to our national public square - with over 50% of Americans getting at least some of their news from social media sites, and YouTube & TikTok taking a larger share. (Only 2% of us here at Bsky.) www.pewresearch.org/journalism/f...
September 28, 2025 at 5:25 PM
A paper from over a year ago: research participants were only able to correctly distinguish AI agents from humans in an online discussion 42% of the time (even when told to expect AI agents). Odds are that percentage would be much lower today.
arxiv.org/html/2402.07...
September 4, 2025 at 1:23 PM
An RCT with 269 Danish teens shows that two smartphone interventions - forcing a brief breathing exercise or asking teens to plan the length of their session - cut social media use on phones by 36%. A third intervention, asking them to reflect on their use, had no effect.
en.kfst.dk/media/5t4dsc...
August 25, 2025 at 11:29 AM
We love a literature review. This one looks at how tech interventions - everything from social media reduction to conversation robots - can reduce loneliness and isolation. 40 RCTs with mixed results - and group psychological interventions (below) for the win.
www.sciencedirect.com/science/arti...
July 12, 2025 at 10:05 AM
A year in the making, the Council on Technology and Social Cohesion's Blueprint on "Prosocial Tech Design Governance" is out - and gives platforms and policy makers a guide to building spaces that foster social cohesion, "the glue that holds society together."
toda.org/assets/files...
June 15, 2025 at 10:24 AM
Great new report from Devika Malik and the Toda Peace Institute on the state of prosocial regulation in the Global South. techandsocialcohesion.org/wp-content/u...
April 8, 2025 at 1:59 PM
New preprint takes a creative approach to reducing the spread of disinformation (and possibly outrage-laden posts) - by drawing people's attention to the emotional content of a post. Extra plus: it's a non-Western study (this one's from Japan). arxiv.org/pdf/2503.24037
April 5, 2025 at 10:55 AM
Integrity Institute read through all those platform DSA filings, so we didn't have to. Great report, with good primers on transparency regulations (from EU, UK and Oz). drive.google.com/file/d/1MJHx...
March 2, 2025 at 12:51 PM
Community Notes are one of the most effective ways to reduce the spread of misinformation - but they have a scalability problem. This promising study gets around that scalability issue with an LLM that mimics Community Notes and - in tests - may even be more effective. arxiv.org/pdf/2411.06116
January 5, 2025 at 7:40 PM
Giving users explanations for why their content was removed is not only a good practice of procedural justice, it also leads to users learning norms and breaking fewer rules. So it's good to see Meta expand this new learn-rules->remove-warning feature. transparency.meta.com/en-gb/explai...
December 11, 2024 at 11:37 AM
We had a great conversation with Sarita Schoenebeck about Trauma-Informed Design, a framework for prosocial design that starts from the position that tech should care about the people on their platforms. A concept we can get behind. The video and recap: www.prosocialdesign.org/blog/pro-soc...
December 9, 2024 at 1:43 PM
Online "moderation" is one of those terms that can mean very different things to those of us working in the prosocial design space. This great 2020 paper (which we just learned of) parses out one key distinction - between policy-focused and community-building moderation. dl.acm.org/doi/pdf/10.1...
December 5, 2024 at 3:55 PM
New research on Reddit's Post Guidance feature, which gives users in-time feedback on their post and community norms. A lot to get excited about: (1) public research using a (2) field experiment to test a (3) transparent and (4) proactive tool that (5) reduces mod workload. arxiv.org/pdf/2411.16814
December 1, 2024 at 5:40 PM
Prompting people to think of accuracy when sharing news is one of the most effective ways to reduce the spread of misinfo. Just came across a Twitter account with some eye-catching prompts (that Lin et al., 2024 found effective). Someone should grab 'em before they're gone.
x.com/thinkaccurac...
December 1, 2024 at 3:04 PM
Counterspeech (responding directly to users) can be an effective way to reduce toxic speech - if it can be scaled. LLMs present a way to do so effectively, but a new paper (field experiment on X) suggests they may not yet be up to the task (compared to stock messaging). arxiv.org/pdf/2411.14986
November 30, 2024 at 11:57 AM
Important - and inspiring - insights from a survey of 115 Reddit mods: 1) they value prosocial behavior 2) they take active steps to encourage and reward it and - listen up tool developers and researchers! - 3) they want more "explicit tools for positive reinforcement". dl.acm.org/doi/abs/10.1...
November 23, 2024 at 1:37 PM
Bridging-based ranking has been proposed as a way to counter the divisive tendencies of social media - by, e.g., up-ranking posts that get respect from left and right. A new paper suggests those rankers need to be careful, though, not to also disappear tough topics. openreview.net/forum?id=Ayl...
November 22, 2024 at 2:39 PM
New study exploring the effectiveness of using LLMs as fact-checkers (in light of scalability constraints of humans) to reduce a) belief in false headlines and b) intention to share misinformation. Prognosis: not great.
www.matthewdeverna.com/docs/publica...
November 16, 2024 at 1:07 PM
Critical read for researchers conducting field experiments on social media platforms. As we try to understand prosocial design, we need to be sure we're doing so ethically (/prosocially). One way to know how to do that: ask participants. www.nature.com/articles/s41...
November 14, 2024 at 2:47 PM
New research suggests that downranking posts from extreme sources could reduce toxic animosity - and also make folks enjoy their platform experience more. osf.io/preprints/ps...
October 9, 2024 at 1:57 PM
Just came across this study with a simple approach to reducing misinformation: asking other users how they know their post is true. Yet to be tested in the field, and not clear how platforms could adopt this approach, but we like its promise! psycnet.apa.org/fulltext/202...
October 6, 2024 at 2:44 PM
Great graphics from the Social Media Governance Initiative giving an overview of Trust & Safety research over the past couple of decades (taken from the T&S Research archive). static1.squarespace.com/static/60a67...
October 3, 2024 at 11:53 AM