Cameron Martel
@cameronmartel.bsky.social
Assistant Professor at Johns Hopkins Carey Business School.
Studies misinformation & inauthentic behavior online.
A second survey exp found that minimal social connections foster a general norm of responding, such that ppl feel more obligated to respond - and think others expect them to respond more - to ppl who follow them, even outside the context of misinfo correction
April 14, 2025 at 2:28 PM
Exploratory analyses also show that in both survey & field exps, extreme partisanship moderates the effects of social connection on engagement - social connection increases engagement for co-partisans, but decreases engagement for politically extreme counter-partisans
April 14, 2025 at 2:28 PM
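A minimal sketch of how an exploratory moderation test like this could be run, assuming a per-user data table and a logistic regression with an interaction term; the data file and column names are hypothetical, not the paper's materials.

```python
# Sketch of a moderation test (not the study's actual analysis code):
# does the effect of a minimal social connection on engagement differ for
# politically extreme counter-partisans? Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("corrections_field_experiment.csv")  # hypothetical file with
#   engaged                  - 1 if the user engaged with the correction
#   social                   - 1 if the bot followed/liked before correcting
#   extreme_counter_partisan - 1 if the user is an extreme counter-partisan

model = smf.logit("engaged ~ social * extreme_counter_partisan", data=df).fit()
print(model.summary())  # the interaction term captures the moderation effect
```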
We next conducted a follow-up survey on MTurk to replicate effects in a more controlled setting (eg eliminate blocking of counter-partisan bots) & obtained similar results
April 14, 2025 at 2:28 PM
To account for this blocking, we (i) compare unaffected conditions (all but social counter-partisan) & (ii) perform principal stratification (weighting obs in unaffected conditions by their probability of successful treatment delivery had they been in the social counter-partisan condition)
April 14, 2025 at 2:28 PM
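For readers unfamiliar with the weighting step, here is a rough sketch of principal-stratification-style weighting under my own assumptions (the covariate names and data layout are invented); it is not the paper's estimation code.

```python
# Rough sketch of principal-stratification-style weighting (illustrative only).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("corrections_field_experiment.csv")  # hypothetical data file
# Columns assumed: condition, delivered (1 = correction successfully delivered),
# plus pre-treatment covariates such as log_followers and tweets_per_day.

# 1. In the one condition where delivery could fail (social counter-partisan,
#    because users could block the bot), model P(successful delivery | covariates).
affected = df[df["condition"] == "social_counter_partisan"]
delivery = smf.logit("delivered ~ log_followers + tweets_per_day",
                     data=affected).fit()

# 2. Weight users in the unaffected conditions by their predicted probability of
#    successful delivery had they been in the social counter-partisan condition.
unaffected = df[df["condition"] != "social_counter_partisan"].copy()
unaffected["strat_weight"] = delivery.predict(unaffected)
```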
We sent corrections to 1,586 users & measured p(engage w correction):
(i) Among users in the co-partisan condition, social connection had a sig positive effect on engagement
(ii) Among users in the baseline (non-social) condition, no evidence of effect of shared partisanship on engagement
April 14, 2025 at 2:28 PM
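A minimal sketch of the kind of comparison reported in (i), as a two-proportion z-test of engagement in the social vs non-social co-partisan conditions; the counts below are placeholders, not the study's numbers.

```python
# Two-proportion z-test comparing engagement across two randomized conditions.
# The counts are made-up placeholders, not the study's data.
from statsmodels.stats.proportion import proportions_ztest

engaged = [52, 31]    # engaged users: [social co-partisan, non-social co-partisan]
totals = [400, 400]   # users assigned to each condition
z, p = proportions_ztest(engaged, totals)
print(f"z = {z:.2f}, p = {p:.3f}")
```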
Each user was then socially corrected by their randomly assigned bot. Social corrections were done via public reply to the tweet containing the debunked URL and included a link to the fact-check on @snopes.com
April 14, 2025 at 2:28 PM
We created human-looking bots & corrected users who shared debunked URLs

We randomized whether our bots
(i) were co-partisan or counter-partisan relative to the to-be-corrected user
(ii) followed the user & liked some of their tweets before correcting them (creating a minimal social connection)
April 14, 2025 at 2:28 PM
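For concreteness, a tiny sketch of balanced random assignment to this 2x2 design (bot partisanship x minimal social connection); the function, labels, and seed are illustrative rather than taken from the paper.

```python
# Illustrative balanced random assignment to the 2x2 bot design described above.
import random

CELLS = [(party, social)
         for party in ("co-partisan", "counter-partisan")
         for social in ("social", "non-social")]

def assign_conditions(user_ids, seed=42):
    """Return a {user_id: (party, social)} mapping, roughly balanced across cells."""
    rng = random.Random(seed)
    cells = (CELLS * (len(user_ids) // len(CELLS) + 1))[:len(user_ids)]
    rng.shuffle(cells)
    return dict(zip(user_ids, cells))

print(assign_conditions(["user_a", "user_b", "user_c", "user_d"]))
```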
🚨New in @plosone.org🚨
Corrections of misinfo are often ignored. What can drive engagement?
Twitter field exp & survey follow-ups find
-Social ties matter: users more likely to engage w corrections from accounts that followed them
-Shared partisanship had smaller effects on engagement
shorturl.at/0Ycdp
April 14, 2025 at 2:28 PM
We also found:
-Information & making friends were most mentioned as follow-back reasons
-Curiosity also often mentioned, esp for counter-partisan follow-back
-Not wanting info (esp from counter-partisans) & identifying the account as a stranger were the most mentioned reasons for ignoring accounts
October 16, 2024 at 4:21 PM
We found:
-50% of ppl in co-partisan condition who followed-back account mentioned same partisanship as motivation
-58% of ppl in counter-partisan condition who ignored account mentioned diff partisanship as motivation
October 16, 2024 at 4:21 PM
Ppl w ⬆️ issue polarization (& to a lesser extent ⬆️ out-party dislike) were less likely to follow-back neutral & counter-partisans, relative to co-partisans

In contrast, ppl w ⬆️ in-party affinity were more likely to follow-back neutral & co-partisans, relative to counter-partisans
October 16, 2024 at 4:20 PM
We find:
-Preferential follow-back of co-partisan vs counter-partisan explicit bot accounts (evidence of simple content prefs)
-Even *greater* co-partisan follow-back preference for human-looking vs explicit bot accounts (evidence of additional social motivation)
October 16, 2024 at 4:19 PM
We examine this in a Twitter field exp. We created 3 *explicit bot* accounts and 3 *human-looking* accounts, varying only in their expressed party ID (one each: Dem, Rep, politically neutral)
October 16, 2024 at 4:18 PM
🚨New in JEP:G🚨

Why do ppl preferentially reciprocate follows by co-partisans online? In a Twitter field exp & online survey exp we find:
-Both content *and* social prefs drive co-party tie-making
-Distinct roles for in-party pref & out-party dispref
dx.doi.org/10.1037/xge0...
October 16, 2024 at 4:16 PM
Similar to our accuracy findings, warning labels were still sig effective at reducing sharing intentions for false posts, even for those distrusting of FCs - crucially, again w *no* evidence of a backfire
September 5, 2024 at 5:12 PM
In our sharing exps, warning labels:
-Decrease sharing of false headlines on average (24.7%⬇️)
-Are similarly effective for those w greater trust in FCs
-Still work on those in lowest quartile trust in FCs
-Still work on those maximally distrusting of FCs (16.7%⬇️)
September 5, 2024 at 5:11 PM
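As a quick illustration of how relative reductions like these are computed from group means (the means below are invented to roughly reproduce a ~24.7% figure, not taken from the paper):

```python
# Relative (percent) reduction from control vs treatment group means.
# The example means are invented for illustration only.
def percent_reduction(control_mean: float, treatment_mean: float) -> float:
    return 100 * (control_mean - treatment_mean) / control_mean

# e.g. if control participants would share 30.0% of false headlines and labeled
# participants share 22.6%, the relative reduction is ~24.7%
print(round(percent_reduction(0.300, 0.226), 1))  # 24.7
```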
In our accuracy exps, warning labels:
-Decrease belief in false headlines on average (27.6%⬇️)
-Are more effective for those w greater trust in FCs
-Still work on those in lowest quartile trust in FCs
-Still work on those maximally distrusting of FCs (12.9%⬇️)
September 5, 2024 at 5:09 PM
Ppl were randomized to a warning label treatment or control. In treatment, a high proportion (eg 2/3, 3/4, or all) of false headlines were labeled. In control, no labels were provided. 10 exps evaluated accuracy ratings of headlines, 11 evaluated headline sharing intentions
September 5, 2024 at 5:07 PM
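Below is a hedged sketch of how such a treatment stimulus set could be assembled, labeling a given share (e.g. 2/3, 3/4, or all) of the false headlines and leaving true headlines unlabeled; the function and field names are my own illustration, not the study's materials.

```python
# Illustrative construction of a warning-label treatment stimulus set:
# attach a fact-check label to a given share of the *false* headlines only.
import random

def build_treatment_set(false_headlines, true_headlines, label_share=2/3, seed=0):
    rng = random.Random(seed)
    n_labeled = round(label_share * len(false_headlines))
    labeled_idx = set(rng.sample(range(len(false_headlines)), n_labeled))
    stimuli = [{"headline": h, "false": True, "warning_label": i in labeled_idx}
               for i, h in enumerate(false_headlines)]
    stimuli += [{"headline": h, "false": False, "warning_label": False}
                for h in true_headlines]
    rng.shuffle(stimuli)
    return stimuli
```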
As expected, we found more Republican-leaning participants were less trusting of FCs; this was increasingly the case for Reps w higher procedural news knowledge, analytic thinking, & web-use skill
September 5, 2024 at 5:05 PM
One of the most widely used interventions against misinfo (eg on FB, Instagram) is attaching warning labels from professional fact-checkers (FCs) to content the FCs have classified as false
September 5, 2024 at 5:02 PM
🚨New in Nature Human Behaviour🚨

Will misinfo warning labels backfire for ppl who distrust fact-checkers? No!

Labels reduce belief in & sharing of false news even for those highly distrusting of fact-checkers - warning labels are a key tool for platforms!
rdcu.be/dSHtF
September 5, 2024 at 5:01 PM
Finally, in our survey studies we asked *why* people chose to block accounts. The most common reason for blocking was not wanting to see the blocked account's tweets & retweets, rather than blocking so the blocked user cannot see one's own account
May 28, 2024 at 8:32 PM
In a separate study on Lucid (N=3k) we found partisan diffs in reported blocking based on account *content* - eg Dems more likely to block users sharing false content or mean/nasty posts
May 28, 2024 at 8:31 PM
To gain more insight on mechanism, we aimed to replicate our field results in a survey experiment. We found that participants were 3x more likely to block a counter-partisan vs politically neutral profile; but we did not observe a partisan asymmetry in blocking
May 28, 2024 at 8:31 PM
Looking at *blocking* behavior, we found evidence of selective blocking. Users were 12x more likely to block counter-partisan than co-partisan accounts; & 4x more likely to block counter-partisan than neutral accounts
May 28, 2024 at 8:30 PM
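The "X times more likely" comparisons above are rate ratios; a trivial sketch with invented counts:

```python
# Rate ratio underlying "X times more likely to block" comparisons.
# Counts are invented placeholders, not the study's data.
def rate_ratio(events_a, n_a, events_b, n_b):
    return (events_a / n_a) / (events_b / n_b)

# e.g. 24/200 counter-partisan accounts blocked vs 2/200 co-partisan accounts
print(rate_ratio(24, 200, 2, 200))  # 12.0
```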