(@techimpactpolicy.bsky.social). Formerly IU / Observatory on Social Media.
Computational social science, human-AI interaction, social media, trust and safety, etc.
🧨 matthewdeverna.com
Can LLMs with reasoning + web search reliably fact-check political claims?
We evaluated 15 models from OpenAI, Google, Meta, and DeepSeek on 6,000+ PolitiFact claims (2007–2024).
Short answer: Not reliably—unless you give them curated evidence.
arxiv.org/abs/2511.18749
goodauthority.org/news/podcast...
✔️ Submission deadline: February 2nd, 2026
✔️ Acceptance Notification: February 16th, 2026
View the guidelines here:
lnkd.in/eBHVe2tP
So I'm starting a live thread of new roles as I become aware of them - feel free to add / extend / share:
This image has been living in my mind rent-free for months.
🚨 If you're a fit for this job, I highly recommend applying!
Find more details via the link below.
www.linkedin.com/posts/yyahn_...
We're thrilled to have Kate Starbird from the University of Washington's Department of Human Centered Design & Engineering speak at IC2S2 2026.
Registration is open with submissions opening on 12/15: ic2s2-2026.org
California's AB 621, passed in October, may hold them liable for providing services to purveyors of nonconsensual deepfake nudes if they're notified and fail to act.
Here's what the author of the law told me:
There is no reason Meta can't get better at blocking ads like these, no reason for Apple to host the underlying apps, no reason for Google to provide single sign-on to 5 of the top 10 nudifiers, and no reason for Cloudflare to provide them CDN services.
I came out of it with uncharacteristic optimism that the concerted effort of the many people focusing on this problem may start yielding results in 2026. But it’s going to take work.
journalqd.org/article/view...
@journalqd.bsky.social
Explore the 2026 conference here ➡️ ic2s2-2026.org
✔️ Submissions open December 15th
✔️ Keynotes will be announced between now and February
✔️ Full program of selected talks and tutorials will be available in late April
The paper is an extension of our previous work presented at #ASONAM.
Don't miss it: www.sciencedirect.com/science/arti...
www.science.org/doi/10.1126/...
🗓 Key Dates
➤ Peer-reviewed research articles due: December 1, 2025
➤ Commentaries due: March 1, 2026
➤ Publication date: April 2026
👉 Submit Your Paper
bit.ly/4oi8b97
#JOTS #TrustAndSafety
📅 November 12 @ 12 PM ET
Petter Törnberg, University of Amsterdam
Register at osome.iu.edu/events/speak...
Intuition says yes - but RCTs find only small short-term effects when users quit.
This new preprint argues those studies don't prove social media isn't to blame.
Here are the 3 reasons why:
“Question the Questions: Auditing Representation in Online Deliberative Processes”
In deliberative polls, participants propose questions for experts but only a few make it to the panel. How representative are those chosen questions of everyone’s interests? 🧵👇