Sarah Gilbert
@sarahagilbert.bsky.social
Research Director, Citizens and Technology Lab, Cornell University, r/AskHistorians moderator

I study how to make the internet freer and safer, and how to help the helpers on the ground who make that possible. She/her

https://citizensandtech.org/
Reposted by Sarah Gilbert
Hi Bluesky 👋

Small life update: I’m at the centre of a multimillion dollar lawsuit in Alberta launched by a failed candidate for Jason Kenney’s UCP

It may be the longest and most expensive media trial in Canadian history

I’m ready to go public about it and I want to share more details with you 🧵
November 7, 2025 at 2:44 PM
Reposted by Sarah Gilbert
This is appalling. There has to be a way to hold massive companies accountable for *knowingly* facilitating & profiting from criminal behavior. And yes, I understand the value of Section 230, but here we have a trillion $ company that profits by letting its users get scammed. Has to be a better way.
November 6, 2025 at 3:56 PM
Reposted by Sarah Gilbert
A lot of people immediately dismiss this as Gen AI slop, but it's more than that. It's important to understand that where Grok failed and became MechaHitler because of alignment failures and reward hacking, Musk learned that he needed to align the data and reality itself rather than the model.
October 28, 2025 at 4:39 PM
Reposted by Sarah Gilbert
How can research on tech & society best support the public who democracy exists to serve?

I'm grateful to all for a conversation about what deep, long-term collaboration with the people of New York looks like, and to @sengonzalezny.bsky.social for challenging us to ground our work in community
October 24, 2025 at 3:26 PM
Reposted by Sarah Gilbert
defend science and the rule of law.
HRDAG remains a nonpartisan, nonpolitical organization. Our mission is focused on the Universal Declaration of Human Rights.

Today, we denounce violations of human rights occurring in the United States. hrdag.org/in-the-face-...
October 21, 2025 at 8:42 PM
Reposted by Sarah Gilbert
Sam Altman: "Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

I'd like to know what they've done to mitigate AI-induced psychosis, which I think is far more common than reported here.
Can AI chatbots trigger psychosis? What the science says
Chatbots can reinforce delusional beliefs, and, in rare cases, users have experienced psychotic episodes.
www.nature.com
October 15, 2025 at 12:46 AM
Reposted by Sarah Gilbert
My friend Rivi drew a remarkable graphic narrative for Inside Higher Ed, in which she uses her illustrations to let six Chinese graduate students in the U.S. tell their stories of visa chaos, political fears, and uncertain futures.

www.insidehighered.com/opinion/view...
Visa Chaos: A Graphic Narrative (opinion)
A graphic narrative written and drawn by Rivi Handler-Spitz.
www.insidehighered.com
September 30, 2025 at 4:01 PM
Reposted by Sarah Gilbert
1/
I’m thrilled (and a little proud 🤗) to share my first single-authored publication!
It’s now out open access in Information, Communication & Society:
🔗 doi.org/10.1080/1369...
Tackling (misleading) incivility online: a user-centric evaluation of different comment moderation strategies
Uncivil online comments, e.g., in the form of insults or misinformation, come with severe consequences for platforms, users, and democratic processes – making effective moderation essential. Howeve...
www.tandfonline.com
September 26, 2025 at 1:10 PM
Reposted by Sarah Gilbert
i love ta-nehisi coates so much
September 28, 2025 at 5:01 PM
Reposted by Sarah Gilbert
Not the point of this piece exactly but a great example of how chatbot validation could increase polarization

www.nytimes.com/2025/09/26/w...
September 27, 2025 at 12:23 PM
Reposted by Sarah Gilbert
I was introduced to Borsook's work in grad school and still cite it regularly as it remains relevant!
1/ A longtime Wired editor just wrote a mush-brained essay about how he totally missed the political rot of Silicon Valley (& still doesn't get it).

But in the late 1990s, a Wired journalist warned of a toxic ideology bubbling up from tech. Paulina Borsook has largely been erased. Let's change that
September 24, 2025 at 7:17 PM
Reposted by Sarah Gilbert
The story is not what Kimmel said but what others won’t say now for fear of state retribution. Authoritarian countries thrive on self-censorship.
September 18, 2025 at 8:12 AM
Reposted by Sarah Gilbert
I've been saying this for years: LLM-GENERATED SAMPLES ARE JUST FALSIFIED DATA. It's similar to just lying about what people said, or taking your own best guesses.

HEY FELLOW SOCIAL SCIENTISTS: STOP TRYING TO FIND WAYS TO AVOID ACTUALLY TALKING TO THE PEOPLE YOU'RE SUPPOSED TO BE STUDYING
Can large language models stand in for human participants?
Many social scientists seem to think so, and are already using "silicon samples" in research.

One problem: depending on the analytic decisions made, you can basically get these samples to show any effect you want.

THREAD 🧵
The threat of analytic flexibility in using large language models to simulate human data: A call to attention
Social scientists are now using large language models to create "silicon samples" - synthetic datasets intended to stand in for human respondents, aimed at revolutionising human subjects research. How...
arxiv.org
September 18, 2025 at 3:29 PM
Reposted by Sarah Gilbert
hi, cis folks!

the WSJ endangered trans lives with false reporting yesterday. it’s still accepted as truth by many, regardless of it being false. the article’s still up with a little note. it needs full retraction.

it's not even a phone call—it's email. you can do this.

contact info below.🧵⬇️
September 12, 2025 at 9:04 PM
Reposted by Sarah Gilbert
Wikipedia editors trying to fend off the onslaught of AI crap have crowdsourced some telltale signs of LLM-generated writing; it might be handy for editors and proofreaders generally. Thanks to @ellenrykers.com for pointing me to it. en.wikipedia.org/wiki/Wikiped...
Wikipedia:Signs of AI writing - Wikipedia
en.wikipedia.org
August 31, 2025 at 11:58 PM
If you're following this story in the NYT by @kashhill.bsky.social, I recommend @schancellor.bsky.social's post that breaks down why AI is bad at therapy. Pretty much everything she warns about shows up in this story, which is heartbreaking

notatechdemo.substack.com/p/why-ai-can...
August 27, 2025 at 1:22 PM
Reposted by Sarah Gilbert
What is it going to take to get companies to care about this and do something? It's been a tough day reading this great report by @kashhill.bsky.social on the risks of chatbots.

www.nytimes.com/2025/08/26/t...
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
www.nytimes.com
August 27, 2025 at 2:28 AM
Reposted by Sarah Gilbert
The full-day, hybrid workshop is "Context Matters: Ethical Challenges in Research with Online Communities" led by Matthew Zent plus @schancellor.bsky.social, @cfiesler.bsky.social, @sarahagilbert.bsky.social, @estellesmithphd.bsky.social and more.
sites.google.com/umn.edu/cont...
Context Matters - CSCW'25
Context Matters: Ethical Challenges in Research with Online Communities
sites.google.com
August 4, 2025 at 8:18 PM
Reposted by Sarah Gilbert
✨ MAKE YOUR RESEARCH FINDINGS ACCESSIBLE ✨

I promise you don't have to make TikToks. Maybe a Bluesky thread or a LinkedIn post?

Or my favorite option... blog your papers!

cfiesler.medium.com/why-and-how-...

Lay audiences unable to access scientific literature want to learn from you. 🥰
Why (and how) academics should blog their papers
Encouragement and tips for writing accessible, public-facing versions of your research papers
cfiesler.medium.com
July 30, 2025 at 3:52 PM
Reposted by Sarah Gilbert
My intro to digital history class at GMU this Fall is themed around the Salem Witchcraft Trials. Ideally, it was not going to be an immersive learning experience
Faculty who wrote to defend their president and object to a DOJ investigation of their university...are now being investigated by the DOJ.
The most banal defense of free speech and academic freedom will trigger the full wrath of the US government now.
www.nytimes.com/2025/07/28/u...
Faculty Support of George Mason’s President Draws Federal Investigation
www.nytimes.com
July 29, 2025 at 3:57 PM