Data & Society
@datasociety.bsky.social
Data & Society is a nonprofit research institute that studies the social implications of data-centric technologies, automation, and AI.
Pinned
We’re thrilled to be hosting a 4-part fall event series in collaboration with the New York Public Library, exploring the social implications of AI & its impacts on democracy, the environment, & human labor. Learn more and RSVP to attend live in NYC or via livestream! datasociety.net/events/under...
Microsoft and Google say data centers will create thousands of jobs in Chile, but an analysis of permit filings by @restofworld.org shows only a small number of potential positions, most of them not skilled IT jobs but roles in security and cleaning. restofworld.org/2025/data-ce...
Microsoft, Google say their data centers create thousands of jobs. Their permit filings say otherwise
Chile and tech giants promise economy-wide impact but permits show fewer onsite jobs after construction.
restofworld.org
November 10, 2025 at 6:09 PM
This Thursday, 11/13! D&S Director of Research @alicetiara.bsky.social will take part in a panel discussion about the opportunities and challenges of using research to support tech policy and advocacy, and share strategies for ethical, effective research that centers marginalized communities.
Advocating with Evidence: Lessons for Tech Researchers in Civil Society.
Join us on November 13 for an online discussion featuring @alicetiara.bsky.social @mkgerchick.bsky.social @jordanaut.bsky.social and me.

Organized by @cdt.org @datasociety.bsky.social @aclu.org

cdt.org/event/advoca...
November 10, 2025 at 3:42 PM
AI chatbots used to be pitched as impartial tools for answering questions, but bespoke chatbots like Arya (an “unapologetic right-wing nationalist Christian AI model”) are now being programmed explicitly to reflect the biases of their creators. www.nytimes.com/2025/11/04/b...
Right-Wing Chatbots Turbocharge America’s Political and Cultural Wars
www.nytimes.com
November 7, 2025 at 7:42 PM
To understand the complex impacts of data centers, we need to look at the relationship between social and algorithmic harms "and other forms of embodied harms that communities are dealing with," D&S’s @tamigraph.bsky.social tells @yamachiodi.bsky.social. www.blogs.unicamp.br/geict/2025/1...
Down the Rabbit Hole of AI: Data Centers and the Material Impacts of the ‘Cloud’ – An interview with Tamara Kneese
www.blogs.unicamp.br
November 7, 2025 at 2:07 PM
Support Data & Society. Together, we can build a future where AI is shaped by the public, not just the powerful. datasociety.net/donate/
November 6, 2025 at 7:43 PM
Chatbot-enabled letters to the editor are flooding the world’s scientific journals, “putting at risk a part of scientific publishing that editors say is needed to sharpen research findings and create new directions for inquiry.” www.nytimes.com/2025/11/04/s...
The Editor Got a Letter From ‘Dr. B.S.’ So Did a Lot of Other Editors.
www.nytimes.com
November 6, 2025 at 6:07 PM
In contrast with Wikipedia’s mission to “set knowledge free,” Elon Musk's "competitor" Grokipedia represents “the use of technological power to re-exert top-down authority over information and knowledge," @antisomniac.bsky.social writes. www.techpolicy.press/with-grokipe...
With Grokipedia, Top-Down Control of Knowledge Is New Again | TechPolicy.Press
Ryan McGrady asks, who is Grokipedia for, other than its owner?
www.techpolicy.press
November 6, 2025 at 2:47 PM
Researching the use of chatbots for mental health support, D&S's @briana-v.bsky.social heard from many people who expected less stigma from chatbots than from therapists, “which is really interesting, given what we know about machine-learning bias,” she tells @undark.org. undark.org/2025/11/04/c...
Researchers Weigh the Use of AI for Mental Health
Chatbots weren't designed for mental health, but they are increasingly used for therapy. What are the risks and benefits?
undark.org
November 5, 2025 at 8:36 PM
New workshop! We invite researchers and practitioners to join us in examining how scientific reasoning and imagination are being reconfigured as AI systems become a part of the everyday practice of science. Learn more and apply by December 8. datasociety.net/announcement...
November 5, 2025 at 3:49 PM
AI data centers are straining already fragile power and water infrastructures in communities around the world, leading to blackouts and water shortages. “Data centers are where environmental and social issues meet,” says Rosi Leonard, an environmentalist with @foeireland.bsky.social.
From Mexico to Ireland, Fury Mounts Over a Global A.I. Frenzy
www.nytimes.com
November 4, 2025 at 6:40 PM
As research shows how AI contributes to de-skilling, stewardship of the technology must ensure "that the capacities in which our humanity resides — judgment, imagination, understanding — stay alive in us," and that we don't lose sight of which capacities matter. www.theatlantic.com/ideas/archiv...
The Age of De-Skilling
Will AI stretch our minds—or stunt them?
www.theatlantic.com
November 4, 2025 at 3:07 PM
At @alltechishuman.bsky.social’s Responsible Tech Summit, D&S exec director Janet Haven talked about the critical role that grounded, empirical research on AI plays in countering industry hype and creating policy that reflects the technology’s real impacts on people. www.youtube.com/watch?v=tnex...
AI Governance Under Pressure: Regulation, Risk, and the Race to Deploy | Responsible Tech Summit
YouTube video by All Tech Is Human
www.youtube.com
November 3, 2025 at 8:19 PM
Companies haven't done nearly enough to prevent their tech from being used to make violent threats, D&S’s @alicetiara.bsky.social says; most guardrails are “more like a lazy traffic cop than a firm barrier — you can get a model to ignore them and work around them.” www.nytimes.com/2025/10/31/b...
A.I. Is Making Death Threats Way More Realistic
www.nytimes.com
November 3, 2025 at 4:06 PM
“To leave our students to their own devices — which is to say, to the devices of AI companies — is to deprive them of...the means to understand the world they live in or navigate it effectively,” Anastasia Berg writes. www.nytimes.com/2025/10/29/o...
Opinion | Why Even Basic A.I. Use Is So Bad for Students
www.nytimes.com
October 31, 2025 at 6:15 PM
In the latest in a series of reflections, @megyoung0.bsky.social & @tamigraph.bsky.social write about AIMLab's community-based algorithmic impact assessment of San José’s computer vision pilot program — and what happened when news broke about it while it was underway. datasociety.net/points/the-u...
October 31, 2025 at 3:04 PM
Reposted by Data & Society
ICE treating facial recognition matches as definitive IDs to detain individuals is shockingly irresponsible and dangerous. Facial recognition is a highly temperamental technology, with accuracy varying significantly based on an array of factors. www.404media.co/ice-and-cbp-...
October 29, 2025 at 9:04 PM
AI use is widening a gap across party lines, with one side poised to exploit the technology in ways that could create a systemic advantage in next year’s midterm elections and beyond. Bruce Schneier and Nathan E. Sanders argue that it doesn’t have to be this way. time.com/7321098/ai-2...
time.com
October 30, 2025 at 5:59 PM
Reposted by Data & Society
🚨NEW PAPER ALERT 🚨 @tamigraph.bsky.social @briana-v.bsky.social and I discuss wellness chatbots, their labor implications, & "private vibes" vs "privacy". We call for critical human-AI communication scholarship that takes into account context & political economy: link.springer.com/article/10.1...
A chatbot for the soul: mental health care, privacy, and intimacy in AI-based conversational agents - Communication and Change
Artificial intelligence-based conversational agents—chatbots—are increasingly integrated into telehealth platforms, employee wellness programs, and mobile applications to address structural gaps in me...
link.springer.com
October 29, 2025 at 1:34 PM
As LLMs manufacture a form of scholarship, presentation doesn’t mean what it used to. D&S's Ranjit Singh looks at how this threatens the open-access research repository arXiv, how its founder is fighting to sustain its credibility, & what researchers can do to help. datasociety.net/points/on-ar...
October 30, 2025 at 2:32 PM
📣 Our method for conducting community-based algorithmic impact assessments is now available! We’ve just launched a new section on our website where you can find an extensive toolkit, documentation of our pilots, and a series of reflections on lessons learned. datasociety.net/research/alg...
October 29, 2025 at 7:10 PM
TONIGHT at 6:30 pm ET! In-person tickets are sold out, but you can still join us online!
Tomorrow! Next in our series w NYPL, D&S ED Janet Haven will talk to @cmcilwain.bsky.social, @juliaangwin.com & @catherinebracy.com about the power and potential of AI in the *public* interest. In-person spots are going fast; reserve yours or join the livestream! www.showclix.com/event/unders...
October 29, 2025 at 3:26 PM
Scientific research and data analysis are essential tools for holding those in power accountable. We stand with @hrdag.org in denouncing the growing attacks on science and human rights in the US.
The time for silence is over.

US attacks on science and human rights include:

☑️ Attacks on academic institutions
☑️ Targeting of philanthropic human rights organizations
☑️ Sanctions against ICC judges and prosecutors

hrdag.org/2025/10/21/h...
HRDAG Takes a Stand Against Tyranny in the United States
by Patrick Ball and Megan Price Today the Human Rights Data Analysis Group (HRDAG) publicly denounces the growing attacks on science and human rights in the United States. We reaffirm our commitment…
hrdag.org
October 29, 2025 at 2:01 PM
Reposted by Data & Society
Since moving to civil society, I've noticed a huge dearth of support for researchers working in the space - although there are tons of us! Come join this D&SxACLUxCDT event where we talk about the opportunities and challenges of research to support tech policy & advocacy!
Advocating with Evidence: Lessons for Tech Researchers in Civil Society.
Join us on November 13 for an online discussion featuring @alicetiara.bsky.social @mkgerchick.bsky.social @jordanaut.bsky.social and me.

Organized by @cdt.org @datasociety.bsky.social @aclu.org

cdt.org/event/advoca...
October 28, 2025 at 2:34 PM
"I know fictional characters when I see them. ChatGPT is one. The problem is that it has no author,” writes novelist @vauhinivara.bsky.social. If OpenAI has only tenuous control over the narrator it has unleashed, she asks, who is responsible for its output? www.theatlantic.com/books/2025/1...
Why So Many People Are Seduced by ChatGPT
What makes OpenAI’s chatbot so dangerous? It’s a fictional character without an author.
www.theatlantic.com
October 28, 2025 at 5:28 PM
Tomorrow! Next in our series w NYPL, D&S ED Janet Haven will talk to @cmcilwain.bsky.social, @juliaangwin.com & @catherinebracy.com about the power and potential of AI in the *public* interest. In-person spots are going fast; reserve yours or join the livestream! www.showclix.com/event/unders...
October 28, 2025 at 3:13 PM