Abigail Jacobs
azjacobs.bsky.social
Asst Prof of Information @ UMich thinking about assumptions built into AI
Reposted by Abigail Jacobs
If you have any interest in the future of AI, please join us for another really insightful conversation with @alondra.bsky.social. She's brilliant but better yet, she's right!
The Trump administration’s laissez-faire approach to big tech might be a mirage. @alondra.bsky.social joins @djrothkopf.bsky.social to explore the administration’s relationship with big tech and the profound effects that are already underway. podcasts.apple.com/us/podcast/d...

youtu.be/IngPqaIQlWA
October 28, 2025 at 7:18 PM
AI as governance -- @himself.bsky.social on how AI reshapes markets, bureaucracy, democracy...and culture. Very happy to see this getting the mainstream social science treatment.
www.annualreviews.org/content/jour... I can't believe I missed this paper coming out!
AI as Governance
Political scientists have had remarkably little to say about artificial intelligence (AI), perhaps because they are dissuaded by its technical complexity and by current debates about whether AI might ...
www.annualreviews.org
October 28, 2025 at 11:48 PM
Reposted by Abigail Jacobs
Feeling so excited + grateful to be representing this paper at #ICML! Please stop by to talk about how to do more valid measurement for evaling gen AI systems!

Work led by the incomparable @hannawallach.bsky.social and @azjacobs.bsky.social as a part of Microsoft’s AI and Society initiative!!
If you're at @icmlconf.bsky.social this week, come check out our poster on "Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge" presented by the amazing @afedercooper.bsky.social from 11:30am--1:30pm PDT on Weds!!! icml.cc/virtual/2025...
ICML Poster: Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge (ICML 2025)
icml.cc
July 15, 2025 at 8:15 PM
“If ___ ran a mini nuclear power plant” seems like a strong vibe for the day
February 26, 2025 at 4:28 PM
Reposted by Abigail Jacobs
As ever, Tressie McMillan Cottom has the most astute analysis of how to read Musk's behavior. www.nytimes.com/2025/02/12/o...
Opinion | Look Past Elon Musk’s Chaos. There’s Something More Sinister at Work.
Everything is content.
www.nytimes.com
February 12, 2025 at 12:56 PM
big day to submit an article on how "efficiency" is used to undermine legitimacy of the administrative state

www.nytimes.com/2025/02/11/u... (gift link)
At Oval Office, Musk Makes Broad Claims of Federal Fraud Without Proof (Gift Article)
The billionaire, whose federal cost-cutting team has been operating in secrecy, asserted that he had uncovered waste and fraud across the bureaucracy, without providing evidence.
www.nytimes.com
February 12, 2025 at 9:18 PM
Reposted by Abigail Jacobs
"there's a lot of qualitative work that goes into designing quantitative metrics" -- @azjacobs.bsky.social

"how do we translate between benchmark performance and what it will really be like to use a model" -- Su Lin Blodgett
Super interesting panel discussion taking place right now at the Evaluating Evaluations workshop at @neuripsconf.bsky.social with amazing panelists @abeba.bsky.social, @azjacobs.bsky.social, Su Lin Blodgett, and Lee Wan Sie!!! #NeurIPS2024
December 15, 2024 at 6:16 PM
Reposted by Abigail Jacobs
"Overall, the starting list constitutes at best a narrow coverage of the risks the technology is likely to pose, & at worst a (partial) red herring poised to direct significant risk mitigation efforts to building on inappropriate foundations." @yjernite.bsky.social et al on the AI Act Systemic Risks
🇪🇺: Lots to like in the first draft of the EU GPAI Code of Practice, especially re transparency - the Systemic Risks part OTOH is concerning for both smaller developers and external stakeholders.

I wrote more on this topic ahead of the next draft. TLDR: more attention...
1/2👇
December 12, 2024 at 7:32 PM
Reposted by Abigail Jacobs
Evaluating Generative AI Systems is a Social Science Measurement Challenge: arxiv.org/abs/2411.10939

TL;DR: The ML community would benefit from learning from and drawing on the social sciences when evaluating GenAI systems.
Evaluating Generative AI Systems is a Social Science Measurement Challenge
Across academia, industry, and government, there is an increasing awareness that the measurement tasks involved in evaluating generative AI (GenAI) systems are especially difficult. We argue that thes...
arxiv.org
December 14, 2024 at 8:15 PM
Reposted by Abigail Jacobs
New paper on why machine "unlearning" is much harder than it seems is now up on arXiv: arxiv.org/abs/2412.06966 This was a huuuuuge cross-disciplinary effort led by @msftresearch.bsky.social FATE postdoc @grumpy-frog.bsky.social!!!
Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice
We articulate fundamental mismatches between technical methods for machine unlearning in Generative AI, and documented aspirations for broader impact that these methods could have for law and policy. ...
arxiv.org
December 14, 2024 at 12:55 AM
Reposted by Abigail Jacobs
The AI Interdisciplinary Institute at the University of Maryland (AIM) is hiring

40 new faculty members

in all areas of AI, particularly:
- accessibility,
- sustainability,
- social justice, and
- learning;

building on computational, humanistic, or social scientific approaches to AI.

>
November 13, 2024 at 12:38 PM
Reposted by Abigail Jacobs
Working on #bias & #discrimination in #NLP? Passionate about integrating insights from different disciplines? And do you want to discuss current limitations of #LLM bias mitigation work? 🤖
👋Join the workshop New Perspectives on Bias and Discrimination in Language Technology 4&5 Nov in #Amsterdam!
Workshop: New Perspectives on Bias and Discrimination in Language Technology.
wai-amsterdam.github.io
August 7, 2024 at 2:21 PM
Not enough people are worried about this
May 8, 2024 at 3:44 AM
Reposted by Abigail Jacobs
I'm excited to share that the journal version of our paper, "An archival perspective on pretraining data", is now available (open access) from Patterns!

This project was led by @madesai.bsky.social, along with Irene Pasquetto, @azjacobs.bsky.social, and myself

www.cell.com/patterns/ful...

1/n
An archival perspective on pretraining data
Large language models depend crucially on the data they are trained on. The authors consider how these pretraining datasets, like archives, are diverse, sociocultural collections that mediate knowledg...
www.cell.com
April 1, 2024 at 4:47 PM
New executive order just dropped, with lots of roles for sociotechnical work around AI with relatively fast turnaround… but also a bat flying around the White House logo because the executive order is also spooky 🦇🦇
www.whitehouse.gov/briefing-roo...
Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence ...
By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered as follows:      Section 1.  Purpose.  Artificial intelligence (A...
www.whitehouse.gov
October 31, 2023 at 2:56 AM
Reposted by Abigail Jacobs
One of the many, many concepts we discuss in our upcoming report is Measurement Modeling as it relates to AI governance. Discussed this with @azjacobs.bsky.social earlier this year.
October 15, 2023 at 5:43 PM
Spent the last week at #FAccT2023 - filled with some exciting work and exciting scholars on measurement, policy, social impacts emerging from ML. Happy to see this letter come out of it:
https://facctconference.org/2023/harm-policy.html
ACM FAccT - Statement on AI Harms and Policy
facctconference.org
June 17, 2023 at 9:53 PM
Felt cute, might delete later
June 17, 2023 at 9:50 PM