Gillian Hadfield
@ghadfield.bsky.social
Economist and legal scholar turned AI researcher focused on AI alignment and governance. Prof of government and policy and computer science at Johns Hopkins where I run the Normativity Lab. Recruiting CS postdocs and PhD students. gillianhadfield.org
Pinned
A great wide-ranging conversation on AI ethics and safety with me, @mdredze.bsky.social @ruchowdh.bsky.social and of course @karaswisher.bsky.social. I don’t think putting ethics and safety together is a contradiction in terms! podcasts.apple.com/us/podcast/o...
AI Ethics and Safety — A Contradiction in Terms?
Podcast Episode · On with Kara Swisher · 01/02/2025 · 53m
podcasts.apple.com
I finally got a chance to meet @thomaslfriedman.bsky.social, whose book The World Is Flat inspired my own Rules for a Flat World. I had a great conversation with him and Andrew Freedman about the challenge we find the world facing: how do we build rules for AI that work in a complex world?
October 31, 2025 at 10:39 PM
Human cooperation evolved through complex norms and institutions. Now we're introducing powerful new AI actors into our economic systems. At a recent workshop hosted at ASU we explored what evolution teaches us about getting the rules right.
October 27, 2025 at 10:06 PM
Grateful to keynote at #COLM2025. Here's what we're missing about AI alignment: humans don't cooperate just by aggregating preferences; we build social processes and institutions to generate norms that make it safe to trade with strangers. AI needs to play by these same systems, not replace them.
October 15, 2025 at 11:00 PM
Using debate among AI agents has been proposed as a promising strategy for improving AI reasoning capabilities. Our new research shows that this strategy can often have the opposite effect, and the implications for AI deployment are significant. (1/10) arxiv.org/abs/2509.05396
Talk Isn't Always Cheap: Understanding Failure Modes in Multi-Agent Debate
While multi-agent debate has been proposed as a promising strategy for improving AI reasoning ability, we find that debate can sometimes be harmful rather than helpful. The prior work has exclusively…
arxiv.org
September 23, 2025 at 5:06 PM
My lab @johnshopkins is recruiting research and communications professionals, and AI postdocs to advance our work ensuring that AI is safe and aligned to human well-being worldwide. 1/5
Jobs
I have postdoc and staff openings for our lab at the Johns Hopkins University in either Baltimore, MD or Washington, DC. Postdoctoral Fellow: We are hiring an interdisciplinary scholar with a track re…
gillianhadfield.org
June 16, 2025 at 6:15 PM
Six years ago @jackclarksf.bsky.social and I proposed regulatory markets as a new model for AI governance that would attract more investment (money and brains) in a democratically legitimate way, fostering AI innovation while ensuring these powerful technologies don't 1/2
June 12, 2025 at 12:32 AM
Reposted by Gillian Hadfield
In this insightful interview, AIhub ambassador Kumar Kshitij Patel met @ghadfield.bsky.social, keynote speaker at @ijcai.org, to find out more about her interdisciplinary research, career trajectory, AI alignment, and her thoughts on AI systems in general.

aihub.org/2025/05/22/i...
Interview with Gillian Hadfield: Normative infrastructure for AI alignment - ΑΙhub
aihub.org
May 23, 2025 at 2:47 PM
Reposted by Gillian Hadfield
Our latest monthly digest features:
-Ananya Joshi on healthcare data monitoring
-AI alignment with @ghadfield.bsky.social
-Onur Boyar on drug and material design
-Object state classification with Filippos Gouidis
aihub.org/2025/05/30/a...
AIhub monthly digest: May 2025 – materials design, object state classification, and real-time monitoring for healthcare data - ΑΙhub
aihub.org
June 4, 2025 at 3:13 PM
Everyone, including those who think we're building powerful AI to improve lives for everyone, should take seriously how poorly our current economic indicators (unemployment, earnings, inflation) capture the well-being of low- and moderate-income folks. www.politico.com/news/magazin...
Voters Were Right About the Economy. The Data Was Wrong.
Here’s why unemployment is higher, wages are lower and growth less robust than government statistics suggest.
www.politico.com
February 15, 2025 at 3:58 PM
Reposted by Gillian Hadfield
I was at this meeting Mon, and the quality & seriousness of discussion made it a highlight. But Fu Ying is right that forging the cooperation needed, even limited to the extreme risks that threaten everyone, is becoming ever harder. We must keep trying.
www.scmp.com/news/china/d...
China, US should fight rogue AI risks together, despite tensions: ex-diplomat
Open-source AI models like DeepSeek allow collaborators to find security vulnerabilities more easily, Fu Ying tells Paris’ AI Action Summit.
www.scmp.com
February 14, 2025 at 12:15 PM
Do we think Musk is using treasury payments data to train, fine-tune, or do inference on AI models? @caseynewton.bsky.social
February 4, 2025 at 9:20 PM
Video from our tutorial @NeurIPSConf 2024 is up! @dhadfieldmenell @jzl86 @rstriv and I explore how frameworks from economics, institutional and political theory, and biological and cultural evolution can advance approaches to AI alignment neurips.cc/virtual/2024...
NeurIPS Tutorial: Cross-disciplinary insights into alignment in humans and machines · NeurIPS 2024
neurips.cc
January 26, 2025 at 7:34 PM
Reposted by Gillian Hadfield
“We haven’t created the infrastructure to integrate [agents] into all the rules and structures we have to make sure our markets behave well,” says CS faculty @ghadfield.bsky.social, an expert in #AI governance.
AI Agents with More Autonomy Than Chatbots Are Coming. Some Safety Experts Are Worried
Systems that operate on behalf of people or corporations are the latest product from the AI boom, but these “agents” may present new and unpredictable risks
www.scientificamerican.com
January 15, 2025 at 3:59 PM
The most immediate need for #AISafety is more visibility by government into frontier models; we can't determine if or how to regulate without it. The registration requirement Tino Cuéllar, Tim O'Reilly, and I proposed last year is aimed at this goal. This white paper develops the proposal in depth.
AI Model Registries advocates for the establishment of AI registries and demonstrates how registries from other industries provide a path for #AI models (FDA, REACH, etc). #AISafety

Elliot McKernon
Gwyn Glasser
Deric Cheng
@ghadfield.bsky.social

DL for all the details or enjoy a summary 🧵
January 14, 2025 at 9:34 PM
As someone who lost her house in the 1991 Oakland Hills fire: if you want to help friends and neighbors, focus on logistical help, figuring out housing and child care, insurance claims, rebuilding processes (eventually), job claims, forms, forms, forms, and process. And child care while they do the above!
January 13, 2025 at 3:42 AM
Grad students, postdocs, early career researchers in CS, economics and more—consider applying to this fantastic summer school to connect with Cooperative AI www.cooperativeai.com/summer-schoo...
Cooperative AI
www.cooperativeai.com
January 11, 2025 at 3:42 AM
Reposted by Gillian Hadfield
Despite media representations to the contrary, "women remain underrepresented among faculty in nearly all academic fields...A large-scale survey of the same faculty indicates that the reasons faculty leave are gendered, even for institutions, fields, and career ages in which retention rates are not"
Women leave academia at higher rates than men at every career stage, and attrition is especially high among three groups: tenured faculty, women in non-STEM fields, and women employed at less prestigious institutions, a #ScienceAdvances analysis finds.
Gender and retention patterns among U.S. faculty
Women faculty are more likely to leave their jobs than men, most often due to workplace climate, rather than work-life balance.
scim.ag
December 23, 2024 at 4:36 PM
I’m hiring! Postdocs to work on alignment, normative reasoning and infrastructure; strong admin manager to strategize for and coordinate research and policy work. Strategic comms position closed but could reopen. Policy researcher opening soon. See gillianhadfield.org
December 27, 2024 at 7:26 PM
Reposted by Gillian Hadfield
Podcast in the wild. Gillian Hadfield, Mark Dredze, Rumman Chowdhury talking #AIsafety with Kara Swisher at Hopkins Bloomberg Center in DC. #jhu. Good conversation that spanned a variety of issues and perspectives. Well done.
December 18, 2024 at 2:37 PM
General audience perspective on where agentic AI is heading with input from me, Bengio, Russell, Gabriel and Savarese www.scientificamerican.com/article/what...
AI Agents with More Autonomy Than Chatbots Are Coming. Some Safety Experts Are Worried
Systems that operate on behalf of people or corporations are the latest product from the AI boom, but these “agents” may present new and unpredictable risks
www.scientificamerican.com
December 12, 2024 at 7:16 PM
Didn’t know I had a life goal of being on a list with Ayesha Curry and Joni Mitchell but guess I did! streetsoftoronto.com/these-are-to...
These are Toronto's most inspiring women of 2024
streetsoftoronto.com
December 11, 2024 at 11:50 PM
Today!
My tutorial with @dhadfieldmenell.bsky.social @jzleibo.bsky.social and Rakshit Trivedi on "Cross-disciplinary insights for alignment in humans and machines" is Tuesday at 1:30 Pacific; scroll down to bottom of this long list of other JHU papers and workshops!
Congratulations to all @johnshopkins.bsky.social researchers participating in #NeurIPS2024! Check out all @johnshopkins.bsky.social accepted papers, tutorials, and workshops at ai.jhu.edu/news/johns-h....
December 10, 2024 at 3:55 PM
Reposted by Gillian Hadfield
My tutorial with @dhadfieldmenell.bsky.social @jzleibo.bsky.social and Rakshit Trivedi on "Cross-disciplinary insights for alignment in humans and machines" is Tuesday at 1:30 Pacific; scroll down to bottom of this long list of other JHU papers and workshops!
December 5, 2024 at 3:31 PM