Mia Hoffmann
@miahoffmann.bsky.social
AI governance, harms and assessment | Research fellow @csetgeorgetown.bsky.social
🤖✨ New report with @partnershipai.bsky.social!
AI agents pose new risks. Monitoring is essential to ensure effective oversight and intervention when needed. Our paper presents a framework for real-time failure detection that takes into account stakes, reversibility and affordances of agent actions.
September 11, 2025 at 4:35 PM
Reposted by Mia Hoffmann
✨New Analysis✨
Can the new EU AI Code of Practice change the global AI safety landscape?
As companies like Anthropic, OpenAI, and Google sign on, CSET’s @miahoffmann.bsky.social explores the code’s Safety and Security chapter. cset.georgetown.edu/article/eu-a...
AI Safety under the EU AI Code of Practice — A New Global Standard? | Center for Security and Emerging Technology
To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general purpose AI comply ...
cset.georgetown.edu
July 30, 2025 at 2:00 PM
Reposted by Mia Hoffmann
Yesterday's new AI Action Plan has a lot worth discussing!
One interesting aspect is its statement that the federal government should withhold AI-related funding from states with "burdensome AI regulations."
This could be cause for concern.
July 24, 2025 at 6:55 PM
Reposted by Mia Hoffmann
⚖️ New Explainer!
Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work?
In their new explainer, @jessicaji.bsky.social, @vikramvenkatram.bsky.social & @stephbatalis.bsky.social break down the different fundamental types of AI safety evaluations.
May 28, 2025 at 2:02 PM
Reposted by Mia Hoffmann
💡Funding opportunity—share with your AI research networks💡
Internal deployments of frontier AI models are an underexplored source of risk. My program at @csetgeorgetown.bsky.social just opened a call for research ideas—EOIs due Jun 30.
Full details ➡️ cset.georgetown.edu/wp-content/u...
Summary ⬇️
May 19, 2025 at 4:59 PM
Today, @csetgeorgetown.bsky.social published our recommendations for the U.S. AI Action Plan. One of them is a CSET evergreen: implement an AI incident reporting regime for AI used by the federal government. Why? Short answer: because we can learn a ton from incidents! Long answer: 👇
March 17, 2025 at 2:30 PM
Reposted by Mia Hoffmann
🚨We're hiring — only a few days left to apply!🚨
CSET is looking for a Media Engagement Specialist to amplify our research. If you're a strategic communicator who can craft press releases, media pitches, & social content, apply by March 17, 2025! cset.georgetown.edu/job/media-en...
Media Engagement Specialist | Center for Security and Emerging Technology
The Center for Security and Emerging Technology, under the School of Foreign Service, is a research organization focused on studying the security impacts of emerging technologies, supporting academic ...
cset.georgetown.edu
March 14, 2025 at 2:27 PM
Reposted by Mia Hoffmann
What: CSET Webinar 📺
When: Tuesday, 3/25 at 12PM ET 📅
What’s next for AI red-teaming? And how do we make it more useful?
Join Tori Westerhoff, Christina Liaghati, Marius Hobbhahn, and CSET's @dr-bly.bsky.social & @jessicaji.bsky.social for a great discussion: cset.georgetown.edu/event/whats-...
What’s Next for AI Red-Teaming? | Center for Security and Emerging Technology
On March 25, CSET will host an in-depth discussion about AI red-teaming — what it is, how it works in practice, and how to make it more useful in the future.
cset.georgetown.edu
March 12, 2025 at 3:11 PM
Reposted by Mia Hoffmann
What does the EU's shifting strategy mean for AI?
CSET's @miahoffmann.bsky.social & @ojdaniels.bsky.social have a new piece out for @techpolicypress.bsky.social.
Read it now 👇
If you’ve ever wondered what the EU and elephants have in common - or are wondering now - read my latest piece with @ojdaniels.bsky.social! We take a look at what the EU’s new innovation-friendly regulatory approach might mean for the global AI policy ecosystem. www.techpolicy.press/out-of-balan...
Out of Balance: What the EU's Strategy Shift Means for the AI Ecosystem | TechPolicy.Press
Mia Hoffmann and Owen J. Daniels from Georgetown’s Center for Security and Emerging Technology say Europe's movements could change the global landscape.
www.techpolicy.press
March 10, 2025 at 2:17 PM
Reposted by Mia Hoffmann
Mia Hoffmann and Owen J. Daniels from Georgetown’s Center for Security and Emerging Technology say Europe's apparent shift on AI policy could change the global landscape for AI governance.
Out of Balance: What the EU's Strategy Shift Means for the AI Ecosystem | TechPolicy.Press
Mia Hoffmann and Owen J. Daniels from Georgetown’s Center for Security and Emerging Technology say Europe's movements could change the global landscape.
buff.ly
March 10, 2025 at 1:28 PM
If you’ve ever wondered what the EU and elephants have in common - or are wondering now - read my latest piece with @ojdaniels.bsky.social! We take a look at what the EU’s new innovation-friendly regulatory approach might mean for the global AI policy ecosystem. www.techpolicy.press/out-of-balan...
Out of Balance: What the EU's Strategy Shift Means for the AI Ecosystem | TechPolicy.Press
Mia Hoffmann and Owen J. Daniels from Georgetown’s Center for Security and Emerging Technology say Europe's movements could change the global landscape.
www.techpolicy.press
March 10, 2025 at 1:42 PM
Reposted by Mia Hoffmann
CSET is hiring 📢
We’re hiring a software engineer to support @emergingtechobs.bsky.social. Help build high-quality public tools and datasets to inform critical decisions on emerging tech issues.
Interested or know someone who would be? Learn more and apply 👇 cset.georgetown.edu/job/software...
Software Engineer | Center for Security and Emerging Technology
The Center for Security and Emerging Technology (CSET), under the School of Foreign Service, is hiring a Software Engineer. The Software Engineer will be a generalist who can flex between full-stack w...
cset.georgetown.edu
March 3, 2025 at 8:09 PM
There have been a ton of AI policy developments coming out of the EU these past weeks, but a deeply concerning one is the withdrawal of the AI Liability Directive (AILD) by the European Commission. Here’s why:
February 13, 2025 at 3:35 PM
Reposted by Mia Hoffmann
@miahoffmann.bsky.social, @ojdaniels.bsky.social, and I wrote a piece on key AI governance areas to watch in 2025 with the upcoming AI Action Summit in mind. Check it out here! thebulletin.org/2025/02/will...
Will the Paris artificial intelligence summit set a unified approach to AI governance—or just be another conference?
AI innovations and governments’ preferences can make international consensus on governance at the Paris Summit challenging.
thebulletin.org
February 7, 2025 at 3:00 AM
Reposted by Mia Hoffmann
Will the Paris #AIActionSummit set a unified approach to AI governance—or just be another conference?
A new article from @miahoffmann.bsky.social, @minanrn.bsky.social, and @ojdaniels.bsky.social.
Will the Paris artificial intelligence summit set a unified approach to AI governance—or just be another conference?
AI innovations and governments’ preferences can make international consensus on governance at the Paris Summit challenging.
thebulletin.org
February 6, 2025 at 3:47 PM
Reposted by Mia Hoffmann
With the government portion of the AI Action Summit next week, @minanrn.bsky.social, @miahoffmann.bsky.social and I wrote for @thebulletin.org about some key AI governance questions for the year ahead thebulletin.org/2025/02/will...
Will the Paris artificial intelligence summit set a unified approach to AI governance—or just be another conference?
AI innovations and governments’ preferences can make international consensus on governance at the Paris Summit challenging.
thebulletin.org
February 6, 2025 at 11:56 AM
Yesterday, the EU AI Act’s first few provisions came into effect. The General Provisions and the prohibitions of unacceptable risk AI systems are applicable from now on. Here’s what that means:
February 3, 2025 at 3:51 PM
US leadership in AI has been a goal of the past Trump & Biden administrations. But that concept of leadership focused too much on “AGI” and too little on AI diffusion. The DeepSeek release - a model that was immediately widely adopted - is a reminder to adjust these priorities. Here’s why:
January 29, 2025 at 7:51 PM
Reposted by Mia Hoffmann
As someone who has reported on AI for 7 years and covered China tech as well, I think the biggest lesson to be drawn from DeepSeek is the huge cracks it illustrates with the current dominant paradigm of AI development. A long thread. 1/
January 27, 2025 at 2:12 PM
Do you care about AI? Wonder what it means for the workforce? Worried about biorisk or tech competition with China? Curious about AI governance?
If you answered Yes to any of these, check out our Starter Pack and follow my brilliant colleagues working on these topics! bsky.app/starter-pack...
January 23, 2025 at 9:04 PM
Reposted by Mia Hoffmann
"Internal company documents... show that Amazon health and safety personnel recommended relaxing enforcement of the production quotas to lower injury rates, but that senior executives rejected the recommendations apparently because they worried about the effect on the company’s performance."
Amazon Disregarded Internal Warnings on Injuries, Senate Investigation Claims (Gift Article)
A staff report by the Senate labor committee, led by Bernie Sanders, uncovered evidence of internal concern about high injury rates at the e-commerce giant.
www.nytimes.com
December 16, 2024 at 12:16 PM
Reposted by Mia Hoffmann
“Denied by AI,” the multi-part STAT News investigation of how #UnitedHealthcare used an opaque algorithmic system to deny care to people who needed it is a #mustread www.statnews.com/2023/03/13/m...
December 6, 2024 at 5:42 AM
Reposted by Mia Hoffmann
“An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.”
Revealed: bias found in AI system used to detect UK benefits fraud
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of ‘hurt first, fix later’ approach
www.theguardian.com
December 6, 2024 at 6:19 PM
Reposted by Mia Hoffmann
🧬New Report🧬
There are many steps in the pathway to biological harm, including risks posed by AI. CSET Fellow @stephbatalis.bsky.social offers a suite of corresponding policy and governance tools to help mitigate biorisk.
Read more here 👇 cset.georgetown.edu/publication/...
Anticipating Biological Risk: A Toolkit for Strategic Biosecurity Policy | Center for Security and Emerging Technology
Artificial intelligence (AI) tools pose exciting possibilities to advance scientific, biomedical, and public health research. At the same time, these tools have raised concerns about their potential t...
cset.georgetown.edu
December 5, 2024 at 3:43 PM