Mia Hoffmann
@miahoffmann.bsky.social
AI governance, harms and assessment | Research fellow @csetgeorgetown.bsky.social
Check out the paper here:
partnershiponai.org/resource/pri...

Thanks to my co-authors, and to @partnershipai.bsky.social especially for leading the charge on this timely work!
Prioritizing Real-Time Failure Detection in AI Agents - Partnership on AI
A new PAI report argues that we need real-time failure detection to ensure AI agents can be monitored and stopped when needed.
partnershiponai.org
September 11, 2025 at 4:35 PM
11) And if you’re now curious about CSET’s other recommendations for the AI Action Plan, you can check out the full response to the RFI here: cset.georgetown.edu/publication/...
CSET's Recommendations for an AI Action Plan | Center for Security and Emerging Technology
In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and max...
cset.georgetown.edu
March 17, 2025 at 2:30 PM
10) If you’re still doubting the benefits of AI incident tracking, come by the Massive Data Institute’s event on "AI Hazards: Understanding AI Incidents" today at 3pm, and let me and my fabulous co-panelists convince you in person! mdi.georgetown.edu/events/tswee...
Tech & Society Week 2025 — AI Hazards: Understanding AI Incidents - Massive Data Institute
On Monday, March 17, 2025 from 3:00 to 4:00pm in Fisher Colloquium in Hariri Building on the Hilltop Campus, we will be hosting a panel discussion on AI incidents during Tech & Society Week 2025. The ...
mdi.georgetown.edu
March 17, 2025 at 2:30 PM
Finally, and critically: central data collection and dissemination of lessons learned mean that a harm only has to occur once for everyone to learn how to mitigate that risk. This prevents recurrence and builds user and consumer confidence, which is essential for widespread AI adoption.
March 17, 2025 at 2:30 PM
Incident tracking also reveals new, unexpected AI failure modes that we aren't yet guarding against. Over time, systematic data collection can help detect emerging risks and new types of harm, a critical benefit given the fast pace of AI innovation and deployment.
March 17, 2025 at 2:30 PM
Over time, incident data can be used to evaluate the effectiveness of new safety policies and regulation through before-and-after comparisons, as sketched below. This helps refine governance policies through a direct feedback loop.
March 17, 2025 at 2:30 PM
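To illustrate that before-and-after comparison, here is a minimal Python sketch. It assumes a hypothetical incidents.csv with one dated row per reported incident and a hypothetical policy effective date; a real evaluation would also have to control for changes in reporting rates and other confounders.

import pandas as pd

# Hypothetical effective date of the safety policy being evaluated
POLICY_DATE = pd.Timestamp("2025-01-01")

# Hypothetical incident database export: one row per incident, with a `date` column
incidents = pd.read_csv("incidents.csv", parse_dates=["date"])

# Count incidents per month ("MS" = month-start frequency)
monthly = incidents.set_index("date").resample("MS").size()

# Compare average monthly incident counts before and after the policy took effect
before = monthly[monthly.index < POLICY_DATE]
after = monthly[monthly.index >= POLICY_DATE]
print(f"Mean monthly incidents before policy: {before.mean():.1f}")
print(f"Mean monthly incidents after policy:  {after.mean():.1f}")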
Using real-world data on what works and what doesn't to guide AI safety research will help us innovate more quickly and build reliable systems that are safe to deploy sooner. In this way, incident reporting can help prioritize and direct AI safety research to where it is most effective.
March 17, 2025 at 2:30 PM
AI incidents also shed light on the effectiveness of existing safety efforts. We might learn where current technical standards or risk management processes are insufficient to protect people from harm, revealing critical gaps that can be addressed by AI safety research.
March 17, 2025 at 2:30 PM
For instance, we can learn about *how* the use of AI results in harm, e.g. through misuse, user error or AI failure. This information helps channel resources to the right kinds of safety efforts, since preventing misuse requires different measures than addressing user error.
March 17, 2025 at 2:30 PM
Why should the government do this?
What makes AI risk management so tricky is predicting how deploying an AI system can go wrong. AI incidents are a rich source of information about AI harms, harm mechanisms, AI failure modes and more. Leveraging those insights can make AI use safer.
March 17, 2025 at 2:30 PM
Broadly speaking, an AI incident reporting regime has 4 core parts:
1) Incident detection;
2) Reporting to oversight bodies and inclusion in an incident database;
3) Performance of impact assessments and root cause analyses; and
4) Dissemination of lessons learned (sketched below)
March 17, 2025 at 2:30 PM
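To make those four parts concrete, here is a minimal Python sketch of what a single record might look like as it moves through such a pipeline. Every name here is a hypothetical illustration, not an existing schema or API.

from dataclasses import dataclass, field
from datetime import date
from enum import Enum, auto

class Stage(Enum):
    DETECTED = auto()       # 1) incident detected
    REPORTED = auto()       # 2) reported to oversight body, added to the database
    ANALYZED = auto()       # 3) impact assessment and root cause analysis complete
    DISSEMINATED = auto()   # 4) lessons learned shared with stakeholders

@dataclass
class IncidentReport:
    incident_id: str
    occurred_on: date
    system: str                       # the deployed AI system implicated in harm
    harm_description: str
    root_cause: str | None = None     # filled in during stage 3
    lessons_learned: list[str] = field(default_factory=list)  # filled in during stage 4
    stage: Stage = Stage.DETECTED

The point of a shared record like this is that the same incident, once analyzed, carries its lessons learned to every deployer, which is what makes the dissemination step in part 4 possible.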
First, a definition. AI incidents are situations in which a deployed AI system is implicated in harm, e.g. when an AI recruiting tool makes a biased hiring decision. Incidents are varied and often take unexpected forms, so go check out the AIID for more real-world examples! incidentdatabase.ai
Welcome to the Artificial Intelligence Incident Database
The starting point for information about the AI Incident Database
incidentdatabase.ai
March 17, 2025 at 2:30 PM
Thirdly, and most importantly, this decision reveals that the new European Commission is buying into the false narrative of innovation versus regulation that already dominates, and paralyzes, US tech policy.
February 13, 2025 at 3:35 PM