Mina Narayanan
@minanrn.bsky.social
Research Analyst @CSETGeorgetown | AI governance and safety | Views my own
Policymakers must move beyond rhetoric to govern AI. 🏛️ A new @csetgeorgetown.bsky.social report from @jessicaji.bsky.social, @vikramvenkatram.bsky.social, Ngor Luong, & me presents an approach to help policymakers analyze the assumptions underlying AI governance proposals cset.georgetown.edu/publication/... 🧵
AI Governance at the Frontier | Center for Security and Emerging Technology
This report presents an analytic approach to help U.S. policymakers deconstruct artificial intelligence governance proposals by identifying their underlying assumptions, which are the foundational ele...
cset.georgetown.edu
November 12, 2025 at 9:23 PM
Reposted by Mina Narayanan
Check out my new @csetgeorgetown.bsky.social report, written alongside @minanrn.bsky.social, @jessicaji.bsky.social, and Ngor Luong!

cset.georgetown.edu/publication/...

Identifying assumptions can help policymakers make informed, flexible decisions about AI under uncertainty.
AI Governance at the Frontier | Center for Security and Emerging Technology
This report presents an analytic approach to help U.S. policymakers deconstruct artificial intelligence governance proposals by identifying their underlying assumptions, which are the foundational ele...
cset.georgetown.edu
November 12, 2025 at 8:34 PM
Reposted by Mina Narayanan
What’s taken shape in the four months since the release of the AI Action Plan? 🧵👇

In the latest @csetgeorgetown.bsky.social ETO AGORA roundup, four CSET experts dig into the Plan’s policy impact and what’s next for AI governance. eto.tech/blog/agora-a...
Revisiting the AI Action Plan: AGORA roundup #3 – Emerging Technology Observatory
Updates from ETO's AI governance tracker
eto.tech
November 6, 2025 at 6:23 PM
Check out the second @csetgeorgetown.bsky.social @emergingtechobs.bsky.social blog from @sonali-sr.bsky.social and me, where we explore the strategies, risks, and harms addressed by AI-related laws enacted by Congress between Jan 2020 and March 2025 🧵1/6 eto.tech/blog/ai-laws...
July 29, 2025 at 6:15 PM
Shared some thoughts on the AI Action Plan's recs around shaping state-level AI activity last week -- essentially, the plan's attempt to pressure states into abandoning AI restrictions risks hurting U.S. national security www.defenseone.com/technology/2...
How the White House AI plan helps, and hurts, in the race against China
While one tech advocate called the new plan “a critical component” of efforts to outpace China, another criticized it as a “Silicon Valley wishlist.”
www.defenseone.com
July 29, 2025 at 12:30 AM
Reposted by Mina Narayanan
Yesterday's new AI Action Plan has a lot worth discussing!

One interesting aspect is its statement that the federal government should withhold AI-related funding from states with "burdensome AI regulations."

This could be cause for concern.
July 24, 2025 at 6:55 PM
Check out the first blog in a two-part series from @sonali-sr.bsky.social and me, where we use data from @csetgeorgetown.bsky.social @emergingtechobs.bsky.social AGORA to explore ✨AI-related legislation enacted by Congress between January 2020 and March 2025✨
eto.tech/blog/ai-laws... 🧵1/3
July 23, 2025 at 1:39 PM
Check out the latest AGORA roundup from @emergingtechobs.bsky.social, which highlights some overlooked AI provisions in the Big Beautiful Bill!
✨ The AI moratorium has been struck down ⚡ but what else does the Big Beautiful Bill have to say about AI? Check out the latest AGORA update to learn about the provisions on border security, Medicare, and more! Link in thread 🧵👇
July 2, 2025 at 7:52 PM
The 10-yr moratorium on state AI laws will hurt U.S. nat'l security & innovation if enacted. In our piece in @thehill.com, @jessicaji.bsky.social, @vikramvenkatram.bsky.social, & I argue that states support the very infrastructure needed for a vibrant U.S. AI ecosystem
thehill.com/opinion/tech...
thehill.com
June 19, 2025 at 10:33 PM
Reposted by Mina Narayanan
Banning state-level AI regulation is a bad idea!

One crucial reason is that states play a critical role in building AI governance infrastructure.

Check out this new op-ed by @jessicaji.bsky.social, @minanrn.bsky.social, and me on this topic!

thehill.com/opinion/tech...
thehill.com
June 18, 2025 at 6:52 PM
Reposted by Mina Narayanan
Amidst all the discussion about AI safety, how exactly do we figure out whether a model is safe?

There's no perfect method, but safety evaluations are the best tool we have.

That said, different evals answer different questions about a model!
⚖️ New Explainer!

Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work?

In their new explainer,
@jessicaji.bsky.social, @vikramvenkatram.bsky.social &
@stephbatalis.bsky.social break down the different fundamental types of AI safety evaluations.
May 28, 2025 at 2:31 PM
@ifp.bsky.social recently published a searchable database of all AI Action Plan submissions, many of which cover topics that overlap with CSET's submission! Check out CSET's recs here: cset.georgetown.edu/publication/... and compare it to others here: www.aiactionplan.org
AI Action Plan Database
A database of recommendations for OSTP's AI Action Plan.
www.aiactionplan.org
May 19, 2025 at 5:14 PM
Reposted by Mina Narayanan
CDS Faculty Fellow @timrudner.bsky.social, with @minanrn.bsky.social & Christian Schoeberl, analyzed AI explainability evals, finding a focus on system correctness over real-world effectiveness. They call for the creation of standards for AI safety evaluations.

cset.georgetown.edu/publication/...
Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches | Center for Security and Emerging Technology
Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluat...
cset.georgetown.edu
April 17, 2025 at 4:05 PM
Reposted by Mina Narayanan
Have you heard of the Bayh-Dole Act? It's niche, but an incredibly important factor in the U.S. innovation ecosystem!

For the National Interest, @jack-corrigan.bsky.social and I discuss a potential change that could benefit public access to medical drugs.

nationalinterest.org/blog/techlan...
Trump Should Not Abandon March-In Rights
Moving forward with the Biden administration’s guidance could deliver lower drug prices and allow more Americans to reap the benefits of public science. In late 2023, the federal government published ...
nationalinterest.org
April 28, 2025 at 6:08 PM
Reposted by Mina Narayanan
What does the EU's shifting strategy mean for AI?

CSET's @miahoffmann.bsky.social & @ojdaniels.bsky.social have a new piece out for @techpolicypress.bsky.social.

Read it now 👇
If you’ve ever wondered what the EU and elephants have in common - or are wondering now - read my latest piece with @ojdaniels.bsky.social! We take a look at what the EU’s new innovation-friendly regulatory approach might mean for the global AI policy ecosystem www.techpolicy.press/out-of-balan...
Out of Balance: What the EU's Strategy Shift Means for the AI Ecosystem | TechPolicy.Press
Mia Hoffmann and Owen J. Daniels from Georgetown’s Center for Security and Emerging Technology say Europe's movements could change the global landscape.
www.techpolicy.press
March 10, 2025 at 2:17 PM
Check out @csetgeorgetown.bsky.social's response to the AI Action Plan RFI! We recommend that the administration support key enablers of U.S. tech prowess, including access to AI talent, foundational AI standards & evaluations, & open markets & research ecosystems cset.georgetown.edu/publication/...
CSET's Recommendations for an AI Action Plan | Center for Security and Emerging Technology
In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and max...
cset.georgetown.edu
March 17, 2025 at 3:27 PM
Reposted by Mina Narayanan
Check out our paper on the quality of interpretability evaluations of recommender systems:

cset.georgetown.edu/publication/...

Led by @minanrn.bsky.social and Christian Schoeberl!

@csetgeorgetown.bsky.social
February 19, 2025 at 8:45 PM
[1/6] Discourse around AI evaluations has focused a lot on testing LLMs for catastrophic risks. In a new @csetgeorgetown.bsky.social report, Christian Schoeberl, @timrudner.bsky.social, and I explore another side of AI evals: evals of claims about the trustworthiness of AI systems
Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches | Center for Security and Emerging Technology
Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluat...
cset.georgetown.edu
February 20, 2025 at 7:52 PM
Reposted by Mina Narayanan
Will the Paris #AIActionSummit set a unified approach to AI governance—or just be another conference?

A new article from @miahoffmann.bsky.social, @minanrn.bsky.social, and @ojdaniels.bsky.social.
Will the Paris artificial intelligence summit set a unified approach to AI governance—or just be another conference?
AI innovations and governments’ preferences can make international consensus on governance at the Paris Summit challenging.
thebulletin.org
February 6, 2025 at 3:47 PM
@miahoffmann.bsky.social, @ojdaniels.bsky.social, and I wrote a piece on key AI governance areas to watch in 2025 with the upcoming AI Action Summit in mind. Check it out here! thebulletin.org/2025/02/will...
Will the Paris artificial intelligence summit set a unified approach to AI governance—or just be another conference?
AI innovations and governments’ preferences can make international consensus on governance at the Paris Summit challenging.
thebulletin.org
February 7, 2025 at 3:00 AM
Reposted by Mina Narayanan
We're hiring 📢

CSET is looking for a Research Fellow to analyze topics related to the development, deployment, and operations of AI & ML tools in the national security space.

Interested or know someone who would be? Learn more and apply 👇 cset.georgetown.edu/job/research...
Research Fellow - Applications | Center for Security and Emerging Technology
The Center for Security and Emerging Technology at Georgetown University (CSET) is seeking applications for a Research Fellow to support our Applications Line of Research. This role will analyze topic...
cset.georgetown.edu
February 4, 2025 at 6:34 PM
The administration should continue investing in AI evaluations, standards, and risk management. These enable us to build better AI systems and more accurately assess their performance -- especially pertinent considerations in light of DeepSeek developments and debates about which models lead the pack.
February 4, 2025 at 1:51 AM
Interested in data-driven analyses that explore topics at the intersection of AI, emerging technology, and national security? If so, follow my colleagues from @csetgeorgetown.bsky.social and check out the CSET Starter Pack: bsky.app/starter-pack...
January 22, 2025 at 9:34 PM
I shared some thoughts with GZERO on AI trends in 2025. This year, I'll also be watching how different interests in Trump’s coalition shape AI policy and whether AI governance bodies established under Biden persist into the Trump administration
www.gzeromedia.com/gzero-ai/5-a...
5 AI trends to watch in 2025
Artificial intelligence is bound to have a big year again in 2025.
www.gzeromedia.com
January 13, 2025 at 6:34 PM