Mina Narayanan
@minanrn.bsky.social
Research Analyst @CSETGeorgetown | AI governance and safety | Views my own
By adopting our analytic approach, U.S. policymakers + researchers can move away from rhetorical debates about AI governance & better prepare the United States for a range of possible AI futures cset.georgetown.edu/publication/...
AI Governance at the Frontier | Center for Security and Emerging Technology
This report presents an analytic approach to help U.S. policymakers deconstruct artificial intelligence governance proposals by identifying their underlying assumptions, which are the foundational ele...
cset.georgetown.edu
November 12, 2025 at 9:23 PM
Our work also demonstrates that policymakers & researchers alike can leverage assumptions to more precisely understand disagreements & shared views among stakeholders
November 12, 2025 at 9:23 PM
Our case study demonstrates that policymakers can take action in an uncertain & rapidly changing environment by addressing common assumptions across proposals
November 12, 2025 at 9:23 PM
We apply these questions to 5 US AI governance proposals from academia, industry, civil society, & the state & federal government, & find that most proposals view AI-enabling talent & AI processes/frameworks as important enablers of AI governance
November 12, 2025 at 9:23 PM
Assumptions that are shared across proposals effectively enable the success of multiple proposals, whereas unique assumptions may indicate differing perspectives or areas where consensus-building could be challenging
November 12, 2025 at 9:23 PM
Our approach involves deriving unique & shared assumptions across proposals by answering 3 questions:
November 12, 2025 at 9:23 PM
In other words, Congress is still in the early days of governing AI but so far seems more focused on understanding and harnessing AI’s potential than addressing its downsides. Make sure to take a deeper dive into our analysis here 🧵6/6 eto.tech/blog/ai-laws...
Exploring AI legislation in Congress with AGORA: Risks, Harms, and Governance Strategies – Emerging Technology Observatory
Using AGORA to explore AI legislation enacted by U.S. Congress since 2020
eto.tech
July 29, 2025 at 6:15 PM
Fewer legislative docs directly tackle risks or undesirable consequences from AI (such as harm to infrastructure) than propose strategies such as government support, convening, or institution-building 🧵5/6
July 29, 2025 at 6:15 PM
Very few enactments leverage performance requirements, pilots, new institutions, or other governance strategies that place concrete requirements on AI systems or represent investments in maturing or scaling up AI capabilities 🧵4/6
July 29, 2025 at 6:15 PM
Most of Congress’s 147 enactments focus on commissioning studies of AI systems, assessing their impacts, providing support for AI-related activities, convening stakeholders, & developing additional AI-related governance docs 🧵3/6
July 29, 2025 at 6:15 PM
We find that Congress has enacted many AI-related laws & provisions which are focused more on laying the groundwork to harness AI’s potential – often in nat'l sec contexts – than on placing concrete demands on AI systems or directly tackling their specific, undesirable consequences 🧵2/6
July 29, 2025 at 6:15 PM
Stay tuned for the second blog, which examines the governance strategies, risk-related concepts, and harms covered by this legislation! 🧵3/3
July 23, 2025 at 1:39 PM
We find that, contrary to conventional wisdom, Congress has enacted many AI-related laws and provisions — most of which apply to military and public safety contexts 🧵2/3
July 23, 2025 at 1:39 PM