Clara Langevin
@cclangevin.bsky.social
AI Policy Specialist at @scientistsorg.bsky.social. I think about AI policy in the US and Brazil; all views are my own. Lifelong Cruzeirense 💙🦊💫
Excited to join tonight!
February 25, 2025 at 9:07 PM
Got great energy and AI policy ideas? Submit a proposal, and we will help you cultivate it from a seedling idea to a full-fledged memo with an advocacy plan!
February 21, 2025 at 8:45 PM
12/ Transparency isn’t just good policy—it’s essential for AI use in government.
February 10, 2025 at 9:49 PM
11/ Bottom line:
📉 If the AI Use Case Inventory disappears, we lose a critical tool for public trust.
📉 More AI in government + less transparency = a perfect storm for harm and erosion of trust in emerging tech.
February 10, 2025 at 9:49 PM
10/ Case in point: The Netherlands' AI-driven fraud detection system wrongly cut off benefits to vulnerable families.
Without transparency, there’s little oversight to prevent similar failures in the U.S. www.politico.eu/article/dutc...
Dutch scandal serves as a warning for Europe over risks of using algorithms
The Dutch tax authority ruined thousands of lives after using an algorithm to spot suspected benefits fraud — and critics say there is little stopping it from happening again.
www.politico.eu
February 10, 2025 at 9:49 PM
9/ But what happens if Trump II abolishes or weakens the inventory?
It’s already clear the administration plans to increase AI use for fraud detection—one of the riskiest AI applications. www.nytimes.com/2025/02/03/t...
Musk Allies Discuss Deploying A.I. to Find Budget Savings
A top official at the General Services Administration said artificial intelligence could be used to identify waste and redundancies in federal contracts.
www.nytimes.com
February 10, 2025 at 9:48 PM
8/ Why does this matter? The AI Use Case Inventory is one of the most important transparency tools for AI in government.
It allows civil society to track federal AI deployments, identify risks, and hold agencies accountable. Without it, the public is left in the dark.
February 10, 2025 at 9:48 PM
7/ If an AI system impacts rights or safety, agencies must also disclose:

⚠️ Risk management & independent evaluations

⚠️ Potential harm & mitigation efforts

⚠️ Whether people can opt out in favor of a human decision-maker
February 10, 2025 at 9:47 PM
6/ This new guidance required agencies to report much more information, including:
📊 Intended purpose & expected benefits

📊 AI system outputs & development details

📊 Privacy, bias, & safety risks

📊 Transparency measures & public impact
February 10, 2025 at 9:46 PM
5/ Originally, agencies reported basic details on their AI use cases. But the Biden administration greatly expanded this. Under OMB Guidance M-24-10, Biden broadened the definition of AI (aligning with the John S. McCain NDAA for Fiscal Year 2019).
February 10, 2025 at 9:43 PM
4/ Most importantly, EO 13960 created the AI Use Case Inventory, a transparency tool requiring agencies to disclose AI systems they use or plan to use.
February 10, 2025 at 9:42 PM
3/ To implement this, EO 13960 directed:

📌 OMB to create a policy roadmap for AI adoption

📌 Agencies to inventory their AI use cases

📌 GSA to recruit AI experts via the Presidential Innovation Fellows program

📌 OPM to explore rotational programs for AI expertise
February 10, 2025 at 9:41 PM
2/ In 2020, Trump issued EO 13960, setting 9 guiding principles for AI in federal agencies—prioritizing lawful, effective, secure, transparent, and accountable AI use.
February 10, 2025 at 9:41 PM
1/ We don’t know exactly what Trump II will do, but it’s shaping up to be very different from Trump I. So let’s look at how the first Trump administration approached AI in government—and what that could mean for the AI Use Case Inventory.
February 10, 2025 at 9:38 PM
A key part of these memos? The AI Use Case Inventory, which in the final days of the Biden Admin documented 1,700 federal AI use cases. 🧵
February 10, 2025 at 9:37 PM
Reposted by Clara Langevin
The federal govt’s increasing reliance on CAI/PII is outpacing its ability to regulate it – putting your data in the wrong hands.

As AI systems become increasingly integrated into government processes, protecting fundamental constitutional rights cannot be an afterthought.
fas.org/publication/...
Public Comment on Executive Branch Agency Handling of CAI
The federal government is responsible for ensuring the safety and privacy of the processing of personally identifiable information within commercially available information used for the development an...
fas.org
December 18, 2024 at 8:17 PM
Reposted by Clara Langevin
Recommendation 3. Build Government Capacity for the Use of Privacy Enhancing Technologies to Bolster Anonymization Techniques
December 18, 2024 at 8:17 PM
Reposted by Clara Langevin
Recommendation 2. Expand Privacy Impact Assessments (PIA) to Incorporate Additional Requirements and Periodic Evaluations
December 18, 2024 at 8:17 PM
Reposted by Clara Langevin
FedRAMP should add CAI/PII to the mix, requiring datasets be assessed on the following information (see screenshot)

Bonus: FedRAMP authorizations are strictly enforced, offering a level of rigor that voluntary assessments just can’t match
December 18, 2024 at 8:17 PM