Owen J. Daniels
@ojdaniels.bsky.social
Writing on AI, security, & democracy. Associate Director of Analysis & Andrew W. Marshall Fellow at CSET. Probably behind deadline.

Working on a book on AI and military affairs for Polity Press.

https://cset.georgetown.edu/staff/owen-daniels/
Reposted by Owen J. Daniels
@ojdaniels.bsky.social, @jessicaji.bsky.social, @jacob-feldgoise.bsky.social, and @alicrawford.bsky.social weigh in on:
• Open questions raised by the Plan
• Security-related recommendations
• Export controls
• Workforce priorities
November 6, 2025 at 6:23 PM
Reposted by Owen J. Daniels
As AI continues to reshape our world, CSET remains committed to providing data-driven analysis on the security implications of emerging technologies.

Read our full response: cset.georgetown.edu/publication/...
CSET's Recommendations for an AI Action Plan | Center for Security and Emerging Technology
In response to the Office of Science and Technology Policy's request for input on an AI Action Plan, CSET provides key recommendations for advancing AI research, ensuring U.S. competitiveness, and max...
cset.georgetown.edu
March 17, 2025 at 1:28 PM
DeepResearch prompt: "'The End of History and the Last Man' but make it AI"
February 20, 2025 at 7:34 PM
I've left a lot out: the profit opportunities and risks of increasingly agentic systems got a lot of air; emerging research on scheming, sabotage, and survival instincts in LLMs and frontier models was prominent; and practical AI ethics policy ideas abounded. Looking forward to sharing more ideas soon.
February 13, 2025 at 4:41 PM
Across the risk spectrum, the question arose time and again: where do we actually need AI solutions? Is it actually helpful, for example, to have AI try to help us find common ground in political disagreements? Do we want more tech in our democratic processes? www.science.org/doi/10.1126/...
AI can help humans find common ground in democratic deliberation
Finding agreement through a free exchange of views is often difficult. Collective deliberation can be slow, difficult to scale, and unequally attentive to different voices. In this study, we trained a...
www.science.org
February 13, 2025 at 4:41 PM
France’s announcement at the summit that it was tapping its nuclear power industry for data centers grabbed headlines, but nuclear power is not necessarily a panacea for AI’s energy issues. It remains a globally significant space to watch. thebulletin.org/2024/12/ai-g...
AI goes nuclear
Big tech is turning to old reactors (and new ones) to power the energy-hungry data centers that artificial intelligence systems need. But the downsides of nuclear power—like potential nuclear weapons ...
thebulletin.org
February 13, 2025 at 4:41 PM
Environmental and energy concerns will only continue to grow with scaling, and they rightfully earned much discussion. Even with model innovations like DeepSeek R1, which is cheaper and more efficient to train, energy consumption for inference will remain high.
February 13, 2025 at 4:41 PM
The AISIs have different structures and stakeholders and are attuned to their particular research ecosystems, meaning they're not one-to-one matches from one nation to the next, but they can still facilitate exchange. They'll obviously face some geopolitical headwinds amid tech competition.
February 13, 2025 at 4:41 PM
Despite disappointment at executive messaging, the AI Safety Institutes leading safety work at the national level could be ideal vehicles for developing and disseminating testing, evaluation, and safety best practices. Saw some impressive presentations at side events. www.aisi.gov.uk/work/safety-...
Safety cases at AISI | AISI Work
As a complement to our empirical evaluations of frontier AI models, AISI is planning a series of collaborations and research projects sketching safety cases for more advanced models than exist today, ...
www.aisi.gov.uk
February 13, 2025 at 4:41 PM
JD Vance's comments on Europe's "excessive regulation" were well covered, but EC President von der Leyen and Macron also championed getting out of the private sector's way. My colleague @miahoffmann.bsky.social wrote a thread on why this attitude could be troubling for Europe: bsky.app/profile/miah...
There have been a ton of AI policy developments coming out of the EU these past weeks, but one deeply concerning one is the withdrawal of the AI Liability Directive (AILD) by the European Commission. Here’s why:
February 13, 2025 at 4:41 PM