Jakob Mökander
@jakobmokander.bsky.social
Director of science & tech policy at the Tony Blair Institute. International research fellow at Yale Digital Ethics Center. Views my own
This matters. Without public support, the government will struggle to deliver on the AI action plan and its wider growth agenda

To build trust in AI, the government must ensure that AI systems are beneficial and safe, and center its messaging on how AI improves social outcomes

5/6
September 22, 2025 at 6:25 AM
🛟 38% of UK adults cite lack of trust in AI content as the main barrier to adoption

4/6
September 22, 2025 at 6:25 AM
📉 More UK adults view AI as a risk for the economy (39%) than an opportunity (20%)

3/6
September 22, 2025 at 6:25 AM
New polling by TBI and @ipsosintheuk.bsky.social shows that:

👫 While 25% of UK adults use generative AI weekly, nearly half have never used it

2/6
September 22, 2025 at 6:25 AM
With these 4 embodied AI risk categories in mind, we analyzed existing policies (in the US, EU, and UK) and found critical gaps

While a good starting point, current frameworks for industrial robots and autonomous vehicles are insufficient to address the full range of risks EAI systems pose

3/4
September 4, 2025 at 5:51 PM
The core problem: Alongside excitement and opportunity, Embodied AI poses severe risks

EAI inherits traditional AI risks (privacy, bias, security, etc.) and poses new ones, e.g. physical harm, mass surveillance and displacement of manual labor

We identify 4 key EAI risk categories

2/4
September 4, 2025 at 5:51 PM
🚨 NEW PAPER 🚨: Embodied AI (incl. AI-powered drones, self-driving cars and robots) is here, but policies are lagging. We analyzed the EAI risks and found significant gaps in governance

arxiv.org/pdf/2509.00117

Co-authors: Jared Perlo, @fbarez.bsky.social, Alex Robey & @floridi.bsky.social

1/4
September 4, 2025 at 5:51 PM
Talk about sovereign AI often leaves unclear what it means for data, models and compute

The UK doesn’t need to build everything, but it must build enough infrastructure to deploy AI where it matters, to ensure resilience, and to anchor a domestic ecosystem that delivers for the public and the economy
July 29, 2025 at 7:33 AM
Great conversation on #AI and #sustainability at @politico.eu tech summit earlier this week

Key takes:
✅ Let’s shift from apathy to action
✅ Tech is part of green solutions
✅ European leadership is needed

Read more about Tony Blair Institute’s work on climate & energy led by Lindy Fursman in 🧵
May 16, 2025 at 11:09 AM
The question political leaders face is not how to "regulate AI" but how to "govern well in the age of AI"

This requires bold visions, infrastructure investments & good governance to enable AI uptake

Panel w/ HE Josephine Teo, HE Paula Inngabire, Matt Clifford and Teresa Carlson

2/4
February 17, 2025 at 2:28 PM
Cross-sectoral dialogue on AI is rare

Last week in Paris, the Tony Blair Institute convened 200 global leaders to explore how AI can be harnessed for economic growth & social progress

Thanks @yoshuabengio.bsky.social, @alondra.bsky.social & Fu Ying for insights + lively debate on AI safety

🧵 1/4
February 17, 2025 at 2:28 PM
Beyond policy recommendations, the paper benchmarks existing efforts to regulate AI

While useful, this is a simplification. Most AI regulations combine technology & sector-specific elements; centralized & decentralized controls; and procedural & substantive requirements

3/4
February 7, 2025 at 7:28 AM
Lots of great work is being done on model evals

Robust AI governance also requires
- responsible development practices
- monitoring of downstream applications
- feedback loops between different controls

 link.springer.com/article/10.1...

Co-authors: Jonas Schuett, @hannahrosekirk.bsky.social & @floridi.bsky.social
November 26, 2024 at 11:46 AM