Risto Uuk
@ristouuk.bsky.social
Head of EU Policy and Research at the Future of Life Institute | PhD Researcher at KU Leuven | Systemic risks from general-purpose AI
I’m excited to share that @lodelauwaert.bsky.social and I are writing a book tentatively titled 𝘛𝘩𝘦 𝘈𝘐 𝘚𝘢𝘧𝘦𝘵𝘺 𝘌𝘯𝘥𝘨𝘢𝘮𝘦! It will be published by Wiley.
October 9, 2025 at 5:39 PM
Proton is so good 🙏
July 4, 2025 at 9:29 AM
I'm excited to share that I’m joining the Stanford Digital Economy Lab for a research visit. I have been very impressed by the intellectual environment at Stanford University every time I’ve visited. I’m very grateful for this opportunity.
February 25, 2025 at 6:53 PM
Apple and Big Tech trade associations: “EU is really missing out on all of this great innovation because of regulation”

Said innovation:
January 16, 2025 at 1:33 PM
Happy New Year!

My 2024 was frankly very good. I had almost no setbacks in my career or personal life, and nearly everything I wanted to achieve I did. I recognize that this is rare, lucky, privileged, and not to be taken for granted.

🧵
January 3, 2025 at 11:43 AM
This year, I made minimal donations to effective charities because I prioritized paying off my student debt. Having paid it off this month, I have now set up automated recurring donations. I'm currently donating 10% of my income to Giving What We Can and Effective Altruism Estonia.

🧵
December 24, 2024 at 11:11 AM
Take a look at the 𝗙𝗟𝗜 𝗔𝗜 𝗦𝗮𝗳𝗲𝘁𝘆 𝗜𝗻𝗱𝗲𝘅 𝟮𝟬𝟮𝟰, which evaluates the safety practices of six leading general-purpose AI companies. This work has been covered by IEEE Spectrum, TIME, CNBC, TechCrunch, and many other outlets.

futureoflife.org/document/fli...
December 20, 2024 at 9:12 AM
• Ethics of Increasing AI Capabilities symposium in 𝗛𝗮𝗻𝗼𝘃𝗲𝗿, 𝗚𝗲𝗿𝗺𝗮𝗻𝘆
• TIC Summit 2024: What does it take to build trust in AI? in 𝗕𝗿𝘂𝘀𝘀𝗲𝗹𝘀, 𝗕𝗲𝗹𝗴𝗶𝘂𝗺
• The European Parliament voting on the AI Act in 𝗦𝘁𝗿𝗮𝘀𝗯𝗼𝘂𝗿𝗴, 𝗙𝗿𝗮𝗻𝗰𝗲
December 19, 2024 at 9:29 AM
𝘏𝘰𝘯𝘰𝘳𝘢𝘣𝘭𝘦 𝘮𝘦𝘯𝘵𝘪𝘰𝘯𝘴:
• LSE Data Science Institute’s event The AI Revolution and Future of Data Science in 𝗟𝗼𝗻𝗱𝗼𝗻, 𝗨𝗞
• Regulating Under Uncertainty: Governance Options for Generative AI event at 𝗦𝘁𝗮𝗻𝗳𝗼𝗿𝗱 𝗨𝗻𝗶𝘃𝗲𝗿𝘀𝗶𝘁𝘆, 𝗖𝗮𝗹𝗶𝗳𝗼𝗿𝗻𝗶𝗮
• Operationalizing General-Purpose AI in the EU AI Act in 𝗟𝗲𝘂𝘃𝗲𝗻, 𝗕𝗲𝗹𝗴𝗶𝘂𝗺
December 19, 2024 at 9:29 AM
3. Bay Area Alignment Workshop in 𝗦𝗮𝗻𝘁𝗮 𝗖𝗿𝘂𝘇, 𝗖𝗮𝗹𝗶𝗳𝗼𝗿𝗻𝗶𝗮

The workshop was particularly valuable as I formed several new connections that have already led to academic collaborations.
December 19, 2024 at 9:29 AM
2. 8th Annual CHAI Workshop in 𝗣𝗮𝗰𝗶𝗳𝗶𝗰 𝗚𝗿𝗼𝘃𝗲, 𝗖𝗮𝗹𝗶𝗳𝗼𝗿𝗻𝗶𝗮

The main reason I liked the workshop was that my paper Effective Mitigations for Systemic Risks from General-Purpose AI essentially grew out of the event. I ran a session there and really appreciated the input from the participants.
December 19, 2024 at 9:29 AM
1. International Workshop on Risk and Governance of Generative Artificial Intelligence in 𝗛𝗼𝗻𝗴 𝗞𝗼𝗻𝗴, 𝗖𝗵𝗶𝗻𝗮

I liked the workshop because it was very well run, I learned a lot about the AI ethics and safety ecosystem of China, and Hong Kong offered many unique experiences.
December 19, 2024 at 9:29 AM
I recently received a lovely email from researchers (lightly edited, below). I think that in the AI governance space, we sometimes dismiss our work too quickly if it's not entirely novel. Given how few people work in this field, I doubt that a single paper on any topic captures the complete truth.
December 11, 2024 at 8:30 AM
Future of Life Institute (FLI) is looking for a Head of U.S. Policy, ideally based in Washington, D.C., to work on U.S. AI policy. The application deadline is December 22. The salary range for this role is $150k to $240k.

Apply here: jobs.lever.co/futureof-lif...
December 6, 2024 at 3:14 PM
For some reason I cannot open this link, unfortunately. Could you share the original one?
December 1, 2024 at 7:32 PM
In Hong Kong, I gave a talk about risks from AI. I started with an overview of our work @fliorg.bsky.social, then presented key findings from the MIT AI risk repository, and ended with the results from our systemic risk taxonomy project. I also spoke on a panel about the governance of generative AI.
November 29, 2024 at 10:13 AM
Maybe I misunderstood this part then.
November 29, 2024 at 5:46 AM
One surprising takeaway from my trip to Hong Kong: the AI ethics and safety community here seems to know a lot about EU AI policy, whereas the US AI safety community (though not the ethics community) tends to care little about it. This may be an overgeneralization based on my specific experiences.
November 26, 2024 at 12:53 PM
Speaking at and attending the 2024 International Workshop on Risk and Governance of Generative Artificial Intelligence in Hong Kong has been absolutely excellent. It was very well run, with insightful talks and conversations. I'm very glad I came.
November 25, 2024 at 1:21 PM
These seem like reasonable recommendations:
November 21, 2024 at 3:21 PM
I look forward to giving a talk in Hong Kong next week on the topic of risks from AI. If I have any connections in Hong Kong, please reach out, and maybe we can meet.
November 20, 2024 at 8:19 AM
Useful to know:
November 16, 2024 at 2:52 PM
We are very grateful for all the feedback we’ve received and for the many experts who responded to our survey despite their busy schedules. We appreciate it! Thanks to Jonas Schuett, Laura Weidinger, Markus Anderljung, Mauritz Kelchtermans, and Seth Lazar for their feedback.
November 15, 2024 at 8:24 AM
Experts believe a wide range of mitigation measures are effective & technically feasible. Top-rated measures include:

• Safety incident reporting & security information sharing
• Pre-deployment risk assessments
• Third-party pre-deployment model audits

Graphs in the paper:
November 15, 2024 at 8:24 AM
We surveyed 76 domain experts about the perceived effectiveness of 27 proposed risk mitigation measures for the following four key risk areas:

• Disruptions to critical sectors
• Negative effects on democratic processes
• CBRN risks
• Harmful bias and discrimination
November 15, 2024 at 8:24 AM