Tim G. J. Rudner
timrudner.bsky.social
Tim G. J. Rudner
@timrudner.bsky.social
Assistant Professor, University of Toronto.
Junior Research Fellow, Trinity College, Cambridge.
AI Fellow, Georgetown University.

Probabilistic Machine Learning, AI Safety & AI Governance.

Prev: Oxford, Yale, UC Berkeley, NYU.

https://timrudner.com
Pinned
I was honored to be named a Rising Star in Generative AI by @umassamherst.bsky.social!

The goal of my research is to create robust and transparent machine learning models, and

**I'm on the faculty job market.**

Thank you to @nyudatascience.bsky.social for the great article!
📢The Information Society Project (@yaleisp.bsky.social) at Yale Law School is recruiting a new batch of *Resident Fellows*!

It's a great community and a good opportunity for anyone interested in the intersection of *AI governance and the law*.

Deadline: Dec 31
Apply: law.yale.edu/isp/join-us#...
October 16, 2025 at 3:42 PM
📢 Exciting opportunity:

@csetgeorgetown.bsky.social is hiring a Research or Senior Fellow to help lead their **frontier AI policy research efforts**.

I've been working with CSET since 2019 and continue to be impressed by the quality and impact of CSET's work!

cset.georgetown.edu/job/research...
[Research or Senior] Fellow - Frontier AI | Center for Security and Emerging Technology
The Center for Security and Emerging Technology (CSET) is currently seeking candidates to lead our Frontier AI research efforts, either as a Research Fellow or Senior Fellow (depending on experience)....
cset.georgetown.edu
October 16, 2025 at 3:38 PM
Reposted by Tim G. J. Rudner
“It actually doesn’t take much to be considered a difficult woman. That’s why there are so many of us.”
― Jane Goodall

💙 RIP to a real one. My childhood hero
October 2, 2025 at 2:56 AM
Reposted by Tim G. J. Rudner
Today's Lawfare Daily is a @scalinglaws.bsky.social episode, produced with @utexaslaw.bsky.social, where @kevintfrazier.bsky.social spoke to @gushurwitz.bsky.social and @neilchilson.bsky.social about how academics can positively contribute to the work associated with AI governance.
The Ivory Tower and AI (Live from IHS's Technology, Liberalism, and Abundance Conference)
YouTube video by Lawfare
youtu.be
October 1, 2025 at 1:39 PM
It was a pleasure speaking at @yaleisp.bsky.social yesterday!
This week’s surprise Ideas Lunch examined “formal guarantees” and artificial intelligence. Thank you to @timrudner.bsky.social for a magnificent presentation!
September 26, 2025 at 10:50 AM
Reposted by Tim G. J. Rudner
Tomorrow’s ISP Ideas Lunch update:

We’re excited to host @timrudner.bsky.social (U. Toronto & Vector Institute). He’ll speak on “formal guarantees” in AI + key AI safety concepts!
September 25, 2025 at 1:53 AM
Reposted by Tim G. J. Rudner
Our new lab for Human & Machine Intelligence is officially open at Princeton University!

Consider applying for a PhD or Postdoc position, either through Computer Science or Psychology. You can register interest on our new website lake-lab.github.io (1/2)
September 8, 2025 at 1:59 PM
Reposted by Tim G. J. Rudner
Democracy rewired is a 5-part series exploring how AI is reshaping democratic values — from individual agency to global sovereignty. The big question: can AI strengthen democracy?
August 25, 2025 at 6:55 PM
I'm thrilled to join the Schwartz Reisman Institute for Technology and Society as a Faculty Affiliate!
August 16, 2025 at 4:14 PM
Reposted by Tim G. J. Rudner
Congrats! CDS PhD Student Vlad Sobal, Courant PhD Student Kevin Zhang, CDS Faculty Fellow @timrudner.bsky.social, CDS Profs @kyunghyuncho.bsky.social and @yann-lecun.bsky.social, and Brown's Randall Balestriero won the Best Paper Award at ICML's 'Building Physically Plausible World Models' Workshop!
August 12, 2025 at 4:12 PM
Reposted by Tim G. J. Rudner
CDS Faculty Fellow @timrudner.bsky.social served as general chair for the 7th Symposium on Advances in Approximate Bayesian Inference, held in April alongside ICLR 2025.

The symposium explored connections between probabilistic machine learning and AI safety, NLP, RL, and AI for science.
July 17, 2025 at 7:11 PM
Reposted by Tim G. J. Rudner
CDS Faculty Fellow Tim G. J. Rudner (@timrudner.bsky.social) and colleagues at CSET — @emmyprobasco.bsky.social, @hlntnr.bsky.social, and Matthew Burtell — examine responsible AI deployment in military decision-making.

Read our post on their policy brief: nyudatascience.medium.com/ai-in-milita...
AI in Military Decision Support: Balancing Capabilities with Risk
CDS Faculty Fellow Tim G. J. Rudner and colleagues at CSET outline responsible practices for deploying AI in military decision-making.
nyudatascience.medium.com
May 14, 2025 at 7:23 PM
The result in this paper I'm most excited about:

We showed that planning in world model latent space allows successful zero-shot generalization to *new* tasks!

Project website: latent-planning.github.io

Paper: arxiv.org/abs/2502.14819
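
As a rough, hedged illustration of the general idea (not the paper's implementation): planning in a learned world-model latent space typically means rolling candidate action sequences through a latent dynamics model and choosing the sequence whose predicted final state is closest to a goal latent. Everything in the sketch below — the dynamics model, dimensions, and goal-distance objective — is an illustrative placeholder, not the method from the paper.

```python
# Hedged sketch (not the paper's code): generic random-shooting planning
# in a learned world-model latent space. All shapes, the dynamics model,
# and the goal-distance objective are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, ACTION_DIM, HORIZON, N_CANDIDATES = 8, 2, 10, 256

# Stand-in for a learned latent dynamics model z_{t+1} = f(z_t, a_t).
# Here it is a fixed random nonlinear map so the sketch runs end to end.
A = rng.normal(scale=0.3, size=(LATENT_DIM, LATENT_DIM))
B = rng.normal(scale=0.3, size=(LATENT_DIM, ACTION_DIM))

def latent_dynamics(z, a):
    return np.tanh(z @ A.T + a @ B.T)

def rollout_cost(z0, actions, z_goal):
    """Roll a candidate action sequence through the latent model and
    score it by the distance of the final latent state to the goal."""
    z = z0
    for a in actions:
        z = latent_dynamics(z, a)
    return np.linalg.norm(z - z_goal)

def plan(z0, z_goal):
    """Random-shooting planner: sample action sequences, keep the best."""
    candidates = rng.normal(size=(N_CANDIDATES, HORIZON, ACTION_DIM))
    costs = [rollout_cost(z0, seq, z_goal) for seq in candidates]
    return candidates[int(np.argmin(costs))]

# Zero-shot use on a *new* task: only the goal latent changes; the
# dynamics model and planner are reused unchanged.
z_start = rng.normal(size=LATENT_DIM)
z_new_goal = rng.normal(size=LATENT_DIM)
best_actions = plan(z_start, z_new_goal)
print("first planned action:", best_actions[0])
```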
May 7, 2025 at 9:26 PM
Reposted by Tim G. J. Rudner
#1: Can Transformers Learn Full Bayesian Inference In Context? with @arikreuter.bsky.social @timrudner.bsky.social @vincefort.bsky.social
May 1, 2025 at 12:36 PM
Reposted by Tim G. J. Rudner
Very excited that our work (together with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov) was recognized with a Best Paper Award at #AABI2025!

#ML #SDE #Diffusion #GenAI 🤖🧠
Congratulations to the #AABI2025 Workshop Track Outstanding Paper Award recipients!
April 30, 2025 at 12:02 AM
Congratulations to the #AABI2025 Proceedings Track Best Paper Award recipients!
April 29, 2025 at 8:55 PM
Congratulations to the #AABI2025 Workshop Track Outstanding Paper Award recipients!
April 29, 2025 at 8:54 PM
We concluded #AABI2025 with a panel discussion on

**The Role of Probabilistic Machine Learning in the Age of Foundation Models and Agentic AI**

Thanks to Emtiyaz Khan, Luhuan Wu, and @jamesrequeima.bsky.social for participating!
April 29, 2025 at 8:49 PM
.@jamesrequeima.bsky.social gave the third invited talk of the day at #AABI2025!

**LLM Processes**
April 29, 2025 at 8:41 PM
Luhuan Wu is giving the second invited talk of the day at #AABI2025!

**Bayesian Inference for Invariant Feature Discovery from Multi-Environment Data**

Watch it on our livestream: timrudner.com/aabi2025!
April 29, 2025 at 4:02 AM
Emtiyaz Khan is giving the first invited talk of the day at #AABI2025!
April 29, 2025 at 1:55 AM
We just kicked off #AABI2025 at NTU in Singapore!

We're livestreaming the talks here: timrudner.com/aabi2025!

Schedule: approximateinference.org/schedule/

#ICLR2025 #ProbabilisticML
April 29, 2025 at 1:47 AM
Make sure to get your tickets to #AABI2025 if you are in Singapore on April 29 (just after #ICLR2025) and interested in probabilistic ML, inference, and decision-making!

Tickets (free but limited!): lu.ma/5syzr79m
More info: approximateinference.org

#ProbabilisticML #Bayes #UQ #ICLR2025 #AABI2025
AABI 2025 · Luma
7th Symposium on Advances in Approximate Bayesian Inference (AABI) https://approximateinference.org/schedule
lu.ma
April 18, 2025 at 3:42 AM
Reposted by Tim G. J. Rudner
Make sure to get your tickets to AABI if you are in Singapore on April 29 (just after #ICLR2025) and interested in probabilistic modeling, inference, and decision-making!

Tickets (free but limited!): lu.ma/5syzr79m
More info: approximateinference.org

#Bayes #MachineLearning #ICLR2025 #AABI2025
April 13, 2025 at 7:43 AM
Reposted by Tim G. J. Rudner
CDS Faculty Fellow @timrudner.bsky.social, with @minanrn.bsky.social & Christian Schoeberl, analyzed AI explainability evals, finding a focus on system correctness over real-world effectiveness. They call for the creation of standards for AI safety evaluations.

cset.georgetown.edu/publication/...
Putting Explainable AI to the Test: A Critical Look at AI Evaluation Approaches | Center for Security and Emerging Technology
Explainability and interpretability are often cited as key characteristics of trustworthy AI systems, but it is unclear how they are evaluated in practice. This report examines how researchers evaluat...
cset.georgetown.edu
April 17, 2025 at 4:05 PM