Cathy Wu
@cathywu.bsky.social
AI & Transportation | MIT Associate Professor

Interests: AI for good, sociotechnical systems, machine learning, optimization, reinforcement learning, public policy, gov tech, open science.

Science is messy and beautiful.
http://www.wucathy.com
This project was 4 years in the making and it's finally out!

We found that controlling vehicle speeds to mitigate traffic across a city can cut carbon emissions by 11 to 22 percent. To do this, we used deep reinforcement learning to optimize one million eco-driving scenarios. 🚗🤖🧠
August 11, 2025 at 6:47 PM
This is the way to cross a freeway. 😍

Breathtaking walkway across SR-520 in Redmond, WA, at the Microsoft campus 🚶‍♀️🚲🛣️
June 2, 2025 at 5:39 AM
📣 I will be giving two talks this week at the TRB DATA conference in Seattle, which explores the intersection of transportation, artificial intelligence, and data analysis. I love these practitioner + researcher conferences. 🤖🧠👇
May 26, 2025 at 6:02 PM
📣 #RERITE's first conference presentation takes place tomorrow at the Transportation Research Symposium in Rotterdam, the Netherlands! 🚂🛣️

We use Large Language Models (LLMs) to measure the state of data & code availability in transportation research. Join us to learn more: 👇
May 25, 2025 at 9:35 PM
What an utter delight to be taking Sound Transit’s new light rail line!!! (2 Line) 🚉 🌆 #transit
May 23, 2025 at 11:27 PM
Taizhou's industrial zones www.etaizhou.gov.cn/industrialpa...
May 16, 2025 at 4:59 PM
5GAA's map of V2X use cases towards cooperative driving

Courtesy CCAT www.youtube.com/watch?v=fn_V...
May 10, 2025 at 1:12 PM
Excited to announce IntersectionZoo, a benchmark that uses a real-world traffic problem to test generalization progress in deep reinforcement learning, particularly multi-agent contextual RL. 🤖🧠

MIT News coverage: news.mit.edu/2025/new-too...
Benchmark: intersectionzoo-docs.readthedocs.io
May 5, 2025 at 9:22 PM
Find us at the TRB Annual Meeting 2025 this week in DC! PhD student Shreyaa Raghavan and I will share recent developments in reproducibility and new highway traffic research. Hope to see you there! #TRBAM
January 6, 2025 at 2:57 PM
Reproducible research and open science at #TRBAM! Stop by and thank the authors and organizers for contributing their work openly for the community to build upon.

Curated by the REproducible Research In Transportation Engineering (RERITE) Working Group

Are we missing something? Leave a comment!
January 5, 2025 at 3:59 PM
Yes, ready for some thinking and reading time during the holidays!! Got a list of research questions and books critiquing my field loaded up and ready to go.
December 22, 2024 at 5:00 PM
12/n This was joint work with my students @jhcho.bsky.social, Vindula, and Sirui. But not only that! This work builds on years of sweat and tears in trying to apply RL, and we hope it brings some optimism to others as well.
n=12
December 8, 2024 at 6:13 PM
9/n We like multi-agent traffic tasks, so we tried those out too. Up to 25x sample efficiency improvements! 🚦🚗 Btw, 25x means training 4 tasks to match the performance of training 100 tasks. I cannot express how excited we are to try this out on more applications.
December 8, 2024 at 6:13 PM
8/n It works surprisingly well! Up to 50x sample efficiency improvements over typical training (brown vs orange, green)! The sequential oracle (pink) chooses the single best training task at each step, given full knowledge, and we also compare heuristic baselines (random, greedy, equidistant).
December 8, 2024 at 6:13 PM
7/n Using this model, we choose training tasks, one at a time, that best improve overall generalization performance across contexts. We leverage ideas of optimism in the face of uncertainty (UCB) and Bayesian optimization to design a suitable acquisition function (rough sketch below). Then iterate!
December 8, 2024 at 6:13 PM
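A rough, self-contained sketch of what such a UCB-style acquisition can look like, assuming a scalar context space; `select_next_task`, `beta`, and the dummy surrogate are illustrative names and choices, not the paper's implementation:

```python
# Illustrative UCB-style acquisition over candidate training contexts.
# For each candidate source context, average the predicted performance
# across all target contexts and add an exploration bonus.
import numpy as np

def select_next_task(candidates, contexts, surrogate, beta=1.0):
    """Pick the next training context.

    surrogate(source, target) -> (predicted performance, uncertainty).
    """
    scores = []
    for c in candidates:
        preds = [surrogate(c, t) for t in contexts]
        mus = np.array([m for m, _ in preds])
        sigmas = np.array([s for _, s in preds])
        # Optimism in the face of uncertainty: favor high predicted
        # generalization performance and high model uncertainty.
        scores.append(mus.mean() + beta * sigmas.mean())
    return candidates[int(np.argmax(scores))]

# Dummy surrogate for demonstration: performance decays linearly with
# context distance; constant uncertainty. Purely illustrative.
contexts = np.linspace(0.0, 1.0, 11)
dummy = lambda c, t: (1.0 - abs(t - c), 0.1)
print(select_next_task(list(contexts), contexts, dummy))  # -> 0.5
```

A surrogate like the `predicted_perf` sketched under post 6/n below can be plugged in as the `surrogate` argument here.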
6/n The core idea is to explicitly model zero-shot generalization, hence Model-based Transfer Learning (MBTL). We model 1) how well a given RL method would solve a task (Gaussian Process) and 2) the performance loss when applied to another task (linear in context similarity). A rough sketch follows below.
December 8, 2024 at 6:13 PM
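A minimal sketch of this two-part surrogate, assuming scalar contexts and using scikit-learn's Gaussian process; the linear-in-distance generalization loss and the `slope` parameter are illustrative simplifications, not the paper's exact model:

```python
# Sketch of a two-part generalization model: (1) a Gaussian process
# predicts performance when training directly on a context; (2) the
# loss from transferring to another context is assumed linear in
# context distance (slope is illustrative; it would be fit in practice).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

trained_contexts = np.array([[0.2], [0.8]])   # contexts trained so far
observed_perf = np.array([0.9, 0.7])          # performance observed there
gp = GaussianProcessRegressor(kernel=RBF()).fit(trained_contexts, observed_perf)

def predicted_perf(source, target, slope=0.5):
    """Predicted performance at `target` when training at `source`."""
    mu, sigma = gp.predict(np.array([[source]]), return_std=True)
    gen_loss = slope * abs(target - source)   # linear in context distance
    return float(mu[0]) - gen_loss, float(sigma[0])
```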
3/n Surprisingly (?), if we use typical training paradigms, independent training is brittle (orange) and multi-task training is even worse (green). Even on “simple” tasks like Pendulum! I cannot convey how frustrating this is as someone who wants to apply RL to the real world. 😡
December 8, 2024 at 6:13 PM
2/n OK, let's back up. In deep reinforcement learning (RL), we are almost always interested in solving a range of tasks. As such, we consider contextual RL, where the space of tasks is parameterized (minimal example below).
December 8, 2024 at 6:13 PM
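Concretely, a contextual task family can be as simple as one environment swept over a physical parameter. A minimal sketch using Gymnasium's Pendulum, where the context is gravity g (an illustrative choice of context variable, not necessarily the paper's setup):

```python
# Minimal contextual RL task family: one environment, parameterized by
# a scalar context (here, gravity g). Each context value is a task.
import gymnasium as gym
import numpy as np

contexts = np.linspace(5.0, 15.0, num=100)   # 1-D context space

def make_task(g: float) -> gym.Env:
    """Instantiate the Pendulum task for a given context value."""
    return gym.make("Pendulum-v1", g=g)

# Training on one context and evaluating on others probes zero-shot
# generalization across the task space.
env = make_task(float(contexts[0]))
obs, info = env.reset(seed=0)
```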
👼 As an applied RL researcher, I am the most optimistic I have been about RL in years. It feels like seeing the light at the end of the tunnel when RL training starts working reliably. Without a ton of compute or tuning. Very excited for what is to come. Here is what we did 👇
December 8, 2024 at 6:13 PM
Got it, that's a great insight. Revised!
December 2, 2024 at 4:07 PM