Ben Eysenbach
@ben-eysenbach.bsky.social
Assistant professor at Princeton CS working on reinforcement learning and AI/ML.
Site: https://ben-eysenbach.github.io/
Lab: https://princeton-rl.github.io/
🤖 Excited to share SLAP, @yijieisabelliu.bsky.social's new algorithm that uses RL to provide better skills for planning!

Check out the website for code, videos, and pre-trained models: github.com/isabelliu0/S...
November 5, 2025 at 4:27 PM
Kids spend years playing with blocks, building spatial+arithmetic skills. Today, AI models just read.

While AI research often conflates reasoning with language models, block-building lets us study how embodied reasoning might emerge from exploration and trial-and-error learning!
Can AI models build worlds that today's generative models can only dream of?

Presenting BuilderBench (website: t.co/H7wToslhXG).

Details below 🧵⬇️
October 16, 2025 at 11:21 PM
Reposted by Ben Eysenbach
🚨 Excited to announce our #NeurIPS2025 Workshop: Data on the Brain & Mind

📣 Call for: Findings (4- or 8-page) + Tutorials tracks

🎙️ Speakers include @dyamins.bsky.social @lauragwilliams.bsky.social @cpehlevan.bsky.social

🌐 Learn more: data-brain-mind.github.io
August 4, 2025 at 3:28 PM
Check out @raj-ghugare.bsky.social's new paper on the surprising effectiveness of normalizing flows (NF) in RL 🚀

This project changed my mind in 2 ways:
1/ Diffusion policies, flow models, and EBMs have become ubiquitous in RL. Turns out NFs can perform just as well -- no ODEs/SDEs required!
Normalizing Flows (NFs) check all the boxes for RL: exact likelihoods (imitation learning), efficient sampling (real-time control), and variational inference (Q-learning)! Yet they are overlooked in favor of more expensive and less flexible contemporaries like diffusion models.

Are NFs fundamentally limited?
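A minimal sketch of why NFs check those boxes, assuming a RealNVP-style affine coupling flow in PyTorch (the names, sizes, and coupling design are my illustrative choices, not the paper's code): the same network gives exact log-likelihoods in one forward pass and draws samples in one inverse pass, with no ODE/SDE solver in the loop.

```python
# Illustrative sketch, NOT the paper's implementation: a RealNVP-style
# affine coupling flow showing exact log_prob (useful for imitation
# learning / Q-learning) and one-pass sampling (real-time control).
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        # x -> z, returning log|det J| for the exact change of variables.
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)  # bounded log-scales keep training stable
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1), s.sum(-1)

    def inverse(self, z):
        # z -> x in closed form: a single pass, no ODE/SDE solve.
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=-1)
        s = torch.tanh(s)
        return torch.cat([z1, (z2 - t) * torch.exp(-s)], dim=-1)

class Flow(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))
        self.perms = [torch.randperm(dim) for _ in range(n_layers)]
        self.base = torch.distributions.Normal(0.0, 1.0)

    def log_prob(self, x):
        # Exact density: base log-prob plus the sum of coupling log-dets.
        logdet = torch.zeros(x.shape[0])
        for layer, p in zip(self.layers, self.perms):
            x, ld = layer(x[:, p])
            logdet = logdet + ld
        return self.base.log_prob(x).sum(-1) + logdet

    def sample(self, n):
        # Invert each layer in reverse order, starting from Gaussian noise.
        z = self.base.sample((n, self.dim))
        for layer, p in zip(reversed(self.layers), reversed(self.perms)):
            z = layer.inverse(z)[:, torch.argsort(p)]
        return z
```

A maximum-likelihood imitation objective is then just `loss = -flow.log_prob(actions).mean()`.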
June 5, 2025 at 6:21 PM
tldr: increase the depth of your RL networks by several orders of magnitude.

Our new paper shows that very, very deep networks are surprisingly useful for RL if you use resnets, layer norm, and self-supervised RL!

Paper, code, videos: wang-kevin3290.github.io/scaling-crl/
1/ While most RL methods use shallow MLPs (~2–5 layers), we show that scaling contrastive RL (CRL) up to 1000 layers can significantly boost performance, from 2x to 50x, across a diverse suite of robotic tasks.

Webpage+Paper+Code: wang-kevin3290.github.io/scaling-crl/
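A quick sketch of the recipe the thread names (residual connections plus layer norm); the widths and exact block layout below are illustrative assumptions, not the paper's architecture:

```python
# Pre-norm residual MLP blocks: the identity shortcut and LayerNorm are
# what let gradients survive hundreds of layers. Sizes are illustrative.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.norm = nn.LayerNorm(width)
        self.fc1 = nn.Linear(width, width)
        self.fc2 = nn.Linear(width, width)

    def forward(self, x):
        # Pre-norm: normalize, transform, then add back the input.
        return x + self.fc2(torch.relu(self.fc1(self.norm(x))))

def deep_encoder(in_dim, width=256, depth=1000):
    # Two linear layers per block, so depth // 2 residual blocks.
    layers = [nn.Linear(in_dim, width)]
    layers += [ResidualBlock(width) for _ in range(depth // 2)]
    return nn.Sequential(*layers)
```

In contrastive RL, two such encoders (one for state-action pairs, one for goals) would be trained so that embeddings of reachable pairs score higher than those of random pairs.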
March 21, 2025 at 4:17 PM
Excited to share new work led by @vivekmyers.bsky.social and @crji.bsky.social that proves you can learn to reach distant goals by training solely on nearby goals. The key idea is a new form of invariance, which implies generalization w.r.t. the horizon.
Reinforcement learning agents should be able to improve upon behaviors seen during training.
In practice, RL agents often struggle to generalize to new long-horizon behaviors.
Our new paper studies *horizon generalization*, the degree to which RL algorithms generalize to reaching distant goals. 1/
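A rough formalization of the invariance idea, paraphrasing the thread (the notation below is mine, not necessarily the paper's):

```latex
% Planning invariance (paraphrase): conditioning on a distant goal g
% should act like conditioning on a waypoint w along the way to g.
\[
  \pi(\cdot \mid s, g) = \pi(\cdot \mid s, w)
  \quad \text{for any } w \text{ with } d(s, w) + d(w, g) = d(s, g),
\]
% where d denotes shortest-path distance. A policy with this property
% that is trained only on nearby pairs, d(s, g) <= H, extends to
% distant goals: any long path decomposes into segments of length at
% most H that the policy has already mastered.
```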
February 6, 2025 at 1:13 AM