Daniel Palenicek
@daniel-palenicek.bsky.social
PhD Researcher in Robot #ReinforcementLearning 🤖🧠 at IAS TU Darmstadt and hessian.AI advised by Jan Peters. Former intern at BCAI and Huawei R&D UK.
I'm super excited to have been named an #NVIDIA Graduate Fellowship Finalist! 💚

Huge thanks to my supervisor @jan-peters.bsky.social and all my collaborators.

Can't wait to join the NVIDIA Seattle Robotics Lab for my internship next summer! 🤖

blogs.nvidia.com/blog/graduat...
NVIDIA Awards up to $60,000 Research Fellowships to PhD Students
The Graduate Fellowship Program announced the latest awards of up to $60,000 each to 10 Ph.D. students involved in research that spans all areas of computing innovation.
December 13, 2025 at 4:26 PM
Had a really great time presenting our #NeurIPS paper at the poster session today. Thanks to everyone who stopped by.

If you are interested in sample-efficient #RL, check out our work:

Scaling Off-Policy Reinforcement Learning with Batch and Weight Normalization
🚀 New preprint "Scaling Off-Policy Reinforcement Learning with Batch and Weight Normalization"🤖

We propose CrossQ+WN, a simple yet powerful off-policy RL algorithm that improves sample efficiency and scales to higher update-to-data ratios. 🧵 t.co/Z6QrMxZaPY

#RL @ias-tudarmstadt.bsky.social
https://arxiv.org/abs/2502.07523v2
December 6, 2025 at 5:44 AM
🚀 New preprint! Introducing XQC, a simple, well-conditioned actor-critic that achieves SOTA sample efficiency in #RL
✅ ~4.5× fewer parameters than SimbaV2
✅ Scales to vision-based RL
👉 arxiv.org/pdf/2509.25174

Thanks to Florian Vogt @joemwatson.bsky.social @jan-peters.bsky.social
October 2, 2025 at 3:48 PM
🚀 New preprint "Scaling Off-Policy Reinforcement Learning with Batch and Weight Normalization"🤖

We propose CrossQ+WN, a simple yet powerful off-policy RL algorithm that improves sample efficiency and scales to higher update-to-data ratios. 🧵 t.co/Z6QrMxZaPY

#RL @ias-tudarmstadt.bsky.social
https://arxiv.org/abs/2502.07523v2
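For readers curious what the "WN" part looks like in practice, here is a minimal NumPy sketch of weight normalization (Salimans & Kingma, 2016), the reparameterization CrossQ+WN applies to network layers. The layer sizes, function name, and the choice to keep the scale `g` fixed are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def weight_normalized_linear(x, v, g, b):
    """Linear layer y = x @ W.T + b with W = g * v / ||v|| (row-wise).

    Decoupling the direction of each weight row (v) from its norm (g)
    keeps effective weight magnitudes in check, which is one way to
    stay stable at high update-to-data ratios.
    """
    norms = np.linalg.norm(v, axis=1, keepdims=True)  # per-output-unit norm
    w = g[:, None] * v / norms                        # unit-norm rows scaled by g
    return x @ w.T + b

rng = np.random.default_rng(0)
v = rng.normal(size=(64, 32))   # direction parameters (trainable)
g = np.ones(64)                 # scale; shown fixed to 1 here (assumption)
b = np.zeros(64)
x = rng.normal(size=(8, 32))    # a batch of 8 inputs
y = weight_normalized_linear(x, v, g, b)
print(y.shape)  # (8, 64)
```

With `g` fixed, gradient updates to `v` can only rotate the effective weights, not grow them; see the paper for how this interacts with CrossQ's batch normalization.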
May 23, 2025 at 12:50 PM
Check out our latest work, where we train an omnidirectional locomotion policy directly on a real quadruped robot in just a few minutes, based on our CrossQ RL algorithm 🚀
Shoutout to @nicobohlinger.bsky.social and Jonathan Kinzel.

@ias-tudarmstadt.bsky.social @hessianai.bsky.social
⚡️ Do you think training robot locomotion needs large-scale simulation? Think again!

We train an omnidirectional locomotion policy directly on a real quadruped in just a few minutes 🚀
Top speeds of 0.85 m/s, two different control approaches, indoor and outdoor experiments, and more! 🤖🏃‍♂️
March 19, 2025 at 10:34 AM