Daniel Palenicek
@daniel-palenicek.bsky.social
PhD Researcher in Robot #ReinforcementLearning 🤖🧠 at IAS TU Darmstadt and hessian.AI advised by Jan Peters. Former intern at BCAI and Huawei R&D UK.
Read the full preprint here:
👉 arxiv.org/pdf/2509.25174
Code coming soon.
We’d love feedback & discussion! 💬
October 2, 2025 at 3:48 PM
Key takeaway:
Well-conditioned optimization > raw scale.

XQC shows that principled architectural choices can outperform larger, more complex architectures
October 2, 2025 at 3:48 PM
📊 Results across 70 tasks (55 proprioception + 15 vision-based):

⚡️ Matches/outperforms SimbaV2, BRO, BRC, MRQ, and DRQ-V2

🌿 ~4.5× fewer parameters and ~1/10 of the FLOPs of SimbaV2

💪Especially strong on the hardest tasks: HumanoidBench, DMC Hard & DMC Humanoids from pixels
October 2, 2025 at 3:48 PM
This leads to XQC, a streamlined extension of Soft Actor-Critic with
✅ only 4 hidden layers
✅ BN after each linear layer
✅ WN projection
✅ CE critic loss

Simplicity + principled design = efficiency ⚡️
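
For concreteness, here is a minimal PyTorch sketch of such a critic block. The hidden width, the number of value bins, and the exact placement of the weight-normalized projection are illustrative assumptions, not XQC's actual hyperparameters:

```python
import torch
import torch.nn as nn

class XQCStyleCritic(nn.Module):
    """Sketch of a critic MLP with BatchNorm after every linear layer and a
    weight-normalized output projection. Hidden width (512) and the number of
    value bins (101) are illustrative assumptions, not XQC's hyperparameters."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 512, num_bins: int = 101):
        super().__init__()
        layers, in_dim = [], obs_dim + act_dim
        for _ in range(4):  # "only 4 hidden layers"
            layers += [nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU()]
            in_dim = hidden
        self.trunk = nn.Sequential(*layers)
        # weight-normalized projection onto value-bin logits for a cross-entropy critic loss
        self.head = torch.nn.utils.parametrizations.weight_norm(nn.Linear(hidden, num_bins))

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.head(self.trunk(torch.cat([obs, act], dim=-1)))  # logits over value bins
```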
October 2, 2025 at 3:48 PM
🔑 Insight: A simple synergy of BatchNorm + WeightNorm + Cross-Entropy loss makes critics dramatically better conditioned.

➡️Result: Stable effective learning rates and smoother optimization.
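
The exact cross-entropy formulation isn't spelled out in this thread; one common way to set up a cross-entropy critic loss is HL-Gauss-style categorical targets, sketched below with an illustrative bin range and σ:

```python
import torch
import torch.nn.functional as F

def hl_gauss_targets(td_target: torch.Tensor, v_min: float = -100.0, v_max: float = 100.0,
                     num_bins: int = 101, sigma: float = 1.0) -> torch.Tensor:
    """Project scalar TD targets (shape [batch]) onto a fixed bin support by
    integrating a Gaussian centred at the target over each bin (HL-Gauss style).
    All constants are illustrative."""
    edges = torch.linspace(v_min, v_max, num_bins + 1, device=td_target.device)
    cdf = torch.special.erf((edges[None, :] - td_target[:, None]) / (sigma * 2 ** 0.5))
    probs = cdf[:, 1:] - cdf[:, :-1]
    return probs / probs.sum(dim=-1, keepdim=True)

def ce_critic_loss(logits: torch.Tensor, td_target: torch.Tensor) -> torch.Tensor:
    # cross-entropy between predicted bin logits and the projected target distribution
    target_probs = hl_gauss_targets(td_target)
    return -(target_probs * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```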
October 2, 2025 at 3:48 PM
Instead of "bigger is better," we ask:
Can better conditioning beat scaling?

By analyzing the Hessian eigenspectrum of critic networks, we uncover how different architectural choices shape optimization landscapes.
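
For readers who want to run this kind of diagnostic themselves: the top Hessian eigenvalue of a critic loss can be estimated with Hessian-vector products and power iteration. This is a generic sketch, not the paper's analysis code:

```python
import torch

def top_hessian_eigenvalue(loss: torch.Tensor, params, iters: int = 50) -> float:
    """Estimate the largest Hessian eigenvalue of a scalar `loss` w.r.t. `params`
    via power iteration on Hessian-vector products. Generic diagnostic sketch,
    not the paper's exact spectral analysis."""
    params = [p for p in params if p.requires_grad]
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eigval = 0.0
    for _ in range(iters):
        norm = torch.sqrt(sum((x * x).sum() for x in v))
        v = [x / norm for x in v]
        # Hessian-vector product: gradient of (grad . v) w.r.t. the parameters
        hv = torch.autograd.grad(sum((g * x).sum() for g, x in zip(grads, v)),
                                 params, retain_graph=True)
        eigval = sum((h * x).sum() for h, x in zip(hv, v)).item()  # Rayleigh quotient
        v = [h.detach() for h in hv]
    return eigval
```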
October 2, 2025 at 3:48 PM
If you're working on RL stability, plasticity, or sample efficiency, this might be relevant for you.

We'd love to hear your thoughts and feedback!

Come talk to us at RLDM in June in Dublin (rldm.org)
May 23, 2025 at 12:50 PM
📚 TL;DR: We combine BN + WN in CrossQ for stable high-UTD training and SOTA performance on challenging RL benchmarks. No need for network resets, no critic ensembles, no other tricks... Simple regularization, big gains.

Paper: https://arxiv.org/abs/2502.07523v2
May 23, 2025 at 12:50 PM
⚖️ Simpler ≠ Weaker: Compared to SOTA baselines like BRO, our method:
✅ Needs 90% fewer parameters (~600K vs. 5M)
✅ Avoids parameter resets
✅ Scales stably with compute.

We also compare favorably to the concurrent SIMBA algorithm.

No tricks—just principled normalization. ✨
May 23, 2025 at 12:50 PM
🔬 The Result: CrossQ + WN scales reliably with increasing UTD—no more resets, no critic ensembles, no other tricks.
We match or outperform SOTA on 25 continuous control tasks from the DeepMind Control Suite & MyoSuite, including dog 🐕 and humanoid 🧍‍♂️ tasks, across UTDs.
May 23, 2025 at 12:50 PM
➡️ With growing weight norms, the effective learning rate decreases and learning slows down or even stops.

💡Solution: After each gradient update, we rescale parameters to the unit sphere, preserving plasticity and keeping the effective learning rate stable.
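
A minimal PyTorch sketch of that projection step (rescaling each linear layer's weight matrix to unit Frobenius norm is an illustrative choice here, not necessarily the paper's exact implementation):

```python
import torch

@torch.no_grad()
def project_to_unit_sphere(module: torch.nn.Module) -> None:
    """After each optimizer step, rescale every linear layer's weight matrix back
    to unit norm. With BatchNorm downstream the network function is unchanged,
    but the effective learning rate no longer decays as the weights grow.
    (Per-layer Frobenius norm is an illustrative choice here.)"""
    for m in module.modules():
        if isinstance(m, torch.nn.Linear):
            m.weight.mul_(1.0 / m.weight.norm())

# usage (names are placeholders):
#   loss.backward(); optimizer.step(); project_to_unit_sphere(critic)
```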
May 23, 2025 at 12:50 PM
🧠 Key Idea: BN improves sample efficiency but fails to scale reliably to complex tasks & high UTDs due to growing weight norms.
However, BN-regularized networks are scale-invariant w.r.t. their weights, while the gradients scale inversely proportionally to the weight norm (van Laarhoven, 2017).
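
In equations, for a scale-invariant (BN-regularized) loss L and weights w:

```latex
% Scale invariance w.r.t. the weights and its consequence for the
% effective learning rate (van Laarhoven, 2017):
L(cw) = L(w) \;\;\forall c > 0
\quad\Longrightarrow\quad
\nabla L(cw) = \frac{1}{c}\,\nabla L(w),
\qquad
\eta_{\mathrm{eff}} \;\propto\; \frac{\eta}{\lVert w \rVert^{2}}.
```

So as ‖w‖ grows during training, the effective learning rate shrinks, which is exactly the slowdown described in the post above.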
May 23, 2025 at 12:50 PM
🔍 Background: Off-policy RL methods like CrossQ (Bhatt* & Palenicek* et al. 2024) are sample-efficient but struggle to scale to high update-to-data (UTD) ratios.

We identify why scaling CrossQ fails—and fix it with a surprisingly effective tweak: Weight Normalization (WN). 🏋️
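
For context, the UTD ratio is simply the number of gradient updates per environment step. A schematic high-UTD off-policy loop looks like the sketch below (Gymnasium-style interfaces and placeholder names, not CrossQ's actual API):

```python
def train_off_policy(agent, env, replay_buffer, total_env_steps: int,
                     utd: int = 10, batch_size: int = 256) -> None:
    """Schematic high-UTD off-policy loop. `agent`, `env`, `replay_buffer` are
    placeholders with Gymnasium-style / obvious interfaces, not CrossQ's API."""
    obs, _ = env.reset()
    for _ in range(total_env_steps):
        action = agent.act(obs)
        next_obs, reward, terminated, truncated, _ = env.step(action)
        replay_buffer.add(obs, action, reward, next_obs, terminated)
        obs = env.reset()[0] if (terminated or truncated) else next_obs
        # UTD ratio = gradient updates per environment step; high UTD reuses data aggressively
        for _ in range(utd):
            agent.update(replay_buffer.sample(batch_size))
```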
May 23, 2025 at 12:50 PM