Skander Moalla
@skandermoalla.bsky.social
PhD @Caglar Gulcehre Lab for AI Research (CLAIRE) @EPFL. Deep Reinforcement Learning, RLHF, foundation models.
ML Research Template (https://github.com/CLAIRE-Labo/python-ml-research-template)
I’m really proud of this work! It’s been an amazing collaboration with @simonmatrenok.bsky.social and @caglarai.bsky.social

📰 Paper: arxiv.org/abs/2507.08068
Hidden gems and open questions in the 30+ page appendix💎
🧑‍💻 Code: github.com/CLAIRE-Labo/...
🌐 Blog: claire-labo.github.io/quantile-rewar
Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions
Aligning large language models with pointwise absolute rewards has so far required online, on-policy algorithms such as PPO and GRPO. In contrast, simpler methods that can leverage offline or off-poli...
arxiv.org
July 15, 2025 at 6:45 PM
What do these optimal policies look like? 👀
We show the equivalence of a family of transformations, allowing us to qualitatively interpret the quantile-reward optimal policy as a Best-of-N policy 🎯
Empirically, each transformation brings different dynamics, and it's an open question to compare all of them! 🕵️
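For intuition, one hedged example of such an equivalence (a sketch, not necessarily the exact transformation family from the paper): writing q(x, y) for the quantile of the reward under the reference model, applying the log transformation τ(q) = log q makes the KL-regularized optimum match the density of a Best-of-N policy when β = 1/(N − 1):

```latex
\pi^{*}_{\log}(y|x) \propto \pi_{\mathrm{ref}}(y|x)\, e^{\log q(x,y)/\beta}
                   = \pi_{\mathrm{ref}}(y|x)\, q(x,y)^{1/\beta},
\qquad
\pi_{\mathrm{BoN}}(y|x) = N\, \pi_{\mathrm{ref}}(y|x)\, q(x,y)^{N-1},
```

which are the same density up to normalization once β = 1/(N − 1).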
July 15, 2025 at 6:45 PM
QRPO is a framework. You can shape the optimal policy! 🎛️
We derive a framework around QRPO for using transformations on top of the quantile reward.
Each transformation reshapes the reward distribution and changes the properties of the optimal policy, while keeping the partition function tractable.
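A sketch of why this stays tractable: writing q(x, y) for the quantile of the reward under the reference model (uniform on [0, 1]) and τ for a generic transformation (notation assumed here, not necessarily the paper's), the partition function reduces to a one-dimensional integral:

```latex
\pi^{*}_{\tau}(y|x) \propto \pi_{\mathrm{ref}}(y|x)\, e^{\tau(q(x,y))/\beta},
\qquad
Z_{\tau}(x) = \mathbb{E}_{y \sim \pi_{\mathrm{ref}}(\cdot|x)}\!\big[e^{\tau(q(x,y))/\beta}\big]
            = \int_{0}^{1} e^{\tau(q)/\beta}\, dq.
```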
July 15, 2025 at 6:45 PM
And we show that for relatively high beta, with good data, the probabilities increase as predicted 💯
July 15, 2025 at 6:45 PM
For QRPO, this is not a mystery anymore; we know exactly where the probabilities should move, and we explain why it's normal for them to decrease when the regularization (beta) is very low.
This is simply because the target policy is much further away from the training support 🎯
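A back-of-the-envelope sketch of that "exact target" (assuming the plain quantile reward and its closed-form Z): the regression aims each completion's log-probability at

```latex
\log \pi^{*}(y|x) = \log \pi_{\mathrm{ref}}(y|x) + \frac{q(x,y)}{\beta} - \log Z(x),
```

and since log Z(x) grows roughly like 1/β as β shrinks, any completion whose quantile is not essentially at the top gets a target below its reference log-probability, so a decrease is expected rather than a pathology.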
July 15, 2025 at 6:45 PM
Is QRPO still subject to the "chosen probabilities decreasing" problem?
Our understanding of the KL-regularized closed-form solution gives insights into the "DPO chosen probabilities decreasing" problem! 🤔
July 15, 2025 at 6:45 PM
💬 The reward model we use has been trained to be robust to length bias, and we see that this is preserved in QRPO and REBEL, which use the rewards directly.
But when the reward is compressed into preferences for DPO and SimPO, the typical length-bias trend appears, despite the reduction in mean length.
July 15, 2025 at 6:45 PM
🥇 QRPO achieves top performance in chat and coding compared to DPO, REBEL, and SimPO, each capturing a different way to learn from the reward signal (preference, reward difference, length normalization).
July 15, 2025 at 6:45 PM
Obviously, nothing comes for free, but we give you a great deal! 🤝

* QRPO does not need many reference rewards to estimate quantiles: for high-quality offline datasets, 1-3 are enough!

* And you can scale this number for off-policy data generated from the reference model! 📈
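A minimal sketch of what estimating that quantile from a handful of reference rewards could look like (the helper name is hypothetical, not from the paper's code):

```python
import torch

def estimate_quantile(reward: float, reference_rewards: torch.Tensor) -> torch.Tensor:
    """Estimate the quantile of a completion's reward under the reference
    model's reward distribution for the same prompt.

    reference_rewards: rewards of completions sampled from the reference
    model (1-3 can already be enough for high-quality offline data).
    """
    # Fraction of reference rewards that the completion's reward matches or beats.
    return (reference_rewards <= reward).float().mean()
```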
July 15, 2025 at 6:45 PM
3️⃣ We can transform the rewards so that their distribution is known. It's uniform for reward quantiles! 🔑

🚀 The result: Quantile Reward Policy Optimization!

QRPO transforms rewards into quantile rewards, for which we derive Z, and can then fit the closed-form optimal RL solution with a simple regression! 📉
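A minimal sketch of how such a pointwise regression could look, assuming the plain quantile reward q ∈ [0, 1] (uniform under the reference model), for which Z has a closed form; the function name and exact loss are illustrative, not copied from the paper's code:

```python
import math
import torch

def quantile_regression_loss(logp_policy: torch.Tensor,
                             logp_ref: torch.Tensor,
                             quantile_reward: torch.Tensor,
                             beta: float) -> torch.Tensor:
    """Pointwise regression toward the closed-form optimal policy.

    logp_policy, logp_ref: summed log-probs of each completion under the
    trained and reference models, shape [batch].
    quantile_reward: quantile of each completion's reward under the
    reference reward distribution, in [0, 1], shape [batch].
    """
    # With q ~ Uniform(0, 1): Z = \int_0^1 e^{q/beta} dq = beta * (e^{1/beta} - 1).
    log_z = math.log(beta * math.expm1(1.0 / beta))
    # Closed form: beta * log(pi* / pi_ref) = q - beta * log Z.
    target = quantile_reward - beta * log_z
    pred = beta * (logp_policy - logp_ref)
    return ((pred - target) ** 2).mean()
```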
July 15, 2025 at 6:45 PM
1️⃣ The “infinite sum over all possible LLM generations” argument is a myth. We rewrite the partition function Z in terms of rewards, revealing that Z is given by the moment generating function (MGF) of the reward distribution!

2️⃣ Knowing the reward distribution => knowing the MGF => knowing Z 🔐
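A sketch of the rewriting: treating the reward of a reference completion as a random variable R = r(x, y) with y ~ π_ref(·|x), the sum over all generations collapses into an expectation over rewards, i.e. the MGF of the reward distribution evaluated at 1/β:

```latex
Z(x) = \sum_{y} \pi_{\mathrm{ref}}(y|x)\, e^{r(x,y)/\beta}
     = \mathbb{E}_{y \sim \pi_{\mathrm{ref}}(\cdot|x)}\!\big[e^{r(x,y)/\beta}\big]
     = \mathbb{E}\!\big[e^{R/\beta}\big]
     = M_{R\,|\,x}\!\big(\tfrac{1}{\beta}\big).
```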
July 15, 2025 at 6:45 PM
We tackle the infamous “... partition function is known to be intractable...” problem 🧐
This is the problem that limits DPO-like methods to pairwise data. We solve it thanks to 3 insights! 💡
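For reference, the standard KL-regularized objective and its closed-form solution, where Z(x) is the partition function usually declared intractable (standard RLHF notation, not taken verbatim from the paper):

```latex
\max_{\pi}\; \mathbb{E}_{y \sim \pi(\cdot|x)}\!\big[r(x,y)\big]
  - \beta\,\mathrm{KL}\!\big(\pi(\cdot|x)\,\|\,\pi_{\mathrm{ref}}(\cdot|x)\big)
\;\;\Longrightarrow\;\;
\pi^{*}(y|x) = \frac{\pi_{\mathrm{ref}}(y|x)\, e^{r(x,y)/\beta}}{Z(x)},
\qquad
Z(x) = \sum_{y} \pi_{\mathrm{ref}}(y|x)\, e^{r(x,y)/\beta}.
```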
July 15, 2025 at 6:45 PM