Soufiane Hayou
@soufianehayou.bsky.social
Asst Professor at Johns Hopkins (AMS and DSAI). Previously: Simons Institute, Oxford stats, Polytechnique. I like to scale up things!

https://www.soufianehayou.com/
Shoutout to my collaborators Nikhil Ghosh and Bin Yu for their help with this project.
June 30, 2025 at 9:26 PM
✅ PLoP consistently outperforms common placement strategies (Attn, MLP)
✅ Works across different post-training scenarios: supervised fine-tuning and reinforcement learning
✅ Minimal computational overhead
In the worst case, it ties with the best manual approach. Usually, it's better.
June 30, 2025 at 9:26 PM
NFN measures the alignment between each module (in the pretrained model) and the finetuning task. NFN is a cheap metric that can be computed in a single forward pass. It is based on a large-width analysis of module-data alignment and is well suited to LoRA finetuning.
June 30, 2025 at 9:26 PM
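To make the idea concrete, here is a minimal sketch of what a one-forward-pass, per-module score could look like in PyTorch. It is not the paper's exact NFN definition: the ratio of output to input feature norms is only a stand-in proxy, and the function name `compute_nfn` plus the module suffixes are assumptions for illustration.

```python
import torch

def compute_nfn(model, batch, target_types=("q_proj", "k_proj", "v_proj", "o_proj",
                                             "gate_proj", "up_proj", "down_proj")):
    """Hypothetical sketch: collect a per-module alignment score from a
    single forward pass on a batch of finetuning data."""
    scores, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            x, y = inputs[0].detach(), output.detach()
            # Ratio of output to input feature norms as a rough proxy for
            # module-data alignment (stand-in for the NFN formula in the paper).
            scores[name] = (y.norm() / (x.norm() + 1e-8)).item()
        return hook

    for name, module in model.named_modules():
        if any(name.endswith(t) for t in target_types):
            hooks.append(module.register_forward_hook(make_hook(name)))

    with torch.no_grad():
        model(**batch)  # one forward pass on task data

    for h in hooks:
        h.remove()
    return scores
```

Because the scores are gathered with forward hooks during a single pass, the overhead stays negligible compared to the finetuning run itself.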
Our solution: PLoP (Precise LoRA Placement) 🎯
Instead of guessing, it automatically identifies the optimal modules for LoRA placement based on a notion of module-data alignment that we call NFN (Normalised Feature Norms).
June 30, 2025 at 9:26 PM
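A rough sketch of how such scores could drive placement, assuming a Hugging Face PEFT setup: rank modules by their score and attach LoRA adapters only to the top ones. The selection rule (`plop_target_modules`, the `top_frac` threshold) is hypothetical, not the criterion used in the paper.

```python
from peft import LoraConfig, get_peft_model

def plop_target_modules(nfn_scores, top_frac=0.25):
    """Hypothetical selection rule: keep the highest-scoring module types."""
    ranked = sorted(nfn_scores.items(), key=lambda kv: kv[1], reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    # Reduce full module paths (e.g. "model.layers.0.self_attn.q_proj")
    # to their final component so PEFT can match modules by suffix.
    return sorted({name.split(".")[-1] for name, _ in ranked[:k]})

# Usage (names assumed):
# nfn_scores = compute_nfn(model, batch)
# config = LoraConfig(r=16, target_modules=plop_target_modules(nfn_scores))
# model = get_peft_model(model, config)
```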
❌ Original LoRA paper: "Prioritize attention"
❌ Other papers: "Actually, put them in MLP"
❌ Everyone: just guessing and trying common target modules
June 30, 2025 at 9:26 PM