Ambroise Odonnat
@ambroiseodt.bsky.social
Ph.D. student in Machine Learning at Inria.
Website: https://ambroiseodt.github.io/
Blog: https://logb-research.github.io
Reposted by Ambroise Odonnat
SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation On Diverse Modalities was published in TMLR today 🚀. It was a huge team effort to design (and publish) an open-source, fully reproducible DA benchmark 🧵1/n. openreview.net/forum?id=k9F...
July 29, 2025 at 12:54 PM
🚀 We are happy to organize the BERT²S workshop @neuripsconf.bsky.social 2025 on Recent Advances in Time Series Foundation Models.
🌐 berts-workshop.github.io
📜Submit by August 22
🎓Speakers and panelists: Chenghao Liu, Mingsheng Long, Zoe Piran, Danielle C. Maddix, Ameet Talwalkar, Qingsong Wen
July 22, 2025 at 2:41 PM
🚀 Very happy to be presenting Large Language Models as Markov Chains at Cohere Labs on June 19th at 6 pm CEST (Paris time)!!

Huge thanks to Andrej Jovanović @cohere.com @cohereforai.bsky.social for the invitation 🤗

Paper: arxiv.org/pdf/2410.02724
Learn more: cohere.com/events/Coher...
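
A toy illustration of the paper's viewpoint (my own sketch, not code from the paper): with a finite vocabulary and a finite context window, a next-token model is a Markov chain. Here the context length is 1, so the transition matrix is just the row-wise softmax of the model's logits.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5                                            # toy vocabulary size
logits = rng.normal(size=(V, V))                 # row i: logits after seeing token i
P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # transition matrix

# Long-run token frequencies: the stationary distribution of the induced chain,
# read off the left eigenvector for the Perron eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
print(pi)
```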
June 13, 2025 at 7:54 AM
Reposted by Ambroise Odonnat
Skada Sprint Alert: Contribute to Domain Adaptation in Python

📖 Machine learning models often fail when the data distribution changes between training and testing. That’s where Domain Adaptation comes in — helping models stay reliable across domains.
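
A minimal sketch of that failure mode on hypothetical toy data (not from the sprint): a classifier fit on a source domain loses accuracy once the test features shift.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(shift, n=2000):
    # Two Gaussian classes; `shift` translates all features (covariate shift).
    X = np.concatenate([rng.normal(-1, 1, (n // 2, 2)),
                        rng.normal(1, 1, (n // 2, 2))]) + shift
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

X_src, y_src = make_data(shift=0.0)              # training (source) domain
X_tgt, y_tgt = make_data(shift=2.0)              # test (target) domain

clf = LogisticRegression().fit(X_src, y_src)
print(f"source accuracy: {clf.score(X_src, y_src):.2f}")
print(f"target accuracy: {clf.score(X_tgt, y_tgt):.2f}")   # drops under shift
```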
May 20, 2025 at 9:30 AM
🤗Thanks a lot @haeggee.bsky.social and @mjaggi.bsky.social for having me in the MLO group at EPFL @icepfl.bsky.social to present "Large Language Models as Markov Chains".

Slides are available on my website (link in thread).

🎉 New experiments with Llama and Gemma models in the updated paper!
February 28, 2025 at 1:03 PM
🤗 Very happy to have (humbly) contributed to this work!

This is a collab with the usual open-source suspects from Inria, @polytechniqueparis.bsky.social and @univparissaclay.bsky.social.

Check it out if you are interested in open-source reproducible research 😇
🚀 I’m pleased to announce a new preprint!

"SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation On Diverse Modalities"

📢 Check it out & contribute!
📜 Paper: arxiv.org/abs/2407.11676
💻 Code: github.com/scikit-adapt...
February 12, 2025 at 4:09 PM
Reposted by Ambroise Odonnat
🚀 Policy gradient methods like DeepSeek’s GRPO are great for finetuning LLMs via RLHF.

But what happens when we swap autoregressive generation for discrete diffusion, a rising architecture promising faster & more controllable LLMs?

Introducing SEPO!

📑 arxiv.org/pdf/2502.01384

🧵👇
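
For intuition, a toy sketch of the vanilla score-function (REINFORCE) estimator that policy-gradient methods such as GRPO build on; this is not SEPO itself, just the common foundation, on a made-up 4-action bandit.

```python
import torch

torch.manual_seed(0)
logits = torch.zeros(4, requires_grad=True)      # policy parameters over 4 actions
opt = torch.optim.SGD([logits], lr=0.5)
reward = torch.tensor([0.0, 0.0, 0.0, 1.0])      # only action 3 pays off

for _ in range(100):
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample((64,))                 # sampling is non-differentiable
    r = reward[actions]
    advantage = r - r.mean()                     # mean baseline for variance reduction
    loss = -(advantage * dist.log_prob(actions)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(logits, dim=0))              # mass concentrates on action 3
```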
February 4, 2025 at 3:42 PM
🚀Proud to share our work on the training dynamics in Transformers with Wassim Bouaziz & @viviencabannes.bsky.social @Inria @MetaAI

📝Easing Optimization Paths arxiv.org/pdf/2501.02362 (accepted @ICASSP 2025 🥳)

📝Clustering Heads 🔥 https://arxiv.org/pdf/2410.24050

🖥️ github.com/facebookrese...

1/🧵
February 4, 2025 at 11:56 AM
Happy to see Disentangled In-Context Learning accepted at ICLR 2025 🥳

Make zero-shot reinforcement learning with LLMs go brrr 🚀

🖥️ github.com/abenechehab/...

📜 arxiv.org/pdf/2410.11711

Congrats to Abdelhakim (abenechehab.github.io) for leading it; always fun working with nice and strong people 🤗
January 25, 2025 at 1:10 PM
🎤Presenting our work on Unsupervised Accuracy Estimation at #NeurIPS2024 this week!

✋🏾Poster Session 4 West - on Thu. at 4:30 pm

📍 Poster #4310 - East Exhibit Hall A-C

DM me if you'd like to chat :)
December 10, 2024 at 2:44 PM
Check out the new version of this awesome domain adaptation library! So nice to work with such good people 🤗
🚀 Skada v0.4.0 is out!

Skada is an open-source Python library built for domain adaptation (DA), helping machine learning models to adapt to distribution shifts.
Github: github.com/scikit-adapt...
Doc: scikit-adaptation.github.io
DOI: doi.org/10.5281/zeno...
Installation: `pip install skada`
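
A rough usage sketch from memory, assuming skada's scikit-learn-style API (`make_da_pipeline`, `CORALAdapter`, the sign convention for `sample_domain`, and masking target labels with -1 are all assumptions here); check the documentation above for the exact interface.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from skada import CORALAdapter, make_da_pipeline

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, (200, 2))                  # labeled source domain
X_tgt = rng.normal(0.5, 1.5, (200, 2))                  # shifted, unlabeled target
y_src = (X_src.sum(axis=1) > 0).astype(int)

X = np.concatenate([X_src, X_tgt])
y = np.concatenate([y_src, -np.ones(200, dtype=int)])   # target labels masked as -1
sample_domain = np.concatenate([np.ones(200), -np.ones(200)])  # >0 source, <0 target

pipe = make_da_pipeline(CORALAdapter(), LogisticRegression())
pipe.fit(X, y, sample_domain=sample_domain)             # adapt features, fit on source
```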
December 6, 2024 at 7:25 PM
🚨So, you want to predict your model's performance at test time?🚨

💡Our NeurIPS 2024 paper proposes 𝐌𝐚𝐍𝐨, a training-free and SOTA approach!

📑 arxiv.org/pdf/2405.18979
🖥️ https://github.com/Renchunzi-Xie/MaNo

1/🧵(A surprise at the end!)
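
For context, here is a minimal training-free baseline (not MaNo; see the paper for the actual estimator): the average maximum softmax probability over unlabeled test logits, used as a crude accuracy proxy.

```python
import numpy as np

def avg_confidence(logits: np.ndarray) -> float:
    """logits: (n_samples, n_classes) raw model outputs on unlabeled test data."""
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return float(probs.max(axis=1).mean())                  # accuracy proxy in [0, 1]

rng = np.random.default_rng(0)
print(avg_confidence(rng.normal(size=(1000, 10))))          # toy logits
```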
December 3, 2024 at 4:58 PM
Reposted by Ambroise Odonnat
Anne Gagneux, Ségolène Martin, @quentinbertrand.bsky.social, Rémi Emonet, and I wrote a tutorial blog post on flow matching: dl.heeere.com/conditional-... with lots of illustrations and intuition!

We got this idea after their cool work on improving Plug and Play with FM: arxiv.org/abs/2410.02423
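
A minimal conditional flow-matching training loop (my own toy sketch, far more bare-bones than the tutorial): interpolate x_t = (1 - t) x0 + t x1 between noise and data, and regress a small net onto the target velocity x1 - x0.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 2))  # input (x_t, t)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(1000):
    x1 = torch.randn(256, 2) * 0.1 + torch.tensor([2.0, 0.0])  # toy 2D data
    x0 = torch.randn(256, 2)                                   # Gaussian noise
    t = torch.rand(256, 1)
    xt = (1 - t) * x0 + t * x1                                 # linear path
    v_pred = net(torch.cat([xt, t], dim=1))
    loss = ((v_pred - (x1 - x0)) ** 2).mean()                  # regress velocity
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: integrate dx/dt = net(x, t) from t = 0 to 1, starting from noise.
```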
November 27, 2024 at 9:00 AM
Check this out: a low-hanging fruit of our recent work "Large Language Models as Markov Chains" arxiv.org/pdf/2410.02724
November 26, 2024 at 3:02 PM