Website: https://ambroiseodt.github.io/
Blog: https://logb-research.github.io
🌐 berts-workshop.github.io
📜 Submit by August 22
🎓Speakers and panelists: Chenghao Liu, Mingsheng Long, Zoe Piran, Danielle C. Maddix, Ameet Talwalkar, Qingsong Wen
Slides are available on my website (link in thread).
🎉 New experiments with Llama and Gemma models in the updated paper!
Hopefully, more to come soon🤠
"Moi, si je devais résumer ma vie aujourd’hui avec vous, je dirais que c’est d’abord des rencontres."
6/🧵
5/🧵
4/🧵
3/🧵
2/🧵
📝 Easing Optimization Paths arxiv.org/pdf/2501.02362 (accepted @ICASSP 2025 🥳)
📝 Clustering Heads 🔥 https://arxiv.org/pdf/2410.24050
🖥️ github.com/facebookrese...
1/🧵
✋🏾 Poster Session 4 West - on Thu. at 4:30 pm
📍 Poster #4310 - East Exhibit Hall A-C
DM me if you'd like to chat :)
Finally, I want to thank @ramealexandre.bsky.social and Youssef Attia El Hili for fruitful discussions during the elaboration of this work.
🧵/🧵
Our work includes a result akin to that of @petar-v.bsky.social in "softmax is not enough (for sharp out-of-distribution)" (arxiv.org/pdf/2410.01104). We discuss its implications in the context of unsupervised accuracy estimation.
12/🧵
11/🧵
10/🧵
9/🧵
8/🧵
7/🧵
6/🧵
5/🧵
4/🧵
3/🧵
1) No access to pre-training data,
2) Unlabeled test data,
3) Distribution shift between training and test data.
2/🧵
💡Our NeurIPS 2024 paper proposes 𝐌𝐚𝐍𝐨, a training-free and SOTA approach!
📑 arxiv.org/pdf/2405.18979
🖥️ https://github.com/Renchunzi-Xie/MaNo
1/🧵 (A surprise at the end!)