Google DeepMind
Paris
Adding to the discussion on using least-squares or cross-entropy, i.e., regression or classification formulations of supervised problems!
A thread on how to bridge the two:
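The thread itself isn't quoted here, so as a standalone illustration (my own toy example with made-up data, not the thread's content): fit the same linear model to one-hot targets with both losses. The bridge is visible in the gradients, which share the residual form Xᵀ(p − Y)/n, where p is the raw linear output for least-squares and the softmax output for cross-entropy.

```python
# Minimal sketch (toy data, not from the thread): least-squares on
# one-hot targets vs. cross-entropy on softmax, same linear model.
# Both yield a usable classifier via argmax of the scores.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 5, 3
X = rng.normal(size=(n, d))
y = rng.integers(k, size=n)
Y = np.eye(k)[y]                      # one-hot targets

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W_ls = np.zeros((d, k))               # least-squares weights
W_ce = np.zeros((d, k))               # cross-entropy weights
lr = 0.1
for _ in range(500):
    # Least squares: gradient of 0.5 * ||X W - Y||^2 / n
    W_ls -= lr * X.T @ (X @ W_ls - Y) / n
    # Cross-entropy: gradient of mean -log softmax at the true class
    W_ce -= lr * X.T @ (softmax(X @ W_ce) - Y) / n

for name, W in [("least-squares", W_ls), ("cross-entropy", W_ce)]:
    acc = ((X @ W).argmax(axis=1) == y).mean()
    print(f"{name}: train accuracy = {acc:.2f}")
```

Same residual, different link function inside the loss; that is the whole difference in this toy case.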
Big News: For the first time, there will be a day of workshops at #AISTATS 2026, in Tangier, Morocco 🌴🇲🇦
Quentin Berthet @qberthet.bsky.social and I are workshop chairs.
virtual.aistats.org/Conferences/...
Deadline: Oct 17, AOE
Looking forward to meeting new people and learning new things. Feel free to reach out if you want to talk about Google DeepMind.
I’m delighted to tell you about our new paper, Soft Condorcet Optimization (SCO) for Ranking of General Agents, to be presented at AAMAS 2025! 🧵 1/N
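The post doesn't spell out the objective, so here is a hedged sketch of one plausible "soft Condorcet" setup, not the paper's exact method: assign each agent a scalar score and minimize a sigmoid-smoothed count of observed pairwise preferences that the induced ranking violates. The data, temperature, and loss form below are all my assumptions.

```python
# Hedged illustration only, not SCO as published: learn scalar scores
# so that a smoothed count of violated pairwise preferences is small.
import numpy as np

rng = np.random.default_rng(0)
agents = ["A", "B", "C", "D"]
# Observed pairwise preferences (winner, loser), possibly noisy and
# incomplete, e.g. collected from per-task votes. Made-up data.
prefs = [(0, 1), (0, 2), (1, 2), (2, 3), (0, 3), (1, 3), (2, 1)]

scores = np.zeros(len(agents))
lr, temp = 0.5, 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    grad = np.zeros_like(scores)
    for w, l in prefs:
        # Soft violation: sigmoid((s_l - s_w)/temp) is close to 1 when
        # the current scores disagree with the observed preference.
        p = sigmoid((scores[l] - scores[w]) / temp)
        g = p * (1 - p) / temp
        grad[l] += g
        grad[w] -= g
    scores -= lr * grad

ranking = [agents[i] for i in np.argsort(-scores)]
print("learned ranking:", ranking)
```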
Learning Theory for Kernel Bilevel Optimization
w/ @fareselkhoury.bsky.social E. Pauwels @michael-arbel.bsky.social
We provide generalization error bounds for bilevel optimization problems where the inner objective is minimized over an RKHS.
arxiv.org/abs/2502.08457
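As a concrete toy instance of the setting (my own example, not the paper's): the inner problem is kernel ridge regression, whose RKHS minimizer has a closed form, and the outer problem tunes the regularization strength against a held-out loss.

```python
# Toy kernel bilevel problem (illustrative, not from the paper):
# inner = kernel ridge regression over an RKHS, solved in closed form;
# outer = choose the regularization lambda that minimizes validation loss.
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
Xtr, ytr, Xva, yva = X[:50], y[:50], X[50:], y[50:]

Ktr = rbf(Xtr, Xtr)
Kva = rbf(Xva, Xtr)

def outer_loss(log_lam):
    # Inner solution: argmin over the RKHS of squared loss + lam*||f||^2,
    # in closed form as alpha = (K + n*lam*I)^{-1} y.
    lam = np.exp(log_lam)
    alpha = np.linalg.solve(Ktr + len(ytr) * lam * np.eye(len(ytr)), ytr)
    return ((Kva @ alpha - yva) ** 2).mean()

# Outer descent on log(lambda) with a finite-difference gradient.
log_lam, lr, eps = 0.0, 0.5, 1e-4
for _ in range(100):
    g = (outer_loss(log_lam + eps) - outer_loss(log_lam - eps)) / (2 * eps)
    log_lam -= lr * g
print(f"tuned lambda = {np.exp(log_lam):.4f}, val loss = {outer_loss(log_lam):.4f}")
```

The paper's bounds concern how well such a validation-tuned inner solution generalizes; the sketch only sets up the two levels.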
But what happens when we swap autoregressive generation for discrete diffusion, a rising architecture promising faster & more controllable LLMs?
Introducing SEPO!
📑 arxiv.org/pdf/2502.01384
🧵👇
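For intuition on the swap (a hedged toy contrast, not SEPO's algorithm): autoregressive decoding makes one model call per token, strictly left to right, while a masked discrete-diffusion sampler starts from an all-mask sequence and unmasks a block of positions per denoising step, so the number of model calls scales with the step count rather than the sequence length.

```python
# Toy contrast of the two generation loops (illustrative only; the
# "model" is a random stand-in, and SEPO itself is not implemented here).
import random

VOCAB = list("abcde")
MASK = "_"

def toy_model(seq):
    # Stand-in for a learned denoiser: proposes a token per masked slot.
    return {i: random.choice(VOCAB) for i, t in enumerate(seq) if t == MASK}

def autoregressive(n):
    seq = []
    for _ in range(n):                      # n model calls
        seq.append(random.choice(VOCAB))
    return "".join(seq)

def masked_diffusion(n, steps=4):
    seq = [MASK] * n
    for _ in range(steps):                  # `steps` model calls, steps << n
        proposals = toy_model(seq)
        masked = list(proposals)
        random.shuffle(masked)
        for i in masked[: max(1, n // steps)]:  # unmask a block per step
            seq[i] = proposals[i]
    for i, t in enumerate(seq):             # commit any leftover masks
        if t == MASK:
            seq[i] = toy_model(seq)[i]
    return "".join(seq)

random.seed(0)
print("AR :", autoregressive(12))
print("MDM:", masked_diffusion(12))
```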
www.lemonde.fr/sciences/art...
Many messages that mean a lot to me: teamwork, free software, scientific rigor.
Thank you to the colleagues and friends who shared their words; I am moved reading them.
📑 100 papers from NeurIPS 2024. Nearly twice as many as in 2023!
🧑‍🎓 over 300 registered participants
✅ a local and sustainable alternative to flying to Vancouver.
More info: neuripsinparis.github.io/neurips2024p...
It looks intense; I hope it's correct
arxiv.org/abs/2411.19826
Also a shout-out to the authors of the methods we build on: @qberthet.bsky.social @mblondel.bsky.social @marcocuturi.bsky.social @bachfrancis.bsky.social
People can be forgetful or have busy lives, but they will often reply or update their scores when reminded.