David Heurtel-Depeiges
@heurteldepeiges.bsky.social
1st Year PhD Student @ Mila under the supervision of Sarath Chandar
Reposted by David Heurtel-Depeiges
At Chandar Lab, we are happy to announce the third edition of our assistance program to provide feedback for members of communities underrepresented in AI who want to apply to high-profile graduate programs. Want feedback? Details: chandar-lab.github.io/grad_app/. Deadline: Nov 01!
October 3, 2025 at 3:20 PM
Have a look at the NovoMolGen implementation on our lab's HF page! It's easy to work with, and you can generate new molecules in no time.
We just made NovoMolGen easy to play with: Transformers-native checkpoints on the Hub and small notebooks that let you load, sample, and fine-tune in minutes. A few lines of code load the model, plug in a reward, run a short RL fine-tune, and plot the curve.
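Roughly what those few lines might look like, as a minimal sketch rather than the notebooks' actual code: the repo id and the toy reward below are placeholders (the real checkpoint names are on the lab's Hugging Face page), and only a standard Transformers causal-LM interface is assumed.

```python
# Minimal sketch, not the official notebook: load a Transformers-native
# NovoMolGen checkpoint, sample SMILES strings, and take one REINFORCE-style
# policy-gradient step on a toy reward. "chandar-lab/NovoMolGen" is a
# hypothetical repo id; check the lab's HF page for the real checkpoint names.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "chandar-lab/NovoMolGen"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

def reward_fn(smiles: str) -> float:
    # Toy stand-in; a real run would score chemistry (QED, docking, ...).
    return float(len(set(smiles)))

# Assumes the tokenizer defines a BOS (or at least an EOS) token.
bos = tokenizer.bos_token_id if tokenizer.bos_token_id is not None else tokenizer.eos_token_id
prompt = torch.tensor([[bos]])

# Sample a small batch of candidate molecules.
samples = model.generate(
    prompt, do_sample=True, max_new_tokens=64,
    num_return_sequences=8, pad_token_id=tokenizer.eos_token_id,
)

# REINFORCE: push up the log-likelihood of above-average-reward samples.
rewards = torch.tensor(
    [reward_fn(tokenizer.decode(s, skip_special_tokens=True)) for s in samples]
)
advantages = rewards - rewards.mean()  # mean baseline for variance reduction
logits = model(samples).logits[:, :-1]  # logits at step t predict token t+1
logprobs = torch.log_softmax(logits, dim=-1)
logprobs = logprobs.gather(-1, samples[:, 1:, None]).squeeze(-1)
loss = -(advantages[:, None] * logprobs).mean()  # padding ignored for brevity
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Looping the sample/score/update steps and logging the mean reward per step would give the curve the post mentions.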
September 8, 2025 at 5:14 PM
Collaborative multi-agent reinforcement learning is key for the future of AI. Check out R3D2, a generalist agent for text-based Hanabi, accepted at ICLR 2025.

Website: chandar-lab.github.io/R3D2-A-Gener...
April 4, 2025 at 5:16 PM
Reposted by David Heurtel-Depeiges
I am excited to share that our BindGPT paper won the best poster award at #AAAI2025! Congratulations to the team! Work led by @artemzholus.bsky.social!
March 5, 2025 at 2:54 PM
Reposted by David Heurtel-Depeiges
The best part? We are open-sourcing everything, including the intermediate model checkpoints. The main model is already on Hugging Face; be sure to check it out! (6/n)

Model: huggingface.co/chandar-lab/...
Paper: arxiv.org/abs/2502.19587
Code and checkpoints to be released soon!
February 28, 2025 at 4:30 PM
NeoBERT is very strong against all baselines, fully open source with open weights (including intermediate checkpoints), and it has higher tokens/s throughput. Give it a try and substitute your favorite encoder with this new model (a minimal loading sketch below)!
With identical fine-tuning, NeoBERT outperforms all baselines on MTEB! (2/n)
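A minimal sketch of what substituting in NeoBERT could look like: the repo id chandar-lab/NeoBERT comes from the thread above, but trust_remote_code and the mean-pooling readout are my assumptions, not the official recipe.

```python
# Minimal sketch: load NeoBERT from the Hub and embed a sentence.
# trust_remote_code=True is assumed for the custom architecture, and
# mean pooling is one common readout choice, not necessarily the official one.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chandar-lab/NeoBERT", trust_remote_code=True)
model = AutoModel.from_pretrained("chandar-lab/NeoBERT", trust_remote_code=True)

inputs = tokenizer("NeoBERT is a next-generation encoder.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Assuming the usual last_hidden_state output: (batch, seq_len, hidden).
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```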
February 28, 2025 at 4:40 PM
Great work by great colleagues! Have a look at the paper.
2025 BERT is NeoBERT! We have fully pre-trained a next-generation encoder for 2.1T tokens with the latest advances in data, training, and architecture. This is a heroic effort from my PhD student, Lola Le Breton, in collaboration with Quentin Fournier and Mariam El Mezouar (1/n)
February 28, 2025 at 4:38 PM