valeo.ai
@valeoai.bsky.social
We are a research team on artificial intelligence for automotive applications, working toward assisted and autonomous driving.
--> https://valeoai.github.io/ <--
Pinned
valeo.ai
@valeoai.bsky.social
· Feb 24
🚗 Ever wondered if an AI model could learn to drive just by watching YouTube? 🎥👀
We trained a 1.2B parameter model on 1,800+ hours of raw driving videos.
No labels. No maps. Just pure observation.
And it works! 🤯
🧵👇 [1/10]
Privileged to have diffusion master @nicolasdufour.bsky.social give our team (full house) a tour of his excellent work on data- and compute-efficient diffusion models and a sneak preview of his latest MIRO work.
Check it out 👌
October 31, 2025 at 7:28 PM
Our recent research will be presented at @iccv.bsky.social! #ICCV2025
We’ll present 5 papers about:
💡 self-supervised & representation learning
🌍 3D occupancy & multi-sensor perception
🧩 open-vocabulary segmentation
🧠 multimodal LLMs & explainability
valeoai.github.io/posts/iccv-2...
October 17, 2025 at 10:10 PM
The PhD graduation season in the team goes on!
Today, Corentin Sautier is defending his PhD on "Learning Actionable LiDAR Representations without Annotations".
Good luck! 🚀
Another great event for the @valeoai.bsky.social team: the PhD defense of Corentin Sautier.
His thesis «Learning Actionable LiDAR Representations w/o Annotations» covers the papers BEVContrast (learning self-sup LiDAR features), SLidR, ScaLR (distillation), UNIT and Alpine (solving tasks w/o labels).
October 7, 2025 at 1:40 PM
“Has anyone heard about DUSt3R?”
All hands and hearts up in the room.
Honored to welcome @gabrielacsurka.bsky.social today to speak about the amazing work at @naverlabseurope.bsky.social toward 3D Foundation Models
October 6, 2025 at 12:38 PM
It’s PhD graduation season in the team!
Today, @bjoernmichele.bsky.social is defending his PhD on "Domain Adaptation for 3D Data"
Best of luck! 🚀
October 6, 2025 at 12:09 PM
Congratulations to our lab colleagues who have been named Outstanding Reviewers at #ICCV2025 👏
Andrei Bursuc @abursuc.bsky.social
Anh-Quan Cao @anhquancao.bsky.social
Renaud Marlet
Eloi Zablocki @eloizablocki.bsky.social
@iccv.bsky.social
iccv.thecvf.com/Conferences/...
2025 ICCV Program Committee
iccv.thecvf.com
October 2, 2025 at 3:28 PM
CoRL 2025 is just around the corner in Seoul, Korea!
🤖 🚗
We're excited to present our latest research and connect with the community.
#CoRL2025
September 24, 2025 at 4:47 PM
We're thrilled to join the ambitious ELLIOT project toward large, open European multimodal foundation models.
The project kick-off is today!
🚀 A new era in European #AIresearch begins!
ELLIOT is a €25M #HorizonEurope project launching July 2025 to build open, trustworthy Multimodal Generalist Foundation Models.
30 partners, 12 countries, EU values.
🔗 Press release: apigateway.agilitypr.com/distribution...
July 8, 2025 at 6:40 AM
Reposted by valeo.ai
ELSA will be extended!🎉
The European Commission decided to extend the duration of our Lighthouse on Secure and Safe AI. We will now run for an additional 12 months until August 2026.
Find more details in the official press release:
elsa-ai.eu/official-ext...
Congratulations to the network!
June 17, 2025 at 10:42 AM
Check out our MOCA self-supervised learning approach unifying the learning principles of both discriminative & masked image modelling paradigms.
After a non-linear path, MOCA has been accepted at #TMLR and was presented in the TMLR poster session at #ICLR2025
1/ New & old work on self-supervised representation learning (SSL) with ViTs:
MOCA ☕ - Predicting Masked Online Codebook Assignments w/ @spyrosgidaris.bsky.social O. Simeoni, A. Vobecky, @matthieucord.bsky.social, N. Komodakis, @ptrkprz.bsky.social #TMLR #ICLR2025
Grab a ☕ & brace for a story & a🧵
June 27, 2025 at 8:32 AM
Reposted by valeo.ai
Check out DIP (Dense In-context Post-training) @iccv.bsky.social: an effective post-training strategy to unleash dense awareness of features from your favorite pre-trained encoder (DINOv2, CLIP, MAE)
We leverage meta-learning-like pseudo-tasks w/ pseudo-labels.
Kudos @ssirko.bsky.social 👇
#iccv2025
1/n 🚀New paper out - accepted at #ICCV2025!
Introducing DIP: unsupervised post-training that enhances dense features in pretrained ViTs for dense in-context scene understanding
Below: Low-shot in-context semantic segmentation examples. DIP features outperform DINOv2!
June 25, 2025 at 7:24 PM
How to make your DINOv2 excel at dense in-context scene understanding tasks.
Check out DIP, an effective post-training strategy, by @ssirko.bsky.social @spyrosgidaris.bsky.social
@vobeckya.bsky.social @abursuc.bsky.social and Nicolas Thome 👇
#iccv2025
1/n 🚀New paper out - accepted at #ICCV2025!
Introducing DIP: unsupervised post-training that enhances dense features in pretrained ViTs for dense in-context scene understanding
Below: Low-shot in-context semantic segmentation examples. DIP features outperform DINOv2!
June 25, 2025 at 7:35 PM
Hey #IV2025, @t-martyniuk.bsky.social is presenting her LiDPM work today!
tl;dr: LiDPM - a point diffusion strategy for scene completion from outdoor LiDAR point clouds.
Check out the paper and code below if you can't make it for the poster.
Presenting our project #LiDPM in the afternoon oral session at #IV2025!
Project page: astra-vision.github.io/LiDPM/
w/ @gillespuy.bsky.social, @alexandreboulch.bsky.social, Renaud Marlet, Raoul de Charette
Also, see our poster at 3pm in the Caravaggio room and AMA 😉
June 23, 2025 at 11:29 AM
Reposted by valeo.ai
🚀Thrilled to introduce JAFAR—a lightweight, flexible, plug-and-play module that upsamples features from any Foundation Vision Encoder to any desired output resolution (1/n)
Paper: arxiv.org/abs/2506.11136
Project page: jafar-upsampler.github.io
GitHub: github.com/PaulCouairon...
June 16, 2025 at 1:59 PM
Just back from CVPR@Paris 🥐, what a fantastic event!
Great talks, great posters, and great to connect with the French & European vision community.
Kudos to the organizers, hoping that it returns next year! 🤞
#CVPR2025 @cvprconference.bsky.social
June 6, 2025 at 5:41 PM
👏 Huge congrats to our research scientist Elias Ramzi for winning the AFRIF 2024 PhD award for his thesis "Robust image retrieval with deep learning", conducted at CNAM. Well-deserved recognition for amazing work! 🏆
🔗 afrif.irisa.fr?page_id=54
AFRIF thesis prize laureates – AFRIF
afrif.irisa.fr
April 14, 2025 at 7:50 AM
Our recent research will be presented at #ICLR2025 @iclr_conf: VLMs, LLMs, diffusion models, self-supervised learning, physics-informed learning…
Find out more below 🧵
valeoai.github.io/posts/2025-0...
April 9, 2025 at 9:38 AM
We have a very special guest visiting us today: the one and only Alyosha Efros
March 20, 2025 at 10:16 PM
🚗 Ever wondered if an AI model could learn to drive just by watching YouTube? 🎥👀
We trained a 1.2B parameter model on 1,800+ hours of raw driving videos.
No labels. No maps. Just pure observation.
And it works! 🤯
🧵👇 [1/10]
February 24, 2025 at 12:53 PM
We've just had our annual gathering to brainstorm on exciting new ideas and projects ahead -- stay tuned!
This is also an excellent occasion to fit all team members in a photo 📸
January 27, 2025 at 5:00 PM
📚🔍Excited to share our work at #NeurIPS2024! Dive into representation learning, optimization, explainability, VLMs & LLMs, and more.
Check out our blog post to learn more about our 7 papers: valeoai.github.io/posts/2024-1...
🧵👇
valeo.ai at NeurIPS 2024 | valeo.ai - valeo.ai research page
Victor Letzelter, Nermin Samet, Yuan Yin, Andrei Bursuc, Éloi Zablocki
valeoai.github.io
December 13, 2024 at 3:40 PM