Let me know whom I missed!
go.bsky.app/2Bqtn6T
R: Yes! 😍
Introducing Driving on Registers (DrivoR):
a pure Transformer backbone that achieves SOTA results in NAVSIM v1 / v2 and closed-loop HUGSIM evaluation.
Here is how 👇
Meet DrivoR (Driving on Registers): our latest end2end autonomous driving model.
We tore down complex dependencies & modules from current models to
obtain a pure Transformer-based SOTA driving agent (NAVSIM v1 & v2, HUGSIM).
Find out more 👇
Details: www.euraxess.cz/jobs/399390
Excited to share 𝙋𝙪𝙛𝙛𝙚𝙧𝘿𝙧𝙞𝙫𝙚 2.0: A fast, friendly driving simulator with RL training via PufferLib at 𝟯𝟬𝟬𝗞 𝘀𝘁𝗲𝗽𝘀/𝘀𝗲𝗰 🐡 + 🚗
youtu.be/LfQ324R-cbE?...
It outperforms all other methods on CARLA by a wide margin: 95 Driving Score (DS) on Bench2Drive!
We show that minimizing the asymmetry between the data annotator and the policy is key to strong imitation learning (IL) results.
Code, models, and paper:
ln2697.github.io/lead/
We introduce REGLUE: a unified framework that entangles VAE latents ➕ Global ➕ Local semantics for faster, higher-fidelity image generation.
Links (paper + code) at the end👇
And if your argument is saved time, then perhaps you planned not to read all the details. Then again, you should not be a reviewer.
Go, Gianni!
Many thanks to the reviewers and organizers.
Kudos to @yuanyinnn.bsky.social & team!
Finetuning large models is cheaper thanks to LoRA, but is its random init optimal?🤔
Meet IPA: a feature-aware alternative to random projections
#NeurIPS2025 WS #CCFM Oral+Best Paper
Work w/
S. Venkataramanan @tuanhungvu.bsky.social @abursuc.bsky.social M. Cord
🧵
The talk will be live-streamed: www.hi-paris.fr/2025/09/26/a...
It's the Winter School on Social Robotics, Artificial Intelligence and Multimedia (SoRAIM), 9-13 Feb 2026 👇
Wired Europe: Let's do tons of AI open source
#aiPULSE2025
The morning keynotes talked a lot about open source so my slide here might be timely.
We present 5 full papers + 1 workshop about:
💡 self-supervised & representation learning
🖼️ generative image models
🧠 finetuning and understanding LLMs & multimodal LLMs
🔎 feature upsampling
valeoai.github.io/posts/neurip...
We found an asymmetry in LoRA: during training, A changes little & B absorbs most of the task-specific adaptation.
So we pre-train A to preserve information before adaptation w/ excellent parameter efficiency #NeurIPS2025 #CCFM 👇
@iclr-conf.bsky.social
NAF outperforms both VFM-specific upsamplers (FeatUp, JAFAR) and VFM-agnostic methods (JBU, AnyUp) across multiple downstream tasks 👇
🚀Introducing NAF: A universal, zero-shot feature upsampler.
It turns low-res ViT features into pixel-perfect maps.
-⚡ Model-agnostic
-🥇 SoTA results
-🚀 4× faster than SoTA
-📈 Scales up to 2K res