Eric Dexheimer
@ericdexheimer.bsky.social
Reposted by Eric Dexheimer
Naver Labs Europe is organizing the 4th edition of its Workshop on AI for Robotics in the French Alps (Grenoble). This year's topic is 'Spatial AI', and registration is open!
Major announcement ✨registration is OPEN✨
AI for Robotics workshop (4th edition): Spatial AI
🗓️Nov 21-22 Grenoble, France!
Details: tinyurl.com/bdtk2nzs
⭐⭐ 14 confirmed speakers ⭐⭐: 🧵2/3
Poster submissions (travel grant possible): 🧵 3/3
Spread the word!
July 29, 2025 at 4:14 PM
Reposted by Eric Dexheimer
🔍Looking for a multi-view depth method that just works?
We're excited to share MVSAnywhere, which we will present at #CVPR2025. MVSAnywhere produces sharp depths, generalizes and is robust to all kinds of scenes, and is scale-agnostic.
More info:
nianticlabs.github.io/mvsanywhere/
March 31, 2025 at 12:52 PM
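For intuition on what a multi-view depth method does, here is a minimal plane-sweep sketch of the general idea (illustrative only, not the MVSAnywhere implementation; the images, intrinsics, and pose below are made-up stand-ins):

```python
# Illustrative plane-sweep stereo: score a set of depth hypotheses for each
# reference pixel by warping a source view and keeping the best photometric match.
import numpy as np

def plane_sweep_depth(ref, src, K, R, t, depths):
    """Return a per-pixel depth map for `ref` given one source view `src`."""
    H, W = ref.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3xN homogeneous pixels
    K_inv = np.linalg.inv(K)
    best_cost = np.full(H * W, np.inf)
    best_depth = np.zeros(H * W)
    for d in depths:
        pts = K_inv @ (pix * d)                      # back-project reference pixels to depth d
        proj = K @ (R @ pts + t[:, None])            # project into the source camera
        u = np.clip(proj[0] / np.maximum(proj[2], 1e-6), 0, W - 1).astype(int)
        v = np.clip(proj[1] / np.maximum(proj[2], 1e-6), 0, H - 1).astype(int)
        cost = np.abs(ref.reshape(-1) - src[v, u])   # photometric matching cost
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = d
    return best_depth.reshape(H, W)

# Made-up inputs purely to show the call signature.
ref_img, src_img = np.random.rand(48, 64), np.random.rand(48, 64)
K = np.array([[50.0, 0, 32], [0, 50.0, 24], [0, 0, 1]])
depth = plane_sweep_depth(ref_img, src_img, K, np.eye(3), np.array([0.1, 0.0, 0.0]),
                          np.linspace(0.5, 5.0, 32))
```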
We’ve had fun testing the limits of MASt3R-SLAM on in-the-wild videos. Here’s the drone video of a Minnesota bowling alley that we’ve always wanted to reconstruct! Different scene scales, dynamic objects, specular surfaces, and fast motion.
February 25, 2025 at 7:22 PM
Reposted by Eric Dexheimer
Introducing MASt3R-SLAM, the first real-time monocular dense SLAM with MASt3R as a foundation.
Easy to use like DUSt3R/MASt3R: from an uncalibrated RGB video, it recovers accurate, globally consistent poses & a dense map.
With @ericdexheimer.bsky.social* @ajdavison.bsky.social (*Equal Contribution)
December 16, 2024 at 3:43 PM
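As a rough mental model of such a system, here is a minimal sketch of the kind of incremental loop it runs, assuming a MASt3R-like two-view model that returns dense points and a relative pose for an image pair (the `two_view_estimate` stub below is hypothetical, not the MASt3R-SLAM API):

```python
# Hypothetical incremental loop: chain per-pair relative poses into world-frame
# poses and accumulate dense points into a map. `two_view_estimate` is a stub
# standing in for a MASt3R-style two-view prediction, not the real API.
import numpy as np

def two_view_estimate(prev_frame, frame):
    rel_pose = np.eye(4)                 # pose of `frame` in `prev_frame`'s camera frame (placeholder)
    points_cam = np.random.rand(100, 3)  # dense points in the current camera frame (placeholder)
    return rel_pose, points_cam

video = (np.zeros((480, 640, 3), np.uint8) for _ in range(10))  # stand-in for an RGB video
poses = [np.eye(4)]                      # camera-to-world poses; first frame defines the world frame
world_points = []                        # accumulated dense map
prev = next(video)
for frame in video:
    rel_pose, pts_cam = two_view_estimate(prev, frame)
    pose = poses[-1] @ rel_pose                              # compose into the world frame
    world_points.append((pose[:3, :3] @ pts_cam.T).T + pose[:3, 3])
    poses.append(pose)
    prev = frame
# A real system like MASt3R-SLAM also performs global optimization / loop closure
# so the poses and map stay consistent over long trajectories.
```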