We show how the accurate and robust depths from MVSAnywhere serve to regularize Gaussian splats, yielding much cleaner scene reconstructions.
Because MVSAnywhere is agnostic to the scene scale, it's plug-and-play for your splats!
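A depth-regularized splat objective can be sketched like this: penalize the rendered depth wherever it disagrees with the MVS depth prior. Because MVSAnywhere predicts at the same metric scale as the input cameras, no scale/shift alignment is needed first. This is a hypothetical minimal sketch (function name and L1 penalty are assumptions, not the paper's exact loss):

```python
import numpy as np

def depth_regularization_loss(rendered_depth, mvs_depth, valid_mask=None):
    """L1 penalty between splat-rendered depth and an MVS depth prior.

    Hypothetical sketch: MVSAnywhere depths share the cameras' metric
    scale, so the two maps can be compared directly, masking out
    invalid (non-finite or non-positive) prior pixels.
    """
    if valid_mask is None:
        valid_mask = np.isfinite(mvs_depth) & (mvs_depth > 0)
    diff = np.abs(rendered_depth - mvs_depth)
    return diff[valid_mask].mean()
```

In practice a term like this would be added, with some weight, to the usual photometric splatting loss.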
MVSAnywhere achieves state-of-the-art results on the Robust Multi-View Depth Benchmark, showing its strong generalization performance.
🔹Most models require a known depth range to estimate the cost volume.
✅MVSAnywhere estimates an initial range based on camera scale and setup and refines it. It predicts at the same scale as the input cameras!
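One simple way to seed a depth range from the camera setup alone is to scale near/far planes by the baselines between the reference camera and its source views. This is a hypothetical heuristic to illustrate the idea (names and factors are assumptions, not the paper's actual initialization):

```python
import numpy as np

def initial_depth_range(cam_centers, near_factor=0.5, far_factor=20.0):
    """Guess a metric depth range for the cost volume from camera geometry.

    Hypothetical heuristic: scenes captured with small baselines tend to
    be close-range, so scale the near/far planes by the median baseline
    between the reference camera (first entry) and the source cameras.
    """
    ref, *srcs = cam_centers
    baselines = [np.linalg.norm(np.asarray(s) - np.asarray(ref)) for s in srcs]
    b = float(np.median(baselines))
    return near_factor * b, far_factor * b
```

A range seeded this way can then be refined from the network's own predictions, keeping everything at the scale of the input cameras.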
🔹Previous models struggle across different domains (indoor🏠 vs. outdoor🏞️).
✅MVSAnywhere uses a transformer architecture and is trained on a large array of varied synthetic datasets.
🔹MVS methods rely entirely on cost-volume matches (failing with low overlap & dynamic scenes).
✅MVSAnywhere successfully combines strong single-view image priors with multi-view information from our cost volume.
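The intuition behind combining the two cues can be sketched as a per-pixel blend: trust the cost volume where multi-view matching is confident (good overlap, static content), and fall back to the single-view prior elsewhere. This is a hypothetical convex blend for illustration, not the network's learned fusion:

```python
import numpy as np

def fuse_depths(mono_depth, mvs_depth, match_confidence):
    """Blend a single-view depth prior with cost-volume depth.

    Hypothetical sketch: match_confidence in [0, 1] measures how reliable
    the cost-volume match is at each pixel; low confidence (low overlap,
    moving objects) shifts weight to the monocular prior.
    """
    w = np.clip(match_confidence, 0.0, 1.0)
    return w * mvs_depth + (1.0 - w) * mono_depth
```

In MVSAnywhere itself this combination is learned inside the network rather than applied as an explicit post-hoc blend.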
We're excited to share MVSAnywhere, which we will present at #CVPR2025. MVSAnywhere produces sharp depths, generalizes and is robust to all kinds of scenes, and is scale-agnostic.
More info:
nianticlabs.github.io/mvsanywhere/