I'm interested in 3D Reconstruction, Radiance Fields, Gaussian Splatting, 3D Scene Rendering, 3D Scene Understanding, etc.
Webpage: https://anttwo.github.io/
For even cleaner backgrounds, we propose an optional loss using DepthAnythingV2 that enforces depth ordering consistency.
This drastically improves background geometry quality!
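(Sketch only, not the exact loss from the paper: one simple way to enforce depth ordering consistency is to sample random pixel pairs and penalize rendered depths whose ordering contradicts the monocular prior. All names below are illustrative.)

```python
import torch

def depth_ordering_loss(rendered_depth, mono_depth, num_pairs=4096, margin=1e-4):
    """Hypothetical depth-ordering consistency loss (illustration only).

    rendered_depth, mono_depth: (H, W) tensors for the same view.
    Penalizes pixel pairs whose depth ordering in the rendering disagrees
    with the ordering given by the monocular depth prior.
    """
    h, w = rendered_depth.shape
    idx_a = torch.randint(0, h * w, (num_pairs,), device=rendered_depth.device)
    idx_b = torch.randint(0, h * w, (num_pairs,), device=rendered_depth.device)

    r = rendered_depth.flatten()
    m = mono_depth.flatten()

    # Target ordering from the monocular prior: +1 if a is farther than b, -1 otherwise.
    target = torch.sign(m[idx_a] - m[idx_b])
    diff = r[idx_a] - r[idx_b]
    valid = target != 0  # ignore pairs with equal prior depth

    # Hinge-style penalty when the rendered ordering contradicts the prior.
    return torch.relu(margin - target[valid] * diff[valid]).mean()
```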
Since Gaussians align with the extracted mesh surface, any mesh modification can easily be propagated to the Gaussians!
The code also includes a Blender add-on for easy editing and animation - no coding required.
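(Illustrative sketch of how such propagation can work: if each Gaussian stores the triangle it sits on plus barycentric coordinates, moving the mesh vertices moves the Gaussians. The names below are made up for the example; the actual implementation may differ.)

```python
import numpy as np

def propagate_mesh_edit(face_ids, bary, edited_vertices, faces):
    """Move Gaussian centers along with an edited mesh (illustrative sketch).

    face_ids:        (N,) index of the triangle each Gaussian is bound to
    bary:            (N, 3) barycentric coordinates of each Gaussian on its triangle
    edited_vertices: (V, 3) mesh vertices after editing/animation
    faces:           (F, 3) triangle vertex indices
    Returns the new (N, 3) Gaussian centers on the deformed mesh.
    """
    tri = edited_vertices[faces[face_ids]]     # (N, 3, 3) triangle corners after the edit
    return np.einsum('nk,nkd->nd', bary, tri)  # barycentric interpolation
```

Orientations and scales can be carried along in the same spirit by tracking each triangle's local frame before and after the edit.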
MILo reconstructs full scenes including all background elements, not just foregrounds.
To keep this efficient, we select only the most surface-likely Gaussians by repurposing the importance sampling scheme from Mini-Splatting2.
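(A rough sketch of what importance-based selection can look like, assuming a per-Gaussian contribution score has already been accumulated over the training views; this is not the exact Mini-Splatting2 procedure.)

```python
import torch

def select_surface_gaussians(importance, keep_ratio=0.3):
    """Keep the most 'surface-likely' Gaussians (illustrative sketch).

    importance: (N,) score per Gaussian, e.g. accumulated alpha-blending
                contribution over the training views (assumed precomputed).
    Returns a boolean mask selecting roughly keep_ratio of the Gaussians,
    sampled with probability proportional to their importance.
    """
    n_keep = int(keep_ratio * importance.numel())
    probs = importance / importance.sum()
    idx = torch.multinomial(probs, n_keep, replacement=False)
    mask = torch.zeros_like(importance, dtype=torch.bool)
    mask[idx] = True
    return mask
```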
✅ Higher quality meshes with significantly fewer vertices
✅ 60-350MB mesh sizes (vs GBs in other methods)
✅ Complete scene reconstruction (including backgrounds)
✅ Better performance on benchmarks
Efficiency meets quality!
1️⃣ Each Gaussian spawns pivots
2️⃣ Delaunay triangulation connects pivots
3️⃣ SDF values assigned to pivots
4️⃣ Differentiable Marching Tetrahedra extracts mesh
The pipeline is differentiable, enabling mesh supervision to improve Gaussian configurations!
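(To make steps 2-4 concrete, here is a tiny non-differentiable sketch with SciPy: Delaunay-connect the pivots, then find where the SDF changes sign along tetrahedron edges. MILo's version is differentiable and GPU-based; this is only for intuition, and the names are mine.)

```python
import numpy as np
from scipy.spatial import Delaunay

def zero_crossings_on_tets(pivots, sdf):
    """Sketch of steps 2-4 (non-differentiable, illustration only).

    pivots: (P, 3) pivot positions spawned from the Gaussians
    sdf:    (P,) signed distance value assigned to each pivot
    Returns surface points obtained by linearly interpolating the SDF along
    every tetrahedron edge whose endpoints change sign. Marching Tetrahedra
    then connects these points into triangles per tetrahedron.
    """
    tets = Delaunay(pivots).simplices  # (T, 4) Delaunay tetrahedra
    # The 6 edges of a tetrahedron, as local vertex index pairs.
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    points = []
    for i, j in edges:
        a, b = tets[:, i], tets[:, j]
        sa, sb = sdf[a], sdf[b]
        cross = sa * sb < 0                      # sign change along the edge
        t = sa[cross] / (sa[cross] - sb[cross])  # linear interpolation weight
        points.append(pivots[a[cross]] + t[:, None] * (pivots[b[cross]] - pivots[a[cross]]))
    return np.concatenate(points, axis=0)
```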
Unlike previous methods, MILo extracts vertex locations and connectivity purely from Gaussian parameters, allowing gradient flow from mesh back to Gaussians. This creates a powerful feedback loop!
Check out our new work: MILo: Mesh-In-the-Loop Gaussian Splatting!
🎉Accepted to SIGGRAPH Asia 2025 (TOG)
MILo is a novel differentiable framework that extracts meshes directly from Gaussian parameters during training.
🧵👇
anttwo.github.io/frosting/
The depth maps can be multi-view inconsistent? No problem!
MAtCha still recovers a smooth, detailed background while preserving foreground details.
It properly handles both foreground and background geometry while remaining lightweight if needed (only 150-350MB).
No post-processing mesh decimation is required!
No more choosing between sparse or dense methods!
💡Our key idea: model scene geometry as an Atlas of Charts and refine it with 2D Gaussian surfels.
🍵MAtCha reconstructs sharp, accurate and scalable meshes of both foreground AND background from just a few unposed images (e.g. 3 to 10 images)...
...while also working with dense-view datasets (hundreds of images)!
This approach recovers meshes with much higher quality than TSDF-fusion!
Gaussians are constrained to stay close to our charts, preventing them from diverging in this sparse-view scenario.
(👇3 training images)
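(A minimal sketch of such a constraint, with assumed names: pull each Gaussian center toward its nearest point sampled on the charts. The actual regularizer in MAtCha may differ.)

```python
import torch

def chart_proximity_loss(gaussian_centers, chart_points):
    """Illustrative regularizer keeping Gaussians close to the charts.

    gaussian_centers: (N, 3) Gaussian centers
    chart_points:     (M, 3) points sampled on the chart surfaces
    Penalizes the distance from each Gaussian to its nearest chart point,
    which discourages floaters in the sparse-view setting.
    """
    d = torch.cdist(gaussian_centers, chart_points)  # (N, M) pairwise distances
    return d.min(dim=1).values.mean()
```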
We rely on a sparse-view SfM method (MASt3R-SfM) to estimate camera poses.
⚠️Depth maps contain many fine details but have inaccurate scale; our deformation model aims to solve this problem!
(👇5 training images)
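(For intuition, the simplest form of this correction is a per-image scale-and-shift fit against sparse SfM depths. MAtCha's chart deformation model is more expressive than a global affine fit, so treat this sketch as an illustration only.)

```python
import numpy as np

def align_depth_scale(mono_depth, sfm_depth, valid):
    """Fit a per-image scale and shift so monocular depth matches sparse SfM depth.

    mono_depth: (H, W) relative depth from the pretrained depth model
    sfm_depth:  (H, W) depth from sparse SfM points (zero where unknown)
    valid:      (H, W) boolean mask of pixels with an SfM depth
    Returns the corrected depth map.
    """
    x = mono_depth[valid]
    y = sfm_depth[valid]
    A = np.stack([x, np.ones_like(x)], axis=1)  # design matrix [depth, 1]
    (scale, shift), *_ = np.linalg.lstsq(A, y, rcond=None)
    return scale * mono_depth + shift
```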
🗺️Each input image is converted into an optimizable chart.
👇In this video, you can see the charts flying and aligning together (10 training images)!
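(One way to picture a chart, with assumed names: a per-image grid of 3D points obtained by backprojecting a learnable depth map through the camera. Optimizing the depth deforms the chart so that neighbouring charts align; this is my own illustration, not the paper's exact parameterization.)

```python
import torch

def backproject_chart(depth, K, cam_to_world):
    """Turn one image into a chart: a grid of 3D points (illustrative sketch).

    depth:        (H, W) learnable depth map (e.g. a nn.Parameter initialized
                  from the monocular depth prediction)
    K:            (3, 3) camera intrinsics
    cam_to_world: (4, 4) camera-to-world pose
    Returns an (H, W, 3) grid of world-space points.
    """
    h, w = depth.shape
    v, u = torch.meshgrid(
        torch.arange(h, device=depth.device, dtype=depth.dtype),
        torch.arange(w, device=depth.device, dtype=depth.dtype),
        indexing='ij')
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1)  # (H, W, 3) homogeneous pixels
    rays = pix @ torch.linalg.inv(K).T                     # camera-space rays
    pts_cam = rays * depth[..., None]                      # scale rays by depth
    pts_h = torch.cat([pts_cam, torch.ones_like(depth)[..., None]], dim=-1)
    return (pts_h @ cam_to_world.T)[..., :3]               # world-space chart points
```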
☑️With MAtCha, we leverage a pretrained depth model to recover sharp meshes from sparse views, including both foreground and background, within minutes! 🧵
🌐Webpage: anttwo.github.io/matcha/