Antoine Guédon
@antoine-guedon.bsky.social
PhD student in computer vision at Imagine, ENPC - @imagineenpc.bsky.social

I'm interested in 3D Reconstruction, Radiance Fields, Gaussian splatting, 3D Scene Rendering, 3D Scene Understanding, etc.

Webpage: https://anttwo.github.io/
8/n📈Optional depth-order regularization:
For even cleaner backgrounds, we propose an optional loss using DepthAnythingV2 that enforces depth ordering consistency.

This drastically improves background geometry quality!
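The idea can be sketched as a pairwise ranking penalty: for sampled pixel pairs, if the monocular prior (DepthAnythingV2) says one pixel is closer than the other, penalize the rendered depth when it flips that order. A minimal sketch; the function name and margin are illustrative, not the paper's exact loss.

```python
# Hypothetical depth-order regularization sketch: hinge penalty on pixel
# pairs whose rendered depth order contradicts the monocular depth prior.
def depth_order_loss(rendered, mono, pairs, margin=0.0):
    loss = 0.0
    for i, j in pairs:
        # The prior only tells us which pixel is in front.
        if mono[i] < mono[j]:
            front, back = i, j
        else:
            front, back = j, i
        # Penalize only when the rendering reverses that order.
        violation = rendered[front] - rendered[back] + margin
        if violation > 0.0:
            loss += violation
    return loss / max(len(pairs), 1)
```

Because only the ordering of the prior is used, per-view scale or shift inconsistencies in the monocular depth do not matter.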
September 8, 2025 at 11:35 AM
7/n🎨Animation & Editing:
Since Gaussians align with the extracted mesh surface, any mesh modification can easily be propagated to the Gaussians!

We include in the code a Blender addon for easy editing and animation - no coding required.
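One simple way to picture the propagation: bind each Gaussian to a mesh triangle with fixed barycentric coordinates, so deforming the mesh (e.g. in Blender) moves the Gaussian centers with it. A hedged sketch with illustrative names, not the paper's exact binding.

```python
# Hypothetical mesh-driven editing sketch: Gaussians follow their triangle.
def bary_point(tri, bary):
    """Point at barycentric coords (u, v, w) on triangle tri."""
    (a, b, c), (u, v, w) = tri, bary
    return tuple(u * a[k] + v * b[k] + w * c[k] for k in range(3))

def propagate(verts, faces, bindings):
    """Recompute Gaussian centers from the (possibly deformed) mesh.
    bindings: one (face_index, (u, v, w)) tuple per Gaussian."""
    centers = []
    for face_idx, bary in bindings:
        i, j, k = faces[face_idx]
        centers.append(bary_point((verts[i], verts[j], verts[k]), bary))
    return centers
```

After any edit to `verts`, calling `propagate` again re-attaches every Gaussian to the deformed surface.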
September 8, 2025 at 11:35 AM
5/n🎯Scalability advantage:
MILo reconstructs full scenes including all background elements, not just foregrounds.

To keep this efficient, we select only Gaussians likely to lie on the surface by repurposing the importance sampling from Mini-Splatting2.
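In the spirit of that sampling, the selection can be sketched as ranking Gaussians by an accumulated contribution score (e.g. summed per-pixel blending weights) and keeping the top fraction. The score definition and `keep` ratio here are illustrative assumptions.

```python
# Hypothetical importance-based selection sketch.
def select_surface_gaussians(scores, keep=0.5):
    """Return sorted indices of the highest-scoring Gaussians."""
    n_keep = max(1, int(len(scores) * keep))
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(order[:n_keep])
```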
September 8, 2025 at 11:35 AM
4/n📊Results:
✅ Higher quality meshes with significantly fewer vertices
✅ 60-350MB mesh sizes (vs GBs in other methods)
✅ Complete scene reconstruction (including backgrounds)
✅ Better performance on benchmarks

Efficiency meets quality!
September 8, 2025 at 11:35 AM
3/n🏗️How MILo works:
1️⃣ Each Gaussian spawns pivots
2️⃣ Delaunay triangulation connects pivots
3️⃣ SDF values assigned to pivots
4️⃣ Differentiable Marching Tetrahedra extracts mesh

The pipeline is differentiable, enabling mesh supervision to improve Gaussian configurations!
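Step 4 is where differentiability comes from: marching tetrahedra places mesh vertices on tetrahedron edges where the SDF changes sign, by linear interpolation, so each vertex position is a smooth function of the SDF values. A toy single-tetrahedron sketch, not the paper's implementation:

```python
# Toy marching-tetrahedra step: find SDF zero crossings on the 6 edges
# of one tetrahedron by linear interpolation.
def edge_crossings(points, sdf):
    """points: 4 pivot positions; sdf: their signed-distance values."""
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    verts = []
    for a, b in edges:
        if sdf[a] * sdf[b] < 0:  # sign change along this edge
            t = sdf[a] / (sdf[a] - sdf[b])  # interpolation weight
            p = tuple(points[a][k] + t * (points[b][k] - points[a][k])
                      for k in range(3))
            verts.append(p)
    return verts
```

Since `t` depends smoothly on the SDF values at the pivots, gradients on the mesh vertices can flow back to whatever produced those SDF values.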
September 8, 2025 at 11:35 AM
2/n🔗Key innovation: differentiable mesh extraction at every training iteration

Unlike previous methods, MILo extracts vertex locations and connectivity purely from Gaussian parameters, allowing gradients to flow from the mesh back to the Gaussians. This creates a powerful feedback loop!
September 8, 2025 at 11:35 AM
1/n🚀Gaussians > Differentiable function > Mesh?
Check out our new work: MILo: Mesh-In-the-Loop Gaussian Splatting!

🎉Accepted to SIGGRAPH Asia 2025 (TOG)
MILo is a novel differentiable framework that extracts meshes directly from Gaussian parameters during training.

🧵👇
September 8, 2025 at 11:35 AM
I actually saw him dancing on a bench 😱
anttwo.github.io/frosting/
April 3, 2025 at 3:58 PM
🔑 Key point #3: We also introduce a novel “depth-order” regularization that leverages depth maps estimated with a monodepth estimator.

The depth maps can be multi-view inconsistent, no problem!

MAtCha still recovers a smooth, detailed background while preserving foreground details.
April 3, 2025 at 10:33 AM
🔑 Key point #2: Inspired by Gaussian Opacity Fields, we developed a new mesh extraction method for 2DGS.

It properly handles both foreground and background geometry while being lightweight if needed (only 150-350MB).

No post-processing mesh decimation is required!
April 3, 2025 at 10:33 AM
🔑 Key point #1: Our novel optimization pipeline is robust to sparse-view inputs (as few as 3 to 10 images) but also scales to dense-view scenarios (hundreds of views).

No more choosing between sparse or dense methods!
April 3, 2025 at 10:33 AM
MAtCha introduces a novel surface representation that reconstructs high-quality 3D meshes with photorealistic rendering from just a handful of images.

💡Our key idea: model scene geometry as an Atlas of Charts and refine it with 2D Gaussian surfels.
April 3, 2025 at 10:33 AM
💻We've released the code for our #CVPR2025 paper MAtCha!

🍵MAtCha reconstructs sharp, accurate and scalable meshes of both foreground AND background from just a few unposed images (e.g. 3 to 10 images)...

...While also working with dense-view datasets (hundreds of images)!
April 3, 2025 at 10:33 AM
To extract the final 3D mesh from our representation, we propose adapting the tetrahedralization from Gaussian Opacity Fields to make it compatible with any Gaussian-based rendering method.

This approach recovers meshes with much higher quality than TSDF-fusion!
December 11, 2024 at 2:59 PM
After aligning our charts, we refine them using Gaussian Splatting rendering.

Gaussians are constrained to stay close to our charts, preventing them from diverging in this sparse-view scenario.

(👇3 training images)
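The constraint can be pictured as a simple attachment penalty: each Gaussian center is pulled toward an anchor point on its chart, which stops Gaussians from drifting off-surface when only a few views supervise them. A hedged sketch; the exact regularizer in the paper may differ.

```python
# Hypothetical chart-attachment loss sketch.
def chart_attachment_loss(centers, anchors):
    """Mean squared distance between Gaussian centers and chart anchors."""
    total = 0.0
    for c, a in zip(centers, anchors):
        total += sum((c[k] - a[k]) ** 2 for k in range(3))
    return total / len(centers)
```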
December 11, 2024 at 2:59 PM
☑️We designed a lightweight neural module that distills high-frequency details from the initial depth maps, while deforming low-frequencies to solve scale ambiguities.

We rely on a sparse-view SfM method (MASt3R-SfM) to estimate camera poses.
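A 1-D toy version of the frequency split: separate the monocular depth into a low-frequency part (a moving average here) and a high-frequency residual, deform only the low frequencies toward metrically consistent values, and add the residual detail back. The smoothing and the `deform_low` callable are illustrative stand-ins for the learned neural module.

```python
# Hypothetical frequency-split deformation sketch (1-D).
def smooth(xs, radius=1):
    """Moving average as a stand-in low-pass filter."""
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - radius), min(len(xs), i + radius + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

def correct_depth(mono, deform_low):
    """Deform low frequencies, keep high-frequency detail."""
    low = smooth(mono)
    detail = [m - l for m, l in zip(mono, low)]  # high-freq residual
    new_low = deform_low(low)                    # e.g. learned deformation
    return [n + d for n, d in zip(new_low, detail)]
```

For example, `deform_low = lambda low: [2 * v for v in low]` rescales the coarse geometry while leaving the fine detail structure intact.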
December 11, 2024 at 2:59 PM
🗺️We initialize the charts with DepthAnythingV2 and deform them with a novel neural deformation model.

⚠️Depth maps contain many fine details but have inaccurate scale; our deformation model aims to solve this problem!

(👇5 training images)
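The simplest version of resolving that scale ambiguity is a per-image affine fit: find the scale and shift mapping monocular depths onto sparse metric depths (e.g. SfM points) by least squares. The learned deformation model is non-rigid and richer than this; the closed-form fit below is just an assumed baseline for illustration.

```python
# Hypothetical per-image scale/shift alignment sketch.
def fit_scale_shift(mono, metric):
    """Least-squares (s, t) minimizing sum((s * mono + t - metric)^2)."""
    n = len(mono)
    mx = sum(mono) / n
    my = sum(metric) / n
    var = sum((x - mx) ** 2 for x in mono)
    cov = sum((x - mx) * (y - my) for x, y in zip(mono, metric))
    s = cov / var
    t = my - s * mx
    return s, t
```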
December 11, 2024 at 2:59 PM
💡Our key idea is to model the scene geometry as an Atlas of Charts, rendered with Gaussians.

🗺️Each input image is converted into an optimizable chart.

👇In this video, you can see the charts flying and aligning together (10 training images)!
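Building one chart can be sketched as unprojecting each pixel through its depth with pinhole intrinsics (fx, fy, cx, cy), giving a regular 2D grid of 3D points whose positions can then be optimized and aligned across views. Names and the plain pinhole model are assumptions for illustration.

```python
# Hypothetical chart construction sketch: depth map -> grid of 3D points.
def depth_to_chart(depth, fx, fy, cx, cy):
    """depth: 2D list of per-pixel depths -> 2D grid of (x, y, z) points."""
    chart = []
    for v, row in enumerate(depth):
        chart_row = []
        for u, d in enumerate(row):
            x = (u - cx) / fx * d  # unproject pixel column
            y = (v - cy) / fy * d  # unproject pixel row
            chart_row.append((x, y, d))
        chart.append(chart_row)
    return chart
```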
December 11, 2024 at 2:59 PM
⚠️Reconstructing sharp 3D meshes from a few unposed images is a hard and ambiguous problem.

☑️With MAtCha, we leverage a pretrained depth model to recover sharp meshes from sparse views, including both foreground and background, within minutes!🧵

🌐Webpage: anttwo.github.io/matcha/
December 11, 2024 at 2:59 PM