A live demo is at 🌐 huggingface.co/spaces/daide...
Exciting topics: lots of generative AI using transformers, diffusion, 3DGS, etc., focusing on image synthesis, geometry generation, avatars, and much more - check it out!
So proud of everyone involved - let's go🚀🚀🚀
niessnerlab.org/publications...
Yueh-Cheng Liu learns 2DGS initialization, densification, and optimization priors from ScanNet++ => fast & accurate reconstruction!
Project: liu115.github.io/quicksplat
w/ Lukas Höllein and @niessner.bsky.social
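The post only names the idea (learned priors for 2DGS initialization and densification), not the architecture. Below is a minimal conceptual sketch of what such learned priors could look like; the module names (InitNet, DensifyNet), shapes, and parameterization are hypothetical placeholders, not QuickSplat's actual design.

```python
# Conceptual sketch only: the real QuickSplat networks, inputs, and losses are
# not described in the post above. InitNet / DensifyNet are hypothetical names.
import torch
import torch.nn as nn

class InitNet(nn.Module):
    """Hypothetical prior: predicts initial 2D Gaussian parameters from input points."""
    def __init__(self, d_in=3, d_hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, 3 + 4 + 2 + 3),  # position, rotation (quat), 2D scale, color
        )

    def forward(self, points):            # points: (N, 3)
        return self.mlp(points)           # (N, 12) raw Gaussian parameters

class DensifyNet(nn.Module):
    """Hypothetical prior: proposes new Gaussians as offsets around existing ones."""
    def __init__(self, d_in=12, d_hidden=128, k=2):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, k * d_in),
        )

    def forward(self, gaussians):         # gaussians: (N, 12)
        offsets = self.mlp(gaussians).view(-1, 12)                       # (N*k, 12)
        children = gaussians.repeat_interleave(self.k, 0) + offsets      # (N*k, 12)
        return torch.cat([gaussians, children], dim=0)

# Usage sketch: initialize splats from a sparse point cloud, then densify
# between optimization rounds instead of relying purely on hand-tuned heuristics.
points = torch.rand(1024, 3)
gaussians = InitNet()(points)
gaussians = DensifyNet()(gaussians)
```

The point of the sketch is just the control flow: one network proposes the initial Gaussians and another proposes where to add new ones, which is the "learned initialization, densification, and optimization priors" idea the post refers to.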
Think your method can handle large-scale 3D scenes?
Put it to the test:
kaldir.vc.in.tum.de/scannetpp/cv...
Updates:
✅ Preprocessed, undistorted DSLR images
✅ 3DGS demo: github.com/scannetpp/3D...
by Yueh-Cheng Liu, @cyeshwanth.bsky.social
-> highly accurate face reconstruction by training powerful ViTs for surface normal & UV-coordinate estimation.
These cues from our 2D foundation model constrain the 3DMM parameters, achieving high accuracy.
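The post says the ViT-predicted normals and UV coordinates constrain the 3DMM parameters. Here is a minimal fitting sketch of that idea, assuming the 2D predictions and a differentiable renderer are given; fit_3dmm, render_normals_and_uv, the loss weights, and the toy renderer are placeholders, not the paper's actual pipeline.

```python
# Sketch of 3DMM fitting against 2D cues. The ViT predictions (pred_normals,
# pred_uv) and the differentiable renderer are assumed to exist; names and
# weights are placeholders.
import torch

def fit_3dmm(pred_normals, pred_uv, render_normals_and_uv, n_params=300,
             steps=200, lr=1e-2, w_normal=1.0, w_uv=1.0):
    """Optimize 3DMM coefficients so rendered normals/UVs match the 2D cues."""
    params = torch.zeros(n_params, requires_grad=True)   # identity + expression + pose
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        rendered_normals, rendered_uv = render_normals_and_uv(params)  # differentiable
        loss = (w_normal * (1.0 - torch.cosine_similarity(
                    rendered_normals, pred_normals, dim=-1)).mean()
                + w_uv * (rendered_uv - pred_uv).abs().mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
    return params.detach()

# Toy usage with a fake linear "renderer" just to show the call pattern.
H, W = 64, 64
A = torch.randn(H * W * 5, 300) * 0.01
def toy_renderer(p):
    out = (A @ p).view(H, W, 5)
    return out[..., :3], out[..., 3:]          # per-pixel normals, per-pixel UVs
params = fit_3dmm(torch.rand(H, W, 3), torch.rand(H, W, 2), toy_renderer)
```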
From text input, we generate renderable PBR maps! Besides editable image generation, our predictions can be distilled into room-scale scenes using SDS for large-scale PBR texture generation.
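SDS (score distillation sampling) is only named above; below is a generic, heavily simplified SDS-style update for texture parameters, assuming a latent diffusion prior with the usual epsilon-prediction interface. All handles (render_view, encode_latents, unet, text_emb) are placeholders, not the paper's API.

```python
# Generic SDS-style gradient step for optimizing texture parameters against a
# frozen text-conditioned diffusion prior. `unet`, `encode_latents`, and
# `render_view` are assumed handles; text_emb stacks [uncond, cond] embeddings;
# alphas_cumprod is the usual DDPM noise schedule.
import torch

def sds_step(texture_params, render_view, encode_latents, unet, text_emb,
             alphas_cumprod, optimizer, guidance=7.5):
    """One score-distillation update: nudge the texture so a rendered view
    looks plausible to the (frozen) diffusion model."""
    image = render_view(texture_params)                 # differentiable render
    latents = encode_latents(image)                     # e.g. VAE encode, (1, C, H, W)
    t = torch.randint(20, 980, (1,), device=latents.device)
    noise = torch.randn_like(latents)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noisy = a_t.sqrt() * latents + (1 - a_t).sqrt() * noise

    with torch.no_grad():                                # frozen diffusion prior
        eps_uncond, eps_text = unet(noisy.repeat(2, 1, 1, 1), t, text_emb).chunk(2)
        eps = eps_uncond + guidance * (eps_text - eps_uncond)

    # SDS trick: skip the U-Net Jacobian; the surrogate loss below has gradient
    # w(t) * (eps - noise) with respect to the latents.
    grad = (1 - a_t) * (eps - noise)
    loss = (grad.detach() * latents).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```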
@awhiteguitar.bsky.social & Yueh-Cheng Liu have been working tirelessly to bring:
🔹1006 high-fidelity 3D scans
🔹+ DSLR & iPhone captures
🔹+ rich semantics
Elevating 3D scene understanding to the next level!🚀
w/ @niessner.bsky.social
kaldir.vc.in.tum.de/scannetpp
Daoyi Gao generates articulated meshes with a hierarchical transformer, modeling articulation-aware structures that guide mesh synthesis.
w/ Yawar Siddiqui, Lei Li
Project Page: daoyig.github.io/Mesh_Art/
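The post gives only a one-line description; the sketch below just illustrates the general two-level idea of structure tokens guiding mesh-token generation via cross-attention. Module names, token vocabularies, and dimensions are hypothetical, not the paper's architecture.

```python
# Hypothetical two-level sketch: a structure encoder processes articulation
# tokens (parts, joints), and an autoregressive mesh decoder generates mesh
# tokens conditioned on them via cross-attention. Sizes are placeholders.
import torch
import torch.nn as nn

class HierarchicalMeshGenerator(nn.Module):
    def __init__(self, d_model=256, struct_vocab=512, mesh_vocab=4096):
        super().__init__()
        self.struct_emb = nn.Embedding(struct_vocab, d_model)
        self.mesh_emb = nn.Embedding(mesh_vocab, d_model)
        self.struct_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=4)
        self.mesh_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), num_layers=6)
        self.mesh_head = nn.Linear(d_model, mesh_vocab)

    def forward(self, struct_tokens, mesh_tokens):
        # Level 1: articulation-aware structure tokens (parts, joint types, ...).
        memory = self.struct_encoder(self.struct_emb(struct_tokens))
        # Level 2: mesh tokens cross-attend to the structure so the generated
        # geometry stays consistent with the predicted articulation.
        T = mesh_tokens.size(1)
        causal = torch.full((T, T), float("-inf")).triu(1)          # causal mask
        h = self.mesh_decoder(self.mesh_emb(mesh_tokens), memory, tgt_mask=causal)
        return self.mesh_head(h)            # next-token logits over the mesh vocabulary

# Toy call just to show the shapes.
model = HierarchicalMeshGenerator()
logits = model(torch.randint(0, 512, (1, 16)), torch.randint(0, 4096, (1, 128)))
```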
See you tomorrow!
Project Page: manuel-dahnert.com/research/sce...
We propose a training-free 3D shape editing approach that rapidly and precisely edits the regions intended by the user and keeps the rest as is.
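The key property in the post is that only the user-selected region changes. Below is a minimal sketch of that preservation idea as masked blending inside an iterative edit loop; the edit_step callable and the SDF-grid representation are placeholders for whatever representation and update the method actually uses.

```python
# Sketch of the "edit only the selected region" idea: after each step of some
# training-free iterative editing process, values outside the user's mask are
# reset to the original shape. `edit_step` and the SDF grid are placeholders.
import torch

def masked_edit(original_sdf, user_mask, edit_step, steps=50):
    """original_sdf: (D, H, W) grid; user_mask: bool grid, True where edits are allowed."""
    shape = original_sdf.clone()
    for _ in range(steps):
        shape = edit_step(shape)                              # any training-free update
        shape = torch.where(user_mask, shape, original_sdf)   # keep the rest as is
    return shape

# Toy usage: a fake edit that just smooths the grid inside the mask.
sdf = torch.randn(32, 32, 32)
mask = torch.zeros_like(sdf, dtype=torch.bool)
mask[8:24, 8:24, 8:24] = True
edited = masked_edit(sdf, mask, lambda s: 0.9 * s + 0.1 * s.mean())
```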