Matthias Niessner
niessner.bsky.social
Professor for Visual Computing & Artificial Intelligence @TU Munich
Co-Founder @synthesiaIO
Co-Founder @SpAItialAI

https://niessnerlab.org/publications.html
We also provide an interactive GUI to enable the exploration of our editing pipeline.

🌍 antoniooroz.github.io/PercHead/
📽️ youtu.be/4hFybgTk4kE

Great work by Antonio Oroz and Tobias Kirschstein
PercHead: Perceptual Head Model for Single-Image 3D Head Reconstruction & Editing
November 5, 2025 at 11:37 AM
By swapping the encoder, we can transform the model into a disentangled 3D editing pipeline. In this setting, we control geometry through (potentially hand-drawn) segmentation maps and condition style via an image or text prompt.
November 5, 2025 at 11:37 AM
Our trained reconstruction model is able to generate 3D-consistent heads from a single input image. Even with challenging side-view inputs, the model robustly infers missing regions for a coherent, high-fidelity output.

In addition, our architecture seamlessly adapts to downstream tasks:
November 5, 2025 at 11:37 AM
At its core is a generalized 3D head decoder trained with perceptual supervision from DINOv2 and SAM 2.1. We find that our new perceptual loss formulation improves reconstruction fidelity compared to commonly used methods such as LPIPS.
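The idea of a feature-space perceptual loss can be sketched as follows: compare intermediate activations of a frozen feature extractor rather than raw pixels. This is a minimal illustration only; the paper uses DINOv2 / SAM 2.1 backbones, while here a small frozen random CNN stands in as an assumed placeholder:

```python
import torch
import torch.nn as nn

class FeaturePerceptualLoss(nn.Module):
    """Sketch of a feature-space perceptual loss: rendered and ground-truth
    images are compared in the activation space of a frozen backbone.
    The tiny CNN below is a placeholder (an assumption for illustration);
    PercHead supervises with DINOv2 and SAM 2.1 features instead."""

    def __init__(self):
        super().__init__()
        torch.manual_seed(0)  # deterministic placeholder weights
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        for p in self.backbone.parameters():
            p.requires_grad_(False)  # the extractor stays frozen

    def forward(self, pred, target):
        # Accumulate L1 distances between activations after each ReLU.
        loss = pred.new_zeros(())
        x, y = pred, target
        for layer in self.backbone:
            x, y = layer(x), layer(y)
            if isinstance(layer, nn.ReLU):
                loss = loss + (x - y).abs().mean()
        return loss
```

Usage would mirror any reconstruction loss: `loss = FeaturePerceptualLoss()(rendered, gt)`, backpropagated into the head decoder while the feature extractor remains fixed.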
November 5, 2025 at 11:37 AM
Reposted by Matthias Niessner
For more documentation: github.com/scannetpp/sc...

Huge thanks to Yueh-Cheng Liu, as well as Chandan Yeshwanth and @niessner.bsky.social for their incredible work!
GitHub - scannetpp/scannetpp: [ICCV 2023 Oral] ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes
October 13, 2025 at 4:19 PM
On the bright side, training tooling has improved dramatically since then. Deep learning frameworks (PyTorch et al.) and scheduling systems such as SLURM or Kubernetes have become the backbone of modern AI.
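A typical SLURM workflow looks roughly like the batch script below; this is a hedged sketch, with the job name, `train.py`, and `config.yaml` as placeholder assumptions for your own setup:

```shell
#!/bin/bash
#SBATCH --job-name=train-example    # hypothetical job name
#SBATCH --gres=gpu:4                # request 4 GPUs on one node
#SBATCH --cpus-per-task=16          # CPU workers for data loading
#SBATCH --time=48:00:00             # wall-clock limit
#SBATCH --output=logs/%x-%j.out     # %x = job name, %j = job id

# torchrun launches one process per GPU for data-parallel training;
# train.py and config.yaml are placeholders for your own entry point.
srun torchrun --nproc_per_node=4 train.py --config config.yaml
```

Submitted with `sbatch train.slurm`, the scheduler queues the job until the requested GPUs are free, which is exactly the kind of orchestration that did not exist in the AlexNet era.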
October 12, 2025 at 3:46 PM
Given the enormous compute demands of recent generative frontier AI models (LLMs, image, and video models, etc.), where compute is measured in gigawatts, these challenges seem quite amusing in hindsight.
October 12, 2025 at 3:46 PM
The required compute was typically a couple of GPUs on a single desktop machine, trained over several days; e.g., AlexNet was trained on two GTX 580 3GB GPUs for 5-6 days.
October 12, 2025 at 3:46 PM
We generate multiple videos along short, pre-defined trajectories that explore the scene in depth. Our scene memory conditions each video on the most relevant prior views while avoiding collisions.

Great work by Manuel Schneider & @LukasHollein
September 17, 2025 at 12:08 PM
We further propose a color-based densification and progressive training scheme for improved quality and faster convergence.

shivangi-aneja.github.io/projects/sca...
youtu.be/VyWkgsGdbkk

Great work by Shivangi Aneja, Sebastian Weiss, Irene Baeza Rojo, Prashanth Chandran, Gaspard Zoss, Derek Bradley
ScaffoldAvatar: High-Fidelity Gaussian Avatars with Patch Expressions
August 5, 2025 at 12:30 PM
We operate on patch-based local expression features and increase representation capacity by dynamically synthesizing 3D Gaussians with tiny scaffold MLPs conditioned on localized expressions.
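The scaffold idea can be sketched as a tiny MLP per anchor point that decodes a localized expression feature into parameters for a handful of 3D Gaussians. This is an illustrative sketch only, not the paper's architecture; all dimensions and names are assumptions:

```python
import torch
import torch.nn as nn

class ScaffoldGaussianHead(nn.Module):
    """Illustrative sketch (assumed, not ScaffoldAvatar's exact design):
    a tiny MLP maps a localized expression feature at a scaffold point
    to k 3D Gaussians anchored near that point."""

    def __init__(self, feat_dim=32, k=4):
        super().__init__()
        self.k = k
        # Per-Gaussian outputs: 3 (position offset) + 3 (log-scale)
        # + 4 (rotation quaternion) + 1 (opacity) + 3 (color) = 14.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(),
            nn.Linear(64, k * 14),
        )

    def forward(self, anchor_xyz, expr_feat):
        # anchor_xyz: (N, 3) scaffold points; expr_feat: (N, feat_dim).
        out = self.mlp(expr_feat).view(-1, self.k, 14)
        offset, log_scale, quat, opacity, color = out.split((3, 3, 4, 1, 3), dim=-1)
        return {
            "xyz": anchor_xyz.unsqueeze(1) + offset,       # (N, k, 3) positions
            "scale": log_scale.exp(),                      # positive scales
            "rot": nn.functional.normalize(quat, dim=-1),  # unit quaternions
            "opacity": torch.sigmoid(opacity),             # in (0, 1)
            "color": torch.sigmoid(color),
        }
```

Because each MLP only sees a local expression feature, capacity scales with the number of scaffold points while keeping every individual network tiny.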
August 5, 2025 at 12:30 PM
TL;DR: RGB-D scan as input -> compact CAD scene representation with materials, yielding a digital copy that matches the look of the real environment.

Great work by Zhening (Jack) Huang in collaboration with Xiaoyang Wu, Fangcheng Zhong, Hengshuang Zhao, Joan Lasenby
July 4, 2025 at 7:52 AM