Yue Chen
@fanegg.bsky.social

PhD Student at Westlake University. 3D/4D Reconstruction, Virtual Humans.
fanegg.github.io

Code, model, and 4D interactive demo are now available!

🔗Page: fanegg.github.io/Human3R
📄Paper: arxiv.org/abs/2510.06219
💻Code: github.com/fanegg/Human3R

Big thanks to our awesome team!
@fanegg.bsky.social @xingyu-chen.bsky.social Yuxuan Xue @apchen.bsky.social @xiuyuliang.bsky.social Gerard Pons-Moll

Comparison against ground truth (GT) shows that our feedforward method, without any iterative optimization, is not only fast but also accurate.

This is achieved by reading out humans from a 4D foundation model, #CUT3R, with our proposed 𝙝𝙪𝙢𝙖𝙣 𝙥𝙧𝙤𝙢𝙥𝙩 𝙩𝙪𝙣𝙞𝙣𝙜.
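Curious how human prompt tuning can work mechanically? Here is a minimal PyTorch sketch of the general idea, not the official Human3R code: learnable human tokens are appended to a frozen CUT3R-style token stream, and a small head reads human parameters back out. The `cut3r.decoder` interface, token shapes, and output sizes are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class HumanPromptTuner(nn.Module):
    def __init__(self, cut3r, dim=768, n_prompts=16):
        super().__init__()
        self.cut3r = cut3r.eval()                # frozen 4D foundation model (assumed API)
        for p in self.cut3r.parameters():
            p.requires_grad_(False)
        # learnable "human prompts" injected into the token stream
        self.human_prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        # small head reading out human parameters (e.g. SMPL shape/pose/translation)
        self.human_head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 10 + 72 + 3))

    def forward(self, image_tokens):             # (B, N, dim) per-frame tokens
        B = image_tokens.shape[0]
        prompts = self.human_prompts.expand(B, -1, -1)
        tokens = torch.cat([image_tokens, prompts], dim=1)
        out = self.cut3r.decoder(tokens)         # frozen attention mixes prompts with the scene
        human_tokens = out[:, -prompts.shape[1]:]  # read back the prompt slots
        return self.human_head(human_tokens)     # per-prompt human parameters
```

Only the prompts and the small head would be trained in such a setup, which is consistent with the one-GPU, one-day training budget mentioned below.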

Bonus: #Human3R is also a compact human tokenizer!

Our human tokens capture the ID + shape + pose + position of each human, unlocking 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴-𝗳𝗿𝗲𝗲 4D tracking.
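A sketch of what training-free tracking from such tokens can look like: associate humans across frames by cosine similarity of their tokens. The Hungarian matching below is a standard tool used for illustration, not necessarily Human3R's exact recipe.

```python
import torch
from scipy.optimize import linear_sum_assignment

def match_humans(tokens_t, tokens_t1):
    """Associate humans between frames via cosine similarity of their tokens."""
    a = torch.nn.functional.normalize(tokens_t, dim=-1)   # (M, D) tokens at frame t
    b = torch.nn.functional.normalize(tokens_t1, dim=-1)  # (N, D) tokens at frame t+1
    cost = 1.0 - a @ b.T                                  # low cost = similar identity
    rows, cols = linear_sum_assignment(cost.detach().cpu().numpy())
    return list(zip(rows.tolist(), cols.tolist()))        # frame-t idx -> frame-(t+1) idx
```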

#Human3R: Everyone Everywhere All at Once

Just input an RGB video, and we reconstruct 4D humans and scenes online, in 𝗢𝗻𝗲 model and 𝗢𝗻𝗲 stage.

Training this versatile model is easier than you think – it just takes 𝗢𝗻𝗲 day using 𝗢𝗻𝗲 GPU!

🔗Page: fanegg.github.io/Human3R/

Again, training-free is all you need.
#TTT3R: 3D Reconstruction as Test-Time Training
TTT3R offers a simple state update rule to enhance length generalization for #CUT3R — No fine-tuning required!
🔗Page: rover-xingyu.github.io/TTT3R
We rebuilt @taylorswift13’s "22" live at the 2013 Billboard Music Awards - in 3D!
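For intuition about "3D reconstruction as test-time training": a recurrent state can be updated with one online-gradient (delta-rule) step, treating the memory as weights trained at inference time. This is a generic sketch of the idea; TTT3R's exact update rule and learning-rate gating are specified in the paper.

```python
import torch

def state_update(S, k, v, lr=0.1):
    """One online-gradient step on ||S k - v||^2 w.r.t. the state matrix S.

    S: (D_out, D_in) memory state; k: (D_in,) key; v: (D_out,) value.
    """
    pred = S @ k                              # what the current state recalls for this key
    return S + lr * torch.outer(v - pred, k)  # step toward storing the pair (k, v)
```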

Reposted by Yue Chen

💡Humans naturally separate ego-motion from object-motion without dynamic labels. We observe that #DUSt3R has implicitly learned a similar mechanism, reflected in its attention layers.

Reposted by Yue Chen

Excited to introduce LoftUp!

A stronger-than-ever, lightweight feature upsampler for vision encoders that can boost performance on dense prediction tasks by 20%–100%!

Easy to plug into models like DINOv2, CLIP, SigLIP — simple design, big gains. Try it out!

github.com/andrehuang/l...
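A sketch of the plug-in pattern with DINOv2; `upsampler` is a stand-in for a LoftUp-style module (see the repo for the actual API), and its image-guided call signature is an assumption.

```python
import torch

# DINOv2 backbone via torch.hub (official entry point)
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()

def dense_features(image, upsampler):            # image: (B, 3, H, W)
    with torch.no_grad():
        feats = backbone.forward_features(image)["x_norm_patchtokens"]  # (B, N, C)
    B, N, C = feats.shape
    s = int(N ** 0.5)                            # assumes a square patch grid
    feats = feats.transpose(1, 2).reshape(B, C, s, s)
    return upsampler(feats, image)               # (B, C, H, W), guided by the RGB input
```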

Reposted by Yue Chen

🦣Easi3R: 4D Reconstruction Without Training!

Limited 4D datasets? Take it easy.

#Easi3R adapts #DUSt3R for 4D reconstruction by disentangling and repurposing its attention maps → making 4D reconstruction easier than ever!

🔗Page: easi3r.github.io
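To make "repurposing attention maps" concrete, here is an illustrative statistic (not Easi3R's exact one): source tokens that receive little aggregate cross-attention from other views are inconsistent across views, so they can be flagged as likely dynamic and masked on a second inference pass.

```python
import torch

def dynamic_mask(cross_attn, thresh=0.5):
    """cross_attn: (L, H, N_tgt, N_src) maps from a DUSt3R-style decoder."""
    attended = cross_attn.mean(dim=(0, 1)).sum(dim=0)  # attention each source token receives
    score = (attended - attended.min()) / (attended.max() - attended.min() + 1e-8)
    return score < thresh                              # True where likely dynamic
```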

Just "dissect" the cross-attention mechanism of #DUSt3R, making 4D reconstruction easier.

#Easi3R is a simple training-free approach adapting DUSt3R for dynamic scenes.

Our findings from 3D probing lead to a simple yet effective solution: just combining features from different visual foundation models outperforms prior work.

Apply #Feat2GS to sparse & casual captures:
🤗Online Demo: huggingface.co/spaces/endle...
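The feature-combination recipe can be as simple as channel-concatenating per-pixel features from two foundation models before the Gaussian readout; the model handles and shapes below are placeholders.

```python
import torch

def fused_features(img, vfm_a, vfm_b):
    """Stack per-pixel features from two VFMs along the channel axis."""
    fa = vfm_a(img)                                   # (B, Ca, Ha, Wa)
    fb = vfm_b(img)                                   # (B, Cb, Hb, Wb)
    fb = torch.nn.functional.interpolate(fb, size=fa.shape[-2:], mode="bilinear")
    return torch.cat([fa, fb], dim=1)                 # (B, Ca+Cb, Ha, Wa)
```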

With #Feat2GS we evaluated more than 10 visual foundation models (DUSt3R, DINO, MAE, SAM, CLIP, MiDaS, etc.) in terms of geometry and texture — see the paper for comparisons.

📄Paper: arxiv.org/abs/2412.09606
🔍Try it NOW: fanegg.github.io/Feat2GS/#chart

How much 3D do visual foundation models (VFMs) know?

Previous work requires 3D data for probing → expensive to collect!

#Feat2GS @cvprconference.bsky.social 2025: our idea is to read out 3D Gaussians from VFM features, thus probing 3D with novel view synthesis.

🔗Page: fanegg.github.io/Feat2GS
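A minimal sketch of the probing setup, under assumed shapes: a lightweight 1x1-conv head (the only trained part) maps frozen per-pixel VFM features to per-pixel 3D Gaussian parameters, which are then splatted and scored by novel view synthesis. The parameterization is illustrative, not Feat2GS's exact head.

```python
import torch.nn as nn

class GaussianReadout(nn.Module):
    def __init__(self, feat_dim):
        super().__init__()
        # per pixel: xyz (3) + scale (3) + rotation quat (4) + opacity (1) + RGB (3)
        self.head = nn.Conv2d(feat_dim, 3 + 3 + 4 + 1 + 3, kernel_size=1)

    def forward(self, feats):        # (B, C, H, W) frozen VFM features
        return self.head(feats)      # splat the Gaussians, compare to held-out views
```

If the rendered novel views look right, the frozen features contained the 3D information; that is the probe.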