Yue Chen
@fanegg.bsky.social
PhD Student at Westlake University. 3D/4D Reconstruction, Virtual Humans.
fanegg.github.io
#Human3R: Everyone Everywhere All at Once

Just input an RGB video: we reconstruct 4D humans and the scene online, in 𝗢𝗻𝗲 model and 𝗢𝗻𝗲 stage.

Training this versatile model is easier than you think – it just takes 𝗢𝗻𝗲 day using 𝗢𝗻𝗲 GPU!

🔗Page: fanegg.github.io/Human3R/
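For intuition, here is a minimal PyTorch sketch of what a one-model, one-stage online loop could look like: each incoming frame updates a recurrent state, and a scene head and a human head read out from the same pass. All module names, heads, and sizes are hypothetical placeholders, not Human3R's actual interface; see the project page for the real model.

```python
# Minimal sketch (not the official API): a one-stage, online loop where each
# RGB frame updates a recurrent state, and two heads read out the scene
# pointmap and human parameters from the same backbone pass.
import torch
import torch.nn as nn

class OneStageStreamer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # patchify
        self.state_update = nn.GRUCell(dim, dim)                     # online memory
        self.scene_head = nn.Linear(dim, 3)    # per-patch 3D point (pointmap)
        self.human_head = nn.Linear(dim, 82)   # e.g. an SMPL-like pose+shape vector

    def forward(self, frame, state):
        tokens = self.encoder(frame).flatten(2).transpose(1, 2)  # (B, N, dim)
        pooled = tokens.mean(dim=1)                              # frame summary
        state = self.state_update(pooled, state)                 # one-step update
        ctx = tokens + state.unsqueeze(1)                        # condition on memory
        points = self.scene_head(ctx)         # (B, N, 3) scene pointmap
        human = self.human_head(state)        # (B, 82) human parameters
        return points, human, state

model = OneStageStreamer()
state = torch.zeros(1, 256)
video = torch.rand(8, 1, 3, 224, 224)  # 8 RGB frames, streamed one at a time
for frame in video:
    points, human, state = model(frame, state)
print(points.shape, human.shape)  # torch.Size([1, 196, 3]) torch.Size([1, 82])
```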
October 8, 2025 at 8:49 AM
Again, training-free is all you need.
#TTT3R: 3D Reconstruction as Test-Time Training
TTT3R offers a simple state-update rule that enhances length generalization for #CUT3R — no fine-tuning required!
🔗Page: rover-xingyu.github.io/TTT3R
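As a rough illustration of the test-time-training view (not TTT3R's exact rule): treat the recurrent state as a fast-weight associative memory and update it with a gated delta rule, so a long stream does not wash it out. The gate below is an assumption made for this sketch.

```python
# Hedged sketch of a test-time-training state update: the state is a
# fast-weight memory that each new frame corrects, rather than overwrites.
import torch

def ttt_state_update(state, keys, values, base_lr=0.1):
    """One online update of a fast-weight memory `state` (d_k x d_v).

    keys:   (N, d_k) features of the new frame
    values: (N, d_v) targets to associate with those features
    """
    pred = keys @ state                      # what the memory currently recalls
    error = values - pred                    # delta rule: correct the residual
    # Illustrative per-token gate: tokens the memory already explains well get
    # smaller updates, which helps long sequences not wash out the state.
    gate = base_lr * torch.sigmoid(error.norm(dim=-1, keepdim=True))
    return state + keys.T @ (gate * error) / keys.shape[0]

d_k, d_v = 64, 32
state = torch.zeros(d_k, d_v)
for _ in range(100):                         # a long stream of frames
    keys, values = torch.randn(196, d_k), torch.randn(196, d_v)
    state = ttt_state_update(state, keys, values)
print(state.norm())  # the delta-rule correction keeps the state from blowing up
```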
We rebuilt @taylorswift13’s "22" live at the 2013 Billboard Music Awards - in 3D!
October 1, 2025 at 7:06 AM
Reposted by Yue Chen
Excited to introduce LoftUp!

A stronger-than-ever, lightweight feature upsampler for vision encoders that boosts performance on dense prediction tasks by 20%–100%!

Easy to plug into models like DINOv2, CLIP, SigLIP — simple design, big gains. Try it out!

github.com/andrehuang/l...
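To make the plug-in idea concrete, here is a hedged sketch of a coordinate-based cross-attention upsampler: each high-resolution pixel builds a query from its coordinates and RGB, then attends to the encoder's low-resolution tokens. Sizes and module names are illustrative; see the repo for LoftUp's actual architecture.

```python
# Illustrative coordinate-based cross-attention upsampler: every high-res
# pixel queries the frozen encoder's low-res feature tokens.
import torch
import torch.nn as nn

class CrossAttnUpsampler(nn.Module):
    def __init__(self, feat_dim=384, q_dim=64, heads=4):
        super().__init__()
        self.query_proj = nn.Linear(2 + 3, q_dim)   # (x, y) coords + RGB
        self.attn = nn.MultiheadAttention(q_dim, heads,
                                          kdim=feat_dim, vdim=feat_dim,
                                          batch_first=True)
        self.out = nn.Linear(q_dim, feat_dim)

    def forward(self, image, feats):
        B, _, H, W = image.shape                    # high-res image
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                                torch.linspace(-1, 1, W), indexing="ij")
        coords = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2)
        rgb = image.permute(0, 2, 3, 1)             # (B, H, W, 3)
        q = self.query_proj(torch.cat([coords, rgb], dim=-1)).view(B, H * W, -1)
        kv = feats.flatten(2).transpose(1, 2)       # (B, N_low, feat_dim)
        up, _ = self.attn(q, kv, kv)                # every pixel attends to all tokens
        return self.out(up).view(B, H, W, -1).permute(0, 3, 1, 2)

# e.g. upsample 16x16 DINOv2-style tokens to full 224x224 resolution
image = torch.rand(1, 3, 224, 224)
feats = torch.rand(1, 384, 16, 16)                  # low-res encoder features
print(CrossAttnUpsampler()(image, feats).shape)     # torch.Size([1, 384, 224, 224])
```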
April 22, 2025 at 7:55 AM
Reposted by Yue Chen
I was really surprised when I saw this. DUSt3R has learned to segment objects very well without supervision. This knowledge can be extracted post hoc, enabling accurate 4D reconstruction instantly.
🦣Easi3R: 4D Reconstruction Without Training!

Limited 4D datasets? Take it easy.

#Easi3R adapts #DUSt3R for 4D reconstruction by disentangling and repurposing its attention maps → making 4D reconstruction easier than ever!

🔗Page: easi3r.github.io
April 1, 2025 at 6:45 PM
Just "dissect" the cross-attention mechanism of #DUSt3R, making 4D reconstruction easier.
💡Humans naturally separate ego-motion from object-motion without dynamic labels. We observe that #DUSt3R has implicitly learned a similar mechanism, reflected in its attention layers.
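As a toy illustration of "dissecting" the attention: aggregate stored cross-attention maps into a per-token score, and flag tokens that receive little attention from the other view as dynamic. The aggregation rule below is a guess for illustration, not Easi3R's actual procedure.

```python
# Illustrative attention dissection: static, well-matched geometry tends to
# draw consistent cross-view attention, while moving regions draw little.
import torch

def dynamic_mask_from_attention(cross_attn, grid_hw, threshold=0.5):
    """cross_attn: (layers, heads, N_tgt, N_src) attention from view A to view B."""
    # Total attention each source token receives, averaged over layers and heads.
    received = cross_attn.mean(dim=(0, 1)).sum(dim=0)        # (N_src,)
    score = (received - received.min()) / (received.max() - received.min() + 1e-8)
    mask = score < threshold                                  # low attention -> dynamic
    return mask.view(*grid_hw)                                # (H, W) patch grid

attn = torch.rand(12, 8, 196, 196).softmax(dim=-1)            # stand-in for stored maps
mask = dynamic_mask_from_attention(attn, grid_hw=(14, 14))
print(mask.shape, mask.float().mean())  # fraction of patches flagged dynamic
```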
April 1, 2025 at 3:45 PM
#Easi3R is a simple training-free approach adapting DUSt3R for dynamic scenes.
April 1, 2025 at 3:45 PM
How much 3D do visual foundation models (VFMs) know?

Previous work requires 3D data for probing → expensive to collect!

#Feat2GS @cvprconference.bsky.social 2025 - our idea is to read out 3D Gaussians from VFM features, thus probing 3D with novel view synthesis.

🔗Page: fanegg.github.io/Feat2GS
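A hedged sketch of the readout idea: train only a lightweight head that maps frozen VFM features to per-pixel 3D Gaussian parameters, then judge the features by how well the rendered novel views match. The 14-dimensional parameter layout below is illustrative, not Feat2GS's exact design.

```python
# Illustrative probe: a small readout head on top of frozen VFM features;
# only this head is trained, so rendering quality reflects the features.
import torch
import torch.nn as nn

class GaussianReadout(nn.Module):
    # Per pixel: 3 (xyz) + 3 (log-scale) + 4 (rotation quat) + 1 (opacity) + 3 (RGB)
    def __init__(self, feat_dim=384):
        super().__init__()
        self.head = nn.Linear(feat_dim, 14)

    def forward(self, feats):                     # (B, feat_dim, H, W) frozen features
        p = self.head(feats.permute(0, 2, 3, 1))  # (B, H, W, 14)
        xyz, log_scale, quat, rest = p.split([3, 3, 4, 4], dim=-1)
        return {
            "xyz": xyz,
            "scale": log_scale.exp(),                           # positive scales
            "rotation": nn.functional.normalize(quat, dim=-1),  # unit quaternion
            "opacity": rest[..., :1].sigmoid(),
            "rgb": rest[..., 1:].sigmoid(),
        }

feats = torch.rand(1, 384, 32, 32)    # e.g. a DINOv2-style feature map (frozen)
gaussians = GaussianReadout()(feats)  # feed these to a splatting renderer to score views
print({k: tuple(v.shape) for k, v in gaussians.items()})
```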
March 31, 2025 at 4:06 PM