Yue Chen
@fanegg.bsky.social
PhD Student at Westlake University. 3D/4D Reconstruction, Virtual Humans.
fanegg.github.io
Comparison with ground truth (GT) shows that our feedforward method, without any iterative optimization, is not only fast but also accurate.

This is achieved by reading out humans from a 4D foundation model, #CUT3R, with our proposed 𝙝𝙪𝙢𝙖𝙣 𝙥𝙧𝙤𝙢𝙥𝙩 𝙩𝙪𝙣𝙞𝙣𝙜.
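As a rough illustration of the idea (not the actual Human3R code), prompt-tuning readout can look like the sketch below: a small set of learnable human prompt tokens is appended to the frozen backbone's input tokens, and only the prompts plus a light head are trained. All names and dimensions here are assumptions.

```python
# Minimal sketch of prompt-tuning readout from a frozen 4D backbone.
# Assumptions: the backbone is a token-based transformer mapping (B, N, dim) -> (B, N, dim);
# dims, head design, and names are illustrative.
import torch
import torch.nn as nn

class HumanPromptReadout(nn.Module):
    def __init__(self, backbone: nn.Module, dim=768, n_prompts=16, n_human_params=85):
        super().__init__()
        self.backbone = backbone                      # frozen 4D foundation model (CUT3R-like)
        for p in self.backbone.parameters():
            p.requires_grad = False                   # only prompts + head are trained
        self.human_prompts = nn.Parameter(torch.randn(1, n_prompts, dim) * 0.02)
        self.head = nn.Linear(dim, n_human_params)    # decode human parameters from prompt slots

    def forward(self, image_tokens):                  # image_tokens: (B, N, dim)
        B = image_tokens.shape[0]
        prompts = self.human_prompts.expand(B, -1, -1)
        tokens = torch.cat([prompts, image_tokens], dim=1)
        out = self.backbone(tokens)                   # prompts attend to scene tokens and back
        human_tokens = out[:, : prompts.shape[1]]     # read back the prompt slots
        return self.head(human_tokens.mean(dim=1))    # per-frame human parameters
```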
October 8, 2025 at 8:51 AM
Bonus: #Human3R is also a compact human tokenizer!

Our human tokens capture ID + shape + pose + position of each human, unlocking 𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴-𝗳𝗿𝗲𝗲 4D tracking.
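A minimal sketch of what training-free tracking with such tokens could look like, assuming one identity-bearing token per detected person and a simple greedy cosine match (the actual association in Human3R may differ):

```python
# Training-free tracking sketch: associate human tokens across adjacent frames.
import torch
import torch.nn.functional as F

def match_tokens(prev_tokens, curr_tokens, sim_thresh=0.5):
    """prev_tokens: (M, D), curr_tokens: (N, D) -> list of (prev_idx, curr_idx) pairs."""
    sim = F.normalize(prev_tokens, dim=-1) @ F.normalize(curr_tokens, dim=-1).T  # (M, N)
    pairs, used = [], set()
    for i in range(sim.shape[0]):
        j = int(sim[i].argmax())                    # best current match for previous person i
        if sim[i, j] > sim_thresh and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs
```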
October 8, 2025 at 8:50 AM
#Human3R: Everyone Everywhere All at Once

Just input an RGB video: we reconstruct 4D humans and the scene online, in 𝗢𝗻𝗲 model and 𝗢𝗻𝗲 stage.

Training this versatile model is easier than you think – it just takes 𝗢𝗻𝗲 day using 𝗢𝗻𝗲 GPU!

🔗Page: fanegg.github.io/Human3R/
October 8, 2025 at 8:49 AM
Our findings from 3D probing lead to a simple yet effective solution: just combining features from different visual foundation models outperforms prior works.
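As a hedged sketch of that feature-combination step (assuming per-pixel feature maps that are resized to a shared resolution and channel-concatenated; names are illustrative, not the Feat2GS code):

```python
# Fuse features from multiple visual foundation models by resize + channel-concat.
import torch
import torch.nn.functional as F

def fuse_vfm_features(feature_maps, out_hw=(224, 224)):
    """feature_maps: list of (B, C_i, H_i, W_i) tensors from different VFMs."""
    resized = [F.interpolate(f, size=out_hw, mode="bilinear", align_corners=False)
               for f in feature_maps]
    return torch.cat(resized, dim=1)   # (B, sum(C_i), H, W) fused feature map
```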

Apply #Feat2GS to sparse & casual captures:
🤗Online Demo: huggingface.co/spaces/endle...
March 31, 2025 at 4:08 PM
With #Feat2GS we evaluated more than 10 visual foundation models (DUSt3R, DINO, MAE, SAM, CLIP, MiDaS, etc.) in terms of geometry and texture; see the paper for the comparison.

📄Paper: arxiv.org/abs/2412.09606
🔍Try it NOW: fanegg.github.io/Feat2GS/#chart
March 31, 2025 at 4:07 PM
How much 3D do visual foundation models (VFMs) know?

Previous work requires 3D data for probing → expensive to collect!

#Feat2GS @cvprconference.bsky.social 2025: our idea is to read out 3D Gaussians from VFM features, thus probing 3D with novel view synthesis.
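A minimal sketch of such a Gaussian readout head, assuming one 3D Gaussian per pixel predicted by a linear layer on top of VFM features (parameter layout and sizes are illustrative, not the released Feat2GS code):

```python
# Per-pixel 3D Gaussian readout from VFM features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianReadout(nn.Module):
    # per Gaussian: 3 (xyz offset) + 3 (scale) + 4 (rotation quat) + 1 (opacity) + 3 (RGB) = 14
    def __init__(self, feat_dim=768, n_params=14):
        super().__init__()
        self.proj = nn.Linear(feat_dim, n_params)

    def forward(self, feats):          # feats: (B, H*W, feat_dim) per-pixel VFM features
        g = self.proj(feats)           # (B, H*W, 14) raw Gaussian parameters
        xyz, scale, rot, alpha, rgb = g.split([3, 3, 4, 1, 3], dim=-1)
        return {
            "xyz": xyz,                               # position offsets per pixel
            "scale": scale.exp(),                     # positive scales
            "rot": F.normalize(rot, dim=-1),          # unit quaternions
            "opacity": alpha.sigmoid(),
            "rgb": rgb.sigmoid(),
        }
```

The predicted Gaussians are then rendered from held-out viewpoints, and novel view synthesis quality serves as the probing signal.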

🔗Page: fanegg.github.io/Feat2GS
March 31, 2025 at 4:06 PM