Yixin Chen
@yixinchen.bsky.social
Research Scientist at BIGAI, 3D Vision, prev @UCLA, @MPI_IS, @Amazon, https://yixchen.github.io
📊 We also visualize the sampling process of:

🔹 Ours (with biased timestep scheduler) ✅

🔹 Zero123 (without it) ❌

Our approach shows more precise location prediction in earlier stages and finer detail refinement in later stages! 🎯✨ (5/6)
April 1, 2025 at 1:45 AM
💡 Key insight in MOVIS: a biased noise timestep scheduler for diffusion-based novel view synthesizers that prioritizes larger timesteps early in training and gradually shifts toward smaller ones. This improves novel view synthesis in multi-object scenes! 🎯🔥 (4/6)
April 1, 2025 at 1:45 AM
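For intuition, here's a minimal PyTorch sketch of such a scheduler. The linear anneal and the names (`sample_biased_timesteps`, `bias_start`, `progress`) are my own illustrative assumptions, not the exact MOVIS schedule:

```python
import torch

def sample_biased_timesteps(batch_size, num_train_timesteps, progress,
                            bias_start=0.8, bias_end=0.0):
    """Draw diffusion noise timesteps biased toward large t early in training.

    progress: fraction of training completed, in [0, 1].
    The lower bound on t anneals linearly from bias_start to bias_end;
    this rule and the parameter names are illustrative assumptions.
    """
    lo = bias_start + (bias_end - bias_start) * progress
    t_min = int(lo * num_train_timesteps)
    # Uniform over [t_min, num_train_timesteps): skewed toward large t early,
    # covering the full range (including small timesteps) late in training.
    return torch.randint(t_min, num_train_timesteps, (batch_size,))
```

E.g., with 1000 training timesteps, `progress=0.0` draws t in [800, 1000) (coarse layout first), while `progress=1.0` covers the full [0, 1000) range (details too).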
🔍 We analyze the sampling process of diffusion-based novel view synthesizers and observe:
📌 Larger timesteps → Focus on position & orientation recovery
📌 Smaller timesteps → Refine geometry & appearance

👇 We visualize the sampling process below! (3/6)
April 1, 2025 at 1:44 AM
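One rough way to reproduce this kind of analysis: save the scheduler's predicted clean sample at a few timesteps during the reverse loop. A sketch assuming a diffusers-style DDIM scheduler whose `step()` output exposes `pred_original_sample`; the `denoiser` call signature here is a placeholder:

```python
import torch

@torch.no_grad()
def sample_with_snapshots(denoiser, scheduler, latent, snapshot_ts=(900, 500, 100)):
    """Run the reverse diffusion loop, keeping the predicted clean sample
    at a few timesteps to see what gets resolved when."""
    snapshots = {}
    for t in scheduler.timesteps:
        eps = denoiser(latent, t)             # placeholder denoiser call
        out = scheduler.step(eps, t, latent)  # diffusers-style scheduler step
        latent = out.prev_sample
        if int(t) in snapshot_ts:
            # Large t: coarse position/orientation; small t: geometry & appearance.
            snapshots[int(t)] = out.pred_original_sample.clone()
    return latent, snapshots
```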
In MOVIS, we enhance diffusion-based novel view synthesis with:
🔍 Additional structural inputs (depth & mask)
🖌️ Novel-view mask prediction as an auxiliary task
🎯 A biased noise scheduler to facilitate training
We identify the following key insight: (2/6)
April 1, 2025 at 1:43 AM
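To make the first two items above concrete, here's a toy sketch: depth and mask conditioning folded into the denoiser input by channel concatenation, plus a tiny head standing in for the auxiliary novel-view mask prediction. Channel counts and module names are illustrative assumptions, not the actual MOVIS architecture:

```python
import torch
import torch.nn as nn

class StructureConditioning(nn.Module):
    """Toy module: concat depth & mask with the noisy latent, fuse them,
    and predict a novel-view mask as an auxiliary output.
    Shapes and channels are illustrative, not the real MOVIS design."""
    def __init__(self, latent_ch=4, cond_ch=2):
        super().__init__()
        self.fuse = nn.Conv2d(latent_ch + cond_ch, latent_ch, kernel_size=1)
        self.mask_head = nn.Conv2d(latent_ch, 1, kernel_size=1)

    def forward(self, noisy_latent, depth, mask):
        # depth, mask: (B, 1, H, W), resized to the latent resolution
        x = self.fuse(torch.cat([noisy_latent, depth, mask], dim=1))
        # In practice the mask would likely be predicted from denoiser
        # features; a 1x1 head here keeps the sketch self-contained.
        return x, torch.sigmoid(self.mask_head(x))
```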
🚀 How can we preserve object consistency in novel view synthesis (NVS), ensuring correct position, orientation, plausible geometry, and appearance? This is especially critical for image/video generative models and world models.

🎉Check out our #CVPR2025 paper: MOVIS (jason-aplp.github.io/MOVIS) 👇 (1/6)
April 1, 2025 at 1:42 AM
Even more!

Our model generalizes to in-the-wild scenes like YouTube videos🎥🌍! Using just *15 input views*, we achieve high-quality reconstructions with detailed geometry & appearance. 🌟 Watch the demo to see it in action! 👇 (5/n)
March 21, 2025 at 9:52 AM
🏆 On datasets like Replica and ScanNet++, our model produces higher-quality reconstructions than the baselines: better accuracy in less-captured areas, more precise object structures, smoother backgrounds, and fewer floating artifacts. 👀 (4/n)
March 21, 2025 at 9:51 AM
🎥✨ Our method excels in large, heavily occluded scenes, outperforming baselines that require 100 views while using just 10. The reconstructed scene supports interactive text-based editing, and its decomposed object meshes enable photorealistic VFX edits. 👇 (3/n)
March 21, 2025 at 9:50 AM
🛠️ Our method combines decompositional neural reconstruction with a diffusion prior, filling in missing information in less-observed and occluded regions. Reconstruction guidance (rendering loss) and generative guidance (SDS loss) are balanced by our visibility-guided modeling. (2/n)
March 21, 2025 at 9:48 AM
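Conceptually, that balance can be pictured as a per-region convex blend; the sketch below assumes exactly that (a scalar or per-pixel `visibility` weight), which may differ from DPRecon's actual weighting:

```python
def visibility_guided_objective(render_loss, sds_loss, visibility):
    """visibility in [0, 1]: near 1 in well-observed regions, so the
    rendering loss dominates; near 0 in occluded/unseen regions, so the
    diffusion prior (SDS) takes over. A schematic convex blend assumed
    for illustration; not DPRecon's exact formulation."""
    return visibility * render_loss + (1.0 - visibility) * sds_loss
```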
🚀 How can we reconstruct 3D scenes with decomposed objects from sparse inputs?

Check out DPRecon (dp-recon.github.io) at #CVPR2025 — it recovers all objects, achieves photorealistic mesh rendering, and supports text-based geometry & appearance editing. More details👇 (1/n)
March 21, 2025 at 9:48 AM
📢📢📢 Excited to announce the 5th Workshop on 3D Scene Understanding for Vision, Graphics, and Robotics at #CVPR2025! Join us for awesome speakers and challenges on multi-modal 3D scene understanding and reasoning. 🎉🎉🎉

Learn more at scene-understanding.com.
March 14, 2025 at 9:20 AM