Liyuan Zhu
liyuanzzz.bsky.social
PhD student @ Stanford University. MS @ ETH Zurich.
3D Vision and Generation.
https://www.zhuliyuan.net/
Point maps have become a powerful representation for image-based 3D reconstruction. What if we could push point maps even further to tackle 3D registration and assembly?
Introducing Rectified Point Flow (RPF), a generic formulation for point cloud pose estimation.
July 7, 2025 at 3:57 AM
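To make the "rectified flow" idea behind the post concrete, here is a minimal NumPy sketch of the generic rectified-flow setup applied to point clouds: points move along straight paths from a source sample to a target, and a network would regress the constant velocity along that path. This is an illustrative toy under that standard formulation, not the actual Rectified Point Flow implementation; all names (`interpolate`, `velocity_target`) are made up for this sketch.

```python
import numpy as np

# Toy rectified-flow quantities for a point cloud (illustrative only,
# not the Rectified Point Flow codebase).
rng = np.random.default_rng(0)
n_points = 1024

x0 = rng.standard_normal((n_points, 3))        # source sample (e.g. noise)
x1 = rng.standard_normal((n_points, 3)) + 2.0  # target (posed) point cloud

def interpolate(x0, x1, t):
    """Straight-line path x_t = (1 - t) * x0 + t * x1 used by rectified flow."""
    return (1.0 - t) * x0 + t * x1

def velocity_target(x0, x1):
    """Regression target for the flow network: the constant velocity x1 - x0."""
    return x1 - x0

t = 0.3
x_t = interpolate(x0, x1, t)
v = velocity_target(x0, x1)

# Sanity check: integrating the constant velocity from x_t for the
# remaining time (1 - t) lands exactly on the target x1.
assert np.allclose(x_t + (1.0 - t) * v, x1)
```

In training, a network conditioned on `x_t` and `t` would be fit to `v` with a squared loss; at inference, integrating the learned velocity field moves the source points onto the target.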
🔔 Want to redesign your apartment and control the style of every piece of furniture? (virtual try-on for 3D scenes).
🎨 Introducing ReStyle3D, a method that restyles your apartment in any design style you want! #stylization #SIGGRAPH
Page: restyle3d.github.io
Code: github.com/GradientSpac...
May 27, 2025 at 4:22 PM
Glad to be selected as an Outstanding Reviewer for CVPR 2025!
Behind every great conference is a team of dedicated reviewers. Congratulations to this year’s #CVPR2025 Outstanding Reviewers!

cvpr.thecvf.com/Conferences/...
May 12, 2025 at 5:11 AM
Reposted by Liyuan Zhu
🚨 SLAM struggling in dynamic environments? We've been there.

WildGS-SLAM, our new monocular RGB SLAM system at #CVPR2025, tackles dynamic scenes with uncertainty-aware tracking and mapping, resulting in more robust tracking, cleaner maps, and high-quality view synthesis. ⬇️

🌐 wildgs-slam.github.io
🥳Excited to share our latest work, WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments, accepted to #CVPR2025 🌐

We present a robust monocular RGB SLAM system that uses uncertainty-aware tracking and mapping to handle dynamic scenes.
April 11, 2025 at 7:15 PM
Reposted by Liyuan Zhu
🎉 Excited to share our latest work, CrossOver: 3D Scene Cross-Modal Alignment, accepted to #CVPR2025 🌐✨

We learn a unified, modality-agnostic embedding space, enabling seamless scene-level alignment across multiple modalities — no semantic annotations needed!🚀
February 26, 2025 at 10:02 PM