Chih-Hao Lin
chih-hao.bsky.social
Pinned
What if you could control the weather in any video — just like applying a filter?
Meet WeatherWeaver, a video model for controllable synthesis and removal of diverse weather effects — such as 🌧️ rain, ☃️ snow, 🌁 fog, and ☁️ clouds — for any input video.
We’re presenting WeatherWeaver at #ICCV2025, Poster Session 3 (Oct 22, Wed, 10:45–12:45)!
Come visit #337 and see how we make it snow in Hawaii 🏝️❄️⛄
October 22, 2025 at 10:27 AM
Reposted by Chih-Hao Lin
Finally, meet your #3DV2026 Publicity Chairs! 📢
@hanwenjiang1 @yanxg.bsky.social @chih-hao.bsky.social @csprofkgd.bsky.social

We’ll keep the 3DV conversation alive: posting updates, refreshing the website, and listening to your feedback.

Got questions or ideas? Tag @3dvconf.bsky.social anytime!
July 31, 2025 at 4:54 PM
Reposted by Chih-Hao Lin
Introducing your #3DV2026 📝 Publication Chairs & 🔍 Research Interaction Chairs!

📝Publication Chairs ensure accepted papers are properly published in the conference proceedings

🔍Research Interaction Chairs encourage engagement by spotlighting exceptional research in 3D vision
July 24, 2025 at 2:36 AM
Reposted by Chih-Hao Lin
Understanding and reconstructing the 3D world are at the heart of computer vision and graphics. At #CVPR2025, we’ve seen many exciting works in 3D vision.
If you're pushing the boundaries, please consider submitting your work to #3DV2026 in Vancouver! (Deadline: Aug. 18, 2025)
July 1, 2025 at 2:08 AM
Excited to share our work at #CVPR2025!
👁️ IRIS estimates accurate surface materials, spatially-varying HDR lighting, and the camera response function from a set of LDR images! It enables realistic, view-consistent, and controllable relighting and object insertion.
(links in 🧵)
June 10, 2025 at 2:46 AM
Reposted by Chih-Hao Lin
I’m thrilled to share that I will be joining Johns Hopkins University’s Department of Computer Science (@jhucompsci.bsky.social, @hopkinsdsai.bsky.social) as an Assistant Professor this fall.
June 2, 2025 at 7:46 PM
Reposted by Chih-Hao Lin
📢 3DV 2026 – Call for Papers is Out!

📝 Paper Deadline: Aug 18
🎥 Supplementary: Aug 21
🔗 3dvconf.github.io/2026/call-fo...

📅 Conference Date: Mar 20–23, 2026
🌆 Location: Vancouver 🇨🇦

🚀 Showcase your latest research to the world!
#3DV2026 #CallForPapers #Vancouver #Canada
May 29, 2025 at 5:11 PM
Reposted by Chih-Hao Lin
🔊 New NVIDIA paper: Audio-SDS 🔊
We repurpose Score Distillation Sampling (SDS) for audio, turning any pretrained audio diffusion model into a tool for diverse tasks, including source separation, impact synthesis & more.

🎧 Demos, audio examples, paper: research.nvidia.com/labs/toronto...

🧵below
May 9, 2025 at 4:06 PM
What if you could control the weather in any video — just like applying a filter?
Meet WeatherWeaver, a video model for controllable synthesis and removal of diverse weather effects — such as 🌧️ rain, ☃️ snow, 🌁 fog, and ☁️ clouds — for any input video.
May 2, 2025 at 2:19 PM
Reposted by Chih-Hao Lin
[1/10] Is scene understanding solved?

Models today can label pixels and detect objects with high accuracy. But does that mean they truly understand scenes?

Super excited to share our new paper and a new task in computer vision: Visual Jenga!

📄 arxiv.org/abs/2503.21770
🔗 visualjenga.github.io
March 29, 2025 at 7:36 PM
🎬Imagine creating professional visual effects (VFX) with just words! We are excited to introduce AutoVFX, a framework that creates realistic video effects from natural language instructions!

This is a cool project led by Hao-Yu, and we will present it at #3DV 2025!
March 22, 2025 at 3:54 AM
Reposted by Chih-Hao Lin
Can we create realistic renderings of urban scenes from a single video while enabling controllable editing: relighting, object compositing, and nighttime simulation?

Check out our #3DV2025 UrbanIR paper, led by @chih-hao.bsky.social, which does exactly this.

🔗: urbaninverserendering.github.io
March 16, 2025 at 3:39 AM
Reposted by Chih-Hao Lin
Check out UrbanIR - Inverse rendering of unbounded scenes from a single video!

It’s a super cool project led by the amazing Chih-Hao!

@chih-hao.bsky.social is a rising star in 3DV! Follow him!

Learn more here👇
March 15, 2025 at 1:49 PM
✨What if we could transform a daytime driving video into a realistic nighttime scene—without ever stepping outside again?
We introduce UrbanIR, a neural rendering framework for 💡relighting, 🌃nighttime simulation, and 🚘 object insertion—all from a single video of urban scenes!
March 15, 2025 at 6:30 AM
Reposted by Chih-Hao Lin
How do we go beyond colors and recover the intrinsic scene properties? 🤔

👁️ IRIS: Inverse Rendering of Indoor Scenes

IRIS estimates accurate material, lighting, and camera response functions given a set of LDR images, enabling photorealistic and view-consistent relighting and object insertion.
January 10, 2025 at 9:56 PM