Roni Sengupta
@ronisen.bsky.social
Asst. Prof. UNC Chapel Hill CS
Computer Vision & Graphics.

https://www.cs.unc.edu/~ronisen/
Fueling up for the #ICCV2025 deadline with some local pizza!
PS: I kind of like hanging around the lab on deadline day! So much energy all around!
March 8, 2025 at 1:32 AM
My students will present two papers, including one oral, at #WACV2025, both centered on personalized generative face models.
Thanks @luchaoqi.bsky.social for leading this direction.
Unfortunately, I couldn't attend WACV as my other students needed me for the MICCAI and ICCV deadlines!
February 28, 2025 at 10:43 PM
Our NFL-BA loss relies on the fact that points closer to the camera (depth map) with orientations facing the camera (normal map) reflect more light. The NFL-BA loss is minimized during the tracking and mapping phases of existing SLAM frameworks; results are shown on 3DGS-based SLAM methods.
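The intuition above — a light co-located with the camera illuminates near, camera-facing surfaces most strongly — can be sketched with the standard point-light model (inverse-square falloff times the cosine between the normal and the direction to the light). This is an illustrative sketch, not the paper's exact formulation; the function name and shapes are assumptions.

```python
import numpy as np

def near_field_weight(points, normals):
    """Per-pixel contribution of a point light co-located with the camera.

    Illustrative sketch (not NFL-BA's exact loss): irradiance falls off
    with inverse-square distance and scales with the cosine between the
    surface normal and the direction toward the light.

    points:  (H, W, 3) back-projected 3D points in camera coordinates
    normals: (H, W, 3) unit surface normals in camera coordinates
    """
    dist = np.linalg.norm(points, axis=-1)                      # distance to camera/light
    to_light = -points / np.clip(dist[..., None], 1e-8, None)   # unit direction to light
    cos_theta = np.clip(np.sum(normals * to_light, axis=-1), 0.0, None)
    return cos_theta / np.clip(dist, 1e-8, None) ** 2
```

A camera-facing point at depth 1 gets four times the weight of the same point at depth 2, matching the claim that nearer, camera-facing geometry reflects more light.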
December 18, 2024 at 2:05 PM
SLAM algorithms struggle on endoscopy videos due to specularity, textureless surfaces, and dynamic lighting.

We introduce a Near-Field Light Bundle Adjustment loss (NFL-BA) that improves the performance of SOTA SLAM methods, e.g. MonoGS (⬆️35% in tracking, ⬆️48% in mapping).

See asdunnbe.github.io/NFL-BA/
Led by Andrea & Daniel
December 18, 2024 at 2:05 PM
A few key insights:

❌ CtrlNet struggles to preserve the input's color & texture during relighting.
✅ Albedo-conditioned Stable Diffusion to the rescue.

❌ Scribble input alone in CtrlNet doesn't work at all.
✅ Use the latent code from a denoising autoencoder that predicts a shading map from the scribble + normal map.
December 3, 2024 at 3:27 PM
💡Introducing ScribbleLight, a generative model that relights an image from simple scribbles, making virtual staging and interior design of homes easier.

Led by Jun-Myeong, in collaboration with Anand (TTIC), Pieter (W&M), and Annie.
@unccs.bsky.social

👉 chedgekorea.github.io/ScribbleLight/
December 3, 2024 at 3:27 PM
Holiday party with the UNC Vision & Graphics Lab students … where they get to see a more relaxed version of me after the CVPR deadline! 🤣
November 23, 2024 at 9:48 PM
Surprisingly, good old StyleGANv2 is more effective for identity-preserving personalized aging than diffusion models, which struggle to make subtle changes. Something to fix in the future!
Credits to @luchaoqi.bsky.social for leading the work, in collaboration with folks from @unccs.bsky.social and UMD.
2/2
November 22, 2024 at 5:14 PM
SOTA virtual face re-aging techniques often struggle to preserve identity under large age changes.
We present MyTimeMachine, a personalized generative virtual aging model, trained on ~50 images spanning 20-40 years.
Check out more cool results here: mytimemachine.github.io
1/2
⬇️👂
November 22, 2024 at 5:14 PM