Rohit Gandikota
@rohitgandikota.bsky.social
Ph.D. AI @ Northeastern University. Understanding, mapping, and editing knowledge in large generative models. Ex-Scientist, Indian Space Research Organization
You can now edit text-to-video models in under 1 second! ⚡️🚀🎥

Unified Concept Editing now supports video models, thanks to Mamiglia!

It's amazing how UCE extends to large video models: instant erase/edit (1 sec) compared to standard fine-tuning (>30 mins) ⏱️

Try the code👇
December 8, 2025 at 7:56 PM
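For readers wondering how an edit can take under a second: UCE applies a closed-form least-squares update to attention projection weights, with no gradient descent at all. Below is a minimal numpy sketch of that style of update; the function name, argument layout, and the small ridge term are my own illustrative choices, not the repo's code.

```python
import numpy as np

def uce_edit(W_old, edit_keys, target_vals, preserve_keys):
    """Closed-form weight edit in the spirit of Unified Concept Editing.

    W_old:         (d_out, d_in) projection matrix (e.g. a cross-attention
                   value projection).
    edit_keys:     list of (d_in,) concept embeddings to redirect.
    target_vals:   list of (d_out,) desired outputs for those concepts.
    preserve_keys: list of (d_in,) embeddings whose outputs must not move.
    """
    d_in = W_old.shape[1]
    A = np.zeros((W_old.shape[0], d_in))   # numerator:   sum of v c^T
    B = np.zeros((d_in, d_in))             # denominator: sum of c c^T
    for c, v in zip(edit_keys, target_vals):
        A += np.outer(v, c)                # push W c toward the new target v
        B += np.outer(c, c)
    for c in preserve_keys:
        A += np.outer(W_old @ c, c)        # pin W c to its old output
        B += np.outer(c, c)
    # small ridge term keeps the matrix solve well-conditioned
    return A @ np.linalg.inv(B + 1e-4 * np.eye(d_in))
```

Because the whole edit is a single matrix solve per projection, it runs in well under a second even for large layers, which is what makes the video-model case practical.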
We discovered how to fix diffusion models' diversity issues using interpretability!

It's all in the first time-step!⏱️

Turns out the concepts needed for diversity are already present in the model - it simply doesn't use them!

Check out our @wacv_official work - we added theoretical evidence👇
x.com/rohitgandik...
December 3, 2025 at 5:00 PM
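The post doesn't spell out the mechanism, but the "first time-step" idea can be pictured with a toy sampler where an intervention hook touches only the very first noise prediction. Everything here (the DDIM-style step, the alpha schedule, the hook API) is an illustrative assumption, not the paper's code.

```python
import numpy as np

def ddim_step(x, eps, a_t, a_prev):
    """Deterministic DDIM update from noise level a_t to a_prev."""
    x0 = (x - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)
    return np.sqrt(a_prev) * x0 + np.sqrt(1 - a_prev) * eps

def sample(model, x_T, alphas, first_step_hook=None):
    """Toy DDIM sampler where an optional hook edits only the very
    first noise prediction -- the step where, per the post, the
    model's unused diversity can be unlocked."""
    x = x_T
    for i in range(len(alphas) - 1):
        eps = model(x, i)
        if i == 0 and first_step_hook is not None:
            eps = first_step_hook(eps)           # intervene once, at t = T
        x = ddim_step(x, eps, alphas[i], alphas[i + 1])
    return x
```

The point of the sketch: because every later step builds on the first update, even a small first-step intervention changes which mode of the model's knowledge the trajectory commits to.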
We also find a deep trade-off:
Robust methods (destruction-based🧨) tend to distort unrelated generations.

Understanding this helps researchers choose or design erasure methods that fit their needs.
December 1, 2025 at 2:50 PM
⚙️Classifier Steering: Before the popular classifier-free guidance (@hojonathanho) came classifier guidance (@prafdhar)

By steering generator outputs along an external classifier's manifold, we search a diffusion model's knowledge and bring back the erased concepts
x.com/prafdhar/st...
December 1, 2025 at 2:50 PM
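Classifier guidance itself is a one-line change to the sampler: shift the predicted noise against the gradient of an external classifier's log-probability (Dhariwal & Nichol's formulation). A sketch, where `grad_log_p` stands in for ∇ₓ log p(y|x) from whatever classifier is used as the probe:

```python
import numpy as np

def classifier_guided_eps(eps, grad_log_p, alpha_bar_t, scale):
    """Classifier guidance: steer the noise prediction against the
    classifier's gradient so sampling drifts toward inputs the
    classifier scores as the target concept -- here, a probe that can
    surface an 'erased' concept a model supposedly forgot."""
    return eps - scale * np.sqrt(1.0 - alpha_bar_t) * grad_log_p
```

Sampling with the steered prediction walks generations along the classifier's manifold, which is how this probe can pull erased concepts back out of an unlearnt model.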
🏞️ In-context attacks: Inspired by in-context learning in LLMs, we design a similar experiment in image models with in-painting

By showing the model an unfinished image and asking it to finish it, we nudge it to search through its knowledge and complete the task from visual context
x.com/arankomatsu...
December 1, 2025 at 2:50 PM
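One way such an in-painting probe can work is RePaint-style masked sampling: at every step the known pixels are re-noised to the current noise level, so the model only generates inside the mask while the visible context nudges it toward the concept. A toy 1-D numpy version; shapes, the alpha schedule, and the `model(x, step)` signature are all assumptions.

```python
import numpy as np

def inpaint(model, x_known, mask, alphas, rng):
    """RePaint-style in-painting loop used as an in-context probe.
    mask == True marks the region the model must generate; everywhere
    else we re-noise the known pixels to the current noise level so
    the visual context stays consistent at every step."""
    x = rng.standard_normal(x_known.shape)
    for i in range(len(alphas) - 1):
        a_t, a_prev = alphas[i], alphas[i + 1]
        # keep the known region consistent with the current noise level
        noised = np.sqrt(a_t) * x_known + np.sqrt(1 - a_t) * rng.standard_normal(x_known.shape)
        x = np.where(mask, x, noised)
        eps = model(x, i)
        x0 = (x - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_prev) * x0 + np.sqrt(1 - a_prev) * eps
    return np.where(mask, x, x_known)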
🧠Training-free method: We add small amounts of noise after each denoising step, like Brownian motion in physics. We call it noise-based probing.

This technique reveals hidden "erased" knowledge inside most robust unlearnt models
December 1, 2025 at 2:50 PM
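The probe described above is easy to picture in code: a plain deterministic sampling loop, plus a small Gaussian kick after every update. A toy numpy sketch (the DDIM-style step, schedule, and `sigma` knob are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def probe_sample(model, x_T, alphas, sigma, rng):
    """Noise-based probing: an ordinary DDIM-style loop, but after
    every denoising update we inject a small Gaussian kick, like
    Brownian motion. The extra stochasticity lets sampling escape the
    shallow basins an unlearnt model was nudged into, revealing
    knowledge that was hidden rather than erased."""
    x = x_T
    for i in range(len(alphas) - 1):
        a_t, a_prev = alphas[i], alphas[i + 1]
        eps = model(x, i)
        x0 = (x - np.sqrt(1 - a_t) * eps) / np.sqrt(a_t)
        x = np.sqrt(a_prev) * x0 + np.sqrt(1 - a_prev) * eps
        x += sigma * rng.standard_normal(x.shape)   # the probe
    return x
```

With `sigma = 0` this reduces to the unmodified sampler, so the probe is a strict, training-free superset of normal generation.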
The real trick is finding clever ways to stimulate the model into revealing its hidden knowledge. In this work, we found several simple probes that do exactly that!

📈Optimization-based
🧠Training-free methods
🏞️ In-context attacks
⚙️Classifier Steering

All unlearning methods show traces!
December 1, 2025 at 2:50 PM
We tested several unlearning methods and found none of them really erase knowledge from the model - they simply hide it! 🧐

What does this mean? We must tread carefully with unlearning research within diffusion models🚨

Here is what we learned 🧵👇(led by @kevinlu4588)
x.com/kevinlu4588...
December 1, 2025 at 2:50 PM
Stuck on what Thanksgiving dish to cook? 🦃 Here are some AI-generated ideas, made by composing concepts like "cooked" and "fancy" dinner. 🍗🥧 You can also blend and explore "vegetarian", "healthier diet", and many more!

Stay tuned for technical details! 🧵

Happy Holidays!🎄✨

@davidbau.bsky.social
November 22, 2023 at 4:40 AM
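A plausible mechanism for blending concepts like "cooked" and "fancy" is composable diffusion guidance: sum weighted per-concept guidance directions on top of the unconditional noise prediction. This is a sketch of that general recipe, not necessarily the exact method behind these images.

```python
import numpy as np

def composed_eps(eps_uncond, concept_eps, weights):
    """Compose concepts ('cooked' + 'fancy' + ...) by summing weighted
    guidance directions, composable-diffusion style:
        eps = eps_u + sum_i w_i * (eps_i - eps_u)
    where eps_i is the noise prediction conditioned on concept i."""
    out = eps_uncond.copy()
    for e, w in zip(concept_eps, weights):
        out += w * (e - eps_uncond)
    return out
```

Turning a weight up strengthens that concept's pull on the sample, and turning it negative pushes the generation away from it, which is what lets you explore "vegetarian" or "healthier" variants of the same dish.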