Anubhav Jain
anubhavj480.bsky.social
PhD Candidate @ NYU
Reposted by Anubhav Jain
New results from @anubhavj480.bsky.social, one of my co-advised students (on the job market, hint hint): a new way of forging or removing watermarks in images generated with diffusion models. This is a simple and effective adversarial attack that requires only one example!
April 30, 2025 at 5:58 PM
Think your latent-noise diffusion watermarking method is robust? Think again!

We show that these methods are susceptible to simple adversarial attacks that require only one watermarked example and an off-the-shelf encoder. The attack can both forge and remove the watermark with very high accuracy.
April 30, 2025 at 5:32 PM
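The post only sketches the idea, but the core mechanism can be illustrated in a few lines: perturb an image so that an encoder's latent matches the latent of the one watermarked example. The sketch below is a toy, not the paper's actual attack — the "encoder" is a random linear map standing in for an off-the-shelf image encoder (e.g. a VAE), and images are flat vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an off-the-shelf encoder: a fixed random linear map
# from 64-dim "images" to 16-dim latents. (Assumption for illustration;
# the real attack would use a pretrained image encoder.)
W = rng.normal(size=(16, 64)) / 8.0
encode = lambda x: W @ x

# The single watermarked example: its latent is what detection keys on.
x_marked = rng.normal(size=64)
z_target = encode(x_marked)

# A clean image onto which we want to forge the watermark.
x = rng.normal(size=64)

# Gradient descent on ||encode(x + delta) - z_target||^2 w.r.t. delta.
# For a linear encoder the gradient (up to a factor) is W.T @ residual.
delta = np.zeros(64)
for _ in range(1500):
    resid = encode(x + delta) - z_target
    delta -= 0.1 * (W.T @ resid)

err = np.linalg.norm(encode(x + delta) - z_target)
```

After the loop, `encode(x + delta)` is nearly indistinguishable from the watermarked latent, so a latent-space detector would flag the forged image; running the same optimization toward a non-watermarked latent would remove a mark instead.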
Diffusion models are amazing at generating high-quality images of what you ask them for, but they can also generate things you didn't ask for. How do you stop a diffusion model from generating unwanted content such as nudity, violence, or the style of a particular artist? We introduce TraSCE (1/n)
December 18, 2024 at 8:07 PM
Have you ever wondered why diffusion models memorize, and why all initializations lead to the same training sample? As we show, this is because, as in dynamical systems, the memorized sample acts as an attractor, and a corresponding attraction basin forms along the denoising trajectory.
December 4, 2024 at 9:03 PM
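The attractor intuition from the post can be demonstrated with a toy dynamical system (this is an illustration of the concept, not the paper's model): if each "denoising" step contracts toward a memorized point, every initialization ends up at that same point.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical memorized training sample acting as an attractor.
memorized = np.array([1.0, -2.0, 0.5])

def denoise_step(x, strength=0.3):
    # Contractive update: move a fraction of the way toward the
    # memorized sample, mimicking an attraction basin.
    return x + strength * (memorized - x)

# Wildly different initializations all converge to the same endpoint.
endpoints = []
for _ in range(5):
    x = rng.normal(size=3) * 10.0
    for _ in range(100):
        x = denoise_step(x)
    endpoints.append(x)
```

Every trajectory collapses onto `memorized`, which is the qualitative behavior the post describes for memorized samples in the denoising dynamics.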