Arijit Ghosh
@arrijitghosh.bsky.social
I add noise to images and make neural networks remove it!
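That one-liner is, of course, diffusion-model training. A minimal sketch of the idea, assuming a generic DDPM-style setup where `model`, the image batch `x0`, and the noise schedule `alphas_cumprod` are illustrative placeholders rather than anyone's actual code:

```python
# Minimal DDPM-style training step: add noise, train the network to predict it.
# `model` and `alphas_cumprod` are illustrative placeholders.
import torch
import torch.nn.functional as F

def ddpm_loss(model, x0, alphas_cumprod):
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)            # signal level at step t
    noise = torch.randn_like(x0)                          # "add noise to images"
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)               # "make networks remove it"
```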
Reposted by Arijit Ghosh
Yesterday, @nicolasdufour.bsky.social defended his PhD. I really enjoyed the years of collaboration w/ @vickykalogeiton.bsky.social (& @loicland.bsky.social)

Video: youtube.com/live/DXQ7FZA...

Big thanks to the jury @dlarlus.bsky.social @ptrkprz.bsky.social @gtolias.bsky.social A. Efros & T. Karras
November 27, 2025 at 7:14 PM
Reposted by Arijit Ghosh
Nicolas ( @nicolasdufour.bsky.social ) is defending his PhD right now.

I was so in awe of the presentation that I even forgot to take pictures 😅
November 26, 2025 at 6:00 PM
Reposted by Arijit Ghosh
#KostasThoughts: Please stop using AI tools to write reviews. I want your authentic thoughts, even if they contain spelling, grammar, & factual mistakes. An LLM should not influence your judgment. The purpose of multiple reviewers is to provide diverse views and error checking.
November 17, 2025 at 1:50 AM
Reposted by Arijit Ghosh
There has to be a decoupling of research for the sake of increasing knowledge from research for the sake of pleasing VCs. As long as these conferences are the key to a career making millions, they will be overrun by people just taking the gamble and thus not taking the reviewing part seriously.
November 16, 2025 at 9:39 PM
Reposted by Arijit Ghosh
All the slop review drama unfolding at ICLR is painful to watch. But having so many eyes on it puts us in a better position to find solutions. This would have taken longer to be exposed on a closed platform, where we may have had a few screenshots and a few author stories at best.
November 16, 2025 at 9:09 PM
Reposted by Arijit Ghosh
Update on #CVPR2026 full paper & compute form submission issue:

We would like to inform authors that the OR submission system requires the submission of a compute reporting form along with any updates to the full paper. We have identified this as a system-related issue.

1/2
November 13, 2025 at 8:27 PM
Reposted by Arijit Ghosh
This Nature retrospective is quite interesting.
To me, the only solution to the credit assignment problem is obvious: stop believing a single person is responsible for every big discovery. It's an artifact of our monkey brain requiring a face for storage, not the reality of how knowledge progresses.
"stole Rosalind Franklin's work" has become the new orthodoxy. While she was certainly the victim of sexism from Watson, I think her colleague Wilkins was the real villain. Events 1951-53 well covered in Nature in 2023 www.nature.com/articles/d41...
What Rosalind Franklin truly contributed to the discovery of DNA’s structure
Franklin was no victim in how the DNA double helix was solved. An overlooked letter and an unpublished news article, both written in 1953, reveal that she was an equal player.
www.nature.com
November 8, 2025 at 8:22 AM
Reposted by Arijit Ghosh
Image generation just got much more energy-efficient. 👍
We introduce MIRO: a new paradigm for T2I model alignment integrating reward conditioning into pretraining, eliminating the need for separate fine-tuning/RL stages. This single-stage approach offers unprecedented efficiency and control.

- 19x faster convergence ⚡
- 370x fewer FLOPs than FLUX-dev 📉
October 31, 2025 at 8:28 PM
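For readers wondering what "reward conditioning in pretraining" could look like mechanically, here is a hedged sketch; `denoiser` and the precomputed per-image `rewards` are hypothetical placeholders, and this illustrates the general recipe (reward as one more conditioning signal), not MIRO's actual implementation:

```python
# Hypothetical reward-conditioned pretraining step (placeholders, not MIRO's code).
import torch
import torch.nn.functional as F

def training_step(denoiser, images, text_emb, rewards, alphas_cumprod):
    b = images.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=images.device)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    noise = torch.randn_like(images)
    x_t = a_bar.sqrt() * images + (1 - a_bar).sqrt() * noise
    # The reward score is just another conditioning input, learned jointly with
    # the text conditioning: no separate fine-tuning or RL stage afterwards.
    return F.mse_loss(denoiser(x_t, t, text_emb, rewards), noise)
```

At sampling time one would then condition on a high reward value to steer generation toward preferred images, much like classifier-free guidance applied to the reward channel.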
Reposted by Arijit Ghosh
Check out our new work: MIRO

No more post-training alignment!
We integrate human alignment right from the start, during pretraining!

Results:
✨ 19x faster convergence ⚡
✨ 370x less compute 💻

🔗 Explore the project: nicolas-dufour.github.io/miro/
October 31, 2025 at 9:11 PM
Reposted by Arijit Ghosh
Privileged to have diffusion master @nicolasdufour.bsky.social give our team (full house) a tour of his excellent work on data- and compute-efficient diffusion models and a sneak preview of his latest MIRO work.
Check it out 👌
October 31, 2025 at 7:28 PM
Reposted by Arijit Ghosh
I'm super happy about Nicolas' latest work, probably the magnum opus of his PhD.

Read the thread for all the great details.
The main conclusion I draw from this work is that better pretraining, in particular by conditioning on better data, allows us to train SOTA models at a fraction of the cost.
We introduce MIRO: a new paradigm for T2I model alignment integrating reward conditioning into pretraining, eliminating the need for separate fine-tuning/RL stages. This single-stage approach offers unprecedented efficiency and control.

- 19x faster convergence ⚡
- 370x fewer FLOPs than FLUX-dev 📉
October 31, 2025 at 11:39 AM
Reposted by Arijit Ghosh
We introduce MIRO: a new paradigm for T2I model alignment integrating reward conditioning into pretraining, eliminating the need for separate fine-tuning/RL stages. This single-stage approach offers unprecedented efficiency and control.

- 19x faster convergence ⚡
- 370x fewer FLOPs than FLUX-dev 📉
October 31, 2025 at 11:24 AM
Reposted by Arijit Ghosh
👀 arxiv.org/abs/2510.25897

Thread with all details coming soon!
October 31, 2025 at 8:55 AM
Reposted by Arijit Ghosh
"The Principles of Diffusion Models" by Chieh-Hsin Lai, Yang Song, Dongjun Kim, Yuki Mitsufuji, Stefano Ermon. arxiv.org/abs/2510.21890
It might not be the easiest intro to diffusion models, but this monograph is an amazing deep dive into the math behind them and all the nuances
The Principles of Diffusion Models
This monograph presents the core principles that have guided the development of diffusion models, tracing their origins and showing how diverse formulations arise from shared mathematical ideas. Diffu...
arxiv.org
October 28, 2025 at 8:35 AM
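For a taste of the math such a monograph covers, here are the standard forward (noising) process and the usual denoising objective; these are textbook diffusion results, not excerpts from the monograph:

```latex
% Standard diffusion-model results (not excerpts from the monograph).
q(x_t \mid x_0) = \mathcal{N}\!\big(\sqrt{\bar\alpha_t}\, x_0,\ (1-\bar\alpha_t) I\big),
\qquad
\mathcal{L} = \mathbb{E}_{t,\, x_0,\, \epsilon}
\Big[ \big\| \epsilon - \epsilon_\theta\big(\sqrt{\bar\alpha_t}\, x_0
+ \sqrt{1-\bar\alpha_t}\,\epsilon,\ t\big) \big\|^2 \Big]
```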
Reposted by Arijit Ghosh
"familiar to most members of society"? Hell, if I know 20% of the things in that table, that's a maximum.

Who are those guys and in what society do they live?

This is exactly why I think the concept of AGI is meaningless.
October 17, 2025 at 4:50 PM
Reposted by Arijit Ghosh
Probably the best paper title of my career. To be read with the Indiana Jones soundtrack. And yes, we solved and differentiated quite well 80 million (small) Fused Gromov-Wasserstein problems per epoch using a neural network on GPU.
Our latest paper “The Quest for the GRAph Level autoEncoder (GRALE)” was accepted at NeurIPS 2025!

arxiv.org/abs/2505.22109

🏆 GRALE 🏆 can encode and decode graphs into and from a shared Euclidean space.

Training such a model should require solving the graph matching problem but...
The quest for the GRAph Level autoEncoder (GRALE)
Although graph-based learning has attracted a lot of attention, graph representation learning is still a challenging task whose resolution may impact key application fields such as chemistry or biolog...
arxiv.org
October 16, 2025 at 3:17 PM
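For context, here is a toy Fused Gromov-Wasserstein problem solved with the POT library; it illustrates the kind of matching problem GRALE learns to amortize on GPU, and the two tiny graphs are made up for the example, not from the paper:

```python
# A toy Fused Gromov-Wasserstein matching with POT (pip install pot);
# illustrative of the problem GRALE amortizes, not the paper's code.
import numpy as np
import ot

# Two small graphs: adjacency (structure) matrices and node features.
C1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
C2 = np.array([[0, 1], [1, 0]], dtype=float)
F1 = np.array([[0.0], [1.0], [1.0]])   # node features, graph 1
F2 = np.array([[0.0], [1.0]])          # node features, graph 2

M = ot.dist(F1, F2)                    # feature cost matrix
p, q = ot.unif(3), ot.unif(2)          # uniform node weights

# alpha trades off feature cost (Wasserstein) vs. structure cost (Gromov).
T = ot.gromov.fused_gromov_wasserstein(M, C1, C2, p, q,
                                       loss_fun='square_loss', alpha=0.5)
print(T)  # soft node-to-node matching between the two graphs
```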
Reposted by Arijit Ghosh
Did you open-source your #ICCV2025 works?

As a PyTorch Ambassador, I would like to write an article to introduce open-sourced ICCV 2025 works (including workshops and demos) for promoting open-source/science + PyTorch

If interested, share your work via the form in my reply 👇
October 12, 2025 at 7:55 PM
Reposted by Arijit Ghosh
Over the past year, my lab has been working on fleshing out theory + applications of the Platonic Representation Hypothesis.

Today I want to share two new works on this topic:

Eliciting higher alignment: arxiv.org/abs/2510.02425
Unpaired learning of unified reps: arxiv.org/abs/2510.08492

1/9
October 10, 2025 at 10:13 PM
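As background, one common way to quantify how "aligned" two models' representations are is linear CKA; this is a generic metric sketch, not necessarily the measure used in the papers above:

```python
# Linear CKA, a common representational-alignment metric
# (generic sketch, not necessarily what these papers use).
import numpy as np

def linear_cka(X, Y):
    """X, Y: (n_samples, dim) activations from two models on the same inputs."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, 'fro') ** 2
    return hsic / (np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro'))

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 64))
print(linear_cka(A, A @ rng.standard_normal((64, 32))))  # high for linearly related reps
```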
Reposted by Arijit Ghosh
Happening now at VRG Colloquium by @vickykalogeiton.bsky.social
October 9, 2025 at 11:48 AM
Reposted by Arijit Ghosh
Very proud of our recent work, kudos to the team! Read @davidpicard.bsky.social’s excellent post for more details or the paper arxiv.org/pdf/2502.21318
October 8, 2025 at 9:19 PM
Reposted by Arijit Ghosh
Final note: I'm (we're) tempted to organize a challenge on that topic as a workshop at a CV conf: ImageNet is the only source of images allowed, and you compete to get the bold numbers.

Do you think there would be people in for that? Do you think it would make for a nice competition?
October 8, 2025 at 8:43 PM
Reposted by Arijit Ghosh
🚨Updated: "How far can we go with ImageNet for Text-to-Image generation?"

TL;DR: train a text2image model from scratch on ImageNet only and beat SDXL.

Paper, code, data available! Reproducible science FTW!
🧵👇

📜 arxiv.org/abs/2502.21318
💻 github.com/lucasdegeorg...
💽 huggingface.co/arijitghosh/...
October 8, 2025 at 8:43 PM