Matheus Gadelha
gadelha.bsky.social
Research Scientist at Adobe Research. ML/3D/Graphics. http://mgadelha.me
Reposted by Matheus Gadelha
Breaking: we release SYNTH, a fully synthetic generalist dataset for pretraining, and two new SOTA reasoning models trained exclusively on it. Despite having seen only 200 billion tokens, Baguettotron is currently best-in-class in its size range. pleias.fr/blog/blogsyn...
November 10, 2025 at 5:30 PM
Session this afternoon (in 30 minutes)!!

Poster 153 — see you there!
Excited to share our #ICCV2025 work Reusing Computation in Text-to-Image Diffusion for Efficient Generation of Image Sets!

Our method generates large sets of images using significantly less compute than standard diffusion.

📎https://ddecatur.github.io/hierarchical-diffusion/

1/
October 22, 2025 at 11:53 PM
Reposted by Matheus Gadelha
I wrote a notebook for a lecture/exercise on image generation with flow matching. The idea is to use FM to render images composed of simple shapes using their attributes (type, size, color, etc.). Not super useful but fun and easy to train!
colab.research.google.com/drive/16GJyb...

Comments welcome!
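The notebook itself is linked above; purely as an illustration of the idea (toy 2D points instead of shape images, a linear velocity model standing in for a network — all names and data here are my own, not the notebook's), a minimal flow-matching loop could look like:

```python
import numpy as np

# Minimal flow-matching sketch: fit a velocity field v(x_t, t) so that
# integrating dx/dt = v(x, t) transports noise samples toward data.
# Pairs (x0, x1) define straight paths x_t = (1 - t) x0 + t x1, whose
# velocity is simply x1 - x0; here v is linear regression on [x_t, t, 1].
rng = np.random.default_rng(0)
n = 4096
x1 = rng.uniform(2.0, 3.0, size=(n, 2))   # toy "data": points in a square
x0 = rng.normal(size=(n, 2))              # noise samples
t = rng.uniform(size=(n, 1))
xt = (1 - t) * x0 + t * x1                # sample on the straight path
feats = np.hstack([xt, t, np.ones((n, 1))])
target = x1 - x0                          # regression target: path velocity
W, *_ = np.linalg.lstsq(feats, target, rcond=None)

# Sampling: Euler-integrate dx/dt = v(x, t) from t=0 (noise) to t=1.
x = rng.normal(size=(512, 2))
steps = 50
for i in range(steps):
    ti = np.full((512, 1), i / steps)
    f = np.hstack([x, ti, np.ones((512, 1))])
    x = x + (f @ W) / steps
# x now concentrates around the data square; its mean lands near ~2.5.
```

The appeal of the straight-path ("rectified flow") formulation is exactly this: the regression target x1 - x0 is available in closed form, so training is ordinary supervised regression with no simulation in the loop.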
June 27, 2025 at 4:53 PM
For folks attending CVPR: is there a website where I can see the list of workshops, their location AND time? Day and time are empty when I access cvpr.thecvf.com/Conferences/...
CVPR 2025 Workshop List
cvpr.thecvf.com
June 11, 2025 at 4:04 AM
I will be in Nashville until Saturday for CVPR'25 \o/

DM if you want to meet!
June 9, 2025 at 7:53 PM
Reposted by Matheus Gadelha
🏅Honored to have been awarded at #Eurographics25 for our paper on #LipschitzPruning to speed-up SDF rendering!

👉 The paper's page: wbrbr.org/publications...

Congrats to @wbrbr.bsky.social, M. Sanchez, @axelparis.bsky.social, T. Lambert, @tamyboubekeur.bsky.social, M. Paulin and T. Thonat!
May 19, 2025 at 9:54 AM
NeurIPS and SIGGRAPH Asia deadlines are coming.

Make your life easier: read this thread.
After one more CVPR deadline, I feel compelled to share a couple of very useful LaTeX/Overleaf features that, surprisingly, are not broadly adopted. It boils down to just two very simple things:
May 14, 2025 at 12:09 AM
Let's gooo!!! \o/

Probably my first time visiting Brazil for professional reasons :-)
That's a wrap for #ICLR2025! See you all next year in Brazil! Please all welcome @bharathhariharan.bsky.social as the new Senior Program Chair! (With @cvondrick.bsky.social continuing on as General Chair.)
April 28, 2025 at 7:48 PM
Reposted by Matheus Gadelha
By popular demand, we are extending #CVPR2025 coverage to Bluesky. Stay tuned!
February 27, 2025 at 9:07 PM
Reposted by Matheus Gadelha
Exciting news! MegaSAM code is out🔥 & the updated Shape of Motion results with MegaSAM are really impressive! A year ago I didn't think we could make any progress on these videos: shape-of-motion.github.io/results.html
Huge congrats to everyone involved and the community 🎉
February 24, 2025 at 6:52 PM
I understand the sentiment, but it is important for people to know that it currently does not reflect reviewer guidelines at CVPR: cvpr.thecvf.com/Conferences/...

“(…) you should include specific feedback on ways the authors can improve their papers.”
February 23, 2025 at 6:54 PM
Reposted by Matheus Gadelha
Late to post, but excited to introduce CUT3R!

An online 3D reasoning framework for many 3D tasks directly from just RGB. For static or dynamic scenes. Video or image collections, all in one!

Project Page: cut3r.github.io
Code and Model: github.com/CUT3R/CUT3R
February 18, 2025 at 5:03 PM
Those plots are so cool! \o/
I just pushed a new paper to arXiv. I realized that a lot of my previous work on robust losses and nerf-y things was dancing around something simpler: a slight tweak to the classic Box-Cox power transform that makes it much more useful and stable. It's this f(x, λ) here:
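The post doesn't spell out the paper's tweak, so no attempt to reproduce it here; for context only, the classic Box-Cox power transform it builds on is:

```python
import numpy as np

# The classic Box-Cox power transform (textbook version only; the
# paper's "slight tweak" is not reproduced here):
#   f(x, lam) = (x**lam - 1) / lam   for lam != 0,
#   f(x, 0)   = log(x)               (the lam -> 0 limit).
# Defined for x > 0.
def box_cox(x, lam):
    x = np.asarray(x, dtype=float)
    if lam == 0.0:
        return np.log(x)
    return (x**lam - 1.0) / lam

print(box_cox([1.0, 2.0, 4.0], 1.0))   # [0. 1. 3.]  (lam=1 is just x - 1)
print(box_cox([1.0, 2.0, 4.0], 0.0))   # log values
```

Varying lam sweeps continuously between log-like and linear behavior, which is what makes the family attractive as a tunable curve; the instability near lam = 0 and for negative inputs is presumably what the paper's modification addresses.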
February 18, 2025 at 10:10 PM
Reposted by Matheus Gadelha
🌌🛰️🔭Wanna know which features are universal vs unique in your models and how to find them? Excited to share our preprint: "Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment"!

arxiv.org/abs/2502.03714

(1/9)
February 7, 2025 at 3:15 PM
Reposted by Matheus Gadelha
New paper about pictures: I identify trends in geometric perspective in my own drawings and photos, and compare them to how the original scenes looked. I discuss what these trends might say about art history and vision science. Published in _Art & Perception_. #visionscience
psyarxiv.com/pq8nb
February 6, 2025 at 10:58 PM
Reposted by Matheus Gadelha
"𝐑𝐚𝐝𝐢𝐚𝐧𝐭 𝐅𝐨𝐚𝐦: Real-Time Differentiable Ray Tracing"

A mesh-based 3D representation for training radiance fields from collections of images.

radfoam.github.io
arxiv.org/abs/2502.01157

Project co-led by my PhD students Shrisudhan Govindarajan and Daniel Rebain, and w/ co-advisor Kwang Moo Yi
February 5, 2025 at 6:59 PM
#CVPR2025

Is there any way in OpenReview to "mention" a reviewer in a discussion? I think reviewers get the e-mail with whatever message gets posted in the discussion that is sent to them, but they have no idea if Reviewer HjKl is them or someone else...
February 3, 2025 at 6:31 PM
I will keep repeating this until I convince everyone I work with or I will die trying.
After one more CVPR deadline, I feel compelled to share a couple of very useful LaTeX/Overleaf features that, surprisingly, are not broadly adopted. It boils down to just two very simple things:
January 23, 2025 at 6:34 PM
I think this is a great idea!
This is where a threat of desk rejection of their paper would help. It would be great if their co-authors got a warning that one of their co-authors is putting their paper at risk of a desk reject. You would quickly see a change in behaviour.
January 17, 2025 at 10:57 PM
It would *really* help if OpenReview showed how many reviews a paper already had on the reviewer assignment page
January 15, 2025 at 7:12 PM
This is a great speaker lineup and I will definitely try to attend.

I can't help but think though: if everyone is trying to stand out you will stand out by not trying to :-)

(I am obviously kidding, the workshop is about more than that, visit the webpage to learn more)
🧵 1/3 Many at #CVPR2024 & #ECCV2024 asked what would be next in our workshop series.

We're excited to announce "How to Stand Out in the Crowd?" at #CVPR2025 Nashville - our 4th community-building workshop featuring this incredible speaker lineup!

🔗 sites.google.com/view/standou...
January 14, 2025 at 6:37 AM
If you are an undergrad/MS student interested in graphics, geometry, applied math, etc, I strongly recommend SGI.

Apply!

vvvvv
Announcing SGI 2025! Undergrads and MS students: Apply for 6 weeks of paid summer geometry processing research. No experience needed: 1 week tutorials + 5 weeks of projects. Mentors are top researchers in this emerging branch of graphics/computing/math. sgi.mit.edu
January 7, 2025 at 12:33 AM
Reposted by Matheus Gadelha
Orient Anything: Learning Robust Object Orientation Estimation from Rendering 3D Models

Zehan Wang, Ziang Zhang, Tianyu P
tl;dr: train a ViT on Objaverse (800K -> 55K objects after filtering) to predict canonical orientation -> PROFIT

arxiv.org/abs/2412.18605
January 6, 2025 at 2:10 PM
Reposted by Matheus Gadelha
Physics has a set of useful numbers that you can use for quick order-of-magnitude estimation: densities of some common materials, energies in bonds, interatomic spacing, etc.
What are the equivalently useful numbers, worth memorizing for estimating economic / political things?
Some of mine:
1/2
January 2, 2025 at 1:30 AM
Reposted by Matheus Gadelha
⚠️Reconstructing sharp 3D meshes from a few unposed images is a hard and ambiguous problem.

☑️With MAtCha, we leverage a pretrained depth model to recover sharp meshes from sparse views including both foreground and background, within mins!🧵

🌐Webpage: anttwo.github.io/matcha/
December 11, 2024 at 2:59 PM