thibaultgroueix.bsky.social
@thibaultgroueix.bsky.social
Likewise, I am a big fan!
May 19, 2025 at 8:08 PM
This work was led by Amir Barda, in collaboration with Matheus @gadelha.bsky.social, Noam Aigerman @noamiko.bsky.social, Vova Kim @vovakim.bsky.social, and Amit Bermano.
Check out our paper for more details 📜: arxiv.org/abs/2412.00518
7/end
December 4, 2024 at 1:55 AM
In the end, with the editing tool becoming FAST 🐇, 3D editing becomes really FUN to play with! 6/
December 4, 2024 at 1:49 AM
Do we teach inpainting to a multiview backbone 🤔? Or do we teach multiview to an inpainting backbone? We show that the latter works much better: multiview consistency is easier to learn than inpainting. 5/
December 4, 2024 at 1:49 AM
Now, all we need is a multiview inpainting model 😅. How do we train one? Data is always king. We know inpainting masks can’t be random; they need to be realistic, close to what users would actually draw. We propose 3 strategies, defined in 3D, to create masks on Objaverse that closely resemble what a user would do. 4/
December 4, 2024 at 1:49 AM
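The post above doesn’t spell out the three masking strategies, but the operation they share is turning a 3D edit region into consistent per-view 2D masks. Here is a minimal sketch of that idea, assuming a simple pinhole camera and a box-shaped edit region; all function names and the bounding-box rasterization shortcut are illustrative, not the paper’s code:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points to pixel coordinates with a pinhole camera."""
    cam = (R @ points_3d.T).T + t          # world frame -> camera frame
    uv = (K @ cam.T).T                     # camera frame -> image plane
    return uv[:, :2] / uv[:, 2:3]          # perspective divide

def box_mask(corners_3d, K, R, t, h, w):
    """Rasterize the 2D bounding box of projected 3D corners as a binary mask.

    A real system would rasterize the region's silhouette per view; the
    axis-aligned box here is a deliberate simplification.
    """
    uv = project_points(corners_3d, K, R, t)
    u0, v0 = np.clip(uv.min(axis=0), 0, [w - 1, h - 1]).astype(int)
    u1, v1 = np.clip(uv.max(axis=0), 0, [w - 1, h - 1]).astype(int)
    mask = np.zeros((h, w), dtype=bool)
    mask[v0:v1 + 1, u0:u1 + 1] = True
    return mask
```

Running `box_mask` once per camera yields a set of geometrically consistent masks over the views, which is the key property a multiview inpainter needs at train and test time.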
However, SDS remains slow and brittle 🐢💥. Instead, we propose to cast the problem of 3D inpainting as 2D *multiview* inpainting 📸-📸-📸-📸. This is possible thanks to off-the-shelf pre-trained transformer models (LRM), which reconstruct multiview images back into meshes, Gsplats, and NeRFs. Great! 3/
December 4, 2024 at 1:49 AM
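The loop described above (render multiview images, inpaint the masked regions in 2D, lift the edited views back to 3D with a feed-forward reconstructor) can be sketched as follows. Both the trivial mean-color fill and every function name are hypothetical stand-ins, not the paper’s models:

```python
import numpy as np

def inpaint_views(views, masks):
    """Stand-in for a 2D multiview inpainting model: fill masked pixels with
    each view's mean unmasked color. A real system would run a diffusion
    model fine-tuned for multiview-consistent inpainting instead."""
    out = []
    for img, m in zip(views, masks):
        filled = img.copy()
        filled[m] = img[~m].mean(axis=0)  # hypothetical trivial fill
        out.append(filled)
    return out

def edit_3d(render_fn, reconstruct_fn, masks):
    """Overall loop: render the current asset to multiview images, inpaint
    the masked regions in 2D, then lift the edited views back to 3D with a
    feed-forward reconstructor (e.g. an LRM-style model)."""
    views = render_fn()
    edited = inpaint_views(views, masks)
    return reconstruct_fn(edited)
```

Because the reconstructor is feed-forward rather than optimization-based like SDS, one pass through this loop is what makes the editing fast.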
There have been previous attempts to tackle generative mesh editing. Check out Amir Barda’s talk on MagicClay, which uses SDS, this Thursday at SIGGRAPH Asia in Japan 🇯🇵. 2/
December 4, 2024 at 1:49 AM