Valentin Deschaintre
@vdeschaintre.bsky.social
Doing research at Adobe in Computer Graphics/Vision/ML on appearance & content authoring and generation. I also like photography and baking, but I try to keep it under control!
https://valentin.deschaintre.fr
Sneak peek into the live event. That's the most important part of the FF, the other 2.5 hours pale in comparison.
August 19, 2025 at 12:53 PM
I was very honoured to receive one of the two Eurographics Young Researcher Awards 2025 yesterday!

This is the result of the work of many people: mentors, collaborators, students, and friends who trusted me and taught me so much along the way!
May 13, 2025 at 10:09 AM
Material selection enables all kinds of applications for downstream editing, as we show here with NeRF, 3DGS, and mesh editing and segmentation.
December 9, 2024 at 3:22 PM
Don't want to click? No worries, we propose a segmentation algorithm which segments an object into a material ID map in under 1 minute.
December 9, 2024 at 3:22 PM
This lets us select on any representation which can be queried for depth and rendered. Compared to SAM2, our model focuses on materials rather than object-level selection. However, our 3D aggregation can also be used with SAM2 to select objects if that is the desired modality.
December 9, 2024 at 3:22 PM
We adapt the SAM2 video model to materials and find that its multi-view consistency enables efficient 3D aggregation through a similarity point cloud (built in 2s), which can be queried through a KNN voting mechanism in a couple of ms.
December 9, 2024 at 3:22 PM
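For intuition, here is a minimal sketch of a distance-weighted KNN vote over a similarity point cloud, assuming the cloud and its per-point similarity scores have already been built; the helper name and the scikit-learn lookup are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_similarity_vote(cloud_xyz, cloud_sim, query_xyz, k=8):
    """Distance-weighted KNN vote over a similarity point cloud.
    cloud_xyz: (N, 3) positions, cloud_sim: (N,) similarity to the clicked
    material, query_xyz: (Q, 3) points to classify. Hypothetical helper."""
    cloud_sim = np.asarray(cloud_sim)
    nn = NearestNeighbors(n_neighbors=k).fit(cloud_xyz)
    dist, idx = nn.kneighbors(query_xyz)          # both (Q, k)
    w = 1.0 / (dist + 1e-6)                       # closer neighbours vote more
    w /= w.sum(axis=1, keepdims=True)
    return (cloud_sim[idx] * w).sum(axis=1)       # (Q,) similarity per query

# e.g. selected = knn_similarity_vote(cloud_xyz, cloud_sim, surface_pts) > 0.5
```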
🎓 We introduce SAMa! A material selection and segmentation model on 3D models in any format (3DGS, NeRF, Mesh).
Given a user click, we propose to select all regions on an object with the same material. We can also do segmentation in under a minute: mfischer-ucl.github.io/sama/
December 9, 2024 at 3:22 PM
SIGGRAPH Asia is a wrap, thanks Japan! As usual it was great (but too short) to catch up with everyone!
December 8, 2024 at 3:07 AM
Hope you could catch it!
December 5, 2024 at 4:54 AM
The paper received a Best paper award honorable mention at SIGGRAPH Asia! 🥳
Come to our session on Thursday afternoon!
December 3, 2024 at 2:07 AM
This lets our method generate procedural materials that better match the target image 🥳. We can of course apply MATch a posteriori to adjust the colour if needed, but this step is not really necessary with our RL fine-tuned model.
November 23, 2024 at 5:29 PM
The process uses our pre-trained parameter generator and samples many plausible solutions. We compare renderings of these procedural materials with image metrics to compute a "reward". It is normalized and used to weight the gradient, encouraging the generation of materials matching the image prompt.
November 23, 2024 at 5:29 PM
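A rough sketch of a reward-weighted (REINFORCE-style) update of this flavour, assuming a generator that can sample parameter sets with log-probabilities, plus a renderer and an image metric for the reward; every name here is an assumption, not the paper's code.

```python
import torch

def rl_finetune_step(generator, render, image_metric, target_image, optimizer, n_samples=16):
    """One reward-weighted update. `generator.sample` is an assumed API
    returning parameter sets and their log-probabilities."""
    params, log_probs = generator.sample(n_samples)
    with torch.no_grad():
        # Higher reward = rendering closer to the target photograph.
        rewards = torch.stack([-image_metric(render(p), target_image) for p in params])
    # Normalize the reward and use it to weight the log-likelihood gradient.
    advantage = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
    loss = -(advantage * log_probs).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return rewards.mean().item()
```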
So we propose to use Reinforcement Learning. This lets us compare the appearance of the generated material directly, and does not require GT data to train, only material photographs, including photographs of real materials!
November 23, 2024 at 5:29 PM
This builds on the MatFormer line of work: procedural graphs are serialised into 3 sequences (nodes, edges, node parameters) and generated by 3 separate transformers with cross-conditioning. In this work we focus on improving the parameter generation.
November 23, 2024 at 5:29 PM
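To make the serialisation concrete, here is a toy sketch of splitting a small graph into the three sequences; the graph and token layout are invented for illustration and do not follow the MatFormer format.

```python
# Toy procedural graph: nodes have a type and parameters, edges connect nodes.
graph = {
    "nodes": [("noise", {"scale": 4.0}), ("blur", {"radius": 2.5}), ("output", {})],
    "edges": [(0, 1), (1, 2)],
}

node_seq  = [name for name, _ in graph["nodes"]]          # ['noise', 'blur', 'output']
edge_seq  = [i for edge in graph["edges"] for i in edge]  # [0, 1, 1, 2]
param_seq = [(i, key, val)                                # [(0, 'scale', 4.0), (1, 'radius', 2.5)]
             for i, (_, params) in enumerate(graph["nodes"])
             for key, val in params.items()]

# Three transformers would generate node_seq, edge_seq and param_seq in turn,
# each cross-conditioned on the previously generated sequences.
```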
SIGGRAPH Asia paper thread 3/3!

🤖 We propose RL as a way to improve procedural material parameter generation, avoiding the ground truth data bottleneck and improving appearance matching!

Open Access: dl.acm.org/doi/10.1145/...
November 23, 2024 at 5:29 PM
We also filter the gradients to only change the most impacted parameters, avoiding small jittering in undesired directions.
November 21, 2024 at 5:49 PM
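One simple way to implement such filtering, assuming a flat parameter gradient vector; the top-k criterion here is an assumption for illustration, not necessarily the paper's rule.

```python
import torch

def filter_gradient(grad: torch.Tensor, keep: int = 3) -> torch.Tensor:
    """Zero all but the `keep` largest-magnitude entries of a flat parameter
    gradient, so only the most impacted parameters actually move."""
    top = torch.topk(grad.abs(), k=min(keep, grad.numel())).indices
    mask = torch.zeros_like(grad)
    mask[top] = 1.0
    return grad * mask
```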
When the user draws a stroke, we can optimize the parameters to have the selected points follow it in real time.
November 21, 2024 at 5:49 PM
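A hedged sketch of that interaction loop, assuming a differentiable `eval_points(params)` that returns the tracked surface point positions; the names and the Adam-based solver are illustrative, not the paper's optimizer.

```python
import torch

def follow_stroke(params, eval_points, selected_ids, stroke_target, steps=20, lr=1e-2):
    """Assumed setup: `eval_points(params)` differentiably returns the (P, 3)
    positions of tracked surface points; `stroke_target` is where the selected
    points should end up along the user stroke."""
    params = params.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        pts = eval_points(params)[selected_ids]
        loss = ((pts - stroke_target) ** 2).mean()  # pull selected points onto the stroke
        opt.zero_grad()
        loss.backward()
        # (the gradient could additionally be filtered to the most impacted parameters)
        opt.step()
    return params.detach()
```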
To understand which action relates to which procedural parameters, we define co-parameters on the primitives and modify the procedural graph to propagate this information, letting us compute the derivative of a position with respect to the procedural parameters of the shape!
November 21, 2024 at 5:49 PM
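As a toy example of such a derivative, here is autodiff through a single primitive parameter (a sphere radius, purely illustrative and unrelated to the paper's co-parameter machinery).

```python
import torch
from torch.autograd.functional import jacobian

# Toy primitive: a surface point on a sphere of radius r, centre c, direction d.
c = torch.tensor([0.0, 0.0, 0.0])
d = torch.tensor([0.0, 1.0, 0.0])

def point_on_sphere(r):
    return c + r * d  # position of the tracked point for radius r

J = jacobian(point_on_sphere, torch.tensor(1.0))
# J == d: a small change of the radius moves this point along d by the same amount.
```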
So we want to let the user click on points they want to move or keep stationary, and let them interact with the procedural primitives' parameters directly.
November 21, 2024 at 5:49 PM
SIGGRAPH Asia (still) coming up! Second paper thread!

"Direct Manipulation of Procedural Implicit Surfaces" where we explore how to create a WYSIWYG editing interface for procedural representations, avoiding awkward sliders!
eliemichel.github.io/SdfManipulat...
November 21, 2024 at 5:49 PM
This is a TOG paper that we will present at SIGGRAPH Asia in the "Appearance Modeling" session on Friday the 6th of December at 1pm!

The work is a collaboration with colleagues at Adobe Research and was led by Giuseppe (gvecchio.com), check him out!
November 20, 2024 at 4:43 PM
We also enable high-resolution (up to 4K) generation through multi-scale iterative generation and tile merging.
November 20, 2024 at 4:43 PM
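A generic sketch of merging overlapping tiles with a smooth blending mask; this is one common way to do it and not necessarily the paper's merging scheme.

```python
import torch

def merge_tiles(tiles, positions, out_hw, tile_hw):
    """Blend overlapping (3, th, tw) tiles into one large map, weighting
    each tile by a mask that fades towards its borders (illustrative)."""
    H, W = out_hw
    th, tw = tile_hw
    out = torch.zeros(3, H, W)
    weight = torch.zeros(1, H, W)
    # Separable linear ramp that peaks at the tile centre and fades at borders.
    wy = 1.0 - torch.linspace(-1, 1, th).abs()
    wx = 1.0 - torch.linspace(-1, 1, tw).abs()
    mask = (wy[:, None] * wx[None, :]).clamp_min(1e-3)[None]  # (1, th, tw)
    for tile, (y, x) in zip(tiles, positions):
        out[:, y:y + th, x:x + tw] += tile * mask
        weight[:, y:y + th, x:x + tw] += mask
    return out / weight.clamp_min(1e-6)
```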
We make the results tileable at inference time using noise rolling, where at each diffusion step we roll the noise tensor, forcing the generation of a tileable material.
November 20, 2024 at 4:43 PM
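A minimal sketch of the rolling itself on a latent tensor of shape (B, C, H, W); the random shift and the surrounding loop are illustrative, with `denoise_step` an assumed placeholder.

```python
import torch

def roll_latents(latents: torch.Tensor) -> torch.Tensor:
    """Circularly shift the latent tensor so the tile borders land in the
    interior, pushing the denoiser towards seam-free, tileable content."""
    h, w = latents.shape[-2:]
    dy, dx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    return torch.roll(latents, shifts=(dy, dx), dims=(-2, -1))

# Sketch of use inside a denoising loop (denoise_step is assumed):
# for t in timesteps:
#     latents = roll_latents(latents)
#     latents = denoise_step(latents, t)
```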
This relies on the combination of a Latent Diffusion Model trained from scratch for material generation and a ControlNet architecture for local conditioning, all trained on synthetic data.
November 20, 2024 at 4:43 PM
SIGGRAPH Asia in two weeks, first paper thread on 🦋!
🎓 1/3 ControlMat, a diffusion model for material acquisition.
gvecchio.com/controlmat/

Given an image, we generate a corresponding tileable, high-resolution, relightable material!
November 20, 2024 at 4:43 PM