Now at Reality Labs, Meta
ex-EagleDynamics, ex-WellDone Games
mishok43.com
Please join us, it'll be fun! 😊
🏠Room 208-209
🕜10:30-11:30
Super honored - my first SIGGRAPH!
Let’s discuss neural & real-time rendering, grab a coffee, or just hang out - feel free to drop me a DM
I'm in the Redmond office, but I've already visited Seattle. If you wanna grab ☕, my DMs are open
3SPP vs 1SPP+25 Neural Resamples
I believe we should invest more resources in faster-adapting neural caches and more aggressive quantization, so we can bring this to production real-time rendering
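To make the quantization point concrete: a minimal sketch of what "more aggressive quantization" might look like for one layer of a tiny cache MLP, using symmetric per-tensor int8 weights. Names and shapes are my own illustration, not NIRC code:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization of a weight matrix."""
    scale = np.abs(w).max() / 127.0 + 1e-12   # avoid div-by-zero
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Hypothetical 64x64 hidden layer of a small radiance-cache MLP
w = np.random.default_rng(0).normal(scale=0.1, size=(64, 64)).astype(np.float32)
q, s = quantize_int8(w)
print("max abs quantization error:", np.abs(w - dequantize_int8(q, s)).max())
```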
A while back I got a bit of time to run similar experiments on top of our NIRC, as each additional neural sample costs just pure tensor FLOPs (~1.5 ms on a 4080)
1 spp vs 1 spp + 25 cache resamples
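For context, a rough sketch of the "1 spp + N cache resamples" idea: trace one real sample, then average many cheap BSDF-importance-sampled cache queries. All callables here are hypothetical stand-ins, and the 50/50 blend is just an illustrative combination, not NIRC's actual estimator:

```python
import numpy as np

def estimate_radiance(trace_one_spp, cache, sample_bsdf, eval_bsdf,
                      n_resamples=25, rng=None):
    """Blend 1 real path-traced sample with N cheap neural-cache resamples."""
    rng = rng or np.random.default_rng(0)
    L_pt = trace_one_spp(rng)               # 1 spp: unbiased but noisy
    # Cache resamples cost only tensor FLOPs: no extra rays are traced
    L_cache = 0.0
    for _ in range(n_resamples):
        wi, pdf = sample_bsdf(rng)          # importance-sample a direction
        L_cache += cache(wi) * eval_bsdf(wi) / pdf
    return 0.5 * L_pt + 0.5 * L_cache / n_resamples
```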
Classics 😊
Huge thanks to everyone who supported me along the way, and to the EG chairs, committee, and organizers for this recognition
But yeah... variance may increase next to foliage, brush, trees🌿 — still the eternal pain in CG 😅
It works like (neural) control variates (NCV), but doesn't introduce any architectural constraints! No need to train Normalizing Flows on the fly
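For reference, the classic control-variate identity this mirrors, with f the true integrand, g the cache, and β a free weight (notation mine, not from the paper):

```latex
\[
  \mathbb{E}[f(X)]
  \;=\;
  \underbrace{\mathbb{E}\bigl[f(X) - \beta\, g(X)\bigr]}_{\text{few real samples}}
  \;+\;
  \beta\,\underbrace{\mathbb{E}[g(X)]}_{\text{many cheap cache resamples}}
\]
```

When g ≈ f, the first term has low variance and needs only a few real samples, while the second term's noise falls off roughly as 1/N with the number of cache resamples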
Downside: not all scenes benefit from it (esp. with a high-variance MC estimator; needs further research)
But it scales pretty well with the number of neural samples!
1. Use hash-grid on surface point → get latent light rep
2. Sample incoming dirs via BSDF
3. Decode radiance using MLPs (per-dir)
The more directions, the more we leverage GPU tensor FLOPs 💥 (rough sketch below)
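A toy version of those three steps in code; the shapes, names, and batching are my guesses for illustration, not the actual implementation:

```python
import numpy as np

def decode_radiance(hash_grid, mlp, x, sample_bsdf, n_dirs=25, rng=None):
    """Toy version of the three-step pipeline (names/shapes are mine).

    1. hash_grid(x): multiresolution hash encoding -> latent light rep
    2. sample n_dirs incoming directions from the BSDF
    3. decode radiance for all directions in one batched MLP call
    """
    rng = rng or np.random.default_rng(0)
    latent = hash_grid(x)                                          # (F,)
    dirs = np.stack([sample_bsdf(rng)[0] for _ in range(n_dirs)])  # (N, 3)
    # Batching latent+direction pairs into one matmul-heavy call is what
    # turns extra directions into tensor-core work instead of extra rays
    inputs = np.concatenate(
        [np.broadcast_to(latent, (n_dirs, latent.size)), dirs], axis=1)
    return mlp(inputs)                                             # (N, 3) RGB
```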
– Up to 70% of the time spent in iNGP → memory-bound
– Deeper MLPs ≠ significantly better quality → poor FLOPs scaling
– Biased for specular & detailed BSDFs with normal maps