Mike Dereviannykh
@mishok43.bsky.social
CG&AI PhD Student @KIT
Now at Reality Labs, Meta
ex-EagleDynamics, ex-WellDone Games

mishok43.com
In less than an hour we'll present "Neural Two-Level Monte Carlo Real-Time Rendering" at SIGGRAPH 2025 in the "Best of Eurographics" session

Please join us, it'll be fun! 😊

🏠Room 208-209
🕜10:30-11:30
August 13, 2025 at 4:52 PM
Big news! The Eurographics Association invited us to present our work "Neural Two-Level Monte Carlo Real-Time Rendering" at #SIGGRAPH2025 🎉
Super honored - my first SIGGRAPH!

Let’s discuss neural & real-time rendering, grab a coffee, or just hang out - feel free to send a DM
July 28, 2025 at 4:53 AM
Just joined Reality Labs at Meta to do some real-time neural rendering research. Unfortunately without a 9-digit compensation package, but WIP 😁

I'm in the Redmond office, but have already visited Seattle. If you wanna grab ☕ - DMs are open
July 26, 2025 at 8:37 PM
To be completely honest, it's an equal-time comparison:

3SPP vs 1SPP+25 Neural Resamples

I believe we should invest more resources in faster adaptivity of neural caches and more aggressive quantization, so we can deliver this to production real-time rendering
June 25, 2025 at 12:28 PM
It's cool work that deserves a lot of attention from the real-time rendering community!

Previously I had a bit of time to run similar experiments on top of our NIRC, since each additional neural sample costs just pure tensor FLOPs (~1.5 ms on a 4080)

1 spp vs 1 spp + 25 cache resamples
June 25, 2025 at 12:26 PM
I found it cool to hear the motivation for High-Frequency Learnable Encodings for NRC/NIRC/Neural Ambient Occlusion from the perspective of Kernel Machines!

Classics 😊
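
For the curious: a minimal PyTorch sketch of what such a learnable high-frequency encoding could look like. The class name and defaults are mine, not from the talk; with the frequencies frozen this reduces to random Fourier features, which approximate a shift-invariant kernel - exactly the kernel-machine connection.

```python
import math
import torch

class LearnableFourierEncoding(torch.nn.Module):
    """sin/cos features with trainable frequencies (minimal sketch)."""
    def __init__(self, in_dim=3, n_freqs=64, init_scale=10.0):
        super().__init__()
        # With frozen random frequencies this is exactly random Fourier
        # features, approximating an RBF-like kernel; making them
        # trainable lets the "kernel" adapt to the signal.
        self.freqs = torch.nn.Parameter(init_scale * torch.randn(in_dim, n_freqs))

    def forward(self, x):                                # x: [B, in_dim]
        proj = 2.0 * math.pi * (x @ self.freqs)          # [B, n_freqs]
        return torch.cat([proj.sin(), proj.cos()], dim=-1)  # [B, 2*n_freqs]
```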
June 25, 2025 at 12:23 PM
We've received an "Honorable Mention" at Eurographics 2025 in London for our work "Neural Two-Level Monte Carlo Real-Time Rendering"! 🥳

Huge thanks to everyone who supported me along the way, and to the EG chairs, committee, and organizers for this recognition
May 16, 2025 at 2:13 PM
That's what I mean by the lack of compute
May 15, 2025 at 11:00 AM
In equal-time comparisons, NIRC achieves surprisingly cool results in both the biased and unbiased cases

But yeah... variance may increase next to foliage, brush, trees🌿 — still the eternal pain in CG 😅
May 11, 2025 at 3:55 PM
Using Two-Level Monte Carlo, we can debias NIRC while still cutting variance, thanks to fast cache sampling: dozens of neural samples for the cost of 1 real path

It works like (neural) control variates, but doesn't introduce any architectural constraints! No need to train Normalizing Flows on the fly
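
A minimal sketch of that two-level estimator, assuming placeholder functions sample_bsdf, cache_L and real_L (my names, not the paper's API), with BSDF/cosine weighting folded into the returned values:

```python
import numpy as np

def two_level_estimate(x, sample_bsdf, cache_L, real_L,
                       n_cache=25, n_real=1, rng=None):
    """Unbiased two-level MC: many cheap cache samples + a small
    correction term estimated with a few real paths."""
    rng = rng or np.random.default_rng()

    # Level 1: integrate the cheap neural cache with many samples.
    dirs, pdfs = sample_bsdf(x, n_cache, rng)
    cache_term = np.mean(cache_L(x, dirs) / pdfs)

    # Level 2: estimate the cache's bias with a few real paths and
    # subtract it. In expectation the cache contribution cancels,
    # like a control variate, so the sum is unbiased regardless of
    # the cache architecture.
    dirs, pdfs = sample_bsdf(x, n_real, rng)
    correction = np.mean((real_L(x, dirs) - cache_L(x, dirs)) / pdfs)

    return cache_term + correction
```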
May 11, 2025 at 3:55 PM
Another positive scalability property: deeper MLPs do improve quality here

Downside: not all scenes benefit from it (esp. with high-variance MC estimators; this needs further research)
May 11, 2025 at 3:55 PM
And we get basically classical Monte Carlo integration, but over the neural domain!

It scales pretty well with the number of neural samples!
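
In my own notation (hedged, not verbatim from the paper), the cache term is itself a plain MC estimate over BSDF-sampled directions:

```latex
% MC integration of the neural cache \hat{L}_\theta over
% BSDF-sampled directions \omega_k \sim p:
\int_{\Omega} \hat{L}_\theta(\omega)\, f_s(\omega)\, \cos\theta_\omega \,\mathrm{d}\omega
\;\approx\; \frac{1}{N} \sum_{k=1}^{N}
\frac{\hat{L}_\theta(\omega_k)\, f_s(\omega_k)\, \cos\theta_{\omega_k}}{p(\omega_k)}
% Variance falls as 1/N, and each extra neural sample is almost
% pure tensor FLOPs, hence the good scaling.
```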
May 11, 2025 at 3:55 PM
NIRC amortizes iNGP costs via task reformulation: from outgoing to incident radiance

1. Query the hash-grid at the surface point → get a latent light representation
2. Sample incoming directions via the BSDF
3. Decode radiance using MLPs (per direction)

The more directions, the more we leverage GPU tensor FLOPs 💥 (see the sketch below)
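
An illustrative PyTorch sketch of this amortization, assuming hypothetical hash_grid (an iNGP-style encoder) and decoder (a small MLP) modules - names are mine, not the paper's:

```python
import torch

def nirc_query(hash_grid, decoder, x, dirs):
    """Incident radiance for many directions at each surface point."""
    # Step 1: one memory-bound hash-grid lookup per point x, giving a
    # latent representation of the local light field.    x: [B, 3]
    latent = hash_grid(x)                               # [B, D]

    # Steps 2-3: decode every BSDF-sampled direction with the MLP.
    # The latent is broadcast, so each extra direction costs only
    # tensor FLOPs (compute-bound, no extra hash-grid traffic).
    n = dirs.shape[1]                                   # dirs: [B, N, 3]
    latent = latent.unsqueeze(1).expand(-1, n, -1)      # [B, N, D]
    return decoder(torch.cat([latent, dirs], dim=-1))   # [B, N, 3]
```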
May 11, 2025 at 3:55 PM
Inspired by NRC + iNGP’s adaptivity from the amazing Thomas Müller, Christoph Schied, Jan Novák, Alex Evans et al., but we found key limits:
– Up to 70% of the time spent on iNGP → memory-bound
– Deeper MLPs ≠ significantly better quality → poor FLOPs scaling
– Biased for specular & detailed BSDFs with normal maps
May 11, 2025 at 3:55 PM