Ben Beilharz
@ben.graphics
I'm doing my PhD on physically based (differentiable) rendering and material appearance modeling/perception/capture in @tsawallis.bsky.social's Perception Lab.

I enjoy photography, animation/VFX, working on my renderer, languages and contributing to Blender.
Uh-oh:
Spiraling down the #nixOS road.

After watching this video (www.youtube.com/watch?v=dsl_...), I thought I should also have dinner. Dinner with the penguins and the snowflakes.

Let's see how far I get with this. Not sure I'm ready.
September 29, 2025 at 3:34 PM
Honestly flabbergasted by all the amazing work presented at #BCon25. It’s nice to see what everyone makes out of Blender and how far contributions can go. Definitely one of the nicest communities to be around. Superb talks, nice location, great people. Can’t wait to see you all next year again 🧡🎉
September 21, 2025 at 6:58 AM
First @blender.org Conference in person. Let’s gooooooo!
September 17, 2025 at 8:49 AM
It's a wrap! Thanks for the ride @blender.org!
I had a great time and learned a lot of stuff. Thanks to Omar and Habib for the mentoring, and I will definitely continue to contribute to Blender. :)

I'll be around at BlenderCon this week and happy to chat :)
September 15, 2025 at 1:34 PM
I was today years old when I discovered that C++ also has alternative tokens:

- `a and b` ≡ `a && b`
- `a or b` ≡ `a || b`
- `not a` ≡ `!a`
September 3, 2025 at 8:37 PM
Meet Higgins!
Not my goofball but a friend’s favorite 😬
August 15, 2025 at 5:29 AM
A minimal example of how to build your scene with MSD to render: Rising Sun (昇る太陽)
August 11, 2025 at 9:59 AM
Not sure how many this will affect, but I assume it will kill a lot of the rebuttals. #neurips
July 27, 2025 at 9:34 AM
@wetafxofficial.bsky.social and James Cameron bring the next #Avatar sequel to theatres this December 19th! Get ready to experience the Ash people.
July 22, 2025 at 1:25 PM
To whom it may concern
July 12, 2025 at 7:16 PM
First @blender.org contribution merged into main! 🥳
It's a minor one, but a start for many more to come!
April 3, 2025 at 2:06 PM
Please go ahead and read the full reply.

I would suggest not quoting a reply without providing any context. Thanks.

Furthermore, it is theft, because the training data has never been deleted and the model has never been taken down. It is used commercially without the artists' consent.
January 5, 2025 at 8:16 AM
Hao & Romero 2024 :: Meshtron
Looks pretty sick.

Paper: arxiv.org/abs/2412.09548

Check their project's website: research.nvidia.com/labs/dir/mes...

(Video belongs to the original authors; shortened due to upload limits.)
December 16, 2024 at 1:38 PM
Images are encoded with DINO and the features are decoded onto a triplane. Features sampled for the different viewpoints are fed through multiple MLPs (the SDF, Deformation, and Weight outputs are propagated to FlexiCubes). Together with the Albedo MLP, the network predicts a mesh.
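The triplane lookup in that pipeline can be sketched roughly like this. A simplified illustration, not the paper's code: the plane names, shapes, and nearest-neighbor sampling are my assumptions (real models use learned grids and bilinear sampling):

```python
import numpy as np

def sample_triplane(planes, xyz):
    """Look up triplane features for 3D points.
    planes: dict of three (R, R, C) feature grids for the xy/xz/yz planes.
    xyz: (N, 3) points in [-1, 1]^3. Returns (N, 3*C) concatenated features,
    which the MLP heads (SDF, deformation, weights, albedo) would consume."""
    R = planes["xy"].shape[0]
    # map [-1, 1] coordinates to integer grid indices (nearest neighbor)
    idx = np.clip(((xyz + 1) / 2 * (R - 1)).round().astype(int), 0, R - 1)
    fxy = planes["xy"][idx[:, 0], idx[:, 1]]   # project onto xy plane
    fxz = planes["xz"][idx[:, 0], idx[:, 2]]   # project onto xz plane
    fyz = planes["yz"][idx[:, 1], idx[:, 2]]   # project onto yz plane
    return np.concatenate([fxy, fxz, fyz], axis=1)

# toy example: 16x16 grids with 4 channels, 5 query points
rng = np.random.default_rng(0)
planes = {k: rng.normal(size=(16, 16, 4)) for k in ("xy", "xz", "yz")}
feats = sample_triplane(planes, rng.uniform(-1, 1, size=(5, 3)))
print(feats.shape)  # (5, 12)
```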
December 12, 2024 at 2:39 PM
Ge & Lin 2024 :: Photometric Stereo Based Large Reconstruction Model

arxiv.org/pdf/2412.07371

PRM is a photometric stereo scene reconstruction model based on a two-stage optimization built on InstantMesh: the first stage uses triplanes and volume rendering, followed by a FlexiCubes stage.
December 12, 2024 at 2:23 PM
The authors treat images as puzzles made of many pieces. Reference and test images are embedded with a SqueezeNet; cosine similarity is then computed between image-patch features from layers 2-4 to build a similarity map for each image, which is rescaled to the image's original dimensions.
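The patch-matching step can be sketched like this. A minimal sketch under my own assumptions (plain NumPy on pre-extracted features; the shapes and the best-match reduction are illustrative, not the authors' implementation):

```python
import numpy as np

def patch_similarity(ref_feats, test_feats):
    """Cosine similarity between each test patch and its best-matching
    reference patch. ref_feats: (N, C), test_feats: (M, C) — rows are
    flattened spatial patches from some network layer, C channels.
    Returns (M,) best-match scores per test patch."""
    # L2-normalize so that dot products become cosine similarities
    ref = ref_feats / np.linalg.norm(ref_feats, axis=1, keepdims=True)
    test = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    sim = test @ ref.T          # (M, N) all-pairs cosine similarity
    return sim.max(axis=1)      # keep the best reference match per patch

# toy example: 4 test patches vs 3 reference patches, 8 channels
rng = np.random.default_rng(0)
scores = patch_similarity(rng.normal(size=(3, 8)), rng.normal(size=(4, 8)))
print(scores.shape)  # (4,)
```

A map like this, computed per layer and rescaled to image resolution, is what would then flag artifact regions.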
November 29, 2024 at 1:47 PM
Another day, another paper.

Hermann et al. 2024 :: Puzzle Similarity: A Perceptually-guided No-Reference Metric for Artifact Detection in 3D Scene Reconstructions

arxiv.org/abs/2411.17489

The authors propose a new no-reference metric for 3D scene reconstruction artifacts & an annotated dataset.
November 29, 2024 at 1:47 PM
Fresh outta Vision Science x Computer Graphics preprint press :: Perceptually Optimized Super Resolution (Karpenko et al. 2024)

arxiv.org/abs/2411.17513

SR dynamically guided toward perceptually sensitive image regions, using half the FLOPs of prior methods for a result indistinguishable from them.
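The idea of spending compute only where viewers are sensitive can be sketched as follows. A toy illustration, not the paper's method: the sensitivity mask is given as input, and a box blur stands in for the "expensive" SR branch:

```python
import numpy as np

def selective_upscale(img, sensitivity, factor=2, thresh=0.5):
    """Toy perceptually guided SR: run the 'expensive' branch only where
    sensitivity is high, a cheap nearest-neighbor upscale elsewhere.
    img, sensitivity: (H, W) arrays, sensitivity in [0, 1]."""
    cheap = np.kron(img, np.ones((factor, factor)))   # nearest-neighbor path
    # stand-in for an expensive SR network: cheap path plus a 3x3 box blur
    k = np.ones((3, 3)) / 9.0
    pad = np.pad(cheap, 1, mode="edge")
    expensive = sum(pad[i:i + cheap.shape[0], j:j + cheap.shape[1]] * k[i, j]
                    for i in range(3) for j in range(3))
    # upscale the mask to output resolution and blend the two branches
    mask = np.kron(sensitivity, np.ones((factor, factor))) > thresh
    return np.where(mask, expensive, cheap)

img = np.arange(16.0).reshape(4, 4)
sens = np.zeros((4, 4))
sens[:2] = 1.0                       # pretend the top half is "sensitive"
out = selective_upscale(img, sens)
print(out.shape)  # (8, 8)
```

The FLOP savings come from the expensive branch running only under the mask, which is where the dynamic guidance enters.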
November 28, 2024 at 9:25 AM