Florian Hahlbohm
@fhahlbohm.bsky.social
PhD student, Computer Graphics Lab, TU Braunschweig.
Radiance Fields and Point Rendering.
Webpage: https://fhahlbohm.github.io/
"DaD's a pretty good keypoint detector, probably the best." Nice one 😂
March 10, 2025 at 7:56 AM
We also provide a variety of data loaders and camera model implementations, as well as utilities for optimization and visualization.
March 8, 2025 at 11:57 AM
Each method defines a Trainer, Model, and Renderer class that extend the respective base classes. Many of the current methods also add custom CUDA extensions or a dedicated loss class.
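For readers new to the codebase, here is a minimal sketch of what that structure implies; the class names, signatures, and stub base classes below are illustrative stand-ins, not NeRFICG's actual API:

```python
# Illustrative sketch of the per-method structure described above.
# Base classes are stand-ins; NeRFICG's real base classes and signatures differ.

class BaseModel:
    """Stand-in for the framework's model base class."""
    def __init__(self, config):
        self.config = config

class BaseRenderer:
    """Stand-in for the renderer base class."""
    def render(self, camera, model):
        raise NotImplementedError

class BaseTrainer:
    """Stand-in for the trainer base class."""
    def training_step(self, batch, model, renderer):
        raise NotImplementedError

class MyModel(BaseModel):
    """Holds the scene representation, e.g., per-Gaussian parameters."""

class MyRenderer(BaseRenderer):
    def render(self, camera, model):
        # Rasterize or ray-cast the model from the given camera;
        # the heavy lifting often lives in a custom CUDA extension.
        ...

class MyTrainer(BaseTrainer):
    def training_step(self, batch, model, renderer):
        # Render the batch's view, compare against ground truth,
        # and return the loss for the optimizer.
        ...
```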
March 8, 2025 at 11:57 AM
NeRFICG is a research-focused framework for developing novel view synthesis methods. Shoutout to my colleague Moritz Kappel, who is responsible for most of the underlying architecture! We think NeRFICG is a decent starting point for any PyTorch-based graphics/vision project.
March 8, 2025 at 11:57 AM
Further discussion and ideas for where things could be improved can be found in our paper and the "Additional Notes" in our GitHub repository.

The remainder is on our framework NeRFICG: github.com/nerficg-proj...
NeRFICG
A flexible PyTorch framework for simple and efficient implementation of neural radiance fields and rasterization-based view synthesis methods.
github.com
March 8, 2025 at 11:57 AM
Bluesky did not let me put two videos in the same post, so here's the OIT video.
March 8, 2025 at 11:57 AM
An interesting observation we had: OIT (enabled by setting "Blend Mode" to 3 in the config) seems to help background reconstruction and overall densification. The videos show the first 3K training iterations using hybrid vs. order-independent transparency.
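To make the comparison concrete, here is a toy per-pixel version of the two compositing schemes. It uses a simple weighted-average OIT formulation, which may differ in detail from what HTGS actually implements:

```python
import torch

# Toy per-pixel compositing: three fragments, sorted front to back.
colors = torch.tensor([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0]])
alphas = torch.tensor([0.5, 0.5, 0.5])

# Ordered alpha blending: each fragment is weighted by the transmittance
# of all fragments in front of it, so the order matters.
transmittance = torch.cumprod(
    torch.cat([torch.ones(1), 1.0 - alphas[:-1]]), dim=0)
ordered = ((transmittance * alphas).unsqueeze(-1) * colors).sum(dim=0)

# Weighted-average OIT: fragments contribute independently of their order,
# scaled by the pixel's total coverage -- no sorting required.
coverage = 1.0 - torch.prod(1.0 - alphas)
oit = (alphas.unsqueeze(-1) * colors).sum(dim=0) / alphas.sum() * coverage

print(ordered)  # tensor([0.5000, 0.2500, 0.1250])
print(oit)      # tensor([0.2917, 0.2917, 0.2917])
```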
March 8, 2025 at 11:57 AM
Note that the GUI has a non-negligible impact on frame rate, as it is Python-based, so you won't see maximum performance even after turning off v-sync. It is also Linux-only, but my colleague Timon Scholz recently started working on a C++ version that also supports Windows.
March 8, 2025 at 11:57 AM
Btw, all visualizations in this thread use our perspective-correct approach for rendering 3D Gaussians. It is based on ray-casting and can be implemented efficiently. However, the high frame rates reported in our paper are due to the hybrid transparency approach.
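For intuition on the ray-casting part: a 3D Gaussian with mean μ and covariance Σ has a closed-form maximum along a ray o + td, which is what makes per-ray evaluation cheap. This is the standard result in my own notation, not necessarily the exact formulation from the paper:

```latex
t^{\ast}
= \arg\max_{t}\; \exp\!\Big(-\tfrac{1}{2}(\mathbf{o}+t\mathbf{d}-\boldsymbol{\mu})^{\top}\Sigma^{-1}(\mathbf{o}+t\mathbf{d}-\boldsymbol{\mu})\Big)
= \frac{\mathbf{d}^{\top}\Sigma^{-1}(\boldsymbol{\mu}-\mathbf{o})}{\mathbf{d}^{\top}\Sigma^{-1}\mathbf{d}}
```

The Gaussian's contribution to the pixel is then evaluated at t* instead of being approximated by a 2D projection.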
March 8, 2025 at 11:57 AM
Here are examples using (0) hybrid transparency with K=16, (1) alpha blending of the first 4 fragments per pixel, (2) alpha blending in "global" depth ordering, and (3) order-independent transparency. The model was trained using the same settings as in (0).
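For reference, hybrid transparency blends a sorted core of the K nearest fragments exactly and collapses everything behind them into an order-independent tail. Schematically, in my notation and glossing over implementation details:

```latex
C \;\approx\; \underbrace{\sum_{i=1}^{K} T_i\,\alpha_i\,c_i}_{\text{core: exact}}
\;+\; T_{K+1}\,\underbrace{\frac{\sum_{i>K}\alpha_i c_i}{\sum_{i>K}\alpha_i}\Big(1-\prod_{i>K}(1-\alpha_i)\Big)}_{\text{tail: order-independent}},
\qquad T_i = \prod_{j<i}(1-\alpha_j)
```

Roughly speaking, modes (1), (2), and (3) correspond to keeping only a small core, sorting everything, and keeping only the tail, respectively.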
March 8, 2025 at 11:57 AM
You can also modify the "Blend Mode" (see the README on GitHub) and the core size K for the blending modes where it applies. To reduce compile times, we only compile kernels for K in [1, 2, 4, 8, 16, 32] and "round down" for other values (e.g., 12 -> 8).
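The rounding itself is trivial; something along these lines (my sketch, the repo's actual selection logic may look different):

```python
# Pick the largest compiled kernel size that does not exceed the requested K.
COMPILED_KS = (1, 2, 4, 8, 16, 32)

def effective_core_size(k: int) -> int:
    return max((c for c in COMPILED_KS if c <= k), default=1)

assert effective_core_size(12) == 8    # the example from the post
assert effective_core_size(32) == 32
assert effective_core_size(100) == 32  # values above 32 clamp to the largest kernel
```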
March 8, 2025 at 11:57 AM
Via the "Viewer Config" (F3), you can switch to rendering depth maps, and expanding the advanced renderer config lets you switch between expected (shown here) and median depth.
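The two variants follow the usual conventions in the splatting literature (my summary; double-check the code for HTGS's exact definitions): expected depth is the blending-weighted mean of the per-fragment depths, while median depth returns the depth at which accumulated opacity first crosses one half:

```latex
D_{\text{expected}} = \sum_i T_i\,\alpha_i\,d_i,
\qquad
D_{\text{median}} = d_k,\;\; k = \min\Big\{\, i : \textstyle\sum_{j\le i} T_j\,\alpha_j > \tfrac{1}{2} \,\Big\},
\qquad T_i = \prod_{j<i}(1-\alpha_j)
```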
March 8, 2025 at 11:57 AM
Don't get confused by the "Time" stuff, which is for dynamic scenes reconstructed by methods such as our recent D-NPC: github.com/MoritzKappel...

Also, HTGS currently does not support changing the background color or camera models other than a distortion-free "Perspective" model.
GitHub - MoritzKappel/D-NPC: Official code release for "D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video".
github.com
March 8, 2025 at 11:57 AM
By modifying the "Principal Point" and/or "Focal Length" you can create fun images like the one below. You can even do this while watching your Gaussians train if you set TRAINING.GUI.ACTIVATE to true in the config file.

And yes, you could in theory train on images like this.
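What the two sliders change is just the standard pinhole projection; here is a toy version to play with (textbook camera model, not code from the HTGS viewer):

```python
import torch

def project(points, fx, fy, cx, cy):
    """Pinhole projection: (N, 3) camera-space points with z > 0 -> (N, 2) pixels."""
    x, y, z = points.unbind(dim=-1)
    return torch.stack([fx * x / z + cx, fy * y / z + cy], dim=-1)

p = torch.tensor([[0.2, -0.1, 1.0]])
print(project(p, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0))  # nominal view
print(project(p, fx=500.0,  fy=1000.0, cx=640.0, cy=360.0))  # halved fx: anamorphic squeeze
print(project(p, fx=1000.0, fy=1000.0, cx=200.0, cy=360.0))  # shifted principal point
```

Shifting the principal point slides the image off-center (keystone-like framing), while scaling the focal lengths independently stretches it, which is where the fun images come from.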
March 8, 2025 at 11:57 AM
Let's start with the GUI features you might want to try with HTGS. If you open the "Camera Config" panel (F4), you can switch between "Orbital" and "Walking" controls. You can also modify the near/far planes.
March 8, 2025 at 11:57 AM
Many thanks to my co-authors Fabian Friederichs, @timweyrich.bsky.social, @linusfranke.bsky.social, Moritz Kappel, Susana Castillo, @mcstammi.bsky.social, Martin Eisemann, and Marcus Magnor!

Thoughts and things to try in the thread below:
March 8, 2025 at 11:57 AM
Reposted by Florian Hahlbohm
@chrisoffner3d.bsky.social we are in need of your eval)
January 23, 2025 at 10:57 AM
Merry Christmas :) I tried this as well, but with Brush by @arthurperpixel.bsky.social. How many pictures did you take? For me, COLMAP only ended up using about 25 of 50 images, and it didn't work that well. Tbf, lighting was pretty bad.
December 25, 2024 at 11:57 PM
I really enjoyed watching the videos the last time you did this. Thanks for making them available to everyone :)
December 1, 2024 at 12:58 PM