turbotartine.bsky.social
@turbotartine.bsky.social
Indie Game Developer (#Godot, #Blender, #OpenSource) 🇫🇷 🇬🇧
- GitHub: https://github.com/J-Ponzo/
- itch.io: https://jponzo.itch.io/
- 🇫🇷 Blog: https://j-ponzo.github.io/
Thanks for the message! As for the snippet toggle, with pleasure!
Technically, the site's repo is public… but I'm not sure poking around in it is such a good idea. 😅
October 9, 2025 at 11:19 AM
Oh... that... that's the magic of pre-recording! 😅
September 4, 2025 at 4:08 PM
Thanks a lot! Deterministic lights would have been far too much at once. But the outline and the tech reconstruction for Part II are already well underway (if not finished). It should come out a bit faster than usual.

As for the Oracle, who can say? (but yes, she will be back ^^)
September 4, 2025 at 10:32 AM
"Turputide" is "turpitude" but in the good sense of the word, right? 😄 In any case, thanks for sharing! 😉
July 25, 2025 at 8:02 PM
The "Scene Update" entry corresponds to cache maintenance. That is why it is 0.00 ms for the brute-force version: there is no cache to maintain. Instead, all the unnecessary work is redone over and over in the render pass, which is why the "Render" entry is ridiculously huge.
June 18, 2025 at 6:31 PM
Furthermore, I regenerate GPU data only when the corresponding cache is invalidated. Given that most of the scene is static (vertices of the environment, textures, light info, etc.), this saves a lot of work and data streaming.
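Roughly the idea, as a minimal GDScript sketch (the cache layout, the dirty flag and the names are assumptions for illustration, not the actual renderer code):

```gdscript
# Cache table: instance id -> GPU buffer, CPU-side bytes and a dirty flag.
var _gpu_cache := {}  # int -> { "buffer": RID, "bytes": PackedByteArray, "dirty": bool }

func _mark_dirty(instance_id: int) -> void:
    # Called only when the CPU-side data of an object actually changes.
    if _gpu_cache.has(instance_id):
        _gpu_cache[instance_id]["dirty"] = true

func _sync_gpu_data(rd: RenderingDevice) -> void:
    # Re-stream only the entries whose cache was invalidated; static data is left alone.
    for entry in _gpu_cache.values():
        if entry["dirty"]:
            rd.buffer_update(entry["buffer"], 0, entry["bytes"].size(), entry["bytes"])
            entry["dirty"] = false
```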
June 18, 2025 at 6:31 PM
In the optimized version, I no longer traverse the graph. Instead, I catch enter/exit scene events from Godot and maintain cache tables with the relevant objects hooked this way. This allows me to iterate over only the relevant objects and not the entire scene.
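In GDScript it looks something like this minimal sketch, relying on the SceneTree node_added / node_removed signals (the cache layout and _build_render_data are made up for illustration):

```gdscript
extends Node

var _mesh_cache := {}   # instance id -> cached render data
var _light_cache := {}  # instance id -> Light3D

func _ready() -> void:
    # Hook scene enter/exit once, instead of walking the tree every frame.
    get_tree().node_added.connect(_on_node_added)
    get_tree().node_removed.connect(_on_node_removed)

func _on_node_added(node: Node) -> void:
    if node is MeshInstance3D:
        _mesh_cache[node.get_instance_id()] = _build_render_data(node as MeshInstance3D)
    elif node is Light3D:
        _light_cache[node.get_instance_id()] = node

func _on_node_removed(node: Node) -> void:
    _mesh_cache.erase(node.get_instance_id())
    _light_cache.erase(node.get_instance_id())

func _build_render_data(mi: MeshInstance3D) -> Dictionary:
    # Placeholder: the real thing builds vertex buffers, uniforms, etc.
    return { "node": mi }
```

The render loop then iterates over these tables only, never over the whole tree.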
June 18, 2025 at 6:30 PM
In the brute force version, the scene graph was traversed each frame, and for all graphically relevant nodes encountered, the GPU representation of the data (vertex buffers/arrays, uniforms, etc.) was regenerated/restreamed, which is obviously inefficient.
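Reconstructed as a rough sketch (not the actual code), the brute-force loop was essentially:

```gdscript
extends Node

func _render_frame(rd: RenderingDevice) -> void:
    _visit(get_tree().root, rd)  # full scene graph walk, every single frame

func _visit(node: Node, rd: RenderingDevice) -> void:
    var mi := node as MeshInstance3D
    if mi != null:
        # Vertex data is re-extracted and re-uploaded even if nothing changed.
        var bytes := _extract_vertex_data(mi)
        var vb := rd.vertex_buffer_create(bytes.size(), bytes)
        # ... bind pipeline, set uniforms, issue the draw call (omitted) ...
        rd.free_rid(vb)
    for child in node.get_children():
        _visit(child, rd)

func _extract_vertex_data(mi: MeshInstance3D) -> PackedByteArray:
    # Placeholder for the real extraction from the mesh surfaces.
    var vertices: PackedVector3Array = mi.mesh.surface_get_arrays(0)[Mesh.ARRAY_VERTEX]
    return vertices.to_byte_array()
```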
June 18, 2025 at 6:30 PM
This video is a before (right side) / after (left side) comparison of the optimizations I made.
June 18, 2025 at 6:30 PM
If I manage to go further, I will certainly perform some benchmarks to answer this question. But for now, I'm pretty sure the bottleneck is my poorly designed code :)
June 8, 2025 at 8:51 PM
Even with all these parts missing, it still performs terribly because my implementation is very naive and unoptimized (no culling at all, buffers regenerated every frame, etc.). I just wanted to see how far I could get with the RenderingDevice API.
June 8, 2025 at 8:51 PM
In particular, it does not support:
- Mesh deformation (animations).
- Particles.
- Shadows.
- Transparency.
- Materials (it just retrieves the albedo and normal textures from BaseMaterial3D and sends those to the hard-coded shader; see the sketch below the list).
- All the things I haven't thought about yet...
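For the materials point, the whole extent of the support is roughly this (the helper name and fallback comments are assumptions; only the BaseMaterial3D albedo/normal retrieval is what the renderer actually does):

```gdscript
func _extract_textures(mesh_instance: MeshInstance3D, surface: int) -> Dictionary:
    var mat: Material = mesh_instance.get_active_material(surface)
    var std := mat as BaseMaterial3D
    if std != null:
        return {
            "albedo": std.albedo_texture,  # null -> the shader falls back to plain white
            "normal": std.normal_texture,  # null -> the shader falls back to a flat normal
        }
    # ShaderMaterial and everything else is simply ignored.
    return {}
```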
June 8, 2025 at 8:50 PM
For now, it only implements a very limited set of features:
- A pure GLSL hard-coded shader supporting opaque static meshes only.
- A very limited number of light sources (max 1 directional, 4 point, 4 spot).
- A simple Lambert illumination model (no specular); both are sketched below.
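A rough sketch of what those limits mean (names and structure are made up; in the real renderer the shading happens in the hard-coded GLSL shader, this just restates the fixed light counts and the Lambert term):

```gdscript
const MAX_DIRECTIONAL := 1
const MAX_POINT := 4
const MAX_SPOT := 4

func _collect_lights(all_lights: Array) -> Dictionary:
    # Fixed-size buckets: anything beyond the limits is simply dropped.
    var buckets := { "directional": [], "point": [], "spot": [] }
    for light in all_lights:
        if light is DirectionalLight3D and buckets["directional"].size() < MAX_DIRECTIONAL:
            buckets["directional"].append(light)
        elif light is OmniLight3D and buckets["point"].size() < MAX_POINT:
            buckets["point"].append(light)
        elif light is SpotLight3D and buckets["spot"].size() < MAX_SPOT:
            buckets["spot"].append(light)
    return buckets

func _lambert(albedo: Color, normal: Vector3, light_dir: Vector3, light_color: Color) -> Color:
    # Lambert term only, no specular: albedo * light_color * max(dot(N, L), 0).
    return albedo * light_color * maxf(normal.dot(light_dir), 0.0)
```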
June 8, 2025 at 8:50 PM
It's difficult to say at this point. It's just an early proof of concept I made for fun.
June 8, 2025 at 8:50 PM