Sebastian Aaltonen
sebaaltonen.bsky.social
Building a new renderer at HypeHype. Former principal engineer at Unity and Ubisoft. Opinions are my own.
This game doesn't have any textures. This is true for most of the games on the platform currently.

Texture support is a recent addition, and we are going to add lots of new texturing features in January / February.
December 19, 2024 at 1:09 PM
This is why we have virtual texturing in HypeHype. Our games are tiny (10MB), so we don't need to stream anything. However, we still implemented virtual texturing to improve our CPU and GPU performance. It also solves material variety and tiling issues (fast material blending and decaling).
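A minimal CPU-side sketch of the virtual-texturing indirection step (the names, page size, and table layout here are hypothetical illustrations, not HypeHype's actual implementation): a virtual texel coordinate is translated through a page table into a physical atlas location, which is what lets every material sample from one shared atlas.

```cpp
#include <cstdint>

// One page-table entry: where this virtual page lives in the physical atlas.
struct PageEntry { uint16_t atlasX, atlasY; };

constexpr int kPageSize   = 128;  // texels per page side (assumption)
constexpr int kTableWidth = 64;   // pages per virtual texture side (assumption)

// Translate virtual texel coords (vx, vy) into physical atlas texel coords.
void translate(const PageEntry* pageTable, int vx, int vy, int& px, int& py)
{
    const PageEntry& e =
        pageTable[(vy / kPageSize) * kTableWidth + (vx / kPageSize)];
    px = e.atlasX * kPageSize + vx % kPageSize;
    py = e.atlasY * kPageSize + vy % kPageSize;
}
```

In a real renderer this lookup happens in the shader via an indirection texture, but the address arithmetic is the same.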
December 14, 2024 at 12:58 PM
Engine features such as rendering, culling, collision, and physics need well-optimized data-oriented code. That code is long-lasting, so you can afford to write complex parallel algorithms and micro-optimize them. Game code, on the other hand, is prototyping-heavy, with lots of organic dependencies.
December 14, 2024 at 12:51 PM
In Claybook, we had the game world on the GPU and lots of game code running in compute shaders. Iteration times sucked. We couldn't make a good game. It ended up shipping as a tech proto. For game code, prototyping speed is often more important than runtime performance.
December 14, 2024 at 12:48 PM
Yeah, ECS examples tend to be trivial: a Mover component moves entities along their velocity vectors. No complex dependencies. Real game code has many dependencies that are hard to express with ECS. Game programmers then waste time writing complex CS stuff like graph coloring algorithms.
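The trivial Mover example above looks something like this (a generic sketch, not any particular engine's API): each entity touches only its own data, so the loop parallelizes trivially. Real game systems rarely decompose this cleanly.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical "Mover" system over structure-of-arrays component storage:
// positions advance along velocities. No cross-entity reads or writes,
// so every iteration is independent -> trivially parallel.
void moverSystem(std::vector<Vec3>& positions,
                 const std::vector<Vec3>& velocities, float dt)
{
    for (std::size_t i = 0; i < positions.size(); ++i)
    {
        positions[i].x += velocities[i].x * dt;
        positions[i].y += velocities[i].y * dt;
        positions[i].z += velocities[i].z * dt;
    }
}
```

The moment one entity's update depends on another's (targeting, parenting, collision response), this clean independence disappears, and that's where the scheduling complexity starts.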
December 14, 2024 at 12:46 PM
Even without deep math knowledge, you can see that dividing by the determinant is unnecessary if you normalize the vectors afterwards: the division is just a multiplication by a scalar (the inverse determinant), and a uniform scalar factor is lost in the normalize anyway.

But people just take math formulas and put them one after the other without thinking.
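A worked sketch of the point above, using a non-uniform scale M = diag(sx, sy, sz) as the simplest illustrative case and assuming det(M) > 0 (a negative determinant would flip the sign). For a diagonal matrix, inverse-transpose(M) = diag(1/sx, 1/sy, 1/sz) and the cofactor matrix is diag(sy·sz, sx·sz, sx·sy) = det(M) · inverse-transpose(M), so after normalization both give the same normal.

```cpp
#include <cmath>

struct V3 { float x, y, z; };

V3 normalize(V3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Normal transform via the textbook inverse-transpose (with the division).
V3 normalViaInverseTranspose(V3 n, float sx, float sy, float sz)
{
    return normalize({ n.x / sx, n.y / sy, n.z / sz });
}

// Normal transform via the cofactor matrix: no division by the determinant.
// Differs from the inverse-transpose only by the scalar det(M) = sx*sy*sz,
// which normalize() discards.
V3 normalViaCofactor(V3 n, float sx, float sy, float sz)
{
    return normalize({ n.x * sy * sz, n.y * sx * sz, n.z * sx * sy });
}
```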
December 13, 2024 at 8:30 AM
I inherited HypeHype rendering code ownership, and we had stupid bugs like quaternions being left handed in some places and right handed in others, and OpenGL and DirectX matrix layouts mixed (row vs column major). Now we have been cleaning up the math. This is one extra thing to do.
December 13, 2024 at 8:28 AM
Yeah. But it's hard for people to unlearn things. If the old solution works, they stick with it. AFAIK Unreal, Unity, and Godot all still use the traditional inverse transpose. Most game devs don't read math books. They just use Unity.
December 13, 2024 at 8:23 AM
Inverse transpose is still the de-facto standard solution, even though it's not the best. Most engineers working on game engines haven't read your books or articles.

I have your book, and I am halfway through it. It's excellent. Thanks for educating people.
December 13, 2024 at 7:26 AM
Not right now, but that's a plan.
December 13, 2024 at 7:14 AM
Nice!
December 12, 2024 at 10:32 PM
VT makes it possible to fit all textures in one atlas (three atlases, actually, since there are three textures per PBR material). A 4096x4096 atlas is fine for 720p resolution (low-end phones), and 8192x8192 for 4K.
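A back-of-the-envelope check of those atlas sizes (my framing, not a quoted budget): comparing atlas texel count to screen pixel count shows how much headroom each atlas has over a roughly 1:1 texel-to-pixel density target.

```cpp
// Atlas texels vs. screen pixels: the ratio is the headroom over a
// 1:1 texel-to-pixel density (an assumed target for this estimate).
constexpr long long texelCount(long long side) { return side * side; }
constexpr long long pixelCount(long long w, long long h) { return w * h; }

// 4096^2  = ~16.8M texels vs. 1280x720  = ~0.92M pixels -> ~18x headroom
// 8192^2  = ~67.1M texels vs. 3840x2160 = ~8.3M  pixels -> ~8x  headroom
```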
December 12, 2024 at 10:31 PM
This demo doesn't yet have a virtual texture decaling system. It will make it possible to add lots of texture variety with minimal storage cost. But it's possible to make pretty good-looking results even with just a couple of traditional tiling textures.
December 12, 2024 at 6:03 PM
Thanks for the info! We are implementing local lights for HypeHype and investigating similar techniques. UGC is usually kit-bashed content, and users don't consider technical limitations. They want to add more objects. This is why we are also interested in "unlimited" light count technologies.
December 6, 2024 at 11:36 AM
Unity's DOTS Renderer (our team wrote it) calculates the determinant for every entity. If the determinant is negative, then that entity gets binned into a different batch. Batches are mesh+material specific. The winding difference causes two instanced draws for the same mesh+material combo.
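The binning idea can be sketched like this (an illustration of the technique, not Unity's actual code): a negative determinant means the transform mirrors the mesh, so the triangle winding is effectively reversed and the entity must go into a separate batch even for the same mesh+material.

```cpp
struct M3 { float m[3][3]; };

// Determinant of the upper-left 3x3 of the entity's world transform.
float det3(const M3& a)
{
    return a.m[0][0] * (a.m[1][1] * a.m[2][2] - a.m[1][2] * a.m[2][1])
         - a.m[0][1] * (a.m[1][0] * a.m[2][2] - a.m[1][2] * a.m[2][0])
         + a.m[0][2] * (a.m[1][0] * a.m[2][1] - a.m[1][1] * a.m[2][0]);
}

// Returns 1 if the entity belongs in the "flipped winding" batch,
// 0 otherwise. Same mesh+material with mixed signs -> two instanced draws.
int windingBatch(const M3& worldFromObject)
{
    return det3(worldFromObject) < 0.0f ? 1 : 0;
}
```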
December 6, 2024 at 11:17 AM
In a GPU-driven renderer, you do this manually. If the winding needs to be flipped, our cluster->index expansion shader outputs the indices in reverse order. The same is true for mesh shader approaches. You never change the gfx API winding state. This way you can batch both at once.
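A CPU sketch of that expansion step (the real thing runs as a compute shader; this just shows the index-order trick): when an instance's transform mirrors the mesh, the indices of each triangle are emitted in reverse, flipping the winding in the data rather than in API state, so mirrored and non-mirrored instances can share one batch.

```cpp
#include <cstdint>
#include <vector>

// Expand a cluster's triangle list into the output index buffer.
// flipWinding reverses each triangle's index order, which flips the
// facing without touching any graphics API winding state.
void expandCluster(const std::vector<uint32_t>& clusterIndices,
                   bool flipWinding, std::vector<uint32_t>& out)
{
    for (std::size_t t = 0; t + 2 < clusterIndices.size(); t += 3)
    {
        if (flipWinding)
        {
            out.push_back(clusterIndices[t + 2]);
            out.push_back(clusterIndices[t + 1]);
            out.push_back(clusterIndices[t + 0]);
        }
        else
        {
            out.push_back(clusterIndices[t + 0]);
            out.push_back(clusterIndices[t + 1]);
            out.push_back(clusterIndices[t + 2]);
        }
    }
}
```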
December 6, 2024 at 11:13 AM
This is a PITA in Vulkan 1.0 because the triangle winding state is baked into the PSO. There's no support for dynamically changing the winding in Vulkan 1.0. Dynamic winding was added in Vulkan 1.3, which most mobile phones don't yet support :(
December 6, 2024 at 11:10 AM
The measurements were done in a large scene (formula track). A total of 2 million triangles were rendered with 4 cascades and 1.8 million with 3 cascades.
December 6, 2024 at 11:06 AM