Thibaut Boissin
thib-s.bsky.social
So in short:

AOL preconditioning (fused + re-tuned) -> 1 iter saved

Better convergence, singular values closer to 1

Kernel tweak removes extra memory load

This gives ~1.6x speedup, ~3x vs plain torch. 🚀
September 21, 2025 at 8:06 PM
Bonus: I spotted redundant memory loads in the 3rd line of the NS iteration.
Wrote a small kernel to optimize bandwidth -> more free speed.
September 21, 2025 at 8:06 PM
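For reference, here is what one quintic NS step looks like in plain PyTorch (the form used in Muon/Dion). This is only an illustration of where a naive implementation touches memory more than it needs to; the repo's actual fix is a Triton kernel, and `ns_step`, `a`, `b`, `c` are placeholder names, not the tuned values.

```python
import torch

# Plain-PyTorch sketch of one quintic Newton-Schulz step (the form used in
# Muon/Dion). The repo's bandwidth fix is a Triton kernel; a, b, c are
# placeholder coefficients, not the tuned values.
def ns_step(X: torch.Tensor, a: float, b: float, c: float) -> torch.Tensor:
    A = X @ X.T                                     # Gram matrix
    B = torch.addmm(A, A, A, beta=b, alpha=c)       # B = b*A + c*(A @ A), fused
    return torch.addmm(X, B, X, beta=a, alpha=1.0)  # X = a*X + B @ X, fused
```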
Problem 1: AOL adds extra cost.
Fix: fuse AOL's operation with an existing NS step -> essentially free.

Problem 2: NS isn’t tuned for "almost orthogonal" inputs.
Fix: re-tune parameters with a genetic algorithm that is aware of the preconditioning.
September 21, 2025 at 8:06 PM
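To make the "essentially free" part concrete, here is a minimal sketch of one way the fusion can work (illustrative only, not necessarily the repo's exact code; `aol_precondition_fused` is a made-up name): the first NS step already computes A = X @ X.T, so both the AOL scales and the Gram matrix of the rescaled input can be derived from it without an extra matmul.

```python
import torch

# Hedged sketch of fusing the AOL preconditioning with the first NS step
# (illustrative, not the repo's exact code). The Gram matrix A is computed
# once and reused for both the AOL scales and the rescaled input.
def aol_precondition_fused(X: torch.Tensor):
    A = X @ X.T                                # already needed by the first NS step
    d = (A.abs().sum(dim=1) + 1e-12).rsqrt()   # AOL scales: (sum_j |A_ij|)^(-1/2)
    X = d[:, None] * X                         # rescaled input, sigma_max <= 1
    A = d[:, None] * A * d[None, :]            # Gram of the rescaled X, no new matmul
    return X, A
```

The re-tuned coefficients then only have to handle inputs whose singular values are already bounded by 1.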
The inspiration comes from Bernd Prach's Almost Orthogonal Layer (AOL).
It gives a cheap way to make a matrix "almost orthogonal."

Not great for full orthogonalization, but much better than rescaling -> perfect as a preconditioner for NS.
September 21, 2025 at 8:06 PM
The key idea: reduce the number of NS iterations.
How? By pre-conditioning the input matrix.

This makes the algorithm converge faster without losing precision.
September 21, 2025 at 8:06 PM
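Schematically, the idea looks like this. A minimal sketch with illustrative names (`orthogonalize`, `precondition`, `ns_step` and the coefficient triples are not the repo's API):

```python
import torch

# Minimal sketch of the overall idea (illustrative, not the repo's API):
# precondition once, then run one fewer Newton-Schulz iteration, each with
# its own re-tuned (a, b, c) coefficients.
def orthogonalize(G: torch.Tensor, precondition, ns_step, coeffs) -> torch.Tensor:
    X = G / (G.norm() + 1e-7)      # usual Frobenius normalization
    X = precondition(X)            # e.g. the AOL rescaling sketched above
    for a, b, c in coeffs:         # one fewer (a, b, c) triple than the baseline
        X = ns_step(X, a, b, c)
    return X
```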
here’s the code: github.com/thib-s/flash... (I'll do a PR soon in Dion/Muon)

And here’s how I squeezed out the extra gain
GitHub - thib-s/flash-newton-schulz: My attempt to improve the speed of the Newton-Schulz algorithm, starting from the Dion implementation.
September 21, 2025 at 8:06 PM
I used a mathematical trick to pre-condition the matrix, which shaves one iteration off the algorithm. This is not only faster, but also unlocks better convergence, with singular values closer to 1.
September 21, 2025 at 8:06 PM
What is the S_n^++ ?
August 10, 2025 at 10:17 AM
It's crazy to think that I spent years using the Björck & Bowie algorithm with 25 iters, and within a year we got the NS algorithm, an optimized set of parameters to run it in 5 iters, and Triton kernels.
August 10, 2025 at 10:15 AM
Large matrices are already compute-bound, so the gain is small for those; I will work on adding fp8 support (once the current code is consolidated).
I'll do a PR into the Dion repo when ready!
August 10, 2025 at 10:15 AM
Open question: does FP4 make fine-tuning easier or harder? On one hand, FP4 weights might demand high-precision gradients; on the other, it might be a great fit for QLoRA. What do you think?
August 3, 2025 at 11:01 AM
Robustness Check: Training in FP4 stress-tests hyperparameters and initialization quality.
If your model converges, you have robust, well-conditioned weights and gradients.
The model will likely be more resistant to input noise.
August 3, 2025 at 11:01 AM
Not "Pure" FP4: FP4 rarely stands alone. It's usually accompanied by per-row or per-column scaling factors (FP8/FP16). Gradients are often accumulated at higher precision (FP16/FP32), making ultra-low precision practical.
August 3, 2025 at 11:01 AM
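As a toy illustration of what "FP4 with per-row scales" means: a simulated sketch only, with made-up names (`fake_quant_fp4_rowwise`, `FP4_GRID`); real pipelines use hardware FP4 formats, keep the scales in FP8/FP16, and accumulate gradients in higher precision.

```python
import torch

# Simulated per-row-scaled FP4 (e2m1) quantization: a toy sketch, not a real
# hardware kernel. Real pipelines store scales in FP8/FP16 and accumulate
# gradients in FP16/FP32.
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # e2m1 magnitudes

def fake_quant_fp4_rowwise(w: torch.Tensor) -> torch.Tensor:
    scale = w.abs().amax(dim=1, keepdim=True) / 6.0            # one scale per row
    x = w / scale.clamp(min=1e-12)                             # rows mapped into [-6, 6]
    idx = (x.abs().unsqueeze(-1) - FP4_GRID).abs().argmin(-1)  # nearest e2m1 magnitude
    return x.sign() * FP4_GRID[idx] * scale                    # dequantized back
```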
Efficiency Boost: Halving precision (FP8 → FP4) allows doubling parameters with roughly similar FLOPs. But benefits can be even bigger because:
- Larger vector sizes improve activation utilization.
- Lower precision floating-point math itself adds beneficial non-linearities.
August 3, 2025 at 11:01 AM
This makes me wonder what happens in standard training: when your training loss increases, does it mean that optimization failed? Or that, thanks to weight decay, the network’s (unknown) Lipschitz constant got lower and the network is just getting more robust? 🤷
July 25, 2025 at 7:44 PM
This has deeper implications: two networks with different initialization, batch order, or data augmentation end up learning the same function (same answers, same errors, both in train and val), even though the weights are completely different!
July 25, 2025 at 7:44 PM
The change in the Lipschitz constant makes the network more accurate (when increased) or more robust (when decreased). Unlike traditional classification, robust classification with a Lipschitz net has a unique minimizer once the Lipschitz constant is set.
July 25, 2025 at 7:44 PM
The Lipschitz constant of a network impacts its robustness, but what happens when you change it during training? Here, we train 16 networks with a fixed Lipschitz constant at first, then increase or decrease it by a factor of two mid-training.
July 25, 2025 at 7:44 PM
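For intuition on what "changing the Lipschitz constant mid-training" can mean, here is a minimal sketch under the assumption that the network is a 1-Lipschitz backbone followed by a single scalar gain (the actual experiments may enforce the constraint differently; `ScaledLipschitzNet` is an illustrative name).

```python
import torch

# Hedged sketch, assuming the net is a 1-Lipschitz backbone times a scalar gain,
# so that gain is an upper bound on the network's Lipschitz constant.
class ScaledLipschitzNet(torch.nn.Module):
    def __init__(self, backbone_1lip: torch.nn.Module, lip: float = 1.0):
        super().__init__()
        self.backbone = backbone_1lip   # e.g. orthogonal layers + GroupSort
        self.lip = lip                  # Lipschitz bound of the whole network

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.lip * self.backbone(x)

# Mid-training change from the experiment: multiply or divide the bound by 2.
# model.lip *= 2.0   # more accurate, less robust
# model.lip /= 2.0   # more robust, less accurate
```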