Arthur Douillard
@douillard.bsky.social
distributed (diloco) + modularity (dipaco) + llm @ deepmind | continual learning phd @ sorbonne
read the full paper: https://arxiv.org/abs/2502.12996
@huggingface page: https://huggingface.co/papers/2502.12996

congrats to my collaborators @SatyenKale, who led this work, and Yani Donchev
February 19, 2025 at 5:41 PM
required bandwidth reduction is massive. for a 100B-param model:

DP requires 471 Gbits/s
Streaming DiLoCo with inner com. overlap: 1.4 Gbits/s
Streaming DiLoCo with eager outer com. overlap: 400 Mbits/s, more than a 1000x reduction

400 Mbits/s is consumer-grade bandwidth, FYI
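
(for intuition, here's a back-of-the-envelope sketch of where numbers of that order come from -- the step time, precision, fragment size, and sync period are my own illustrative assumptions, not the paper's simulation settings)

```python
# Back-of-the-envelope peak-bandwidth estimate. All numbers below are
# illustrative assumptions, not the paper's simulation settings.

def peak_bandwidth_gbits(params_in_flight, bits_per_param, overlap_steps, step_time_s):
    """Bits that must cross the slow link, divided by the time available to send them."""
    bits = params_in_flight * bits_per_param
    window_s = overlap_steps * step_time_s
    return bits / window_s / 1e9

N = 100e9          # 100B-parameter model
STEP = 1.0         # assumed seconds per training step

# Data-parallel: the full gradient (say bf16) must be exchanged every single step.
print(peak_bandwidth_gbits(N, 16, overlap_steps=1, step_time_s=STEP))

# Streaming DiLoCo-style: only a fragment (say 1/30th of the model) is in flight
# at a time, 4-bit quantized, and its exchange is overlapped over ~30 steps.
print(peak_bandwidth_gbits(N / 30, 4, overlap_steps=30, step_time_s=STEP))
```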
February 19, 2025 at 5:41 PM
scaling up to 1B params, we see that our eager method can reach no loss of performance when synchronizing every 30 steps (thus overlapping 30 computation steps!), and follows closely when overlapping 100 steps
February 19, 2025 at 5:41 PM
thus we propose an *eager* version:

the update is the average of the *local, up-to-date* update from the replica itself and the *remote, stale* updates from the other replicas
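
in code, that mixing rule is roughly this (a minimal sketch with my own naming; each delta is the usual DiLoCo outer gradient, i.e. the parameters before minus after the inner steps):

```python
def eager_outer_gradient(local_delta, stale_remote_deltas):
    """Eager mixing (sketch): average the replica's own fresh outer gradient
    with the stale outer gradients already received from the other replicas,
    so the outer step never has to wait for the in-flight all-reduce.
    `local_delta` and each stale delta are same-shaped arrays/tensors."""
    num_replicas = 1 + len(stale_remote_deltas)
    total = local_delta
    for stale in stale_remote_deltas:
        total = total + stale
    return total / num_replicas
```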
February 19, 2025 at 5:41 PM
but its performance is dramatically bad.

we can recover a bit by lowering the outer learning rate by 4x, but this is still unsatisfying
February 19, 2025 at 5:41 PM
in this work, we explore whether we can overlap an entire outer step, made of dozens to hundreds of computation steps!

we first try a naive "delayed" version
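
(the delayed variant boils down to something like this sketch, with my own naming: the outer optimizer just consumes the previous round's averaged outer gradient, since the current one is still being all-reduced while the next inner steps run)

```python
def delayed_outer_gradient(deltas_still_in_flight, previous_round_avg_delta):
    """Naive delayed variant (sketch): ignore the outer gradients that are
    still being all-reduced and apply the fully *stale* average from the
    previous round instead. Fully overlapped, but one whole outer step behind."""
    del deltas_still_in_flight  # not available yet; they land next round
    return previous_round_avg_delta
```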
February 19, 2025 at 5:41 PM
Streaming DiLoCo's second contribution is to overlap communication with computation, massively increasing the tolerable latency

we can safely overlap up to 5 steps, but more than that and performance drops rapidly!

https://x.com/Ar_Douillard/status/1885292127678021751
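
mechanically, the overlap looks like this toy sketch (the background thread and the callables are mine, just to show the shape of it):

```python
import threading

def overlapped_allreduce(fragment, all_reduce_fn, inner_step_fn, overlap_steps):
    """Toy overlap sketch: launch the slow cross-datacenter all-reduce of an
    outer-gradient fragment in the background, keep running inner training
    steps, and only block on the result `overlap_steps` later."""
    box = {}
    worker = threading.Thread(target=lambda: box.update(avg=all_reduce_fn(fragment)))
    worker.start()                  # bytes start crossing the slow link
    for _ in range(overlap_steps):
        inner_step_fn()             # computation keeps the accelerators busy
    worker.join()                   # by now the exchange should be done
    return box["avg"]
```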
February 19, 2025 at 5:41 PM
DiLoCo allows us to do data-parallel training distributed across the world by only synchronizing once in a while, thus amortizing the communication cost

however, that synchronization is a blocking operation!

https://x.com/Ar_Douillard/status/1724732329740976187
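
for reference, one DiLoCo round on one replica looks roughly like this (a sketch: `inner_step_fn` and `all_reduce_mean` are placeholders for the real local training step and the cross-datacenter averaging; the outer optimizer is Nesterov momentum with typical DiLoCo-style values):

```python
import copy

def diloco_round(params, momentum, inner_steps, inner_step_fn, all_reduce_mean,
                 outer_lr=0.7, beta=0.9):
    """One DiLoCo round on one replica (sketch). `params` and `momentum` are
    dicts of arrays; outer_lr/beta are typical DiLoCo-style values."""
    start = copy.deepcopy(params)
    for _ in range(inner_steps):             # cheap: local compute, no communication
        params = inner_step_fn(params)
    outer_grad = {k: start[k] - params[k] for k in params}
    avg_grad = all_reduce_mean(outer_grad)   # <-- blocking: every replica waits here
    new_params, new_momentum = {}, {}
    for k in start:                          # outer Nesterov step from the round's start point
        new_momentum[k] = beta * momentum[k] + avg_grad[k]
        new_params[k] = start[k] - outer_lr * (beta * new_momentum[k] + avg_grad[k])
    return new_params, new_momentum
```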
February 19, 2025 at 5:41 PM
arxiv is here: https://arxiv.org/abs/2502.12996

read more below!

February 19, 2025 at 5:41 PM
I'll be in SF in two weeks to talk at the AlgoPerf workshop, and i have a bunch of stickers to give away, so let me know if you want to meet!
January 31, 2025 at 1:35 PM
Big thanks to all my collaborators!

We finished this last spring, and it was one of the coolest projects i've been on.

The future will be distributed 🫡

https://arxiv.org/abs/2501.18512v1
January 31, 2025 at 1:35 PM
All of this is why we say Streaming DiLoCo is a good step towards a distributed free lunch 🥪

So so many ideas we try just work on top of DiLoCo. And it can scale too! Look at the cracked folks at @PrimeIntellect who scaled their version to 10B
January 31, 2025 at 1:35 PM
What if each replica overlaps a different number of steps (\tau) because they run at different speeds?

Can we break away from the lockstep synchronization? yes!

Workers can have a few delay steps, and it just works, w/o any special handling.
January 31, 2025 at 1:35 PM
There are tons of plots, tables, and charts in our paper, but let me share two more exciting plots:

Over how many steps can you safely overlap communication?

At least 5 without any significant loss of perf! That's a massive increase in tolerated latency.
January 31, 2025 at 1:35 PM
Likewise, with a 405B-parameter Llama.
January 31, 2025 at 1:35 PM
Speaking of DeepSeek, how do you distribute its pretraining across the world with low bandwidth?

It has only 37B activated params, but you need to sync 671B params in total! Hard to do across continents with data-parallel...

However, with our method? ❤️‍🔥
January 31, 2025 at 1:35 PM
Indeed, post-training RL for reasoning is easier to distribute than pretraining (good post here: primeintellect.ai/blog/intellect-math)

but we still need to scale up our pretraining, so this remains a relevant axis!

DeepSeek-R1 also notes it:
January 31, 2025 at 1:35 PM
Of course, the numbers displayed in that table are from a "simulation", but they're a pretty good indicator of what we find in practice.

Abolish the tyranny of requiring huge bandwidth! ✊
January 31, 2025 at 1:35 PM
The good part of overlapping communication with computation?

As @m_ryabinin noted in Swarm Parallelism: larger networks spend more time doing computation, O(n^3), vs doing communication, O(n^2).

We have much more time to sync at larger scales!
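
a toy way to see that square-cube argument (widths, token counts, and byte sizes below are my own picks): for one n x n weight matrix, processing a batch whose token count grows with n costs O(n^3) flops, while shipping that matrix's gradient costs only O(n^2) bytes

```python
def compute_to_comm_ratio(n, bytes_per_value=2):
    """Toy square-cube illustration for a single n x n weight matrix."""
    tokens = n                            # assume batch tokens grow with model width
    flops = 2 * tokens * n * n            # (tokens x n) @ (n x n) matmul, fwd only
    comm_bytes = bytes_per_value * n * n  # shipping that matrix's gradient
    return flops / comm_bytes

for width in (1024, 4096, 16384):
    print(width, compute_to_comm_ratio(width))   # ratio grows linearly with width
```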
January 31, 2025 at 1:35 PM
My co-author, Yani, built a simulator: a DAG with fwd, bwd, and gradient-reduction nodes.

It estimates how much time is spent in the costly communication between non-colocated devices and how much is spent crunching flops.
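
in the same spirit, here's a toy critical-path walk over such a DAG (node names, costs, and the compute/network split are made up for illustration; the real simulator and timings are in the paper):

```python
# Tiny fwd/bwd/reduce dependency DAG with made-up costs.
NODES = {
    "fwd":    {"cost": 1.0, "deps": [],         "resource": "compute"},
    "bwd":    {"cost": 2.0, "deps": ["fwd"],    "resource": "compute"},
    "reduce": {"cost": 5.0, "deps": ["bwd"],    "resource": "network"},  # cross-datacenter
    "update": {"cost": 0.1, "deps": ["reduce"], "resource": "compute"},
}

def finish_time(name, memo):
    """Longest-path (critical-path) finish time of a node."""
    if name not in memo:
        start = max((finish_time(d, memo) for d in NODES[name]["deps"]), default=0.0)
        memo[name] = start + NODES[name]["cost"]
    return memo[name]

memo = {}
total = finish_time("update", memo)
network = sum(n["cost"] for n in NODES.values() if n["resource"] == "network")
print(f"step time: {total:.1f}, of which {network:.1f} is spent waiting on the network")
```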
January 31, 2025 at 1:35 PM
Put everything together, scale it to 4B params, and reach similar performance to data-parallel.

It's even better when overtraining with a larger token budget. remember the bitter lesson? just put more data and flops in your model; Streaming DiLoCo enables that.
January 31, 2025 at 1:35 PM
[3] You don't need full precision for your communication.

Quantizing your update to 4 bits is enough -- you can barely see any change in performance.

And that's the free dessert 🍦
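
for concreteness, a generic symmetric 4-bit codec looks like this (my own minimal scheme for illustration, not necessarily the exact quantizer used in the paper):

```python
import numpy as np

def quantize_4bit(x):
    """Symmetric linear 4-bit quantization: 16 integer levels plus one float
    scale per tensor. A generic scheme for illustration only."""
    scale = np.max(np.abs(x)) / 7.0 + 1e-12          # map values onto [-7, 7]
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q, scale):
    return q.astype(np.float32) * scale

outer_grad = (np.random.randn(1_000_000) * 1e-3).astype(np.float32)
q, s = quantize_4bit(outer_grad)
err = np.abs(dequantize_4bit(q, s) - outer_grad).max()
print(f"max reconstruction error: {err:.2e}")        # bounded by ~scale/2
```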
January 31, 2025 at 1:35 PM