Zach Mueller
@muellerzr.bsky.social
Technical Lead on Accelerate @ Hugging Face | Passionate about Open Source | https://muellerzr.github.io
On November 1st we're running it again! Come join over 300 students in learning about what makes distributed training tick, common bottlenecks, and more. maven.com/walk-with-c...
Scratch to Scale: Large-Scale Training in the Modern World by Zachary Mueller on Maven
Learn the techniques used today to take your model training from Colab to Clusters
maven.com
October 5, 2025 at 12:02 PM
Dispatching works by keeping the dataset on one process and sending batches out to the other workers throughout training. This incurs a communication cost, since each batch is a GPU -> GPU transfer, but many find this more appealing than the alternatives.
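A rough sketch of the idea with raw torch.distributed (this is not Accelerate's internal implementation; it assumes NCCL is already initialized, one GPU per rank, and the toy dataset and `dispatch_loop` name are made up for illustration):

```python
# Rough sketch only: rank 0 owns the dataloader, builds each global batch on
# its GPU, broadcasts it, and every rank keeps just its own slice.
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, TensorDataset

def dispatch_loop(num_steps=10, global_batch_size=8, feature_dim=16):
    rank, world = dist.get_rank(), dist.get_world_size()
    device = torch.device(f"cuda:{rank}")
    per_rank = global_batch_size // world

    loader = None
    if rank == 0:
        # Only the main process ever touches the dataset.
        dataset = TensorDataset(torch.randn(4096, feature_dim))
        loader = iter(DataLoader(dataset, batch_size=global_batch_size, drop_last=True))

    for _ in range(num_steps):
        if rank == 0:
            (global_batch,) = next(loader)
            global_batch = global_batch.to(device)
        else:
            # Other ranks allocate an empty buffer to receive into.
            global_batch = torch.empty(global_batch_size, feature_dim, device=device)

        # The GPU -> GPU transfer mentioned above happens here.
        dist.broadcast(global_batch, src=0)

        # Each worker trains only on its own slice of the global batch.
        yield global_batch[rank * per_rank : (rank + 1) * per_rank]
```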
October 5, 2025 at 12:02 PM
You'll also get access to the prior cohort's guest speakers! (And get in for 35% off) maven.com/walk-with-c...
Scratch to Scale: Large-Scale Training in the Modern World by Zachary Mueller on Maven
Learn the techniques used today to take your model training from Colab to Clusters
maven.com
October 3, 2025 at 11:24 AM
On November 1st, the second cohort of Scratch to Scale kicks off! Come learn the major tricks and algorithms used when single-GPU training hits failure points.
October 3, 2025 at 11:24 AM
This ensures consistent randomness and avoids redundant computation, making it well-suited for more complex sampling strategies.
October 3, 2025 at 11:24 AM
Batch sampler sharding is especially efficient when using non-trivial sampling methods (weighted, balanced, temperature-based, etc.), since the sampling logic runs once globally rather than being duplicated per worker.
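For example (a sketch with plain PyTorch samplers, not the actual Accelerate API; the function name and arguments are made up): give every rank the same seeded WeightedRandomSampler so there is effectively one global sampling stream, then let each rank keep only its chunk of every global batch.

```python
import torch
from torch.utils.data import BatchSampler, WeightedRandomSampler

def sharded_weighted_batches(weights, global_batch_size, rank, world_size, seed=0):
    # Same seed on every rank -> every rank sees the exact same global stream
    # of sampled indices, so the sampling decision never diverges per worker.
    generator = torch.Generator().manual_seed(seed)
    sampler = WeightedRandomSampler(weights, num_samples=len(weights), generator=generator)
    batch_sampler = BatchSampler(sampler, batch_size=global_batch_size, drop_last=True)

    per_rank = global_batch_size // world_size
    for global_batch in batch_sampler:
        # Each rank keeps only its chunk of the globally sampled batch.
        yield global_batch[rank * per_rank : (rank + 1) * per_rank]
```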
October 3, 2025 at 11:24 AM
Unlike iterable dataset sharding, where each worker directly pulls raw data in its __iter__, batch sampler sharding first generates indices that specify which items to fetch. These indices are grouped into batches, which are then collated into tensors and moved to CUDA for training.
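A simplified sketch of that flow (loosely inspired by the idea behind Accelerate's BatchSamplerShard, not its real code; assumes torch >= 1.13 for `default_collate`, one GPU per rank, and that each sample is a tuple of tensors):

```python
import torch
from torch.utils.data import BatchSampler, RandomSampler, default_collate

def sharded_batches(dataset, global_batch_size, rank, world_size, seed=0):
    # Identical seed on every rank so all ranks generate the same global batches.
    generator = torch.Generator().manual_seed(seed)
    batch_sampler = BatchSampler(
        RandomSampler(dataset, generator=generator),
        batch_size=global_batch_size,
        drop_last=True,
    )
    per_rank = global_batch_size // world_size
    device = torch.device(f"cuda:{rank}")

    for global_indices in batch_sampler:
        # 1) Keep only this rank's chunk of the global batch of indices.
        local_indices = global_indices[rank * per_rank : (rank + 1) * per_rank]
        # 2) Fetch those items via __getitem__, 3) collate into tensors,
        # 4) move them to CUDA for training.
        samples = [dataset[i] for i in local_indices]
        batch = default_collate(samples)
        yield [t.to(device) for t in batch]
```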
October 3, 2025 at 11:24 AM
On November 1st, the second cohort of Scratch to Scale kicks off! Come learn the major tricks and algorithms used when single-GPU training hits failure points. You'll also get access to the prior cohort's guest speakers! (And get in for 35% off) maven.com/walk-with-c...
Scratch to Scale: Large-Scale Training in the Modern World by Zachary Mueller on Maven
Learn the techniques used today to take your model training from Colab to Clusters
maven.com
October 2, 2025 at 11:05 AM
The main benefit of doing things this way is that it's extremely fast (no communication is involved, since each process can figure out its own slice of the data); however, it's more RAM-heavy, since the entire dataset needs to live in memory on every process.
October 2, 2025 at 11:05 AM
Afterwards, all that's needed is to move the data to the device and you're ready to perform distributed data parallelism!
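Something like this, roughly (assuming torch.distributed is already initialized, one GPU per rank, and a hypothetical `local_batches` iterable that yields this rank's `(inputs, targets)` chunks):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(model: torch.nn.Module, local_batches):
    rank = dist.get_rank()
    device = torch.device(f"cuda:{rank}")
    # Wrap the model once; DDP all-reduces gradients during backward().
    model = DDP(model.to(device), device_ids=[rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for inputs, targets in local_batches:
        # Move this rank's slice of the data onto its device.
        inputs, targets = inputs.to(device), targets.to(device)
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```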
October 2, 2025 at 11:05 AM
For this to work, every process holds the entire dataset (so it uses more RAM, but RAM is cheap). During the dataset's __iter__() call, each process grabs a pre-determined "chunk" of the global batch, fetching its items through __getitem__().
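In code, a rough sketch could look like this (a hypothetical class, not an Accelerate API; every rank constructs it over the same full dataset):

```python
import torch
from torch.utils.data import IterableDataset

class ShardedIterable(IterableDataset):
    """Each rank yields only its pre-determined chunk of every global batch."""

    def __init__(self, dataset, global_batch_size, rank, world_size):
        self.dataset = dataset                  # the full dataset lives on every process
        self.global_batch_size = global_batch_size
        self.rank, self.world_size = rank, world_size
        self.per_rank = global_batch_size // world_size

    def __iter__(self):
        num_batches = len(self.dataset) // self.global_batch_size
        for b in range(num_batches):
            # No communication: each rank computes its own slice of indices.
            start = b * self.global_batch_size + self.rank * self.per_rank
            for i in range(start, start + self.per_rank):
                yield self.dataset[i]           # fetched through __getitem__
```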
October 2, 2025 at 11:05 AM
Reach out to me if you need financial assistance/cost is too much. Happy to try and work within budgets (or company education stipends): scratchtoscale@gmail.com
June 22, 2025 at 11:19 PM
I specifically mean my gym tracking app, STRONG. I’ve used it for 5 years and have a lifetime membership, and they decided the app needs internet access to verify premium status before you can add warmup sets. Which is a bit insane. (It is a premium feature, but there are better ways to do this.)
March 11, 2025 at 11:49 AM
And I’ll try wearing it for more than just sleeping :)
March 8, 2025 at 6:05 PM
(As a result I no longer wear mine and track my sleep using other methods that don’t have that psychological effect)
March 8, 2025 at 5:37 PM
As someone who used Oura for 2 years, I can say it got to the point where my sleep depended on the Oura score, rather than the score providing insight. So be wary of that long-term; otherwise, full agree 💯
March 8, 2025 at 5:37 PM