Matthew Leavitt (@leavittron.bsky.social)
Chief Science Officer, Co-Founder @datologyai
Former: Head of Data Research @MosaicML; FAIR.
views are from nowhere
Also, we're a small startup and we want to remain nimble. Roles here are fairly fluid, though there are specific areas of strength we're trying to hire for, hence the distinct job postings.
November 25, 2024 at 7:40 PM
I know you're not trying to call me out, but happy to give my thoughts: Many research orgs have a "research vs. eng" divide that comes along w/ baggage: hierarchies, expectations about duties, etc. We don't want that here. Nobody is too good to touch code or insufficiently credentialed to do science
November 25, 2024 at 7:40 PM
Huge shoutout to @agcrnz.bsky.social @alvin-d.bsky.social @pratyushmaini.bsky.social and Mo Razzak for leading this work. You did an amazing job! Stay tuned for more announcements from us. We’ll have a booth at NeurIPS, come say hi!
November 25, 2024 at 5:49 PM
If you’re interested in pushing the bounds of what’s possible with data curation, we’re also looking for talented Members of Technical Staff who have experience doing data research, translating science into products, and building scalable data products
jobs.ashbyhq.com/DatologyAI
November 25, 2024 at 5:49 PM
We’re starting to work with early customers: if you’re an enterprise AI company interested in training multimodal and/or text models faster, better, or smaller, get in touch!
November 25, 2024 at 5:49 PM
If you want more details, here’s the full technical deep-dive!
www.datologyai.com/post/technic...
Technical Deep-Dive: Curating Our Way to a State-of-the-Art Text Dataset
Our data curation pipeline to obtain substantial improvements in LLM quality, training speed, and inference efficiency.
November 25, 2024 at 5:49 PM
Overall I’m thrilled with these results. And I’m so very proud of our team for the amazing work that got us here. But the results aren’t the goal. The results are the first proof that it’s possible to build a product for foundation-scale data curation.
November 25, 2024 at 5:49 PM
We can also use our data curation to train better, smaller models that save on inference: a 1.3B model trained on 180B tokens of our data has better 5-shot performance than every 2.7B model we trained on public datasets, on a token-matched (NOT FLOPs-matched) basis. FLOPs-matched is even better (rough FLOPs sketch after this post)
November 25, 2024 at 5:49 PM
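A rough sketch of the math behind the token-matched vs. FLOPs-matched distinction, assuming the common ~6 x params x tokens estimate for dense transformer training compute (an illustrative approximation on my part, not necessarily the exact accounting used here):

```python
# Back-of-envelope training-compute comparison using the common
# FLOPs ~= 6 * params * tokens approximation for dense transformers.
# This is an illustrative assumption, not DatologyAI's exact accounting.

def approx_train_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

small = approx_train_flops(1.3e9, 180e9)  # 1.3B model, 180B tokens
large = approx_train_flops(2.7e9, 180e9)  # 2.7B model, 180B tokens

print(f"1.3B @ 180B tokens: {small:.3e} FLOPs")
print(f"2.7B @ 180B tokens: {large:.3e} FLOPs")
print(f"2.7B / 1.3B compute ratio: {large / small:.2f}x")
# ~2.08x: at equal token budgets the 1.3B model uses less than half the
# training compute, so a FLOPs-matched comparison would let it train on
# roughly 2x the tokens -- which is why FLOPs-matched looks even better.
```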
Our curated data also allows us to train faster! We save 86.9% on compute (a 7.7x speedup) training a 2.7B model on our data to reach the same avg 5-shot accuracy as training on RPJv1 for 180B tokens, and save 70.1% on compute (a 3.4x speedup) to reach the same accuracy as DCLM (quick arithmetic check after this post)
November 25, 2024 at 5:49 PM
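For anyone double-checking how the percent-compute-saved and speedup figures relate: saving a fraction s of compute corresponds to a 1/(1 - s) speedup to reach the same accuracy. A minimal sanity-check sketch, reusing only the numbers quoted above (small mismatches are just rounding):

```python
# Relationship between "% compute saved" and "x speedup" to reach the same
# accuracy: saved = 1 - 1/speedup, and speedup = 1/(1 - saved).
# Figures below are only the numbers quoted in the post, re-derived as a check.

def saved_from_speedup(speedup: float) -> float:
    return 1.0 - 1.0 / speedup

def speedup_from_saved(saved: float) -> float:
    return 1.0 / (1.0 - saved)

print(f"{saved_from_speedup(7.7):.1%} saved at a 7.7x speedup")    # ~87.0% (quoted: 86.9%)
print(f"{saved_from_speedup(3.4):.1%} saved at a 3.4x speedup")    # ~70.6% (quoted: 70.1%)
print(f"{speedup_from_saved(0.869):.2f}x speedup at 86.9% saved")  # ~7.63x
print(f"{speedup_from_saved(0.701):.2f}x speedup at 70.1% saved")  # ~3.34x
```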
This is noteworthy because DCLM and FW-Edu were curated from pools that are 10x (DCLM) and 11.5x (FW-Edu) the size of the final curated dataset. Our 180B-token dataset is curated from a pool of only 540B tokens, just 3x. So we probably have a lot of room for improvement when curating from larger pools!
November 25, 2024 at 5:49 PM
Interestingly, we also find that starting with a larger dataset to curate yields a much better final dataset.
November 25, 2024 at 5:49 PM
Our improved model quality is general: it doesn't come from outsize gains on a small number of tasks. We tie or surpass even the strongest baseline, DCLM, on two thirds or more of the evaluations, and match or outperform the other baselines on nearly all evals.
November 25, 2024 at 5:49 PM
With our curated data we were able to train better models: 8.4 percentage-point (pp) mean 5-shot improvement over RPJv1, +6.1pp vs FineWeb-Edu (FW-Edu), and +4.4pp vs DCLM. This is no small feat: FineWeb, FineWeb-Edu, and DCLM are VERY high-quality, meticulously-curated datasets
November 25, 2024 at 5:49 PM
Then we trained standard (MPT-style) transformers up to 2.7B parameters for token budgets up to 180B on our curated RPJv1 and other public pretraining corpora, and evaluated the models on a suite of 15 standard language model evals
November 25, 2024 at 5:49 PM
Why did we choose to curate RPJv1? Because it’s well-established, contains diverse content across a number of domains, and already has a moderate degree of curation applied to it
November 25, 2024 at 5:49 PM
Our data curation pipeline is a scalable, productionized system that integrates a suite of bleeding-edge algorithms to curate data in the quantity necessary for foundation model pretraining. And with it, we developed a single recipe that we used to curate RPJv1
November 25, 2024 at 5:49 PM
tl;dr: We transformed RedPajama-v1 (RPJv1) into a dataset that outperforms FineWeb-Edu and DCLM, two of the strongest publicly-available text pretraining datasets. Let me walk you through how we did it
November 25, 2024 at 5:49 PM
Some of you may have seen our recent announcement of our state-of-the-art data curation pipeline and the fantastic results we got applying it to multimodal data for training CLIP models. Well it works pretty well for text, too!
bsky.app/profile/leav...
🧵We’ve spent the last few months at @datologyai.bsky.social building a state-of-the-art data curation pipeline and I’m SO excited to share our first results: we curated image-text pretraining data and massively improved CLIP model quality, training speed, and inference efficiency 🔥🔥🔥
November 25, 2024 at 5:49 PM
HUGE shoutout to Haoli Yin, Amro Abbas, and (Evil) Josh Wills for leading this work. You did an amazing job! Oh, and stay tuned for more announcements from us. Our curation pipeline works for text, too 😉
November 14, 2024 at 5:16 PM
If you’re interested in pushing the bounds of what’s possible with data curation, we’re also looking for talented Members of Technical Staff who have experience doing data research, translating science into products, and building scalable data products: jobs.ashbyhq.com/DatologyAI
November 14, 2024 at 5:16 PM
We’re starting to work with early customers: if you’re an enterprise AI company interested in training multimodal and/or text models faster, better, or smaller, get in touch! forms.wix.com/f/7257903640...
Join our waitlist - DatologyAI
We're still building! By submitting this form, your company will join our waitlist to get early access to Datology. When your company has been selected, we will reach out.
November 14, 2024 at 5:16 PM
And if you’d prefer a quick overview, we have one of those, too: www.datologyai.com/post/datolog...
DatologyAI’s Image-Text Data Curation: Train Better, Faster, Smaller
What if you could save up to 98% on compute costs? Read on to find out how DatologyAI’s deep learning data curation tools make this possible.
November 14, 2024 at 5:16 PM