Jonas (@jonasgeiping.bsky.social) · ML research, safety & efficiency
Finally, this project was made possible by the INCITE program of the DoE, who sponsored our compute on the OLCF Frontier supercomputer. Without them, we could not have done open research at this scale!
February 10, 2025 at 4:48 PM
Thank you to all of my collaborators, @sean-mcleish.bsky.social, Neel Jain, @jwkirchenbauer.bsky.social, Siddharth Singh, Brian Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, and especially Tom Goldstein, for making this happen.

This really was a long project for us, which we first started back in Summer '23!
February 10, 2025 at 4:48 PM
You can find the model here: huggingface.co/tomg-group-u...
The code here: github.com/seal-rg/recu...
and the tech report here: www.arxiv.org/abs/2502.05171
tomg-group-umd/huginn-0125 · Hugging Face
February 10, 2025 at 4:48 PM
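For readers who want to try the checkpoint linked above, here is a minimal sketch of loading it with Hugging Face transformers. The `num_steps` argument controlling the recurrence depth is an assumption about the custom generation interface; check the model card and repo for the exact name.

```python
# Minimal sketch: load the released checkpoint and sample with a chosen
# recurrence depth. `num_steps` is an assumed keyword of the custom
# generation code; the released interface may name it differently.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomg-group-umd/huginn-0125"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.bfloat16
).to("cuda")

inputs = tokenizer("The capital of Westphalia is", return_tensors="pt").to("cuda")
# More recurrent steps = more test-time compute spent "thinking" in latent space.
outputs = model.generate(**inputs, max_new_tokens=32, num_steps=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```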
What is it doing when it thinks longer?

We find evidence for pretty advanced structures in latent space, such as a tendency to use orbits (see picture) to compute arithmetic tasks and to reason about sentence structure.

So, this model really is rotating shapes in a high-dimensional space?
February 10, 2025 at 4:48 PM
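As an illustration of the kind of analysis behind a picture like that, here is a hypothetical sketch that projects a token's latent states across recurrence steps onto two principal components. It assumes you have already collected the per-step hidden states; the released code is not guaranteed to expose them exactly this way.

```python
# Hypothetical sketch: visualize one token's latent trajectory across
# recurrence steps in a 2D PCA projection, where orbit-like structure shows
# up as loops. Assumes `latent_states` is a list of per-step hidden states,
# each of shape (hidden_dim,), collected from the recurrent block.
import numpy as np
import matplotlib.pyplot as plt

def plot_latent_trajectory(latent_states):
    X = np.stack(latent_states)              # (num_steps, hidden_dim)
    X = X - X.mean(axis=0, keepdims=True)    # center before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = principal directions
    proj = X @ Vt[:2].T                      # (num_steps, 2)
    plt.plot(proj[:, 0], proj[:, 1], marker="o")
    plt.xlabel("PC 1")
    plt.ylabel("PC 2")
    plt.title("Latent state across recurrence steps")
    plt.show()
```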
What is pretty exciting is that, simply by training with our architecture and objective, a separation emerges at scale: the model's latents converge more quickly for some tokens in a sentence than for others.

In this figure, the model takes more time to think about the key parts of the text:
February 10, 2025 at 4:48 PM
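A rough sketch of how one might quantify this per-token convergence, assuming access to the latent state of every token at every recurrence step; the tolerance and the exact convergence criterion here are illustrative choices, not the paper's definition.

```python
# Hypothetical sketch: for each token position, find the first recurrence
# step at which the latent update becomes small. Assumes `states` has shape
# (num_steps, seq_len, hidden_dim).
import numpy as np

def steps_to_converge(states, tol=1e-2):
    # Relative size of the update between consecutive steps, per token.
    diffs = np.linalg.norm(states[1:] - states[:-1], axis=-1)  # (num_steps-1, seq_len)
    norms = np.linalg.norm(states[1:], axis=-1) + 1e-8
    rel = diffs / norms
    converged = rel < tol
    # First step below `tol`; tokens that never converge get the max step count.
    first = np.where(converged.any(axis=0), converged.argmax(axis=0), rel.shape[0])
    return first  # higher = the model "thinks" longer about that token
```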
We had enough compute for only a single shot to train at scale (and that is the model we've published).

On reasoning tasks like GSM8k, the model is pretty competitive with other pretrained open-source models, even though we have done no mid- or post-training...
February 10, 2025 at 4:48 PM
First, the model (3.5B params), even though trained semi-optimally and for only 800B tokens, is competitive with 7B open-source models trained for 2-3T tokens (OLMo-v1) - but we can't beat the new OLMo data recipe (yet).

This is pretty exciting for our first large-scale run.
February 10, 2025 at 4:48 PM
The tech report has something for everyone: a new model architecture, optimizer details, AMD training (we trained on 4096 AMD GPUs), our data pipeline, and lots of analysis!

Here are a few of my highlights:
February 10, 2025 at 4:48 PM
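To make the "new model architecture" part concrete, here is a rough PyTorch sketch of the depth-recurrent idea described in the report: a prelude embeds the input, a core block is iterated on a latent state a variable number of times, and a coda produces logits. Module names, sizes, and the random state initialization below are illustrative assumptions, not the released implementation.

```python
# Rough sketch (not the released code) of a depth-recurrent language model:
# the number of core iterations is a test-time compute knob.
import torch
import torch.nn as nn

class DepthRecurrentLM(nn.Module):
    def __init__(self, vocab_size, d_model):
        super().__init__()
        self.prelude = nn.Embedding(vocab_size, d_model)   # stand-in for prelude layers
        self.core = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.adapter = nn.Linear(2 * d_model, d_model)      # merges latent state with embedded input
        self.coda = nn.Linear(d_model, vocab_size)           # stand-in for coda layers

    def forward(self, input_ids, num_steps=16):
        e = self.prelude(input_ids)                          # (batch, seq, d_model)
        s = torch.randn_like(e)                              # random initial latent state
        for _ in range(num_steps):                           # iterate the core block in depth
            s = self.core(self.adapter(torch.cat([s, e], dim=-1)))
        return self.coda(s)                                  # logits
```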