Alex Makelov
@amakelov.bsky.social
Mechanistic interpretability
Creator of https://github.com/amakelov/mandala
prev. Harvard/MIT
machine learning, theoretical computer science, competition math.
Still waiting for someone to create Uqbar and Tlön using LLMs
December 28, 2024 at 2:08 PM
I also want 1.! It's a great way to get around the annoying thing where LLMs use faulty reasoning to arrive at the correct answer (which I've seen happen many times in math problems). Identifying the wrong proof step gives us a potentially harder benchmark that's still easy to evaluate!
December 23, 2024 at 3:13 PM
Despite failing to give a complete proof, I'd count this as a major improvement over other models' attempts. Most importantly, the model engaged directly with the key steps necessary for a full proof. I essentially consider this problem "solved by LLMs" now!
December 5, 2024 at 9:20 PM
In reality, you need to pick at least 18,003 instead of 18,000 (lol), and a precise calculation shows the average number of representations is at least (18003 choose 3) / (3*18003^2) = 1000.000006... You could go up to 18,257 before this fails.
December 5, 2024 at 9:20 PM
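A quick sanity check of the numbers in the post above (a minimal Python sketch, assuming the counting argument described in the thread: C(n, 3) unordered triples spread over at most 3*n^2 possible sums):

```python
from math import comb

def avg_representations(n: int) -> float:
    # Lower bound on the average number of representations:
    # C(n, 3) unordered triples spread over at most 3 * n^2 possible sums.
    return comb(n, 3) / (3 * n**2)

print(avg_representations(18_002))   # ~999.94 -- just short of 1000
print(avg_representations(18_003))   # ~1000.000006
print(3 * 18_257**2 < 10**9)         # True  -- sums of squares still stay below 10^9
print(3 * 18_258**2 < 10**9)         # False -- the bound breaks past 18,257
```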
Finally, it realizes and tries to fix the off-by-a-factor-of-6 issue. It writes a little essay giving what mathematicians would call a "moral" argument for why everything is OK. Pretty close!
December 5, 2024 at 9:20 PM
Then, it counts these triples. Unfortunately, it counts the number of ordered triples, which overestimates the number of unordered triples (what we care about) by about a factor of 6. Then it proceeds to the key step, lower-bounding the average number of representations:
December 5, 2024 at 9:19 PM
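To see the "about a factor of 6" concretely (a small sketch, not part of the original thread): there are n^3 ordered triples from {1, ..., n} but only C(n, 3) unordered triples of distinct values, and the ratio tends to 6 as n grows.

```python
from math import comb

# Ratio of ordered triples (n^3) to unordered triples of distinct values (C(n, 3)).
for n in (20, 200, 18_000):
    print(n, n**3 / comb(n, 3))   # ~7.02, ~6.09, ~6.001 -- approaches 6
```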
So how does o1 do? Well, still not perfect, but it gets the overall steps correct! It goes for a direct pigeonhole argument. It eventually figures out that if we look at triples of numbers at most 18,000 each, the sum of their squares is always less than 1,000,000,000:
December 5, 2024 at 9:18 PM
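A minimal numeric check of this setup (a sketch added here, not from the original posts): with three positive integers each at most 18,000, the sum of squares is at most 3 * 18,000^2 = 972,000,000 < 10^9, and the pigeonhole average comes out just under 1000, which is why the argument ends up needing 18,003 rather than 18,000.

```python
from math import comb

# Pigeonhole setup: unordered triples of positive integers, each at most 18_000.
N = 18_000
max_sum = 3 * N**2                 # largest possible sum of three squares
print(max_sum, max_sum < 10**9)    # 972000000 True

# Each of the C(N, 3) triples lands on one of at most max_sum possible sums,
# so some sum must be hit at least the average number of times:
print(comb(N, 3) / max_sum)        # ~999.83 -- just short of 1000, hence 18_003 instead
```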
Similarly, o1-mini (and o1-preview, from what I remember; it's no longer available in chat) recalls the asymptotic statement and spends more time talking about it, but also proves nothing about the constant.
December 5, 2024 at 9:18 PM
So how do LLMs do on this problem? 4o spits out a bunch of related facts and confidently asserts the (correct) answer without justification. Importantly, it states that the number of representations grows as sqrt(n) asymptotically - which is true, but the constant is decisive.
December 5, 2024 at 9:17 PM
Also, the numbers superficially pattern-match to relationships (10^9 = 1000^3, there are three numbers, etc.) that are irrelevant to the problem. Finally, the numbers are deliberately chosen to make the simple solution work by only a tiny margin that requires a precise calculation.
December 5, 2024 at 9:16 PM
The problem superficially pattern-matches to some heavy-ish tools, like Pythagorean triples or Legendre's three-square theorem; however, the only solution I'm aware of is actually quite simple and uses no "theory".
December 5, 2024 at 9:16 PM
The presence of the cat correlates with mech interp research getting done
November 25, 2024 at 1:00 PM