Tim Franzmeyer
@timlive.bsky.social
Machine Learning PhD student @UniofOxford interested in reinforcement learning, multi-agent systems, and LLMs. Previously @GoogleDeepMind, @MetaAI and @ETH.
🚨 One model, high correctness:

With low-threshold tuning, we take Llama3-70B:

➡️ from 51% → 87% correctness
➡️ while retaining 53% of the original completeness (metrics sketched below)
June 6, 2025 at 8:22 AM
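A minimal sketch of one plausible way to compute the two numbers above (my reading, not necessarily the paper's exact definitions): correctness as the share of generated fragments judged correct, completeness as the share of reference facts the answer still covers.

```python
from typing import List

def correctness(fragment_is_correct: List[bool]) -> float:
    """Fraction of generated fragments judged correct."""
    return sum(fragment_is_correct) / len(fragment_is_correct) if fragment_is_correct else 1.0

def completeness(covered_facts: int, total_facts: int) -> float:
    """Fraction of reference facts the (possibly truncated) answer still covers."""
    return covered_facts / total_facts if total_facts else 1.0

# Toy numbers: a HALT-tuned answer keeps 8 of 15 reference facts, and all 8
# kept fragments are judged correct -> high correctness, reduced completeness.
print(correctness([True] * 8))   # 1.0
print(completeness(8, 15))       # ~0.53
```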
⚖️ HALT allows you to trade off completeness and correctness

We introduce a threshold that tunes how eagerly the model should respond (toy sketch after this post):

Low threshold = more reliable answers 🔒 (Left box)
High threshold = more detailed answers 📝 (Right box)
June 6, 2025 at 8:22 AM
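A rough sketch of the direction of this trade-off, under one plausible reading of the threshold (an assumption, not the paper's exact mechanism): treat it as the maximum per-fragment error risk the model tolerates before stopping with "Unsure from here".

```python
# Assumed reading: each fragment carries an estimated error risk; a fragment is
# kept only if that risk is at or below the threshold. Low threshold -> only
# near-certain fragments (reliable, terse); high threshold -> riskier fragments
# survive (detailed, less reliable).

from typing import List, Tuple

def apply_threshold(
    fragments: List[Tuple[str, float]],  # (text, estimated error risk in [0, 1])
    threshold: float,
) -> str:
    """Keep fragments up to the first one whose risk exceeds the threshold."""
    kept = []
    for text, risk in fragments:
        if risk > threshold:
            kept.append("Unsure from here.")
            break
        kept.append(text)
    return " ".join(kept)

# Toy fragments with made-up risk estimates.
fragments = [
    ("The Eiffel Tower is in Paris.", 0.02),
    ("It was completed in 1889.", 0.10),
    ("It is 330 metres tall.", 0.35),
]
print(apply_threshold(fragments, threshold=0.05))  # reliable but short
print(apply_threshold(fragments, threshold=0.50))  # detailed, riskier
```

With threshold=0.05 only the near-certain first fragment survives before the marker; with threshold=0.50 the full, riskier answer is kept.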
🛠️ Our approach: Adjust finetuning responses to match the capabilities of the LLM

1️⃣ Break pretrained LLM responses into factual fragments
2️⃣ Use ground truth to flag incorrect fragments
3️⃣ Modify finetuning responses by removing incorrect fragments or replacing them with “Unsure from here” 🚧 (sketch after this post)
June 6, 2025 at 8:22 AM
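A minimal sketch of the three-step recipe above (not the authors' code): the fragment splitter and the correctness check are stand-ins, here a naive sentence splitter and a caller-supplied judge such as exact match or an LLM judge.

```python
from typing import Callable, List

UNSURE_MARKER = "Unsure from here."

def split_into_fragments(response: str) -> List[str]:
    """Naive stand-in fragment splitter: one sentence per fragment."""
    return [s.strip() + "." for s in response.split(".") if s.strip()]

def build_halt_target(
    response: str,
    ground_truth: str,
    is_correct: Callable[[str, str], bool],
    mode: str = "truncate",   # "truncate": cut at first error; "remove": drop errors
) -> str:
    """Turn a pretrained model's response into a HALT-style finetuning target."""
    kept: List[str] = []
    for frag in split_into_fragments(response):
        if is_correct(frag, ground_truth):
            kept.append(frag)
        elif mode == "truncate":
            # Replace everything from the first error onward with the marker.
            kept.append(UNSURE_MARKER)
            break
        # mode == "remove": silently drop the incorrect fragment.
    return " ".join(kept)

# Toy usage with a trivially simple correctness judge (substring match).
if __name__ == "__main__":
    response = "Oxford is in England. It was founded in 1897. It has 39 colleges."
    truth = "Oxford is in England. It has 39 colleges. Teaching began by 1096."
    print(build_halt_target(response, truth, lambda frag, gt: frag in gt))
    # -> "Oxford is in England. Unsure from here."
```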
What if LLMs knew when to stop? 🚧

HALT finetuning teaches LLMs to only generate content they’re confident is correct.

🔍 Insight: Post-training must be adjusted to the model’s capabilities.
⚖️ Tunable trade-off: Higher correctness 🔒 vs. More completeness 📝

🧵
June 6, 2025 at 8:22 AM