Tim Franzmeyer
@timlive.bsky.social
Machine Learning PhD student @UniofOxford interested in reinforcement learning, multi-agent systems, and LLMs. Previously @GoogleDeepMind, @MetaAI and @ETH.
📄 Full paper: arxiv.org/abs/2506.04051

With amazing collaborators:
Archie Sravankumar
Lijuan Liu
Yuning Mao
Rui Hou
Sinong Wang
@jfoerst.bsky.social
Madian Khabsa
@lukezettlemoyer.bsky.social
High Accuracy, Less Talk (HALT): Reliable LLMs through Capability-Aligned Finetuning
Large Language Models (LLMs) currently respond to every prompt. However, they can produce incorrect answers when they lack knowledge or capability -- a problem known as hallucination. We instead propo...
June 6, 2025 at 8:22 AM
🚨 One model, high correctness:

With low-threshold tuning, we take Llama3-70B from:

➡️ 51% → 87% correctness
➡️ while retaining 53% of the original completeness
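For intuition only, a minimal sketch of how numbers like these could be computed. The definitions below (response-level correctness, fragment-level completeness) are illustrative assumptions, not necessarily the paper's exact metrics:

```python
# Toy metric sketch (assumed definitions): a response counts as correct if
# every factual fragment in it is right; completeness is the share of the
# original model's fragments that the tuned model still produces.

def correctness(responses: list[list[bool]]) -> float:
    """responses[i] holds per-fragment correctness labels for response i."""
    return sum(all(flags) for flags in responses) / len(responses)

def completeness(tuned_fragment_counts: list[int],
                 original_fragment_counts: list[int]) -> float:
    """Fraction of originally produced fragments that are retained."""
    return sum(tuned_fragment_counts) / sum(original_fragment_counts)
```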
⚖️ HALT allows you to trade off completeness and correctness

We introduce a threshold that tunes how eagerly the model should respond:

Low threshold = more reliable answers 🔒 (Left box)
High threshold = more detailed answers 📝 (Right box)
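One way to picture the threshold (a toy interpretation, not necessarily the paper's exact mechanism): treat it as the error risk tolerated per fragment when deciding how much of a response to keep.

```python
def respond_up_to_threshold(fragments, est_error_prob, threshold):
    """Keep leading fragments whose estimated error probability is at or
    below `threshold`; stop at the first risky fragment.

    `est_error_prob[i]` is an assumed per-fragment estimate of how likely
    fragment i is wrong (e.g. from sampling the pretrained model)."""
    kept = []
    for fragment, p_err in zip(fragments, est_error_prob):
        if p_err > threshold:            # too risky for this threshold
            kept.append("Unsure from here 🚧")
            break
        kept.append(fragment)
    return " ".join(kept)

fragments = ["The function sorts the list in-place.",
             "Its average complexity is O(n log n).",
             "It was added in version 3.11."]
est_error = [0.02, 0.10, 0.55]

print(respond_up_to_threshold(fragments, est_error, threshold=0.05))  # 🔒 reliable
print(respond_up_to_threshold(fragments, est_error, threshold=0.70))  # 📝 detailed
```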
🛠️ Our approach: Adjust finetuning responses to match the capabilities of the LLM

1️⃣ Break pretrained LLM responses into factual fragments
2️⃣ Use ground truth to flag incorrect fragments
3️⃣ Modify finetuning responses by removing or replacing errors with “Unsure from here” 🚧
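A minimal sketch of this data-construction step, assuming hypothetical helpers `split_into_fragments` and `is_correct` in place of whatever fragmenting and ground-truth checking is actually used (here errors are truncated and replaced with the uncertainty marker, one of the two options above):

```python
UNSURE_MARKER = "Unsure from here 🚧"

def build_capability_aligned_target(pretrained_response: str,
                                    ground_truth: str,
                                    split_into_fragments,
                                    is_correct) -> str:
    """Turn a pretrained model's own response into a finetuning target that
    matches its capabilities: keep fragments until the first wrong one, then
    signal uncertainty instead of continuing."""
    # 1) Break the pretrained LLM response into factual fragments.
    fragments = split_into_fragments(pretrained_response)

    # 2) Flag each fragment against the ground truth.
    flags = [is_correct(fragment, ground_truth) for fragment in fragments]

    # 3) Build the finetuning response: drop everything from the first
    #    incorrect fragment onward and replace it with the marker.
    target = []
    for fragment, ok in zip(fragments, flags):
        if not ok:
            target.append(UNSURE_MARKER)
            break
        target.append(fragment)
    return " ".join(target)
```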
🧠 Standard LLMs always respond — even when unsure.

This leads to partially incorrect outputs in critical domains like Coding, Math, Medicine, and QA.

Why? Standard finetuning ignores what the pretrained model actually knows and pushes it to always complete every prompt.