@adamlsteinl.bsky.social
Crucially, InstABoost achieves this control without degrading text quality. While other latent steering methods can cause generation fluency to drop sharply as you increase their strength, InstABoost maintains coherence while steering towards the instruction.
(6/7)
July 10, 2025 at 6:22 PM
Across 15 tasks, InstABoost either outperforms or matches the best steering method, whether prompt- or latent-based. On tasks where the two families perform equivalently, InstABoost can even combine their strengths and outperform both categories of methods.
(5/7)
July 10, 2025 at 6:22 PM
InstABoost steers an LLM in attention space, bridging the performance gap between latent- and prompt-based steering. It can be implemented in ~3 lines of code that simply increase the attention weights on an in-context instruction (see the sketch after this post).
(3/7)
July 10, 2025 at 6:22 PM
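A minimal sketch of what such an implementation could look like, assuming post-softmax attention weights of shape (batch, heads, q_len, k_len) and a known slice of instruction key positions. The function name, instr_slice, and alpha are illustrative, not taken from the released code:

import torch

def boost_instruction_attention(attn_weights: torch.Tensor,
                                instr_slice: slice,
                                alpha: float = 5.0) -> torch.Tensor:
    """Multiply attention to instruction tokens by alpha, then renormalize.

    attn_weights: post-softmax weights, shape (batch, heads, q_len, k_len).
    instr_slice:  key positions covered by the in-context instruction.
    """
    boosted = attn_weights.clone()
    boosted[..., instr_slice] *= alpha                  # upweight instruction tokens
    return boosted / boosted.sum(dim=-1, keepdim=True)  # rows sum to 1 again

In practice this would run inside each attention layer, e.g. via a hook that intercepts the attention probabilities before they are applied to the value vectors.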
Existing steering methods are either prompt-based or latent-based (modifying the hidden state), but which is better? We show the answer depends on the task: the steering task landscape includes tasks that are latent-optimal, tasks that are instruction-optimal, and tasks where the two are equivalent. (A sketch of the two families follows this post.)
(2/7)
July 10, 2025 at 6:22 PM
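For concreteness, a hedged sketch of the two families. The steering vector and scale are the usual ingredients of activation-addition-style latent steering; the names here are illustrative, not from any specific method's code:

import torch

def latent_steering(hidden: torch.Tensor, steer_vec: torch.Tensor,
                    alpha: float = 1.0) -> torch.Tensor:
    # Latent-based steering (activation-addition style): shift the hidden
    # state at some layer along a steering direction.
    return hidden + alpha * steer_vec

def prompt_steering(user_prompt: str, instruction: str) -> str:
    # Prompt-based steering: express the desired behavior in natural
    # language and prepend it to the input.
    return f"{instruction}\n\n{user_prompt}"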
Read our full position paper for in-depth experiments and insights:
🔗 Paper: arxiv.org/abs/2505.24874
💻 Code: github.com/adaminsky/ne...
Thanks to my collaborators Aaditya Naik, Neelay Velingker, Mayur Naik, and @profericwong.bsky.social.
(9/9)
The Road to Generalizable Neuro-Symbolic Learning Should be Paved with Foundation Models
arxiv.org
June 13, 2025 at 8:30 PM
As foundation models continue to scale, we argue it’s time to move beyond enforcing rigid symbolic structure in NeSy during training and tackle the exciting problem of inferring which symbols and which program are needed for each task.
(8/9)
June 13, 2025 at 8:30 PM
On the other hand, NeSy prompting provides two key benefits atop foundation models:

Reliability: A symbolic program enables accurate, stable, and trustworthy results.

Interpretability: Explicit symbols provide a clear, debuggable window into the model's "understanding."
(7/9)
June 13, 2025 at 8:30 PM
3️⃣ The Program Pitfall: Training neural nets in conjunction with a fixed program leads to "hallucinated" symbols: the network reaches the correct answer for the wrong reasons, akin to reasoning shortcuts (e.g., in MNIST addition, misreading 3+5 as 4+4 still yields the correct sum of 8).
(6/9)
June 13, 2025 at 8:30 PM
2️⃣ The Data Pitfall: Training on small, specialized datasets encourages overfitting.
(5/9)
June 13, 2025 at 8:30 PM
1️⃣ The Compute Pitfall: Training specialized NeSy models has diminishing returns. As foundation models scale, the gap between NeSy training and NeSy prompting disappears, making dedicated training a costly detour.
(4/9)
June 13, 2025 at 8:30 PM
We compare traditional NeSy systems (trained end-to-end) with what we call neuro-symbolic prompting, i.e., foundation models performing perception via prompting, connected to a symbolic program, and find that the NeSy training process itself introduces three key pitfalls. (A minimal sketch of NeSy prompting follows this post.)
(3/9)
June 13, 2025 at 8:30 PM
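As a concrete illustration of the NeSy-prompting setup, a minimal sketch on the classic MNIST-sum task. The perception stub and function names are hypothetical stand-ins, not the paper's actual pipeline:

def perceive_digit(image) -> int:
    """Neural perception via prompting: in practice, send the image to a
    vision-language model with a prompt like 'What digit is shown? Answer
    with a single digit.' Stubbed out here; this call is hypothetical."""
    raise NotImplementedError("replace with a foundation-model call")

def symbolic_program(digits: list[int]) -> int:
    # The symbolic component: an explicit, auditable program. For the
    # classic MNIST-sum task, it is just addition.
    return sum(digits)

def nesy_prompting(images) -> int:
    digits = [perceive_digit(img) for img in images]  # perception (neural)
    return symbolic_program(digits)                   # reasoning (symbolic)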
Neuro-symbolic learning combines neural nets + programs for efficient, interpretable AI. But NeSy training is challenging and brittle due to the symbolic component.
With foundation models succeeding via prompting alone, we argue it’s time to rethink NeSy system design.
(2/9)
June 13, 2025 at 8:30 PM