@adamlsteinl.bsky.social
Excited to share our new paper: "Instruction Following by Boosting Attention of Large Language Models"!

We introduce Instruction Attention Boosting (InstABoost), a simple yet powerful method to steer LLM behavior by making them pay more attention to instructions.
(🧵1/7)
July 10, 2025 at 6:22 PM
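A minimal sketch of the general idea described in the post: upweighting the attention mass an LLM places on instruction tokens and renormalizing. The function name, the boost factor, and where the boost is applied are illustrative assumptions, not the paper's exact formulation (see the paper for the actual method).

```python
# Illustrative sketch only: scale post-softmax attention on instruction-token
# positions by a factor > 1, then renormalize each row. The boost value and
# the choice to boost after softmax are assumptions for demonstration.
import numpy as np

def boosted_attention(scores: np.ndarray, instruction_mask: np.ndarray, boost: float = 2.0) -> np.ndarray:
    """scores: (queries, keys) raw attention scores; instruction_mask: (keys,) bool."""
    # standard softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # upweight columns corresponding to instruction tokens, then renormalize
    weights = np.where(instruction_mask, weights * boost, weights)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights

# toy usage: one query attending over 4 keys, the first two being instruction tokens
scores = np.array([[0.1, 0.2, 0.3, 0.4]])
mask = np.array([True, True, False, False])
print(boosted_attention(scores, mask))
```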
🧠 Foundation models are reshaping reasoning. Do we still need specialized neuro-symbolic (NeSy) training, or can clever prompting now suffice?
Our new position paper argues the road to generalizable NeSy should be paved with foundation models.
🔗 arxiv.org/abs/2505.24874
(🧵1/9)
June 13, 2025 at 8:30 PM