Reduan Achtibat
@reduanachtibat.bsky.social
PhD student @ Fraunhofer HHI. XAI and Interpretability for NLP & Vision.
Reposted by Reduan Achtibat
Had enough of the fake "sources" cited by ChatGPT? We have a solution in the form of low-cost causal citations for LLMs.

Go check this out! arxiv.org/abs/2505.15807

Thanks to my amazing co-authors
@pkhdipraja.bsky.social,
@reduanachtibat.bsky.social, Thomas Wiegand and Wojciech Samek!
May 28, 2025 at 2:50 PM
Reposted by Reduan Achtibat
ICL allows LLMs to adapt to new tasks and, at the same time, enables them to access external knowledge through RAG. How does the latter work?

TL;DR: we find that certain attention heads perform distinct, specialized operations on the input prompt for QA!

arxiv.org/abs/2505.15807

1/
The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation
Large language models are able to exploit in-context learning to access external knowledge beyond their training data through retrieval-augmentation. While promising, its inner workings remain unclear...
arxiv.org
May 26, 2025 at 4:01 PM