🛠️ Xplique library development team member.
This is my first big conference!
📅 Tuesday morning, 10:30–12:00, during Poster Session 2.
💬 If you're around, feel free to message me. I would be happy to connect, chat, or have a drink!
Ranking of the concept-space methods we compared:
- NMF (best, but requires positive embeddings)
- SAE (second, though possibly underestimated due to tuning complexities)
- ICA
- SVD & PCA (performed worse than the no-explanation and no-projection baselines)
6/7
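A quick illustration of why NMF's positivity constraint matters (a minimal sketch with scikit-learn, not the paper's code; the embeddings here are random stand-ins):

```python
import numpy as np
from sklearn.decomposition import NMF, PCA

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 32))  # hypothetical signed LLM embeddings

# PCA (and ICA/SVD) handle signed embeddings directly
pca_concepts = PCA(n_components=5).fit_transform(emb)

# NMF rejects signed input -- it needs non-negative features,
# e.g. post-ReLU activations
try:
    NMF(n_components=5, init="random", random_state=0).fit(emb)
except ValueError as err:
    print("NMF rejects signed input:", err)

# On non-negative data it yields non-negative concept coefficients
nmf_concepts = NMF(
    n_components=5, init="random", random_state=0, max_iter=500
).fit_transform(np.abs(emb))
print(pca_concepts.shape, nmf_concepts.shape)  # (100, 5) (100, 5)
```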
To assess an explanation’s utility, we measure how well a meta-predictor—human or LLM—can learn a model’s decision process from concept explanations and replicate predictions on new samples. We focus on LLM-based simulators for scalable experiments.
4/7
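The core score behind this setup is simple: how often the meta-predictor reproduces the explained model's outputs on held-out samples. A hypothetical sketch of such a fidelity metric (illustrative, not ConSim's exact implementation):

```python
import numpy as np

def simulation_accuracy(model_preds, simulator_preds):
    """Fraction of samples where the meta-predictor (human or LLM),
    having seen only the concept explanations, reproduces the
    explained model's prediction."""
    model_preds = np.asarray(model_preds)
    simulator_preds = np.asarray(simulator_preds)
    return float(np.mean(model_preds == simulator_preds))

# e.g. model labels on 4 held-out samples vs. the simulator's guesses
print(simulation_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```

A better explanation method should push this score higher than the no-explanation baseline.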
Concept-based methods are a 3-step process:
(1) Defining a concept space (projecting features onto interpretable dimensions);
(2) Interpreting concepts using textual or labeled descriptors;
(3) Assigning importance to concepts for predictions.
3/7
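The three steps can be sketched end-to-end with NMF as the concept space (a minimal illustration on random data; names like `head` are hypothetical, not ConSim's API):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
activations = np.abs(rng.normal(size=(200, 64)))  # non-negative features, e.g. post-ReLU

# (1) Concept space: factor activations into coefficients U and directions W
nmf = NMF(n_components=8, init="nndsvd", random_state=0, max_iter=500)
U = nmf.fit_transform(activations)  # (200, 8) sample-to-concept coefficients
W = nmf.components_                 # (8, 64) concept directions

# (2) Interpretation: describe each concept via its top-activating samples
top_samples = np.argsort(-U, axis=0)[:3].T  # 3 most activating samples per concept

# (3) Importance: score concepts for a prediction, e.g. through a linear
# head's weights over the concept coefficients (hypothetical classifier)
head = rng.normal(size=(8,))
importance = U[0] * head  # per-concept contribution for sample 0
print(U.shape, W.shape, top_samples.shape, importance.shape)
```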
Concept-based explanations still hold tremendous untapped potential! Our new evaluation framework aims to measure the quality of these concepts and how effectively they guide users toward interpreting model decisions.
2/7
How can we compare concept-based #XAI methods in #NLProc?
ConSim (arxiv.org/abs/2501.05855) provides the answer.
Read the thread to find out which method is the most interpretable! 🧵1/7
#XAI for #NLProc under the supervision of Prof. Nicholas Asher,
@philmuller.bsky.social, and @fannyjrd.bsky.social!
My project? Improving the transparency of LLMs through interactive, user-tailored explanations. 🚀