Antonin Poché @ ACL
@antoninpoche.bsky.social
PhD Student doing XAI for NLP at @ANITI_Toulouse, IRIT, and IRT Saint Exupery.

🛠️ Xplique library development team member.
Can it be biased by people answering randomly?

If, say, 1 person out of 5 answers randomly while the others guess correctly, wouldn't you obtain your blue curve?
September 22, 2025 at 6:21 PM
Want the full story behind the poster? 🎉
I broke down the methodology and results here 👇
🚀 Thrilled to share our new paper (the first of my PhD)!

How can we compare concept-based #XAI methods in #NLProc?

ConSim (arxiv.org/abs/2501.05855) provides the answer.

Read the thread to find out which method is the most interpretable! 🧵1/7
July 25, 2025 at 3:38 PM
All code is available on my GitHub: github.com/AntoninPoche/ConSim

🙏 Thanks a lot to my amazing co-authors
Alon Jacovi, Agustin Martin Picard, @victorboutin.bsky.social, and @fannyjrd.bsky.social. I learned a lot!

This PhD is part of ANITI.

Thanks for reading, and stay tuned for more XAI papers soon!🤩
7/7
January 31, 2025 at 2:56 PM
Final ranking of methods in our experiments:

- NMF (best, but requires positive embeddings; see the sketch after this list)
- SAE (second, though possibly underestimated due to tuning complexities)
- ICA
- SVD & PCA (performed worse than providing no explanation or no projection at all)
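
As a quick, illustrative sanity check of the NMF caveat above (this snippet is mine, not from the paper): scikit-learn refuses matrices with negative entries, which is why NMF only applies when the embeddings are non-negative.

```python
# Illustrative only: why NMF needs non-negative inputs.
import numpy as np
from sklearn.decomposition import NMF

X = np.random.default_rng(0).normal(size=(100, 32))  # contains negative values

try:
    NMF(n_components=5).fit(X)  # scikit-learn rejects negative entries
except ValueError as err:
    print(err)

NMF(n_components=5, max_iter=500).fit(np.abs(X))  # fine once X is non-negative
```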

6/7
January 31, 2025 at 2:54 PM
Experimental Setup

We compare different decompositions (PCA, ICA, SVD, NMF, SAE) for defining the concept space (Step 1 of concept-based methods) across 4 classification datasets and 5 models, using 3 different LLMs as meta-predictors (23,360 settings).
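
For a concrete picture of Step 1, here is a minimal sketch (not the paper's implementation) that fits four of the five decompositions on placeholder embeddings with scikit-learn. The SAE is omitted since it requires training a network, and all shapes and hyperparameters here are illustrative.

```python
# Hypothetical comparison loop over the decompositions named above.
import numpy as np
from sklearn.decomposition import PCA, FastICA, TruncatedSVD, NMF

# Placeholder embeddings; made non-negative so NMF also applies.
embeddings = np.abs(np.random.default_rng(0).normal(size=(500, 768)))

decompositions = {
    "PCA": PCA(n_components=10),
    "ICA": FastICA(n_components=10, max_iter=1000),
    "SVD": TruncatedSVD(n_components=10),
    "NMF": NMF(n_components=10, max_iter=500),  # needs non-negative inputs
}

for name, method in decompositions.items():
    concepts = method.fit_transform(embeddings)
    print(name, concepts.shape)  # each yields a (n_samples, 10) concept space
```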

5/7
January 31, 2025 at 2:54 PM
Automated Simulatability

To assess an explanation’s utility, we measure how well a meta-predictor—human or LLM—can learn a model’s decision process from concept explanations and replicate predictions on new samples. We focus on LLM-based simulators for scalable experiments.
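
To make the protocol concrete, here is a minimal, hypothetical scoring function: `meta_predict`, `samples`, and `model_predictions` are illustrative names (not the ConSim API), and the callable stands in for an LLM prompted with the concept explanations.

```python
# Sketch of automated simulatability: fraction of held-out samples where
# the meta-predictor reproduces the explained model's predictions.
from typing import Callable, Sequence

def simulatability_score(
    meta_predict: Callable[[str], str],  # e.g. an LLM given explanations + sample
    samples: Sequence[str],
    model_predictions: Sequence[str],    # the explained model's outputs
) -> float:
    hits = sum(meta_predict(x) == y for x, y in zip(samples, model_predictions))
    return hits / len(samples)

# Usage: a trivial stand-in meta-predictor that always answers "positive".
score = simulatability_score(
    lambda x: "positive",
    samples=["great movie", "awful plot"],
    model_predictions=["positive", "negative"],
)
print(score)  # 0.5
```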

4/7
January 31, 2025 at 2:54 PM
Concept-Based Explanation

Concept-based methods are a 3-step process (sketched in code below):
(1) Defining a concept space (projecting features onto interpretable dimensions);
(2) Interpreting concepts using textual or labeled descriptors;
(3) Assigning importance to concepts for predictions.
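
As a rough illustration (not the paper's code), the three steps might look like this with scikit-learn, using NMF for the concept space and a linear probe for importances; all data and names below are placeholders.

```python
# Hypothetical end-to-end sketch of the three-step concept pipeline.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
embeddings = rng.random((500, 768))    # non-negative activations, e.g. post-ReLU
labels = rng.integers(0, 2, size=500)  # binary classification labels
texts = [f"sample {i}" for i in range(500)]

# Step 1: define a concept space by projecting features onto 10 NMF components.
nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
concept_activations = nmf.fit_transform(embeddings)  # (n_samples, n_concepts)

# Step 2: interpret each concept via its top-activating samples.
for c in range(concept_activations.shape[1]):
    top = np.argsort(concept_activations[:, c])[::-1][:3]
    print(f"Concept {c}: {[texts[i] for i in top]}")

# Step 3: assign importance to concepts with a linear probe on the labels;
# larger absolute weights mean a concept drives the prediction more.
probe = LogisticRegression(max_iter=1000).fit(concept_activations, labels)
print("Concept importances:", np.round(np.abs(probe.coef_).ravel(), 3))
```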

3/7
January 31, 2025 at 2:53 PM
Motivations

Concept-based explanations still hold tremendous untapped potential! Our new evaluation framework aims to measure the quality of these concepts and how effectively they guide users toward interpreting model decisions.

2/7
January 31, 2025 at 2:53 PM