Antonin Poché @ ACL
@antoninpoche.bsky.social
PhD student doing XAI for NLP at @ANITI_Toulouse, IRIT, and IRT Saint Exupéry.

🛠️ Xplique library development team member.
🔥 I am super excited to be presenting a poster at #ACL2025 in Vienna next week! 🌏

This is my first big conference!

📅 Tuesday morning, 10:30–12:00, during Poster Session 2.

💬 If you're around, feel free to message me. I would be happy to connect, chat, or have a drink!
July 25, 2025 at 3:37 PM
Final ranking of methods in our experiments (quick code sketch after the list):

- NMF (best, but requires positive embeddings)
- SAE (second, though possibly underestimated due to tuning complexities)
- ICA
- SVD & PCA (performed worse than providing no explanation or no projection at all)
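
As a rough illustration (not the paper's code), here is how the non-SAE methods can be fit with scikit-learn on an activation matrix. The array `A`, the random data, and the concept count are placeholder assumptions; the SAE is omitted because it is a separately trained network:

```python
# Hedged sketch: fit each compared decomposition on activations A
# of shape (n_samples, n_features). Values are random placeholders;
# note NMF requires A >= 0, matching the "positive embeddings" caveat.
import numpy as np
from sklearn.decomposition import NMF, FastICA, PCA, TruncatedSVD

rng = np.random.default_rng(0)
A = rng.random((1000, 768))   # stand-in for model activations, all >= 0
n_concepts = 20               # hypothetical concept-space size

methods = {
    "NMF": NMF(n_components=n_concepts, init="nndsvd", max_iter=500),
    "ICA": FastICA(n_components=n_concepts, max_iter=500),
    "PCA": PCA(n_components=n_concepts),
    "SVD": TruncatedSVD(n_components=n_concepts),
}
for name, method in methods.items():
    U = method.fit_transform(A)   # per-sample concept coefficients
    print(name, U.shape)          # (1000, 20) for every method
```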

6/7
January 31, 2025 at 2:54 PM
Automated Simulatability

To assess an explanation’s utility, we measure how well a meta-predictor—human or LLM—can learn a model’s decision process from concept explanations and replicate predictions on new samples. We focus on LLM-based simulators for scalable experiments.
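
In code, the idea looks roughly like this (a minimal sketch with hypothetical names, not the ConSim API; `llm` stands for any text-completion callable):

```python
# Hedged sketch of LLM-based simulatability: the simulator sees
# (concept explanation, model prediction) demos, then must guess the
# model's prediction on held-out samples; agreement is the score.
def simulatability(llm, train_pairs, test_pairs):
    """train_pairs / test_pairs: lists of (explanation_text, model_pred)."""
    demos = "\n\n".join(
        f"Explanation: {expl}\nModel prediction: {pred}"
        for expl, pred in train_pairs
    )
    correct = 0
    for expl, model_pred in test_pairs:
        prompt = (
            "Learn the model's decision rule from these examples:\n\n"
            f"{demos}\n\nExplanation: {expl}\nModel prediction:"
        )
        correct += llm(prompt).strip() == str(model_pred)
    return correct / len(test_pairs)
```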

4/7
January 31, 2025 at 2:54 PM
Concept-Based Explanation

Concept-based methods follow a 3-step process (sketched in code after the list):
(1) Defining a concept space (projecting features onto interpretable dimensions);
(2) Interpreting concepts using textual or labeled descriptors;
(3) Assigning importance to concepts for predictions.
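
A minimal numpy sketch of the pipeline, assuming a linear classifier head `W` and a pre-computed concept basis `C` (illustrative names, not a specific library's API):

```python
# Hedged sketch: steps 1 and 3 of the pipeline for one sample.
import numpy as np

def explain(a, C, W):
    """a: (d,) activation; C: (k, d) concept basis; W: (n_classes, d) head."""
    # Step 1: project the activation onto the k concept directions.
    u = C @ a                        # concept coefficients, shape (k,)
    # Step 2 happens offline: label each row of C by inspecting the
    # inputs/tokens that activate it most (textual descriptors).
    # Step 3: per-concept contribution to the predicted class's logit.
    c = int(np.argmax(W @ a))
    importance = (W[c] @ C.T) * u    # importance per concept, shape (k,)
    return u, c, importance
```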

3/7
January 31, 2025 at 2:53 PM
Motivations

Concept-based explanations still hold tremendous untapped potential! Our new evaluation framework aims to measure the quality of these concepts and how effectively they guide users toward interpreting model decisions.

2/7
January 31, 2025 at 2:53 PM
🚀 Thrilled to share our new paper (the first of my PhD)!

How can we compare concept-based #XAI methods in #NLProc?

ConSim (arxiv.org/abs/2501.05855) provides the answer.

Read the thread to find out which method is the most interpretable! 🧵1/7
January 31, 2025 at 2:51 PM
🤩 Thrilled to announce that I've started my PhD in
#XAI for #NLProc under the supervision of Prof. Nicholas Asher,
@philmuller.bsky.social, and @fannyjrd.bsky.social!

My project? Improving the transparency of LLMs through interactive, user-tailored explanations. 🚀
January 24, 2025 at 4:10 PM