Explainable AI Berlin
@xai-berlin.bsky.social
Explainable AI research from the machine learning group of Prof. Klaus-Robert Müller at @tuberlin.bsky.social & @bifold.berlin
Counterfactual explainers for dynamic graphs by Qu et al. from @scadsai.bsky.social
arxiv.org/abs/2403.16846
Explainable Biomedical Claim Verification by Liang et al. from @dfki.bsky.social
arxiv.org/abs/2502.21014
November 6, 2025 at 3:00 PM
We were happy to see other explainability-themed posters:
A study of the monosemanticity of SAE features in VLMs by Pach et al. from @munichcenterml.bsky.social
arxiv.org/abs/2504.02821
User-centered research for data attribution by Nguyen et al. from @tuebingen-ai.bsky.social
arxiv.org/abs/2409.16978
November 6, 2025 at 3:00 PM
Manuel Welte presented ongoing work on intrinsic interpretability of transformer models through a novel approach for restructuring internal representations.
November 6, 2025 at 3:00 PM
@lkopf.bsky.social and @eberleoliver.bsky.social presented the PRISM framework for multi-concept feature descriptions in LLMs.
arxiv.org/abs/2506.15538
November 6, 2025 at 3:00 PM