Explainability is about why a model makes a decision: post-hoc attribution methods such as SHAP explain individual predictions without opening up the model's internals.
Interpretability is about how the model makes a decision: mechanistic work such as transformer circuits traces the internal computation itself.
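
To make the explainability side concrete, here is a minimal sketch of post-hoc attribution with SHAP. The dataset and model choice are illustrative assumptions, not part of the original note:

```python
# Minimal SHAP sketch: post-hoc "why did the model predict this?" attributions.
# Assumes shap and scikit-learn are installed; dataset/model are arbitrary examples.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features.
# Note: this explains the output, not the model's internal mechanism,
# which is what interpretability (e.g., circuits work) targets instead.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # shape: (5, n_features)
print(shap_values[0])  # per-feature contributions for the first row's prediction
```

The distinction shows up directly here: SHAP tells you which features pushed a single prediction up or down, but says nothing about what the model's internals are actually computing.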