Explainability
Completeness
Attributions fully account for model output
9 techniques in this subcategory
| Technique | Goals | Models | Data Types | Description |
|---|---|---|---|---|
| SHapley Additive exPlanations | Algorithmic | Architecture/Model Agnostic; Requirements/Black Box | Any | SHAP explains model predictions by quantifying how much each input feature contributes to the outcome. It assigns an... |
| Mean Decrease Impurity | Algorithmic | Architecture/Tree Based; Paradigm/Supervised (+1 more) | Tabular | Mean Decrease Impurity (MDI) quantifies a feature's importance in tree-based models (e.g., Random Forests, Gradient... |
| Integrated Gradients | Algorithmic | Architecture/Neural Networks; Paradigm/Parametric (+3 more) | Any | Integrated Gradients is an attribution technique that explains a model's prediction by quantifying the contribution of... |
| DeepLIFT | Algorithmic | Architecture/Neural Networks; Requirements/White Box (+1 more) | Any | DeepLIFT (Deep Learning Important FeaTures) explains neural network predictions by decomposing the difference between... |
| Layer-wise Relevance Propagation | Algorithmic | Architecture/Neural Networks; Paradigm/Parametric (+2 more) | Any | Layer-wise Relevance Propagation (LRP) explains neural network predictions by working backwards through the network to... |
| Contextual Decomposition | Algorithmic | Architecture/Neural Networks/Recurrent; Requirements/White Box (+1 more) | Text | Contextual Decomposition explains LSTM and RNN predictions by decomposing the final hidden state into contributions from... |
| Taylor Decomposition | Algorithmic | Architecture/Neural Networks; Requirements/Gradient Access (+2 more) | Any | Taylor Decomposition is a mathematical technique that explains neural network predictions by computing first-order and... |
| Sobol Indices | Algorithmic | Architecture/Model Agnostic; Requirements/Black Box | Any | Sobol Indices quantify how much each input feature contributes to the total variance in a model's predictions through... |
| Feature Attribution with Integrated Gradients in NLP | Algorithmic | Architecture/Neural Networks/Transformer; Architecture/Neural Networks/Transformer/LLM (+4 more) | Text | Applies Integrated Gradients to natural language processing models to attribute prediction importance to individual... |
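The completeness property that defines this subcategory (attributions summing to the model's output relative to a baseline) is easiest to see with Integrated Gradients. The sketch below is a minimal PyTorch illustration, not code from any of the listed techniques' reference implementations: the names `integrated_gradients`, `model`, `x`, and `baseline` are assumptions, and the path integral is approximated with a plain Riemann sum.

```python
import torch

# Hypothetical sketch of Integrated Gradients for a model that takes a batch of
# inputs and returns one scalar score per input; the function name and all
# arguments are illustrative assumptions, not taken from the catalogue entries.
def integrated_gradients(model, x, baseline, steps=50):
    # Straight-line path from the baseline to the input, sampled at `steps` points.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline + alphas * (x - baseline)          # shape: (steps, *x.shape)
    path.requires_grad_(True)

    # Gradient of the model score at every point along the path.
    total_score = model(path).sum()
    grads = torch.autograd.grad(total_score, path)[0]  # shape: (steps, *x.shape)

    # Riemann-sum approximation of the path integral of the gradients.
    avg_grads = grads.mean(dim=0)

    # Completeness: these attributions sum (approximately) to
    # model(x) - model(baseline).
    return (x - baseline) * avg_grads
```

With a large enough `steps`, the returned attributions should sum to roughly the difference between the model's score at `x` and at `baseline`, which is exactly the sense in which attributions fully account for the model output.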