Explainability

Decomposition

Breaks down predictions into components (e.g., Taylor Decomposition, Contextual Decomposition)

5 techniques in this subcategory
Each technique is listed with its goal tags, model tags, and supported data types, followed by a short description.

DeepLIFT
Goals: Algorithmic. Models: Architecture/Neural Networks; Requirements/White Box (+1). Data types: Any.
DeepLIFT (Deep Learning Important FeaTures) explains neural network predictions by decomposing the difference between the network's actual output and its output on a reference input into contribution scores, which are propagated backwards through the network to individual input features.
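To make this concrete, here is a minimal sketch of DeepLIFT's Rescale rule on a tiny two-layer ReLU network. The weights, the input, and the all-zeros reference below are illustrative assumptions, not any library's API.

```python
import numpy as np

# Tiny two-layer ReLU network with illustrative random weights (an assumption).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # 4 hidden units -> 1 output

def forward(x):
    z1 = W1 @ x + b1           # hidden pre-activations
    a1 = np.maximum(z1, 0.0)   # ReLU activations
    return z1, a1, W2 @ a1 + b2

x = np.array([1.0, -0.5, 2.0])  # input to explain (assumed)
x_ref = np.zeros(3)             # reference input (assumed)

z1, a1, y = forward(x)
z1r, a1r, y_ref = forward(x_ref)

# Rescale rule: the ReLU multiplier is delta-activation / delta-pre-activation,
# set to 0 where the pre-activation difference vanishes.
dz, da = z1 - z1r, a1 - a1r
m_relu = np.divide(da, dz, out=np.zeros_like(dz), where=np.abs(dz) > 1e-9)

# Linear layers pass multipliers through their weights (chain rule for multipliers).
m_input = (W2 * m_relu) @ W1                      # one multiplier per input feature
contributions = (m_input * (x - x_ref)).ravel()   # DeepLIFT contribution scores

print("delta output:", (y - y_ref).item())
print("contributions:", contributions, "sum:", contributions.sum())
```

The contributions sum exactly to the output difference ("summation-to-delta"), which is the property DeepLIFT maintains by construction.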
Layer-wise Relevance Propagation
Goals: Algorithmic. Models: Architecture/Neural Networks; Paradigm/Parametric (+2). Data types: Any.
Layer-wise Relevance Propagation (LRP) explains neural network predictions by working backwards through the network to redistribute the output score layer by layer, under a conservation rule, until every input feature receives a relevance score.
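A minimal sketch of the LRP-epsilon rule on a tiny dense ReLU network may help; the weights and input below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)  # 4 hidden units -> 1 output

x = np.array([1.0, -0.5, 2.0])       # input to explain (assumed)
a1 = np.maximum(W1 @ x + b1, 0.0)    # hidden activations
y = W2 @ a1 + b2                     # output score to be redistributed

def lrp_dense(a, W, R, eps=1e-6):
    """LRP-epsilon: split each neuron's relevance R_k over its inputs in
    proportion to their contributions z_kj = w_kj * a_j."""
    z = W * a                                # (out, in) contribution matrix
    s = z.sum(axis=1)
    denom = s + eps * np.sign(s + 1e-12)     # epsilon stabilizes small sums
    return (z / denom[:, None] * R[:, None]).sum(axis=0)

R_hidden = lrp_dense(a1, W2, y)        # output layer -> hidden relevances
R_input = lrp_dense(x, W1, R_hidden)   # hidden layer -> input relevances

print("output:", y.item())
print("input relevances:", R_input, "sum:", R_input.sum())
```

For small epsilon the input relevances approximately sum to the output score, which is the conservation property the method is built around.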
Contextual Decomposition
Goals: Algorithmic. Models: Architecture/Neural Networks/Recurrent; Requirements/White Box (+1). Data types: Text.
Contextual Decomposition explains LSTM and RNN predictions by decomposing the final hidden state into contributions from a chosen phrase or word group and contributions from the rest of the input, without using gradients or modifying the model.
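Contextual Decomposition was formulated for LSTMs; as a simplified sketch, the same decomposition can be written for a vanilla tanh RNN. The weights, token embeddings, and phrase indices below are illustrative assumptions.

```python
import numpy as np

# Simplified Contextual Decomposition on a vanilla tanh RNN (the original
# method handles LSTM gates; this is a reduced sketch of the same idea).
rng = np.random.default_rng(0)
d_in, d_h = 5, 4
W = rng.normal(size=(d_h, d_in))  # input-to-hidden weights (assumed)
U = rng.normal(size=(d_h, d_h))   # hidden-to-hidden weights (assumed)
b = np.zeros(d_h)

tokens = rng.normal(size=(6, d_in))  # a 6-token "sentence" of embeddings (assumed)
phrase = {2, 3}                      # explain the contribution of tokens 2 and 3

beta = np.zeros(d_h)   # part of the hidden state caused by the phrase
gamma = np.zeros(d_h)  # part caused by everything else (incl. bias)

for t, x in enumerate(tokens):
    rel = W @ x if t in phrase else np.zeros(d_h)
    irr = np.zeros(d_h) if t in phrase else W @ x
    beta_pre = U @ beta + rel
    gamma_pre = U @ gamma + irr + b
    # Linearize tanh by averaging over the two orders of adding beta and gamma.
    beta = 0.5 * (np.tanh(beta_pre) + np.tanh(beta_pre + gamma_pre) - np.tanh(gamma_pre))
    gamma = np.tanh(beta_pre + gamma_pre) - beta

# beta + gamma equals the ordinary hidden state by construction.
print("phrase contribution to final hidden state:", beta)
```

Feeding beta (rather than the full hidden state) through the model's output layer then gives the phrase's contribution to the logits.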
Taylor Decomposition
Goals: Algorithmic. Models: Architecture/Neural Networks; Requirements/Gradient Access (+2). Data types: Any.
Taylor Decomposition is a mathematical technique that explains neural network predictions by computing first-order (and, in some variants, higher-order) Taylor expansions of the output around a root point, attributing the prediction to individual input features.
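A minimal first-order sketch using PyTorch autograd is below; the model and the root point (here, the origin) are illustrative assumptions.

```python
import torch

torch.manual_seed(0)
# Small network standing in for the model to be explained (an assumption).
model = torch.nn.Sequential(
    torch.nn.Linear(3, 4), torch.nn.Tanh(), torch.nn.Linear(4, 1)
)

x = torch.tensor([1.0, -0.5, 2.0])           # input to explain (assumed)
x_root = torch.zeros(3, requires_grad=True)  # root point of the expansion (assumed)

y_root = model(x_root)[0]                    # scalar output at the root point
(grad,) = torch.autograd.grad(y_root, x_root)

# First-order Taylor attribution: R_i = df/dx_i(x_root) * (x_i - x_root_i)
relevance = grad * (x - x_root.detach())

print("f(x):", model(x).item(), "f(root):", y_root.item())
print("first-order attributions:", relevance, "sum:", relevance.sum().item())
```

The attributions sum to the first-order approximation of f(x) - f(x_root); higher-order terms capture what the linear expansion misses.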
Classical Attention Analysis in Neural Networks
Goals: Algorithmic. Models: Architecture/Neural Networks/Recurrent; Requirements/Architecture Specific (+1). Data types: Any.
Classical attention mechanisms in RNNs and CNNs create alignment matrices and temporal attention patterns that show how the model distributes focus over input elements when producing each output; inspecting these patterns provides an explanation of individual predictions.
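As a sketch, a Bahdanau-style additive-attention alignment matrix between decoder steps and encoder states can be computed and inspected directly; all weights and hidden states below are illustrative random assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
enc = rng.normal(size=(6, d))  # 6 encoder hidden states, one per source token (assumed)
dec = rng.normal(size=(3, d))  # 3 decoder hidden states, one per output step (assumed)

Wa = rng.normal(size=(d, d))   # additive-attention parameters (assumed)
Ua = rng.normal(size=(d, d))
va = rng.normal(size=d)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# alignment[i, j] = attention weight of decoder step i on source token j
alignment = np.stack([
    softmax(np.tanh(dec[i] @ Wa.T + enc @ Ua.T) @ va) for i in range(len(dec))
])

print(np.round(alignment, 3))  # each row is a distribution over source tokens
print("most-attended source token per output step:", alignment.argmax(axis=1))
```

Plotting this matrix as a heatmap is the classical way such alignments are presented as explanations.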