All Techniques

Explore our comprehensive collection of 121 techniques for responsible AI development.

Each entry lists the technique's tags (covering its goals and model compatibility), the data types it applies to, and a short description.

SHapley Additive exPlanations
Tags: Algorithmic; Architecture/Model Agnostic; Requirements/Black Box
Data types: Any
SHAP explains model predictions by quantifying how much each input feature contributes to the outcome. It assigns an...

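A minimal sketch of computing SHAP values with the shap package, assuming a scikit-learn classifier; the dataset, model, and background sample are illustrative placeholders, not part of the catalogue entry:

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic explainer for the positive-class probability; a background
# sample of the data anchors the expected value the contributions sum from.
positive_class = lambda data: model.predict_proba(data)[:, 1]
explainer = shap.Explainer(positive_class, X.sample(100, random_state=0))
shap_values = explainer(X.iloc[:5])       # additive per-feature contributions
shap.plots.waterfall(shap_values[0])      # visualise one prediction
```
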
Permutation Importance
Tags: Algorithmic; Architecture/Model Agnostic; Requirements/Black Box
Data types: Any
Permutation Importance quantifies a feature's contribution to a model's performance by randomly shuffling its values and...

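A minimal sketch using scikit-learn's permutation_importance; the regressor and dataset are placeholders chosen for illustration:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each column several times and measure the drop in held-out score.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(X.columns, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```
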
Mean Decrease Impurity
Tags: Algorithmic; Architecture/Tree Based; Paradigm/Supervised (+1 more)
Data types: Tabular
Mean Decrease Impurity (MDI) quantifies a feature's importance in tree-based models (e.g., Random Forests, Gradient...

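A minimal sketch, assuming a scikit-learn random forest, whose feature_importances_ attribute exposes MDI scores; the dataset is a placeholder:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# feature_importances_ holds each feature's average impurity reduction across all trees.
for name, importance in zip(X.columns, model.feature_importances_):
    print(f"{name}: {importance:.3f}")
```
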
Coefficient Magnitudes (in Linear Models)
Tags: Metric; Architecture/Linear Models; Paradigm/Parametric (+2 more)
Data types: Tabular
Coefficient Magnitudes assess feature influence in linear models by examining the absolute values of their coefficients....

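A minimal scikit-learn sketch; features are standardised first so that coefficient magnitudes are comparable (the dataset and model are placeholders):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_scaled = StandardScaler().fit_transform(X)      # put features on a common scale
model = LinearRegression().fit(X_scaled, y)

# Rank features by the absolute size of their coefficients.
for name, coef in sorted(zip(X.columns, model.coef_), key=lambda pair: -abs(pair[1])):
    print(f"{name}: {coef:+.2f}")
```
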
Integrated Gradients
Tags: Algorithmic; Architecture/Neural Networks; Paradigm/Parametric (+3 more)
Data types: Any
Integrated Gradients is an attribution technique that explains a model's prediction by quantifying the contribution of...

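A minimal PyTorch sketch of the core computation for a single unbatched input; the toy network and inputs are illustrative stand-ins, and dedicated implementations (e.g. in Captum) exist:

```python
import torch

def integrated_gradients(model, x, baseline, target, steps=50):
    """Approximate integrated gradients for one unbatched input x."""
    # Interpolate from the baseline to the input and accumulate gradients along the path.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    points = (baseline + alphas * (x - baseline)).requires_grad_(True)
    score = model(points)[:, target].sum()
    grads, = torch.autograd.grad(score, points)
    # Average gradient along the path times the input difference (completeness axiom).
    return (x - baseline) * grads.mean(dim=0)

# Example usage with a toy classifier (hypothetical stand-in):
net = torch.nn.Sequential(torch.nn.Linear(4, 3))
attributions = integrated_gradients(net, torch.randn(4), torch.zeros(4), target=0)
print(attributions)
```
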
DeepLIFT
Tags: Algorithmic; Architecture/Neural Networks; Requirements/White Box (+1 more)
Data types: Any
DeepLIFT (Deep Learning Important FeaTures) explains neural network predictions by decomposing the difference between...

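A minimal sketch using Captum's DeepLift, assuming a PyTorch model; the toy network, random inputs, and all-zeros baseline are illustrative stand-ins:

```python
import torch
import torch.nn as nn
from captum.attr import DeepLift

# Toy two-layer network standing in for a trained classifier.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
inputs = torch.randn(2, 4)

# Contributions are measured relative to a reference input (here, all zeros).
attributions = DeepLift(model).attribute(inputs, baselines=torch.zeros_like(inputs), target=0)
print(attributions)
```
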
Layer-wise Relevance Propagation
Tags: Algorithmic; Architecture/Neural Networks; Paradigm/Parametric (+2 more)
Data types: Any
Layer-wise Relevance Propagation (LRP) explains neural network predictions by working backwards through the network to...

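A minimal NumPy sketch of the epsilon rule on a toy bias-free two-layer ReLU network; the random weights stand in for a trained model, and practical implementations (e.g. Captum's LRP) cover full architectures:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))   # stand-ins for trained weights

x = rng.normal(size=4)
z1 = x @ W1
hidden = np.maximum(0, z1)                # hidden ReLU activations
z2 = hidden @ W2
target = int(z2.argmax())
eps = 1e-6

# Redistribute the chosen output's score backwards, layer by layer, in proportion
# to each unit's contribution a_i * w_ij to the next pre-activation (epsilon rule).
relevance_out = np.zeros_like(z2)
relevance_out[target] = z2[target]
relevance_hidden = hidden * (W2 @ (relevance_out / (z2 + eps * np.sign(z2))))
relevance_input = x * (W1 @ (relevance_hidden / (z1 + eps * np.sign(z1))))
print(relevance_input, relevance_input.sum())   # roughly conserves z2[target]
```
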
Contextual Decomposition
Tags: Algorithmic; Architecture/Neural Networks/Recurrent; Requirements/White Box (+1 more)
Data types: Text
Contextual Decomposition explains LSTM and RNN predictions by decomposing the final hidden state into contributions from...

Taylor Decomposition
Tags: Algorithmic; Architecture/Neural Networks; Requirements/Gradient Access (+2 more)
Data types: Any
Taylor Decomposition is a mathematical technique that explains neural network predictions by computing first-order and...

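A minimal PyTorch sketch of a simple first-order variant (gradient times input, i.e. the expansion evaluated at the input itself); the toy network and input are illustrative stand-ins:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))  # toy stand-in network
x = torch.randn(1, 4, requires_grad=True)

# First-order terms x_i * df/dx_i approximate each feature's share of the output.
output = model(x).squeeze()
output.backward()
relevance = x * x.grad
print(relevance, relevance.sum().item())
```
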
Sobol Indices
Tags: Algorithmic; Architecture/Model Agnostic; Requirements/Black Box
Data types: Any
Sobol Indices quantify how much each input feature contributes to the total variance in a model's predictions through...

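A minimal sketch using the SALib package, with a toy analytic function standing in for a model's predict call; the input ranges and sample size are illustrative assumptions:

```python
from SALib.analyze import sobol
from SALib.sample import saltelli

# Hypothetical problem definition: three inputs, each varied over [0, 1].
problem = {"num_vars": 3, "names": ["x1", "x2", "x3"], "bounds": [[0.0, 1.0]] * 3}

X = saltelli.sample(problem, 1024)         # Saltelli sampling scheme
Y = X[:, 0] + 2.0 * X[:, 1] * X[:, 2]      # stand-in for model.predict(X)
indices = sobol.analyze(problem, Y)
print(indices["S1"])                       # first-order indices
print(indices["ST"])                       # total-order indices (including interactions)
```
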
Local Interpretable Model-Agnostic Explanations
Tags: Algorithmic; Architecture/Model Agnostic; Requirements/Black Box
Data types: Any
LIME (Local Interpretable Model-agnostic Explanations) explains individual predictions by approximating the complex...

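A minimal sketch with the lime package on tabular data; the dataset, classifier, and explained instance are placeholders:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
# Fit a local, weighted linear surrogate around one instance of interest.
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```
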
Ridge Regression Surrogates
Tags: Algorithmic; Architecture/Model Agnostic; Requirements/Black Box
Data types: Any
This technique approximates a complex model by training a ridge regression (a linear model with L2 regularisation) on...

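A minimal scikit-learn sketch: the surrogate is fitted to the black-box model's predictions rather than to the true labels, and its R² against those predictions measures fidelity (dataset and models are placeholders):

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True, as_frame=True)
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# Fit the surrogate to the black-box model's predictions, not to the true labels.
surrogate = Ridge(alpha=1.0).fit(X, black_box.predict(X))
print(f"fidelity R^2 = {surrogate.score(X, black_box.predict(X)):.3f}")
for name, coef in zip(X.columns, surrogate.coef_):
    print(f"{name}: {coef:+.1f}")
```
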
Partial Dependence Plots
Tags: Algorithmic; Architecture/Model Agnostic; Requirements/Black Box
Data types: Any
Partial Dependence Plots show how changing one or two features affects a model's predictions on average. The technique...

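A minimal sketch using scikit-learn's PartialDependenceDisplay; the regressor and the chosen features are illustrative:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Sweep 'bmi' and 'bp' over their ranges and average the predictions over the data.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"])
plt.show()
```
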
Individual Conditional Expectation Plots
Tags: Visualization; Architecture/Model Agnostic; Requirements/Black Box
Data types: Any
Individual Conditional Expectation (ICE) plots display the predicted output for individual instances as a function of a...

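A minimal sketch, again with scikit-learn's PartialDependenceDisplay: kind="both" draws one curve per instance (ICE) alongside their average (the PDP); the model and feature are placeholders:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# One curve per instance plus their average for the 'bmi' feature.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi"], kind="both")
plt.show()
```
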
Saliency Maps
Tags: Algorithmic; Architecture/Neural Networks; Requirements/Differentiable (+1 more)
Data types: Image
Saliency maps are visual explanations for image classification models that highlight which pixels in an image most...

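A minimal PyTorch sketch: the gradient of the top class score with respect to the pixels gives the map; the untrained resnet18 and random image are stand-ins for a real model and input:

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()             # untrained stand-in; use trained weights in practice
image = torch.randn(1, 3, 224, 224, requires_grad=True)

# Backpropagate the top class score to the pixels; large gradient magnitudes mark salient pixels.
scores = model(image)
scores[0, scores.argmax()].backward()
saliency = image.grad.abs().max(dim=1).values     # collapse colour channels -> (1, 224, 224)
print(saliency.shape)
```
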
Gradient-weighted Class Activation Mapping
Tags: Algorithmic; Architecture/Neural Networks/Convolutional; Requirements/Architecture Specific (+2 more)
Data types: Image
Grad-CAM creates visual heatmaps showing which regions of an image a convolutional neural network focuses on when making...

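A minimal sketch using Captum's LayerGradCam on the last convolutional block of a ResNet; the untrained network, random image, and target class index are illustrative assumptions:

```python
import torch
from captum.attr import LayerGradCam
from torchvision.models import resnet18

model = resnet18(weights=None).eval()             # untrained stand-in for a real classifier
image = torch.randn(1, 3, 224, 224)

# Weight the last conv block's activations by the gradient of the target class score.
grad_cam = LayerGradCam(model, model.layer4)
heatmap = grad_cam.attribute(image, target=281)   # 281: an ImageNet class index, as an example
print(heatmap.shape)                              # coarse (1, 1, 7, 7) map; upsample to overlay
```
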
Occlusion Sensitivity
Tags: Algorithmic; Architecture/Model Agnostic; Requirements/Black Box
Data types: Image
Occlusion sensitivity tests which parts of the input are important by occluding (masking or removing) them and seeing...

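A minimal PyTorch sketch that slides a blank patch over the image and records the drop in the target class score; the untrained network, random image, and patch size are illustrative assumptions:

```python
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()                 # untrained stand-in model
image = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    target = int(model(image).argmax())
    base_score = model(image)[0, target].item()

    # Mask one patch at a time; a large score drop marks an important region.
    patch = 32
    heatmap = torch.zeros(224 // patch, 224 // patch)
    for i in range(0, 224, patch):
        for j in range(0, 224, patch):
            occluded = image.clone()
            occluded[:, :, i:i + patch, j:j + patch] = 0.0
            heatmap[i // patch, j // patch] = base_score - model(occluded)[0, target].item()
print(heatmap)
```
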
Classical Attention Analysis in Neural Networks
Tags: Algorithmic; Architecture/Neural Networks/Recurrent; Requirements/Architecture Specific (+1 more)
Data types: Any
Classical attention mechanisms in RNNs and CNNs create alignment matrices and temporal attention patterns that show how...

Factor Analysis
Tags: Algorithmic; Architecture/Model Agnostic; Paradigm/Unsupervised (+1 more)
Data types: Tabular
Factor analysis is a statistical technique that identifies latent variables (hidden factors) underlying observed...

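A minimal scikit-learn sketch: fit a small number of latent factors and inspect how strongly each observed feature loads on them (the dataset and factor count are placeholders):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import FactorAnalysis

X, _ = load_iris(return_X_y=True, as_frame=True)

# Two latent factors; components_.T gives each observed feature's loadings on them.
fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
for name, loadings in zip(X.columns, fa.components_.T):
    print(name, loadings.round(2))
```
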
Principal Component Analysis
Tags: Algorithmic; Architecture/Model Agnostic; Paradigm/Unsupervised (+1 more)
Data types: Any
Principal Component Analysis transforms high-dimensional data into a lower-dimensional representation by finding the...

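A minimal scikit-learn sketch: project onto the two directions of greatest variance and check how much variance they retain (the dataset and component count are placeholders):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True, as_frame=True)

# Keep the two directions of maximum variance and report how much variance they explain.
pca = PCA(n_components=2).fit(X)
X_reduced = pca.transform(X)
print(X_reduced.shape, pca.explained_variance_ratio_)
```
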
Page 1 of 7