Reliability
52 techniques
Building AI systems that perform consistently and predictably.
| Technique | Category | Models | Data Types | Description |
|---|---|---|---|---|
| SHapley Additive exPlanations | Algorithmic | Model Agnostic | Any | SHAP explains model predictions by quantifying how much each input feature contributes to the outcome. It assigns an importance value to each feature based on Shapley values from cooperative game theory. |
| Permutation Importance | Algorithmic | Model Agnostic | Any | Permutation Importance quantifies a feature's contribution to a model's performance by randomly shuffling its values and measuring the resulting drop in predictive accuracy (see sketch below). |
| Mean Decrease Impurity | Algorithmic | Tree Based | Tabular | Mean Decrease Impurity (MDI) quantifies a feature's importance in tree-based models (e.g., Random Forests, Gradient Boosted Trees) by measuring how much each feature reduces impurity, averaged over all splits that use it. |
| Monte Carlo Dropout | Algorithmic | Neural Network | Any | Monte Carlo Dropout estimates prediction uncertainty by applying dropout (randomly setting neural network weights to zero) at inference time and aggregating the predictions from many stochastic forward passes (see sketch below). |
| Out-of-DIstribution detector for Neural networks | Algorithmic | Neural Network | Any | ODIN (Out-of-Distribution Detector for Neural Networks) identifies when a neural network encounters inputs significantly different from its training distribution, using temperature scaling and small input perturbations to separate in-distribution from out-of-distribution examples. |
| Permutation Tests | Algorithmic | Model Agnostic | Any | Permutation tests assess the statistical significance of observed results (such as model accuracy, feature importance, or group differences) by comparing them against a null distribution built from many random shufflings of the data. |
| Synthetic Data Generation | Algorithmic | Model Agnostic | Any | Synthetic data generation creates artificial datasets that aim to preserve the statistical properties, distributions, and relationships of real data without exposing actual records. |
| Federated Learning | Algorithmic | Model Agnostic | Any | Federated learning enables collaborative model training across multiple distributed parties (devices, organisations, or institutions) without centralising the raw data; only model updates are shared and aggregated. |
| Prediction Intervals | Algorithmic | Model Agnostic | Any | Prediction intervals provide a range of plausible values around a model's prediction, expressing uncertainty as 'the true value should fall within this range with a stated probability'. |
| Quantile Regression | Algorithmic | Model Agnostic | Any | Quantile regression estimates specific percentiles (quantiles) of the target variable rather than just predicting the mean, giving a fuller picture of the conditional distribution. |
| Conformal Prediction | Algorithmic | Model Agnostic | Any | Conformal prediction provides mathematically guaranteed uncertainty quantification by creating prediction sets that contain the true outcome with a user-specified probability, assuming only that the data are exchangeable (see sketch below). |
| Empirical Calibration | Algorithmic | Model Agnostic | Any | Empirical calibration adjusts a model's predicted probabilities to match observed frequencies. For example, if events predicted with 80% confidence occur about 80% of the time, the model is well calibrated. |
| Temperature Scaling | Algorithmic | Neural Network | Any | Temperature scaling adjusts a model's confidence by applying a single parameter (temperature) to its predictions. When a model is overconfident, a temperature greater than one softens its probabilities to better match observed accuracy (see sketch below). |
| Deep Ensembles | Algorithmic | Neural Network | Any | Deep ensembles combine predictions from multiple neural networks trained independently with different random initialisations, typically yielding both better accuracy and more reliable uncertainty estimates than a single network. |
| Bootstrapping | Algorithmic | Model Agnostic | Any | Bootstrapping estimates uncertainty by repeatedly resampling the original dataset with replacement to create many new training sets, then examining how results vary across them (see sketch below). |
| Jackknife Resampling | Algorithmic | Model Agnostic | Any | Jackknife resampling (also called leave-one-out resampling) assesses model stability and uncertainty by systematically leaving out one observation at a time and re-estimating the quantity of interest. |
| Cross-validation | Algorithmic | Model Agnostic | Any | Cross-validation evaluates model performance and robustness by systematically partitioning data into multiple subsets, training on some and validating on the others, and averaging results across folds. |
| Area Under Precision-Recall Curve | Algorithmic | Model Agnostic | Any | Area Under Precision-Recall Curve (AUPRC) measures model performance by plotting precision (the proportion of positive predictions that are correct) against recall (the proportion of actual positives identified) across classification thresholds; it is particularly informative for imbalanced data. |
| Safety Envelope Testing | Testing | Model Agnostic | Any | Safety envelope testing systematically evaluates AI system performance at the boundaries of its intended operational domain to establish where the system can be trusted and where its behaviour degrades. |
| Red Teaming | Procedural | Model Agnostic | Any | Red teaming involves systematic adversarial testing of AI/ML systems by dedicated specialists who attempt to identify flaws, vulnerabilities, and failure modes before they can cause harm in deployment. |
Showing the first 20 of 52 techniques (page 1 of 3).
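
A few of the techniques above are simple enough to illustrate directly. The sketches that follow are minimal, illustrative examples rather than prescribed implementations; dataset choices, model classes, and hyperparameters are placeholders. First, permutation importance via scikit-learn's `permutation_importance`, here applied to an assumed random-forest regressor on the diabetes toy dataset:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature column several times and record the drop in validation score.
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)

for name, mean, std in zip(X.columns, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```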
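Monte Carlo Dropout fits in a few lines of PyTorch. The small classifier below is a hypothetical architecture for illustration; the essential points are keeping dropout active at inference time and aggregating repeated stochastic forward passes:

```python
import torch
import torch.nn as nn

# Hypothetical classifier with dropout layers (untrained, for illustration only).
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.3),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.3),
    nn.Linear(64, 3),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Run repeated stochastic forward passes with dropout still enabled."""
    model.train()  # train mode keeps dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    # Predictive mean and per-class spread across the stochastic passes.
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(5, 20)  # a batch of 5 random inputs; outputs are illustrative
mean, spread = mc_dropout_predict(model, x)
print(mean, spread, sep="\n")
```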
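Split conformal prediction for regression can be sketched as follows, assuming exchangeable data and a held-out calibration set; the ridge model and the 90% target coverage are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_fit, X_rest, y_fit, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = Ridge().fit(X_fit, y_fit)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))
alpha = 0.1  # target 90% coverage
n = len(scores)
# Finite-sample-corrected quantile of the calibration scores.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

pred = model.predict(X_test)
lower, upper = pred - q, pred + q
coverage = np.mean((y_test >= lower) & (y_test <= upper))
print(f"interval half-width {q:.1f}, empirical coverage {coverage:.2%}")
```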
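Temperature scaling reduces to fitting a single scalar that minimises negative log-likelihood on held-out logits. The logits below are synthetic stand-ins for a real validation set:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(temperature, logits, labels):
    """Negative log-likelihood of labels under temperature-scaled softmax."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Synthetic, deliberately overconfident validation logits (placeholder data).
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)
logits = 3.0 * rng.normal(size=(200, 3))
logits[np.arange(200), labels] += 2.0

res = minimize_scalar(nll, bounds=(0.05, 10.0), args=(logits, labels), method="bounded")
print(f"fitted temperature: {res.x:.2f}")  # T > 1 softens the probabilities
```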
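Finally, bootstrapping a performance estimate: resample the training set with replacement, refit, and inspect the spread of test scores. The model and iteration count are again illustrative:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
scores = []
for _ in range(200):
    # Resample the training set with replacement and refit from scratch.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train[idx], y_train[idx])
    scores.append(model.score(X_test, y_test))

lo, hi = np.percentile(scores, [2.5, 97.5])
print(f"accuracy 95% bootstrap interval: [{lo:.3f}, {hi:.3f}]")
```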