Local Interpretable Model-Agnostic Explanations

Description

LIME (Local Interpretable Model-agnostic Explanations) explains individual predictions by approximating the complex model's behaviour in a small neighbourhood around a specific instance. It works by creating perturbed versions of the input (e.g., removing words from text, masking superpixels in images, or varying tabular feature values), obtaining the model's predictions for these variations, and fitting a simple interpretable model (typically a linear regression) to them, with each perturbed sample weighted by its proximity to the original instance. The coefficients of this local surrogate model reveal which features most influenced the specific prediction.
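
As a concrete illustration, the sketch below implements this perturb-query-fit loop for tabular data from scratch with NumPy and scikit-learn. The helper name lime_explain, the Gaussian perturbation scheme, and the exponential proximity kernel are illustrative assumptions; the reference marcotcr/lime package adds feature discretisation and sparse feature selection on top of this core idea.

    import numpy as np
    from sklearn.linear_model import Ridge

    def lime_explain(predict_fn, x, num_samples=5000, kernel_width=0.75):
        """Explain predict_fn's output at instance x with a local linear surrogate."""
        rng = np.random.default_rng(0)
        # 1. Perturb: sample points in a neighbourhood around x.
        Z = x + rng.normal(size=(num_samples, x.shape[0]))
        # 2. Query the black-box model on the perturbed inputs.
        y = predict_fn(Z)
        # 3. Weight each sample by its proximity to x (exponential kernel).
        distances = np.linalg.norm(Z - x, axis=1)
        weights = np.exp(-(distances ** 2) / kernel_width ** 2)
        # 4. Fit the interpretable surrogate; its coefficients are the explanation.
        surrogate = Ridge(alpha=1.0)
        surrogate.fit(Z, y, sample_weight=weights)
        return surrogate.coef_

    # Usage with any fitted binary classifier `model` (hypothetical):
    # coefs = lime_explain(lambda Z: model.predict_proba(Z)[:, 1], x)
    # Large-magnitude coefficients mark the locally most influential features.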

Example Use Cases

Explainability

Explaining why a specific patient received a high-risk diagnosis by showing which symptoms (fever, blood pressure, age) contributed most to the prediction, helping doctors validate the AI's reasoning.

Debugging a text classifier's misclassification of a movie review by highlighting which words (e.g., sarcastic phrases) confused the model, enabling targeted model improvements.
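
A debugging session of this kind might look like the sketch below, which uses the marcotcr/lime package listed under Resources; the toy training set, pipeline, and review text are placeholders standing in for a real classifier and corpus.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from lime.lime_text import LimeTextExplainer

    # Stand-in sentiment classifier (0 = negative, 1 = positive).
    train_texts = ["a wonderful, heartfelt film", "great acting and a sharp script",
                   "boring, predictable, and far too long", "a dreadful waste of two hours"]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(train_texts, [1, 1, 0, 0])

    # Explain one (sarcastic) review: which words push the prediction where?
    explainer = LimeTextExplainer(class_names=["negative", "positive"])
    review = "oh sure, a truly great film, if you enjoy boring, predictable plots"
    exp = explainer.explain_instance(review, clf.predict_proba, num_features=6)
    for word, weight in exp.as_list():  # positive weight = pushes towards "positive"
        print(f"{word:>12}  {weight:+.3f}")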

Transparency

Providing transparent explanations to customers about automated decisions in insurance claims, showing which claim features influenced approval or denial to meet regulatory requirements.

Limitations

  • Explanations can be unstable due to random sampling, producing different results across multiple runs (see the stability check sketched after this list).
  • The linear surrogate may poorly approximate highly non-linear model behaviour in the local region.
  • Defining the neighbourhood size and perturbation strategy requires careful tuning for each data type.
  • Can be computationally expensive for explaining many instances due to repeated model queries.
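
The first limitation can be checked empirically by re-running an explanation under different sampling seeds and measuring how far the top-ranked features agree. A minimal sketch with the lime package on a synthetic scikit-learn model (all data and parameter choices are illustrative):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)
    feature_names = [f"f{i}" for i in range(8)]

    def top_features(seed, k=3):
        """Indices of the k most influential features for X[0] under one sampling seed."""
        explainer = LimeTabularExplainer(X, feature_names=feature_names, random_state=seed)
        exp = explainer.explain_instance(X[0], model.predict_proba, num_features=k)
        return {idx for idx, _ in exp.as_map()[1]}

    # Anything below 3/3 overlap means the feature ranking changed between runs.
    run_a, run_b = top_features(seed=0), top_features(seed=1)
    print(f"top-3 feature overlap across two seeds: {len(run_a & run_b)}/3")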

Resources

marcotcr/lime
Software Package
thomasp85/lime (R package)
Software Package
Local Interpretable Model-Agnostic Explanations (lime) — lime 0.1 ...
Documentation
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Research Paper · Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin · Feb 16, 2016
How to convince your boss to trust your ML/DL models - Towards Data Science
Tutorial
Enhanced LIME — ADS 2.6.5 documentation
Documentation

Tags

Applicable Models: Model-agnostic
Data Requirements:
Data Type: Tabular, text, image
Expertise Needed:
Explanatory Scope: Local
Technique Type: