Contrastive Explanation Method

Description

The Contrastive Explanation Method (CEM) explains individual model decisions by generating contrastive examples that reveal what makes a prediction distinctive. It identifies 'pertinent positives' (a minimal set of features whose presence is sufficient to sustain the prediction) and 'pertinent negatives' (features that are absent from the instance and whose addition would change the prediction). This approach helps users understand not just what led to a decision, but what would need to change to achieve a different outcome, providing actionable insights for decision-making.
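
Concretely, CEM solves a regularised optimisation problem for each instance to find a sparse perturbation. The snippet below is a minimal sketch of the pertinent-negative idea for a linear scikit-learn classifier; the dataset, model, and simple soft-thresholding loop are illustrative assumptions rather than the paper's elastic-net/FISTA procedure (open-source implementations, such as the CEM explainer in the Alibi library, follow the original formulation more closely).

```python
# A minimal sketch (not the paper's algorithm) of a pertinent-negative-style
# search: find a small, sparse perturbation delta so that the classifier's
# prediction for x + delta changes class. Assumes a linear scikit-learn model
# on standardised features; dataset and hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
Xs = StandardScaler().fit_transform(X)
clf = LogisticRegression(max_iter=1000).fit(Xs, y)

def pertinent_negative(x, clf, step=0.05, l1_shrink=0.05, max_iter=500):
    """Gradient-style steps on the linear decision function, with an L1
    soft-threshold after each step to keep the perturbation sparse."""
    w = clf.coef_[0]
    orig_class = clf.predict(x.reshape(1, -1))[0]
    direction = -1.0 if orig_class == 1 else 1.0   # push the score towards the other class
    delta = np.zeros_like(x)
    for _ in range(max_iter):
        if clf.predict((x + delta).reshape(1, -1))[0] != orig_class:
            break                                   # prediction has flipped
        delta = delta + direction * step * w        # move along the model gradient
        delta = np.sign(delta) * np.maximum(np.abs(delta) - l1_shrink * step, 0.0)
    return delta

x = Xs[0]
delta = pertinent_negative(x, clf)
print("original prediction:", clf.predict(x.reshape(1, -1))[0])
print("prediction after perturbation:", clf.predict((x + delta).reshape(1, -1))[0])
print("features changed:", np.flatnonzero(np.abs(delta) > 1e-6))
```

For non-linear or black-box models the same search applies, but the gradients have to be estimated numerically, which is one reason the method is computationally expensive (see Limitations below).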

Example Use Cases

Explainability

Explaining loan application rejections by showing that the applicant's recent late payments were sufficient to drive the rejection (pertinent positive), while an extra £5,000 of annual income, currently absent from the application, would change the decision to approval (pertinent negative), giving applicants clear, actionable guidance.

Analysing medical diagnosis models by identifying that removing a specific symptom combination would change a high-risk classification to low-risk, helping clinicians understand the critical diagnostic factors.
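
The symptom-combination example above is essentially a pertinent-positive question: which features, taken on their own, are enough to sustain the current prediction? A minimal sketch of that idea, using a greedy subset search rather than the paper's optimisation, and with an assumed classifier and median baseline, might look like this:

```python
# A minimal sketch (not the paper's algorithm) of a pertinent-positive-style
# search: greedily reveal features, with all others held at an uninformative
# baseline (the per-feature median), until the original prediction is
# reproduced. Model, data, and baseline choice are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
Xs = StandardScaler().fit_transform(X)
clf = LogisticRegression(max_iter=1000).fit(Xs, y)

def pertinent_positive(x, clf, baseline):
    """Return indices of a small feature subset that is sufficient to
    recover the original class when all other features sit at the baseline."""
    orig_class = clf.predict(x.reshape(1, -1))[0]
    kept, current = [], baseline.copy()
    while len(kept) < x.shape[0]:
        if clf.predict(current.reshape(1, -1))[0] == orig_class:
            break                                  # prediction recovered
        best_j, best_p = None, -np.inf
        for j in range(x.shape[0]):
            if j in kept:
                continue
            trial = current.copy()
            trial[j] = x[j]                        # reveal feature j at its true value
            p = clf.predict_proba(trial.reshape(1, -1))[0][orig_class]
            if p > best_p:
                best_j, best_p = j, p
        kept.append(best_j)
        current[best_j] = x[best_j]
    return kept

baseline = np.median(Xs, axis=0)                   # stand-in for "feature absent"
print("pertinent-positive feature indices:", pertinent_positive(Xs[0], clf, baseline))
```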

Transparency

Providing transparent hiring decisions by showing job candidates exactly which missing qualifications they would need to acquire to change the outcome (pertinent negatives) and which elements of their current application are hindering their success (pertinent positives).

Limitations

  • Computationally expensive as it requires solving an optimisation problem for each individual instance to find minimal perturbations.
  • Results can be highly sensitive to hyperparameter settings, requiring careful tuning to produce meaningful explanations.
  • May generate unrealistic or impossible contrastive examples if constraints are not properly specified, leading to impractical recommendations.
  • Limited to scenarios where feature perturbations are meaningful and actionable, making it less suitable for immutable characteristics or highly constrained domains (one way to encode such constraints is sketched after this list).
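
One common mitigation for the last two points is to constrain the search explicitly. The fragment below is a hypothetical sketch of such a feasibility projection (the function name, box constraints, and immutable-feature list are illustrative assumptions): after each optimisation update, the perturbed instance is clipped to observed feature ranges and changes to immutable features are zeroed out.

```python
# Sketch of a feasibility projection that could be applied after each step of
# a contrastive search, assuming standardised tabular features. It clips the
# perturbed instance to observed ranges and blocks changes to features
# declared immutable (e.g. age, nationality). All names are illustrative.
import numpy as np

def project_to_feasible(x, delta, lower, upper, immutable_idx):
    """Return a perturbation respecting box constraints and immutability."""
    delta = delta.copy()
    delta[immutable_idx] = 0.0                      # never change immutable features
    x_new = np.clip(x + delta, lower, upper)        # stay within observed ranges
    return x_new - x

# Example: bounds taken from training data, features 0 and 3 declared immutable.
X_train = np.random.default_rng(0).normal(size=(100, 5))
lower, upper = X_train.min(axis=0), X_train.max(axis=0)
x = X_train[0]
raw_delta = np.array([0.5, -2.0, 10.0, 1.0, 0.0])
print(project_to_feasible(x, raw_delta, lower, upper, immutable_idx=[0, 3]))
```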

Resources

  • Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives (Research Paper, Amit Dhurandhar et al., Feb 21, 2018)
  • Interpretable Machine Learning (Documentation)
  • Benchmarking and survey of explanation methods for black box models (Documentation, Francesco Bodria et al., Jan 1, 2023)

Tags

Applicable Models:
Data Requirements:
Data Type:
Evidence Type:
Expertise Needed:
Explanatory Scope:
Lifecycle Stage:
Technique Type: