Individual Conditional Expectation Plots
Description
Individual Conditional Expectation (ICE) plots display a model's predicted output for individual instances as a function of a single feature, with all other features held fixed at their observed values for each instance. Each line on an ICE plot traces one instance's prediction as the feature of interest is varied across a grid, revealing whether different instances respond differently to that feature. This complements Partial Dependence Plots (PDPs), which show only the average trend: averaging the ICE curves at each grid value recovers the PDP, so ICE plots expose heterogeneity that the average can hide.
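As a concrete illustration, the following Python sketch computes ICE curves by hand for a house-price model. The dataset (scikit-learn's California housing data, downloaded on first use), the model choice, the feature name HouseAge, and the 100-instance subsample are illustrative assumptions rather than part of the technique itself.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor

# Fit any supervised model; ICE is model-agnostic.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

feature = "HouseAge"  # feature of interest (an assumption for illustration)
grid = np.linspace(X[feature].min(), X[feature].max(), num=50)

fig, ax = plt.subplots()
for _, row in X.sample(n=100, random_state=0).iterrows():
    # Repeat this instance once per grid value, hold every other feature at
    # its observed value, and sweep the feature of interest across the grid.
    instance = pd.DataFrame([row] * len(grid)).reset_index(drop=True)
    instance[feature] = grid
    ax.plot(grid, model.predict(instance), color="steelblue", alpha=0.3)

ax.set_xlabel(feature)
ax.set_ylabel("Predicted median house value")
ax.set_title("ICE curves (one line per instance)")
plt.show()
```

Each plotted line is one instance's prediction trajectory; parallel lines suggest a homogeneous effect, whereas crossing or diverging lines indicate that the feature affects different instances differently.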
Example Use Cases
Explainability
Examining how house price predictions vary with property age for individual properties, revealing that whilst most houses follow a declining price trend with age, historic properties (built before 1900) show different patterns due to heritage value.
Analysing how individual patients' diabetes risk predictions change with BMI, showing that whilst most patients follow the expected increasing risk pattern, some patients with specific genetic markers show different response curves.
Limitations
- Plots can become cluttered and difficult to interpret when many instances are displayed simultaneously; subsampling instances and using transparency, as in the sketch after this list, are common mitigations.
- Does not provide an automatic summary of the overall effect, so identifying patterns requires manual visual inspection (although averaging the ICE curves recovers the PDP, which can be overlaid as a reference).
- Because all other features are held at their observed values while the feature of interest is swept across a grid, some points on an ICE curve can correspond to unrealistic feature combinations when the feature of interest is correlated with other features.
- Heterogeneous curves indicate that interactions are present, but an ICE plot for a single feature cannot identify which other features drive that heterogeneity.
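The sketch below shows one way to address the first two limitations: plotting a random subsample of ICE lines and overlaying their average (the PDP). It uses scikit-learn's built-in display; the parameter names follow recent scikit-learn versions, and the dataset and model are the same illustrative assumptions as in the earlier sketch.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

PartialDependenceDisplay.from_estimator(
    model,
    X,
    features=["HouseAge"],
    kind="both",      # individual ICE lines plus their average (the PDP)
    subsample=100,    # plot a random subset of instances to reduce clutter
    random_state=0,
)
plt.show()
```

Subsampling trades completeness for readability, so it is worth checking that the chosen subset does not hide rare but important response patterns.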
Resources
Research Papers
Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation
This article presents Individual Conditional Expectation (ICE) plots, a tool for visualizing the model estimated by any supervised learning algorithm. Classical partial dependence plots (PDPs) help visualize the average partial relationship between the predicted response and one or more features. In the presence of substantial interaction effects, the partial response relationship can be heterogeneous. Thus, an average curve, such as the PDP, can obfuscate the complexity of the modeled relationship. Accordingly, ICE plots refine the partial dependence plot by graphing the functional relationship between the predicted response and the feature for individual observations. Specifically, ICE plots highlight the variation in the fitted values across the range of a covariate, suggesting where and to what extent heterogeneities might exist. In addition to providing a plotting suite for exploratory analysis, we include a visual test for additive structure in the data generating model. Through simulated examples and real data sets, we demonstrate how ICE plots can shed light on estimated models in ways PDPs cannot. Procedures outlined are available in the R package ICEbox.
Bringing a Ruler Into the Black Box: Uncovering Feature Impact from Individual Conditional Expectation Plots
As machine learning systems become more ubiquitous, methods for understanding and interpreting these models become increasingly important. In particular, practitioners are often interested both in what features the model relies on and how the model relies on them--the feature's impact on model predictions. Prior work on feature impact including partial dependence plots (PDPs) and Individual Conditional Expectation (ICE) plots has focused on a visual interpretation of feature impact. We propose a natural extension to ICE plots with ICE feature impact, a model-agnostic, performance-agnostic feature impact metric drawn out from ICE plots that can be interpreted as a close analogy to linear regression coefficients. Additionally, we introduce an in-distribution variant of ICE feature impact to vary the influence of out-of-distribution points as well as heterogeneity and non-linearity measures to characterize feature impact. Lastly, we demonstrate ICE feature impact's utility in several tasks using real-world data.
Communicating Uncertainty in Machine Learning Explanations: A Visualization Analytics Approach for Predictive Process Monitoring
As data-driven intelligent systems advance, the need for reliable and transparent decision-making mechanisms has become increasingly important. Therefore, it is essential to integrate uncertainty quantification and model explainability approaches to foster trustworthy business and operational process analytics. This study explores how model uncertainty can be effectively communicated in global and local post-hoc explanation approaches, such as Partial Dependence Plots (PDP) and Individual Conditional Expectation (ICE) plots. In addition, this study examines appropriate visualization analytics approaches to facilitate such methodological integration. By combining these two research directions, decision-makers can not only justify the plausibility of explanation-driven actionable insights but also validate their reliability. Finally, the study includes expert interviews to assess the suitability of the proposed approach and designed interface for a real-world predictive process monitoring problem in the manufacturing domain.