Integrating a user-provided simulator in an end-to-end AutoEmulate workflow#

Overview#

In this workflow we demonstrate how to integrate a cardiovascular simulator, the Naghavi model from ModularCirc, into an end-to-end AutoEmulate workflow.

The Naghavi model is a 0D (zero-dimensional) computational model of the cardiovascular system, used to simulate blood flow and pressure dynamics in the heart and blood vessels.

This demo includes:

  • Setting up parameter ranges

  • Creating samples

  • Running the simulator to generate training data for the emulator

  • Using AutoEmulate to find the best pre-processing technique and model tailored to the simulation data

  • Applying history matching to refine the model and narrow the parameter ranges

  • Performing sensitivity analysis

(Figure: end-to-end workflow diagram)

Additional dependency requirements#

This demonstration uses the Naghavi model simulator from the ModularCirc library, so you need to install ModularCirc in your existing AutoEmulate virtual environment as an additional dependency.

# ! pip install git+https://github.com/alan-turing-institute/ModularCirc.git@dev

Workflow#

1 - Create a dictionary called parameters_range which contains the names of the simulator's input parameters and their ranges.#

from autoemulate.simulations.naghavi_cardiac_ModularCirc import extract_parameter_ranges
# Usage example:
parameters_range = extract_parameter_ranges('../data/naghavi_model_parameters.json')
parameters_range
{'ao.r': (120.0, 360.0),
 'ao.c': (0.15, 0.44999999999999996),
 'art.r': (562.5, 1687.5),
 'art.c': (1.5, 4.5),
 'ven.r': (4.5, 13.5),
 'ven.c': (66.65, 199.95000000000002),
 'av.r': (3.0, 9.0),
 'mv.r': (2.05, 6.1499999999999995),
 'la.E_pas': (0.22, 0.66),
 'la.E_act': (0.225, 0.675),
 'la.v_ref': (5.0, 15.0),
 'la.k_pas': (0.01665, 0.07500000000000001),
 'lv.E_pas': (0.5, 1.5),
 'lv.E_act': (1.5, 4.5),
 'lv.v_ref': (5.0, 15.0),
 'lv.k_pas': (0.00999, 0.045)}
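
If you do not have a JSON parameter file, you can equally define the dictionary by hand. Each entry maps a parameter name to a (min, max) tuple, mirroring the output above (abridged sketch):

# Equivalent manual definition (values copied from the output above)
parameters_range = {
    'ao.r': (120.0, 360.0),
    'ao.c': (0.15, 0.45),
    'art.r': (562.5, 1687.5),
    # ... add the remaining parameters in the same (min, max) format
}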

2 - Use the LatinHypercube method from AutoEmulate to generate initial samples from the parameter ranges.#

import pandas as pd
import numpy as np
from autoemulate.experimental_design import LatinHypercube

# Generate Latin Hypercube samples
N_samples = 100
lhd = LatinHypercube(list(parameters_range.values()))
sample_array = lhd.sample(N_samples)
sample_df = pd.DataFrame(sample_array, columns=parameters_range.keys())

print("Number of parameters:", sample_df.shape[1], "Number of samples from each parameter:", sample_df.shape[0])
sample_df.head()
Number of parameters: 16 Number of samples: 100
ao.r ao.c art.r art.c ven.r ven.c av.r mv.r la.E_pas la.E_act la.v_ref la.k_pas lv.E_pas lv.E_act lv.v_ref lv.k_pas
0 181.183489 0.270318 1603.018317 2.959594 6.003299 195.369110 3.937637 3.884733 0.488983 0.513613 7.894707 0.071994 1.185785 4.021054 8.363500 0.028132
1 176.848557 0.351545 909.233665 4.223447 12.895963 183.225533 8.706071 3.621647 0.302796 0.577803 13.784270 0.026378 1.460897 2.891056 12.684470 0.024778
2 237.476807 0.167153 1628.006457 2.359281 5.509983 167.411090 5.175940 5.710833 0.578039 0.604609 9.962029 0.065479 1.492188 1.806558 14.004487 0.016258
3 277.067184 0.282469 732.473802 3.231590 11.126694 135.807780 4.876715 4.021932 0.644373 0.325448 12.339279 0.058467 0.752356 1.883375 14.215886 0.022485
4 272.561340 0.250207 1355.692548 1.860010 13.427768 114.692961 4.424005 2.191801 0.284561 0.282951 12.218897 0.072280 1.394213 2.948728 7.121979 0.032423
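
As a quick sanity check, you can verify that every sample lies within its parameter bounds:

# Check each column of sample_df against its (min, max) range
lows = np.array([low for low, high in parameters_range.values()])
highs = np.array([high for low, high in parameters_range.values()])
assert ((sample_df.values >= lows) & (sample_df.values <= highs)).all(), "sample out of bounds"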

3 - Wrap your Simulator in the AutoEmulate Simulator Base Class.#

(Figure: wrapping a user-provided simulator in the AutoEmulate workflow)
from autoemulate.simulations.naghavi_cardiac_ModularCirc import NaghaviSimulator
# Initialize simulator with specific outputs
simulator = NaghaviSimulator(
    parameters_range=parameters_range, 
    output_variables=['lv.P_i', 'lv.P_o'],  # Only the ones you're interested in
    n_cycles=300, 
    dt=0.001,
)
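
The NaghaviSimulator above already subclasses AutoEmulate's simulator base class. If you are wrapping your own simulator, the sketch below shows the general shape of such a wrapper. It is a hypothetical stand-in (the class name and the zero-valued outputs are made up, and the exact base-class interface depends on your AutoEmulate version), so consult the NaghaviSimulator source for the authoritative pattern.

import numpy as np

class MySimulator:
    """Sketch of a user-provided simulator wrapper. In practice, subclass
    AutoEmulate's simulator base class (as NaghaviSimulator does); the exact
    base-class location and required hooks depend on your AutoEmulate version."""

    def __init__(self, parameters_range, output_variables):
        self.parameters_range = parameters_range
        self.param_names = list(parameters_range.keys())
        self.output_names = list(output_variables)

    def run_batch_simulations(self, samples_df):
        # One forward run per row of samples_df; this stand-in just
        # returns zeros with the expected (n_samples, n_outputs) shape
        return np.zeros((len(samples_df), len(self.output_names)))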

4 - Run the simulator using run_batch_simulations to obtain data for training AutoEmulate.#

# Toggle exactly one of `run` or `read`; set `save` to store fresh results
run = True
save = False
read = False
if run:
    # Run batch simulations with the samples generated in Cell 1
    results = simulator.run_batch_simulations(sample_df)

    # Convert results to DataFrame for analysis
    results_df = pd.DataFrame(results)

if save:
    # Save the results to a CSV file
    results_df.to_csv('../data/simulator_results.csv', index=False)

if read:
    # Read the results from the CSV file
    results_df = pd.read_csv('../data/simulator_results.csv')
    results = results_df.to_numpy()
results_df

Note that the first 4 steps can be replaced by storing the output of your simulation in a file and then reading it into a dataframe. However, the purpose of this article is to demonstrate the use of a user-provided simulator in an end-to-end workflow.

# Inspect the names of the outputs the simulator produced
simulator.output_names

Test your simulator with our test function to make sure it is compatible with the AutoEmulate pipeline (this feature is not provided yet; a minimal hand-rolled check is sketched below).

# This should be replaced with a test written specifically for the user-provided simulator
# ! pytest ../../tests/test_base_simulator.py
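
In the meantime, a minimal hand-rolled sanity check is to run a tiny batch and confirm the output shape. This assumes run_batch_simulations returns an array-like with one row per sample and one column per output, as in this workflow:

# Minimal compatibility check (assumes results are (n_samples, n_outputs))
small_df = sample_df.head(2)
small_results = np.asarray(simulator.run_batch_simulations(small_df))
assert small_results.shape == (len(small_df), len(simulator.output_names))
assert np.isfinite(small_results).all(), "simulator returned NaN or inf"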

5 - Set up AutoEmulate.#

  • Choose, from the available target pre-processing methods, the ones you would like to investigate.

  • Choose, from the available models, the ones you would like to investigate.

  • Set up AutoEmulate.

import numpy as np
from autoemulate.compare import AutoEmulate
from autoemulate.plotting import _predict_with_optional_std


preprocessing_methods = [{"name" : "PCA", "params" : {"reduced_dim": 2}}]
em = AutoEmulate()
em.setup(
    sample_df,
    results,
    models=["gp"],
    scale_output=True,
    reduce_dim_output=True,
    preprocessing_methods=preprocessing_methods,
)

6 - Run compare to train AutoEmulate and extract the best model.#

best_model = em.compare()

7 - Examine the summary of cross-validation.#

em.summarise_cv()
best_model

8 - Extract the desired model, run evaluation and refit using the whole dataset.#

  • You can use the best_model selected by AutoEmulate

  • or you can extract the model and pre-processing technique displayed in em.summarise_cv()

gp = em.get_model('GaussianProcess')
em.evaluate(gp)
# for best model change the line above to:
# em.evaluate(best_model)
gp_final = em.refit(gp)
gp_final
em.plot_eval(gp_final)
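
After refitting, the emulator can be used for fast predictions on new parameter sets. The sketch below assumes the refitted model exposes a scikit-learn-style predict method; check your AutoEmulate version's API if this fails.

# Predict outputs for 10 fresh Latin Hypercube samples
# (assumes a scikit-learn-style predict method on the refitted model)
new_inputs = pd.DataFrame(lhd.sample(10), columns=parameters_range.keys())
predictions = gp_final.predict(new_inputs)
print(predictions.shape)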

9 - Sensitivity Analysis#

Use AutoEmulate to perform sensitivity analysis. This helps identify the parameters with the greatest impact on the outputs, narrowing the search space for model calibration.

Sobol Interpretation:

  • \(S_1\) values sum to ≤ 1.0 (exact fraction of variance explained)

  • \(S_t - S_1\) = interaction effects involving that parameter

  • Large \(S_t - S_1\) gap indicates strong interactions

Morris Interpretation:

  • High \(\mu^*\), Low \(\sigma\): Important parameter with linear/monotonic effects

  • High \(\mu^*\), High \(\sigma\): Important parameter with non-linear effects or interactions

  • Low \(\mu^*\), High \(\sigma\): Parameter involved in interactions but not individually important

  • Low \(\mu^*\), Low \(\sigma\): Unimportant parameter
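
To make these interpretation rules concrete, the snippet below classifies parameters from Morris statistics. The \(\mu^*\) and \(\sigma\) values and the cut-offs are made-up placeholders, not results from this model:

# Hypothetical Morris statistics (placeholder values, not model output)
mu_star = {'ao.r': 0.90, 'art.c': 0.75, 'la.v_ref': 0.05}
sigma = {'ao.r': 0.10, 'art.c': 0.60, 'la.v_ref': 0.50}

for name in mu_star:
    important = mu_star[name] > 0.5   # illustrative cut-off
    nonlinear = sigma[name] > 0.3     # illustrative cut-off
    if important and not nonlinear:
        label = "important, linear/monotonic effects"
    elif important and nonlinear:
        label = "important, non-linear effects or interactions"
    elif nonlinear:
        label = "involved in interactions, not individually important"
    else:
        label = "unimportant"
    print(f"{name}: {label}")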

# Extract parameter names and bounds from the dictionary
parameter_names = list(parameters_range.keys())
parameter_bounds = list(parameters_range.values())

# Define the problem dictionary for Sobol sensitivity analysis
problem = {
    'num_vars': len(parameter_names),
    'names': parameter_names,
    'bounds': parameter_bounds
}
si = em.sensitivity_analysis(problem=problem, method='morris')
si
em.plot_sensitivity_analysis(si)

10 - History Matching#

Once you have the final model, running history matching can improve it further. For each parameter set, the implausibility metric is calculated using the following relation:

\(I_i(\overline{x_0}) = \frac{|z_i - \mathbb{E}(f_i(\overline{x_0}))|}{\sqrt{\text{Var}[z_i - \mathbb{E}(f_i(\overline{x_0}))]}}\)

If the implausibility \(I_i\) exceeds a threshold value, the corresponding points are ruled out. The outcome of history matching is a set of NROY (Not Ruled Out Yet) points and RO (Ruled Out) points.
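
As a worked example with made-up numbers, reading \(\text{Var}[z_i - \mathbb{E}(f_i(\overline{x_0}))]\) as the sum of the observation variance and the emulator variance: an observation of 10.0 with variance 0.1 and an emulator mean of 9.5 with variance 0.25 give

import numpy as np

z_mean, z_var = 10.0, 0.1     # observation and its variance (made-up)
f_mean, f_var = 9.5, 0.25     # emulator mean and variance at x0 (made-up)
I = abs(z_mean - f_mean) / np.sqrt(z_var + f_var)
print(I)  # ~0.85, well below a threshold of 3.0, so x0 would not be ruled out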

  • Create a dictionary of your observations; the keys should match the output names of your simulator

  • Create the history matching object

  • Run history matching

from autoemulate.history_matching import HistoryMatching

# Define observed data with means and variances
observations = {
    'lv.P_i_min': (5.0, 0.1),   # Minimum of minimum LV pressure
    'lv.P_i_max': (20.0, 0.1),   # Maximum of minimum LV pressure
    'lv.P_i_mean': (10.0, 0.1),  # Mean of minimum LV pressure
    'lv.P_i_range': (15.0, 0.5), # Range of minimum LV pressure
    'lv.P_o_min': (1.0, 0.1),  # Minimum of maximum LV pressure
    'lv.P_o_max': (13.0, 0.1),  # Maximum of maximum LV pressure
    'lv.P_o_mean': (12.0, 0.1), # Mean of maximum LV pressure
    'lv.P_o_range': (20.0, 0.5)  # Range of maximum LV pressure
}

# Create history matcher
hm = HistoryMatching(
    simulator=simulator,
    observations=observations,
    threshold=3.0
)

# Run history matching
all_samples, all_impl_scores, emulator = hm.run(
    n_waves=30,
    n_samples_per_wave=100,
    emulator_predict=True,
    initial_emulator=gp_final,
)
em.plot_eval(emulator)
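
You can also filter the returned samples directly. The sketch below assumes all_impl_scores is array-like with one row per sample (and one column per observed output):

# Keep only NROY samples: implausibility below the threshold for every output
impl = np.asarray(all_impl_scores).reshape(len(all_impl_scores), -1)
nroy_mask = (impl < 3.0).all(axis=1)
nroy_samples = np.asarray(all_samples)[nroy_mask]
print(f"{nroy_mask.sum()} of {len(nroy_mask)} samples are NROY")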

11 - Use the interactive dashboard to inspect the results of history matching#

from autoemulate.history_matching_dashboard import HistoryMatchingDashboard
dashboard = HistoryMatchingDashboard(
    samples=all_samples,
    impl_scores=all_impl_scores,
    param_names=simulator.param_names,
    output_names=simulator.output_names,
)
dashboard.display()

<img src="https://raw.githubusercontent.com/alan-turing-institute/autoemulate/refs/heads/main/misc/vis_dashboard_pic_sample.png" alt="Work Flow" style="width:100%;"/>

Footnote: Testing the dashboard#

Sometimes it is hard to know whether the results we are seeing reflect a bug in the code or simulation results that are simply more interesting than we expected. Here is a small test dataset that exercises the dashboard, so you can see how the plots are supposed to look and what they should show.

# Create a test sample with KNOWN NROY regions
test_samples = np.array([[x, y] for x in np.linspace(0,1,100) 
                               for y in np.linspace(0,1,100)])
test_scores = (abs(test_samples[:, 0]-0.5)+abs(test_samples[:, 1]-0.5)).reshape(-1, 1)

# Should show a clear diamond-shaped NROY region centred at (0.5, 0.5)
test_dash = HistoryMatchingDashboard(
    samples=test_samples,
    impl_scores=test_scores,
    param_names=["p1", "p2"],
    output_names=["out1"],
    threshold=0.7  # roughly 80% of points should be NROY
)
#test_dash.display()
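
Since the scores here are just \(|x - 0.5| + |y - 0.5|\) on the unit square, the expected NROY fraction can be checked directly; the region below 0.7 covers roughly 80% of the square:

# Fraction of test points below the threshold (expected ~0.8)
nroy_fraction = (test_scores.ravel() < 0.7).mean()
print(f"NROY fraction: {nroy_fraction:.2f}")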