autoemulate.calibration.bayes#

class BayesianCalibration(emulator, parameter_range, observations, observation_noise=0.01, model_uncertainty=False, model_discrepancy=0.0, calibration_params=None, device=None, log_level='progress_bar')[source]#

Bases: TorchDeviceMixin, BayesianMixin

Bayesian calibration using Markov Chain Monte Carlo (MCMC).

Bayesian calibration estimates the probability distribution over input parameters given observed data, providing uncertainty estimates.
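To illustrate the idea, here is a minimal, self-contained sketch of MCMC-based calibration using a toy random-walk Metropolis-Hastings sampler in plain torch. This is not autoemulate's implementation (which uses Pyro's MCMC kernels); the model `f(theta) = 2 * theta`, the prior range, and the noise level are all hypothetical.

```python
import torch

# Hypothetical toy setup: calibrate one input parameter of a known
# "emulator" f(theta) = 2 * theta against noisy observations.
torch.manual_seed(0)
true_theta = 1.5
observations = 2 * true_theta + 0.1 * torch.randn(20)

def log_posterior(theta: torch.Tensor) -> torch.Tensor:
    # Uniform prior on [0, 3]: density is zero outside the parameter range.
    if theta < 0 or theta > 3:
        return torch.tensor(float("-inf"))
    pred = 2 * theta
    # Gaussian likelihood with known observation noise (sigma = 0.1).
    return torch.distributions.Normal(pred, 0.1).log_prob(observations).sum()

# Random-walk Metropolis-Hastings: the simplest MCMC scheme.
samples = []
theta = torch.tensor(0.5)
lp = log_posterior(theta)
for _ in range(2000):
    proposal = theta + 0.05 * torch.randn(())
    lp_new = log_posterior(proposal)
    if torch.rand(()) < torch.exp(lp_new - lp):  # accept/reject step
        theta, lp = proposal, lp_new
    samples.append(theta)

posterior = torch.stack(samples[500:])  # discard burn-in
print(posterior.mean())  # should be close to true_theta = 1.5
```

The output of the sampler is a distribution over the input parameter rather than a point estimate, which is what provides the uncertainty quantification described above.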

model(predict=False)[source]#

The Pyro probabilistic model used for MCMC sampling and posterior prediction.

Parameters:

predict (bool) – Whether to run the model with existing samples to generate the posterior predictive distribution. Used with pyro.infer.Predictive.
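What "run the model with existing samples" means can be sketched without Pyro: given posterior draws of the parameters, the posterior predictive is obtained by pushing each draw through the model and adding observation noise. The emulator and posterior draws below are stand-ins, not autoemulate objects.

```python
import torch

torch.manual_seed(0)
# Stand-in for posterior draws of a single parameter theta.
posterior_theta = 1.5 + 0.05 * torch.randn(1000)

def emulator(theta: torch.Tensor) -> torch.Tensor:
    # Stand-in for a fitted emulator's mean prediction.
    return 2 * theta

observation_noise = 0.1
# One predictive draw per posterior sample: y ~ Normal(f(theta), sigma).
predictive = emulator(posterior_theta) + observation_noise * torch.randn(1000)
print(predictive.shape)  # torch.Size([1000])
```

In the real class, pyro.infer.Predictive performs this pushforward for every sampled site in the Pyro model.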

extract_log_probabilities(mcmc, model, device=None)[source]#

Extract log probabilities from MCMC samples for evidence computation.

This function extracts posterior samples from a Pyro MCMC object and computes the log probability of each sample under the given probabilistic model. The results are formatted for use with evidence estimation methods like Harmonic.

Parameters:
  • mcmc (MCMC) – Fitted Pyro MCMC object containing posterior samples. The MCMC object should have been run with multiple chains for best results.

  • model (Callable) – The Pyro probabilistic model used in MCMC sampling. This should be the same model function passed to the MCMC kernel during sampling.

  • device (DeviceLike | None, optional) – Device for tensor operations (e.g., ‘cpu’, ‘cuda’). If None, uses the default device. Default is None.

Returns:

A tuple containing:

  • samples: Tensor of shape (num_chains, num_samples_per_chain, ndim) containing the posterior samples with parameters stacked in the last dimension.

  • log_probs: Tensor of shape (num_chains, num_samples_per_chain) containing the log probability of each sample under the model.

Return type:

tuple[torch.Tensor, torch.Tensor]

Raises:
  • ValueError – If the MCMC object has no samples or if sample extraction fails.

  • RuntimeError – If log probability computation fails for any sample.

Notes

This function performs the following steps:

  1. Extracts samples from the MCMC object grouped by chain.
  2. For each sample, conditions the model on the sampled parameter values.
  3. Traces the conditioned model to compute log probabilities.
  4. Returns samples and log probabilities in a format suitable for Harmonic.

The log probabilities include contributions from both the prior and likelihood, representing the unnormalized posterior density at each sample point.
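A minimal sketch of this "prior plus likelihood" computation, using torch.distributions on a hypothetical one-parameter model (a standard-normal prior on theta and a Gaussian likelihood) rather than the library's conditioned Pyro trace:

```python
import torch
from torch.distributions import Normal

# Hypothetical model: theta ~ Normal(0, 1) prior,
# y ~ Normal(theta, 0.1) likelihood for each observation.
observations = torch.tensor([0.9, 1.1, 1.0])

def unnormalized_log_posterior(theta: torch.Tensor) -> torch.Tensor:
    log_prior = Normal(0.0, 1.0).log_prob(theta)
    log_lik = Normal(theta, 0.1).log_prob(observations).sum()
    # Prior + likelihood = unnormalized posterior density at theta.
    return log_prior + log_lik

# Evaluate at each posterior sample, mimicking the per-sample trace.
samples = torch.tensor([0.8, 1.0, 1.2])
log_probs = torch.stack([unnormalized_log_posterior(t) for t in samples])
print(log_probs.shape)  # torch.Size([3])
```

In the real function, conditioning and tracing the Pyro model accumulates these same prior and likelihood terms across all sample sites automatically.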

Examples

>>> from autoemulate.calibration import (
...     BayesianCalibration,
...     extract_log_probabilities,
... )
>>> # After running MCMC calibration
>>> bc = BayesianCalibration(emulator, param_range, observations)
>>> mcmc = bc.run_mcmc(num_samples=1000, num_chains=4)
>>> samples, log_probs = extract_log_probabilities(mcmc, bc.model)
>>> print(samples.shape)  # (4, 1000, 2) for 2 parameters
>>> print(log_probs.shape)  # (4, 1000)

See also

BayesianCalibration

Class for Bayesian calibration with MCMC

EvidenceComputation

Class for computing Bayesian evidence