autoemulate.emulators.base#

class Emulator(x=None, y=None, **kwargs)[source]#

Bases: ABC, ValidationMixin, ConversionMixin, TorchDeviceMixin

Base class for all emulators.

This class provides the basic structure and methods for emulators in AutoEmulate. It includes methods for fitting, predicting, and handling device management.

is_fitted_ = False#
supports_grad = False#
scheduler_cls = None#
x_transform = None#
y_transform = None#
fit(x, y)[source]#

Fit the emulator to the provided data.

classmethod model_name()[source]#

Return the full name of the model.

classmethod short_name()[source]#

Return a short name for the model.

Take the capital letters of the class name and return them as a lower case string. For example, if the class name is GaussianProcess, this will return gp.
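The described behavior can be sketched as follows; this `short_name` function is a standalone re-implementation for illustration, not the library's actual code:

```python
import re

def short_name(cls: type) -> str:
    # Collect the capital letters of the class name and lowercase them,
    # e.g. GaussianProcess -> "gp".
    return "".join(re.findall(r"[A-Z]", cls.__name__)).lower()

class GaussianProcess:
    pass

print(short_name(GaussianProcess))  # gp
```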

predict(x, with_grad=False)[source]#

Predict the output for the given input.

Parameters:
  • x (TensorLike) – Input tensor to make predictions for.

  • with_grad (bool) – Whether to enable gradient calculation. Defaults to False.

Returns:

The predicted output.

Return type:

OutputLike

abstract static is_multioutput()[source]#

Flag indicating whether the model is multioutput.

static get_tune_params()[source]#

Return a dictionary of hyperparameters to tune.

The keys in the TuneParams must be implemented as keyword arguments in the __init__ method of any subclasses.

e.g.

tune_params: TuneParams = {
    "lr": [0.01, 0.1],
    "batch_size": [16, 32],
}

model_params: ModelParams = {
    "lr": 0.01,
    "batch_size": 16,
}

class MySubClass(Emulator):
    def __init__(self, lr, batch_size):
        self.lr = lr
        self.batch_size = batch_size

classmethod get_random_params()[source]#

Return a random set of params for the model.
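Sampling a random configuration from a TuneParams-style dictionary can be sketched as below; `sample_random_params` is a hypothetical helper for illustration, not the library's implementation:

```python
import random

tune_params = {
    "lr": [0.01, 0.1],
    "batch_size": [16, 32],
}

def sample_random_params(tune_params: dict) -> dict:
    # Pick one candidate value per hyperparameter.
    return {name: random.choice(values) for name, values in tune_params.items()}

params = sample_random_params(tune_params)
print(params)
```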

classmethod scheduler_params()[source]#

Return random parameters for the learning rate scheduler.

This should be added to the get_tune_params() method of subclasses to allow tuning of the scheduler parameters.

scheduler_setup(kwargs=None)[source]#

Set up the learning rate scheduler for the emulator.

Parameters:

kwargs (dict | None) – Keyword arguments for the model. This should include scheduler_kwargs.

class DeterministicEmulator(x=None, y=None, **kwargs)[source]#

Bases: Emulator

A base class for deterministic emulators.

predict(x, with_grad=False)[source]#

Predict the output for the given input.

Parameters:
  • x (TensorLike) – Input tensor to make predictions for.

  • with_grad (bool) – Whether to enable gradient calculation. Defaults to False.

Returns:

The emulator predicted output for x.

Return type:

TensorLike

class ProbabilisticEmulator(x=None, y=None, **kwargs)[source]#

Bases: Emulator

A base class for probabilistic emulators.

predict(x, with_grad=False)[source]#

Predict the output distribution for the given input.

Parameters:
  • x (TensorLike) – Input tensor to make predictions for.

  • with_grad (bool) – Whether to enable gradient calculation. Defaults to False.

Returns:

The emulator predicted distribution for x.

Return type:

DistributionLike

predict_mean_and_variance(x, with_grad=False)[source]#

Predict mean and variance from the probabilistic output.

Parameters:
  • x (TensorLike) – Input tensor to make predictions for.

  • with_grad (bool) – Whether to enable gradient calculation. Defaults to False.

Returns:

The emulator predicted mean and variance for x.

Return type:

tuple[TensorLike, TensorLike]
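For a Gaussian output, the mean and variance can be read directly off the returned distribution; the sketch below uses a `torch.distributions.Normal` as a stand-in for whatever DistributionLike a probabilistic emulator actually returns:

```python
import torch
from torch.distributions import Normal

# Stand-in for the distribution a probabilistic emulator might return.
dist = Normal(loc=torch.tensor([1.0, 2.0]), scale=torch.tensor([0.5, 1.0]))

# Mean and variance, as a method like predict_mean_and_variance would expose them.
mean, variance = dist.mean, dist.variance
```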

class GaussianEmulator(x=None, y=None, **kwargs)[source]#

Bases: ProbabilisticEmulator

A base class for Gaussian emulators.

supports_grad = True#
predict(x, with_grad=False)[source]#

Predict the Gaussian distribution for the given input.

Parameters:
  • x (TensorLike) – Input tensor to make predictions for.

  • with_grad (bool) – Whether to enable gradient calculation. Defaults to False.

Returns:

The emulator predicted Gaussian distribution for x.

Return type:

GaussianLike

class GaussianProcessEmulator(x=None, y=None, **kwargs)[source]#

Bases: GaussianEmulator

A base class for Gaussian Process emulators.

predict(x, with_grad=False)[source]#

Predict the Gaussian distribution for the given input.

Parameters:
  • x (TensorLike) – Input tensor to make predictions for.

  • with_grad (bool) – Whether to enable gradient calculation. Defaults to False.

Returns:

The emulator predicted Gaussian distribution for x.

Return type:

GaussianLike

class PyTorchBackend(*args, **kwargs)[source]#

Bases: Module, Emulator

PyTorchBackend provides a backend for PyTorch models.

The class provides the basic structure and methods for PyTorch-based emulators to enable further subclassing and customization. Default implementations are provided so that model-specific subclasses only need to implement:

  • .__init__(): the constructor for the model

  • .forward(): the forward pass of the model

  • .get_tune_params(): the hyperparameters to tune for the model
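A minimal subclass along these lines might look like the following sketch; `nn.Module` stands in for the real PyTorchBackend base class, and the layer sizes and `n_features`/`n_outputs` names are illustrative assumptions, not part of the API:

```python
import torch
from torch import nn

class MLPEmulator(nn.Module):
    # Sketch of the three methods a PyTorchBackend subclass would implement.
    def __init__(self, n_features: int = 2, n_outputs: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, n_outputs),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

    @staticmethod
    def get_tune_params():
        return {"lr": [0.01, 0.1], "batch_size": [16, 32]}

model = MLPEmulator()
out = model(torch.randn(8, 2))
print(out.shape)  # torch.Size([8, 1])
```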

batch_size = 16#
shuffle = True#
epochs = 10#
loss_history = []#
verbose = False#
loss_fn = MSELoss()#
optimizer_cls#

alias of Adam

optimizer#
supports_grad = True#
lr = 0.1#
scheduler_cls = None#
loss_func(y_pred, y_true)[source]#

Loss function to be used for training the model.

class SklearnBackend(x=None, y=None, **kwargs)[source]#

Bases: DeterministicEmulator

SklearnBackend provides a backend for sklearn models.

The class provides the basic structure and methods for sklearn-based emulators to enable further subclassing and customization. Default implementations are provided so that model-specific subclasses only need to implement:

  • .__init__(): the constructor for the model

  • .get_tune_params(): the hyperparameters to tune for the model
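A SklearnBackend-style subclass can be sketched as below; `RidgeEmulator` is a hypothetical example that stores the wrapped sklearn estimator on a `model` attribute, mirroring the attribute listed above, without inheriting the real base class:

```python
import numpy as np
from sklearn.linear_model import Ridge

class RidgeEmulator:
    # Sketch of a SklearnBackend-style subclass: the wrapped sklearn
    # estimator is stored on `model`.
    def __init__(self, alpha: float = 1.0):
        self.model = Ridge(alpha=alpha)

    @staticmethod
    def get_tune_params():
        return {"alpha": [0.1, 1.0, 10.0]}

# Fit the wrapped estimator on simple linear data.
x = np.linspace(0, 1, 20).reshape(-1, 1)
y = 3.0 * x.ravel() + 0.5
emulator = RidgeEmulator(alpha=0.01)
emulator.model.fit(x, y)
pred = emulator.model.predict(x)
```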

model#
normalize_y = False#
y_mean#
y_std#
supports_grad = False#
class DropoutTorchBackend(*args, **kwargs)[source]#

Bases: PyTorchBackend

PyTorch backend model that is able to support dropout.