deepsensor.model.nps#

compute_encoding_tensor(model, task)[source]#

Compute the encoding tensor for a given task.

Parameters:
  • model – Model object.

  • task (Task) – Task object containing context and target sets.

Returns:

encoding (numpy.ndarray) – Encoding tensor: the context sets encoded onto the model's internal grid, including density channels.
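The encoding is produced by the ConvNP's SetConv encoder, which smooths the context observations onto the internal grid and appends a density channel. A minimal 1D numpy sketch of the idea (the Gaussian weighting, grid size, and helper name here are illustrative assumptions, not deepsensor's exact implementation):

```python
import numpy as np

def setconv_encode(x_ctx, y_ctx, x_grid, length_scale=0.01, epsilon=1e-2):
    """Toy 1D SetConv encoding: smooth context points onto a grid.

    Returns a (2, len(x_grid)) tensor: row 0 is the density channel,
    row 1 is the density-normalised data channel.
    """
    # Gaussian RBF weights between each grid point and each context point
    w = np.exp(-0.5 * (x_grid[:, None] - x_ctx[None, :]) ** 2 / length_scale**2)
    density = w.sum(axis=1)  # how much context is "nearby" each grid point
    data = w @ y_ctx         # weighted sum of the context observations
    # epsilon (cf. the `epsilon` kwarg) guards against division by zero
    return np.stack([density, data / (density + epsilon)])

x_ctx = np.array([0.2, 0.8])
y_ctx = np.array([1.0, -1.0])
x_grid = np.linspace(0, 1, 100)  # cf. internal_density=100 points per unit
enc = setconv_encode(x_ctx, y_ctx, x_grid)
print(enc.shape)  # (2, 100)
```

Near a context point the density channel is close to 1 and the data channel approaches the observed value; far from all context, both decay toward zero.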

construct_neural_process(dim_x=2, dim_yc=1, dim_yt=1, dim_aux_t=None, dim_lv=0, conv_arch='unet', unet_channels=(64, 64, 64, 64), unet_resize_convs=True, unet_resize_conv_interp_method='bilinear', aux_t_mlp_layers=None, likelihood='cnp', unet_kernels=5, internal_density=100, encoder_scales=0.01, encoder_scales_learnable=False, decoder_scale=0.01, decoder_scale_learnable=False, num_basis_functions=64, epsilon=0.01)[source]#

Construct a neuralprocesses ConvNP model.

See: wesselb/neuralprocesses

Docstring below modified from neuralprocesses. If more kwargs are needed, they must be explicitly passed to the neuralprocesses constructor (it is not currently safe to forward **kwargs here).

Parameters:
  • dim_x (int, optional) – Dimensionality of the inputs. Defaults to 2.

  • dim_yc (int or tuple[int], optional) – Dimensionality of the outputs of the context set. Set this if the dimensionality of the outputs of the context set differs from that of the target set, or if you want to use multiple context sets; in the latter case, pass a tuple of integers giving the respective output dimensionalities. Defaults to 1.

  • dim_yt (int, optional) – Dimensionality of the outputs of the target set. Set this if it differs from the dimensionality of the outputs of the context set. Defaults to 1.

  • dim_aux_t (int, optional) – Dimensionality of target-specific auxiliary variables.

  • internal_density (int, optional) – Density of the ConvNP’s internal grid (in terms of number of points per 1x1 unit square). Defaults to 100.

  • likelihood (str, optional) – Likelihood. Must be one of "cnp" (equivalently "het"), "gnp" (equivalently "lowrank"), or "cnp-spikes-beta" (equivalently "spikes-beta"). Defaults to "cnp".

  • conv_arch (str, optional) – Convolutional architecture to use. Must be one of "unet[-res][-sep]" or "conv[-res][-sep]". Defaults to "unet".

  • unet_channels (tuple[int], optional) – Channels of every layer of the UNet. Defaults to four layers, each with 64 channels.

  • unet_kernels (int or tuple[int], optional) – Sizes of the kernels in the UNet. Defaults to 5.

  • unet_resize_convs (bool, optional) – Use resize convolutions rather than transposed convolutions in the UNet. Defaults to True.

  • unet_resize_conv_interp_method (str, optional) – Interpolation method for the resize convolutions in the UNet. Defaults to "bilinear".

  • num_basis_functions (int, optional) – Number of basis functions for the low-rank likelihood. Defaults to 64.

  • dim_lv (int, optional) – Dimensionality of the latent variable. Setting to >0 constructs a latent neural process. Defaults to 0.

  • encoder_scales (float or tuple[float], optional) – Initial value for the length scales of the set convolutions for the context sets' embeddings. Pass a tuple with one entry per context set to use a different value for each set, or a single value to use the same value for all context sets. Defaults to 1 / internal_density.

  • encoder_scales_learnable (bool, optional) – Whether the encoder SetConv length scale(s) are learnable. Defaults to False.

  • decoder_scale (float, optional) – Initial value for the length scale of the set convolution in the decoder. Defaults to 1 / internal_density.

  • decoder_scale_learnable (bool, optional) – Whether the decoder SetConv length scale(s) are learnable. Defaults to False.

  • aux_t_mlp_layers (tuple[int], optional) – Widths of the layers of the MLP for the target-specific auxiliary variable. Defaults to three layers of width 128.

  • epsilon (float, optional) – Epsilon added by the set convolutions before dividing by the density channel. Defaults to 1e-2.

Returns:

model.Model – ConvNP model.

Raises:

NotImplementedError – If the specified backend has no default dtype.
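With likelihood="gnp" (equivalently "lowrank"), the model parameterises a correlated Gaussian over the targets via a low-rank-plus-diagonal covariance whose rank is num_basis_functions. A small numpy sketch of that construction only (variable names and sizes are illustrative, not the neuralprocesses internals):

```python
import numpy as np

rng = np.random.default_rng(0)
n_targets, num_basis_functions = 50, 64  # cf. the num_basis_functions kwarg

# The network head predicts a mean, a per-point noise variance, and a factor B
mean = rng.normal(size=n_targets)
noise_var = np.full(n_targets, 0.1)
B = rng.normal(size=(n_targets, num_basis_functions)) / np.sqrt(num_basis_functions)

# Low-rank-plus-diagonal covariance: positive definite by construction
cov = B @ B.T + np.diag(noise_var)

print(cov.shape)  # (50, 50)
```

The low-rank factor lets the model express correlations between target points at O(n · k) parameter cost instead of O(n²), which is why it is the default way to get coherent joint samples.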

convert_task_to_nps_args(task)[source]#

Infer and build the neuralprocesses model call arguments from a Task object.

Parameters:

task (Task) – Task object containing context and target sets.

Returns:

tuple[list[tuple[numpy.ndarray, numpy.ndarray]], numpy.ndarray, numpy.ndarray, dict] – …
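The return type mirrors what a neuralprocesses model call expects: a list of (x, y) tuples (one per context set), the target inputs, the target outputs, and a dict of extra model kwargs. A hypothetical numpy mock of that structure, assuming the channels-first (batch, dim, n_points) array convention (this is hand-built for illustration, not produced by deepsensor):

```python
import numpy as np

# Two context sets with different output dimensionalities (cf. dim_yc)
context_data = [
    (np.random.rand(1, 2, 20), np.random.rand(1, 1, 20)),  # (batch, dim, n_points)
    (np.random.rand(1, 2, 35), np.random.rand(1, 3, 35)),
]
xt = np.random.rand(1, 2, 10)  # target inputs
yt = np.random.rand(1, 1, 10)  # target outputs
model_kwargs = {}              # e.g. target-specific auxiliary variables

nps_args = (context_data, xt, yt, model_kwargs)
```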

run_nps_model(neural_process, task, n_samples=None, requires_grad=False)[source]#

Run neuralprocesses model.

Parameters:
  • neural_process (neuralprocesses.Model) – Neural process model.

  • task (Task) – Task object containing context and target sets.

  • n_samples (int, optional) – Number of samples to draw from the model. Defaults to None (single sample).

  • requires_grad (bool, optional) – Whether to require gradients. Defaults to False.

Returns:

neuralprocesses.distributions.Distribution – Distribution object containing the model’s predictions.

run_nps_model_ar(neural_process, task, num_samples=1)[source]#

Run the neural process model in autoregressive (AR) mode.

Parameters:
  • neural_process (neuralprocesses.Model) – Neural process model.

  • task (Task) – Task object containing context and target sets.

  • num_samples (int, optional) – Number of samples to draw from the model. Defaults to 1.

Returns:

tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray] – Tuple of mean, variance, noiseless samples, and noisy samples.
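AR mode draws target values one location at a time, feeding each draw back in as context before predicting the next location. A toy numpy sketch of the loop, with the real ConvNP forward pass replaced by an illustrative stand-in predict function (everything here is a stub, not deepsensor's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(x_ctx, y_ctx, x_target):
    """Stand-in for a model forward pass: nearest-context mean, fixed noise."""
    nearest = np.argmin(np.abs(np.asarray(x_ctx) - x_target))
    return y_ctx[nearest], 0.1

def ar_sample(x_targets, x_ctx, y_ctx):
    """Sample targets autoregressively, appending each draw to the context."""
    x_ctx, y_ctx = list(x_ctx), list(y_ctx)
    noiseless, noisy = [], []
    for xt in x_targets:
        mean, var = predict(x_ctx, y_ctx, xt)
        sample = rng.normal(mean, np.sqrt(var))
        noiseless.append(mean)
        noisy.append(sample)
        x_ctx.append(xt)      # feed the draw back in as context...
        y_ctx.append(sample)  # ...so later targets condition on it
    return np.array(noiseless), np.array(noisy)

mean_trace, samples = ar_sample(np.linspace(0, 1, 5), [0.0], [1.0])
print(samples.shape)  # (5,)
```

Because each sample conditions on the previous draws, AR samples are mutually coherent even when the underlying likelihood factorises over targets, which is the point of running the model this way.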