Covariance System
Overview
Some surrogate models allow for a covariance function (often referred to as a kernel function, or simply a kernel) to project the input parameters into a different input space. The Gaussian process surrogate is an example of such a surrogate. These covariance functions can be defined in the [Covariance] block.
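As a point of reference, the SquaredExponentialCovariance used in the examples below computes covariance entries of the standard squared exponential form (a sketch of the usual convention; the object's own documentation is authoritative for the exact definition):

\begin{equation}
k(x, x') = \sigma_f^2 \exp\left(-\frac{1}{2}\sum_{i=1}^{d} \frac{(x_i - x'_i)^2}{\ell_i^2}\right) + \sigma_n^2 \, \delta_{x,x'},
\end{equation}

where \(\sigma_f^2\) corresponds to signal_variance, \(\ell_i\) to the entries of length_factor, and \(\sigma_n^2\) to noise_variance.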
Creating a Covariance Function
A covariance function is created by inheriting from CovarianceFunctionBase
and overriding the methods in the base class, as in the sketch below.
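The following is a minimal, hypothetical sketch of a user-defined covariance function. The class name MyCovariance is made up, and the computeCovarianceMatrix() signature is assumed to follow the pattern of the built-in kernels; consult CovarianceFunctionBase for the exact virtual interface.

#include "CovarianceFunctionBase.h"
#include <cmath>

class MyCovariance : public CovarianceFunctionBase
{
public:
  static InputParameters validParams()
  {
    InputParameters params = CovarianceFunctionBase::validParams();
    params.addClassDescription("Example user-defined covariance function.");
    return params;
  }

  MyCovariance(const InputParameters & parameters) : CovarianceFunctionBase(parameters) {}

  /// Fill K(i, j) with the covariance between row i of x and row j of xp
  void computeCovarianceMatrix(RealEigenMatrix & K,
                               const RealEigenMatrix & x,
                               const RealEigenMatrix & xp,
                               const bool is_self_covariance) const override
  {
    K.resize(x.rows(), xp.rows());
    for (Eigen::Index i = 0; i < x.rows(); ++i)
      for (Eigen::Index j = 0; j < xp.rows(); ++j)
        K(i, j) = std::exp(-0.5 * (x.row(i) - xp.row(j)).squaredNorm());
    // is_self_covariance is typically used to add a noise term on the diagonal
  }
};

The new class would then be registered with registerMooseObject so it can be constructed from the [Covariance] block.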
Using a Covariance Function
In the GaussianProcess
GaussianProcess is a utility class that incorporates the data structures and functions necessary to create, train, and use Gaussian processes. One of the most important members of this handler class is the covariance function:
CovarianceFunctionBase * _covariance_function = nullptr;
The covariance function can be initialized in the handler by following the examples given in the source listings below. Objects like GaussianProcessTrainer or GaussianProcess can then access the covariance function through the handler class.
CovarianceInterface
Alternatively, by inheriting from CovarianceInterface, the child classes can easily fetch covariance functions using the helper functions. Good examples are the GaussianProcessTrainer and GaussianProcess, which utilize the helper functions to link an input covariance function to the GaussianProcess:
_gp.initialize(getCovarianceFunctionByName(parameters.get<UserObjectName>("covariance_function")),
tune_parameters,
lower_bounds,
upper_bounds);
_n_outputs = _gp.getCovarFunction().numOutputs();
In a Surrogate
If the surrogate loads the training data from a file, the LoadCovarianceDataAction automatically reconstructs the covariance object used in the training phase and calls the surrogate's setupCovariance() method to make the linkage. This reconstruction is possible because the trainer stores the hyperparameter map in its Gaussian Process handler for later use in the surrogate.
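The linkage step itself can be as simple as handing the reconstructed covariance object to the surrogate's Gaussian Process handler. The following is an illustrative sketch (the class name MyGPSurrogate is hypothetical); it uses only linkCovarianceFunction() and getCovarFunctionPtr() from the handler API shown below, plus getCovarianceFunctionByName() from CovarianceInterface.

void
MyGPSurrogate::setupCovariance(UserObjectName covar_name)
{
  // Guard against re-linking a covariance function that is already set
  if (_gp.getCovarFunctionPtr() != nullptr)
    ::mooseError("Attempting to redefine the covariance function of this surrogate.");
  _gp.linkCovarianceFunction(getCovarianceFunctionByName(covar_name));
}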
Dependencies Between Covariance Objects
Some covariance functions may depend on other covariance functions to add complexity. A typical scenario is a Gaussian process built using the Linear Model of Coregionalization (LMC), which predicts multiple outputs at once by handling the covariance between the outputs as well. A sketch of how such a dependency can be declared and resolved follows.
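The sketch below shows one way a composite covariance might declare and fetch its dependencies, assuming the base class provides the CovarianceInterface helpers. The class name MyCompositeCovariance, the parameter name covariance_functions, and the _covariances member are illustrative assumptions; see LMC in the module for the actual mechanism.

InputParameters
MyCompositeCovariance::validParams()
{
  InputParameters params = CovarianceFunctionBase::validParams();
  params.addClassDescription("Example covariance built from other covariance functions.");
  // Hypothetical parameter listing the covariance objects this one depends on
  params.addParam<std::vector<UserObjectName>>(
      "covariance_functions", "Covariance functions this object depends on.");
  return params;
}

MyCompositeCovariance::MyCompositeCovariance(const InputParameters & parameters)
  : CovarianceFunctionBase(parameters)
{
  // Dependent covariances are fetched by name via the CovarianceInterface helper
  // (_covariances is an assumed std::vector<CovarianceFunctionBase *> member)
  for (const auto & name : getParam<std::vector<UserObjectName>>("covariance_functions"))
    _covariances.push_back(getCovarianceFunctionByName(name));
}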
Example Input File Syntax
[Covariance]
[covar]
type = SquaredExponentialCovariance
signal_variance = 1 # Use a signal variance of 1 in the kernel
noise_variance = 1e-6 # A small amount of noise can help with numerical stability
length_factor = '0.38971 0.38971' # Select a length factor for each parameter (k and q)
[]
[]
Available Objects
- Stochastic Tools App
- ExponentialCovariance: Exponential covariance function.
- LMC: Covariance function for multi-output Gaussian Processes based on the Linear Model of Coregionalization (LMC).
- MaternHalfIntCovariance: Matern half-integer covariance function.
- SquaredExponentialCovariance: Squared Exponential covariance function.
Available Actions
- Stochastic Tools App
- AddCovarianceAction: Adds Covariance objects contained within the [Trainers] and [Surrogates] input blocks.
(contrib/moose/modules/stochastic_tools/include/utils/GaussianProcess.h)
// This file is part of the MOOSE framework
// https://www.mooseframework.org
//
// All rights reserved, see COPYRIGHT for full restrictions
// https://github.com/idaholab/moose/blob/master/COPYRIGHT
//
// Licensed under LGPL 2.1, please see LICENSE for details
// https://www.gnu.org/licenses/lgpl-2.1.html
#pragma once
#include "Standardizer.h"
#include <Eigen/Dense>
#include "CovarianceFunctionBase.h"
namespace StochasticTools
{
/**
* Utility class dedicated to holding structures and functions common to
* Gaussian Processes. It can be used to standardize parameters, manipulate
* covariance data and compute additional stored matrices.
*/
class GaussianProcess
{
public:
GaussianProcess();
/**
* Initializes the most important structures in the Gaussian Process: the
* covariance function and a tuning map which is used if the user requires
* parameter tuning.
* @param covariance_function Pointer to the covariance function that
* needs to be used for the Gaussian Process.
* @param params_to_tune List of parameters which need to be tuned.
* @param min List of lower bounds for the parameter tuning.
* @param max List of upper bounds for parameter tuning.
*/
void initialize(CovarianceFunctionBase * covariance_function,
const std::vector<std::string> & params_to_tune,
const std::vector<Real> & min = std::vector<Real>(),
const std::vector<Real> & max = std::vector<Real>());
/// Structure containing the optimization options for
/// hyperparameter-tuning
struct GPOptimizerOptions
{
/// Default constructor
GPOptimizerOptions();
/**
* Construct a new GPOptimizerOptions object using
* input parameters that will control the optimization
* @param show_every_nth_iteration Show the loss value at every n-th iteration; if set to 0,
* nothing is displayed
* @param num_iter The number of iterations we want in the optimization of the GP
* @param batch_size The number of samples in each batch
* @param learning_rate The learning rate for parameter updates
* @param b1 Tuning constant for the Adam algorithm
* @param b2 Tuning constant for the Adam algorithm
* @param eps Tuning constant for the Adam algorithm
* @param lambda Tuning constant for the Adam algorithm
*/
GPOptimizerOptions(const unsigned int show_every_nth_iteration = 0,
const unsigned int num_iter = 1000,
const unsigned int batch_size = 0,
const Real learning_rate = 1e-3,
const Real b1 = 0.9,
const Real b2 = 0.999,
const Real eps = 1e-7,
const Real lambda = 0.0);
/// Switch to show the loss value at every n-th iteration; 0 disables the output
const unsigned int show_every_nth_iteration = 0;
/// The number of iterations for Adam optimizer
const unsigned int num_iter = 1000;
/// The batch size for Adam optimizer
const unsigned int batch_size = 0;
/// The learning rate for Adam optimizer
const Real learning_rate = 1e-3;
/// Tuning parameter from the paper
const Real b1 = 0.9;
/// Tuning parameter from the paper
const Real b2 = 0.999;
/// Tuning parameter from the paper
const Real eps = 1e-7;
/// Tuning parameter from the paper
const Real lambda = 0.0;
};
/**
* Sets up the covariance matrix given data and optimization options.
* @param training_params The training parameter values (x values) for the
* covariance matrix.
* @param training_data The training data (y values) for the inversion of the
* covariance matrix.
* @param opts The optimizer options.
*/
void setupCovarianceMatrix(const RealEigenMatrix & training_params,
const RealEigenMatrix & training_data,
const GPOptimizerOptions & opts);
/**
* Sets up the Cholesky decomposition and inverse action of the covariance matrix.
* @param input The vector/matrix which right multiplies the inverse of the covariance matrix.
*/
void setupStoredMatrices(const RealEigenMatrix & input);
/**
* Finds and links the covariance function to this object. Used mainly in the
* covariance data action.
* @param covariance_function Pointer to the covariance function that
* needs to be used for the Gaussian Process.
*/
void linkCovarianceFunction(CovarianceFunctionBase * covariance_function);
/**
* Sets up the tuning map which is used if the user requires parameter tuning.
* @param params_to_tune List of parameters which need to be tuned.
* @param min List of lower bounds for the parameter tuning.
* @param max List of upper bounds for parameter tuning.
*/
void generateTuningMap(const std::vector<std::string> & params_to_tune,
const std::vector<Real> & min = std::vector<Real>(),
const std::vector<Real> & max = std::vector<Real>());
/**
* Standardizes the vector of input parameters (x values).
* @param parameters The vector/matrix of input data.
* @param keep_moments If previously computed or new moments are to be used.
*/
void standardizeParameters(RealEigenMatrix & parameters, bool keep_moments = false);
/**
* Standardizes the vector of responses (y values).
* @param data The vector/matrix of input data.
* @param keep_moments If previously computed or new moments are to be used.
*/
void standardizeData(RealEigenMatrix & data, bool keep_moments = false);
// Tune hyperparameters using Adam
void tuneHyperParamsAdam(const RealEigenMatrix & training_params,
const RealEigenMatrix & training_data,
const GPOptimizerOptions & opts);
// Computes the loss function
Real getLoss(RealEigenMatrix & inputs, RealEigenMatrix & outputs);
// Computes the gradient of the loss function
std::vector<Real> getGradient(RealEigenMatrix & inputs) const;
/// Function used to convert the hyperparameter maps in this object to
/// vectors
void mapToVec(
const std::unordered_map<std::string, std::tuple<unsigned int, unsigned int, Real, Real>> &
tuning_data,
const std::unordered_map<std::string, Real> & scalar_map,
const std::unordered_map<std::string, std::vector<Real>> & vector_map,
std::vector<Real> & vec) const;
/// Function used to convert the vectors back to hyperparameter maps
void vecToMap(
const std::unordered_map<std::string, std::tuple<unsigned int, unsigned int, Real, Real>> &
tuning_data,
std::unordered_map<std::string, Real> & scalar_map,
std::unordered_map<std::string, std::vector<Real>> & vector_map,
const std::vector<Real> & vec) const;
/// @{
/**
* Get constant reference to the contained structures
*/
const StochasticTools::Standardizer & getParamStandardizer() const { return _param_standardizer; }
const StochasticTools::Standardizer & getDataStandardizer() const { return _data_standardizer; }
const RealEigenMatrix & getK() const { return _K; }
const RealEigenMatrix & getKResultsSolve() const { return _K_results_solve; }
const Eigen::LLT<RealEigenMatrix> & getKCholeskyDecomp() const { return _K_cho_decomp; }
const CovarianceFunctionBase & getCovarFunction() const { return *_covariance_function; }
const CovarianceFunctionBase * getCovarFunctionPtr() const { return _covariance_function; }
const std::string & getCovarType() const { return _covar_type; }
const std::string & getCovarName() const { return _covar_name; }
const std::vector<UserObjectName> & getDependentCovarNames() const
{
return _dependent_covar_names;
}
const std::map<UserObjectName, std::string> & getDependentCovarTypes() const
{
return _dependent_covar_types;
}
const unsigned int & getCovarNumOutputs() const { return _num_outputs; }
const unsigned int & getNumTunableParams() const { return _num_tunable; }
const std::unordered_map<std::string, Real> & getHyperParamMap() const { return _hyperparam_map; }
const std::unordered_map<std::string, std::vector<Real>> & getHyperParamVectorMap() const
{
return _hyperparam_vec_map;
}
///@}
/// @{
/**
* Get non-constant reference to the contained structures (if they need to be modified from the
* outside)
*/
StochasticTools::Standardizer & paramStandardizer() { return _param_standardizer; }
StochasticTools::Standardizer & dataStandardizer() { return _data_standardizer; }
RealEigenMatrix & K() { return _K; }
RealEigenMatrix & KResultsSolve() { return _K_results_solve; }
Eigen::LLT<RealEigenMatrix> & KCholeskyDecomp() { return _K_cho_decomp; }
CovarianceFunctionBase * covarFunctionPtr() { return _covariance_function; }
CovarianceFunctionBase & covarFunction() { return *_covariance_function; }
std::string & covarType() { return _covar_type; }
std::string & covarName() { return _covar_name; }
std::map<UserObjectName, std::string> & dependentCovarTypes() { return _dependent_covar_types; }
std::vector<UserObjectName> & dependentCovarNames() { return _dependent_covar_names; }
unsigned int & covarNumOutputs() { return _num_outputs; }
std::unordered_map<std::string, std::tuple<unsigned int, unsigned int, Real, Real>> & tuningData()
{
return _tuning_data;
}
std::unordered_map<std::string, Real> & hyperparamMap() { return _hyperparam_map; }
std::unordered_map<std::string, std::vector<Real>> & hyperparamVectorMap()
{
return _hyperparam_vec_map;
}
///@}
protected:
/// Covariance function object
CovarianceFunctionBase * _covariance_function = nullptr;
/// Contains tuning information: index of the hyperparameter, size, and min/max bounds
std::unordered_map<std::string, std::tuple<unsigned int, unsigned int, Real, Real>> _tuning_data;
/// Number of tunable hyperparameters
unsigned int _num_tunable;
/// Type of covariance function used for this GP
std::string _covar_type;
/// The name of the covariance function used in this GP
std::string _covar_name;
/// The names of the covariance functions the used covariance function depends on
std::vector<UserObjectName> _dependent_covar_names;
/// The types of the covariance functions the used covariance function depends on
std::map<UserObjectName, std::string> _dependent_covar_types;
/// The number of outputs of the GP
unsigned int _num_outputs;
/// Scalar hyperparameters. Stored for use in surrogate
std::unordered_map<std::string, Real> _hyperparam_map;
/// Vector hyperparameters. Stored for use in surrogate
std::unordered_map<std::string, std::vector<Real>> _hyperparam_vec_map;
/// Standardizer for use with params (x)
StochasticTools::Standardizer _param_standardizer;
/// Standardizer for use with data (y)
StochasticTools::Standardizer _data_standardizer;
/// An _n_sample by _n_sample covariance matrix constructed from the selected kernel function
RealEigenMatrix _K;
/// A solve of Ax=b via Cholesky.
RealEigenMatrix _K_results_solve;
/// Cholesky decomposition Eigen object
Eigen::LLT<RealEigenMatrix> _K_cho_decomp;
/// Parameters (x) used for training, along with statistics
const RealEigenMatrix * _training_params;
/// Data (y) used for training
const RealEigenMatrix * _training_data;
/// The batch size for Adam optimization
unsigned int _batch_size;
};
} // namespace StochasticTools
template <>
void dataStore(std::ostream & stream, Eigen::LLT<RealEigenMatrix> & decomp, void * context);
template <>
void dataLoad(std::istream & stream, Eigen::LLT<RealEigenMatrix> & decomp, void * context);
template <>
void dataStore(std::ostream & stream, StochasticTools::GaussianProcess & gp_utils, void * context);
template <>
void dataLoad(std::istream & stream, StochasticTools::GaussianProcess & gp_utils, void * context);
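Taken together, a typical training workflow with this utility looks roughly like the following. This is a minimal, hypothetical sketch using only the public API declared above; the training matrices, the tuned hyperparameter names, and the optimizer settings are placeholders.

#include "GaussianProcess.h"

void
trainSketch(CovarianceFunctionBase * covar,
            RealEigenMatrix & training_params, // n_samples x n_dims (x values)
            RealEigenMatrix & training_data)   // n_samples x n_outputs (y values)
{
  StochasticTools::GaussianProcess gp;

  // Link the covariance function and request tuning of selected hyperparameters
  gp.initialize(covar, {"signal_variance", "length_factor"});

  // Standardize inputs and outputs; the moments are stored for later evaluations
  gp.standardizeParameters(training_params);
  gp.standardizeData(training_data);

  // Build and decompose the covariance matrix, tuning hyperparameters with Adam
  const StochasticTools::GaussianProcess::GPOptimizerOptions opts(
      /*show_every_nth_iteration=*/100, /*num_iter=*/2000);
  gp.setupCovarianceMatrix(training_params, training_data, opts);
}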
(contrib/moose/modules/stochastic_tools/src/trainers/GaussianProcessTrainer.C)
// This file is part of the MOOSE framework
// https://www.mooseframework.org
//
// All rights reserved, see COPYRIGHT for full restrictions
// https://github.com/idaholab/moose/blob/master/COPYRIGHT
//
// Licensed under LGPL 2.1, please see LICENSE for details
// https://www.gnu.org/licenses/lgpl-2.1.html
#include "GaussianProcessTrainer.h"
#include "Sampler.h"
#include "CartesianProduct.h"
#include <petsctao.h>
#include <petscdmda.h>
#include "libmesh/petsc_vector.h"
#include "libmesh/petsc_matrix.h"
#include <cmath>
registerMooseObject("StochasticToolsApp", GaussianProcessTrainer);
InputParameters
GaussianProcessTrainer::validParams()
{
InputParameters params = SurrogateTrainer::validParams();
params.addClassDescription("Provides data preperation and training for a single- or multi-output "
"Gaussian Process surrogate model.");
params.addRequiredParam<UserObjectName>("covariance_function", "Name of covariance function.");
params.addParam<bool>(
"standardize_params", true, "Standardize (center and scale) training parameters (x values)");
params.addParam<bool>(
"standardize_data", true, "Standardize (center and scale) training data (y values)");
// Already preparing to use Adam here
params.addParam<unsigned int>("num_iters", 1000, "Tolerance value for Adam optimization");
params.addParam<unsigned int>("batch_size", 0, "The batch size for Adam optimization");
params.addParam<Real>("learning_rate", 0.001, "The learning rate for Adam optimization");
params.addParam<unsigned int>(
"show_every_nth_iteration",
0,
"Switch to show Adam optimization loss values at every nth step. If 0, nothing is showed.");
params.addParam<std::vector<std::string>>("tune_parameters",
"Select hyperparameters to be tuned");
params.addParam<std::vector<Real>>("tuning_min", "Minimum allowable tuning value");
params.addParam<std::vector<Real>>("tuning_max", "Maximum allowable tuning value");
return params;
}
GaussianProcessTrainer::GaussianProcessTrainer(const InputParameters & parameters)
: SurrogateTrainer(parameters),
CovarianceInterface(parameters),
_predictor_row(getPredictorData()),
_gp(declareModelData<StochasticTools::GaussianProcess>("_gp")),
_training_params(declareModelData<RealEigenMatrix>("_training_params")),
_standardize_params(getParam<bool>("standardize_params")),
_standardize_data(getParam<bool>("standardize_data")),
_do_tuning(isParamValid("tune_parameters")),
_optimization_opts(StochasticTools::GaussianProcess::GPOptimizerOptions(
getParam<unsigned int>("show_every_nth_iteration"),
getParam<unsigned int>("num_iters"),
getParam<unsigned int>("batch_size"),
getParam<Real>("learning_rate"))),
_sampler_row(getSamplerData())
{
// Error Checking
if (parameters.isParamSetByUser("batch_size"))
if (_sampler.getNumberOfRows() < _optimization_opts.batch_size)
paramError("batch_size", "Batch size cannot be greater than the training data set size.");
std::vector<std::string> tune_parameters(
_do_tuning ? getParam<std::vector<std::string>>("tune_parameters")
: std::vector<std::string>{});
if (isParamValid("tuning_min") &&
(getParam<std::vector<Real>>("tuning_min").size() != tune_parameters.size()))
mooseError("tuning_min size does not match tune_parameters");
if (isParamValid("tuning_max") &&
(getParam<std::vector<Real>>("tuning_max").size() != tune_parameters.size()))
mooseError("tuning_max size does not match tune_parameters");
std::vector<Real> lower_bounds, upper_bounds;
if (isParamValid("tuning_min"))
lower_bounds = getParam<std::vector<Real>>("tuning_min");
if (isParamValid("tuning_max"))
upper_bounds = getParam<std::vector<Real>>("tuning_max");
_gp.initialize(getCovarianceFunctionByName(parameters.get<UserObjectName>("covariance_function")),
tune_parameters,
lower_bounds,
upper_bounds);
_n_outputs = _gp.getCovarFunction().numOutputs();
}
void
GaussianProcessTrainer::preTrain()
{
_params_buffer.clear();
_data_buffer.clear();
_params_buffer.reserve(getLocalSampleSize());
_data_buffer.reserve(getLocalSampleSize());
}
void
GaussianProcessTrainer::train()
{
_params_buffer.push_back(_predictor_row);
if (_rvecval && _rvecval->size() != _n_outputs)
mooseError("The size of the provided response (",
_rvecval->size(),
") does not match the number of expected outputs from the covariance (",
_n_outputs,
")!");
_data_buffer.push_back(_rvecval ? (*_rvecval) : std::vector<Real>(1, *_rval));
}
void
GaussianProcessTrainer::postTrain()
{
// Instead of gatherSum, we have to allgather.
_communicator.allgather(_params_buffer);
_communicator.allgather(_data_buffer);
_training_params.resize(_params_buffer.size(), _n_dims);
_training_data.resize(_data_buffer.size(), _n_outputs);
for (auto ii : make_range(_training_params.rows()))
{
for (auto jj : make_range(_n_dims))
_training_params(ii, jj) = _params_buffer[ii][jj];
for (auto jj : make_range(_n_outputs))
_training_data(ii, jj) = _data_buffer[ii][jj];
}
// Standardize (center and scale) training params
if (_standardize_params)
_gp.standardizeParameters(_training_params);
// if not standardizing params, set mean = 0, std = 1 for use in the surrogate
else
_gp.paramStandardizer().set(0, 1, _n_dims);
// Standardize (center and scale) training data
if (_standardize_data)
_gp.standardizeData(_training_data);
// if not standardizing data, set mean = 0, std = 1 for use in the surrogate
else
_gp.dataStandardizer().set(0, 1, _n_outputs);
// Setup the covariance
_gp.setupCovarianceMatrix(_training_params, _training_data, _optimization_opts);
}
(contrib/moose/modules/stochastic_tools/test/tests/surrogates/gaussian_process/GP_squared_exponential_training.i)
[StochasticTools]
[]
[Distributions]
[k_dist]
type = Uniform
lower_bound = 1
upper_bound = 10
[]
[q_dist]
type = Uniform
lower_bound = 9000
upper_bound = 11000
[]
[]
[Samplers]
[train_sample]
type = MonteCarlo
num_rows = 10
distributions = 'k_dist q_dist'
execute_on = PRE_MULTIAPP_SETUP
[]
[]
[MultiApps]
[sub]
type = SamplerFullSolveMultiApp
input_files = sub.i
sampler = train_sample
[]
[]
[Controls]
[cmdline]
type = MultiAppSamplerControl
multi_app = sub
sampler = train_sample
param_names = 'Materials/conductivity/prop_values Kernels/source/value'
[]
[]
[Transfers]
[data]
type = SamplerReporterTransfer
from_multi_app = sub
sampler = train_sample
stochastic_reporter = results
from_reporter = 'avg/value'
[]
[]
[Reporters]
[results]
type = StochasticReporter
parallel_type = ROOT
[]
[]
[Trainers]
[GP_avg_trainer]
type = GaussianProcessTrainer
execute_on = timestep_end
covariance_function = 'covar' # Choose a squared exponential for the kernel
standardize_params = 'true' # Center and scale the training params
standardize_data = 'true' # Center and scale the training data
sampler = train_sample
response = results/data:avg:value
[]
[]
[Covariance]
[covar]
type = SquaredExponentialCovariance
signal_variance = 1 # Use a signal variance of 1 in the kernel
noise_variance = 1e-6 # A small amount of noise can help with numerical stability
length_factor = '0.38971 0.38971' # Select a length factor for each parameter (k and q)
[]
[]
[Outputs]
file_base = gauss_process_training
[out]
type = SurrogateTrainerOutput
trainers = 'GP_avg_trainer'
execute_on = FINAL
[]
[]