- covariance_function: Name of covariance function.
C++ Type:UserObjectName
Unit:(no unit assumed)
Controllable:No
Description:Name of covariance function.
- response: Reporter value of response results; can be a vpp with '/' or a sampler column with 'sampler/col_'.
C++ Type:ReporterName
Unit:(no unit assumed)
Controllable:No
Description:Reporter value of response results; can be a vpp with '/' or a sampler column with 'sampler/col_'.
- sampler: Sampler used to create predictor and response data.
C++ Type:SamplerName
Unit:(no unit assumed)
Controllable:No
Description:Sampler used to create predictor and response data.
GaussianProcessTrainer
"Gaussian Processes for Machine Learning" by Rasmussen and Williams (2005) provides a well-written discussion of Gaussian Processes, and reading it is highly encouraged. Chapters 1-5 cover the topics presented here in far greater detail, depth, and rigor. Furthermore, for a detailed overview of Gaussian Processes that model multiple outputs together (a multi-output Gaussian Process, or MOGP), we refer the reader to Liu et al. (2018).
The documentation here is meant to give some practical insight for users to begin creating surrogate models with Gaussian Processes.
Given a set of inputs x for which we have made observations of the corresponding outputs y using the system y = f(x), and given another set of inputs x*, we wish to predict the associated outputs y* without evaluating f, which is presumed costly.
Parameter Covariance
In overly simplistic terms, Gaussian Process Modeling is driven by the idea that trials which are "close" in their input parameter space will be "close" in their output space. Closeness in the parameter space is driven by the covariance function (also called a kernel function, not to be confused with a MOOSE Framework kernel). This covariance function is used to generate a covariance matrix between the complete set of parameters [x, x*], which can then be interpreted block-wise as the various covariance matrices K(x, x), K(x, x*), and K(x*, x*) between x and x*.
The Gaussian Process Model consists of an infinite collection of functions, all of which agree with the training/observation data. Importantly, the collection has closed forms for its second-order statistics (mean and variance). When used as a surrogate, the nominal value is chosen to be the mean value. The method can be broken down into two steps: definition of the prior distribution, then conditioning on observed data.
Gaussian processes
A Gaussian Process is a (potentially infinite) collection of random variables, such that the joint distribution of every finite selection of random variables from the collection is a Gaussian distribution.
f(x) ∼ GP(m(x), k(x, x′)) (1)
In an analogous way that a multivariate Gaussian is completely defined by its mean vector and its covariance matrix, a Gaussian Process is completely defined by its mean function m(x) and its covariance function k(x, x′).
The (potentially) infinite number of random variables within the Gaussian Process correspond to the (potentially) infinite points in the parameter space our surrogate can be evaluated at.
Prior distribution:
We assume the observations (both training and testing) are pulled from a multivariate Gaussian distribution [y, y*] ∼ N(μ, K). The covariance matrix K is the result of the choice of covariance function.
Note that μ and K are a vector and a matrix, respectively, and are the result of the mean and covariance functions applied to the sample points.
Zero Mean Assumption: Discussions of Gaussian Processes are typically presented under the assumption that μ = 0. This occurs without loss of generality, since any sample can be made zero-mean by subtracting the sample mean (or via a variety of other preprocessing options). Note that in a training/testing paradigm the testing data is unknown, so the determination of what to use as μ is based on information from the training data (or some other prior assumption).
Conditioning:
With the prior formed as above, conditioning on the available training data is performed. This alters the mean and variance to the new values μ* and Σ*, restricting the set of possible functions which agree with the training data.
When used as a surrogate, the nominal value is typically taken as the mean value μ*, with Σ* providing variances which can be used to generate confidence intervals.
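As an illustration of the prior-then-conditioning procedure, the following sketch builds the block-wise covariance matrices and conditions on training data under the zero-mean assumption. The squared-exponential covariance and the hyperparameter and sample values are illustrative assumptions, not the trainer's defaults.

```python
import numpy as np

# Squared-exponential covariance k(x, x') = sigma_f^2 exp(-(x - x')^2 / (2 ell^2));
# sigma_f and ell are illustrative hyperparameter values.
def kernel(xa, xb, sigma_f=1.0, ell=0.5):
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return sigma_f**2 * np.exp(-d2 / (2.0 * ell**2))

# Training inputs x with observed outputs y (already zero-mean).
x_train = np.array([0.0, 0.5, 1.0])
y_train = np.array([0.0, 1.0, 0.0])

# Test inputs x* where we want predictions without evaluating f.
x_test = np.linspace(0.0, 1.0, 5)

# Block-wise covariance matrices between x and x*.
K = kernel(x_train, x_train) + 1e-8 * np.eye(len(x_train))  # small jitter
K_s = kernel(x_train, x_test)    # K(x, x*)
K_ss = kernel(x_test, x_test)    # K(x*, x*)

# Conditioning on the training data gives the posterior mean and covariance.
K_inv = np.linalg.inv(K)
mu_star = K_s.T @ K_inv @ y_train          # nominal surrogate prediction
sigma_star = K_ss - K_s.T @ K_inv @ K_s    # variances for confidence intervals
```

At a training input the posterior mean reproduces the observation and the posterior variance collapses toward zero, which is the sense in which every function in the conditioned collection agrees with the data.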
Notes on Multi-Output Gaussian Processes (MOGPs)
MOGPs model and predict outputs which are vectors, each of size M. Given N input vectors x₁, …, x_N, the vector of outputs y(xᵢ) and the matrix of output vectors Y are defined as:
Y = [y(x₁), y(x₂), …, y(x_N)]ᵀ (2)
where y(xᵢ) is of size M × 1 and Y is of size N × M. The matrix Y is vectorized and represented as V = vec(Y), with size NM × 1. V is modeled as a Gaussian distribution as described in Eq. (1).
In a multi-output Gaussian Process, the covariance matrix K captures covariances across both the input variables and the vector of outputs, and hence has size NM × NM. K can be modeled in several ways, as discussed in (Liu et al., 2018; Alvarez et al., 2012). We will follow the linear model of coregionalization (LMC), which introduces latent functions with restrictions on the associated covariances.
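As a rough sketch of the LMC construction (not the implementation of the LMC covariance object), the NM × NM covariance can be assembled as a sum of Kronecker products of coregionalization matrices B_q = a_q a_qᵀ with latent covariances k_q. The number of latent functions, the amplitudes a_q, and the latent length scales below are illustrative assumptions.

```python
import numpy as np

# Latent RBF covariance for one latent function q.
def rbf(x, ell):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * ell**2))

x = np.linspace(0.0, 1.0, 4)  # N = 4 training inputs
M = 2                          # M = 2 outputs

# Q = 2 latent functions: amplitudes a_q (giving B_q = a_q a_q^T) and
# latent length scales (all illustrative values).
a_list = [np.array([1.0, 0.5]), np.array([0.2, 1.0])]
ell_list = [0.3, 1.0]

# K = sum_q B_q kron k_q(X, X), of size NM x NM.
K = sum(np.kron(np.outer(a, a), rbf(x, ell))
        for a, ell in zip(a_list, ell_list))
```

Each B_q is rank-one and positive semi-definite, so the sum of Kronecker products is a valid (symmetric, positive semi-definite) covariance over all NM output values.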
Common Hyperparameters
While the only apparent decision in the above formulation is the choice of covariance function, most covariance functions contain hyperparameters of some form which must also be selected. While each covariance function has its own set of hyperparameters, a few hyperparameters of specific forms are present in many common covariance functions.
Length Factor ℓ or ℓᵢ
Kernels frequently consider the distance between two input parameters x and x′. For a system with only a single parameter, this distance often takes the form
d(x, x′) = (x − x′)² / ℓ²
In this form the factor ℓ sets a relevant length scale for the distance measurements.
When multiple input parameters are to be considered, it may be advantageous to specify a different length scale for each parameter, resulting in a vector ℓ. For example, the distance may be calculated as
d(x, x′) = Σᵢ (xᵢ − x′ᵢ)² / ℓᵢ²
When used with standardized parameters, ℓᵢ can be interpreted in units of standard deviation for the relevant parameter.
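The per-parameter scaling above can be sketched as follows; the parameter values and length scales are illustrative assumptions:

```python
import numpy as np

# d(x, x') = sum_i (x_i - x'_i)^2 / ell_i^2, with one length scale per parameter.
def scaled_sq_distance(x, x_prime, ell):
    diff = (np.asarray(x) - np.asarray(x_prime)) / np.asarray(ell)
    return float(np.sum(diff**2))

# Two parameters on very different scales (illustrative values): the
# per-parameter length scales keep both contributions comparable.
x = [1.0, 100.0]
x_prime = [2.0, 300.0]
ell = [1.0, 100.0]

d = scaled_sq_distance(x, x_prime, ell)  # (1/1)^2 + (200/100)^2 = 5.0
```

Without the per-parameter length scales, the second parameter's raw scale would dominate the distance and effectively mask the first parameter.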
Signal Variance σ_f²
This serves as an overall scaling parameter. Given a covariance function k̃ (which is not a function of σ_f), multiplication by σ_f² yields a new valid covariance function k = σ_f² k̃.
This multiplication can also be pulled out of the covariance matrix formation, simply multiplying the matrix K̃ formed by k̃: K = σ_f² K̃.
Noise Variance σ_n²
The noise variance σ_n² represents noise in the collected data, and appears as an additional term on the variance entries (when x = x′).
In the matrix representation, this adds a term σ_n² I to the diagonal of the noiseless matrix: K = K̃ + σ_n² I.
Due to the addition of σ_n² along the diagonal of the matrix, this hyperparameter can aid in the inversion of the covariance matrix. For this reason, adding a small amount of σ_n² may be preferable even when you believe the data to be noise free.
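A small numerical sketch of this effect, with illustrative (assumed) sample points: two nearly coincident training points make the noiseless covariance matrix nearly singular, and adding σ_n² to the diagonal restores a manageable condition number.

```python
import numpy as np

# Noiseless RBF covariance matrix for three sample points, two of which
# nearly coincide (illustrative values).
def rbf(x, ell=1.0):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * ell**2))

x = np.array([0.0, 1e-6, 1.0])  # first two points nearly identical
K_tilde = rbf(x)

# Adding sigma_n^2 I (here sigma_n^2 = 1e-6) regularizes the inversion.
cond_noiseless = np.linalg.cond(K_tilde)
cond_jittered = np.linalg.cond(K_tilde + 1e-6 * np.eye(len(x)))
```

The jittered matrix has a condition number orders of magnitude smaller here, so the inversion required during conditioning is far better behaved.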
Selected Covariance Functions
Table 1: Selected Covariance Functions
Covariance Function | Description |
---|---|
SquaredExponentialCovariance | Also referred to as a radial basis function (RBF), this is a widely used, general-purpose covariance function and serves as a common starting point for many. Used for single-output GPs. |
ExponentialCovariance | A simple exponential covariance function. Used for single-output GPs. |
MaternHalfIntCovariance | Implementation of the Matern class of covariance functions, where the parameter ν takes on half-integer values. Used for single-output GPs. |
LMC | Covariance function built using the linear model of coregionalization. Used for multi-output GPs. |
Hyperparameter tuning options
The following options are available for tuning the hyperparameters:
Adaptive moment estimation (Adam)
Relies on the pseudocode provided in Kingma and Ba (2014). Adam permits stochastic optimization, wherein a batch of the training data can be randomly chosen at each iteration.
The hyperparameters of the GPs are inferred by optimizing the log-likelihood function. The MOGP log-likelihood function has the general form:
ln ℒ = −½ Vᵀ K⁻¹ V − ½ ln |K| − (NM/2) ln 2π (3)
The optimization of GPs can be expensive. If there are N training points, each with M outputs, each training iteration of Adam has a cost of O(N³M³). Adam permits using n < N random training points during each iteration (mini-batches), which has a cost of O(n³M³).
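The following sketch shows mini-batched Adam tuning a single length-scale hyperparameter by minimizing the negative log-likelihood for a single-output GP (M = 1). The data, learning rate, and batch size are illustrative assumptions (mirroring the roles of the 'learning_rate' and 'batch_size' input parameters), and a finite-difference gradient stands in for the analytic one.

```python
import numpy as np

rng = np.random.default_rng(0)
x_all = rng.uniform(0.0, 1.0, 40)          # illustrative training inputs
y_all = np.sin(2 * np.pi * x_all)          # illustrative observations

# Negative log-likelihood of a zero-mean GP with an RBF covariance and a
# small fixed noise term, as a function of log(ell).
def neg_log_lik(log_ell, x, y):
    ell = np.exp(log_ell)
    K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * ell**2))
    K += 1e-6 * np.eye(len(x))
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * (y @ np.linalg.solve(K, y) + logdet
                  + len(x) * np.log(2 * np.pi))

# Adam state (Kingma and Ba, 2014); each iteration uses a random
# mini-batch of the training points.
theta, m, v = np.log(1.0), 0.0, 0.0
beta1, beta2, lr, eps, batch = 0.9, 0.999, 0.05, 1e-8, 10
for t in range(1, 201):
    idx = rng.choice(len(x_all), size=batch, replace=False)  # mini-batch
    h = 1e-5  # central finite-difference step for the gradient
    g = (neg_log_lik(theta + h, x_all[idx], y_all[idx])
         - neg_log_lik(theta - h, x_all[idx], y_all[idx])) / (2 * h)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g**2
    theta -= lr * (m / (1 - beta1**t)) / (np.sqrt(v / (1 - beta2**t)) + eps)

ell_opt = np.exp(theta)  # tuned length scale
```

Because each iteration solves a batch-sized linear system instead of the full one, the per-iteration cost scales with the batch size rather than the full training-set size.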
Input Parameters
- batch_size: The batch size for Adam optimization
Default:0
C++ Type:unsigned int
Unit:(no unit assumed)
Controllable:No
Description:The batch size for Adam optimization
- converged_reporter: Reporter value used to determine if a sample's multiapp solve converged.
C++ Type:ReporterName
Unit:(no unit assumed)
Controllable:No
Description:Reporter value used to determine if a sample's multiapp solve converged.
- cv_n_trials: Number of repeated trials of cross-validation to perform.
Default:1
C++ Type:unsigned int
Unit:(no unit assumed)
Controllable:No
Description:Number of repeated trials of cross-validation to perform.
- cv_seed: Seed used to initialize random number generator for data splitting during cross validation.
Default:4294967295
C++ Type:unsigned int
Unit:(no unit assumed)
Controllable:No
Description:Seed used to initialize random number generator for data splitting during cross validation.
- cv_splits: Number of splits (k) to use in k-fold cross-validation.
Default:10
C++ Type:unsigned int
Unit:(no unit assumed)
Controllable:No
Description:Number of splits (k) to use in k-fold cross-validation.
- cv_surrogate: Name of Surrogate object used for model cross-validation.
C++ Type:UserObjectName
Unit:(no unit assumed)
Controllable:No
Description:Name of Surrogate object used for model cross-validation.
- cv_type: Cross-validation method to use for dataset. Options are 'none' or 'k_fold'.
Default:none
C++ Type:MooseEnum
Unit:(no unit assumed)
Controllable:No
Description:Cross-validation method to use for dataset. Options are 'none' or 'k_fold'.
- execute_on: The list of flag(s) indicating when this object should be executed. For a description of each flag, see https://mooseframework.inl.gov/source/interfaces/SetupInterface.html.
Default:TIMESTEP_END
C++ Type:ExecFlagEnum
Unit:(no unit assumed)
Controllable:No
Description:The list of flag(s) indicating when this object should be executed. For a description of each flag, see https://mooseframework.inl.gov/source/interfaces/SetupInterface.html.
- filename: The name of the file which will be associated with the saved/loaded data.
C++ Type:FileName
Unit:(no unit assumed)
Controllable:No
Description:The name of the file which will be associated with the saved/loaded data.
- learning_rate: The learning rate for Adam optimization
Default:0.001
C++ Type:double
Unit:(no unit assumed)
Controllable:No
Description:The learning rate for Adam optimization
- num_iters: The number of iterations for Adam optimization
Default:1000
C++ Type:unsigned int
Unit:(no unit assumed)
Controllable:No
Description:The number of iterations for Adam optimization
- predictor_cols: Sampler columns used as the independent random variables. If 'predictors' and 'predictor_cols' are both empty, all sampler columns are used.
C++ Type:std::vector<unsigned int>
Unit:(no unit assumed)
Controllable:No
Description:Sampler columns used as the independent random variables. If 'predictors' and 'predictor_cols' are both empty, all sampler columns are used.
- predictors: Reporter values used as the independent random variables. If 'predictors' and 'predictor_cols' are both empty, all sampler columns are used.
C++ Type:std::vector<ReporterName>
Unit:(no unit assumed)
Controllable:No
Description:Reporter values used as the independent random variables. If 'predictors' and 'predictor_cols' are both empty, all sampler columns are used.
- prop_getter_suffix: An optional suffix parameter that can be appended to any attempt to retrieve/get material properties. The suffix will be prepended with a '_' character.
C++ Type:MaterialPropertyName
Unit:(no unit assumed)
Controllable:No
Description:An optional suffix parameter that can be appended to any attempt to retrieve/get material properties. The suffix will be prepended with a '_' character.
- response_type: Response data type.
Default:real
C++ Type:MooseEnum
Unit:(no unit assumed)
Controllable:No
Description:Response data type.
- show_every_nth_iteration: Switch to show Adam optimization loss values at every nth step. If 0, nothing is shown.
Default:0
C++ Type:unsigned int
Unit:(no unit assumed)
Controllable:No
Description:Switch to show Adam optimization loss values at every nth step. If 0, nothing is shown.
- skip_unconverged_samples: True to skip samples where the multiapp did not converge; 'stochastic_reporter' is required to do this.
Default:False
C++ Type:bool
Unit:(no unit assumed)
Controllable:No
Description:True to skip samples where the multiapp did not converge, 'stochastic_reporter' is required to do this.
- standardize_data: Standardize (center and scale) training data (y values)
Default:True
C++ Type:bool
Unit:(no unit assumed)
Controllable:No
Description:Standardize (center and scale) training data (y values)
- standardize_params: Standardize (center and scale) training parameters (x values)
Default:True
C++ Type:bool
Unit:(no unit assumed)
Controllable:No
Description:Standardize (center and scale) training parameters (x values)
- tune_parameters: Select hyperparameters to be tuned
C++ Type:std::vector<std::string>
Unit:(no unit assumed)
Controllable:No
Description:Select hyperparameters to be tuned
- tuning_max: Maximum allowable tuning value
C++ Type:std::vector<double>
Unit:(no unit assumed)
Controllable:No
Description:Maximum allowable tuning value
- tuning_min: Minimum allowable tuning value
C++ Type:std::vector<double>
Unit:(no unit assumed)
Controllable:No
Description:Minimum allowable tuning value
- use_interpolated_state: For the old and older state use projected material properties interpolated at the quadrature points. To set up projection use the ProjectedStatefulMaterialStorageAction.
Default:False
C++ Type:bool
Unit:(no unit assumed)
Controllable:No
Description:For the old and older state use projected material properties interpolated at the quadrature points. To set up projection use the ProjectedStatefulMaterialStorageAction.
Optional Parameters
- allow_duplicate_execution_on_initial: In the case where this UserObject is depended upon by an initial condition, allow it to be executed twice during the initial setup (once before the IC and again after mesh adaptivity, if applicable).
Default:False
C++ Type:bool
Unit:(no unit assumed)
Controllable:No
Description:In the case where this UserObject is depended upon by an initial condition, allow it to be executed twice during the initial setup (once before the IC and again after mesh adaptivity, if applicable).
- control_tags: Adds user-defined labels for accessing object parameters via control logic.
C++ Type:std::vector<std::string>
Unit:(no unit assumed)
Controllable:No
Description:Adds user-defined labels for accessing object parameters via control logic.
- enable: Set the enabled status of the MooseObject.
Default:True
C++ Type:bool
Unit:(no unit assumed)
Controllable:Yes
Description:Set the enabled status of the MooseObject.
- execution_order_group: Execution order groups are executed in increasing order (e.g., the lowest number is executed first). Note that negative group numbers may be used to execute groups before the default (0) group. Please refer to the user object documentation for ordering of user object execution within a group.
Default:0
C++ Type:int
Unit:(no unit assumed)
Controllable:No
Description:Execution order groups are executed in increasing order (e.g., the lowest number is executed first). Note that negative group numbers may be used to execute groups before the default (0) group. Please refer to the user object documentation for ordering of user object execution within a group.
- force_postaux: Forces the UserObject to be executed in POSTAUX
Default:False
C++ Type:bool
Unit:(no unit assumed)
Controllable:No
Description:Forces the UserObject to be executed in POSTAUX
- force_preaux: Forces the UserObject to be executed in PREAUX
Default:False
C++ Type:bool
Unit:(no unit assumed)
Controllable:No
Description:Forces the UserObject to be executed in PREAUX
- force_preic: Forces the UserObject to be executed in PREIC during initial setup
Default:False
C++ Type:bool
Unit:(no unit assumed)
Controllable:No
Description:Forces the UserObject to be executed in PREIC during initial setup
- use_displaced_mesh: Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used.
Default:False
C++ Type:bool
Unit:(no unit assumed)
Controllable:No
Description:Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used.
Advanced Parameters
References
- M. A. Alvarez, L. Rosasco, and N. D. Lawrence. Kernels for vector-valued functions: a review. Foundations and Trends in Machine Learning, 4(3):195–266, 2012.
- D. P. Kingma and J. Ba. Adam: a method for stochastic optimization. arXiv:1412.6980 [cs.LG], 2014. URL: https://doi.org/10.48550/arXiv.1412.6980.
- H. Liu, J. Cai, and Y. S. Ong. Remarks on multi-output Gaussian process regression. Knowledge-Based Systems, 144:102–112, 2018.
- Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2005. ISBN 026218253X.