LibtorchANNSurrogate

Overview

The details of a simple feedforward neural network are discussed in LibtorchArtificialNeuralNet. This class is dedicated to evaluating the following function:

\textbf{y} = \sigma(\textbf{W}^{(n)}\sigma(\textbf{W}^{(n-1)} \sigma( \dots \sigma(\textbf{W}^{(1)}\textbf{x}+\textbf{b}^{(1)}) \dots )+\textbf{b}^{(n-1)})+\textbf{b}^{(n)}), \qquad (1)

which describes a neural network of $n$ layers. In this context, $\sigma$ denotes an activation function, while $\textbf{x}$ and $\textbf{y}$ are the input and output arguments, respectively. The weight matrices ($\textbf{W}$) and bias vectors ($\textbf{b}$) are optimized by LibtorchANNTrainer and are fixed in the evaluation phase.
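
As an illustration, the following stand-alone C++ sketch evaluates an expression of the form of Eq. (1) with the libtorch API. The layer sizes (3 inputs, two hidden layers of 4 neurons, 1 output), the ReLU activation and the randomly initialized weights are assumptions made for this example only; in the surrogate, the weights and biases are the ones optimized by LibtorchANNTrainer and loaded from file.

#include <torch/torch.h>
#include <iostream>

int main()
{
  torch::NoGradGuard no_grad; // evaluation only, no gradients are needed

  // The linear layers hold the weight matrices W^(i) and bias vectors b^(i)
  torch::nn::Linear layer1(3, 4), layer2(4, 4), layer3(4, 1);

  // A single input sample x with three parameters
  torch::Tensor x = torch::tensor({0.0, 0.05, 0.05}, torch::kFloat).reshape({1, 3});

  // y = sigma(W^(3) sigma(W^(2) sigma(W^(1) x + b^(1)) + b^(2)) + b^(3)), cf. Eq. (1)
  torch::Tensor y = torch::relu(layer3(torch::relu(layer2(torch::relu(layer1(x))))));

  std::cout << y << std::endl;
  return 0;
}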

Example Input File Syntax

Let us consider an example where we evaluate the neural network trained here. For this, we prepare another set of samples from the same parameter space:

[Samplers]
  [test]
    type = CartesianProduct
    linear_space_items = '0 0.05 2
                          0 0.05 2
                          0 0.05 2'
  []
[]
(contrib/moose/modules/stochastic_tools/test/tests/surrogates/libtorch_nn/evaluate.i)
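
Each triplet in linear_space_items is interpreted as a starting value, a step size and a number of points, so every parameter takes the two values 0 and 0.05 and the sampler forms their Cartesian product, i.e. 2 x 2 x 2 = 8 test points. The following stand-alone C++ sketch enumerates these points for illustration only (the row ordering of the actual sampler may differ):

#include <array>
#include <cstdio>
#include <vector>

int main()
{
  // Values taken by each of the three parameters: start = 0, step = 0.05, 2 points
  const std::array<double, 2> values = {0.0, 0.05};

  // Cartesian product of the three value sets: 2 x 2 x 2 = 8 test points
  std::vector<std::array<double, 3>> samples;
  for (double a : values)
    for (double b : values)
      for (double c : values)
        samples.push_back({a, b, c});

  for (const auto & s : samples)
    std::printf("%g %g %g\n", s[0], s[1], s[2]);
  return 0;
}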

Following this, we load the surrogate from a file saved by the trainer:

[Surrogates]
  [surr]
    type = LibtorchANNSurrogate
    filename = 'train_out_train.rd'
  []
[]
(contrib/moose/modules/stochastic_tools/test/tests/surrogates/libtorch_nn/evaluate.i)

Finally, we evaluate the surrogate using a reporter, which combines the new samples and the surrogate model to compute the approximate values of the target function at the sample points:

[Reporters]
  [results]
    type = EvaluateSurrogate
    model = surr
    sampler = test
    execute_on = FINAL
    parallel_type = ROOT
  []
[]
(contrib/moose/modules/stochastic_tools/test/tests/surrogates/libtorch_nn/evaluate.i)
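
Conceptually, the reporter loops over the rows of the test sampler, evaluates the surrogate for each row and collects the predictions into a single vector (with parallel_type = ROOT the values are gathered onto the root process before output). The stand-alone C++ sketch below illustrates this idea with a placeholder callable standing in for the actual LibtorchANNSurrogate; the placeholder function and its return values are purely illustrative.

#include <cstdio>
#include <functional>
#include <vector>

int main()
{
  // Placeholder for the trained surrogate: any callable mapping a parameter
  // vector to a scalar prediction (the real object would be the surrogate
  // loaded from train_out_train.rd).
  const std::function<double(const std::vector<double> &)> surrogate =
      [](const std::vector<double> & x) { return x[0] + x[1] + x[2]; };

  // The eight test points produced by the sampler (see the sketch above)
  std::vector<std::vector<double>> samples;
  for (double a : {0.0, 0.05})
    for (double b : {0.0, 0.05})
      for (double c : {0.0, 0.05})
        samples.push_back({a, b, c});

  // One prediction per sampler row, collected into a single vector of results
  std::vector<double> results;
  for (const auto & x : samples)
    results.push_back(surrogate(x));

  for (const double r : results)
    std::printf("%g\n", r);
  return 0;
}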
Warning

The detailed documentation of this object is only available when MOOSE is compiled with libtorch. For instructions on how to compile MOOSE with libtorch, visit the general installation webpage or click here.