ScaledAbsDifferenceDRLRewardFunction

Evaluates a scaled absolute difference reward function for a process controlled by a Deep Reinforcement Learning-based surrogate.

Overview

Function describing the reward for a Deep Reinforcement Learning algorithm in the form of:

r = C_1 |x_{target} - x_{current}| + C_2

where the constants C_1 and C_2 can be set by the user. The target value x_{target} is supplied by the design_function parameter, while x_{current} is the measured value, typically supplied by a Postprocessor. For an example of how to use this object in a DRL setting, see LibtorchDRLControlTrainer.
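
A minimal usage sketch is shown below, assuming the object is declared in the [Functions] block of an input file; the block name reward_fn and the referenced object names T_target and T_measured are illustrative assumptions, not taken from this page:

    [Functions]
      [reward_fn]
        type = ScaledAbsDifferenceDRLRewardFunction
        # Function providing the desired (target) value
        design_function = T_target
        # Postprocessor holding the measured (observed) value
        observed_value = T_measured
        # Scaling coefficients of the reward expression above (defaults: 10 and 1)
        c1 = 10
        c2 = 1
      []
    []

The design_function and observed_value referenced here must be defined elsewhere in the input file; a sketch of such definitions follows the required-parameter list below.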

Input Parameters

Required Parameters

  • design_function: The desired value to reach.

    C++ Type:FunctionName

    Unit:(no unit assumed)

    Controllable:No

    Description:The desired value to reach.

  • observed_value: The name of the Postprocessor that contains the observed value (a hypothetical definition is sketched after this list).

    C++ Type:PostprocessorName

    Unit:(no unit assumed)

    Controllable:No

    Description:The name of the Postprocessor that contains the observed value.
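
The two required parameters point to other objects in the input file: a Function supplying the target value and a Postprocessor supplying the observed value. A hypothetical pairing is sketched below; the object types, names, and values are assumptions chosen only for illustration:

    [Functions]
      [T_target]
        # Constant target value the controller should drive the system toward
        type = ConstantFunction
        value = 350
      []
    []

    [Postprocessors]
      [T_measured]
        # Averages the controlled variable to produce the observed value
        type = ElementAverageValue
        variable = T
      []
    []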

Optional Parameters

  • c1: 1st coefficient in the reward function.

    Default:10

    C++ Type:double

    Unit:(no unit assumed)

    Controllable:No

    Description:1st coefficient in the reward function.

  • c2: 2nd coefficient in the reward function (a short worked example follows this list).

    Default:1

    C++ Type:double

    Unit:(no unit assumed)

    Controllable:No

    Description:2nd coefficient in the reward function.
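
Taking the expression in the Overview at face value with the default coefficients, illustrative values of a 350 target and a 348 observation give r = 10 · |350 − 348| + 1 = 21; c1 rescales the sensitivity to the deviation, while c2 shifts the reward by a constant offset.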

Advanced Parameters

  • control_tags: Adds user-defined labels for accessing object parameters via control logic.

    C++ Type:std::vector<std::string>

    Unit:(no unit assumed)

    Controllable:No

    Description:Adds user-defined labels for accessing object parameters via control logic.

  • enable: Set the enabled status of the MooseObject.

    Default:True

    C++ Type:bool

    Unit:(no unit assumed)

    Controllable:No

    Description:Set the enabled status of the MooseObject.
