Conjugate Heat Transfer for Laminar Pin Bundle Flow

In this tutorial, you will learn how to:

  • Couple NekRS with MOOSE for CHT in a 7-pin bundle

  • Control how flux normalization is performed in NekRS (by either lumping all sidesets together, or preserving the flux for each sideset individually)

  • Reduce the number of host-to-device and device-to-host data copies in NekRS (an advanced user feature)

To access this tutorial,


cd cardinal/tutorials/sfr_7pin

This tutorial also requires you to download some mesh files from Box. Please download the files from the sfr_7pin folder here and place these files within the same directory structure in tutorials/sfr_7pin.

Geometry and Computational Model

The geometry is a shorter, 7-pin version of the fuel bundles in the Advanced Burner Test Reactor (ABTR) Cahalan et al. (2006). Relevant dimensions are summarized in Table 1.

Table 1: Geometric and operating conditions for the ABTR, based on Cahalan et al. (2006)

Parameter                             Value
Pellet diameter                       6.03 mm
Clad diameter                         8.00 mm
Clad thickness                        0.52 mm
Height                                0.2032 m
Flat-to-flat distance inside duct     0.02625 m
Bundle power                          21 kW
Core inlet temperature                355°C
Mass flowrate                         0.1 kg/s
Coolant                               sodium

Heat Conduction Model

The MOOSE heat transfer module is used to solve for energy conservation in the solid. The outer surface of the duct and the tops and bottoms of the pins and duct are insulated. At fluid-solid interfaces, the surface temperature computed by NekRS is imposed on the solid as a Dirichlet condition.
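
Because the kernels shown later contain only a conduction term and a heat source (no time derivative), each solid solve is effectively the steady heat conduction equation; this is consistent with the pseudo-steady coupling used in this tutorial. As a sketch, with k_s the solid thermal conductivity, \dot{q} the volumetric heat source (nonzero only in the pellets), and T_{fs} the fluid-solid interface temperature provided by NekRS, the system being solved is

-\nabla\cdot\left(k_s\nabla T_s\right) = \dot{q}

with -k_s\nabla T_s\cdot\hat{n} = 0 on the insulated surfaces and T_s = T_{fs} on the fluid-solid interfaces.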

The gap region between the pellet and the cladding is unmeshed, and a quadrature-based thermal contact model is applied based on the sum of thermal conduction and thermal radiation (across a transparent medium). For a paired set of boundaries, each quadrature point on boundary A is paired with the nearest quadrature point on boundary B. Then, the sum of the radiation and conduction heat fluxes imposed between pairs of quadrature points is

q'' = \frac{\sigma\left(T^4 - T_{gap}^4\right)}{\frac{1}{\epsilon_A} + \frac{1}{\epsilon_B} - 1} + \frac{T - T_{gap}}{R_c}   (1)

where \sigma is the Stefan-Boltzmann constant, T is the temperature at a quadrature point, T_{gap} is the temperature of the nearest quadrature point across the gap, \epsilon_A and \epsilon_B are the emissivities of boundaries A and B, respectively, and R_c is the conduction resistance. For cylinders, the conduction resistance is

R_c = \frac{\ln\left(r_2/r_1\right)}{2\pi L k}   (2)

where r_2 and r_1 are the outer and inner radii of the cylindrical annulus, L is the height of the annulus, and k is the thermal conductivity of the annulus material.

Both the solid temperature and the surface temperature boundary condition (that will be provided by NekRS) are set to an initial condition of 500 K.

NekRS Model

NekRS is used to solve the incompressible Navier-Stokes equations. The inlet velocity is selected such that the mass flowrate is 0.1 kg/s, which is low enough that the flow is laminar and a turbulence model is not required for this example. At the outlet, a zero (gage) pressure is imposed and an outflow condition is applied for the energy conservation equation. On all solid-fluid interfaces, the velocity is set to the no-slip condition and a heat flux is imposed in the energy conservation equation (computed by MOOSE).
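
For reference, a sketch of the governing equations NekRS solves for this laminar, constant-property case (velocity \mathbf{u}, pressure P, temperature T) is

\nabla\cdot\mathbf{u} = 0

\rho\left(\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla P + \mu\nabla^2\mathbf{u}

\rho C_p\left(\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T\right) = \nabla\cdot\left(k\nabla T\right)

where \rho, \mu, k, and \rho C_p correspond to the density, viscosity, conductivity, and rhoCp entries that will be set later in sfr_7pin.par.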

The initial pressure is set to zero. Both the velocity and temperature are set to uniform initial conditions that match the inlet conditions.

Meshing

MOOSE MeshGenerators are used to generate the meshes for the fluid and solid phases.

Solid Mesh

The solid mesh is shown in Figure 1; the different regions in this mesh are first created in 2-D, and then extruded into 3-D. You can view this mesh by either (i) running the simulation (and viewing the mesh on which the results are displayed), or (ii) by running the solid input file in mesh generation mode, which will create an output file named solid_in.e:

cardinal-opt -i solid.i --mesh-only

The boundary names are illustrated towards the right by showing the highlighted surface to which each boundary corresponds. A unique block ID is used for the cladding elements, the duct elements, and the pellet elements. Two block IDs are required for the pellet region because it contains two different element types (quads and prisms). The pellet-clad gap is unmeshed.

Figure 1: Mesh for the solid portions of the 7-pin bare Sodium Fast Reactor (SFR) bundle

Unique boundary names are set for each boundary to which we will apply a unique boundary condition. Because we will apply insulated boundary conditions on the top and bottom surfaces, as well as on the outside of the duct, we don't need to define sidesets there (the "do-nothing" boundary condition in the finite element method is a zero-flux condition).

Fluid Mesh

The complete fluid mesh is shown in Figure 2; this is first created using meshing software, and then converted into a NekRS mesh (.re2 format). The boundary names are illustrated by showing the highlighted surface to which each corresponds. Note that the meshes in Figure 1 and Figure 2 do not need to share nodes between the fluid and solid boundaries through which CHT coupling will be performed - interpolations using MOOSE's Transfers handle any differences in the mesh.

Figure 2: Mesh for the fluid portions of the 7-pin bare SFR bundle

CHT Coupling

In this section, NekRS and MOOSE are coupled for CHT.

Solid Input Files

The solid phase is solved with the MOOSE heat transfer module, and is described in the solid.i input file. At the top of this file, various problem parameters are defined as file-local variables to help with setting up the uniform heat source in the fuel.

d_pellet = 0.603e-2                           # pellet diameter (m)
d_pin = 8.0e-3                                # pin (clad) diameter (m)
t_clad = 0.52e-3                              # clad thickness (m)
L = 20.32e-2                                  # height (m)
bundle_pitch = 0.02625                        # flat-to-flat distance inside the duct (m)
duct_thickness = 0.004                        # duct thickness (m)
pin_power = 21e3                              # bundle power (W)
(tutorials/sfr_7pin/solid.i)

Next, the solid mesh is specified through a combination of MeshGenerators. First, we make the geometry in 2-D, and then extrude it into 3-D. Many of the mesh generator objects simply customize the sideset names.

[Mesh]

  # --- DUCT --- #

  # Make the duct mesh in 2-D; we first create a "solid" hexagon with two blocks
  # so that we can set the inner wall sideset as the boundary between these two blocks,
  # and then delete an inner block to get just the duct walls
  [duct]
    type = PolygonConcentricCircleMeshGenerator
    num_sides = 6
    polygon_size = '${fparse bundle_pitch / 2.0 + duct_thickness}'
    num_sectors_per_side = '14 14 14 14 14 14'
    uniform_mesh_on_sides = true

    duct_sizes = '${fparse bundle_pitch / 2.0}'
    duct_block_ids = '10'
    duct_intervals = '4'
    duct_sizes_style = apothem
  []
  [inner_sideset]
    type = SideSetsBetweenSubdomainsGenerator
    input = duct
    new_boundary = '10'
    primary_block = '10'
    paired_block = '1'
  []
  [delete_inside_block]
    type = BlockDeletionGenerator
    input = inner_sideset
    block = '1'
  []

  # rotate the duct by 30 degrees, then add names for sidesets. Delete sideset 1 because
  # we will name some other part of our mesh with sideset 1.
  [rotate]
    type = TransformGenerator
    input = delete_inside_block
    transform = rotate
    vector_value = '30.0 0.0 0.0'
  []
  [rename_sideset_names]
    type = RenameBoundaryGenerator
    input = rotate
    old_boundary = '10000 10'
    new_boundary = 'duct_outer duct_inner'
  []
  [delete_extraneous]
    type = BoundaryDeletionGenerator
    input = rename_sideset_names
    boundary_names = '1'
  []

  # --- CLAD --- #

  [clad]
    type = AnnularMeshGenerator
    nr = 3
    nt = 20
    rmin = '${fparse d_pin / 2.0 - t_clad}'
    rmax = '${fparse d_pin / 2.0}'
    quad_subdomain_id = 1
    tri_subdomain_id = 0
  []
  [rename_clad]
    # this renames some sidesets on the clad to avoid name clashes
    type = RenameBoundaryGenerator
    input = clad
    old_boundary = '1 0' # outer surface, inner surface
    new_boundary = '5 4'
  []
  [rename_clad_names]
    # this renames some names on the clad to avoid name clashes
    type = RenameBoundaryGenerator
    input = rename_clad
    old_boundary = '5 4' # outer surface, inner surface
    new_boundary = 'clad_outer clad_inner'
  []

  # --- FUEL --- #

  [fuel]
    type = AnnularMeshGenerator
    nr = 10
    nt = 20
    rmin = 0
    rmax = '${fparse d_pellet / 2.0}'
    quad_subdomain_id = 2
    tri_subdomain_id = 3
    growth_r = -1.2
  []

  # --- COMBINE --- #

  [combine]
    # this combines the fuel and clad together to make one pin
    type = CombinerGenerator
    inputs = 'rename_clad_names fuel'
  []
  [repeat]
    # this repeats the pincell 7 times to get the 7 pins, and adds the duct
    type = CombinerGenerator
    inputs = 'combine combine combine combine combine combine combine delete_extraneous'
    positions = '+0.00000000 +0.00000000 +0.00000000
                 +0.00452000 +0.00782887 +0.00000000
                 -0.00452000 +0.00782887 +0.00000000
                 -0.00904000 +0.00000000 +0.00000000
                 -0.00452000 -0.00782887 +0.00000000
                 +0.00452000 -0.00782887 +0.00000000
                 +0.00904000 +0.00000000 +0.00000000
                 +0.0        +0.0        +0.0'
  []
  [extrude]
    type = AdvancedExtruderGenerator
    input = repeat
    direction = '0 0 1'
    num_layers = '40'
    heights = '${L}'
  []
[]
(tutorials/sfr_7pin/solid.i)

The heat transfer module will solve for temperature, which is defined as a nonlinear variable.

[Variables]
  [T]
    initial_condition = 500.0
  []
[]
(tutorials/sfr_7pin/solid.i)

The Transfer system is used to communicate variables across applications; a boundary heat flux will be computed by MOOSE and applied as a boundary condition in NekRS. In the opposite direction, NekRS will compute a surface temperature that will be applied as a boundary condition in MOOSE. Therefore, we define auxiliary variables to hold the flux computed by MOOSE (avg_flux) and the surface temperature received from NekRS (nek_temp).

[AuxVariables]
  [nek_temp]
    initial_condition = 500.0
  []
  [avg_flux]
    family = MONOMIAL
    order = CONSTANT
  []
[]
(tutorials/sfr_7pin/solid.i)

Next, the governing equation solved by MOOSE is specified in the Kernels block: the HeatConduction kernel provides the conduction term, and the BodyForce kernel adds a volumetric heat source in the pellets. Notice how we can do math with the file-local variables that were defined at the top of the file, with ${fparse <math statement>} syntax.

[Kernels]
  [diffusion]
    type = HeatConduction
    variable = T
  []
  [source]
    # the "units" of a kernel in the heat equation are W/m^3, so we need to divide the power by the pellet volumes
    type = BodyForce
    variable = T
    function = '${fparse pin_power / (pi * (d_pellet * d_pellet / 4.0) * L)}'
    block = '2 3'
  []
[]
(tutorials/sfr_7pin/solid.i)
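
For reference, with the values defined at the top of the file, the ${fparse} expression above evaluates to approximately

\frac{21\times10^{3}\ \text{W}}{\pi\left(6.03\times10^{-3}\ \text{m}\right)^2/4 \times 0.2032\ \text{m}} \approx 3.6\times10^{9}\ \text{W/m}^3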

For computing the heat flux on the boundaries coupled to NekRS (the clad outer surface and the duct inner surface), the DiffusionFluxAux auxiliary kernel is used.

[AuxKernels]
  [avg_flux]
    type = DiffusionFluxAux
    diffusion_variable = T
    component = normal
    diffusivity = thermal_conductivity
    variable = avg_flux
    boundary = '5 10'
  []
[]
(tutorials/sfr_7pin/solid.i)

The HeatConductionMaterial is then used to specify functional forms for the thermal conductivity of the pellet, clad, and duct. In the ParsedFunctions used by HeatConductionMaterial, t represents temperature (rather than time).

[Functions]
  [k_sodium]
    type = ParsedFunction
    expression = '1.1045e2 + -6.5112e-2 * t + 1.5430e-5 * t * t + -2.4617e-9 * t * t * t'
  []
  [k_HT9]
    type = ParsedFunction
    expression = 'if (t < 1030, 17.622 + 2.428e-2 * t - 1.696e-5 * t * t,
                           12.027 + 1.218e-2 * t)'
  []
  [k_U]
    type = ParsedFunction
    expression = 'if (t < 255.4, 16.170,
                            if (t < 1173.2, (5.907e-6 * t * t + 1.591e-2 * t + 11.712),
                                             38.508))'
  []
[]

[Materials]
  [clad_and_duct]
    type = HeatConductionMaterial
    thermal_conductivity_temperature_function = k_HT9
    temp = T
    block = '1 10'
  []
  [pellet]
    type = HeatConductionMaterial
    thermal_conductivity_temperature_function = k_U
    temp = T
    block = '2 3'
  []
[]
(tutorials/sfr_7pin/solid.i)

Next, we define boundary conditions for the solid. Between the pellet surface and the clad inner surface, we impose a thermal contact model as described in Eq. (1). On fluid-solid interfaces, the solid temperature is set equal to the surface temperature computed by NekRS.

[ThermalContact]
  # This adds boundary conditions between the fuel and the cladding, which represents
  # the heat flux in both directions as
  # q''= h * (T_1 - T_2)
  # where h is a conductance that accounts for conduction through a material and
  # radiation between two infinite parallel plate gray bodies.
  [one_to_two]
    type = GapHeatTransfer
    variable = T
    primary = '1'
    secondary = '4'

    # we will use a quadrature-based approach to find the gap width and cross-side temperature
    quadrature = true

    # emissivity of the fuel
    emissivity_primary = 0.8

    # emissivity of the clad
    emissivity_secondary = 0.8

    # thermal conductivity of the gap material
    gap_conductivity_function = k_sodium
    gap_conductivity_function_variable = T

    # geometric terms related to the gap
    gap_geometry_type = CYLINDER
    cylinder_axis_point_1 = '0 0 0'
    cylinder_axis_point_2 = '0 0 ${L}'
  []
[]

[BCs]
  [pin_and_duct]
    type = MatchedValueBC
    variable = T
    v = nek_temp
    boundary = '5 10'
  []
[]
(tutorials/sfr_7pin/solid.i)

Next, the MultiApps and Transfers blocks describe the interaction between Cardinal and MOOSE. The MOOSE heat transfer module is here run as the main application, with the NekRS wrapping run as the sub-application. Three transfers are required to couple Cardinal and MOOSE; the first is a transfer of surface temperature from Cardinal to MOOSE. The second is a transfer of heat flux from MOOSE to Cardinal. And the third is a transfer of the total integrated heat flux from MOOSE to Cardinal (computed as a postprocessor), which is then used internally by NekRS to re-normalize the heat flux (after interpolation onto its GLL points).

[MultiApps]
  [nek]
    type = TransientMultiApp
    input_files = 'nek.i'
    sub_cycling = true
    execute_on = timestep_end
  []
[]

[Transfers]
  [nek_temp] # grabs temperature from nekRS and stores it in nek_temp
    type = MultiAppGeneralFieldNearestLocationTransfer
    source_variable = temp
    from_multi_app = nek
    variable = nek_temp
  []
  [avg_flux] # sends heat flux in avg_flux to nekRS
    type = MultiAppGeneralFieldNearestLocationTransfer
    source_variable = avg_flux
    to_multi_app = nek
    variable = avg_flux
  []
  [flux_integral_to_nek] # sends the heat flux integral (for normalization) to nekRS
    type = MultiAppPostprocessorTransfer
    to_postprocessor = flux_integral
    from_postprocessor = flux_integral
    to_multi_app = nek
  []
  [synchronize_in]
    type = MultiAppPostprocessorTransfer
    to_postprocessor = transfer_in
    from_postprocessor = synchronize
    to_multi_app = nek
  []
[]
(tutorials/sfr_7pin/solid.i)

In addition to these three transfers necessary to couple NekRS with MOOSE, there is a fourth transfer - synchronize_in, which transfers the synchronize postprocessor to NekRS. The synchronize postprocessor is simply a Receiver postprocessor that is set to a value of 1. No application transfers anything into synchronize, so the value of this postprocessor always remains fixed at 1. In addition to the synchronize postprocessor, other postprocessors used to facilitate the data transfers and to output certain quantities of interest are listed below.

[Postprocessors]
  [flux_integral]
    # evaluate the total heat flux for normalization
    type = SideDiffusiveFluxIntegral
    diffusivity = thermal_conductivity
    variable = T
    boundary = '5 10'
  []
  [max_fuel_T]
    type = NodalExtremeValue
    variable = T
    value_type = max
    block = '2 3'
  []
  [max_clad_T]
    type = NodalExtremeValue
    variable = T
    value_type = max
    block = '1'
  []
  [max_duct_T]
    type = NodalExtremeValue
    variable = T
    value_type = max
    block = '10'
  []
  [synchronize]
    type = Receiver
    default = 1
  []
[]
(tutorials/sfr_7pin/solid.i)

To understand the purpose of this (optional) transfer, we need to describe in more detail the data transfers that occur when sub-cycling. Please note that this is an advanced feature added for very large runs to squeeze out as much performance as possible - understanding this feature is not necessary for beginner users.

Consider the case where the main application has a time step of 1, but NekRS has a time step of 0.2. After the solution of the main application, the heat flux transfer into NekRS consists of several steps:

  1. Transfer from avg_flux in the main application to the avg_flux receiver variable in the MOOSE-wrapped NekRS app. This is the transfer that happens in the Transfers block.

  2. Once the heat flux is available in the avg_flux variable in the nek.i input, transfer that heat flux into NekRS on the host (i.e. the CPU) by interpolating from the NekRSMesh to the GLL points.

  3. Once the heat flux has been normalized on the host, it is then copied from the host to the device (i.e. the parallel backend, which will be either a CPU or GPU).

If NekRS is run with a much smaller time step than the main application, steps 2 and 3 can be skipped on most NekRS time steps, saving both the interpolation from the NekRSMesh to NekRS's GLL points and the copy from the host to the device. However, MOOSE's design means that the sub-application doesn't really know anything about how it fits into the hierarchical MultiApp tree - it is agnostic. So, the use of this postprocessor (plus some settings in NekRSProblem, to be discussed shortly) allows steps 2 and 3 to be performed only at the synchronization points between NekRS and MOOSE; a minimal sketch of this idea is given below. In other words, if NekRS runs with a time step 100 times smaller than the main application's, this feature reduces the mesh interpolation and host-to-device copying by a factor of 100. By transferring this "dummy" postprocessor to the NekRS wrapping, NekRS has a signal of when the synchronization points occur.
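
The following is a minimal, self-contained illustration of why this saves work (this is not Cardinal source code, and all function and variable names here are hypothetical): the expensive interpolation and host-to-device copy only need to run on the NekRS sub-step where new parent-application data has just arrived.

#include <cstdio>

// Hypothetical stand-ins for steps 2 and 3 above (names are illustrative only)
void interpolateFluxToGLL() { std::printf("  interpolate flux from NekRSMesh to GLL points\n"); }
void copyFluxHostToDevice() { std::printf("  copy flux from host to device\n"); }

int main()
{
  const int nek_steps_per_moose_step = 5; // e.g. dt_moose / dt_nek
  const int moose_steps = 3;

  for (int m = 0; m < moose_steps; ++m)
  {
    // new coupling data (heat flux) has just been transferred from the main app
    bool new_data_from_parent = true;

    for (int n = 0; n < nek_steps_per_moose_step; ++n)
    {
      if (new_data_from_parent)
      {
        // steps 2 and 3 happen only once per main-application time step,
        // rather than on every NekRS sub-step
        interpolateFluxToGLL();
        copyFluxHostToDevice();
        new_data_from_parent = false;
      }

      std::printf("NekRS sub-step %d of main-app step %d\n", n + 1, m + 1);
      // ... NekRS advances its own (smaller) time step here ...
    }
  }

  return 0;
}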

Finally, we specify an executioner and an exodus output for the solid solution.

[Executioner]
  type = Transient
  dt = 5e-3
  num_steps = 10
  nl_abs_tol = 1e-5
  nl_rel_tol = 1e-16
  petsc_options_value = 'hypre boomeramg'
  petsc_options_iname = '-pc_type -pc_hypre_type'
[]

[Outputs]
  exodus = true
  execute_on = 'final'
[]
(tutorials/sfr_7pin/solid.i)

Fluid Input Files

The fluid phase is solved with NekRS, and is specified in the nek.i file. For CHT coupling, first we construct a mirror of NekRS's mesh on the boundaries of interest - the IDs associated with the fluid-solid interfaces (as known to NekRS) are boundaries 1 and 2.

[Mesh]
  type = NekRSMesh
  boundary = '1 2'
[]
(tutorials/sfr_7pin/nek.i)

Next, NekRSProblem is used to describe all aspects of the NekRS wrapping.

[Problem]
  type = NekRSProblem
  casename = 'sfr_7pin'
  synchronization_interval = parent_app
[]
(tutorials/sfr_7pin/nek.i)

We use synchronization_interval = parent_app to indicate that we only want to transfer data into NekRS (i.e. perform the interpolation to GLL points and the host-to-device copy described earlier) when new coupling data is available from the parent application. When this option is used, NekRSProblem automatically adds a Receiver postprocessor named transfer_in, as if the following were added to the input file:

[Postprocessors]
  [transfer_in]
    type = Receiver
  []
[]

The transfer_in postprocessor simply receives the synchronize postprocessor from the main application, as shown in the MultiAppPostprocessorTransfer in the solid input file.

We specify a number of other postprocessors in order to query the NekRS solution at each time step. Note that the flux_integral receiver postprocessor to which the main application sends the flux integral does not appear in the input - this postprocessor, like the temp and avg_flux auxiliary variables, is added automatically by NekRSProblem.

[Postprocessors]
  [pin_flux_in_nek]
    type = NekHeatFluxIntegral
    boundary = '1'
  []
  [duct_flux_in_nek]
    type = NekHeatFluxIntegral
    boundary = '2'
  []
  [max_nek_T]
    type = NekVolumeExtremeValue
    field = temperature
    value_type = max
  []
  [min_nek_T]
    type = NekVolumeExtremeValue
    field = temperature
    value_type = min
  []
[]
(tutorials/sfr_7pin/nek.i)

Finally, we specify a Transient executioner and NekTimeStepper in order for NekRS to choose its time step (subject to any synchronization points specified by the main application). We also specify an Exodus output file format.

[Executioner]
  type = Transient

  [TimeStepper]
    type = NekTimeStepper
  []
[]

[Outputs]
  exodus = true
  execute_on = 'final'
[]
(tutorials/sfr_7pin/nek.i)

Additional files necessary to set up the NekRS problem are the same files you'd need to set up a standalone NekRS simulation -

  • sfr_7pin.re2: NekRS mesh

  • sfr_7pin.par: High-level settings for the solver, boundary condition mappings to sidesets, and the equations to solve

  • sfr_7pin.udf: User-defined C++ functions for on-line postprocessing and model setup

  • sfr_7pin.oudf: User-defined OCCA kernels for boundary conditions and source terms

Begin with the sfr_7pin.par file.

[GENERAL]
  stopAt = numSteps
  numSteps = 1000
  dt = 5.0e-4
  timeStepper = tombo2
  writeControl = steps
  writeInterval = 25
  polynomialOrder = 3

[VELOCITY]
  viscosity = 2.37e-4
  density = 834.5
  boundaryTypeMap = W, W, v, O
  residualTol = 1.0e-6

[PRESSURE]
  residualTol = 1.0e-6

[TEMPERATURE]
  conductivity = 64.21
  rhoCp = 1024766.0
  boundaryTypeMap = f, f, t, O
  residualTol = 1.0e-5
(tutorials/sfr_7pin/sfr_7pin.par)

Boundaries 1 and 2 will receive heat flux from MOOSE, so these two boundaries are set to flux boundaries (f) in the [TEMPERATURE] block. The remaining settings are standard NekRS options.

The assignment of boundary condition values is performed in the sfr_7pin.oudf file, shown below. Note that for boundaries 1 and 2, where we want to receive heat flux from MOOSE, we set the value of the flux equal to bc->usrwrk[bc->idM], or the scratch array that is written by NekRSProblem.

void velocityDirichletConditions(bcData * bc)
{
  if (bc->id == 3)
  {
    bc->u = 0.0; // x-velocity
    bc->v = 0.0; // y-velocity
    bc->w = Vz;  // z-velocity
  }
}

void scalarDirichletConditions(bcData * bc)
{
  if (bc->id == 3)
    bc->s = inlet_T;
}

void scalarNeumannConditions(bcData * bc)
{
  // note: when running with Cardinal, Cardinal will allocate the usrwrk
  // array. If running with NekRS standalone (e.g. nrsmpi), you need to
  // replace the usrwrk with some other value or allocate it yourself from
  // the udf and populate it with values.
  if (bc->id == 1 || bc->id == 2)
    bc->flux = bc->usrwrk[bc->idM];
}
(tutorials/sfr_7pin/sfr_7pin.oudf)

Finally, the sfr_7pin.udf file contains C++ functions to set up boundary conditions and perform other postprocessing operations. In UDF_Setup, we set initial conditions for velocity, pressure, and temperature. For convenience, we define local functions like mass_flowrate() so that problem parameters can be set in a single place and used in multiple places (these functions are not special NekRS syntax - i.e. we could equivalently have done something like #define mdot 0.1).

#include "udf.hpp"

float mass_flowrate()
{
  return 0.1;
}

float inlet_temp()
{
  return 355 + 273.15;
}

float inlet_velocity()
{
  setupAide & options = platform->options;

  // flow area (comes from Cubit area measurement)
  float flow_area = 0.000245;

  // get density from input file
  float rho;
  options.getArgs("DENSITY", rho);

  return mass_flowrate() / flow_area / rho;
}

void UDF_LoadKernels(occa::properties & kernelInfo)
{
  // inlet axial velocity (assuming zero radial and azimuthal components)
  kernelInfo["defines/Vz"] = inlet_velocity();

  // inlet temperature
  kernelInfo["defines/inlet_T"] = inlet_temp();
}

void UDF_Setup(nrs_t *nrs)
{
  // set initial conditions for the velocity, temperature, and pressure
  mesh_t * mesh = nrs->cds->mesh[0];

  float inlet_vel = inlet_velocity();

  // loop over all the GLL points and assign directly to the solution arrays by
  // indexing according to the field offset necessary to hold the data for each
  // solution component
  int n_gll_points = mesh->Np * mesh->Nelements;
  for (int n = 0; n < n_gll_points; ++n)
  {
    nrs->U[n + 0 * nrs->fieldOffset] = 0.0;
    nrs->U[n + 1 * nrs->fieldOffset] = 0.0;
    nrs->U[n + 2 * nrs->fieldOffset] = inlet_vel;

    nrs->P[n] = 0.0;

    nrs->cds->S[n + 0 * nrs->cds->fieldOffset[0]] = inlet_temp();
  }
}
(tutorials/sfr_7pin/sfr_7pin.udf)
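
For reference, inlet_velocity() evaluates to approximately

U_z = \frac{\dot{m}}{\rho A_{flow}} = \frac{0.1\ \text{kg/s}}{\left(834.5\ \text{kg/m}^3\right)\left(2.45\times10^{-4}\ \text{m}^2\right)} \approx 0.49\ \text{m/s}

which is the uniform axial velocity applied at the inlet and used as the initial condition.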

Execution and Postprocessing

To run the pseudo-steady model,


mpiexec -np 4 cardinal-opt -i solid.i

Because we used synchronization_interval = parent_app, you will see in the screen output that the data transfers into NekRS only occur on synchronization points with the main application - all other time steps will omit the messages about normalizing heat flux and extracting temperatures from NekRS.

After converting the NekRS output files to a format viewable in Paraview (see instructions here), the simulation results can be displayed. The fluid temperature is shown in Figure 3 along with the mesh lines of the solid phase, while the solid temperature is shown in Figure 4 along with the lines connecting GLL points for the fluid phase. The temperature color scale is the same in both figures.

Figure 3: Fluid temperature computed by NekRS for CHT coupling for a bare 7-pin SFR bundle with solid mesh lines shown in blue.

Figure 4: Solid temperature computed by MOOSE for CHT coupling for a bare 7-pin SFR bundle with fluid mesh lines shown in blue.

Preserving Flux on Individual Sidesets

By default, NekRSProblem will lump all "receiving" sidesets in NekRS together for the purpose of normalization. For this example, this means that the pin outer surface flux will not exactly match the pin outer flux from MOOSE (and similarly for the duct inner surface flux), because these surfaces are lumped together. In this section, we illustrate a more advanced option to individually preserve flux by sideset.
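
Conceptually, the default behavior applies a single scale factor to the interpolated NekRS-side flux so that its integral over the union of the coupled sidesets matches the total sent by MOOSE, whereas the option shown in this section applies one factor per sideset. As a sketch, with \Gamma_1 the pin surfaces and \Gamma_2 the duct inner surface,

q''_{nek} \leftarrow q''_{nek}\,\frac{\int_{\Gamma_1\cup\Gamma_2} q''_{MOOSE}\,dA}{\int_{\Gamma_1\cup\Gamma_2} q''_{nek}\,dA} \qquad \text{(lumped, default)}

q''_{nek}\big|_{\Gamma_i} \leftarrow q''_{nek}\big|_{\Gamma_i}\,\frac{\int_{\Gamma_i} q''_{MOOSE}\,dA}{\int_{\Gamma_i} q''_{nek}\,dA},\quad i = 1, 2 \qquad \text{(per-sideset)}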

The input files we will use are the solid_vpp.i and nek_vpp.i files. These files are almost identical to the files described in the previous section, so we only emphasize the differences. First, in the solid model we need to set up an individual postprocessor for the heat flux on each NekRS coupling boundary. Then, we need to set up a VectorOfPostprocessors to collect these flux postprocessors into a vector. Note that the order of the postprocessors must match the order of the boundaries given in NekRSMesh.

[VectorPostprocessors]
  [flux]
    type = VectorOfPostprocessors
    postprocessors = 'pin_flux duct_flux'
  []
[]

[Postprocessors]
  [pin_flux]
    type = SideDiffusiveFluxIntegral
    diffusivity = thermal_conductivity
    variable = T
    boundary = '5'
  []
  [duct_flux]
    type = SideDiffusiveFluxIntegral
    diffusivity = thermal_conductivity
    variable = T
    boundary = '10'
  []
  [max_fuel_T]
    type = NodalExtremeValue
    variable = T
    value_type = max
    block = '2 3'
  []
  [max_clad_T]
    type = NodalExtremeValue
    variable = T
    value_type = max
    block = '1'
  []
  [max_duct_T]
    type = NodalExtremeValue
    variable = T
    value_type = max
    block = '10'
  []
  [synchronize]
    type = Receiver
    default = 1
  []
[]
(tutorials/sfr_7pin/solid_vpp.i)

Then, we simply need to replace the MultiAppPostprocessorTransfer with a MultiAppReporterTransfer.

[Transfers]
  [nek_temp]
    # grabs temperature from nekRS and stores it in nek_temp
    type = MultiAppGeneralFieldNearestLocationTransfer
    source_variable = temp
    from_multi_app = nek
    variable = nek_temp
  []
  [avg_flux]
    # sends heat flux in avg_flux to nekRS
    type = MultiAppGeneralFieldNearestLocationTransfer
    source_variable = avg_flux
    to_multi_app = nek
    variable = avg_flux
  []
  [flux_integral_to_nek]
    # sends the heat flux integral (for normalization) to nekRS
    type = MultiAppReporterTransfer
    to_reporters = 'flux_integral/value'
    from_reporters = 'flux/flux'
    to_multi_app = nek
  []
  [synchronize_in]
    type = MultiAppPostprocessorTransfer
    to_postprocessor = transfer_in
    from_postprocessor = synchronize
    to_multi_app = nek
  []
[]
(tutorials/sfr_7pin/solid_vpp.i)

Finally, we just need to set conserve_flux_by_sideset = true in NekRSProblem in the Nek input file.

[Problem]
  type = NekRSProblem
  casename = 'sfr_7pin'
  conserve_flux_by_sideset = true
  synchronization_interval = parent_app
[]
(tutorials/sfr_7pin/nek_vpp.i)

To run the model,


mpirun -np 4 cardinal-opt -i solid_vpp.i

The physics predictions are nearly identical to those displayed earlier.

References

  1. J. Cahalan, L. Deitrich, F. Dunn, D. Fallin, M. Farmer, T. Fanning, T. Kim, L. Krajtl, S. Lomperski, A. Moisseytsev, Y. Momzaki, J. Sienicki, Y. Park, Y. Tang, C. Reed, C. Tzanos, S. Wiedmeyer, W. Yang, and Y. Chikazawa. Advanced Burner Test Reactor Preconceptual Design Report. Technical Report ANL-ABR-1, ANL, 2006.