Cardinal System Requirements Specification
This template follows INL template TEM-135, "IT System Requirements Specification".
This document serves as an addendum to the Framework System Requirements Specification and the module SRS documents listed below, and captures SRS information specific to the Cardinal application.
- Framework System Requirements Specification
- Fluid Properties System Requirements Specification
- Heat Transfer System Requirements Specification
- Navier Stokes System Requirements Specification
- Reactor System Requirements Specification
- Ray Tracing System Requirements Specification
- Solid Mechanics System Requirements Specification
- Solid Properties System Requirements Specification
- Stochastic Tools System Requirements Specification
- Thermal Hydraulics System Requirements Specification
Introduction
System Purpose
The purpose of Cardinal is to perform fully integrated, high-fidelity, multiphysics simulations of nuclear energy systems with a variety of materials, system configurations, and component designs in order to better understand system performance. Cardinal's main goal is to bring together the combined multiphysics capabilities of the MOOSE ecosystem with leading high-performance tools for radiation transport (OpenMC) and Navier-Stokes fluid flow (NekRS) in an open platform for research, safety assessment, engineering, and design studies of nuclear energy systems.
System Scope
Cardinal is an application for performing high-fidelity simulation of nuclear systems incorporating Monte Carlo neutron-photon transport and/or spectral element Computational Fluid Dynamics (CFD). These physics can be combined with one another and with the MOOSE modules to accomplish "multiphysics" simulation. High-fidelity simulations can also be performed independently, for purposes such as data postprocessing or generating constitutive models suitable for lower-fidelity tools, a process referred to as "multiscale" simulation.
Interfaces to other MOOSE-based codes, including systems-level thermal-hydraulics (SAM), heat pipe flows (Sockeye), and fuel performance (Bison), are also optionally included to support Cardinal simulations. Cardinal enables high-fidelity modeling of heat transfer, fluid flow, passive scalar transport, fluid-structure interaction, nuclear heating, tritium breeding, shielding effectiveness, material activation, material damage, and sensor response. The MultiApp System is leveraged to allow for the multiscale, multiphysics coupling. Further, other MOOSE capabilities in the modules, such as the Stochastic Tools Module, enable engineering studies with uncertainty quantification and sensitivity analysis. Cardinal therefore supports design, safety, engineering, and research projects.
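As a minimal, hedged sketch of how the MultiApp-based coupling described above is typically assembled, the fragment below shows a MOOSE main application driving a NekRS sub-application for a conjugate heat transfer arrangement. The block syntax is standard MOOSE, but the transfer class and the variable names (flux, avg_flux, temp, nek_temp) are illustrative placeholders rather than Cardinal's exact reserved names.

```
# main.i -- hypothetical heat conduction main app driving a NekRS sub-app
[MultiApps]
  [nek]
    type = TransientMultiApp
    input_files = 'nek.i'
    execute_on = timestep_end
  []
[]

[Transfers]
  [flux_to_nek]
    # send the wall heat flux computed by the main app into the NekRS wrapper
    type = MultiAppGeneralFieldNearestLocationTransfer
    to_multi_app = nek
    source_variable = flux      # placeholder names for the coupled fields
    variable = avg_flux
  []
  [temp_from_nek]
    # receive the fluid-side wall temperature computed by NekRS
    type = MultiAppGeneralFieldNearestLocationTransfer
    from_multi_app = nek
    source_variable = temp
    variable = nek_temp
  []
[]
```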
System Overview
System Context
The Cardinal application is command-line driven, which, as with MOOSE, is typical for high-performance software designed to run across several nodes of a cluster system. As such, all usage of the software is through any standard terminal program available on the supported operating systems. Similarly, for the purpose of interacting with the software, there is only a single user, "the user", who interacts with the software through the command line. Cardinal does not maintain any back-end database or interact with any system daemons. It is an executable that may be launched from the command line and writes out various result files as it runs.
Figure 1: Usage of Cardinal and other MOOSE-based applications.
System Functions
Since Cardinal is a command-line driven application, all functionality provided in the software is operated through standard UNIX command-line flags and the extendable MOOSE input file. Cardinal is completely extendable, so individual design pages should be consulted for the specific behavior of each user-defined object.
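As a concrete but hedged illustration of this workflow, the sketch below shows roughly what a thin-wrapper input file for a standalone NekRS case might look like and how it would be launched. The object names (NekRSMesh, NekRSStandaloneProblem, NekTimeStepper) appear in the requirements later in this document, but the exact parameters and executable name may differ between Cardinal versions and build configurations.

```
# nek.i -- minimal thin-wrapper input for a standalone NekRS case named 'channel'
# (assumed to be launched from a terminal as, e.g., mpiexec -np 4 cardinal-opt -i nek.i)
[Mesh]
  type = NekRSMesh
  volume = true            # build a volume mesh mirror of the NekRS mesh
[]

[Problem]
  type = NekRSStandaloneProblem
  casename = 'channel'     # NekRS case files: channel.par, channel.re2, channel.udf, ...
[]

[Executioner]
  type = Transient
  [TimeStepper]
    type = NekTimeStepper  # defer time step selection to the NekRS .par file
  []
[]

[Outputs]
  exodus = true            # write the extracted NekRS solution on the mesh mirror
[]
```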
User Characteristics
Cardinal has two main classes of users:
Cardinal Developers: These are the core developers of Cardinal. They are responsible for designing, implementing, and maintaining the software, while following and enforcing its software development standards. These scientists and engineers modify or add capabilities to Cardinal for their own purposes, which may include research or extending its capabilities. They will typically have a background in thermal-fluids and/or radiation transport, as well as in modeling and simulation techniques.
Analysts: These are users that run Cardinal to perform simulations but do not develop code. The primary interface of these users with Cardinal is the input files that define their simulations. These users may interact with developers of the system to request new features and report bugs.
Assumptions and Dependencies
The Cardinal application is developed using MOOSE and is based on various MOOSE modules; as such, the SRS for Cardinal is dependent upon the documents listed at the beginning of this document. Any further assumptions or dependencies are outlined in the remainder of this section.
Cardinal has no constraints on hardware and software beyond those of the MOOSE framework, MOOSE modules, OpenMC, NekRS, and DAGMC, as listed in their respective SRS documents, which are accessible through the links at the beginning of this document.
Cardinal provides access to a number of code objects that perform computations, such as particle transport, material behavior, and boundary conditions. These objects each make their own physics-based assumptions, such as the units of the inputs and outputs. Those assumptions are described in the documentation for those individual objects.
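For example, the neutronics requirements later in this document exercise coupling between an OpenMC model built in centimeters and a MOOSE mesh built in meters. The hedged sketch below shows how such a unit assumption is made explicit in an input file; the parameter names follow Cardinal's OpenMC wrapping but should be checked against the documentation for the specific object.

```
# openmc.i -- fragment illustrating a unit-system assumption made explicit in the input
[Problem]
  type = OpenMCCellAverageProblem
  power = 1000.0    # total power [W] used to normalize the OpenMC tally
  scaling = 100.0   # the [Mesh] is in meters, while the OpenMC geometry is in centimeters
[]
```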
References
Definitions and Acronyms
This section defines all terms and acronyms required to properly understand this specification.
Definitions
Verification: (1) The process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. (2) Formal proof of program correctness (e.g., requirements, design, implementation reviews, system tests) (24765:2010(E), 2010).
Acronyms
Acronym | Description |
---|---|
CFD | Computational Fluid Dynamics |
INL | Idaho National Laboratory |
MOOSE | Multiphysics Object Oriented Simulation Environment |
NQA-1 | Nuclear Quality Assurance Level 1 |
POSIX | Portable Operating System Interface |
SRS | Software Requirement Specification |
System Requirements
In general, the following is required for MOOSE-based development:
A POSIX-compliant Unix-like operating system. This includes any modern Linux-based operating system (e.g., Ubuntu, Fedora, Rocky), or a Macintosh machine running either of the last two macOS releases.
Hardware | Information |
---|---|
CPU Architecture | x86_64, ARM (Apple Silicon) |
Memory | 8 GB (16 GB for debug compilation) |
Disk Space | 30 GB |
Libraries | Version / Information |
---|---|
GCC | 8.5.0 - 12.2.1 |
LLVM/Clang | 10.0.1 - 16.0.6 |
Intel (ICC/ICX) | Not supported at this time |
Python | 3.7 - 3.11 |
Python Packages | packaging pyaml jinja2 |
Functional Requirements
- cardinal: Auxkernels
- 1.1.1The system shall divide space into a 3-D Cartesian grid and assign an integer value for each bin to an auxiliary variable.
- 1.1.2The system shall compute a convective heat transfer coefficient using userobjects for the wall heat flux, wall temperature, and bulk temperature.
- 1.1.3 The system shall error if the auxkernel is not paired with a compatible bin user object.
- 1.1.4 The system shall error if the auxkernel is not paired with a velocity vector bin user object.
- cardinal: Bison Coupling
- 1.2.1Cardinal shall be able to run BISON as a main application without any data transfers. This test just ensures correct setup of BISON as a submodule with app registration.
- 1.2.2Cardinal shall be able to run BISON as a sub-application without any data transfers. This test just ensures correct setup of BISON as a submodule with app registration.
- cardinal: Cht
- 1.3.1The system shall correctly evaluate volume postprocessors for a dimensional NekRS solve on a conjugate heat transfer mesh. We test this by repeating the same integrals of the mapped data on the MOOSE mesh, and show exact equivalence.
- 1.3.2 A coupled MOOSE-nekRS pincell-fluid flow problem shall predict correct conservation of energy and realistic thermal solutions when nekRS is run in nondimensional form. A wide variety of postprocessors are measured and compared against the same problem in dimensional form in the ../sfr_pincell directory. Most measurements match exactly, but there is about a 0.1% difference in temperatures (this is to be expected, though, because the _solve_ is not exactly the same between a nondimensional and dimensional case - the governing equation is the same, but not necessarily the number of iterations, etc.).
- 1.3.3A coupled MOOSE-nekRS pincell-fluid flow problem shall predict correct conservation of energy and realistic thermal solutions when nekRS is run in nondimensional form using an exact mesh mirror. This solution was compared to the sfr_pincell case, and results are very similar with only small differences due to the different mesh mirror representations. The usrwrk_output feature was also used to check the correctness of the flux map into NekRS.
- 1.3.4A coupled MOOSE-nekRS pebble flow problem shall predict physically realistic conjugate heat transfer when using an initial offset in the scratch space. The gold file was created when using no offset to prove equivalence.
- 1.3.5A coupled MOOSE-nekRS pebble flow problem shall predict physically realistic conjugate heat transfer.
- 1.3.6The system shall be able to output the pressure and velocity solution from NekRS when coupling to MOOSE.
- 1.3.7A coupled MOOSE-nekRS pincell-fluid flow problem shall predict correct conservation of energy and realistic thermal solutions. Exact conservation of energy (based on the power imposed in the solid) will not be observed because some heat flux GLL points are also on Dirichlet boundaries, which win in boundary condition ties.
- 1.3.8Individually conserving heat flux sideset by sideset shall give equivalent results to the all-combined option when there is just one coupling sideset. The gold file for this test is identical to that for the sfr_pincell case.
- 1.3.9The system shall allow imposing heat flux through a dummy main application, instead of coupling NekRS via conjugate heat transfer. This is verified by computing the heat flux on the NekRS mesh, which adequately matches an initial value set in a postprocessor. This gold file is also identical to that obtained by running a dummy main app (solid_dummy) that passes in the desired flux_integral initial condition.
- cardinal: Conduction
- 1.4.1 A coupled MOOSE-nekRS heat conduction problem shall produce the correct temperature distribution when (1) a heat source is applied in the nekRS volume and (2) a heat flux is imposed in nekRS through a boundary. The same problem is created in a standalone MOOSE simulation, in moose.i. Temperatures agree to within 0.2%, and the agreement can be made better by using finer meshes in the coupled Cardinal case.
- 1.4.2The system shall couple NekRS to MOOSE through simultaneous boundary heat flux and volumetric heating when using an exact mesh mirror. The solution is nearly identical to the pyramid test and the usrwrk output for flux and volumetric heating match the auxiliary variables in the Nek-wrapped input.
- 1.4.3The system shall error if the user specifies a duplicate variable with a name overlapping with special names reserved for Cardinal data transfers.
- 1.4.4 A coupled MOOSE-nekRS slab heat conduction problem shall predict the correct interface and volume temperatures based on an analytic solution. The MOOSE portion of the domain (the left half) is compared against an exodiff with this test, while the nekRS portion of the domain (the right half) has been compared off-line at the time of the generation of this test to the known analytic solution via line plot in Paraview (this has to be done offline because the 'visnek' script needs to be run to get the nekRS output results into the exodus format). A heat conduction simulation performed with MOOSE over the combined nekRS-MOOSE domain in the 'moose.i' input file also matches the coupled results very well.
- 1.4.5 A coupled MOOSE-nekRS two-region pyramid heat conduction problem shall predict the correct interface and volume temperatures that are obtained from a standalone MOOSE heat conduction simulation in the moose.i file of both pyramid blocks. Because only a constant heat flux is passed for each element during the data transfer from MOOSE to nekRS, we do not expect the results to match perfectly with the MOOSE standalone case (which represents the heat flux internally as a continuous function, as opposed to elementwise-constant). We compare extrema temperature values on the two pyramid blocks with the MOOSE solution, and see that (1) the overall temperature distribution matches qualitatively, and (2) the difference between the Cardinal case and the MOOSE standalone case decreases as the mesh is refined (i.e. as the effect of the constant monomial surface heat flux is lowered). Even with the fairly coarse mesh used in this test, the errors in temperature are less than 1% of the total temperature range in the problem (though relative differences can locally be high).
- 1.4.6 A coupled MOOSE-nekRS slab heat conduction problem shall produce the correct temperature distribution when a heat source is applied to the nekRS problem. A reference MOOSE standalone problem (in moose.i) solves the equation k*nabla^2(T) = 7*T, i.e. the 'heat source' is 7*T. This surrogate problem is then repeated with Cardinal, where nekRS solves k*nabla^2(T) = q, where q is a heat source 'computed' by MOOSE to be 7*T_n, where T_n is the temperature from nekRS. This problem does not really represent any interesting physical case, but is solely intended to show that nekRS correctly solves the heat equation with a heat source 'computed' by some other app. The solution matches the standalone MOOSE case very well - with a nekRS mesh of 15x15x15 elements, postprocessors for the side- and volume-averaged temperature, as well as maximum temperature, match the MOOSE standalone case to within 0.1%. To keep the gold files small here, this test is only performed on a 10x10x10 nekRS mesh.
- 1.4.7 A coupled MOOSE-nekRS two-region cylinder heat conduction problem shall predict the correct interface and volume temperatures that are obtained from a standalone MOOSE heat conduction simulation in the moose.i file of both cylinders. Because only a constant heat flux is passed for each element during the data transfer from MOOSE to nekRS, we do not expect the results to match perfectly with the MOOSE standalone case (which represents the heat flux internally as a continuous function, as opposed to elementwise-constant). We compare extrema temperature values on the two cylinders with the MOOSE solution, and see that (1) the overall temperature distribution matches qualitatively, and (2) the difference between the Cardinal case and the MOOSE standalone case decreases as the mesh is refined (i.e. as the effect of the constant monomial surface heat flux is lowered). Even with the fairly coarse mesh used in this test, the errors in temperature are less than 0.5%. As the BISON mesh is refined, the error continually decreases until the temperatures are very close to those predicted by the standalone MOOSE case.
- 1.4.8 The same solution shall be obtained as the cylinder_conduction case when nekRS is run with a smaller time step than MOOSE with subcycling. The run will not require the same overall number of time steps as the cylinder_conduction test, so we just compare some postprocessor values at the end of the simulation. These CSV results are less than 1e-3 different from those for the cylinder_conduction case, so the simulation process is equivalent. We don't use exactly the same CSV gold file because the number of time steps differs, and would trigger a failure.
- 1.4.9 The same solution shall be obtained when nekRS is run before MOOSE. We compare CSV results against those for the cylinder_conduction case, which match to within 1e-5. We don't use exactly the same CSV gold file because the number of time steps differs, and would trigger a failure.
- 1.4.10 The same solution shall be obtained when nekRS is run as a sub-app with minimized transfers for the incoming and outgoing data transfers. We compare against the same CSV file used for the cylinder_conduction_subcycle case because the results should be exactly the same.
- 1.4.11The system shall support an exact NekRS mesh mirror. The solution is compared against the cylinder_conduction case and nearly identical solutions are obtained.
- 1.4.12 A coupled MOOSE-nekRS cylinder heat conduction problem shall produce the correct temperature distribution when a heat source is applied to the nekRS problem, and when the meshes do not perfectly line up (i.e. the volumes are different). A reference MOOSE standalone problem (in moose.i) solves the equation k*nabla^2(T) = f(T,x,z), i.e. the 'heat source' is f(T,x,z). This surrogate problem is then repeated with Cardinal, where nekRS solves k*nabla^2(T) = q, where q is a heat source 'computed' by MOOSE to be f(T_n,x,z), where T_n is the temperature from nekRS. This problem does not really represent any interesting physical case, but is solely intended to show that nekRS correctly solves the heat equation with a heat source 'computed' by some other app. The solution matches the standalone MOOSE case very well - postprocessors for the volume-averaged and maximum temperature match the MOOSE standalone case to within 0.1%.
- 1.4.13The system shall couple NekRS to MOOSE through volumetric heating when using an exact mesh mirror. The output file was compared against the cylinder_heat_source test, giving very similar answers. The heat source sent into NekRS was also checked with the usrwrk_output feature.
- 1.4.14A coupled MOOSE-nekRS cylinder heat conduction problem shall produce the correct temperature distribution when a heat source is applied to the nekRS problem, and the nekRS solve is conducted in nondimensional scales. Temperatures match to within 0.1% between the nondimensional version and the dimensional version in ../cylinder.
- 1.4.15The NekRS wrapping shall allow a zero total flux to be sent from MOOSE to NekRS when all sidesets are conserved together.
- 1.4.16The NekRS wrapping shall allow a zero total flux to be sent from MOOSE to NekRS when sidesets are individually conserved.
- 1.4.17The NekRS wrapping shall allow unique sideset fluxes provided that the sidesets do not share any nodes in the NekRS mesh.
- 1.4.18The NekRS wrapping shall allow unique sideset fluxes provided that the sidesets do not share any nodes in the NekRS mesh, with some sidesets being zero flux.
- 1.4.19The system shall print a helpful error message if the sideset flux reporter does not have the correct length.
- 1.4.20 The system shall error if conserving flux on each unique sideset, but with nodes shared across multiple sidesets.
- cardinal: Controls
- 1.5.1 The system shall error if the Controls object is not used with the proper user object
- 1.5.2 The system shall change OpenMC material compositions via a Controls object
- cardinal: Deformation
- 1.6.1A boundary displacement in the main app will displace the mesh in NekRS when using a boundary mesh mirror. The deformation is verified by looking at the change in area on the (i) main app, (ii) mesh mirror, and (iii) internal NekRS meshes, which agree very well. Improved agreement is obtained by decreasing the time step of the data transfers, due to the first-order finite difference approximation made for velocity.
- 1.6.2A boundary displacement in the main app will displace the mesh in NekRS when using a volume mesh mirror. We use the same gold file as for the boundary test, proving equivalence.
- 1.6.3The system shall be able to run the mv_cyl NekRS example with a thin wrapper when using a volume mesh mirror. We show that the volume and area on the NekRS side is changing, whereas the volume/area in MOOSE will not change because we currently do not send displacements from NekRS to MOOSE.
- 1.6.4The system shall be able to run the mv_cyl NekRS example with a thin wrapper when using a boundary mesh mirror. We show that the volume and area on the NekRS side is changing, whereas the volume/area in MOOSE will not change because we currently do not send displacements from NekRS to MOOSE.
- 1.6.5 This test solves the steady state heat conduction equation with a source term of 3*sin(x)sin(y)sin(z) and a conductivity of 1. The temperature obtained should be sin(x)sin(y)sin(z). The domain is a cube that is deforming at each time step according to the arbitrary functions t*x*z*(2-z)*0.1 for x-coordinates, t*x*z*(2-z)*0.05 in y coordinates, and t*(y+1)*(y-1)*0.1 in z coordinates. The gold solution was verified by comparing it to the analytic solution and a solution obtained by MOOSE's heat conduction solve.
- 1.6.6 An arbitrary mesh displacement in the main app will displace the mesh in the sub-app equivalently at each time step. This shall be verified by comparing the areas of each sideset in both the main and the sub-app. The domain is a cube, with the initial area of each of the sidesets being 4.0. The areas across the main and sub-app should match exactly, provided we are using Gauss-Lobatto quadrature for MOOSE's area post-processors, in order to match NekRS's GLL quadrature.
- cardinal: Fluid Prop Subs
- 1.7.1Cardinal shall be able to use fluid properties from the sodium submodule (in a master app).
- 1.7.2Cardinal shall be able to use fluid properties from the potassium submodule (in a master app).
- 1.7.3Cardinal shall be able to use fluid properties from the IAPWS95 submodule (in a master app).
- 1.7.4Cardinal shall be able to use fluid properties from the sodium submodule (in a sub app).
- 1.7.5Cardinal shall be able to use fluid properties from the potassium submodule (in a sub app).
- 1.7.6Cardinal shall be able to use fluid properties from the IAPWS95 submodule (in a sub app).
- cardinal: Griffin Coupling
- 1.8.1Cardinal shall be able to run Griffin as the main application.
- cardinal: Ics
- 1.9.1The volume postprocessor must have execute_on initial
- 1.9.2The system shall error if invalid parameters are provided
- 1.9.3The system shall be able to apply a normalized sinusoidal initial condition.
- 1.9.4The system shall error if the pairing heat source action is missing
- 1.9.5The system shall be able to apply a normalized sinusoidal initial condition.
- cardinal: Markers
- 1.10.1The system shall be able to prioritize refinement while ANDing markers together.
- 1.10.2The system shall be able to prioritize coarsening while ANDing markers together.
- 1.10.3The system shall be able to prioritize refinement while ORing markers together.
- 1.10.4The system shall be able to prioritize coarsening while ORing markers together.
- cardinal: Mesh
- 1.11.1The system shall be able to construct a triangular lattice mesh for three pin rings of gaps.
- 1.11.2The system shall be able to construct a triangular lattice mesh for two pin rings of gaps.
- 1.11.3The system shall be able to construct a triangular lattice mesh for one pin ring of gaps.
- 1.11.4The system shall be able to construct a triangular lattice mesh for three pin rings.
- 1.11.5The system shall be able to construct a triangular lattice mesh for two pin rings.
- 1.11.6The system shall be able to construct a triangular lattice mesh for one pin ring.
- 1.11.7The system shall be able to construct a triangular lattice mesh for three pin rings as face meshes.
- 1.11.8The system shall be able to construct a triangular lattice mesh for two pin rings as face meshes.
- 1.11.9The system shall be able to construct a triangular lattice mesh for one pin ring as face meshes.
- cardinal: Meshgenerators
- 1.12.1The system shall error if the boundary specified for corner fitting does not exist
- 1.12.2The system shall error if an invalid polygon boundary is provided
- 1.12.3The system shall error if the radius of curvature is too big to fit inside the polygon
- 1.12.4The system shall curve corners of a six-sided polygon with zero boundary layers
- 1.12.5The system shall curve corners of a six-sided polygon with zero boundary layers with translation in space
- 1.12.6The system shall error if applying a rotation angle to a polygon not centered on (0, 0)
- 1.12.7The system shall curve corners of a six-sided polygon when the input mesh does not have one edge of the polygon horizontal.
- 1.12.8The system shall error if the length of smoothing adjustments does not equal the number of polygon layers
- 1.12.9The system shall error if the length of smoothing adjustments are not set to valid values
- 1.12.10The system shall curve corners of a six-sided polygon with zero boundary layers and output QUAD8 elements
- 1.12.11The system shall error if an invalid element type is used with a quad9 to quad8 converter
- 1.12.12The system shall error if attempting to move elements to a circular surface when those elements have more than one face on the circular sideset.
- 1.12.13The system shall error if attempting to move elements to a circular surface when those elements have more than one face on the circular sidesets.
- 1.12.14The system shall error if the boundary specified for circular sideset fitting does not exist
- 1.12.15The system shall correctly rebuild a QUAD9 mesh as QUAD8 with default behavior of rebuilding all boundaries, and without any circular movement.
- 1.12.16The system shall correctly rebuild a QUAD9 mesh as QUAD8 with no sidesets retained.
- 1.12.17The system shall correctly rebuild a QUAD9 mesh as QUAD8 with custom behavior of rebuilding only some boundaries, and without any circular movement.
- 1.12.18The system shall error if the boundary specified for rebuilding sidesets does not exist
- 1.12.19The system shall correctly rebuild a QUAD9 mesh as QUAD8 when moving the outer boundary to fit a circle.
- 1.12.20The system shall correctly rebuild a QUAD9 mesh as QUAD8 when moving the inner boundary to fit a circle.
- 1.12.21The system shall correctly rebuild a QUAD9 mesh as QUAD8 when moving multiple boundaries to fit a circle.
- 1.12.22The system shall correctly rebuild a QUAD9 mesh as QUAD8 when moving multiple boundaries to fit a circle with origins not at (0, 0, 0).
- 1.12.23The system shall correctly rebuild a QUAD9 mesh as QUAD8 when moving one boundary to fit a circle when the origin is not at (0, 0, 0)
- 1.12.24The system shall error if a point is already located on the origin, because then it lacks a nonzero unit vector to move it to a circular surface.
- 1.12.25The system shall correctly rebuild a QUAD9 mesh as QUAD8 when moving one boundary to fit circles based on multiple origins, with boundary layers.
- 1.12.26The system shall error if there is an obvious mismatch between the number of boundary layers and the mesh.
- 1.12.27The system shall correctly rebuild a QUAD9 mesh as QUAD8 when moving one boundary to fit a circle and when the alignment axis is not the default (z).
- 1.12.28The system shall error if an invalid element type is used with a hex27 to hex20 converter
- 1.12.29The system shall error if attempting to move elements to a circular surface when those elements have more than one face on the circular sideset.
- 1.12.30The system shall error if attempting to move elements to a circular surface when those elements have more than one face on the circular sidesets.
- 1.12.31The system shall error if the boundary specified for circular sideset fitting does not exist
- 1.12.32The system shall correctly rebuild a HEX27 mesh as HEX20 with default behavior of rebuilding all boundaries, and without any circular movement.
- 1.12.33The system shall correctly rebuild a HEX27 mesh as HEX20 with no sidesets retained.
- 1.12.34The system shall correctly rebuild a HEX27 mesh as HEX20 with custom behavior of rebuilding only some boundaries, and without any circular movement.
- 1.12.35The system shall error if the boundary specified for rebuilding sidesets does not exist
- 1.12.36The system shall correctly rebuild a HEX27 mesh as HEX20 when moving the outer boundary to fit a circle.
- 1.12.37The system shall correctly rebuild a HEX27 mesh as HEX20 when moving the inner boundary to fit a circle.
- 1.12.38The system shall correctly rebuild a HEX27 mesh as HEX20 when moving multiple boundaries to fit a circle.
- 1.12.39The system shall correctly rebuild a HEX27 mesh as HEX20 when moving multiple boundaries to fit a circle with origins not at (0, 0, 0).
- 1.12.40The system shall correctly rebuild a HEX27 mesh as HEX20 when moving one boundary to fit a circle when the origin is not at (0, 0, 0)
- 1.12.41 The system shall error if invalid values are used for the radii.
- 1.12.42The system shall error if the radius and boundary are not the same length.
- 1.12.43The system shall error if the origin and boundary are not the same length.
- 1.12.44The system shall error if the origin and boundary are not the same length.
- 1.12.45The system shall error if the origin entries are not the correct length.
- 1.12.46The system shall error if the origin entries are not the correct length.
- 1.12.47The system shall correctly rebuild a HEX27 mesh as HEX20 when moving one boundary to fit circles based on multiple origins.
- 1.12.48The system shall error if the number of layers does not match the number of boundaries to move
- 1.12.49The system shall correctly rebuild a HEX27 mesh as HEX20 when moving sidesets with boundary layers to fit a cylinder surface.
- 1.12.50The system shall error if there is an obvious mismatch between the number of boundary layers and the mesh.
- 1.12.51The system shall correctly rebuild a HEX27 mesh as HEX20 when moving sidesets with multiple boundary layers to fit a cylinder surface.
- 1.12.52The system shall error if trying to set the origins in more than one manner
- 1.12.53The system shall error if invalid points are provided for the origins
- 1.12.54The system shall correctly rebuild a HEX27 mesh as HEX20 when moving one boundary to fit circles based on multiple origins provided from a file.
- 1.12.55The system shall correctly rebuild a HEX27 mesh as HEX20 when moving one boundary to fit a circle and when the alignment axis is not the default (z).
- 1.12.56The system shall generate a mesh for a sphere.
- 1.12.57The system shall move the nodes on a sphere boundary to the curved surface
- 1.12.58The system shall move the nodes on a sphere boundary to the curved surface when the sphere is offset from the origin
- cardinal: Multiple Nek Apps
- 1.13.1 Cardinal shall be able to run NekRS input files nested within subdirectories in the same fashion as a standalone NekRS case
- 1.13.2The system shall error if an invalid directory path is provided for the case
- 1.13.3The system shall error if trying to write output files for a Nek input without sibling apps
- 1.13.4The system shall be able to run multiple Nek simulations translated throughout a master app's domain. This test compares against a MOOSE standalone case, and gives less than 0.3% difference in various temperature metrics.
- 1.13.5The correct output files shall be written when writing separate output files for repeated Nek simulation instances.
- cardinal: Nek Errors
- 1.14.1The system shall error if Cardinal has displacements associated with NekRSMesh, but there is no mesh solver.
- 1.14.2The system shall error if Cardinal is using NekRSStandaloneProblem with NekRS's moving mesh problems.
- 1.14.3 The system shall error if the Nek .par file has a mesh solver but the nekRS .par file has no moving mesh (codedFixedValue) boundary in the Mesh block.
- 1.14.4The system shall error if Cardinal is using the NekRS mesh blending solver without indicating the moving boundary of interest
- 1.14.5The system shall error if NekRSMesh is not paired with displacements for moving mesh problems.
- 1.14.6The system shall error if Cardinal has solver=user in the par file's MESH block, but there is no volume mesh mirror.
- 1.14.7 The system shall error if the nekRS .par file has a moving mesh solver but the problem type in Cardinal is NekRSSeparateDomainProblem.
- 1.14.8The system shall not error if there is no scratch space conflict for standalone Nek cases.
- 1.14.9The system shall error if the user tries to allocate scratch from Cardinal for standalone NekRS cases, but the Nek case files are also separately trying to allocate scratch.
- 1.14.10The system shall error if the user enters too small a scratch space allocation; NekRSProblem always requires at least 1 slot
- 1.14.11The system shall error if the user enters too small a scratch space allocation
- 1.14.12The system shall error if the user enters too small a scratch space allocation
- 1.14.13The system shall error if the user enters too small a scratch space allocation
- 1.14.14The system shall error if the user enters too small a scratch space allocation
- 1.14.15The system shall error if the user enters too small a scratch space allocation
- 1.14.16The system shall error if the user enters too small a scratch space allocation
- 1.14.17The system shall error if the user enters too small a scratch space allocation; an initial shift cannot exceed the number of allocated slots
- 1.14.18The system shall error if the user enters too small a scratch space allocation, when using a front shift for the slots
- 1.14.19MOOSE shall throw an error if an invalid boundary is specified for the construction of nekRS's mesh as a MooseMesh.
- 1.14.20The system shall throw an error if trying to use temperature as a field for cases that do not have a temperature variable
- 1.14.21The system shall throw an error if trying to use scalar01 as a field for problems that don't have a scalar01 variable.
- 1.14.22The system shall throw an error if trying to use scalar02 as a field for problems that don't have a scalar02 variable.
- 1.14.23The system shall throw an error if trying to use scalar03 as a field for problems that don't have a scalar03 variable.
- 1.14.24The system shall throw an error if trying to use usrwrk00 as a field for problems that don't have sufficient usrwrk slots.
- 1.14.25The system shall throw an error if trying to use usrwrk01 as a field for problems that don't have sufficient usrwrk slots.
- 1.14.26The system shall throw an error if trying to use usrwrk02 as a field for problems that don't have sufficient usrwrk slots.
- 1.14.27The system shall correctly allocate scratch by accessing only quantities on the flow mesh if there is no temperature variable
- 1.14.28The system shall error if the user manually specifies a duplicate name for an output field.
- 1.14.29The system shall error if the user manually specifies a duplicate name for an output field.
- 1.14.30The system shall error if NekRSProblem is not paired with the correct executioner.
- 1.14.31The system shall error if the Dimensionalize action is not paired with the correct problem.
- 1.14.32The system shall error if NekRSProblem is not paired with the correct mesh type.
- 1.14.33The system shall error if a Nek object is not paired with the correct problem.
- 1.14.34 The system shall error if a NekRSMesh is used without a corresponding Nek-wrapped problem.
- 1.14.35The system shall error if NekRSProblem is not paired with the correct time stepper.
- 1.14.36MOOSE shall throw an error if there is no receiving heat flux boundary condition on the nekRS boundaries that are coupled to MOOSE.
- 1.14.37MOOSE shall throw an error if an invalid boundary is specified for the construction of nekRS's mesh as a MooseMesh.
- 1.14.38MOOSE shall throw an error if 'boundary' is empty for separate domain coupling, because the correct internal arrays would not be initialized
- 1.14.39The system shall error if there is a mismatch between the scaling of the mesh and NekRS problem.
- 1.14.40When using the minimized transfers setting, the default value for the postprocessor in the master application must not be zero.
- 1.14.41The system shall error if the same MPI communicator is used to set up more than one Nek case.
- 1.14.42The system shall throw an error if there is no heat source kernel when using volume coupling
- 1.14.43MOOSE shall throw an error if there is no temperature passive scalar variable initialized in nekRS.
- 1.14.44MOOSE shall throw an error if the user attempts to allocate the scratch space arrays in NekRS, since they are automatically allocated by Cardinal.
- cardinal: Nek File Output
- 1.15.1The correct output file writing sequence shall occur when relying on nrs->isOutputStep
- 1.15.2The correct output file writing sequence shall occur based on .par settings when NekRS is the master application and when an uneven time step division occurs.
- 1.15.3The correct output file writing sequence shall occur based on .par settings when NekRS is the master application and when an even time step division occurs.
- 1.15.4The correct output file writing sequence shall occur based on master executioner settings when NekRS is the sub application and when an uneven time step division occurs.
- 1.15.5The correct output file writing sequence shall occur based on master executioner settings when NekRS is the sub application and when the time steps are evenly divisible.
- 1.15.6The system shall allow the user to write the user scratch space slots to field files.
- cardinal: Nek Mesh
- 1.16.1The system shall be able to generate an exact first-order boundary mesh mirror.
- 1.16.2The system shall be able to generate an exact first-order volume mesh mirror.
- 1.16.3The system shall error if trying to build an exact mesh mirror that is second order.
- 1.16.4NekRSMesh shall construct a first-order surface mesh from a list of boundary IDs. The ordering for the nodes shall be based on the libMesh ordering.
- 1.16.5NekRSMesh shall construct a first-order volume mesh. The ordering for the nodes shall be based on the libMesh ordering.
- 1.16.6NekRSMesh shall construct a second-order surface mesh from a list of boundary IDs. The ordering of the nodes for each element shall also be based on the libMesh ordering.
- 1.16.7NekRSMesh shall construct a second-order volume mesh. The ordering of the nodes for each element shall also be based on the libMesh ordering.
- 1.16.8NekRSMesh shall correctly assign sideset IDs based on the nekRS boundary IDs. This is verified here by performing area integrals on sidesets defined in MOOSE, which exactly match area integrals performed internally in nekRS.
- 1.16.9NekRSMesh shall correctly assign sideset IDs based on the nekRS boundary IDs when using an exact mesh mirror. The areas are matched to the 'cube_sidesets' test.
- 1.16.10NekRSMesh shall correctly assign sideset IDs based on the nekRS boundary IDs. This is verified here by performing area integrals on sidesets defined in MOOSE, which exactly match area integrals performed internally in nekRS.
- 1.16.11NekRSMesh shall correctly assign sideset IDs based on the nekRS boundary IDs when using an exact mesh mirror. The areas are matched to the 'pyramid_sidesets' test.
- cardinal: Nek Output
- 1.17.1Nek-wrapped MOOSE cases shall be able to output the passive scalars from the Nek solution onto the mesh mirror.
- 1.17.2The system shall error if trying to write a usrwrk slot greater than the total number of allocated slots
- 1.17.3The system shall error if there is a mismatch between parameter lengths for writing usrwrk field files
- cardinal: Nek Separatedomain
- 1.18.1The system shall throw an error if trying to assign inlet_boundary to multiple boundary IDs.
- 1.18.2The system shall throw an error if trying to assign inlet_boundary to an ID not contained in NekRSMesh's boundary input.
- 1.18.3The system shall throw an error if trying to assign inlet_boundary to an ID not contained in NekRS boundary IDs.
- 1.18.4The system shall throw an error if trying to assign outlet_boundary to multiple boundary IDs.
- 1.18.5The system shall throw an error if trying to assign outlet_boundary to an ID not contained in NekRS boundary IDs.
- 1.18.6Cardinal shall be able to transfer inlet temperature and velocity to the inlet_boundary of NekRS. We check this by applying those boundary conditions in NekRS, and looking at postprocessors on those boundaries to match the values we sent in.
- 1.18.7Cardinal shall be able to transfer inlet temperature and velocity to the inlet_boundary of NekRS. We check this by applying those boundary conditions in NekRS, and looking at postprocessors on those boundaries to match the values we sent in. Cardinal shall also be able to extract outlet temperature and velocity from outlet_boundary. We check this by fetching these values using postprocessors.
- 1.18.8Cardinal shall be able to extract outlet temperature and velocity from outlet_boundary of NekRS. We check this by fetching those values using postprocessors.
- cardinal: Nek Standalone
- 1.19.1The system shall support adaptive time stepping in NekRS when running as the main application.
- 1.19.2The system shall support adaptive time stepping in NekRS when running as the main application and with a non-dimensional formulation.
- 1.19.3Cardinal shall be able to run the channel NekRS example with a thin wrapper. We check postprocessor differences between equivalent operations taken directly on the NekRS solution arrays (for instance, a NekVolumeAverage) and directly on the extracted solution (for instance, an ElementAverageValue). We require that all the postprocessors match between these two renderings of the solution (on the GLL points versus on the mesh mirror). This verifies correct extraction of the NekRS solution with the 'output' parameter feature.
- 1.19.4Cardinal shall be able to run the conj_ht NekRS example with a thin wrapper while using postprocessors acting on either the fluid mesh or fluid+solid mesh.
- 1.19.5The system shall throw an error if trying to act on only the NekRS solid mesh for side postprocessors.
- 1.19.6Cardinal shall be able to run the ethier NekRS example with a thin wrapper. We add postprocessors to let us compare min/max values printed to the screen by NekRS.
- 1.19.7Postprocessing shall be possible directly on restart files without running a simulation. Here, we compare several postprocessors from a case restarted from a field file against the identical case where variables are instead initialized in the .udf from functions.
- 1.19.8Cardinal shall be able to run the ktauChannel NekRS example with a thin wrapper when using a volume mesh mirror.
- 1.19.9Cardinal shall be able to run the lowMach NekRS example with a thin wrapper when using a volume mesh mirror. We add postprocessors to let us compare min/max values printed to the screen by NekRS. We also check postprocessor differences between equivalent operations taken directly on the NekRS solution arrays (for instance, a NekVolumeAverage) and directly on the extracted solution (for instance, an ElementAverageValue). We require that all the postprocessors match between these two renderings of the solution (on the GLL points versus on the mesh mirror). This verifies correct extraction of the NekRS solution with the 'output' parameter feature.
- 1.19.10Cardinal shall be able to run the channel NekRS example with a thin wrapper when using a boundary mesh mirror. We add postprocessors to let us compare min/max values printed to the screen by NekRS. We also check postprocessor differences between equivalent operations taken directly on the NekRS solution arrays (for instance, a NekSideAverage) and directly on the extracted solution (for instance, an ElementAverageValue). We require that all the postprocessors match between these two renderings of the solution (on the GLL points versus on the mesh mirror). This verifies correct extraction of the NekRS solution with the 'output' parameter feature.
- 1.19.11The system shall properly modify both the NekRS start time and the time recognized by the MOOSE wrapping.
- 1.19.12The system shall be able to force NekRS to start on a MOOSE-specified start time. This is checked by looking for the existence of an output file that NekRS only writes at t = 1.0005, which is only reached if MOOSE properly sets a custom start time.
- cardinal: Nek Stochastic
- 1.20.1The system shall precompile a NekRS case.
- 1.20.2 The system shall allow stochastic values to be sent from MOOSE to NekRS. This example sends time-dependent random values from MOOSE to NekRS. The values of the random variables are used to apply a Dirichlet boundary condition on temperature, which we confirm by looking at the value of temperature on the boundary using postprocessors.
- 1.20.3The system shall error if trying to write stochastic input into a scratch space slot that has not been allocated
- 1.20.4The system shall error if trying to write stochastic input into a scratch space slot that is needed for other physics coupling purposes.
- 1.20.5The system shall error if the scratch space is not allocated via Cardinal for stochastic cases for standalone Nek runs. We only need to test this for the standalone case because the other two coupling classes (NekRSProblem, NekRSSeparateDomainProblem) automatically require scratch allocated from Cardinal.
- 1.20.6The system shall error if there is a gap between the slots used for coupling (0 to n) and the slots needed for NekScalarValue, because this would mess up the host to device data transfer.
- 1.20.7The system shall error if there is a gap between the slots used for NekScalarValues, because this would mess up the host to device data transfer.
- 1.20.8The system shall clear the cache before attempting the test with another set of ranks.
- 1.20.9 The system shall allow stochastic values to be sent from MOOSE to NekRS. This example sends 3 values to 3 unique NekRS solves, without any restart or overlap of MPI communicators. We check that the values are properly received in the scratch space by using that value to set a dummy scalar01. We purposefully put this test in its own directory to prove that there are no requirements on precompilation.
- 1.20.10 The system shall clear the cache before attempting the test with another set of ranks.
- 1.20.11 The system shall allow stochastic values to be sent from MOOSE to NekRS. This example sends 3 values to 3 unique NekRS solves, without any restart or overlap of MPI communicators. We check that the values are properly received in the scratch space by using that value to set a dummy scalar01. We purposefully put this test in its own directory to prove that there are no requirements on precompilation.
- 1.20.12 The system shall clear the cache before attempting the test with another set of ranks.
- 1.20.13 The system shall allow stochastic values to be sent from MOOSE to NekRS. This example sends 3 values to 3 unique NekRS solves, without any restart or overlap of MPI communicators. We check that the values are properly received in the scratch space by using that value to set a dummy scalar01. We purposefully put this test in its own directory to prove that there are no requirements on precompilation.
- 1.20.14The system shall clear the cache before attempting the test with another set of ranks.
- 1.20.15The system shall allow an arbitrary offset at the start of the scratch space. We check this by using the first slot in the scratch space to set the value of scalar01, but use the subsequent slots to obtain data from MOOSE. We show that the scalar01 is unaffected by the Cardinal copy into the other part of the scratch space.
- 1.20.16 The system shall allow stochastic values to be sent from MOOSE to NekRS. This example sends time-dependent random values from MOOSE to NekRS, where time synchronization is driven by the main app. The values of the random variables are used to fill SCALAR02 with a constant value, which we then measure by outputting the scalar and applying postprocessors to it. This confirms that the random data we send to NekRS does correctly make it to device.
- 1.20.17 The system shall allow stochastic values to be sent from MOOSE to NekRS and write unique NekRS field files for each. By limiting this test to fewer MPI ranks than Apps, we also check that we still get the correct naming scheme.
- 1.20.18 The system shall launch multiple independent Nek solves (with multiple separate output files) when running in stochastic mode. We check this by loading the field files created by the driver_multi_fld test into new NekRS runs (the read0, read1, and read2 tests) as initial conditions and checking that the values of the loaded fields match the stochastic values sent there.
- 1.20.19 The system shall launch multiple independent Nek solves (with multiple separate output files) when running in stochastic mode. We check this by loading the field files created by the driver_multi_fld test into new NekRS runs (the read0, read1, and read2 tests) as initial conditions and checking that the values of the loaded fields match the stochastic values sent there.
- 1.20.20 The system shall launch multiple independent Nek solves (with multiple separate output files) when running in stochastic mode. We check this by loading the field files created by the driver_multi_fld test into new NekRS runs (the read0, read1, and read2 tests) as initial conditions and checking that the values of the loaded fields match the stochastic values sent there.
- cardinal: Nek Temp
- 1.21.1 The nekRS temperature solution shall be accurately reconstructed on the nekRSMesh with an exact surface transfer. By setting an initial condition for temperature on the nekRS side and then setting 'solver = none', we show that the max/min error in that reconstructed temperature compared to a MOOSE function of the same form is O(1e-16).
- 1.21.2 The nekRS temperature solution shall be accurately reconstructed on the nekRSMesh with an exact volume transfer. By setting an initial condition for temperature on the nekRS side and then setting 'solver = none', we show that the max/min error in that reconstructed temperature compared to a MOOSE function of the same form is O(1e-16).
- 1.21.3 The nekRS temperature solution shall be accurately reconstructed on the nekRSMesh with a first-order surface transfer. By setting an initial condition for temperature on the nekRS side and then setting 'solver = none', we show that the max/min error in that reconstructed temperature compared to a MOOSE function of the same form is O(1e-16).
- 1.21.4 The nekRS temperature solution shall be accurately reconstructed on the nekRSMesh with a first-order volume transfer. By setting an initial condition for temperature on the nekRS side and then setting 'solver = none', we show that the max/min error in that reconstructed temperature compared to a MOOSE function of the same form is O(1e-16).
- 1.21.5 The nekRS temperature solution shall be accurately reconstructed on the nekRSMesh with a second-order surface transfer. By setting an initial condition for temperature on the nekRS side and then setting 'solver = none', we show that the max/min error in that reconstructed temperature compared to a MOOSE function of the same form is O(1e-16).
- 1.21.6 The nekRS temperature solution shall be accurately reconstructed on the nekRSMesh with a second-order volume transfer. By setting an initial condition for temperature on the nekRS side and then setting 'solver = none', we show that the max/min error in that reconstructed temperature compared to a MOOSE function of the same form is O(1e-16).
- cardinal: Nek Warnings
- 1.22.1MOOSE shall throw a warning if there is no temperature passive scalar solve in nekRS.
- 1.22.2The system shall print a warning if both volume and boundary mesh mirror information is set on NekRSStandaloneProblem, since the combined specification of these quantities for NekRSMesh is exclusively used for CHT coupling + volume coupling.
- cardinal: Neutronics
- 1.23.1The system shall allow problems which contain adaptivity on the mesh mirror for cell tallies.
- 1.23.2The system shall allow problems which contain adaptivity on the mesh mirror for mesh tallies.
- 1.23.3The system shall error if adaptivity is active and tallying on a mesh template instead of the mesh block.
- 1.23.4The system shall error if adaptivity is active and a relaxation scheme is requested.
- 1.23.5The system shall skip running OpenMC when the mesh is unchanged by adaptivity.
- 1.23.6 The system shall run OpenMC on the first Picard iteration regardless of the mesh being previously unchanged by adaptivity. This test relies on noise in the solution; if OpenMC runs more than once per Picard iteration, the PRNG seed changes and so the tally results will be different.
- 1.23.7The system shall give identical Monte Carlo solution (file mesh tallies and k) when skinning as compared to both (i) a CSG-equivalent version of the geometry and (ii) the same input file run with the skinner disabled. The CSG file used for comparison is in the csg_step_1 directory.
- 1.23.8The system shall give identical Monte Carlo solution (file mesh tallies and k) when skinning as compared to a CSG-equivalent version of the geometry, which is in the csg_step_2 directory.
- 1.23.9The system shall give identical Monte Carlo solution (file mesh tallies and k) when skinning with a single density bin when compared to a case without any density skinning at all.
- 1.23.10The system shall scale a mesh by multiplying by 100.
- 1.23.11The system shall give identical results when scaling the Mesh by an arbitrary multiplier. The gold file was compared against a case with no scaling (scaling = 1) to get identical results.
- 1.23.12The system shall warn the user when there is a mismatch between the mesh mirror and the initial OpenMC DAGMC geometry.
- 1.23.13The system shall give identical Monte Carlo solution (file mesh tallies and k) when skinning by density as compared to a CSG-equivalent version of the geometry, which is in the csg_step_2 directory.
- 1.23.14The system shall give identical Monte Carlo solution when skinning as compared to a CSG-equivalent version of the geometry, which is in the csg_step_2 directory. For this case, a bin is split across disjoint elements.
- 1.23.15The system shall error if attempting to apply density skinning without any fluid blocks ready to receive variable density.
- 1.23.16The system shall error if there is an obvious mismatch between the Mesh and DAGMC model for the case where the number of DAGMC materials which map to each Mesh subdomain do not match.
- 1.23.17The system shall give identical Monte Carlo solution (file mesh tallies and k) when skinning as compared to both (i) a CSG-equivalent version of the geometry and (ii) the same input file run with the skinner disabled. The CSG file used for comparison is in the csg_step_1 directory.
- 1.23.18The system shall give identical Monte Carlo solution (file mesh tallies and k) when skinning as compared to a CSG-equivalent version of the geometry, which is in the csg_step_2 directory.
- 1.23.19The system shall give identical Monte Carlo solution (direct mesh tallies and k) when skinning as compared to a CSG-equivalent version of the geometry, which is in the csg_step_2 directory.
- 1.23.20The system shall give identical Monte Carlo solution (file mesh tallies and k) when skinning as compared to a CSG-equivalent version of the geometry, which is in the csg_step_2 directory. For this case, a bin is split across disjoint elements.
- 1.23.21The system shall scale a mesh by multiplying by 100.
- 1.23.22The system shall give identical results when scaling the Mesh by an arbitrary multiplier. The gold file was compared against a case with no scaling (scaling = 1) to get identical results.
- 1.23.23The system shall error if the skinner user object is not the correct type
- 1.23.24The system shall warn if the graveyard is missing for OpenMC skinned models
- 1.23.25The system shall error if applying a symmetry mapping to an OpenMC model which must already exactly match the mesh.
- 1.23.26The system shall error if loading properties from HDF5 for skinned problems
- 1.23.27The system shall error if the cell containing the DAGMC universe is not contained in the root universe. If so, we cannot guarantee that the DAGMC geometry is not replicated and the skinner may produce an incorrect skin.
- 1.23.28The system shall error if the DAGMC universe is used as a lattice element. If so, the DAGMC geometry may be replicated and so the skinner may produce an incorrect skin.
- 1.23.29The system shall error if the DAGMC universe is used as a lattice element. If so, the DAGMC geometry may be replicated and so the skinner may produce an incorrect skin.
- 1.23.30The system shall error if the user attempts to map both CSG and DAGMC geometry to the MOOSE mesh.
- 1.23.31The system shall error if the DAGMC universe is used by multiple cells. If so, the DAGMC geometry is replicated and so the skinner will produce an incorrect skin.
- 1.23.32The system shall error if there are more than one DAGMC universe. If so, the universe to skin cannot be automatically determined.
- 1.23.33The system shall allow arbitrary combination of density-only, temperature-only, both, or no coupling. The file was compared against a standalone OpenMC run and gave identical values for k.
- 1.23.34The system shall give a warning when T+rho feedback is specified, but not all specified elements mapped into OpenMC
- 1.23.35The system shall give a warning when T feedback is specified, but not all specified elements mapped into OpenMC
- 1.23.36The system shall give a warning when rho feedback is specified, but not all specified elements mapped into OpenMC
- 1.23.37The system shall allow coupling of an OpenMC model with a length scale of centimeters to an application with a length scale of meters. This is verified by ensuring exact agreement between two versions of the same problem - one with a length scale of centimeters (openmc_master_cm.i), and the other with a length scale of meters (openmc_master.i).
- 1.23.38The system shall correctly re-initialize the same mapping when the MooseMesh does not change during a simulation.
- 1.23.39The system shall allow OpenMC tallies to be extracted on cell blocks which do not correspond to any multiphysics feedback settings. The tallies were compared against separate standalone OpenMC runs to confirm that the presence/absence of feedback does not change their values.
- 1.23.40The system shall allow OpenMC tallies to be extracted on cell blocks which also have temperature feedback. The tallies were compared against separate standalone OpenMC runs.
- 1.23.41The system shall allow OpenMC tallies to be extracted on cell blocks which also have temperature/density feedback. The tallies were compared against separate standalone OpenMC runs.
- 1.23.42The system shall allow OpenMC to be run within Cardinal without any data transfers in or out of the code.
- 1.23.43The system shall allow OpenMC tallies to be extracted on cell blocks which also have temperature feedback. The tallies were compared against separate standalone OpenMC runs.
- 1.23.44The system shall allow OpenMC tallies to be extracted on cell blocks which also have temperature/density feedback. The tallies were compared against separate standalone OpenMC runs.
- 1.23.45Temperatures, densities, and a heat source shall be coupled between OpenMC and MOOSE for a solid pincell model when the model is set up with distributed cells. The solution for temperature, density, and heat source shows exact agreement with a case built without distributed cells in ../single_level.
- 1.23.46Temperatures, densities, and a heat source shall be coupled between OpenMC and MOOSE for a solid pincell model when the model is set up with distributed cells, but with material feedback applied by material. The gold file is identical to that of the cell-based feedback case because we still have one unique material per cell.
- 1.23.47The system shall correctly re-initialize the same mapping when the MooseMesh does not change during a simulation.
- 1.23.48The system shall allow the user to specify a 'heating' score in the OpenMC tally.
- 1.23.49The system shall allow the user to specify a 'heating-local' score in the OpenMC tally.
- 1.23.50The system shall allow the user to specify a 'damage-energy' score in the OpenMC tally.
- 1.23.51The system shall allow the user to specify a 'fission-q-prompt' score in the OpenMC tally.
- 1.23.52The system shall allow the user to specify a 'fission-q-recoverable' score in the OpenMC tally.
- 1.23.53The system shall error if the user adds a duplicate variable with a name Cardinal reserves for OpenMC coupling.
- 1.23.54The system shall error if trying to set density in a cell filled by a universe or lattice.
- 1.23.55The system shall allow density feedback in OpenMC models by material, as opposed to individual cell mappings. This model was compared against an OpenMC standalone case where the density of the water was manually set to the average value applied from MOOSE.
- 1.23.56The system shall allow reading density from user-defined names, with one density variable per cell. Reference values were obtained by running OpenMC standalone.
- 1.23.57The system shall allow reading density from user-defined names, with more than one density variable per cell
- 1.23.58The system shall allow lumping of subdomains together, equivalent to explicitly listing density correspondence to blocks
- 1.23.59The system shall error if the blocks and variables are not the same length
- 1.23.60The system shall error if a sub-vector is empty
- 1.23.61The system shall error if an entry in density_variables is not of unity length
- 1.23.62The system shall error if trying to collate multiple density variables onto the same block due to undefined behavior
- 1.23.63The system shall error if the blocks and variables are not the same length
- 1.23.64The system shall error if a sub-vector is empty
- 1.23.65The system shall error if an entry in temperature_variables is not of unity length
- 1.23.66The system shall error if trying to collate multiple temperature variables onto the same block due to undefined behavior
- 1.23.67The system shall allow reading temperature from user-defined names, with one temperature variable per cell
- 1.23.68The system shall allow reading temperature from user-defined names, with more than one temperature variable per cell
- 1.23.69The system shall allow lumping of subdomains together, equivalent to explicitly listing temperature correspondence to blocks
- 1.23.70The OpenMC wrapping shall support using the locally-lowest particle level in geometry regions where the cell_level does not exist. This input sets up a pincell with lattices where all cells of interest are on level 1. Surrounding this pincell is an exterior cell on level 0. Cell IDs, instances, and temperatures all correctly reflect using the lowest available level in the exterior region.
- 1.23.71Temperatures, densities, and a heat source shall be coupled between OpenMC and MOOSE for a solid pincell model.
- 1.23.72The system shall correctly re-initialize the same mapping when the MooseMesh does not change during a simulation.
- 1.23.73The system shall exactly predict eigenvalue when sending temperatures which map 1:1 between subdomain and cell to OpenMC. We compare against a standalone OpenMC case to exactly match k.
- 1.23.74The system shall exactly predict eigenvalue when sending temperatures which map from one subdomain to multiple cells. We compare against a standalone OpenMC case to exactly match k.
- 1.23.75The system shall exactly predict eigenvalue when sending temperatures which map from multiple subdomains to one cell. We compare against a standalone OpenMC case to exactly match k.
- 1.23.76The system shall exactly predict eigenvalue when sending temperatures which map from multiple subdomains to one cell, when different temperature variables are used on each subdomain. This gold file exactly matches that used in the 'two_to_one' test.
- 1.23.77The system shall exactly predict eigenvalue when sending temperatures which map from multiple subdomains to one cell, when using the default name of 'temp' for the temperature field.
- 1.23.78The system shall allow any cell which maps to a particular subdomain to be set with an identical cell fill. Here, the gold files were created using an input which does not use this feature (which we also compare to a standalone OpenMC run).
- 1.23.79The system shall warn the user if the identical cell fill is unused because all mapped cells are simple, material-fills.
- 1.23.80The system shall error if trying to utilize identical cell fills but the filling cell IDs are not identical among the cells
- 1.23.81The system shall error if trying to utilize identical cell fills for a non-solid block
- 1.23.82The system shall error if inconsistent settings are applied for the identical cell mapping
- 1.23.83The system shall error if the single increment applied to the tally cell's contained material instances fails, such as when a TRISO universe is not being tallied.
- 1.23.84The system shall correctly apply unique temperature feedback to an identical universe filled into multiple (non-lattice) cells. The reference solution from an identical OpenMC-only case is in the standalone directory.
- 1.23.85The system shall error if Cardinal tries to change the temperature of a given OpenMC cell more than once, since this indicates a problem with model setup.
- 1.23.86OpenMC postprocessors shall evaluate to -1 for unmapped MOOSE elements.
- 1.23.87The system shall be capable of adding an AzimuthalAngleFilter to a CellTally with bins that are provided. This test also ensures the binned fluxes sum to the total flux through the use of global normalization.
- 1.23.88The system shall be capable of adding an AzimuthalAngleFilter to a CellTally with equally spaced bins. This test also ensures the binned fluxes sum to the total flux through the use of global normalization.
- 1.23.89The system shall be capable of adding an AzimuthalAngleFilter to a MeshTally.
- 1.23.90The system shall correctly compute azimuthal binned fluxes with mesh tallies such that the sum of the flux over each bin equals the total flux. The gold file was generated with an input file that scored the flux without an AzimuthalAngleFilter.
- 1.23.91The system shall error if 'azimuthal_angle_boundaries' doesn't contain enough boundaries to form bins.
- 1.23.92The system shall automatically sort bins to ensure they're monotonically increasing.
- 1.23.93The system shall error if neither 'num_equal_divisions' nor 'azimuthal_angle_boundaries' is provided.
- 1.23.94The system shall error if both 'num_equal_divisions' and 'azimuthal_angle_boundaries' are provided.
- 1.23.95The system shall be capable of adding an EnergyFilter (with energy boundaries provided) to a CellTally. This test also ensures multi-group fluxes sum to the total flux for cell tallies.
- 1.23.96The system shall be capable of adding an EnergyFilter (with a group structure) to a CellTally. This test also ensures multi-group fluxes sum to the total flux for cell tallies.
- 1.23.97The system shall be capable of adding an EnergyFilter to a MeshTally.
- 1.23.98The system shall correctly compute multi-group fluxes with mesh tallies such that the sum of the flux over each group equals the total flux. The gold file was generated with an input file that scored the flux without an EnergyFilter.
- 1.23.99The system shall error if there aren't enough energy boundaries to form bins for an EnergyFilter.
- 1.23.100The system shall automatically sort bins to ensure they're monotonically increasing.
- 1.23.101The system shall error if no energy bins are provided.
- 1.23.102The system shall error if no energy bins are provided.
- 1.23.103The system shall be capable of adding a PolarAngleFilter to a CellTally with bins that are provided. This test also ensures the binned fluxes sum to the total flux through the use of global normalization.
- 1.23.104The system shall be capable of adding a PolarAngleFilter to a CellTally with equally spaced bins. This test also ensures the binned fluxes sum to the total flux through the use of global normalization.
- 1.23.105The system shall be capable of adding a PolarAngleFilter to a MeshTally.
- 1.23.106The system shall correctly compute polar binned fluxes with mesh tallies such that the sum of the flux over each bin equals the total flux. The gold file was generated with an input file that scored the flux without a PolarAngleFilter.
- 1.23.107The system shall error if 'polar_angle_boundaries' doesn't contain enough boundaries to form bins.
- 1.23.108The system shall automatically sort bins to ensure they're monotonically increasing.
- 1.23.109The system shall error if neither 'num_equal_divisions' nor 'polar_angle_boundaries' is provided.
- 1.23.110The system shall error if both 'num_equal_divisions' and 'polar_angle_boundaries' are provided.
- 1.23.111The system shall support multiple filters within a tally (an illustrative tally and filter input sketch follows this requirements list).
- 1.23.112The system shall support the automatic addition of the requested source rate normalization score to a single tally when using filters.
- 1.23.113The system shall error if a non-existent filter is requested by a tally.
- 1.23.114The system shall error if a filter is added when an OpenMCCellAverageProblem is not present.
- 1.23.115The system shall allow cell tallies to access filters added in the OpenMC tallies XML file.
- 1.23.116The system shall allow mesh tallies to access filters added in the OpenMC tallies XML file.
- 1.23.117The system shall error if a filter with the requested ID has not been added by the tallies XML file.
- 1.23.118The system shall error if the user selects a spatial filter.
- 1.23.119The system shall warn the user if they have selected a functional expansion filter and set allow_expansion_filters = true.
- 1.23.120The system shall error if the user selected a functional expansion filter without setting allow_expansion_filters = true.
- 1.23.121The system shall correctly normalize tallies from a fixed source simulation when there is perfect overlap between the OpenMC model and the MOOSE mesh.
- 1.23.122The system shall notify the user that the settings related to normalizing by global or local tallies are inconsequential for fixed source mode.
- 1.23.123The system shall error if the total tally sum does not match the system-wide value for fixed source simulations.
- 1.23.124The system shall correctly normalize tallies from a fixed source simulation when there is only partial overlap between the OpenMC model and MOOSE domain.
- 1.23.125The system shall correctly normalize flux tallies for fixed source simulations.
- 1.23.126The system shall correctly normalize flux tallies for eigenvalue simulations.
- 1.23.127The system shall correctly normalize flux tallies for eigenvalue simulations when listed in an arbitrary order.
- 1.23.128The system shall correctly normalize flux tallies for eigenvalue simulations when listed in an arbitrary order and with user-defined names.
- 1.23.129The system shall correctly normalize flux tallies for eigenvalue simulations when the source rate normalization tally is not already added.
- 1.23.130The system shall error if the user tries to name only a partial set of the total tally scores.
- 1.23.131The system shall error if the user omits the required normalization tally for non-heating scores in eigenvalue mode
- 1.23.132The heat source shall be correctly mapped if the solid cell level is not the highest level.
- 1.23.133The heat source shall be extracted and normalized correctly from OpenMC for perfect model overlap with fissile fluid and solid phases.
- 1.23.134The power shall be provided by a postprocessor
- 1.23.135The heat source shall be extracted and normalized correctly from OpenMC for perfect model overlap with fissile fluid and solid phases, but heat source coupling only performed for the solid phase.
- 1.23.136The system shall allow the user to specify a custom tally variable name. This test is identical to the overlap_solid test, but with a different name for the auxiliary variable.
- 1.23.137The heat source shall be extracted and normalized correctly from OpenMC for perfect model overlap with fissile fluid and solid phases, but heat source coupling only performed for the fluid phase.
- 1.23.138The heat source shall be extracted and normalized correctly from OpenMC for partial overlap of the OpenMC and MOOSE meshes, where all MOOSE elements map to OpenMC cells, but some OpenMC cells are not mapped.
- 1.23.139A warning shall be printed if any portion of the MOOSE solid blocks did not get mapped to OpenMC cells.
- 1.23.140The heat source shall be extracted and normalized correctly from OpenMC for partial overlap of the OpenMC and MOOSE meshes, where all OpenMC cells map to MOOSE elements, but some MOOSE elements are not mapped.
- 1.23.141For single-level geometries, tallies shall be added to all MOOSE blocks if tally blocks are not specified. The gold file for this test is simply a copy of overlap_all_out.e.
- 1.23.142The mapped cell volumes shall be correctly computed by the wrapping.
- 1.23.143The system shall be able to write multiple different tally scores with normalization by a global tally.
- 1.23.144The system shall be able to write multiple different tally scores with normalization by a local tally.
- 1.23.145The system shall be able to write multiple different tally scores with normalization by a local tally with the spatial separate assumption.
- 1.23.146The system shall be able to write multiple different tally scores with normalization by a global tally.
- 1.23.147The system shall be able to write multiple different tally scores with normalization by a global tally for mesh tallies.
- 1.23.148The system shall be able to write multiple different tally scores with normalization by a local tally for mesh tallies.
- 1.23.149The system shall allow for the calculation of the optical depth using the min/max vertex separation or the cube root of the element volume.
- 1.23.150The system shall error if a non-reaction rate score is provided to ElementOpticalDepthIndicator.
- 1.23.151The system shall error if a reaction rate score is requested, but not available in a tally.
- 1.23.152The system shall error if the flux is not available in a tally.
- 1.23.153The system shall allow a mesh tally for coupling OpenMC, without any physics feedback.
- 1.23.154The heat source shall be tallied on an unstructured mesh and normalized against a local tally when a single mesh is used.
- 1.23.155The system shall be able to tally directly on the MOOSE mesh when no mesh_template is provided in the input file, instead of tallying on a mesh file. This test is nearly identical to one_mesh.
- 1.23.156The system shall error if attempting to directly tally on a MOOSE mesh that is distributed, since all meshes are always replicated in OpenMC.
- 1.23.157The system shall error if attempting to directly tally on a MOOSE mesh that has a scaling not equal to 1.0.
- 1.23.158The heat source shall be tallied on an unstructured mesh and normalized against a global tally when a single mesh is used. This test was run with successively finer meshes (from 256 elements to 94k elements) to show that the power of the mesh tally approaches the value of a cell tally as the difference in volume decreases.
- 1.23.159Mesh tallies shall allow for block restrictions to be applied.
- 1.23.160The heat source shall be tallied on an unstructured mesh and normalized against a local tally when multiple identical meshes are used.
- 1.23.161The heat source shall be tallied on an unstructured mesh and normalized against a global tally when multiple identical meshes are used. This test was run with successively finer meshes (from 256 elements to 94k elements) to show that the power of the mesh tally approaches the value of a cell tally as the difference in volume decreases.
- 1.23.162The heat source shall be correctly projected onto a Mesh in units of meters when the tally mesh template is in units of centimeters.
- 1.23.163The heat source shall be correctly projected onto a Mesh in units of meters when the tally mesh template and translations are in units of centimeters. The output was compared against the multiple_meshes case, which used an input entirely specified in terms of centimeters.
- 1.23.164The fission tally standard deviation shall be output correctly for unstructured mesh tallies.
- 1.23.165Mesh tallies shall temporarily require that mesh renumbering be disabled until this capability is available.
- 1.23.166Mesh tallies shall error if the user attempts to apply a block restriction when using a mesh template.
- 1.23.167Mesh tallies shall error if the user attempts to apply a block restriction with no blocks.
- 1.23.168The number of particles shall optionally be set through the Cardinal input file. While the XML files set 100 particles, we change the number of particles to 200 in the Cardinal input file, and compare the eigenvalue against a standalone OpenMC run (with 'openmc --particles 200') to ensure correctness.
- 1.23.169The number of inactive batches shall optionally be set through the Cardinal input file. While the XML files set 10 inactive batches, we change the number of inactive batches to 20 in the Cardinal input file, and compare the eigenvalue against a standalone OpenMC run (with 20 inactive batches) to ensure correctness.
- 1.23.170The number of batches shall optionally be set through the Cardinal input file. While the XML files set 50 batches, we change the number of batches to 60 in the Cardinal input file, and compare the eigenvalue against a standalone OpenMC run (with 60 batches) to ensure correctness.
- 1.23.171The system shall allow skipping of statepoint output files
- 1.23.172The system shall allow for the output of the unrelaxed tally relative error for cell tallies.
- 1.23.173The system shall allow for the output of the unrelaxed tally relative error for mesh tallies.
- 1.23.174The system shall error if using incompatible tally estimator with a photon transport heating score.
- 1.23.175A unity relaxation factor shall be equivalent to an unrelaxed case with globally-normalized cell tallies on a model with perfect alignment between the OpenMC model and the mesh mirror.
- 1.23.176The wrapping shall apply constant relaxation for a case with globally-normalized cell tallies with perfect alignment between the OpenMC model and the mesh mirror. This test is verified by comparing the heat source computed via relaxation with the un-relaxed iterations from the openmc.i run (without any command line parameter settings).
- 1.23.177The wrapping shall apply Robbins-Monro relaxation for a case with globally-normalized cell tallies with perfect alignment between the OpenMC model and the mesh mirror. This test is verified by comparing the heat source computed via relaxation with the un-relaxed iterations from the openmc.i run (without any command line parameter settings).
- 1.23.178The wrapping shall apply Dufek-Gudowski relaxation for a case with globally-normalized cell tallies with perfect alignment between the OpenMC model and the mesh mirror. This test is verified by comparing the heat source computed via relaxation with the un-relaxed iterations from the same input file, but with the relaxation part commented out in the source code (so that we can compare directly against runs that only differ by changing the number of particles).
- 1.23.179A unity relaxation factor shall be equivalent to an unrelaxed case with locally-normalized cell tallies on a model with perfect alignment between the OpenMC model and the mesh mirror.
- 1.23.180The wrapping shall apply constant relaxation for a case with locally-normalized cell tallies with perfect alignment between the OpenMC model and the mesh mirror. This test is verified by comparing the heat source computed via relaxation with the un-relaxed iterations from the openmc.i case with normalize_by_global_tally=false.
- 1.23.181A unity relaxation factor shall be equivalent to an unrelaxed case with globally-normalized cell tallies on a model with imperfect alignment between the OpenMC model and the mesh mirror.
- 1.23.182The wrapping shall apply constant relaxation for a case with globally-normalized cell tallies with imperfect alignment between the OpenMC model and the mesh mirror. This test is verified by comparing the heat source computed via relaxation with the un-relaxed iterations from the openmc_nonaligned.i case (without additional command line parameters).
- 1.23.183A unity relaxation factor shall be equivalent to an unrelaxed case with locally-normalized cell tallies on a model with imperfect alignment between the OpenMC model and the mesh mirror.
- 1.23.184The wrapping shall apply constant relaxation for a case with locally-normalized cell tallies with imperfect alignment between the OpenMC model and the mesh mirror. This test is verified by comparing the heat source computed via relaxation with the un-relaxed iterations from the openmc_nonaligned.i case with normalize_by_global_tally=false.
- 1.23.185The system shall correctly re-initialize the same mapping when the MooseMesh does not change during a simulation.
- 1.23.186The wrapping shall allow output of the unrelaxed heat source for cell tallies
- 1.23.187The wrapping shall allow output of multiple tally scores, with relaxation applied independently to each score
- 1.23.188The system shall correctly error if trying to use relaxation with a time-varying mesh.
- 1.23.189A unity relaxation factor shall be equivalent to an unrelaxed case with globally-normalized mesh tallies.
- 1.23.190The wrapping shall apply constant relaxation for a case with globally-normalized mesh tallies. This test is verified by comparing the heat source computed via relaxation with the un-relaxed iterations from the openmc.i run (without any additional command line parameters).
- 1.23.191The system shall correctly re-initialize the same mapping when the MooseMesh does not change during a simulation.
- 1.23.192A unity relaxation factor shall be equivalent to an unrelaxed case with locally-normalized mesh tallies.
- 1.23.193The wrapping shall apply constant relaxation for a case with locally-normalized mesh tallies. This test is verified by comparing the heat source computed via relaxation with the un-relaxed iterations from the openmc.i run with normalize_by_global_tally=false.
- 1.23.194The wrapping shall allow output of the unrelaxed heat source for mesh tallies
- 1.23.195A coupled OpenMC-MOOSE problem with zero power set in OpenMC should give exactly the same results as a standalone MOOSE heat conduction simulation of the same problem with zero heat source. The gold file was created with the zero_power.i input file, which does not have OpenMC as a sub-app.
- 1.23.196The system shall error if the heating tallies are missing power from other parts of the problem.
- 1.23.197The correct mesh shall be created for the sources test.
- 1.23.198The correct source files shall be created when re-using the source between iterations
- 1.23.199The system shall create the mesh mirror for later steps of these tests
- 1.23.200The system shall correctly reflect points about a partial symmetry sector
- 1.23.201The system shall create the mesh mirror for later steps of these tests
- 1.23.202The system shall correctly reflect points about a symmetry plane
- 1.23.203The system shall error if the symmetry mapper is not of the correct type
- 1.23.204The system shall create the mesh mirror for later steps of these tests
- 1.23.205The system shall correctly reflect points about a symmetry plane
- 1.23.206The system shall allow tallying of absorption/fission/scattering/total reaction rates in fixed source mode.
- 1.23.207The system shall allow tallying of tritium production rates in fixed source mode.
- 1.23.208The system shall allow for the approximation of tally gradients using finite differences.
- 1.23.209The system shall error if the variable provided to FDTallyGradAux is not of type CONSTANT MONOMIAL_VEC.
- 1.23.210The system shall error if a score is requested, but not available in a tally.
- 1.23.211The system shall error if the external filter bin provided by the user is out of bounds for the filters applied to the given score.
- 1.23.212The system shall correctly label elements in blocks which don't contain temperature feedback, density feedback, or a cell tally as unmapped.
- 1.23.213The system shall correctly normalize local tallies with different estimators using multiple global tallies.
- 1.23.214The system shall correctly apply and normalize two different CellTally objects with different scores. The gold file was generated using an input that had a single CellTally with multiple scores.
- 1.23.215The system shall correctly apply and normalize two different MeshTally objects with different scores. The gold file was generated using an input that had a single MeshTally with multiple scores.
- 1.23.216The system shall correctly apply and normalize two different CellTally objects with different scores and triggers. The gold file was generated using an input that had a single CellTally with multiple scores and triggers.
- 1.23.217The system shall correctly apply and normalize two different MeshTally objects with different scores and triggers. The gold file was generated using an input that had a single MeshTally with multiple scores and triggers.
- 1.23.218The system shall correctly apply and normalize two different CellTally objects with different scores when using relaxation. The gold file was generated using an input that had a single CellTally with multiple scores when using relaxation.
- 1.23.219The system shall correctly apply and normalize two different MeshTally objects with different scores when using relaxation. The gold file was generated using an input that had a single MeshTally with multiple scores when using relaxation.
- 1.23.220The system shall error if the user provides multiple tallies with overlapping scores.
- 1.23.221The system shall error if more than one tally is provided and the requested heating score is in none of the tallies.
- 1.23.222The system shall allow calculations with multiple different tallies.
- 1.23.223The system shall allow calculations with multiple different tally outputs.
- 1.23.224The system shall ignore zero bins when executing tally triggers. This is done by measuring the maximum tally relative error of the H3 score. A smaller tally relative error indicates the simulation ran to max batches and didn't ignore the zero bin. If the zero bin is ignored, the simulation should only run 50 batches.
- 1.23.225The system shall enforce the correct length for the trigger_ignore_zeros parameter.
- 1.23.226The system shall ensure that the user provides values of trigger_ignore_zeros when the parameter is set.
- 1.23.227The system shall correctly terminate the OpenMC simulation once reaching a desired k standard deviation.
- 1.23.228The system shall terminate the OpenMC simulation once reaching a desired k standard deviation unless first reaching a maximum number of batches.
- 1.23.229The system shall terminate the OpenMC simulation once reaching a desired k variance.
- 1.23.230The system shall terminate the OpenMC simulation once reaching a desired k relative error.
- 1.23.231The system shall correctly terminate the OpenMC simulation once reaching a desired tally relative error with cell tallies.
- 1.23.232The system shall allow the user to customize the tally estimator for cell tallies
- 1.23.233The system shall correctly terminate the OpenMC simulation once reaching a desired tally relative error with mesh tallies.
- 1.23.234The system shall correctly terminate the OpenMC simulation once reaching a desired tally relative error when applying the same trigger to multiple scores.
- 1.23.235The system shall enforce correct trigger length
- 1.23.236The system shall enforce correct trigger threshold length
- 1.23.237The system shall correctly terminate the OpenMC simulation once reaching a desired tally relative error when applying a different trigger to multiple scores.
- 1.23.238The system shall correctly terminate the OpenMC simulation once reaching a desired tally relative error when applying a different trigger to multiple scores.
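Many of the requirements above reference tally objects (CellTally, MeshTally), tally filters, and trigger parameters configured through the Cardinal input file. The sketch below is a non-normative illustration of one possible layout; the object nesting and any parameter names not quoted verbatim in the requirements (for example, energy_boundaries and filters) are assumptions and should be checked against the Cardinal design pages.

```
# Non-normative sketch only; consult the Cardinal design pages for the
# authoritative syntax. Block names (heat_source, energy) are placeholders.
[Problem]
  type = OpenMCCellAverageProblem
  power = 1000.0                    # total power used to normalize heating tallies
  scaling = 100.0                   # [Mesh] in meters, OpenMC model in centimeters
  normalize_by_global_tally = false # use local-tally normalization

  [Tallies]
    [heat_source]
      type = CellTally
      score = 'heating'             # one of the supported heating scores
      filters = 'energy'            # assumed: reference a filter defined below by name
      trigger = 'rel_err'           # terminate once the tally relative error ...
      trigger_threshold = '1e-2'    # ... drops below this threshold
    []
  []

  [Filters]
    [energy]
      type = EnergyFilter
      energy_boundaries = '0.0 0.625 2e7'  # assumed parameter name; placeholder group edges in eV
    []
  []
[]
```

In this sketch, the trigger and trigger_threshold pair exercises the tally-trigger requirements above, while normalize_by_global_tally = false selects the local-tally normalization path referenced by several of the relaxation and normalization requirements.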
- cardinal: Openmc Errors
- 1.24.1The system shall error if the user specifies a block for coupling that does not exist.
- 1.24.2The system shall error if an empty vector is provided for the blocks
- 1.24.3The system shall error if the MOOSE blocks and OpenMC cells don't overlap
- 1.24.4The system shall print a warning if some MOOSE elements are unmapped
- 1.24.5The system shall error if one OpenMC cell maps to more than one type of feedback
- 1.24.6The system shall error if one OpenMC cell maps to multiple subdomains that don't all have the same tally setting
- 1.24.7The system shall error if the user sets feedback blocks, but none of the elements map to OpenMC
- 1.24.8The system shall error if the user enforces equal mapped tally volumes but the mapped volumes are not identical across tally bins
- 1.24.9The system shall error if we attempt to set a density in a material that is repeated throughout our list of fluid cells.
- 1.24.10If the same material appears in more than one solid cell, there should be no error like there is for the fluid case, because we do not change the density of solid cells.
- 1.24.11The system shall error if attempting to pass temperature feedback to a lattice outer universe due to lack of instance support in OpenMC
- 1.24.12The system shall error if we attempt to set a density less than or equal to zero in OpenMC
- 1.24.13The system shall error if we attempt to set a density in a void OpenMC cell
- 1.24.14The system shall error if attempting to extract the eigenvalue from an OpenMC run that is not run with eigenvalue mode.
- 1.24.15The system shall error if attempting to extract the eigenvalue standard deviation from an OpenMC run that is not run with eigenvalue mode.
- 1.24.16The system shall error if attempting to set a k trigger for an OpenMC mode that doesn't have a notion of eigenvalues
- 1.24.17The system shall error if an OpenMC object is used without the correct OpenMC wrapped problem.
- 1.24.18The system shall error if an auxkernel that queries OpenMC data structures is not used with the correct variable type.
- 1.24.19The system shall error if attempting to set a negative number of active batches
- 1.24.20The system shall error if attempting to set a negative number of active batches
- 1.24.21The system shall error if a properties file is loaded but does not exist
- 1.24.22The system shall error if using a skinner without a DAGMC geometry.
- 1.24.23The system shall error if both or none of the cell level options have been prescribed.
- 1.24.24The system shall error if the specified coordinate level for a phase is within the maximum coordinate levels across the OpenMC domain, but invalid for the particular region of the geometry.
- 1.24.25The system shall error if the specified coordinate level for finding a cell is greater than the maximum number of coordinate levels throughout the geometry.
- 1.24.26The system shall error if a fluid material exists in non-fluid parts of the domain, because density would inadvertently be changed in other parts of the OpenMC model.
- 1.24.27The system shall error if the mesh template is not provided in the same units as the Mesh.
- 1.24.28The system shall error if the mesh template does not exactly match the Mesh, such as when the order of mesh translations does not match the order of inputs in a CombinerGenerator.
- 1.24.29The system shall error if the mesh template does not exactly match the Mesh, such as when a totally different mesh is used (pincell versus pebbles).
- 1.24.30The system shall error if a tracklength estimator is attempted with unstructured mesh tallies, since this capability is not supported in libMesh.
- 1.24.31The system shall error if a particular set of coordinates in the mesh translations does not have all requisite x, y, and z components.
- 1.24.32The system shall error if attempting to run OpenMC in particle restart mode through Cardinal.
- 1.24.33The system shall error if attempting to run OpenMC in plotting mode through Cardinal.
- 1.24.34The system shall error if attempting to run OpenMC in volume mode through Cardinal.
- 1.24.35The system shall warn the user if they did not set the temperature range, protecting against seg faults within the tracking loop when trying to access nuclear data at temperatures that Cardinal wants to apply, but that weren't actually loaded at initialization.
- 1.24.36The system shall error if attempting to use separate tallies when a global tally exists
- 1.24.37The system shall error if name and score are not the same length.
- 1.24.38The system shall error if trigger and trigger_threshold are not simultaneously specified
- 1.24.39The system shall error if name has duplicate entries.
- 1.24.40The system shall error if score has duplicate entries.
- 1.24.41The system shall error if we attempt to set a temperature in OpenMC below the lower bound of available data when using the interpolation method.
- 1.24.42The system shall error if we attempt to set a temperature in OpenMC above the upper bound of available data when using the interpolation method.
- 1.24.43The system shall allow a global tally to be zero.
- cardinal: Postprocessors
- 1.25.1The system shall output the number of coupled OpenMC cells
- 1.25.2The system shall correctly compute the Reynolds and Peclet numbers for a dimensional NekRS case.
- 1.25.3The system shall correctly output the number of NekRS MPI ranks
- 1.25.4The system shall correctly compute the Reynolds and Peclet numbers for a nondimensional NekRS case.
- 1.25.5The system shall allow zero scratch space allocation for NekRSStandaloneProblem.
- 1.25.6The k-eigenvalue and its standard deviation shall be correctly retrieved from the OpenMC solution.
- 1.25.7The maximum and minimum tally relative errors shall be correctly retrieved from the OpenMC solution.
- 1.25.8The maximum and minimum tally relative errors shall be correctly retrieved from the OpenMC solution when using multiple scores.
- 1.25.9The system shall error if trying to extract score information that does not exist
- 1.25.10The system shall prove equivalence between a by-hand calculation of relative error (std_dev output divided by tally value) as compared to the tally relative error postprocessor.
- 1.25.11NekHeatFluxIntegral shall correctly compute the heat flux integral on the nekRS mesh. The gold file was created by running the moose.i input, which computes the same integrals using existing MOOSE postprocessors on the same mesh on auxvariables that match the functional form of the solution fields initialized in the pyramid.udf. Perfect agreement is not to be expected, since the underlying basis functions and quadrature rules are different between nekRS and MOOSE's linear Lagrange variables - we just require that they are reasonably close. A fairly fine mesh is used in MOOSE to get closer to the higher-polynomial-order integration in nekRS.
- 1.25.12The system shall interpolate the NekRS solution onto a given point. This is tested by analytically comparing known initial conditions from NekRS against function evaluations for
- dimensional form
- non-dimensional form
- non-dimensional scaling of usrwrk slots for which the units are known
- 1.25.13The system shall warn the user if dimensionalization is requested and cannot be performed, for
- usrwrk slot 0
- usrwrk slot 1
- usrwrk slot 2
- 1.25.14The system shall allow pressure drag to be computed in the x, y, and z directions in dimensional form. This test compares drag as computed via Nek with by-hand calculations using combinations of native MOOSE postprocessors acting on the NekRS pressure mapped to the mesh mirror.
- 1.25.15The system shall allow pressure drag to be computed in the x, y, and z directions in nondimensional form. This test compares drag as computed via Nek with by-hand calculations using combinations of native MOOSE postprocessors acting on the NekRS pressure mapped to the mesh mirror.
- 1.25.16The system shall error if trying to compute pressure drag on non-fluid NekRS boundaries
- 1.25.17NekSideAverage shall correctly compute area-averaged temperatures on the nekRS mesh. The gold file was created by running the moose.i input, which computes the same averages using existing MOOSE postprocessors on the same mesh on auxvariables that match the functional form of the solution fields initialized in the pyramid.udf. Perfect agreement is not to be expected, since the underlying basis functions and quadrature rules are different between nekRS and MOOSE's linear Lagrange variables - we just require that they are reasonably close.
- 1.25.18NekSideExtremeValue shall correctly compute max/min values on the nekRS mesh boundaries. The gold file was created by running the moose.i input, which computes the same max/min operations using existing MOOSE postprocessors on the same mesh on auxvariables that match the functional form of the solution fields initialized in the pyramid.udf. (An illustrative postprocessor input sketch follows this requirements list.)
- 1.25.19System shall error if using an unsupported field with a side extrema postprocessor.
- 1.25.20NekSideIntegral shall correctly compute boundary areas and area-integrated temperatures on the nekRS mesh. The gold file was created by running the moose.i input, which computes the same integrals using existing MOOSE postprocessors on the same mesh on auxvariables that match the functional form of the solution fields initialized in the pyramid.udf. Perfect agreement is not to be expected, since the underlying basis functions and quadrature rules are different between nekRS and MOOSE's linear Lagrange variables - we just require that they are reasonably close.
- 1.25.21The system shall error if the requested usrwrk slot to integrate exceeds the number allocated
- 1.25.22The system shall allow total viscous drag to be computed in dimensional form. This test compares drag as computed via Nek with by-hand calculations using combinations of native MOOSE postprocessors using the analytic expression for velocity.
- 1.25.23The system shall error if trying to compute viscous drag on non-fluid NekRS boundaries
- 1.25.24dimensional form
- 1.25.25nondimensional form
- 1.25.26NekVolumeExtremeValue shall correctly compute max/min values on the nekRS volume mesh. The gold file was created by running the moose.i input, which computes the same max/min operations using existing MOOSE postprocessors on the same mesh on auxvariables that match the functional form of the solution fields initialized in the pyramid.udf.
- 1.25.27System shall error if using an unsupported field with a volume extrema postprocessor.
- 1.25.28dimensional form
- 1.25.29nondimensional form
- 1.25.30NekMassFluxWeightedSideAverage shall correctly compute a mass-flux-weighted average of temperatures on the nekRS mesh. The gold file was created by running the moose.i input, which computes the same integrals using existing MOOSE postprocessors on the same mesh on auxvariables that match the functional form of the solution fields initialized in the brick.udf. Here, we don't technically make the gold file with a MOOSE run because MOOSE doesn't have a postprocessor that lets you divide two other postprocessors (such as mdot_side1 / weighted_T_side1 to get what NekMassFluxWeightedSideAverage is computing), but we do the division off-line to confirm that the results are correct. Perfect agreement is not to be expected, since the underlying basis functions and quadrature rules are different between nekRS and MOOSE's linear Lagrange variables - we just require that they are reasonably close.
- 1.25.31NekMassFluxWeightedSideIntegral shall correctly compute a mass-flux-weighted area integral of temperatures on the nekRS mesh. The gold file was created by running the moose.i input, which computes the same integrals using existing MOOSE postprocessors on the same mesh on auxvariables that match the functional form of the solution fields initialized in the brick.udf. Perfect agreement is not to be expected, since the underlying basis functions and quadrature rules are different between nekRS and MOOSE's linear Lagrange variables - we just require that they are reasonably close.
- 1.25.32System shall error if using an unsupported field with a mass flux weighted postprocessor.
- 1.25.33The system shall allow the number of particles run by OpenMC to be extracted as a postprocessor
- 1.25.34The k-eigenvalue and its standard deviation shall be correctly retrieved from the OpenMC solution.
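The NekRS postprocessor requirements above (side integrals, side averages, extreme values, and mass-flux-weighted averages) act on boundaries or the volume of the NekRS mesh mirror. A minimal, non-normative sketch of how such postprocessors might be declared is shown below; the boundary IDs are arbitrary, and parameter names such as field and value_type are assumptions to be verified against the design pages for each object.

```
# Non-normative sketch of NekRS postprocessors named in the requirements above.
# Boundary IDs and the 'field'/'value_type' parameter names are assumptions.
[Postprocessors]
  [inlet_area]
    type = NekSideIntegral
    field = unity            # integrating unity over the sideset gives its area
    boundary = '1'
  []
  [outlet_bulk_T]
    type = NekMassFluxWeightedSideAverage
    field = temperature      # mass-flux-weighted (bulk) temperature
    boundary = '2'
  []
  [max_T]
    type = NekVolumeExtremeValue
    field = temperature
    value_type = max         # assumed parameter for selecting max vs. min
  []
[]
```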
- cardinal: Sam Coupling
- 1.26.1Cardinal shall be able to run SAM as the master application without any data transfers. This test just ensures correct setup of SAM as a submodule with app registration.
- 1.26.2Cardinal shall be able to run SAM as a sub-application without any data transfers. This test just ensures correct setup of SAM as a submodule with app registration.
- cardinal: Sockeye Coupling
- 1.27.1Cardinal shall be able to run Sockeye as a master application without any data transfers. This test just ensures correct setup of Sockeye as a submodule with app registration.
- 1.27.2Cardinal shall be able to run Sockeye as a sub-application without any data transfers. This test just ensures correct setup of Sockeye as a submodule with app registration.
- cardinal: Symmetry
- 1.28.1The OpenMC wrapping shall allow visualization of point reflection transformations
- 1.28.2The OpenMC wrapping shall allow visualization of rotation transformations
- cardinal: Transfers
- 1.29.1The system shall allow nearest point receiver transfers both to and from the multiapp.
- cardinal: Userobjects
- 1.30.1Spatially-binned volume integrals and averages shall be correctly dimensionalized for nondimensional cases. An equivalent setup with a dimensional problem is available at ../dimensional. The user object averages/integrals computed here exactly match.
- 1.30.2A hexagonal subchannel bin shall divide space according to subchannel discretization in the gaps of a hexagonal lattice.
- 1.30.3A hexagonal gap and 1-D layered bin shall be combined to give a multi-dimensional binning and demonstrate correct results for side integrals and averages in directions normal to the gap planes.
- 1.30.4The correct unit normals shall be formed for a layered Cartesian gap bin.
- 1.30.5The system shall error if the userobjects aren't derived from the correct base class.
- 1.30.6A hexagonal gap and 1-D layered bin shall be combined to give a multi-dimensional binning and demonstrate correct results for side integrals and averages.
- 1.30.7A layered gap and 2-D hexagonal subchannel bin shall be combined to give a multi-dimensional binning and demonstrate correct results for side integrals and averages.
- 1.30.8The system shall error if there are zero contributions to a gap bin.
- 1.30.9A hexagonal gap and 1-D layered bin shall be combined to give a multi-dimensional binning and demonstrate correct results for side averages of velocity along a user-specified direction.
- 1.30.10A hexagonal subchannel bin shall divide space according to subchannel discretization in a hexagonal lattice.
- 1.30.11A hexagonal subchannel bin shall divide space according to a pin-centered subchannel discretization in a hexagonal lattice with two pin rings.
- 1.30.12A hexagonal subchannel bin shall divide space according to a pin-centered subchannel discretization in a hexagonal lattice with three pin rings.
- 1.30.13The system shall allow the Nek user objects to only evaluate on a fixed interval of time steps. The gold files were created by setting interval = 1 to ensure that the output files are exactly identical at the specified interval steps.
- 1.30.14The system shall allow the Nek to mesh-mirror interpolation to only occur on a fixed interval of time steps. The gold files were created by setting interval = 1 to ensure that the output files are exactly identical at the specified interval steps.
- 1.30.15A layered bin shall divide space according to 1-D layers in a given direction.
- 1.30.16A layered bin shall divide space according to 1-D layers in a given direction.
- 1.30.17A layered bin shall divide space according to 1-D layers in a given direction.
- 1.30.18Multiple 1-D layered bins shall be combined to give a multi-dimensional binning and demonstrate correct results for volume integrals and averages (an illustrative user object input sketch follows this requirements list).
- 1.30.19The system shall error if the user attempts to combine multiple bins that specify the same coordinate direction.
- 1.30.20The output points shall be automatically output for a 1-D Cartesian volume distribution and a 1-D Cartesian surface distribution.
- 1.30.21The output points shall be automatically output for a 3-D Cartesian distribution.
- 1.30.22System shall error if no points map to a spatial bin
- 1.30.23The system shall error if the maximum temperature is lower than the minimum temperature for the skinner bins
- 1.30.24The system shall error if the maximum density is lower than the minimum density for the skinner bins
- 1.30.25The system shall error if the temperature is below the minimum bin bound
- 1.30.26The system shall error if the temperature is above the maximum bin bound
- 1.30.27The system shall error if the density is below the minimum bin bound
- 1.30.28The system shall error if the density is above the maximum bin bound
- 1.30.29The system shall error if the skinned mesh does not contain tetrahedral elements
- 1.30.30The system shall error if the outer graveyard surface is not larger than the inner graveyard surface
- 1.30.31The system shall error if the specified temperature auxiliary variable cannot be found
- 1.30.32The system shall error if the specified density auxiliary variable cannot be found
- 1.30.33The system shall error if the specified density and temperature auxiliary variables are the same
- 1.30.34The system shall error if trying to run in distributed mesh mode
- 1.30.35The system shall error if the material_names provided to the skinner do not match the required length
- 1.30.36The system shall bin elements according to temperature and density on an expanding mesh and be able to visualize the bins on the mesh volume.
- 1.30.37The system shall be able to convert a .h5m file to gmsh
- 1.30.38The system shall be able to convert a .h5m file to gmsh
- 1.30.39The system shall properly skin a MOAB mesh and create new MOAB surface meshes bounding bin regions
- 1.30.40The system shall properly skin a MOAB mesh and create new MOAB surface meshes bounding bin regions after expansion with the correct displacements.
- 1.30.41
- 1.30.42The system shall be able to convert a .h5m file to gmsh
- 1.30.43The system shall properly skin a MOAB mesh and create new MOAB surface meshes bounding bin regions, with a graveyard region outside the original mesh.
- 1.30.44The system shall output the MOAB mesh copied from libMesh into the .h5m format
- 1.30.45The system shall be able to convert a .h5m file to gmsh
- 1.30.46The system shall properly convert from libMesh to MOAB meshes in-memory. We check this in a somewhat circuitous manner by writing the volume mesh, then converting it to gmsh, and finally re-reading it into MOOSE so that we can use the Exodiff utility.
- 1.30.47The system shall bin elements according to temperature and density, on multiple subdomains, and be able to visualize the bins on the mesh volume for second-order tets.
- 1.30.48The system shall be able to convert a .h5m file to gmsh
- 1.30.49The system shall be able to convert a .h5m file to gmsh
- 1.30.50The system shall properly skin a MOAB mesh and create new MOAB surface meshes bounding bin regions for second-order tets
- 1.30.51The system shall properly skin a MOAB mesh and create new MOAB surface meshes bounding bin regions for second-order tets. The bins shall be re-generated on each time step.
- 1.30.52The system shall bin elements according to temperature and density, on multiple subdomains, and be able to visualize the bins on the mesh volume.
- 1.30.53The system shall bin elements according to temperature, on multiple subdomains, and be able to visualize the bins.
- 1.30.54The system shall be able to convert a .h5m file to gmsh
- 1.30.55The system shall be able to convert a .h5m file to gmsh
- 1.30.56The system shall properly skin a MOAB mesh and create new MOAB surface meshes bounding bin regions
- 1.30.57The system shall properly skin a MOAB mesh and create new MOAB surface meshes bounding bin regions. The bins shall be re-generated on each time step.
- 1.30.58The system shall error if the requirements for a binning variable type are violated.
- 1.30.59The NekScalarValue shall allow NekRS simulations to interface with MOOSE's Controls system
- 1.30.60The NekScalarValue shall allow standalone NekRS simulations to interface with MOOSE's Controls system
- 1.30.61The system shall error if trying to change nuclide densities for a non-existing material ID.
- 1.30.62The system shall error if trying to add a nuclide not accessible in the cross section library.
- 1.30.63The system shall give identical results to a standalone OpenMC run if the nuclide compositions are modified, but are still set to their initial values.
- 1.30.64The system shall give identical results to a standalone OpenMC run if the nuclide densities are modified, but there are no nuclides added or removed.
- 1.30.65The system shall give identical results to a standalone OpenMC run if the nuclide densities and nuclides are modified, after which the total density is modified due to thermal effects.
- 1.30.66The system shall give identical results to a standalone OpenMC run if the nuclide densities and nuclides are modified.
- 1.30.67The system shall error if inconsistent lengths for names and densities are provided.
- 1.30.68The system shall error if
- trying to edit a non-existent tally
- trying to add a nuclide not accessible in the cross section library
- trying to add an invalid score to a tally
- the filter referenced by an OpenMCFilterEditor via ID does not exist and is not flagged for creation
- an OpenMCDomainFilter editor exists with the same filter ID but a different filter type
- more than one OpenMCDomainFilterEditor exists with the same filter ID
- more than one OpenMCTallyEditor exists with the same tally ID
- an OpenMCTallyEditor exists for a mapped tally created by Cardinal
- 1.30.69Ensure that nuclides specified by a tally editor UO are present in the tally output
- 1.30.70Ensure that the scattering score specified by a tally editor UO is present in the tally output
- 1.30.71Ensure that the absorption score for a specific nuclide specified by a tally editor UO is present in the tally output
- 1.30.72Ensure that a cell filter specified by a tally editor UO is present in the tally output
- 1.30.73Ensure that a material filter specified by a tally editor UO is present in the tally output
- 1.30.74Ensure that a universe filter specified by a tally editor UO is present in the tally output
- 1.30.75A radial bin shall divide space according to 1-D layers in the radial direction.
- 1.30.76The output points shall be automatically output for a single-axis radial distribution.
- 1.30.77The output points shall be automatically output for a single-axis radial distribution plus a 1-D distribution.
- 1.30.78The system shall precompile a Nek case in preparation for a multi-input simulation.
- 1.30.79The system shall error if an invalid boundary ID is specified
- 1.30.80The system shall correctly integrate over a sideset in the NekRS domain when mapping space by the quadrature point
- 1.30.81The system shall correctly integrate over a sideset in the NekRS domain when mapping space by the face centroid
- 1.30.82The system shall correctly average over a sideset in the NekRS domain
- 1.30.83The system shall error if the userobjects aren't derived from the correct base class.
- 1.30.84The system shall error if the userobjects aren't listed in the correct order in the input file.
- 1.30.85A subchannel and 1-D layered bin shall be combined to give a multi-dimensional binning and demonstrate correct results for volume integrals and averages.
- 1.30.86The system shall error if the user attempts to combine multiple bins that specify the same coordinate direction.
- 1.30.87The output points shall be automatically output for a single-axis subchannel binning
- 1.30.88System shall error if a side user object is provided to a volume binning user object.
- 1.30.89System shall error if attempting to use a normal velocity component with a user object that does not have normals defined
- 1.30.90A subchannel and 1-D layered bin shall be combined to give a multi-dimensional binning and demonstrate correct results for volume averages of velocity projected along a constant direction.
- 1.30.91 A pin-centered subchannel bin shall give correct results for side averages of temperature.
- 1.30.92 The system shall error if the rotation axis is not perpendicular to the symmetry normal.
- 1.30.93 The system shall error if the rotation angle does not describe a valid symmetry wedge.
- 1.30.94 Spatially-binned volume integrals and averages shall be correctly dimensionalized for nondimensional cases. An equivalent setup with a dimensional problem is available at ../dimensional; the user object averages/integrals computed here exactly match it.
- 1.30.95 The system shall warn the user if encountering volume calculations needing instance-level granularity, since this is not yet available in OpenMC itself.
- 1.30.96 The system shall map from a stochastic volume calculation to MOOSE for repeated cell instances.
- 1.30.97 The system shall terminate the stochastic volume calculation using a relative error metric.
- 1.30.98 The system shall error if an invalid bounding box is specified for volume calculations.
- 1.30.99 The system shall map stochastic volumes for each OpenMC cell which maps to MOOSE.
- 1.30.100 The system shall map stochastic volumes for each OpenMC cell which maps to MOOSE when the Mesh is not in units of centimeters.
- 1.30.101 The system shall error if trying to view stochastic volumes without the stochastic volume calculation having been created.
- 1.30.102 The system shall terminate the stochastic volume calculation using a relative error metric.
- 1.30.103 The system shall correctly compute volumes in OpenMC when using a user-provided bounding box.
Usability Requirements
No requirements of this type exist for this application, beyond those of its dependencies.
Performance Requirements
No requirements of this type exist for this application, beyond those of its dependencies.
System Interfaces
No requirements of this type exist for this application, beyond those of its dependencies.
System Operations
Human System Integration Requirements
The Cardinal application is command-line driven and conforms to all standard terminal behaviors. Specific human-system interaction accommodations shall be a function of the end user's terminal. MOOSE (and therefore Cardinal) supports optional coloring of terminal output, within the terminal's ability to display color, and this coloring may be disabled.
Maintainability
The latest working version (defined as the version that passes all tests in the current regression test suite) shall be publicly available at all times through the repository host provider.
Flaws identified in the system shall be reported and tracked in a ticket or issue based system. The technical lead will determine the severity and priority of all reported issues and assign resources at their discretion to resolve identified issues.
The software maintainers will entertain all proposed changes to the system in a timely manner (within two business days).
The core software in its entirety will be made available under the terms of a designated software license. These license terms are outlined in the LICENSE file alongside the Cardinal application source code.
Reliability
The regression test suite will cover at least 90% of all lines of code at all times. Known regressions will be recorded and tracked (see Maintainability) to an independent and satisfactory resolution.
System Modes and States
MOOSE applications run in normal execution mode when an input file is supplied. However, a few other modes can be triggered with various command-line flags, as indicated here:
Command Line Flag | Description of mode |
---|---|
-i <input_file> | Normal execution mode |
--split-mesh <splits> | Read the mesh block splitting the mesh into two or more pieces for use in a subsequent run |
--use-split | (implies -i flag) Execute the simulation but use pre-split mesh files instead of the mesh from the input file |
--yaml | Output all object descriptions and available parameters in YAML format |
--json | Output all object descriptions and available parameters in JSON format |
--syntax | Output all registered syntax |
--registry | Output all known objects and actions |
--registry-hit | Output all known objects and actions in HIT format |
--mesh-only | (implies -i flag) Run only the mesh related tasks and output the final mesh that would be used for the simulation |
--start-in-debugger <debugger> | Start the simulation attached to the supplied debugger |
This list of system modes is not exhaustive, as the system is designed to be extended by end-user applications. The complete list of command-line options for an application can be obtained by running the executable with zero arguments. See the command line usage.
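For illustration only, the sketch below drives a few of the documented modes from Python; the executable name (cardinal-opt) and the input file name are assumptions made for this example and are not prescribed by this specification.

```python
# Minimal sketch (not part of the specification): invoking the documented
# execution modes through Python's subprocess module. The executable name
# "cardinal-opt" and the input file name are assumptions for illustration.
import subprocess

EXE = "cardinal-opt"        # assumed name of the compiled executable
INPUT = "coupled_case.i"    # assumed name of a MOOSE-style input file

def run(*flags):
    """Launch the executable with the given command-line flags."""
    subprocess.run([EXE, *flags], check=True)

run("-i", INPUT)                        # normal execution mode
run("-i", INPUT, "--mesh-only")         # run only the mesh-related tasks
run("-i", INPUT, "--split-mesh", "4")   # pre-split the mesh into 4 pieces
run("-i", INPUT, "--use-split")         # reuse the pre-split mesh files
```

Equivalent invocations can, of course, be issued directly from a terminal, which is the typical usage pattern.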
Physical Characteristics
The Cardinal application is software only with no associated physical media. See System Requirements for a description of the minimum required hardware necessary for running the Cardinal application.
Environmental Conditions
Not Applicable
System Security
MOOSE-based applications such as Cardinal have no requirements or special needs related to system-security. The software is designed to run completely in user-space with no elevated privileges required nor recommended.
Information Management
Cardinal, as well as the core MOOSE framework, OpenMC, NekRS, and DAGMC, is publicly available on an appropriate repository hosting site. Day-to-day backups and security services will be provided by the hosting service. More information about backups of the public repository on INL-hosted services can be found on the following page: GitHub Backups.
Policies and Regulations
MOOSE-based applications must comply with all export control restrictions.
System Life Cycle Sustainment
MOOSE-based development follows various agile methods. The system is continuously built and deployed in a piecemeal fashion since objects within the system are more or less independent. Every new object requires a test, which in turn requires an associated requirement and design description. The Cardinal development team follows the NQA-1 standards.
Packaging, Handling, Shipping and Transportation
No special requirements are needed for packaging or shipping any media containing the Cardinal source code. However, some other applications that use Cardinal may be export-controlled, in which case all export control restrictions must be adhered to when packaging and shipping media.
Verification
The regression test suite will employ several verification tests using comparison against known analytical solutions, the method of manufactured solutions, and convergence rate analysis.
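As an illustration of the convergence rate analysis mentioned above, the observed order of accuracy can be estimated from errors measured on two successively refined meshes; the sketch below is generic, and the numerical values are made up for the example rather than taken from the Cardinal test suite.

```python
# Minimal sketch of a convergence-rate check: assuming error ~ C * h**p, the
# observed order p is recovered from errors e1, e2 at mesh spacings h1 > h2.
import math

def observed_order(e1: float, e2: float, h1: float, h2: float) -> float:
    """Estimate the observed order of accuracy from two error samples."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# Illustrative (made-up) values: halving h reduces the error by about 4x,
# consistent with second-order accuracy.
print(observed_order(4.0e-3, 1.0e-3, 0.2, 0.1))  # approximately 2.0
```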