Building Cardinal without MOOSE's conda Environment

Note: TL;DR

On CPU systems, all that you need to compile Cardinal is:


cd $HOME
git clone https://github.com/neams-th-coe/cardinal.git
cd cardinal
./scripts/get-dependencies.sh
./contrib/moose/scripts/update_and_rebuild_petsc.sh
./contrib/moose/scripts/update_and_rebuild_libmesh.sh
./contrib/moose/scripts/update_and_rebuild_wasp.sh
export NEKRS_HOME=$HOME/cardinal/install
make -j8

If the above produces a cardinal-opt executable, you can jump straight to Running. If you are on a GPU system, want to customize the build, or were not successful with the above, please consult the detailed instructions that follow.

Access

To get access to Cardinal, clone the repository and cd into the directory.


git clone https://github.com/neams-th-coe/cardinal.git
cd cardinal

Prerequisites

The basic prerequisites for building Cardinal are summarized in Table 1.

Table 1: Summary of prerequisites needed for Cardinal. Some are needed only when building with NekRS, some only when building with OpenMC, and some for both.

  • CMake
  • GNU Fortran compiler (version 9.0 or newer)
  • HDF5
  • MPI
Tip: How do I know if I have these dependencies?

Most systems will already have these available. To figure out if you have these dependencies, check out our prerequisite guide.
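
For a quick command-line check (a sketch, assuming the standard executable names for each tool are on your PATH), ask each prerequisite for its version:


# each of these should print a version string if the tool is installed
cmake --version
gfortran --version
mpicxx --version
h5dump --version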

Then, decide whether you want both NekRS and OpenMC, just one, or neither. Both are enabled by default, but you can build Cardinal with only the dependencies that you want. If you do not want to build the NekRS part of Cardinal, set the following environment variable:


export ENABLE_NEK=false

Likewise, if you do not want to build the OpenMC part of Cardinal, set the following environment variable:


export ENABLE_OPENMC=false

We support the optional use of DAGMC's CAD-based models in OpenMC. This capability is off by default; to build with DAGMC support, set:


export ENABLE_DAGMC=yes

Building

Fetch Dependencies

Cardinal has MOOSE, OpenMC, and NekRS as dependencies. However, you do not need to build or compile any of these dependencies separately; Cardinal's Makefile handles all steps automatically. To fetch the MOOSE, OpenMC, and NekRS dependencies, run:


./scripts/get-dependencies.sh
Note: Optional dependencies

Cardinal supports optional coupling to the following codes:

  • SAM, a tool for systems-level safety analysis of advanced non-light-water reactors. Follow these instructions to obtain the required dependencies for adding the SAM submodule.

  • Sockeye, a tool for modeling of heat pipe systems. Follow these instructions to obtain the required dependencies for adding the Sockeye submodule.

  • BISON, a tool for modeling fuel performance and material behavior of nuclear fuels. Follow these instructions to obtain the required dependencies for adding the BISON submodule.

Set Environment Variables

A number of environment variables are required or recommended when building/running Cardinal. Put these in your ~/.bashrc (don't forget to source ~/.bashrc!):


# [REQUIRED] you must set the location of the root directory of the NekRS install;
# this will be the 'install' directory at the top level of the Cardinal repository.
export NEKRS_HOME=$HOME/cardinal/install

# [OPTIONAL] it's a good idea to explicitly note that you are using MPI compiler wrappers
export CC=mpicc
export CXX=mpicxx
export FC=mpif90

# [OPTIONAL] if running with OpenMC, you will need cross section data at runtime;
# you will need to set this variable to point to a 'cross_sections.xml' file.
export OPENMC_CROSS_SECTIONS=${HOME}/cross_sections/endfb-vii.1-hdf5/cross_sections.xml
Tip: Additional environment variables

For even further control, you can set other optional environment variables to specify the optimization level, dependency locations, and more.
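
For example (a sketch: METHOD is MOOSE's build-mode variable, which is also referenced in the Compile Cardinal step below; the HDF5 variable name shown here is only illustrative, so consult the Cardinal documentation for the authoritative list of supported variables):


# [OPTIONAL] MOOSE optimization level: opt (the default), dbg, devel, or oprof
export METHOD=opt

# [OPTIONAL, illustrative variable name] point the build at a non-standard HDF5 install
export HDF5_ROOT=/path/to/hdf5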

Set OCCA Backend

NekRS uses OCCA to provide an API for device programming. Available backends include CPU (i.e., Message Passing Interface (MPI) parallelism), CUDA, HIP, OpenCL, and OpenMP. There are several ways to set the backend; in order of decreasing priority:

  • Pass via the command line, like

    
    cardinal-opt -i nek.i --nekrs-backend=CPU
    
  • Set in the [OCCA] block of the NekRS .par file to control the backend for a specific model, like

    
    [OCCA]
      backend = CPU
    
  • Set the NEKRS_OCCA_MODE_DEFAULT environment variable to one of CPU, CUDA, HIP, OPENCL, or OPENMP to control the backend for all models, like

    
    export NEKRS_OCCA_MODE_DEFAULT=CPU
    
Note: Compiling for GPU?

If you plan to use a GPU backend, you will also need to select the correct threading API in the Makefile by setting OCCA_CUDA_ENABLED, OCCA_HIP_ENABLED, or OCCA_OPENCL_ENABLED to 1, as appropriate.
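
For example, a CUDA build would set the following in Cardinal's Makefile (a minimal sketch; enable exactly one backend and leave the others at 0):


# enable the CUDA backend for OCCA; HIP and OpenCL stay disabled
OCCA_CUDA_ENABLED=1
OCCA_HIP_ENABLED=0
OCCA_OPENCL_ENABLED=0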

Build PETSc, libMesh, and WASP

You must now build PETSc, libMesh, and WASP:


./contrib/moose/scripts/update_and_rebuild_petsc.sh
./contrib/moose/scripts/update_and_rebuild_libmesh.sh
./contrib/moose/scripts/update_and_rebuild_wasp.sh

To troubleshoot the PETSc or libMesh install, please consult our PETSc and libMesh troubleshooting page. If you want to check the PETSc install, you can run the PETSc tests.

Tip

Building libMesh can be time-consuming. You only need to build libMesh if the libMesh hash used by MOOSE has been updated or if this is the first time you are building Cardinal. On systems with multiple processors, you can set the environment variables JOBS, LIBMESH_JOBS, and/or MOOSE_JOBS to the number of processes to use for a parallel make when building libMesh.
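
For example, to build libMesh with 8 parallel processes (a sketch using JOBS; per the note above, LIBMESH_JOBS or MOOSE_JOBS would work the same way):


# use 8 parallel make processes for the libMesh build
export JOBS=8
./contrib/moose/scripts/update_and_rebuild_libmesh.sh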

Compile Cardinal

Finally, run make in the top-level directory,


make -j8 MAKEFLAGS=-j8

which will compile Cardinal in parallel with 8 processes (the MAKEFLAGS part is optional, but it also tells CMake to build in parallel with 8 processes; otherwise, the CMake-based parts of Cardinal, i.e. OpenMC, NekRS, and DAGMC, will build serially). This creates the executable cardinal-<mode> in the top-level directory, where <mode> is the optimization level used to compile MOOSE, set with the METHOD environment variable. If you encounter issues while compiling, check out our compile-time troubleshooting guide.
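
For example, to build a debug executable instead (a sketch using MOOSE's METHOD variable; per the naming convention above, this produces cardinal-dbg):


export METHOD=dbg
make -j8 MAKEFLAGS=-j8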

Running

The command to run Cardinal with an input file input.i, <n> MPI ranks, and <s> OpenMP threads is:


mpiexec -np <n> cardinal-opt -i input.i --n-threads=<s>

This command assumes that cardinal-opt is located on your PATH; otherwise, you need to provide the full path to cardinal-opt in the above command or add the cardinal folder to your PATH. Note that while MOOSE and OpenMC use hybrid parallelism with both MPI and OpenMP, NekRS does not use shared-memory parallelism.
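
For example (assuming Cardinal was cloned to $HOME/cardinal as in the earlier steps), you could add this line to your ~/.bashrc:


# make cardinal-opt available without typing the full path
export PATH=$HOME/cardinal:$PATH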

Tip: Command line options

Cardinal supports all of MOOSE's command line parameters, as well as a few Cardinal-specific command line options. For a full list:


./cardinal-opt --help

Checking the Install

If you would like to check that Cardinal was built correctly and that you have all the basic requirements in place, the steps below walk you through a few installation checks and run a few input files. If you run into any issues with the following commands, you can find an FAQ of common issues here.

  1. If using OpenMC, make sure that you have cross sections downloaded. If the following returns an empty line, you need to download cross sections.

     echo $OPENMC_CROSS_SECTIONS

  2. If using OpenMC, try running a multiphysics case.

     cd test/tests/neutronics/feedback/lattice
     mpiexec -np 2 ../../../../../cardinal-opt -i openmc_master.i --n-threads=2

  3. If using OpenMC, try building the OpenMC XML files using OpenMC's Python API. If you run into any issues, you most likely need to install OpenMC's Python API (see the sketch after this list).

     cd tutorials/lwr_solid
     python make_openmc_model.py

  4. If using NekRS, try running a conjugate heat transfer case.

     cd test/tests/cht/sfr_pincell
     mpiexec -np 4 ../../../../cardinal-opt -i nek_master.i

  5. Try leveraging NekRS's tools to make a mesh. If you run into any issues, you most likely need to install the NekRS tools.

     cd test/tests/conduction/boundary_and_volume/prism
     exo2nek
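
One common way to install OpenMC's Python API is to pip-install it from the OpenMC source tree (a sketch; contrib/openmc is assumed here to be the location of Cardinal's bundled OpenMC submodule, and you may prefer to do this inside a Python virtual environment):


# install the openmc Python package from the bundled OpenMC source (path assumed)
cd contrib/openmc
pip install .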

For Developers

You can run Cardinal's regression test suite with the following:


./run_tests -j8

which will run the tests in parallel with 8 processes. OpenMC's tests require you to use this data set. Depending on the availability of various dependencies, some tests may be skipped. The first time you run the test suite, the runtime will be very long due to the just-in-time compilation of NekRS. Subsequent runs will be much faster due to the use of cached build files. If you run into issues running the test suite, please check out our run_tests troubleshooting page.

You can run the unit tests with the following:


cd unit
make -j8
./run_tests -j8