HPC Systems
This page collects instructions for building and running Cardinal on a number of HPC systems. Because default modules and settings on HPC systems change over time, the instructions below may become outdated, though we try to keep this information up to date. Note that the absence of a particular HPC system from this list does not mean that Cardinal will not build or run on that system, only that documentation has not yet been created.
In addition to these provided module and environment settings, you must follow the build instructions on the Getting Started page.
NekRS can sometimes fail to correctly pre-compile its kernels on these HPC systems. We recommend precompiling NekRS (with the nrspre script) if you run into issues. See the NekRS documentation for more information.
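The kernels can also be precompiled through Cardinal itself using the --nekrs-build-only flag that appears in the Frontier and Summit job scripts later on this page. A minimal sketch, assuming CARDINAL_DIR points at your Cardinal repository and using a hypothetical case name (fluid), input file (nek_master.i), and rank count (36):
# Precompile the NekRS kernels for a later 36-rank run, then exit without solving
mpirun -np 1 $CARDINAL_DIR/cardinal-opt -i nek_master.i --nekrs-setup fluid --nekrs-build-only 36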
Bebop
Bebop is an HPC system at Argonne National Laboratory (ANL) with an Intel Broadwell partition (36 cores/node) and an Intel Knights Landing partition (64 cores/node).
Note that if you want to build Cardinal via a job script, you will also need to module load numactl/2.0.12-355ef36, because make can find libnuma-dev on the login nodes but the module must be loaded explicitly on the compute nodes.
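A hedged sketch of what this might look like near the top of a build job script (the -j value is just an example and should match your node's core count):
# libnuma is not found automatically on compute nodes, so load it explicitly
module load numactl/2.0.12-355ef36
make -j 36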
module purge
module load gcc/9.2.0-pkmzczt
module load openmpi/4.1.1-ckyrlu7
module load cmake/3.20.3-vedypwm
module load python/intel-parallel-studio-cluster.2019.5-zqvneip/3.6.9
export CC=mpicc
export CXX=mpicxx
export FC=mpif90
# Revise for your Cardinal repository location
DIRECTORY_WHERE_YOU_HAVE_CARDINAL=$HOME
# This is needed because your home directory on Bebop is actually a symlink
HOME_DIRECTORY_SYM_LINK=$(realpath -P $DIRECTORY_WHERE_YOU_HAVE_CARDINAL)
export NEKRS_HOME=$HOME_DIRECTORY_SYM_LINK/cardinal/install
# Revise for your cross sections location
export OPENMC_CROSS_SECTIONS=$HOME_DIRECTORY_SYM_LINK/cross_sections/endfb-vii.1-hdf5/cross_sections.xml
#!/bin/bash
# Usage:
# 1. Copy to the directory where you have your files
# 2. Update any needed environment variables and input file names in this script
# 3. sbatch job_bebop
#SBATCH --job-name=cardinal
#SBATCH --account=startup
#SBATCH --partition=bdwall
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=36
#SBATCH --output=run.out
#SBATCH --error=run.error
#SBATCH --time=00:10:00
module purge
module load cmake/3.20.3-vedypwm
module load gcc/9.2.0-pkmzczt
module load openmpi/4.1.1-ckyrlu7
module load python/intel-parallel-studio-cluster.2019.5-zqvneip/3.6.9
# Revise for your repository and cross section data locations
DIRECTORY_WHERE_YOU_HAVE_CARDINAL=$HOME
HOME_DIRECTORY_SYM_LINK=$(realpath -P $DIRECTORY_WHERE_YOU_HAVE_CARDINAL)
export NEKRS_HOME=$HOME_DIRECTORY_SYM_LINK/cardinal/install
export CARDINAL_DIR=$HOME_DIRECTORY_SYM_LINK/cardinal
# The name of the input file you want to run
input_file=openmc.i
# Run a Cardinal case
srun -n 36 $CARDINAL_DIR/cardinal-opt -i $input_file > logfile
(scripts/job_bebop)
Frontier
Frontier is an HPC system at Oak Ridge National Laboratory (ORNL) with 9408 AMD compute nodes, each with 4 AMD MI250X GPUs; each MI250X contains 2 Graphics Compute Dies (GCDs), so you can think of each node as having 8 GPUs. Remember that in order to build Cardinal with GPU support, you must set the appropriate variable in the Makefile to true (1):
OCCA_CUDA_ENABLED=0
OCCA_HIP_ENABLED=1
OCCA_OPENCL_ENABLED=0
When building the PETSc, libMesh, and WASP dependencies from the MOOSE scripts, you will also need to pass some additional settings to libMesh:
./contrib/moose/scripts/update_and_rebuild_petsc.sh
./contrib/moose/scripts/update_and_rebuild_libmesh.sh --enable-xdr-required --with-xdr-include=/usr/include
./contrib/moose/scripts/update_and_rebuild_wasp.sh
if [ "$LMOD_SYSTEM_NAME" = frontier ]; then
module purge
module load PrgEnv-gnu craype-accel-amd-gfx90a cray-mpich rocm cray-python/3.9.13.1 cmake/3.21.3
module unload cray-libsci
# Revise for your Cardinal repository location
DIRECTORY_WHERE_YOU_HAVE_CARDINAL=$HOME/frontier
cd $DIRECTORY_WHERE_YOU_HAVE_CARDINAL
HOME_DIRECTORY_SYM_LINK=$(realpath -P $DIRECTORY_WHERE_YOU_HAVE_CARDINAL)
export NEKRS_HOME=$HOME_DIRECTORY_SYM_LINK/cardinal/install
export OPENMC_CROSS_SECTIONS=/lustre/orion/fus166/proj-shared/novak/cross_sections/endfb-vii.1-hdf5/cross_sections.xml
fi
#!/bin/bash
# Usage:
# 1. Copy to the directory where you have your files
# 2. Update any needed environment variables and input file names in this script
# 3. ./job_frontier [casename] <nodes> <hh:mm>
# where [casename] is the NekRS case (used for precompiling). If not using
# NekRS, omit this field.
# <nodes> is the number of nodes you want to use
# <hh:mm> is the maximum wall time you want
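# Hypothetical usage examples (case name, node count, and wall time are placeholders):
#   ./job_frontier fluid 2 01:00     # NekRS (or NekRS+OpenMC) case named "fluid"
#   ./job_frontier 2 01:00           # OpenMC-only case (no NekRS case name)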
# additional variables you may want to change
: ${PROJ_ID:="FUS166"}
: ${CARDINAL_I:=openmc.i}
: ${CARDINAL_DIR:=/ccs/home/novak/frontier/cardinal}
: ${QUEUE:="batch"}
: ${NEKRS_HOME:="$CARDINAL_DIR/install"}
: ${OCCA_CACHE_DIR:="$PWD/.cache/occa"}
: ${CARDINAL_BIN:="$CARDINAL_DIR/cardinal-opt"}
: ${OPENMC_CROSS_SECTIONS:=/lustre/orion/fus166/proj-shared/novak/cross_sections/endfb-vii.1-hdf5/cross_sections.xml}
: ${NEKRS_CACHE_BCAST:=1}
: ${NEKRS_SKIP_BUILD_ONLY:=1}
# =============================================================================
# Adapted from nrsqsub_frontier
# =============================================================================
if [ -z "$PROJ_ID" ]; then
echo "ERROR: PROJ_ID is empty"
exit 1
fi
if [ -z "$QUEUE" ]; then
echo "ERROR: QUEUE is empty"
exit 1
fi
if [ $# -lt 2 ] || [ $# -gt 3 ]; then
echo "usage: [PROJ_ID] [QUEUE] $0 [casename] <number of compute nodes> <hh:mm>"
exit 0
fi
NVME_HOME="/mnt/bb/$USER/"
bin=$CARDINAL_BIN
bin_nekrs=${NEKRS_HOME}/bin/nekrs
case=$1
nodes=$2
time=$3
gpu_per_node=8
cores_per_numa=7
# special rules for OpenMC-only cases
if [ $# -eq 2 ]; then
nodes=$1
time=$2
fi
let nn=$nodes*$gpu_per_node
let ntasks=nn
backend=HIP
if [ ! -f $bin ]; then
echo "Cannot find" $bin
exit 1
fi
if [ $# -ne 2 ]; then
if [ ! -f $case.par ]; then
echo "Cannot find" $case.par
exit 1
fi
if [ ! -f $case.udf ]; then
echo "Cannot find" $case.udf
exit 1
fi
if [ ! -f $case.re2 ]; then
echo "Cannot find" $case.re2
exit 1
fi
fi
striping_unit=16777216
max_striping_factor=400
let striping_factor=$nodes/2
if [ $striping_factor -gt $max_striping_factor ]; then
striping_factor=$max_striping_factor
fi
if [ $striping_factor -lt 1 ]; then
striping_factor=1
fi
MPICH_MPIIO_HINTS="*:cray_cb_write_lock_mode=2:cray_cb_nodes_multiplier=4:striping_unit=${striping_unit}:striping_factor=${striping_factor}:romio_cb_write=enable:romio_ds_write=disable:romio_no_indep_rw=true"
# sbatch
SFILE=s.bin
echo "#!/bin/bash" > $SFILE
echo "#SBATCH -A $PROJ_ID" >>$SFILE
echo "#SBATCH -J cardinal_$case" >>$SFILE
echo "#SBATCH -o %x-%j.out" >>$SFILE
echo "#SBATCH -t $time:00" >>$SFILE
echo "#SBATCH -N $nodes" >>$SFILE
echo "#SBATCH -p $QUEUE" >>$SFILE
echo "#SBATCH -C nvme" >>$SFILE
echo "#SBATCH --exclusive" >>$SFILE
echo "#SBATCH --ntasks-per-node=$gpu_per_node" >>$SFILE
echo "#SBATCH --gpus-per-task=1" >>$SFILE
echo "#SBATCH --gpu-bind=closest" >>$SFILE
echo "#SBATCH --cpus-per-task=$cores_per_numa" >>$SFILE
echo "module load PrgEnv-gnu" >> $SFILE
echo "module load craype-accel-amd-gfx90a" >> $SFILE
echo "module load cray-mpich" >> $SFILE
echo "module load rocm" >> $SFILE
echo "module unload cray-libsci" >> $SFILE
echo "module list" >> $SFILE
echo "rocm-smi" >>$SFILE
echo "rocm-smi --showpids" >>$SFILE
echo "squeue -u \$USER" >>$SFILE
echo "export MPICH_GPU_SUPPORT_ENABLED=1" >>$SFILE
## These must be set before compiling so the executable picks up GTL
echo "export PE_MPICH_GTL_DIR_amd_gfx90a=\"-L${CRAY_MPICH_ROOTDIR}/gtl/lib\"" >> $SFILE
echo "export PE_MPICH_GTL_LIBS_amd_gfx90a=\"-lmpi_gtl_hsa\"" >> $SFILE
#echo "export PMI_MMAP_SYNC_WAIT_TIME=1800" >> $SFILE # avoid timeout by MPI init for large job
echo "ulimit -s unlimited " >>$SFILE
echo "export NEKRS_HOME=$NEKRS_HOME" >>$SFILE
echo "export NEKRS_GPU_MPI=1 " >>$SFILE
echo "export NVME_HOME=$NVME_HOME" >>$SFILE
echo "export MPICH_MPIIO_HINTS=$MPICH_MPIIO_HINTS" >>$SFILE
echo "export MPICH_MPIIO_STATS=1" >>$SFILE
echo "export MPICH_OFI_NIC_POLICY=NUMA" >>$SFILE
echo "export NEKRS_CACHE_BCAST=$NEKRS_CACHE_BCAST" >> $SFILE
echo "if [ \$NEKRS_CACHE_BCAST -eq 1 ]; then" >> $SFILE
echo " export NEKRS_LOCAL_TMP_DIR=\$NVME_HOME" >> $SFILE
echo "fi" >> $SFILE
echo "" >> $SFILE
echo "date" >>$SFILE
echo "" >> $SFILE
bin_nvme=$NVME_HOME"cardinal-bin"
bin_nvme_nekrs=$NVME_HOME"nekrs-bin"
bin_nvme_libs=$bin_nvme"_libs"
bin_nvme_libs_nekrs=$bin_nvme_nekrs"_libs"
echo "sbcast -fp --send-libs $bin $bin_nvme" >> $SFILE
echo "sbcast -fp --send-libs $bin_nekrs $bin_nvme_nekrs" >> $SFILE
echo "if [ ! \"\$?\" == \"0\" ]; then" >> $SFILE
echo " echo \"SBCAST failed!\"" >> $SFILE
echo " exit 1" >> $SFILE
echo "fi" >> $SFILE
echo "export LD_LIBRARY_PATH=$bin_nvme_libs_nekrs:$bin_nvme_libs:${LD_LIBRARY_PATH}" >> $SFILE
echo "export LD_PRELOAD=$bin_nvme_libs/libnekrs.so:$bin_nvme_libs/libocca.so:$bin_nvme_libs/libnekrs-hypre-device.so:$bin_nvme_libs/libnekrs-hypre.so" >> $SFILE
echo "ls -ltra $NVME_HOME" >> $SFILE
echo "ls -ltra $bin_nvme_libs" >> $SFILE
echo "ldd $bin_nvme" >> $SFILE
if [ $NEKRS_SKIP_BUILD_ONLY -eq 0 ]; then
echo "# precompilation" >>$SFILE
echo "srun -N 1 -n 1 $bin_nvme -i $CARDINAL_I --nekrs-setup $case --nekrs-backend $backend --nekrs-device-id 0 --nekrs-build-only $ntasks" >>$SFILE
fi
echo "" >> $SFILE
echo "# actual run" >>$SFILE
echo "srun -N $nodes -n $ntasks $bin_nvme -i $CARDINAL_I --nekrs-setup $case --nekrs-backend $backend --nekrs-device-id 0" >>$SFILE
sbatch $SFILE
(scripts/job_frontier)
Nek5k
Nek5k is a cluster at ANL with 40 nodes, each with 40 cores. We use conda to set up a proper environment on Nek5k for running Cardinal. To use this environment, you will need to follow these steps the first time you use Nek5k:
Log in and run from the command line:
module load openmpi/4.0.1/gcc/8.3.1 cmake openmpi/4.0.1/gcc/8.3.1-hdf5-1.10.6 anaconda3/anaconda3
conda init
Log out, then log back in
Add the following to your ~/.bashrc:
module load openmpi/4.0.1/gcc/8.3.1 cmake openmpi/4.0.1/gcc/8.3.1-hdf5-1.10.6 anaconda3/anaconda3
conda activate
Log out, then log back in
After following these steps, you should not require any further actions to run Cardinal on Nek5k. Your ~/.bashrc should look something like below. The content within the conda initialize section is added automatically by conda when you perform the steps above.
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
module load openmpi/4.0.1/gcc/8.3.1 cmake openmpi/4.0.1/gcc/8.3.1-hdf5-1.10.6 anaconda3/anaconda3
conda activate
# Update for your Cardinal repository location
export NEKRS_HOME=$HOME/cardinal/install
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/shared/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/shared/anaconda3/etc/profile.d/conda.sh" ]; then
. "/shared/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/shared/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
# need to point to a newer CMake version
export PATH=/shared/cmake-3.24.2/bin:$PATH
#!/bin/bash
# Usage:
# 1. Copy to the directory where you have your files
# 2. Update any needed environment variables and input file names in this script
# 3. sbatch job_nek5k
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --time=00:20:00
#SBATCH --output=pincell.log
#SBATCH -p compute
# Revise for your repository and cross section data locations
export OPENMC_CROSS_SECTIONS=$HOME/cross_sections/endfb71_hdf5/cross_sections.xml
export CARDINAL_DIR=$HOME/cardinal
export NEKRS_HOME=$CARDINAL_DIR/install
# The name of the input file you want to run
input_file=nek_master.i
# Run a Cardinal case
mpirun -np 40 $CARDINAL_DIR/cardinal-opt -i $input_file > logfile
(scripts/job_nek5k)
Pinchot
Pinchot is a small Ubuntu server at the University of Illinois hosted in Dr. Novak's lab. While not an HPC system per se, it is included on this page to facilitate development among the Cardinal team. There is no job queue on Pinchot.
module load openmpi/ubuntu/5.0.0
module load hdf5/ubuntu/1.14.3
# change to your Cardinal location (either the shared location in /shared/data,
# or to a location in your home directory)
CARDINAL_DIR=$(realpath -P /shared/data/cardinal)
#CARDINAL_DIR=$(realpath -P /home/ajnovak2/cardinal)
export NEKRS_HOME=$CARDINAL_DIR/install
export LIBMESH_JOBS=80
export MOOSE_JOBS=80
export HDF5_ROOT=/software/HDF5-1.14.3-ubuntu22
# revise for your cross section location
export OPENMC_CROSS_SECTIONS=/shared/data/endfb-vii.1-hdf5/cross_sections.xml
export ENABLE_DAGMC=true
export PATH=${PATH}:${NEKRS_HOME}/bin:${CARDINAL_DIR}
export MOOSE_DIR=$CARDINAL_DIR/contrib/moose
export PYTHONPATH=$MOOSE_DIR/python:${PYTHONPATH}
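Because there is no scheduler, Cardinal runs are launched directly from an interactive shell once the environment above is loaded. A minimal sketch, assuming a hypothetical input file name and rank count:
# Run a Cardinal case directly (no job script or queue on Pinchot)
mpirun -np 8 $CARDINAL_DIR/cardinal-opt -i openmc.i > logfile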
Sawtooth
Sawtooth is an HPC system at Idaho National Laboratory (INL) with 99,792 cores (48 cores per node). The maximum number of cores a single job can use is 80,000, which corresponds to a maximum of 1666 nodes. Every job should use ncpus=48 to make full use of each node requested. The script below requests 1 node (select=1). To maximize performance on the node, always use ncpus = mpiprocs * ompthreads = 48, and make sure that --n-threads in the job launch matches the value of ompthreads.
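As a concrete illustration of this rule, consider a hypothetical resource selection (distinct from the script below):
# 12 MPI ranks/node * 4 OpenMP threads/rank = 48 = ncpus, so each node is fully used
#PBS -l select=2:ncpus=48:mpiprocs=12:ompthreads=4
# ...and the launch must then request the same number of threads per rank:
# mpirun $CARDINAL_DIR/cardinal-opt -i openmc_master.i --n-threads=4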
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
if [ -f ~/.bashrc_local ]; then
. ~/.bashrc_local
fi
module purge
module load use.moose
module load moose-tools
module load openmpi/4.1.6-gcc-12.3.0-panw
module load cmake/3.27.7-gcc-12.3.0-5cfk
# Revise for your repository location
export NEKRS_HOME=$HOME/cardinal/install
export OPENMC_CROSS_SECTIONS=$HOME/cross_sections/endfb-vii.1-hdf5/cross_sections.xml
#!/bin/bash
# Launch your job by entering the following into the terminal:
# qsub job_sawtooth
#PBS -l select=1:ncpus=48:mpiprocs=24:ompthreads=2
#PBS -l walltime=5:00
#PBS -m ae
#PBS -N cardinal
#PBS -j oe
#PBS -P moose
#PBS -V
module purge
module load use.moose
module load moose-tools
module load openmpi/4.1.5_ucx1.14.1
module load cmake/3.27.7-gcc-12.3.0-5cfk
# Revise for your repository location
export CARDINAL_DIR=$HOME/cardinal
export OMP_PROC_BIND=true
# Run an OpenMC case
cd $CARDINAL_DIR/test/tests/neutronics/feedback/lattice
rm logfile
mpirun $CARDINAL_DIR/cardinal-opt -i openmc_master.i --n-threads=2 > logfile
# Run a NekRS case
cd $CARDINAL_DIR/test/tests/cht/sfr_pincell
rm logfile
mpirun $CARDINAL_DIR/cardinal-opt -i nek_master.i > logfile
(scripts/job_sawtooth)
Summit
Summit is an HPC system at ORNL with approximately 4,600 compute nodes, each of which has two IBM POWER9 processors and six NVIDIA Tesla V100 GPUs. Remember that in order to build Cardinal with GPU support, you must set the appropriate variable in the Makefile to true (1):
OCCA_CUDA_ENABLED=1
OCCA_HIP_ENABLED=0
OCCA_OPENCL_ENABLED=0
module purge
module load DefApps-2023
module unload xl/16.1.1-10 lsf-tools/2.0 xalt/1.2.1 hsi/5.0.2.p5
module load gcc/9.1.0
module load python/3.7.0-anaconda3-5.3.0
module load hdf5/1.10.7
module load cmake/3.27.7
module load eigen/3.3.9
module load nsight-compute/2021.2.1
module load nsight-systems/2021.3.1.54
module load cuda/11.0.3
# Revise for your Cardinal repository location
DIRECTORY_WHERE_YOU_HAVE_CARDINAL=$MEMBERWORK/nfi128/summit
cd $DIRECTORY_WHERE_YOU_HAVE_CARDINAL
# This is needed because your home directory on Summit is actually a symlink
HOME_DIRECTORY_SYM_LINK=$(realpath -P $DIRECTORY_WHERE_YOU_HAVE_CARDINAL)
export NEKRS_HOME=$HOME_DIRECTORY_SYM_LINK/cardinal/install
export OCCA_DIR=$NEKRS_HOME
export HDF5_ROOT=/sw/summit/spack-envs/base/opt/linux-rhel8-ppc64le/gcc-9.1.0/hdf5-1.10.7-yxvwkhm4nhgezbl2mwzdruwoaiblt6q2
export HDF5_INCLUDE_DIR=$HDF5_ROOT/include
export HDF5_LIBDIR=$HDF5_ROOT/lib
export OPENMC_CROSS_SECTIONS=$HOME_DIRECTORY_SYM_LINK/cross_sections/endfb-vii.1-hdf5/cross_sections.xml
#!/bin/bash
# Usage:
# 1. Copy to the directory where you have your files
# 2. Update any needed environment variables and input file names in this script
# 3. ./job_summit [casename] <nodes> <hh:mm>
# where [casename] is the NekRS case (used for precompiling). If you are not using
# NekRS, omit this field.
# <nodes> is the number of nodes you want to use
# <hh:mm> is the maximum wall time you want
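# Hypothetical usage examples (case name, node count, and wall time are placeholders):
#   ./job_summit fluid 4 00:30            # GPU run of a NekRS case named "fluid"
#   CPUONLY=1 ./job_summit fluid 4 00:30  # CPU-only run of the same case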
# additional variables you may want to change
: ${PROJ_ID:=""}
: ${CARDINAL_I:=nek_master.i}
: ${CARDINAL_DIR:=/gpfs/alpine/cfd151/scratch/novak/cardinal}
: ${NEKRS_HOME:="$CARDINAL_DIR/install"}
: ${OCCA_CACHE_DIR:="$PWD/.cache/occa"}
: ${CARDINAL_BIN:="$CARDINAL_DIR/cardinal-opt"}
: ${OPENMC_CROSS_SECTIONS:=/autofs/nccs-svm1_home1/novak/cross_sections/endfb71_hdf5/cross_sections.xml}
# =============================================================================
# Adapted from nrsqsub_summit
# =============================================================================
if [ -z "$PROJ_ID" ]; then
echo "ERROR: PROJ_ID is empty"
exit 1
fi
if [ $# -lt 2 ] || [ $# -gt 3 ]; then
echo "usage: [PROJ_ID] [CPUONLY=1] $0 [casename] <number of compute nodes> <hh:mm>"
exit 0
fi
NVME_HOME="/mnt/bb/$USER/"
XL_HOME="/sw/summit/xl/16.1.1-3/xlC/16.1.1"
: ${CPUONLY:=0}
export NEKRS_HOME
export OPENMC_CROSS_SECTIONS
export OCCA_CACHE_DIR
export NEKRS_HYPRE_NUM_THREADS=1
export OGS_MPI_SUPPORT=1
export OCCA_CXX=
export OCCA_CXXFLAGS="-O3 -qarch=pwr9 -qhot -DUSE_OCCA_MEM_BYTE_ALIGN=64"
export OCCA_LDFLAGS="$XL_HOME/lib/libibmc++.a"
#export OCCA_VERBOSE=1
#export OMPI_LD_PRELOAD_POSTPEND=$OLCF_SPECTRUM_MPI_ROOT/lib/libmpitrace.so
#export PAMI_ENABLE_STRIPING=1
#export PAMI_IBV_ADAPTER_AFFINITY=1
#export PAMI_IBV_DEVICE_NAME="mlx5_0:1,mlx5_3:1"
#export PAMI_IBV_DEVICE_NAME_1="mlx5_3:1,mlx5_0:1"
# work-around for barrier issue
export OMPI_MCA_coll_ibm_collselect_mode_barrier=failsafe
bin=$CARDINAL_BIN
case=$1
nodes=$2
time=$3
gpu_per_node=6
cores_per_socket=21
# special rules for OpenMC-only cases
if [ $# -eq 2 ]; then
nodes=$1
time=$2
fi
let nn=$nodes*$gpu_per_node
let ntasks=nn
backend=CUDA
if [ $CPUONLY -eq 1 ]; then
backend=CPU
let nn=2*$nodes
let ntasks=$nn*$cores_per_socket
fi
if [ ! -f $bin ]; then
echo "Cannot find" $bin
exit 1
fi
if [ $# -ne 2 ]; then
if [ ! -f $case.par ]; then
echo "Cannot find" $case.par
exit 1
fi
if [ ! -f $case.udf ]; then
echo "Cannot find" $case.udf
exit 1
fi
if [ ! -f $case.oudf ]; then
echo "Cannot find" $case.oudf
exit 1
fi
if [ ! -f $case.re2 ]; then
echo "Cannot find" $case.re2
exit 1
fi
fi
mkdir -p $OCCA_CACHE_DIR 2>/dev/null
if [ $# -eq 3 ]; then
while true; do
read -p "Do you want precompile? [Y/N]" yn
case $yn in
[Yy]* )
echo $NEKRS_HOME
mpirun -pami_noib -np 1 $NEKRS_HOME/bin/nekrs --setup $case --build-only $ntasks --backend $backend;
if [ $? -ne 0 ]; then
exit 1
fi
break ;;
* )
break ;;
esac
done
fi
if [ $CPUONLY -eq 1 ]; then
jsrun="jsrun -X 1 -n$nodes -r1 -a1 -c1 -g0 -b packed:1 -d packed cp -a $OCCA_CACHE_DIR/* $NVME_HOME; export OCCA_CACHE_DIR=$NVME_HOME; jsrun -X 1 -n$nn -a$cores_per_socket -c$cores_per_socket -g0 -b packed:1 -d packed $bin -i $CARDINAL_I"
else
jsrun="jsrun -X 1 -n$nodes -r1 -a1 -c1 -g0 -b packed:1 -d packed cp -a $OCCA_CACHE_DIR/* $NVME_HOME; export OCCA_CACHE_DIR=$NVME_HOME; jsrun --smpiargs='-gpu' -X 1 -n$nn -r$gpu_per_node -a1 -c2 -g1 -b rs -d packed $bin -i $CARDINAL_I --nekrs-backend $backend --nekrs-device-id 0"
fi
cmd="bsub -nnodes $nodes -alloc_flags NVME -W $time -P $PROJ_ID -J cardinal_$case \"${jsrun}\""
echo $cmd
$cmd
(scripts/job_summit)
Eddy
Eddy is a cluster at ANL with eleven 32-core nodes, five 40-core nodes, and six 80-core nodes.
module purge
module load moose-dev-gcc
module load cmake/3.25.0
export HDF5_ROOT=/opt/moose/seacas
export HDF5_INCLUDE_DIR=$HDF5_ROOT/include
export HDF5_LIBDIR=$HDF5_ROOT/lib
# Revise for your Cardinal repository location
DIRECTORY_WHERE_YOU_HAVE_CARDINAL=$HOME
# This is needed because your home directory on Eddy is actually a symlink
HOME_DIRECTORY_SYM_LINK=$(realpath -P $DIRECTORY_WHERE_YOU_HAVE_CARDINAL)
export NEKRS_HOME=$HOME_DIRECTORY_SYM_LINK/cardinal/install
export OPENMC_CROSS_SECTIONS=$HOME_DIRECTORY_SYM_LINK/cross_sections/endfb-vii.1-hdf5/cross_sections.xml
(scripts/job_eddy) Sample job script for Eddy with the 32-core partition
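A hypothetical sketch of such a job script, patterned on the Bebop and Nek5k scripts above; the scheduler directives, node/core selection, input file, and paths are all placeholder assumptions and should be revised for your case:
#!/bin/bash
# Hypothetical Eddy job script for one 32-core node
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32
#SBATCH --time=00:30:00
#SBATCH --output=run.out
# Revise for your repository and cross section data locations
export CARDINAL_DIR=$HOME/cardinal
export NEKRS_HOME=$CARDINAL_DIR/install
export OPENMC_CROSS_SECTIONS=$HOME/cross_sections/endfb-vii.1-hdf5/cross_sections.xml
# The name of the input file you want to run
input_file=openmc.i
# Run a Cardinal case
mpirun -np 32 $CARDINAL_DIR/cardinal-opt -i $input_file > logfile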