Nektar++ on the Mira Cluster

Mira is a Blue Gene/Q supercomputer run by Argonne National Laboratory. As of 2016, it is ranked as the fifth-fastest supercomputer in the world. If you are interested in using Nektar++ on Mira, please read on.

Users of Mira often have access to Cetus, a smaller cluster with the same architecture as Mira. Cetus is intended for testing your solver on smaller problems before launching large-scale simulations on Mira. The instructions below apply to both Cetus and Mira.

The process of installing Nektar++ on Mira mostly follows the standard procedure described in Section 1.3 (‘Installing from Source’) of the Nektar++ user guide, with two minor interventions in the CMake build system.

For the purpose of this post, we will suppose that the directory containing the Nektar++ source code is $NEKTAR_SRC and that we are building Nektar++ in $NEKTAR_BUILD. For example, if your user name is john, then you could set $NEKTAR_SRC and $NEKTAR_BUILD to /home/john/nektar++ and /home/john/nektar_build, respectively. Note that Nektar++ itself does not require the environment variables $NEKTAR_SRC and $NEKTAR_BUILD to exist; they are only used here to simplify the explanation below.
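
With the example paths above, setting these variables amounts to:

export NEKTAR_SRC=/home/john/nektar++
export NEKTAR_BUILD=/home/john/nektar_build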

Compilation

Log in to Mira or Cetus (both machines share their home directory file system, so code compiled on one machine is accessible and runs correctly on the other as well). Software and libraries, such as the gcc compiler, can be made available at login. Mira uses the SoftEnv system to set environment variables that enable users to access the required tools. This is configured in a ‘$HOME/.soft’ configuration file that is automatically created upon first login. Most likely, you will only need the default toolchain, and your ‘~/.soft’ file will therefore contain a single line with ‘@default’. For more details, see the instructions for new Mira users. Note that after every change to the ‘~/.soft’ file, you have to use the ‘resoft’ command to reload your configuration.
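
For reference, a minimal setup looks like this:

# Contents of a minimal ~/.soft file:
@default

# After editing ~/.soft, reload the configuration:
resoft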

If you use sources downloaded as an archive file from the Nektar++ website, first extract the sources and prepare the build directory:

tar -zxvf nektar++-X.X.X.tar.gz
export NEKTAR_SRC=nektar++-X.X.X
export NEKTAR_BUILD=/home/john/nektar_build
mkdir $NEKTAR_BUILD

Alternatively, download the source code from the GitLab repository:

git clone http://gitlab.nektar.info/clone/nektar/nektar.git $HOME/nektar_src
export NEKTAR_SRC=$HOME/nektar_src
export NEKTAR_BUILD=/home/john/nektar_build
mkdir $NEKTAR_BUILD

Next, explicitly tell the Nektar++ build system to use the “-dynamic” flag when linking shared libraries. Without this modification, Nektar++ compiles, but does not run correctly: executables such as the incompressible Navier-Stokes solver fail to find and load the libraries they depend on.
Open the top-level CMakeLists file (i.e. $NEKTAR_SRC/CMakeLists.txt) in a text editor and add the following two lines at the beginning of the file (for example, on line 10, just after the SET_PROPERTY(GLOBAL PROPERTY USE_FOLDERS ON) command):

SET(CMAKE_SHARED_LIBRARY_LINK_C_FLAGS "-dynamic")
SET(CMAKE_SHARED_LIBRARY_LINK_CXX_FLAGS "-dynamic")

After this change, start compiling Nektar++:

cd $NEKTAR_BUILD
CC=mpicc CXX=mpic++ cmake -DNEKTAR_USE_MPI=ON [ ... other cmake flags ... ] $NEKTAR_SRC
make -j4

The compilation should start, but will eventually fail during compilation of zlib (one of the third-party libraries required by Nektar++). At this stage, it is necessary to add the same two lines with the ‘-dynamic’ linker flag to the CMake file located in ThirdParty/zlib-1.2.7/CMakeLists.txt.
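
These are the same two lines as before:

SET(CMAKE_SHARED_LIBRARY_LINK_C_FLAGS "-dynamic")
SET(CMAKE_SHARED_LIBRARY_LINK_CXX_FLAGS "-dynamic")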

Move to $NEKTAR_BUILD and type ‘make’ again. This time, the compilation should terminate successfully.
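
Concretely:

cd $NEKTAR_BUILD
make -j4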

Running simulations

Jobs are submitted with the qsub command. You need to specify the job duration, the number of cluster nodes, and how many cores per node should be used. For example, the command

qsub -t 60 -n 512 --proccount 8192 --mode c16 /home/john/nektar_build/solvers/ADRSolver/ADRSolver -v --shared-filesystem my_simulation.xml

would launch your job on 512 nodes with a maximum run time of 60 minutes. Since each node has 16 cores, the total core count will be 8192 when all cores of each node are used, which is specified by the ‘--mode c16’ switch. Valid values for the number of ranks per node are c1, c2, c4, c8, c16, c32, and c64 (each core supports up to four hardware threads, hence up to 64 ranks per node). When the number of nodes (-n) and the number of ranks per node (--mode) are specified, you can omit ‘--proccount’. For massively parallel simulations, consider partitioning your mesh prior to the simulation; in that case, you won’t need the ‘--shared-filesystem’ flag (see the sketch below).
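
For example, the first command below is equivalent to the one above; the second pair of lines is only a sketch of the pre-partitioning approach, which assumes the ‘--part-only’ option of Nektar++ solvers is available in your version (check the user guide):

# Equivalent launch without --proccount (512 nodes x 16 ranks per node = 8192 ranks):
qsub -t 60 -n 512 --mode c16 /home/john/nektar_build/solvers/ADRSolver/ADRSolver -v --shared-filesystem my_simulation.xml

# Sketch: partition the mesh into 8192 parts in a short serial job first
# (assumes your Nektar++ version provides --part-only); the large job can then
# be launched as above, without the --shared-filesystem flag.
qsub -t 10 -n 1 --proccount 1 /home/john/nektar_build/solvers/ADRSolver/ADRSolver my_simulation.xml --part-only 8192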

Post-processing

One peculiarity of Mira and Cetus is that the PowerPC A2 processor employs big-endian byte ordering. When your simulation finishes, convert the output file into a format suitable for visualization directly on Cetus. Should you fail to do that, you are likely to experience problems during post-processing.

Suppose you instead copy the output ‘*.fld’ file to your laptop and convert it to the VTK file format afterwards. You will end up with output containing nonphysical values: your little-endian CPU (most of today’s laptops and desktops) will interpret the bytes of each value saved in the ‘*.fld’ file incorrectly, thus producing corrupted data.
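
As a small illustration of the problem (a sketch you can run on any little-endian machine; ‘od’ interprets bytes in the host byte order):

# The big-endian byte sequence of the double 1.0 (0x3FF0000000000000)...
printf '\x3f\xf0\x00\x00\x00\x00\x00\x00' | od -A n -t f8
# ...prints as roughly 3.04e-319 on a little-endian machine instead of 1.0.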

This issue with non-portable output files should be resolved in future versions of Nektar++, but it is currently up to the user to post-process output data on the same architecture on which it was obtained.

To convert ‘my_output.fld’ to ‘my_output.vtu’ directly on Cetus, launch a single-node job as follows:

qsub -t 60 -n 1 --proccount 1 /home/john/nektar_build/utilities/FieldConvert/FieldConvert my_simulation.xml my_output.fld my_output.vtu