Nektar++ on ARCHER

The ARCHER national supercomputer is intended for very large-scale computations and is the successor to HECToR. These instructions are preliminary; an up-to-date checkout of the master branch is required, since support for the ARCHER compilation environment was only added recently.

Compilation instructions

ARCHER, like CX1 and CX2, uses the module system to load various system modules. We choose the GNU compiler suite, both for Boost compatibility and to avoid compilation issues with the Cray suite. Additionally, CRAYPE_LINK_TYPE is set to dynamic so that shared libraries can be built.

export CRAYPE_LINK_TYPE=dynamic
module swap PrgEnv-cray PrgEnv-gnu
module load cmake git fftw cray-hdf5-parallel

These commands should be put in your ~/.profile file to avoid retyping them each session. To clone the repository, first create a public/private key-pair and add it to GitLab. You must currently compile from your work directory, located at e.g. /work/e01/e01/username, where e01 is your project code. Then clone the repository as usual:

cd /work/e01/e01/dmoxey
git clone <repository-url> nektar++
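The key-pair mentioned above can be generated on ARCHER as follows (a minimal sketch; the key filename and comment are arbitrary examples, not required names):

```shell
# Create ~/.ssh if it does not already exist
mkdir -p ~/.ssh
# Generate an RSA key-pair with no passphrase; the filename is an example
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_nektar -C "archer-nektar"
# Print the public key, which can then be pasted into your GitLab profile
cat ~/.ssh/id_nektar.pub
```

The contents of the .pub file are what you paste into the SSH keys section of your GitLab account settings.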

We then create a builds directory within the nektar++ directory created by the clone, and from within this new directory run cmake with a few extra options. Note the use of CC and CXX to select the ARCHER-specific compiler wrappers.

cd nektar++ && mkdir builds && cd builds
CC=cc CXX=CC cmake -DNEKTAR_USE_SYSTEM_BLAS_LAPACK=OFF -DNEKTAR_USE_HDF5=ON ..

Notes for the above:

  • cc and CC are the C and C++ compiler wrappers for the Cray utilities and are determined by the PrgEnv module.
  • SYSTEM_BLAS_LAPACK is disabled since, by default, we can use the libsci package, which contains optimised versions of BLAS and LAPACK and requires no additional arguments to cc.
  • HDF5 is a better output option on ARCHER, since runs often exceed the file-count limit of the quota. Setting this option from within ccmake has led to problems, however, so make sure to specify it on the cmake command line as above.
  • We are currently not using the system Boost, since it does not appear to be built with C++11 support, which causes compilation errors.
    • Currently we have also observed that you may need to change the line SET(BOOST_FLAGS cxxflags=-fPIC cflags=-fPIC linkflags=-fPIC) in cmake/ThirdPartyBoost.cmake to SET(BOOST_FLAGS cxxflags=-std=c++11 cflags=-fPIC linkflags=-std=c++11)
  • If your version of Nektar++ uses Boost 1.71.0 (or possibly higher), compiling with make -j 4 or even make -j 1 may fail: Boost's build uses a lot of resources during installation, and compilation may terminate with the error message "resource temporarily unavailable". This problem can be solved by either of the following approaches:
    • In cmake/ThirdPartyBoost.cmake, change BUILD_COMMAND NO_BZIP2=1 ./b2 to BUILD_COMMAND NO_BZIP2=1 ./b2 -j 4
    • Compile Nektar++ by submitting a job to the compute nodes.
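For the second approach, a minimal job script for building on the compute nodes might look like the following (a sketch only; the walltime, budget code and build path are placeholders you must adapt):

```shell
#!/bin/bash --login
#PBS -l walltime=01:00:00
#PBS -l select=1
#PBS -A budget_id

# Move to the build directory created earlier (path is an example)
cd /work/e01/e01/username/nektar++/builds

# Build on the compute node, where more resources are available
make -j 4 install
```

Submit this with qsub in the usual way; the Boost build then runs with the resources of a full compute node rather than the shared login node.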

At this point you can run ccmake .. to, for example, disable unnecessary solvers. Now run make as usual to compile the code:

make -j4 install

Do not try to run regression tests: the binaries at this point are cross-compiled for the compute nodes and will not execute properly on the login nodes.

Running your job

Jobs are submitted to the queue using PBS scripts. The /home directory does not appear to be mounted across the compute nodes, so jobs should be run from the $WORK directory. Here is an example PBS script which runs on two 24-core nodes for a total of 48 cores. Make sure to change budget_id to your budget ID code, and adjust -n 48 if you change the number of nodes the code runs on. Solver output is directed to a file called solver_out.txt, which can be monitored from the login node.

#!/bin/bash --login
#PBS -l walltime=01:00:00
#PBS -l select=2
#PBS -A budget_id

# Make sure any symbolic links are resolved to absolute path
export PBS_O_WORKDIR=$(readlink -f $PBS_O_WORKDIR)

# Change to the directory that the job was submitted from
cd $PBS_O_WORKDIR
# Set the number of threads to 1
# This prevents any system libraries from automatically
# using threading.
export OMP_NUM_THREADS=1

# Launch the parallel job: solver output will go to solver_out.txt
# so that it can be monitored.
aprun -n 48 $NEKPP_DIR/builds/dist/bin/ADRSolver -i Hdf5 -v diffusion3d.xml intercostal-tet.xml | tee solver_out.txt
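Assuming the script above is saved as, say, run_nektar.pbs (the filename is arbitrary), it can be submitted and monitored from the login node as follows:

```shell
# Submit the job to the queue
qsub run_nektar.pbs
# Check the status of your queued and running jobs
qstat -u $USER
# Follow the solver output as it is produced
tail -f solver_out.txt
```

The job ID printed by qsub can also be passed to qdel if the run needs to be cancelled.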