Nektar++ on the UCL Thomas cluster

Basic info about Thomas

Thomas is the UK National Tier 2 High Performance Computing Hub in Materials and Molecular Modelling.

Job submission is done using qsub. Jobscripts must begin with #!/bin/bash -l in order to run as a login shell and get your login environment and modules.
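For example, a minimal jobscript (the wall-clock time and job name below are placeholders, not recommendations) might look like:

#!/bin/bash -l
# '-l' requests a login shell, so your usual environment and modules are available
#$ -l h_rt=0:10:00
#$ -N test_job

echo "Running on $(hostname)"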

On Thomas, users do not submit directly to queues – the scheduler assigns your job to one based on the resources it requested. The queues have somewhat unorthodox names as they are only used internally, but this is what they mean:

  • Jerry: single-node job
  • Tom: multi-node job
  • Spike: cross-CU job, using superqueue (any multi-node job may end up using this)

The target job sizes for Thomas are 48-120 cores (2-5 nodes). Jobs larger than this may have a longer queue time.

For more info and how to get an account:

Access to the cluster

After creating your account, you can access Thomas by:

ssh your_user_name@thomas.rc.ucl.ac.uk

Note regarding folder quotas:

~/

  • is backed up
  • has a 50GB quota limit
  • is only writable via login nodes

~/Scratch

  • is NOT backed up
  • should be considered “at risk”
  • has a 200GB quota limit (by default)
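To see how much of these quotas you are using, something like the following can be run from a login node (lquota is the quota tool provided on UCL clusters; treat its availability on Thomas as an assumption):

# Summarise disk usage of home and Scratch
du -sh ~ ~/Scratch

# UCL-provided quota tool (assumed available on Thomas)
lquota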

Compilation instructions

The code must be compiled directly on a cluster login node.

To compile and use Nektar++, load the following modules:

module load python/2.7.12
module load boost/1_63_0/mpi/intel-2017-update1
module load fftw/3.3.4-impi/intel-2017-update1
module load mpi/intel/2017/update1/intel
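To confirm the environment is set up before building, you can list the loaded modules and check that cmake is on your path:

module list    # the four modules above should appear in the list
which cmake    # cmake is required for the configuration step below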

Assuming Nektar++ is in your $HOME/nektar++ directory, follow these instructions to compile the code:

Configure the Nektar++ build directory

cd $HOME/nektar++
mkdir build
cd build

Run the command:

cmake -DNEKTAR_USE_MPI:BOOL=ON \
-DNEKTAR_USE_MKL:BOOL=ON \
-DNEKTAR_USE_FFTW:BOOL=ON \
-DCMAKE_C_FLAGS:STRING="-O3 -xSSE4.2 -axAVX,CORE-AVX-I,CORE-AVX2" \
-DCMAKE_CXX_FLAGS:STRING="-O3 -xSSE4.2 -axAVX,CORE-AVX-I,CORE-AVX2" \
../

Type

make install
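Compilation can take some time; as a sketch, the build can be parallelised and the result checked afterwards (by default Nektar++ installs into the dist subdirectory of the build directory):

make -j 4 install                   # build with 4 parallel jobs, then install
ls $HOME/nektar++/build/dist/bin    # the solver binaries, e.g. IncNavierStokesSolver, should be here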

Running jobs on Thomas

To execute a Nektar++ solver in parallel, create a job submission script and use GERun to launch the solver through MPI.

To submit a job, use qsub:

qsub your_job_file.pbs
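Once submitted, the job can be monitored with the standard Grid Engine commands (the job ID below is a placeholder):

qstat          # list your queued and running jobs
qdel 123456    # remove a job by its ID, if needed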

The submission script for an Incompressible Navier-Stokes MPI job should contain the following information:

  1. Set the hard wall-clock time desired;
  2. Request the amount of RAM required;
  3. Request the amount of storage required on the TMPDIR drive;
  4. Set the job’s name;
  5. Select the MPI parallel environment and the number of processes. Note: each node has 24 cores, so for best performance select a multiple of 24;
  6. Set the working directory to somewhere in your Scratch folder. This is a necessary step with the upgraded software stack, as compute nodes cannot write to $HOME;
  7. Load the modules needed for the simulation. In this case, load: python/2.7.12, boost/1_63_0/mpi/intel-2017-update1, fftw/3.3.4-impi/intel-2017-update1 and mpi/intel/2017/update1/intel;
  8. Run the Nektar++ Incompressible Navier-Stokes solver using GERun, a wrapper that launches MPI jobs on the Thomas cluster.

The submission script should look like this:

#!/bin/bash -l
#$ -S /bin/bash
#$ -l h_rt=24:00:00
#$ -l mem=24G
#$ -l tmpfs=15G
#$ -N your_job_name
#$ -pe mpi 48
#$ -wd /home/your_UCL_id/Scratch/

module load python/2.7.12 boost/1_63_0/mpi/intel-2017-update1 fftw/3.3.4-impi/intel-2017-update1 mpi/intel/2017/update1/intel

gerun $HOME/nektar++/build/dist/bin/IncNavierStokesSolver your_case.xml > your_case.txt
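Since the script sets the working directory to your Scratch folder, the redirected solver output ends up there; a quick way to follow a running case (file name matching the placeholder above):

cd ~/Scratch
tail -f your_case.txt    # follow the solver's standard output as the job runs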

Some institutes have an exclusive queue. To check whether you have access to one, type budgets while logged in to the Thomas cluster. To add an exclusive queue to the script, add the following lines before the wall-clock time request:

#$ -P Gold
#$ -A YourExclusiveQueue
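For example, the top of the submission script would then begin like this (YourExclusiveQueue stands for the budget name reported by the budgets command):

#!/bin/bash -l
#$ -S /bin/bash
#$ -P Gold
#$ -A YourExclusiveQueue
#$ -l h_rt=24:00:00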

For other job types, please check: