Profiling using Solaris Studio

Oracle Solaris Studio is a proprietary development suite that includes compilers and analysis tools. It is available from Oracle for free download under a fairly non-restrictive licence agreement and can be used locally on Linux machines with Java installed.

On the internal Nektar++ compute nodes it is made available by running

module load dev-studio
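To check that the tools have been picked up, one can for example run

which collect analyzer

which should print the paths to the two executables.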

This page describes how to use the Performance Analyzer tool (part of Oracle Solaris Studio) for profiling Nektar++ code.

Preparations

The Nektar++ code needs to be compiled on the compute node in RelWithDebInfo mode (using Debug mode is strongly discouraged). This makes the code run almost as fast as a Release build, since all ASSERTL1 and ASSERTL2 checks are skipped, while still supplying the executable with the function symbols needed for profiling.
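For example, assuming a standard out-of-source CMake build of Nektar++ (the directory layout below is illustrative), the build type can be selected with

cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo ../nektar++
make -j 4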

There is no need to compile Nektar++ with the C++ compiler supplied with Oracle Solaris Studio; using gcc works perfectly well (whether clang or the Intel suite also work has not been tested).

Profiling

Profiling is split into two stages. In the first stage, the target executable (e.g. IncNavierStokesSolver) is run with a geometry XML file via a thin command-line wrapper or via the GUI tool; this creates a sub-directory containing binary experiment data. In the second stage, the experiment data is visualised using the GUI tool. Since this requires X11 forwarding, if you are working on a remote server establish the ssh connection using

ssh -CX ...

Using the command-line utility is advantageous since it allows you to submit the job (for example inside a screen session) and disconnect while it runs.
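For example, assuming GNU screen is available (the session name below is arbitrary), one could start a session with

screen -S profiling

run the collect command described below inside it, detach with Ctrl-a d, and later re-attach with screen -r profiling.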

Collecting profile data

The full manual for the collect command-line utility can be found in the Oracle Solaris Studio documentation.

For a sequential run of IncNavierStokesSolver with default profiling configuration, run

collect -o profile-data-directory-name.er ./IncNavierStokesSolver-pg geometry.xml

For an MPI parallel profiling, do

collect -o profile-data-directory-name-mpi.er -m on -M OPENMPI mpirun -np 2 -- ./IncNavierStokesSolver-pg geometry.xml

Here -m on enables tracing of MPI calls and -M OPENMPI specifies which MPI implementation is installed on the system. Note the “--” separator between mpirun -np 2 and ./IncNavierStokesSolver-pg; it is a required part of the collect syntax.

It is good practice to give profile sub-directories meaningful names that describe the experiment, since the profiling data stores neither the command-line arguments of the collect run nor the program output.
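For example (the experiment name below is purely illustrative):

collect -o cylinder-re100-serial.er ./IncNavierStokesSolver-pg geometry.xml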

Visualisation

To open a collected experiment in the Performance Analyzer GUI, run

analyzer profile-data-directory-name.er
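The MPI experiment collected above can be opened in the same way:

analyzer profile-data-directory-name-mpi.er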