# Specfem3D Globe -- Benchmark readme

Note that this guide is still a work in progress and is currently being reviewed.

## Get the source

Clone the repository in a location of your choice, let's say `$HOME`.

```shell
cd $HOME
git clone https://github.com/geodynamics/specfem3d_globe.git
```
Then use a fixed, stable version of specfem3d_globe (for example the commit of October 31, 2017;
see https://github.com/geodynamics/specfem3d_globe/commits/master):
```shell
cd $HOME/specfem3d_globe
git checkout b1d6ba966496f269611eff8c2cf1f22bcdac2bd9
```
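To confirm the checkout, you can print the current commit hash (it should match the one above):

```shell
# Print the hash of the commit currently checked out
git log -1 --format=%H
```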
If you have not already done so, clone the ueabs repository.
```shell
cd $HOME
git clone https://repository.prace-ri.eu/git/UEABS/ueabs.git
```
In the specfem3D folder of this repository you will find the test cases in the test_cases folder,
as well as environment and submission script templates for several machines.

## Load the environment

You will need a Fortran compiler, a C compiler and an MPI library.
The following variables are relevant when compiling the code:

 - `LANG=C`
 - `FC`
 - `MPIFC`
 - `CC`
 - `MPICC`

To compile with CUDA and run on GPUs, you will also need to load the CUDA environment
and set the two following variables:

 - `CUDA_LIB`
 - `CUDA_INC`

An example (compiling for GPUs) on the Ouessant cluster at IDRIS, France:

```shell
export LANG=C

module purge
module load pgi cuda ompi

export FC=`which pgfortran`
export MPIFC=`which mpif90`
export CC=`which pgcc`
export MPICC=`which mpicc`
export CUDA_LIB="$CUDAROOT/lib64"
export CUDA_INC="$CUDAROOT/include"
```
Once again, in the specfem3D folder of this repository you will find a folder named env,
containing files named env_x which give examples of the environments used on several
supercomputers during the last benchmark campaign.
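For example, loading such a file might look like this (a sketch; the env folder layout is described above, but the file name env_mymachine is hypothetical):

```shell
# Hypothetical: source the environment file matching your machine
source $HOME/ueabs/specfem3D/env/env_mymachine
```
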
## Compile specfem

As arrays are statically declared, you will need to compile specfem once for each
test case, with the right `Par_file`.
On some environments, depending on the MPI configuration, you will need to replace
the `use mpi` statement with `include 'mpif.h'`; use the script and procedure commented
in the block below.

First you will have to configure.

**On GPU platforms** you will have to add the following arguments to the
configure line: `--build=ppc64 --with-cuda=cuda5` (a sketch follows the block below).

```shell
cp -r $HOME/specfem3d_globe specfem_compil_${test_case_id}
cp $HOME/bench_spec/test_case_${test_case_id}/DATA/Par_file specfem_compil_${test_case_id}/DATA/

cd specfem_compil_${test_case_id}

### replace `use mpi` if needed ###
# cd utils
# perl replace_use_mpi_with_include_mpif_dot_h.pl
# cd ..
####################################

./configure --prefix=$PWD
```
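On a GPU platform, the configure line would instead include the arguments mentioned above, for example (a sketch; the `cuda5` value depends on the CUDA toolkit you target):

```shell
# GPU platform: configure with the CUDA-specific arguments
./configure --prefix=$PWD --build=ppc64 --with-cuda=cuda5
```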

**On Xeon Phi**, since support is recent, you should replace the values of the following
variables in the generated Makefile:

```Makefile
FCFLAGS = -g -O3 -qopenmp -xMIC-AVX512 -DUSE_FP32 -DOPT_STREAMS -align array64byte  -fp-model fast=2 -traceback -mcmodel=large
FCFLAGS_f90 = -mod ./obj -I./obj -I.  -I. -I${SETUP} -xMIC-AVX512
CPPFLAGS = -I${SETUP}  -DFORCE_VECTORIZATION  -xMIC-AVX512
```
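One way to apply such a change non-interactively is with `sed` (a sketch for the first variable; the replacement string is the value given above):

```shell
# Overwrite the FCFLAGS line of the generated Makefile in place
sed -i 's/^FCFLAGS = .*/FCFLAGS = -g -O3 -qopenmp -xMIC-AVX512 -DUSE_FP32 -DOPT_STREAMS -align array64byte -fp-model fast=2 -traceback -mcmodel=large/' Makefile
```
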
Note: be careful, on most machines the login nodes do not have the same instruction set as the compute nodes, so in order to compile with the right instruction set you will have to compile on a compute node (`salloc` + `ssh`).
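On a Slurm machine this could look like the following (a sketch; the partition name `knl` and the time limit are assumptions):

```shell
# Allocate a compute node and open a shell on it, then compile there
salloc -N 1 -p knl -t 01:00:00
ssh $SLURM_NODELIST     # with a single-node allocation this is the node name
cd $HOME/specfem_compil_${test_case_id}
# ...now run the make step below on this node
```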


Finally compile with make:
```shell
make clean
make all
```
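If the build succeeds, the mesher and solver binaries should be present in `bin/` (a quick sanity check; these are the standard specfem3d_globe target names):

```shell
# Both executables must exist before submitting a job
ls bin/xmeshfem3D bin/xspecfem3D
```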

**-> In the specfem folder of the ueabs repository you will find the file `compile.sh`, a compilation script template for several machines (different architectures: KNL, SKL, Haswell and GPU).**

## Launch specfem

You can use, or take inspiration from, the submission script templates in the job_script folder, using the appropriate job submission command (a minimal Slurm sketch follows this list):
- `qsub` for PBS jobs,
- `sbatch` for Slurm jobs,
- `ccc_msub` for Irene jobs (wrapper),
- `llsubmit` for LoadLeveler jobs.
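Here is a minimal Slurm sketch (the job name and sizing are assumptions; the total number of MPI ranks must match the decomposition set in `Par_file`):

```shell
#!/bin/bash
#SBATCH --job-name=specfem_test_case_A   # hypothetical job name
#SBATCH --nodes=4                        # sizing is an assumption
#SBATCH --ntasks=96                      # must match NPROC_XI * NPROC_ETA * NCHUNKS
#SBATCH --time=01:00:00

cd $HOME/specfem_compil_${test_case_id}

srun ./bin/xmeshfem3D    # run the mesher first
srun ./bin/xspecfem3D    # then the solver: this is the timed step
```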


## Gather results

The relevant metric for this benchmark is the solver time. Using Slurm, it is
easy to gather, as each `mpirun` or `srun` call is interpreted as a job step which is
already timed, so the command `sacct -j <job_id>` lets you retrieve the metric.
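For example (standard Slurm accounting fields):

```shell
# Show per-step elapsed times for the job
sacct -j <job_id> --format=JobID,JobName,Elapsed,State
```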

Alternatively, you can find more precise timing information at the end of the output file `specfem3d_globe/OUTPUT_FILES/output_solver.txt`.
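A quick way to extract that information (a sketch; the exact wording of the timing lines may vary between versions):

```shell
# Pull the timing summary from the solver output
grep -i "elapsed time" specfem3d_globe/OUTPUT_FILES/output_solver.txt
```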