# SPECFEM3D Globe -- Bench readme

## Get the source

Clone the repository into a location of your choice, say `$HOME`:

```shell
cd $HOME
git clone https://github.com/geodynamics/specfem3d_globe.git
```
Then check out a fixed, stable version of specfem3d_globe (for example the commit of October 31, 2017;
see https://github.com/geodynamics/specfem3d_globe/commits/master):
```shell
cd $HOME/specfem3d_globe
git checkout b1d6ba966496f269611eff8c2cf1f22bcdac2bd9
```
If you have not already done so, clone the ueabs repository:
```shell
cd $HOME
git clone https://repository.prace-ri.eu/git/UEABS/ueabs.git
```
In the specfem3d folder of this repository you will find the test cases in the test_cases subfolder,
as well as environment and submission script templates for several machines.

## Load the environment

You will need a Fortran compiler, a C compiler and an MPI library.
The following variables are relevant to compile the code:

 - `LANG=C`
 - `FC`
 - `MPIFC`
 - `CC`
 - `MPICC`

If you compile with CUDA to run on GPUs, you will also need to load the CUDA environment
and set the two following variables:

 - `CUDA_LIB`
 - `CUDA_INC`

An example (compiling for GPUs) on the Ouessant cluster at IDRIS, France:

```shell
LANG=C

module purge
module load pgi cuda ompi

export FC=`which pgfortran`
export MPIFC=`which mpif90`
export CC=`which pgcc`
export MPICC=`which mpicc`
export CUDA_LIB="$CUDAROOT/lib64"
export CUDA_INC="$CUDAROOT/include"
```
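For CPU-only platforms the same variables apply without the CUDA ones. A minimal sketch with Intel compilers follows; the module names are placeholders and depend on your site:

```shell
LANG=C

module purge
module load intel intelmpi   # placeholder module names, site-dependent

export FC=`which ifort`
export MPIFC=`which mpiifort`
export CC=`which icc`
export MPICC=`which mpiicc`
```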
Once again, in the specfem3d folder of this repository you will find a folder named env,
containing files named env_x which give examples of the environment used on several supercomputers
during the last benchmark campaign.
## Compile specfem

As arrays are statically declared, you will need to compile SPECFEM once for each
test case with the right `Par_file`. Input for the mesher (and the solver) is provided through the parameter file `Par_file`, which resides in the
DATA subdirectory of specfem3d_globe. Before running the mesher, a number of parameters need to be set in the `Par_file`. In our case, the `Par_file` for each test case is provided in the test_cases subdirectories.
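For illustration only, a `Par_file` contains entries such as the following; the values below are placeholders, so use the `Par_file` shipped with each test case unchanged:

```
# illustrative Par_file fragment -- values are placeholders
NCHUNKS                         = 6
NEX_XI                          = 256
NEX_ETA                         = 256
NPROC_XI                        = 8
NPROC_ETA                       = 8
```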
On some environments, depending on the MPI configuration, you will need to replace
the `use mpi` statement with `include 'mpif.h'`; use the script and procedure commented
in the configure block below.

First you will have to configure.

**On a GPU platform** you will have to add the following arguments to the
configure command: `--build=ppc64 --with-cuda=cuda5`.

```shell
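# ${test_case_id} identifies the test case directory, e.g. TestCaseA
# (see ueabs/specfem3d/test_cases/)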
cp -r $HOME/specfem3d_globe specfem_build_${test_case_id}
cp $HOME/ueabs/specfem3d/test_cases/SPECFEM3D_${test_case_id}/Par_file specfem_build_${test_case_id}/DATA/
cd specfem_build_${test_case_id}

### replace `use mpi` if needed ###
# cd utils
# perl replace_use_mpi_with_include_mpif_dot_h.pl
# cd ..
####################################

# GPU platform
./configure --prefix=$PWD --build=ppc64 --with-cuda=cuda5
# Otherwise
#./configure --prefix=$PWD --enable-openmp

```

Depending on the architecture, you will have to export (before configuring) different options for the environment variables related to compilation (CFLAGS, CPPFLAGS, FCFLAGS, ...), or modify the values of these variables in the generated Makefile.
Here is an example of the variables to (re)define **on Xeon Phi**:
```Makefile/environment
FCFLAGS=" -g -O3 -qopenmp -xMIC-AVX512 -DUSE_FP32 -DOPT_STREAMS -fp-model fast=2 -traceback -mcmodel=large -fma -align array64byte -finline-functions -ipo"
CFLAGS=" -g -O3  -xMIC-AVX512 -fma -align -finline-functions -ipo"
FCFLAGS_f90 = -mod ./obj -I./obj -I.  -I. -I${SETUP} -xMIC-AVX512
CPPFLAGS = -I${SETUP}  -DFORCE_VECTORIZATION  -xMIC-AVX512
```
Another example for **Skylake** architecture:
```Makefile/environment
export FCFLAGS=" -g -O3 -qopenmp -xCORE-AVX512 -mtune=skylake -ipo -DUSE_FP32 -DOPT_STREAMS -fp-model fast=2 -traceback -mcmodel=large"
export CFLAGS=" -g -O3  -xCORE-AVX512 -mtune=skylake -ipo"
```
Note: be careful, on most machines the login node does not have the same instruction set as the compute nodes, so in order to compile with the right instruction set you will have to compile on a compute node (salloc + ssh), as sketched below.
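For example, on a Slurm machine, one way to get a shell on a compute node before building is the following sketch; the wall-clock limit is a placeholder and direct ssh to compute nodes may not be allowed on every site:

```shell
# Sketch: allocate one compute node under Slurm and open a shell on it,
# then run the configure and make steps from that node.
salloc -N 1 -t 01:00:00
ssh $SLURM_JOB_NODELIST    # or, if ssh is not allowed: srun --pty bash
```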

Finally compile with make:
```shell
make clean
make all
```

**-> In the specfem3d folder of the ueabs repository you will find the file `compile.sh`, which is a compilation script template for several machines (different architectures: KNL, SKL, Power 8, Haswell and GPU).**

## Run instructions

To run the test cases:
1. Copy the Par_file, STATIONS and CMTSOLUTION files from ueabs/specfem3d/test_cases/SPECFEM3D_TestCaseX into the SPECFEM3D_GLOBE/DATA directory.
2. Recompile the mesher and the solver.
3. Run the mesher and the solver.

On Curie/Irène the commands to put in the submission file are:
```shell
ccc_mprun   bin/xmeshfem3D
ccc_mprun   bin/xspecfem3D
```
SPECFEM3D_TestCaseA runs on 24 nodes, SPECFEM3D_TestCaseB runs on 384 nodes and SPECFEM3D_TestCaseC runs on 1 or 2 nodes.

You can use, or take inspiration from, the submission script templates in the job_script folder, using the appropriate job submission command (a minimal Slurm example is sketched after the list):
- `qsub` for PBS jobs,
- `sbatch` for Slurm jobs,
- `ccc_msub` for Irène jobs (wrapper),
- `llsubmit` for LoadLeveler jobs.
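
As an illustration, a minimal Slurm submission script for Test Case A might look like the sketch below; the node count follows the figures above, while the tasks-per-node value, wall time and environment file are placeholders to adapt to your machine:

```shell
#!/bin/bash
#SBATCH --job-name=specfem_TestCaseA
#SBATCH --nodes=24                 # Test Case A runs on 24 nodes
#SBATCH --ntasks-per-node=4        # placeholder, adapt to your machine
#SBATCH --time=02:00:00            # placeholder wall time

# Load the same environment that was used to compile (see the env folder).
source $HOME/ueabs/specfem3d/env/env_x   # placeholder environment file

cd $HOME/specfem_build_TestCaseA
srun bin/xmeshfem3D
srun bin/xspecfem3D
```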

## Gather results

The relevant metric for this benchmark is the solver time, which you can find at the end of the output file specfem3d_globe/OUTPUT_FILES/output_solver.txt.
Using Slurm, it is easy to gather because each `mpirun` or `srun` is interpreted as a step which is already timed, so the command line `sacct -j <job_id>` allows you to retrieve the metric.
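
For example (the grep pattern below is an assumption; check the exact label in the output file on your run):

```shell
# Elapsed time of each job step (mesher and solver) as recorded by Slurm
sacct -j <job_id> --format=JobID,JobName,Elapsed,State

# Solver time as reported by SPECFEM itself (label assumed, check the file)
grep -i "elapsed time" specfem3d_globe/OUTPUT_FILES/output_solver.txt
```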