# Specfem 3D globe -- Bench readme
Note that this guide is still a work in progress and is currently being reviewed.
## General description
The software package SPECFEM3D simulates three-dimensional global and regional seismic wave propagation based upon the spectral-element method (SEM). All SPECFEM3D_GLOBE software is written in Fortran90 with full portability in mind, and conforms strictly to the Fortran95 standard. It uses no obsolete or obsolescent features of Fortran77. The package uses parallel programming based upon the Message Passing Interface (MPI).
The SEM was originally developed in computational fluid dynamics and has been successfully adapted to address problems in seismic wave propagation. It is a continuous Galerkin technique, which can easily be made discontinuous; it is then close to a particular case of the discontinuous Galerkin technique, with optimized efficiency because of its tensorized basis functions. In particular, it can accurately handle very distorted mesh elements. It has very good accuracy and convergence properties. The spectral element approach admits spectral rates of convergence and allows exploiting hp-convergence schemes. It is also very well suited to parallel implementation on very large supercomputers as well as on clusters of GPU accelerating graphics cards. Tensor products inside each element can be optimized to reach very high efficiency, and mesh point and element numbering can be optimized to reduce processor cache misses and improve cache reuse. The SEM can also handle triangular (in 2D) or tetrahedral (3D) elements as well as mixed meshes, although with increased cost and reduced accuracy in these elements, as in the discontinuous Galerkin method.
In many geological models in the context of seismic wave propagation studies (except for instance for fault dynamic rupture studies, in which very high frequencies of supershear rupture need to be modeled near the fault), a continuous formulation is sufficient because material property contrasts are not drastic and thus conforming mesh doubling bricks can efficiently handle mesh size variations. This is particularly true at the scale of the full Earth. Effects due to lateral variations in compressional-wave speed, shear-wave speed, density, a 3D crustal model, ellipticity, topography and bathymetry, the oceans, rotation, and self-gravitation are included. The package can accommodate full 21-parameter anisotropy as well as lateral variations in attenuation. Adjoint capabilities and finite-frequency kernel simulations are also included.
* Web site: http://geodynamics.org/cig/software/specfem3d_globe/
* User manual: https://geodynamics.org/cig/software/specfem3d_globe/gitbranch/devel/doc/USER_MANUAL/manual_SPECFEM3D_GLOBE.pdf
* Code download: https://github.com/geodynamics/specfem3d_globe.git
* Build instructions: https://github.com/geodynamics/specfem3d/wiki/02_getting_started
* Test Cases:
  * Test Case A: https://repository.prace-ri.eu/git/UEABS/ueabs/tree/master/specfem3d/test_cases/SPECFEM3D_TestCaseA
  * Test Case B: https://repository.prace-ri.eu/git/UEABS/ueabs/tree/master/specfem3d/test_cases/SPECFEM3D_TestCaseB
  * Test Case C: https://repository.prace-ri.eu/git/UEABS/ueabs/tree/master/specfem3d/test_cases/SPECFEM3D_TestCaseC
* Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/specfem3d/SPECFEM3D_Run_README.txt
## Get the source
Clone the repository in a location of your choice, let's say `$HOME`.
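A minimal sketch, assuming you clone into `$HOME` as used in the commands below (the UEABS clone URL is inferred from the test-case links above):

```shell
cd $HOME
# SPECFEM3D_GLOBE sources
git clone https://github.com/geodynamics/specfem3d_globe.git
# UEABS benchmark repository (provides the Par_file, STATIONS and CMTSOLUTION files of each test case)
git clone https://repository.prace-ri.eu/git/UEABS/ueabs.git
```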
## Compile specfem
As arrays are statically declared, you will need to compile specfem once for each
test case with the right `Par_file`. Input for the mesher (and the solver) is provided through the parameter file `Par_file`, which resides in the
`DATA` subdirectory of SPECFEM3D_GLOBE. Before running the mesher, a number of parameters need to be set in the `Par_file`. In our case, the `Par_file` for each test case is provided in the `test_cases` subdirectories.
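For illustration only, a `Par_file` fixes the mesh size and MPI decomposition through parameters such as the following (placeholder values, not the benchmark settings; always use the `Par_file` shipped with each test case):

```
# excerpt from a Par_file (illustrative values only)
NCHUNKS    = 6      # number of chunks of the cubed-sphere mesh
NEX_XI     = 256    # spectral elements along one side of each chunk
NEX_ETA    = 256
NPROC_XI   = 8      # MPI decomposition: total ranks = NCHUNKS * NPROC_XI * NPROC_ETA
NPROC_ETA  = 8
```

Because these values end up in statically sized arrays, binaries built for one test case cannot be reused for another.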
In some environments, depending on the MPI configuration, you will need to replace the
`use mpi` statement with `include 'mpif.h'`; in that case use the script and procedure commented
in the block below.
First you will have to configure. On a GPU platform, add the following options to
configure: `--build=ppc64 --with-cuda=cuda5`.
```shell
cp -r $HOME/specfem3d_globe specfem_build_${test_case_id}
cp $HOME/ueabs/specfem3d/test_cases/SPECFEM3D_${test_case_id}/Par_file specfem_build_${test_case_id}/DATA/
cd specfem_build_${test_case_id}
### replace `use mpi` if needed ###
# cd utils
# cd ..
####################################
# On a GPU platform
#./configure --prefix=$PWD --build=ppc64 --with-cuda=cuda5
# With OpenMP
#./configure --prefix=$PWD --enable-openmp
# Otherwise (default MPI build)
./configure --prefix=$PWD
```
Depending on the architecture, you will have to export (before configuring) different options through the environment variables related to compilation (`CFLAGS`, `CPPFLAGS`, `FCFLAGS`, ...), or modify the values of these variables in the generated Makefile.
Here is an example of the variables to (re)define **on Xeon Phi (KNL)**:
```shell
export FCFLAGS=" -g -O3 -qopenmp -xMIC-AVX512 -DUSE_FP32 -DOPT_STREAMS -fp-model fast=2 -traceback -mcmodel=large -fma -align array64byte -finline-functions -ipo"
export CFLAGS=" -g -O3 -xMIC-AVX512 -fma -align -finline-functions -ipo"
```
Alternatively, replace the corresponding values directly in the generated Makefile:
```Makefile
FCFLAGS = -g -O3 -qopenmp -xMIC-AVX512 -DUSE_FP32 -DOPT_STREAMS -align array64byte -fp-model fast=2 -traceback -mcmodel=large
FCFLAGS_f90 = -mod ./obj -I./obj -I. -I. -I${SETUP} -xMIC-AVX512
CPPFLAGS = -I${SETUP} -DFORCE_VECTORIZATION -xMIC-AVX512
```
Another example for the **Skylake** architecture:
```shell
export FCFLAGS=" -g -O3 -qopenmp -xCORE-AVX512 -mtune=skylake -ipo -DUSE_FP32 -DOPT_STREAMS -fp-model fast=2 -traceback -mcmodel=large"
export CFLAGS=" -g -O3 -xCORE-AVX512 -mtune=skylake -ipo"
```
Note: be careful, on most machines the login nodes do not support the same instruction set as the compute nodes, so in order to compile with the right instruction set you may have to compile on a compute node (e.g. `salloc` + `ssh`).
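A sketch of that workflow under Slurm (the allocation options and node name are site-specific placeholders):

```shell
# request an interactive allocation on a compute node
salloc -N 1 -t 01:00:00
# log in to the node name reported by salloc and build from there
ssh <allocated_node>
cd specfem_build_${test_case_id}
```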
Finally compile with make:
```shell
make clean
make all
```
**-> In the specfem folder of the ueabs repository you will find the file `compile.sh`, a compilation script template for several machines with different architectures (KNL, SKL, Power 8, Haswell and GPU).**
## Run instructions
To run the test cases (a shell sketch of the first two steps follows this list):
1. Copy the `Par_file`, `STATIONS` and `CMTSOLUTION` files from `ueabs/specfem3d/test_cases/SPECFEM3D_TestCaseX` into the `SPECFEM3D_GLOBE/DATA` directory.
2. Recompile the mesher and the solver.
3. Run the mesher and the solver.
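A hedged sketch of steps 1 and 2, assuming the `specfem_build_${test_case_id}` build directories created in the compile section (with `test_case_id` set to e.g. `TestCaseA`):

```shell
# step 1: copy the test case input files into DATA/
cp $HOME/ueabs/specfem3d/test_cases/SPECFEM3D_${test_case_id}/{Par_file,STATIONS,CMTSOLUTION} \
   specfem_build_${test_case_id}/DATA/

# step 2: recompile the mesher and the solver against this Par_file
cd specfem_build_${test_case_id}
make clean && make all
```

Step 3 is then done through the batch script you submit to the queue, as shown below.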
On Curie/Irène the commands to put in the submission file are:
```shell
ccc_mprun bin/xmeshfem3D
ccc_mprun bin/xspecfem3D
```
SPECFEM3D_TestCaseA runs on 24 nodes, SPECFEM3D_TestCaseB runs on 384 nodes and SPECFEM3D_TestCaseC runs on 1 or 2 nodes.
You can use or be inspired by the submission script templates in the job_script folder (a minimal Slurm example is sketched after this list), using the appropriate job submission command:
- `qsub` for PBS jobs,
- `sbatch` for Slurm jobs,
- `ccc_msub` for Irène jobs (TGCC wrapper),
- `llsubmit` for LoadLeveler jobs.
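For instance, a minimal Slurm sketch along the lines of those templates (job name, node and task counts, wall time, module lines and paths are placeholders to adapt to your machine and to the decomposition fixed in the `Par_file`):

```shell
#!/bin/bash
#SBATCH --job-name=specfem_TestCaseA
#SBATCH --nodes=24                # TestCaseA runs on 24 nodes; use 384 for TestCaseB, 1 or 2 for TestCaseC
#SBATCH --ntasks-per-node=24      # placeholder: the total rank count must match the Par_file decomposition
#SBATCH --time=02:00:00           # placeholder wall time

# load the same compiler/MPI modules used for the build (site-specific)
# module load ...

cd <path_to>/specfem_build_TestCaseA   # placeholder path to the build directory
srun ./bin/xmeshfem3D
srun ./bin/xspecfem3D
```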
## Gather results
The relevant metric for this benchmark is the solver time; you can find it at the end of the output file `specfem3d_globe/OUTPUT_FILES/output_solver.txt`.
Under Slurm it is easy to gather, as each `mpirun` or `srun` call is interpreted as a job step which is already timed, so the command line `sacct -j <job_id>` gives you the metric directly.
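For example (the `sacct` format fields are standard Slurm ones; the `grep` is only a convenience and the exact wording of the timing lines in `output_solver.txt` may differ):

```shell
# elapsed time of each job step (mesher and solver) as recorded by Slurm
sacct -j <job_id> --format=JobID,JobName,Elapsed,State

# detailed timing written by the solver itself at the end of the run
grep -i "elapsed" OUTPUT_FILES/output_solver.txt
```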