Commit fe548063 authored by Andrew Emerson

Merge branch 'r2.2-dev' of https://repository.prace-ri.eu/git/UEABS/ueabs into r2.2-dev

parents 24ac1b5a 738caf83
# SPECFEM3D <a name="specfem3d"></a>
The software package SPECFEM3D simulates three-dimensional global and regional seismic wave propagation based upon the spectral-element method (SEM). All SPECFEM3D_GLOBE software is written in Fortran90 with full portability in mind, and conforms strictly to the Fortran95 standard. It uses no obsolete or obsolescent features of Fortran77. The package uses parallel programming based upon the Message Passing Interface (MPI).
The SEM was originally developed in computational fluid dynamics and has been successfully adapted to address problems in seismic wave propagation. It is a continuous Galerkin technique, which can easily be made discontinuous; it is then close to a particular case of the discontinuous Galerkin technique, with optimized efficiency because of its tensorized basis functions. In particular, it can accurately handle very distorted mesh elements. It has very good accuracy and convergence properties. The spectral element approach admits spectral rates of convergence and allows exploiting hp-convergence schemes. It is also very well suited to parallel implementation on very large supercomputers as well as on clusters of GPU accelerating graphics cards. Tensor products inside each element can be optimized to reach very high efficiency, and mesh point and element numbering can be optimized to reduce processor cache misses and improve cache reuse. The SEM can also handle triangular (in 2D) or tetrahedral (3D) elements as well as mixed meshes, although with increased cost and reduced accuracy in these elements, as in the discontinuous Galerkin method.
In many geological models in the context of seismic wave propagation studies (except, for instance, in fault dynamic rupture studies, in which very high frequencies of supershear rupture need to be modeled near the fault), a continuous formulation is sufficient because material property contrasts are not drastic and thus conforming mesh doubling bricks can efficiently handle mesh size variations. This is particularly true at the scale of the full Earth. Effects due to lateral variations in compressional-wave speed, shear-wave speed, density, a 3D crustal model, ellipticity, topography and bathymetry, the oceans, rotation, and self-gravitation are included. The package can accommodate full 21-parameter anisotropy as well as lateral variations in attenuation. Adjoint capabilities and finite-frequency kernel simulations are also included.
- Web site: http://geodynamics.org/cig/software/specfem3d_globe/
- Code download: http://geodynamics.org/cig/software/specfem3d_globe/
- Build instructions: http://www.geodynamics.org/wsvn/cig/seismo/3D/SPECFEM3D_GLOBE/trunk/doc/USER_MANUAL/manual_SPECFEM3D_GLOBE.pdf?op=file&rev=0&sc=0
- Test Case A: https://repository.prace-ri.eu/git/UEABS/ueabs/tree/r2.1-dev/specfem3d/test_cases/SPECFEM3D_TestCaseA
- Test Case B: https://repository.prace-ri.eu/git/UEABS/ueabs/tree/r2.1-dev/specfem3d/test_cases/SPECFEM3D_TestCaseB
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r2.1-dev/specfem3d/README.md
| **General information** | **Scientific field** | **Language** | **MPI** | **OpenMP** | **GPU** | **LoC** | **Code description** |
|------------------|----------------------|--------------|---------|------------|---------------------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
| [- Website](https://geodynamics.org/cig/software/specfem3d_globe/) <br>[- Source](https://github.com/geodynamics/specfem3d_globe.git) <br>[- Bench](https://repository.prace-ri.eu/git/UEABS/ueabs/tree/r2.1-dev/specfem3d) <br>[- Summary](https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r2.1-dev/specfem3d/PRACE_UEABS_Specfem3D_summary.pdf) | Geodynamics | Fortran | yes | yes | Yes (CUDA) | 140000 | The software package SPECFEM3D simulates three-dimensional global and regional seismic wave propagation based upon the spectral-element method (SEM). |
# ALYA
## Summary Version
1.0
## Purpose of Benchmark
The Alya System is a Computational Mechanics code capable of solving different physics problems, each with its own modeling characteristics, in a coupled way. Among the problems it solves are: convection-diffusion reactions, incompressible flows, compressible flows, turbulence, bi-phasic flows and free surface, excitable media, acoustics, thermal flow, quantum mechanics (DFT) and solid mechanics (large strain). ALYA is written in Fortran 90/95 and parallelized using MPI and OpenMP.
* Web site: https://www.bsc.es/computer-applications/alya-system
* Code download: https://repository.prace-ri.eu/ueabs/ALYA/2.1/Alya.tar.gz
* Test Case A: https://repository.prace-ri.eu/ueabs/ALYA/2.1/TestCaseA.tar.gz
* Test Case B: https://repository.prace-ri.eu/ueabs/ALYA/2.1/TestCaseB.tar.gz
## Mechanics of Building Benchmark
Alya builds the makefile from the compilation options defined in config.in. In order to build ALYA (Alya.x), please follow these steps:
Go to the directory Executables/unix:
```
cd Executables/unix
```
Edit config.in (some default config.in files can be found in the directory configure.in):
* Select your own MPI wrappers and paths
* Select the size of integers. The default is 4 bytes; for 8-byte integers, select -DI8
* Choose your metis version: metis-4.0, or metis-5.1.0_i8 for 8-byte integers
Configure Alya:
```
./configure -x nastin parall
```
Compile metis:
```
make metis4
```
or
```
make metis5
```
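Putting the steps above together, a minimal build sequence might look like the following sketch. It assumes the default 4-byte integers with metis-4.0, and that a final `make` then builds Alya.x itself; adapt config.in to your compilers and paths first.
```
cd Executables/unix
# edit config.in: MPI wrappers/paths, integer size, metis version
./configure -x nastin parall
make metis4        # or: make metis5 for 8-byte integers
make               # assumed final step producing Alya.x
```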
## Mechanics of Running Benchmark
### Datasets
The parameters used in the datasets try to represent typical industrial runs as closely as possible, in order to obtain representative speedups. For example, the iterative solvers are never converged to machine accuracy, but only to a percentage of the initial residual.
The different datasets are:
* SPHERE_16.7M: 16.7M sphere mesh
* SPHERE_132M: 132M sphere mesh
### How to execute Alya with a given dataset
In order to run ALYA, you need at least the following input files per execution:
```
X.dom.dat
X.ker.dat
X.nsi.dat
X.dat
```
In our case X=sphere.
To execute a simulation, you must be inside the input directory and you should submit a job like:
```
mpirun Alya.x sphere
```
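For example, a minimal Slurm batch script for the 16.7M sphere case might look like the sketch below; the node and task counts are placeholders, and `srun` may replace `mpirun` depending on your MPI/batch setup.
```
#!/bin/bash
#SBATCH --job-name=alya_sphere
#SBATCH --nodes=4                # placeholder values
#SBATCH --ntasks-per-node=32

cd /path/to/SPHERE_16.7M         # directory containing the sphere.* input files
mpirun Alya.x sphere
```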
## How to measure the speedup
There are many ways to compute the scalability of the Nastin module.
1. **For the complete cycle including: element assembly + boundary assembly + subgrid scale assembly + solvers, etc.**
> In the *.nsi.cvg file, see column "30. Elapsed CPU time"
2. **For single kernels: element assembly, boundary assembly, subgrid scale assembly, solvers**. Here, average and maximum times are indicated in *.nsi.cvg at each iteration of each time step:
> Element assembly: 19. Ass. ave cpu time 20. Ass. max cpu time
>
> Boundary assembly: 33. Bou. ave cpu time 34. Bou. max cpu time
>
> Subgrid scale assembly: 31. SGS ave cpu time 32. SGS max cpu time
>
> Iterative solvers: 21. Sol. ave cpu time 22. Sol. max cpu time
>
> Note that in the case of using Runge-Kutta time integration (the case
> of the sphere), the element and boundary assembly times are those of
> the last assembly of the current time step (out of three for third order).
3. **Using overall times**.
> At the end of the *.log file, total timings are shown for all modules. In
> this case we use the first value of the NASTIN MODULE.
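As an illustration, the per-step elapsed CPU time (column 30 of the .nsi.cvg file, as described in point 1) could be extracted with a small shell command like the one below. It assumes whitespace-separated columns and that data lines start with a numeric iteration counter, so check the actual file layout on your system.
```
# print column 30 (elapsed CPU time) for every data line of the convergence file
awk '$1 ~ /^[0-9]+$/ { print $30 }' sphere.nsi.cvg
```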
## Contact
If you have any question regarding the runs, please feel free to contact Guillaume Houzeaux: guillaume.houzeaux@bsc.es
# CP2K
## Overview
This README together with the provided benchmark input files, CP2K build configuration ("arch") files and other details provided in each subdirectory corresponding to a machine specify how the CP2K benchmarking results presented in the PRACE-5IP deliverable D7.5 ("Evaluation of Accelerated and Non-accelerated Benchmarks") were obtained and provide general guidance on building CP2K and running the UEABS benchmarks.
In short, the procedure followed to generate CP2K UEABS benchmark results for the D7.5 deliverable on a given machine was:
1. Compile Libint
2. Compile Libxc
3. Compile FFTW library (or use MKL's FFTW3 interface)
4. Compile CP2K and link to Libint, Libxc, FFTW, LAPACK, BLAS, SCALAPACK and BLACS, and to relevant CUDA libraries if building for GPU
5. Run the benchmarks, namely:
- Test Case A: H2O-1024
- Test Case B: LiH-HFX (adjusting the MAX_MEMORY input parameter to take into account available on-node memory)
- Test Case C: H2O-DFT-LS
GCC (gfortran) was used to compile the libraries and CP2K itself, with the exception of the Frioul machine, on which the Intel compiler was used to compile for the Knights Landing processor. In general it is recommended to use an MPI library built with the same compiler used to build CP2K. General information about building CP2K and the libraries it depends on can be found in INSTALL.md, included in the CP2K source distribution.
The reported walltime for a given run is obtained by querying the resulting .log file for CP2K's internal timing, as follows:
```
grep "CP2K " *.log
```
### Optional
Optional performance libraries including ELPA, libgrid, and libsmm/libxsmm can and should be built and linked to when compiling a CP2K executable for maximum performance for production usage. This was not done for the results presented in deliverable D7.5 due to the effort and complexity involved in doing so for the range of machines on which benchmark results were generated, which included PRACE Tier-0 production machines as well as pre-production prototypes. Information about these libraries can be found in INSTALL.md included in the CP2K source distribution.
## 1. Compile Libint
The Libint library can be obtained from <http://sourceforge.net/projects/libint/files/v1-releases/libint-1.1.4.tar.gz>
The specific commands used to build version 1.1.4 of Libint using GCC on a number of different machines can be found in the machine-specific subdirectories accompanying this README. By default the build process only creates static (.a) libraries. If you want to be able to link dynamically to Libint when building CP2K, you can pass the flag ``--enable-shared`` to ``./configure`` in order to produce shared libraries (.so).
If you can, it is easiest to build Libint on the same processor architecture on which you will run CP2K. This typically corresponds to being able to compile directly on the relevant compute nodes of the machine. If this is not possible and you are instead forced to compile on nodes with a different processor architecture from the compute nodes on which CP2K will eventually run, see the section below on cross-compiling Libint.
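For a plain native build, a generic autotools-style sequence along the following lines is usually sufficient; the installation prefix is a placeholder, and the exact commands used for D7.5 are in the machine-specific subdirectories.
```
tar -xzf libint-1.1.4.tar.gz
cd libint-1.1.4
./configure --prefix=$HOME/libs/libint/1.1.4 --enable-shared   # --enable-shared only if you want .so libraries
make -j 4
make install
```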
More information about Libint can be found inside the CP2K distribution base directory in
```
/tools/hfx_tools/libint_tools/README_LIBINT
```
### Cross-compiling Libint for compute nodes
If you are forced to cross-compile Libint for compute nodes on nodes that have a different processor architecture, follow these instructions. They assume you will be able to call a parallel application launcher like ``mpirun`` or ``mpiexec`` during your build process in order to run compiled code.
In ``/src/lib/`` edit the files ``MakeRules`` and ``MakeRules.in``.
On the last line of each file, replace
```
cd $(LIBSRCLINK); $(TOPOBJDIR)/src/bin/$(NAME)/$(COMPILER)
```
with
```
cd $(LIBSRCLINK); $(PARALLEL_LAUNCHER_COMMAND) $(TOPOBJDIR)/src/bin/$(NAME)/$(COMPILER)
```
Then run
```
export PARALLEL_LAUNCHER_COMMAND="mpirun -n 1"
```
replacing ``mpirun`` with a different parallel application launcher such as ``mpiexec`` (or ``aprun`` if applicable). When proceeding with the configure stage, include the configure flag ``cross-compiling=yes``.
## 2. Compile Libxc
The Libxc library can be obtained from <https://tddft.org/programs/libxc/download/previous/>. Version 4.2.3 was used for deliverable D7.5.
The specific commands used to build Libxc using GCC on a number of different machines can be found in the machine-specific subdirectories accompanying this README.
## 3. Compile FFTW
If FFTW (FFTW3) is not already available preinstalled on your system it can be obtained from <http://www.fftw.org/download.html>
The specific commands used to build FFTW using GCC on a number of different machines can be found in the machine-specific subdirectories accompanying this README.
Alternatively, you can use MKL's FFTW3 interface.
## 4. Compile CP2K
The CP2K source code can be downloaded from <https://github.com/cp2k/cp2k/releases/>. Version 6.1 of CP2K was used to generate the results reported in D7.5.
The general procedure is to create a custom so-called arch (architecture) file inside the ``arch`` directory in the CP2K distribution, which includes examples for a number of common architectures. The arch file specifies build parameters such as the choice of compilers, library locations and compilation and linker flags. Building the hybrid MPI + OpenMP ("psmp") version of CP2K (most convenient for running benchmarks) in accordance with a given arch file is then accomplished by entering the ``makefiles`` directory in the distribution and running
```
make -j number_of_cores_available_to_you ARCH=arch_file_name VERSION=psmp
```
If the build is successful, the resulting executable ``cp2k.psmp`` can be found inside ``/exe/arch_file_name/`` in the CP2K base directory.
Detailed information about arch files, library options and the overall build procedure can be found in the ``INSTALL.md`` readme file. You can also consult https://dashboard.cp2k.org, which provides sample arch files as part of
the testing reports for some platforms (click on the status field for a platform, and search for 'ARCH-file' in the resulting output).
Specific arch files used to build CP2K for deliverable D7.5 can be found in the machine-specific subdirectories in this repository.
### Linking to MKL
If you are linking to Intel's MKL library to provide LAPACK, BLAS, SCALAPACK and BLACS (and possibly FFTW3) you should choose linking options using the [MKL Link Line Advisor tool](https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor), carefully selecting the options relevant on your machine environment.
### Building a CUDA-enabled version of CP2K
See the ``PizDaint`` subdirectory for an example arch file that enables GPU acceleration with CUDA. The NVIDIA ``-arch`` flag should be adjusted to match the particular NVIDIA GPU architecture in question. For example, ``-arch sm_35`` matches the Tesla K40 and K80 GPU architectures.
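For instance, a hypothetical fragment of the CUDA-related part of an arch file might look as follows; the variable names and flags here are assumptions, so verify them against the PizDaint example.
```
# hypothetical arch-file fragment -- verify against the PizDaint example
NVCC    = nvcc
NVFLAGS = $(DFLAGS) -O3 -arch=sm_60   # sm_60 targets Pascal P100; sm_35 matches Kepler K40/K80
```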
## 5. Running the UEABS benchmarks
**The exact input files used to generate the results presented in deliverable D7.5 are included in machine-specific subdirectories accompanying this README**
After building the hybrid MPI+OpenMP version of CP2K you have an executable ending in `.psmp`. The general way to run the benchmarks with the hybrid parallel executable is, e.g. for 2 threads per rank:
```
export OMP_NUM_THREADS=2
parallel_launcher launcher_options path_to_cp2k.psmp -i inputfile -o logfile
```
Where:
- The parallel_launcher is mpirun, mpiexec, or some variant such as aprun on Cray systems or srun when using Slurm.
- launcher\_options specifies parallel placement in terms of total numbers of nodes, MPI ranks/tasks, tasks per node, and OpenMP threads per task (which should be equal to the value given to OMP\_NUM\_THREADS). This is not necessary if parallel runtime options are picked up by the launcher from the job environment.
You can try any combination of tasks per node and OpenMP threads per task to investigate absolute performance and scaling on the machine of interest.
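For instance, under Slurm a job script along the following lines could be used; all resource values and paths are placeholders.
```
#!/bin/bash
#SBATCH --nodes=4                 # placeholder values
#SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=2

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun /path/to/cp2k.psmp -i inputfile.inp -o logfile.log
```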
### Test Case A: H2O-512
Test Case A is the H2O-512 benchmark included in the CP2K distribution (in ``/tests/QS/benchmark/``) and consists of ab-initio molecular dynamics simulation of 512 liquid water molecules using the Born-Oppenheimer approach via Quickstep DFT.
### Test Case B: LiH-HFX
Test Case B is the LiH-HFX benchmark included in the CP2K distribution (in ``/tests/QS/benchmark_HFX/LiH``) and consists of a DFT energy calculation for a 216 atom LiH crystal.
Provided input files include:
- ``input_bulk_HFX_3.inp``: input for the actual benchmark HFX run
- ``input_bulk_B88_3.inp``: needed to generate an initial wave function (wfn file) for the HFX run - this should be run once before running the actual HFX benchmark
After the B88 run has been performed once, you should rename the resulting wavefunction file for use in the HFX benchmark runs as follows:
```
cp LiH_bulk_3-RESTART.wfn B88.wfn
```
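Putting the two steps together, a complete Test Case B run could therefore look like the following sketch, using the launcher syntax from section 5 (log file names are arbitrary).
```
# 1) generate the initial wave function (run once)
parallel_launcher launcher_options /path/to/cp2k.psmp -i input_bulk_B88_3.inp -o B88.log
# 2) rename the resulting restart file for the HFX runs
cp LiH_bulk_3-RESTART.wfn B88.wfn
# 3) actual HFX benchmark run
parallel_launcher launcher_options /path/to/cp2k.psmp -i input_bulk_HFX_3.inp -o LiH-HFX.log
```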
#### Notes
The amount of memory available per MPI process must be altered according to the number of MPI processes being used. If this is not done the benchmark will crash with an out of memory (OOM) error. The input file keyword ``MAX_MEMORY`` in ``input_bulk_HFX_3.inp`` needs to be changed as follows:
```
MAX_MEMORY 14000
```
should be changed to
```
MAX_MEMORY new_value
```
The new value of ``MAX_MEMORY`` is chosen by dividing the total amount of memory available on a node by the number of MPI processes being used per node.
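For example (illustrative numbers only, assuming the value is given in MB as the default of 14000 suggests), on a node with 128 GB of memory and 8 MPI processes per node one would set:
```
MAX_MEMORY 16000
```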
If a shorter runtime is desirable, the following line in ``input_bulk_HFX_3.inp``:
```
MAX_SCF 20
```
may be changed to
```
MAX_SCF 1
```
in order to reduce the maximum number of SCF cycles and hence the execution time.
If the runtime or required memory needs to be reduced so the benchmark can run on a smaller number of nodes, the OPT1 basis set can be used instead of the default OPT2. To this end, the line
```
BASIS_SET OPT2
```
in ``input_bulk_B88_3.inp`` and in ``input_bulk_HFX_3.inp`` should be changed to
```
BASIS_SET OPT1
```
### Test Case C: H2O-DFT-LS
Test Case C is the H2O-DFT-LS benchmark included in the CP2K distribution (in ``tests/QS/benchmark_DM_LS``) and consists of a single-point energy calculation using linear-scaling DFT of 2048 water molecules.
# Build and Run instructions for gromacs
## [1. Download](#download)
Gromacs can be downloaded from: [http://www.gromacs.org/Downloads](http://www.gromacs.org/Downloads)
The UEABS benchmark cases require the use of the 4.6 or newer branch; the latest stable version is suggested.
## [2. Build](#build)
Complete Build instructions: [http://manual.gromacs.org/documentation/](http://manual.gromacs.org/documentation/)
A typical build procedure looks like:
```
tar -zxf gromacs-2020.2.tar.gz
cd gromacs-2020.2
mkdir build
cd build
cmake \
-DCMAKE_INSTALL_PREFIX=$HOME/Packages/gromacs/2020.2 \
-DBUILD_SHARED_LIBS=off \
-DBUILD_TESTING=off \
-DREGRESSIONTEST_DOWNLOAD=OFF \
make (or make -j ##)
make install
```
You probably need to adjust
1. The `CMAKE_INSTALL_PREFIX` to point to a different path
2. `GMX_SIMD` : You may completely omit this if your compile and compute nodes are of the same architecture (for example Haswell).
If they are different, you should specify what fits your compute nodes.
For a complete and up-to-date list of possible choices, refer to the official GROMACS build instructions.
Typical values are `AVX_256` for Ivy Bridge, `AVX2_256` for Haswell, and `AVX_512_KNL` for KNL. For a CUDA build one should set `-DGMX_GPU=on`, and the CUDA bin directory should be in PATH.
## [3. Run](#run)
There are two data sets in UEABS for Gromacs.
1. `ion_channel`, which uses PME for electrostatics, for Tier-1 systems
2. `lignocellulose-rf`, which uses a reaction field for electrostatics, for Tier-0 systems. Reference : [http://pubs.acs.org/doi/abs/10.1021/bm400442n](http://pubs.acs.org/doi/abs/10.1021/bm400442n)
The input data file for each benchmark is the corresponding .tpr file produced using tools from a complete gromacs installation and a series of ascii data files (atom coords/velocities, forcefield, run control).
If you happen to run the Tier-0 case on BG/Q, use `lignucellulose-rf.BGQ.tpr`
instead of `lignocellulose-rf.tpr`. It is the same as `lignocellulose-rf.tpr`
but created on a BG/Q system.
The general way to run gromacs benchmarks is :
```
WRAPPER WRAPPER_OPTIONS PATH_TO_GMX mdrun -s CASENAME.tpr -maxh 0.50 -resethway -noconfout -nsteps 10000 -g logfile
```
- `CASENAME` is one of ion_channel or lignocellulose-rf
- `maxh` : Terminate after 0.99 times this time (in hours), i.e. gracefully terminate after ~30 min
- `resethway` : Reset the timer counters at the halfway point. This means that the reported walltime and performance refer to the last half of the simulation steps.
- `noconfout` : Do not save output coordinates/velocities at the end.
- `nsteps` : Run this number of steps, no matter what is requested in the input file
- `logfile` : The output filename. If the extension .log is omitted, it is automatically appended. Obviously, it should be different for different runs.
- `WRAPPER` and `WRAPPER_OPTIONS` depend on the system, batch system etc. A few common pairs are :
- CRAY : `aprun -n TASKS -N TASKSPERNODE -d THREADSPERTASK`
- Curie : `ccc_mrun` with no options, obtained from the batch system
- Juqueen : `runjob --np TASKS --ranks-per-node TASKSPERNODE --exp-env OMP_NUM_THREADS`
- Slurm : `srun` with no options, obtained from Slurm if the variables below are set.
```
#SBATCH --nodes=NODES
#SBATCH --ntasks-per-node=TASKSPERNODE
#SBATCH --cpus-per-task=THREADSPERTASK
```
The best performance is usually obtained using pure MPI i.e. `THREADSPERTASK=1`.
You can check other hybrid MPI/OMP combinations.
The execution time is reported at the end of logfile : `grep Time: logfile | awk -F ' ' '{print $3}'`
> **NOTE** : This is the wall time for the last half number of steps.
For sufficiently large nsteps, this is half of the total wall time.
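Putting these pieces together, a hypothetical Slurm job script for the ion_channel case might look like the sketch below; the resource values, the log file name and the GROMACS installation path (`PATH_TO_GMX`) are placeholders.
```
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=1

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun PATH_TO_GMX mdrun -s ion_channel.tpr -maxh 0.50 -resethway -noconfout \
     -nsteps 10000 -g ion_channel_n4
```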
In order to use GPU acceleration, one needs to add the `-gpu_id GPU_IDS` option to the gmx mdrun command line.
The GPU_IDS value depends on how many MPI tasks and GPUs are used per node.
For example, using 4 GPUs per node with 4 tasks per node, GPU_IDS should be 0123.
In order to run on a 20-core node with 2 GPUs using pure MPI, i.e. 20 tasks per node,
GPU_IDS should be 00000000001111111111 (10 zeroes and 10 ones).
Build instructions for NAMD.
In order to run the benchmarks, the memopt build with SMP support is mandatory.
NAMD may be compiled in an experimental memory-optimized mode that utilizes a compressed version of the molecular structure and also supports parallel I/O.
In addition to reducing per-node memory requirements, the compressed structure greatly reduces startup times compared to reading a psf file.
In order to build this version, your MPI needs to provide the thread support level MPI_THREAD_FUNNELED.
You need NAMD version 2.11 or newer.
1. Uncompress/untar the source.
2. cd to the NAMD source base directory (the directory name depends on how the source was obtained,
typically namd2 or NAMD_2.11_Source).
3. Untar the charm-VERSION.tar that is included. If you obtained the namd source via
cvs, you need to download charm separately.
4. cd to charm-VERSION directory
5. configure and compile charm :
This step is system dependent. Some examples are :
CRAY XE6 : ./build charm++ mpi-crayxe smp --with-production -O -DCMK_OPTIMIZE
CURIE : ./build charm++ mpi-linux-x86_64 smp mpicxx ifort --with-production -O -DCMK_OPTIMIZE
JUQUEEN : ./build charm++ mpi-bluegeneq smp xlc --with-production -O -DCMK_OPTIMIZE
Help : ./build --help to see all available options.
For special notes on various systems, you should look in http://www.ks.uiuc.edu/Research/namd/2.11/notes.html.
The syntax is : ./build charm++ ARCHITECTURE smp (compilers, optional) --with-production -O -DCMK_OPTIMIZE
You can find a list of supported architectures/compilers in charm-VERSION/src/arch
The smp option is mandatory to build the Hybrid version of namd.
This builds charm++.
6. cd ..
7. Configure NAMD.
This step is system dependent. Some examples are :
CRAY-XE6 ./config CRAY-XT-g++ --charm-base ./charm-6.7.0 --charm-arch mpi-crayxe-smp --with-fftw3 --fftw-prefix $CRAY_FFTW_DIR --without-tcl --with-memopt --charm-opts -verbose
CURIE ./config Linux-x86_64-icc --charm-base ./charm-6.7.0 --charm-arch mpi-linux-x86_64-ifort-smp-mpicxx --with-fftw3 --fftw-prefix PATH_TO_FFTW3_INSTALLATION --without-tcl --with-memopt --charm-opts -verbose --cxx-opts "-O3 -xAVX " --cc-opts "-O3 -xAVX" --cxx icpc --cc icc --cxx-noalias-opts "-fno-alias -ip -fno-rtti -no-vec "
Juqueen: ./config BlueGeneQ-MPI-xlC --charm-base ./charm-6.7.0 --charm-arch mpi-bluegeneq-smp-xlc --with-fftw3 --with-fftw-prefix PATH_TO_FFTW3_INSTALLATION --without-tcl --charm-opts -verbose --with-memopt
Help : ./config --help to see all available options.
See http://www.ks.uiuc.edu/Research/namd/2.11/notes.html for special notes on various systems.
What is absolutely necessary is the --with-memopt option and an SMP-enabled charm++ build.
It is suggested to disable tcl support, as indicated by the --without-tcl flag, since tcl is not necessary
to run the benchmarks.
You need to specify the fftw3 installation directory. On systems that
use environment modules you need to load the existing fftw3 module
and probably use the provided environment variables, as in the CRAY-XE6
example above.
If fftw3 libraries are not installed on your system,
download and install fftw-3.3.5.tar.gz from http://www.fftw.org/.
You may adjust the compilers and compiler flags as in the CURIE example.
A typical compiler/flags adjustment is, for example,
to add -xAVX in the CURIE case and keep all the other compiler flags of the architecture the same.
Take care with, or simply avoid, using the --cxx option for the NAMD config without reason,
as this will override the compilation flags from the arch file.
When config finishes, it prompts you to change to a directory and run make.
8. cd to the reported directory and run make.
If everything is ok, you'll find the executable, named namd2, in this
directory.
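As an illustration, the CRAY XE6 commands quoted above chained into a single build sequence would look roughly as follows; the archive names and version numbers are taken from the examples and may differ in your source tree.
```
tar xzf NAMD_2.11_Source.tar.gz          # archive name is illustrative
cd NAMD_2.11_Source
tar xf charm-6.7.0.tar
cd charm-6.7.0
./build charm++ mpi-crayxe smp --with-production -O -DCMK_OPTIMIZE
cd ..
./config CRAY-XT-g++ --charm-base ./charm-6.7.0 --charm-arch mpi-crayxe-smp \
         --with-fftw3 --fftw-prefix $CRAY_FFTW_DIR --without-tcl --with-memopt \
         --charm-opts -verbose
cd CRAY-XT-g++                           # build directory reported by config
make
```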
# NAMD Build and Run instructions using CUDA, KNC offloading and KNL.
## Overview
## CUDA Build instructions
## [Download](#download)
The official site to download namd is : [http://www.ks.uiuc.edu/Research/namd/](http://www.ks.uiuc.edu/Research/namd/)
You need to register for free here to get a namd copy from here : [http://www.ks.uiuc.edu/Development/Download/download.cgi?PackageName=NAMD](http://www.ks.uiuc.edu/Development/Download/download.cgi?PackageName=NAMD)
In order to get a specific CVS snapshot, you first need to ask for
- a username/password : [http://www.ks.uiuc.edu/Research/namd/cvsrequest.html](http://www.ks.uiuc.edu/Research/namd/cvsrequest.html)
- When your cvs access application is approved, you can use your username/password to download a specific cvs snapshot :
`cvs -d :pserver:username@cvs.ks.uiuc.edu:/namd/cvsroot co -D "2013-02-06 23:59:00 GMT" namd2`
In this case, `charm++` is not included.
You have to download it separately and put it in the namd2 source tree : [http://charm.cs.illinois.edu/distrib/charm-6.5.0.tar.gz](http://charm.cs.illinois.edu/distrib/charm-6.5.0.tar.gz)