GROMACS_Run_README.txt

There are two data sets in UEABS for GROMACS:
1. ion_channel, which uses PME for electrostatics, for Tier-1 systems.
2. lignocellulose-rf, which uses a reaction field for electrostatics, for Tier-0 systems. Reference: http://pubs.acs.org/doi/abs/10.1021/bm400442n

The input data file for each benchmark is the corresponding .tpr file, produced
with tools from a complete GROMACS installation from a series of ASCII data
files (atom coordinates/velocities, force field, run control).
If you run the Tier-0 case on BG/Q, use lignocellulose-rf.BGQ.tpr instead of
lignocellulose-rf.tpr. It has the same contents as lignocellulose-rf.tpr but
was created on a BG/Q system.

The general way to run the GROMACS benchmarks is:
WRAPPER WRAPPER_OPTIONS PATH_TO_GMX mdrun -s CASENAME.tpr -maxh 0.50 -resethway -noconfout -nsteps 10000 -g logfile

CASENAME is one of ion_channel or lignocellulose-rf.
maxh      : Terminate gracefully after 0.99 times this time (in hours),
            i.e. after ~30 min for -maxh 0.50.
resethway : Reset the timer counters halfway through the run. The reported
            wall time and performance therefore refer to the last half of
            the simulation.
noconfout : Do not save output coordinates/velocities at the end.
nsteps    : Run this number of steps, regardless of what is requested in the
            input file.
logfile   : The output log filename. If the .log extension is omitted, it is
            appended automatically. It should obviously be different for
            different runs.
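The graceful-termination margin of -maxh can be checked with a quick calculation; for -maxh 0.50, 0.99 x 0.50 h x 60 min/h comes out to roughly 29.7 minutes:

```shell
# mdrun stops gracefully after 0.99 * maxh hours; for -maxh 0.50
# this works out to roughly 29.7 minutes.
maxh=0.50
awk -v h="$maxh" 'BEGIN {printf "%.1f\n", 0.99 * h * 60}'
# prints 29.7
```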

WRAPPER and WRAPPER_OPTIONS depend on the system, batch system, etc.
A few common pairs are:

CRAY     : aprun -n TASKS -N TASKSPERNODE -d THREADSPERTASK
Curie    : ccc_mrun with no options - they are obtained from the batch system
Juqueen  : runjob --np TASKS --ranks-per-node TASKSPERNODE --exp-env OMP_NUM_THREADS
Slurm    : srun with no options - they are obtained from Slurm if the variables below are set.
#SBATCH --nodes=NODES
#SBATCH --ntasks-per-node=TASKSPERNODE
#SBATCH --cpus-per-task=THREADSPERTASK
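Putting the pieces together, a pure-MPI Slurm job could look like the sketch below. The node and task counts, the wall-time limit, the module name, and the gmx_mpi binary are placeholders, not part of UEABS; adapt them to your site.

```shell
#!/bin/bash
#SBATCH --nodes=4                  # NODES
#SBATCH --ntasks-per-node=32       # TASKSPERNODE
#SBATCH --cpus-per-task=1          # THREADSPERTASK (pure MPI)
#SBATCH --time=00:40:00            # leave headroom beyond -maxh 0.50

# "gromacs" and "gmx_mpi" are placeholder names; use whatever your site provides.
module load gromacs
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun gmx_mpi mdrun -s ion_channel.tpr -maxh 0.50 -resethway -noconfout \
     -nsteps 10000 -g ion_channel_128tasks
```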

The best performance is usually obtained with pure MPI, i.e. THREADSPERTASK=1,
but you can also try hybrid MPI/OpenMP combinations.

The execution time is reported at the end of the log file:
grep Time: logfile.log | awk -F ' ' '{print $3}'

NOTE : This is the wall time for the last half of the steps.
       For sufficiently large nsteps, this is half of the total wall time.
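To illustrate the extraction, the snippet below fabricates a timing line with the same column layout as the "Core t (s) / Wall t (s) / (%)" footer of a GROMACS log (the numbers are invented) and pulls out the wall time, i.e. the third field:

```shell
# Synthetic example of the final timing line of a GROMACS log;
# the values are made up, only the column layout matters here.
printf '       Time:      959.520      239.880      400.0\n' > logfile.log
grep Time: logfile.log | awk -F ' ' '{print $3}'
# prints the wall time in seconds: 239.880
```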