PRACE Benchmarks for GPAW
=========================
GPAW
----
### Code description
[GPAW](https://wiki.fysik.dtu.dk/gpaw/) is a density-functional theory (DFT)
program for ab initio electronic structure calculations using the projector
augmented wave method. It uses a uniform real-space grid representation of the
electronic wavefunctions that allows for excellent computational scalability
and systematic convergence properties.
GPAW is written mostly in Python, but it also includes computational kernels
written in C and leverages external libraries such as NumPy, BLAS and
ScaLAPACK. Parallelisation is based on message passing using MPI, with no
support for multithreading. Development branches for GPGPUs and MICs include
support for offloading to accelerators using either CUDA or pyMIC/libxstream,
respectively.
### Download
GPAW is freely available under the GPL license at:
https://gitlab.com/gpaw/gpaw
### Install
Generic [installation instructions](https://wiki.fysik.dtu.dk/gpaw/install.html)
and
[platform specific examples](https://wiki.fysik.dtu.dk/gpaw/platforms/platforms.html)
are provided in the [GPAW wiki](https://wiki.fysik.dtu.dk/gpaw/). For
accelerators, architecture specific instructions and requirements are also
provided for [Xeon Phis](build/build-xeon-phi.md) and for
[GPGPUs](build/build-cuda.md).
Benchmarks
----------
### Download
The benchmark set is available in the [benchmark](benchmark/) directory, or
it can be downloaded either directly from the
[git repository](https://github.com/mlouhivu/gpaw-benchmarks/tree/prace)
or from the PRACE RI website (http://www.prace-ri.eu/ueabs/).
To download the benchmarks, use e.g. the following command:
```
git clone -b prace https://github.com/mlouhivu/gpaw-benchmarks
```
### Benchmark cases
#### Case S: Carbon nanotube
A ground state calculation for a carbon nanotube in vacuum. By default, it
uses a 6-6-10 nanotube with 240 atoms (freely adjustable) and serial LAPACK,
with an option to use ScaLAPACK. Expected to scale up to 10 nodes and/or 100
MPI tasks.
Input file: `carbon-nanotube/input.py`
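For orientation, a minimal sketch of what such a GPAW ground-state input
script looks like is shown below; the structure and parameter values are
illustrative assumptions, not the actual benchmark settings (those are in the
input file above).
```python
# Minimal sketch of a ground-state GPAW input script; parameter values
# here are illustrative assumptions, not the benchmark settings.
from ase.build import nanotube
from gpaw import GPAW

# Build a (6,6) carbon nanotube and surround it with vacuum.
atoms = nanotube(6, 6, length=10)
atoms.center(vacuum=4.0)

# Real-space grid calculator; h is the grid spacing in Angstrom.
calc = GPAW(h=0.2, xc='PBE', txt='output.txt')
atoms.calc = calc

# Run the self-consistent ground-state calculation.
atoms.get_potential_energy()
```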
#### Case M: Copper filament
A ground state calculation for a copper filament in vacuum. By default, it
uses a 2x2x3 FCC lattice with 71 atoms (freely adjustable) and ScaLAPACK for
parallelisation. Expected to scale up to 100 nodes and/or 1000 MPI tasks.
Input file: `copper-filament/input.py`
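As a rough sketch of how ScaLAPACK is typically enabled in a GPAW input
script (the benchmark's exact settings are in the input file above), the
`parallel` keyword can be passed to the calculator; the system set up below
is an arbitrary assumption for illustration.
```python
# Illustrative sketch only: enable ScaLAPACK via GPAW's parallel keyword.
from ase.build import bulk
from gpaw import GPAW

# A small copper system; size and lattice constant are assumptions.
atoms = bulk('Cu', 'fcc', a=3.61).repeat((2, 2, 3))
atoms.center(vacuum=5.0, axis=2)

# 'sl_auto': True lets GPAW choose a ScaLAPACK process grid automatically.
calc = GPAW(h=0.22, xc='PBE',
            parallel={'sl_auto': True},
            txt='output.txt')
atoms.calc = calc
atoms.get_potential_energy()
```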
#### Case L: Silicon cluster
A ground state calculation for a silicon cluster in vacuum. By default, the
cluster has a radius of 15 Å (freely adjustable) and consists of 702 atoms,
and ScaLAPACK is used for parallelisation. Expected to scale up to 1000 nodes
and/or 10000 MPI tasks.
Input file: `silicon-cluster/input.py`
### Running the benchmarks
No special command line options or environment variables are needed to run
the benchmarks on most systems. One can simply run, for example:
```
mpirun -np 256 gpaw-python input.py
```
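To verify that the parallel interpreter actually sees all MPI tasks before
launching a full benchmark, one can run a trivial script in place of
`input.py`; a minimal sketch (the file name `check-mpi.py` is hypothetical):
```python
# check-mpi.py: print the MPI world size seen by GPAW (sketch).
from gpaw.mpi import world

if world.rank == 0:
    print('GPAW sees %d MPI tasks' % world.size)
```
For example: `mpirun -np 256 gpaw-python check-mpi.py`.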
#### Special case: KNC
For KNCs (Xeon Phi Knights Corner), one needs to use a wrapper script to set
correct affinities for pyMIC (see
[examples/affinity-wrapper.sh](examples/affinity-wrapper.sh) for an example)
and to set two environment variables for GPAW:
```shell
GPAW_OFFLOAD=1 # (to turn on offloading)
GPAW_PPN=<no. of MPI tasks per node>
```
For example, on a SLURM system, this could be:
```shell
GPAW_PPN=12 GPAW_OFFLOAD=1 mpirun -np 256 -bootstrap slurm \
./affinity-wrapper.sh gpaw-python input.py
```
#### Examples
Example [job scripts](examples/) (`examples/job-*.sh`) are provided for
different systems, together with related machine specifications
(`examples/specs.*`), which may offer a helpful starting point.