PRACE Benchmarks for GPAW
=========================

GPAW
----

### Code description

[GPAW](https://wiki.fysik.dtu.dk/gpaw/) is a density-functional theory (DFT)
program for ab initio electronic structure calculations using the projector
augmented wave method. It uses a uniform real-space grid representation of the
electronic wavefunctions that allows for excellent computational scalability
and systematic convergence properties.

GPAW is written mostly in Python, but it also includes computational kernels
written in C and leverages external libraries such as NumPy, BLAS and
ScaLAPACK. Parallelisation is based on message passing using MPI, with no
support for multithreading. Development branches for GPGPUs and MICs add
support for offloading to accelerators using either CUDA or pyMIC/libxstream,
respectively.


### Download

GPAW is freely available under the GPL license at:
  https://gitlab.com/gpaw/gpaw


### Install

Generic [installation instructions](https://wiki.fysik.dtu.dk/gpaw/install.html)
and
[platform specific examples](https://wiki.fysik.dtu.dk/gpaw/platforms/platforms.html)
are provided in the [GPAW wiki](https://wiki.fysik.dtu.dk/gpaw/). For
accelerators, architecture specific instructions and requirements are also
provided for [Xeon Phis](build/build-xeon-phi.md) and for
[GPGPUs](build/build-cuda.md).


Benchmarks
----------

### Download

The benchmark set is included in the [benchmark](benchmark/) directory. It can
also be downloaded directly from the
[git repository](https://github.com/mlouhivu/gpaw-benchmarks/tree/prace)
or from the PRACE RI website (http://www.prace-ri.eu/ueabs/).

To download the benchmarks, use e.g. the following command:
```shell
git clone -b prace https://github.com/mlouhivu/gpaw-benchmarks
```


### Benchmark cases

#### Case S: Carbon nanotube

A ground state calculation for a carbon nanotube in vacuum. By default, it uses
a 6-6-10 nanotube with 240 atoms (freely adjustable) and serial LAPACK, with an
option to use ScaLAPACK. Expected to scale up to 10 nodes and/or 100 MPI
tasks.

Input file: `carbon-nanotube/input.py`
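
For orientation, a minimal GPAW ground-state input for a nanotube might look
like the sketch below. Note that this is *not* the benchmark's actual input
file; the tube dimensions, grid spacing and output file name are illustrative
assumptions only.

```python
# Illustrative sketch of a GPAW ground-state input -- the real benchmark
# input is carbon-nanotube/input.py; all parameters here are assumptions.
from ase.build import nanotube
from gpaw import GPAW

# Build a (6,6) carbon nanotube; repetitions and vacuum are freely adjustable.
atoms = nanotube(6, 6, length=10, vacuum=4.0)

calc = GPAW(mode='fd',         # real-space finite-difference grid
            h=0.2,             # grid spacing in Angstrom
            txt='output.txt')  # calculation log
atoms.calc = calc
atoms.get_potential_energy()   # run the ground-state calculation
```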

#### Case M: Copper filament

A ground state calculation for a copper filament in vacuum. By default, it uses
a 2x2x3 FCC lattice with 71 atoms (freely adjustable) and ScaLAPACK for
parallelisation. Expected to scale up to 100 nodes and/or 1000 MPI tasks.

Input file: `copper-filament/input.py`

#### Case L: Silicon cluster

A ground state calculation for a silicon cluster in vacuum. By default the
cluster has a radius of 15Å (freely adjustable) and consists of 702 atoms,
and ScaLAPACK is used for parallelisation. Expected to scale up to 1000 nodes
and/or 10000 MPI tasks.

Input file: `silicon-cluster/input.py`


### Running the benchmarks

No special command line options or environment variables are needed to run the
benchmarks on most systems. One can simply run, e.g.
```shell
mpirun -np 256 gpaw-python input.py
```
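
On batch systems, the same command is typically wrapped in a job script. The
sketch below is a hypothetical SLURM example; the node count, tasks per node,
walltime and module name are assumptions that need adapting to the target
machine (see `examples/job-*.sh` for real scripts).

```shell
#!/bin/bash
# Hypothetical SLURM job script -- adapt counts, walltime and modules
# to the target system; examples/job-*.sh contains real examples.
#SBATCH --job-name=gpaw-benchmark
#SBATCH --nodes=10             # assumed node count (Case S scale)
#SBATCH --ntasks-per-node=24   # assumed cores per node
#SBATCH --time=01:00:00

module load gpaw               # assumed module name

srun gpaw-python input.py
```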

#### Special case: KNC

For KNCs (Xeon Phi Knights Corner), one needs to use a wrapper script to set
correct affinities for pyMIC (see
[examples/affinity-wrapper.sh](examples/affinity-wrapper.sh) for an example)
and to set two environment variables for GPAW:
```shell
GPAW_OFFLOAD=1  # (to turn on offloading)
GPAW_PPN=<no. of MPI tasks per node>
```

For example, on a SLURM system, this could be:
```shell
GPAW_PPN=12 GPAW_OFFLOAD=1 mpirun -np 256 -bootstrap slurm \
  ./affinity-wrapper.sh gpaw-python input.py
```
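
The wrapper's job is to give each MPI rank on a node its own slice of the
coprocessor before launching `gpaw-python`. A hypothetical minimal sketch is
shown below; the real script is `examples/affinity-wrapper.sh`, and the thread
count and environment variable choices here are illustrative assumptions, not
verified settings.

```shell
#!/bin/bash
# Hypothetical affinity-wrapper sketch -- see examples/affinity-wrapper.sh
# for the real script. Derives a per-rank slice of the KNC card from the
# node-local MPI rank and the number of tasks per node.
RANK=${SLURM_LOCALID:-0}       # node-local rank under SLURM
PPN=${GPAW_PPN:-1}             # MPI tasks per node (same as GPAW_PPN)
MIC_THREADS=240                # assumed hardware threads per KNC card
PER_RANK=$((MIC_THREADS / PPN))
OFFSET=$((RANK * PER_RANK))

# Pin this rank's offloaded threads to its own slice of the card
# (variable names are illustrative, not verified).
export MIC_KMP_AFFINITY="explicit,proclist=[${OFFSET}-$((OFFSET + PER_RANK - 1))]"

exec "$@"                      # launch the wrapped command, e.g. gpaw-python
```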

#### Examples

Example [job scripts](examples/) (`examples/job-*.sh`) are provided for
different systems together with related machine specifications
(`examples/specs.*`) that may offer a helpful starting point.