Commit 9cbd4054 (authored Jul 23, 2020 by Kurt Lust): "Removed old README as it is no longer relevant." Deleted file: `gpaw/README.old.md` (parent 64814b29).
PRACE Benchmarks for GPAW
=========================
GPAW
----
### Code description
[GPAW](https://wiki.fysik.dtu.dk/gpaw/) is a density-functional theory (DFT)
program for ab initio electronic structure calculations using the projector
augmented wave method. It uses a uniform real-space grid representation of the
electronic wavefunctions that allows for excellent computational scalability
and systematic convergence properties.

GPAW is written mostly in Python, but it also includes computational kernels
written in C and leverages external libraries such as NumPy, BLAS and
ScaLAPACK. Parallelisation is based on message passing using MPI, with no
support for multithreading. Development branches for GPGPUs and MICs add
support for offloading to accelerators using either CUDA or pyMIC/libxstream,
respectively.
### Download
GPAW is freely available under the GPL license. The source code can be
downloaded from the [Git repository](https://gitlab.com/gpaw/gpaw) or as
a tar package for each release from [PyPI](https://pypi.org/simple/gpaw/).

For example, to get version 1.4.0 using git:

```bash
git clone -b 1.4.0 https://gitlab.com/gpaw/gpaw.git
```
### Install
Generic [installation instructions](https://wiki.fysik.dtu.dk/gpaw/install.html)
and [platform specific examples](https://wiki.fysik.dtu.dk/gpaw/platforms/platforms.html)
are provided in the [GPAW wiki](https://wiki.fysik.dtu.dk/gpaw/). For
accelerators, architecture specific instructions and requirements are also
provided for [Xeon Phis](build/build-xeon-phi.md) and for
[GPGPUs](build/build-cuda.md).

Example [build scripts](build/examples/) are also available for some PRACE
systems.
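As an illustrative sketch only (the package name and version come from the Download section above; the virtualenv name and the library setup are assumptions, not taken from the repository's build scripts), a minimal pip-based build script for a generic Linux cluster might look like:

```shell
#!/bin/bash
# Hypothetical minimal build sketch for GPAW 1.4.0.
# Assumes: python3 with the venv module, an MPI C compiler (mpicc) on PATH,
# and BLAS/ScaLAPACK provided by the system. On a real PRACE system,
# site-specific modules (compiler, MPI, math libraries) would be loaded first.
python3 -m venv gpaw-env
source gpaw-env/bin/activate
pip install numpy scipy ase      # GPAW's Python dependencies
pip install gpaw==1.4.0          # builds the C extensions against system libraries
```

On accelerator platforms the pip route does not apply; follow the Xeon Phi and GPGPU build notes linked above instead.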
Benchmarks
----------
### Download
The benchmark set is available in the [benchmark/](benchmark/) directory or,
alternatively, for download either directly from the development
[Git repository](https://github.com/mlouhivu/gpaw-benchmarks/tree/prace)
or from the PRACE RI website (http://www.prace-ri.eu/ueabs/).

To download the benchmarks, use e.g. the following command:

```bash
git clone -b prace https://github.com/mlouhivu/gpaw-benchmarks
```
### Benchmark cases
#### Case S: Carbon nanotube
A ground state calculation for a carbon nanotube in vacuum. By default it uses
a 6-6-10 nanotube with 240 atoms (freely adjustable) and serial LAPACK, with an
option to use ScaLAPACK. Expected to scale up to 10 nodes and/or 100 MPI
tasks.

Input file: [benchmark/carbon-nanotube/input.py](benchmark/carbon-nanotube/input.py)
#### Case M: Copper filament
A ground state calculation for a copper filament in vacuum. By default it uses
a 2x2x3 FCC lattice with 71 atoms (freely adjustable) and ScaLAPACK for
parallelisation. Expected to scale up to 100 nodes and/or 1000 MPI tasks.

Input file: [benchmark/copper-filament/input.py](benchmark/copper-filament/input.py)
#### Case L: Silicon cluster
A ground state calculation for a silicon cluster in vacuum. By default the
cluster has a radius of 15Å (freely adjustable) and consists of 702 atoms,
and ScaLAPACK is used for parallelisation. Expected to scale up to 1000 nodes
and/or 10000 MPI tasks.

Input file: [benchmark/silicon-cluster/input.py](benchmark/silicon-cluster/input.py)
### Running the benchmarks
No special command line options or environment variables are needed to run the
benchmarks on most systems. One can simply run e.g.

```bash
srun gpaw-python input.py
```
#### Special case: KNC
For KNCs (Xeon Phi Knights Corner), one needs to use a wrapper script to set
correct affinities for pyMIC (see
[scripts/affinity-wrapper.sh](scripts/affinity-wrapper.sh) for an example)
and to set two environment variables for GPAW:

```shell
GPAW_OFFLOAD=1                        # turn on offloading
GPAW_PPN=<no. of MPI tasks per node>
```
For example, in a SLURM system, this could be:
```shell
GPAW_PPN=12 GPAW_OFFLOAD=1 mpirun -np 256 -bootstrap slurm \
    ./affinity-wrapper.sh 12 gpaw-python input.py
```
#### Examples
Example [job scripts](scripts/) (`scripts/job-*.sh`) are provided for
different PRACE systems and may offer a helpful starting point.
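The provided job scripts are site specific; as a hedged sketch (the node and task counts follow the Case S scaling notes above, but the job name, time limit, and module setup are placeholders, not taken from the repository), a Slurm batch script could look like:

```shell
#!/bin/bash
# Hypothetical Slurm batch script sketch for Case S (placeholders throughout;
# see scripts/job-*.sh in the repository for real, system-specific examples).
#SBATCH --job-name=gpaw-case-s
#SBATCH --nodes=10              # Case S is expected to scale to ~10 nodes
#SBATCH --ntasks-per-node=10    # ~100 MPI tasks in total
#SBATCH --time=00:30:00

# module load gpaw              # site-specific environment setup goes here
srun gpaw-python carbon-nanotube/input.py
```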