Commit 83bece39 authored by Jacob Finkenrath

Merge branch 'r2.2-dev' of https://repository.prace-ri.eu/git/UEABS/ueabs into r2.2-dev-qcd

parents 8e847bd0 b2808717
For the UEABS benchmark version 2.2, the following versions of GPAW were tested:
* CPU-based:
  * Version 1.5.2 as this one is the last of the 1.5 branch and since the GPU version
    is derived from this version.
  * Version 20.1.0, the most recent version during the development of the UEABS
    2.2 benchmark suite.
* GPU-based: There is no official release or version number. The UEABS 2.2 benchmark
### Case M: Copper filament

A ground state calculation for a copper filament in vacuum. By default uses a
3x4x4 FCC lattice with 71 atoms (freely adjustable through the variables `x`,
`y` and `z` in the input file) and ScaLAPACK for parallelisation. Expected to
scale up to 100 nodes and/or 1000 MPI tasks.

Input file: [benchmark/2_M_copper-filament/input.py](benchmark/2_M_copper-filament/input.py)

The benchmark was tested using 1000 and 1024 cores. For some core configurations, one may
get error messages similar to ``gpaw.grid_descriptor.BadGridError: Grid ... to small
for ... cores``. If one really wants to run the benchmark for such a core count,
one needs to adapt the values of `x`, `y` and `z` in `input.py`. However, this
changes the benchmark, so the results cannot easily be compared with runs that
use different values of these variables.
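As a sketch of what such an adaptation could look like (the real `input.py` defines far more than these variables; the file below is a minimal stand-in, and GNU `sed` is assumed):

```shell
# Minimal stand-in for the lattice variables of the Case M input file;
# the names x, y, z come from the benchmark description above.
cat > input_copy.py <<'EOF'
x = 3
y = 4
z = 4
EOF
# Enlarge the lattice so the grid decomposes over more cores (GNU sed -i):
sed -i -e 's/^x = .*/x = 4/' -e 's/^z = .*/z = 6/' input_copy.py
grep -E '^[xyz] = ' input_copy.py
```

Remember that any change to these variables changes the benchmark itself, so such runs should be reported separately.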
### Case L: Silicon cluster
and ScaLAPACK is used for parallelisation. Expected to scale up to 1000 nodes
and/or 10000 MPI tasks.

Input file: [benchmark/3_L_silicon-cluster/input.py](benchmark/3_L_silicon-cluster/input.py)
## Mechanics of building the benchmark
to a version numbering scheme based on year, month and patchlevel, e.g.,
19.8.1 for the second version released in August 2019.
Another change is in the Python packages used to install GPAW. Versions up to
and including 19.8.1 use the `distutils` package, while versions 20.1.0 and later
are based on `setuptools`. This affects the installation process.

For a while now, GPAW has supported two different ways to run in parallel
distributed memory mode:
* Using a wrapper executable `gpaw-python` that replaces the Python interpreter
  (it internally links to the libpython library) and that provides the MPI
  functionality.
* Using the standard Python interpreter, with the MPI functionality included in
  the `_gpaw.so` shared library.

In the `distutils`-based versions, the wrapper script approach is the default
behaviour, while in the `setuptools`-based versions the manual recommends the
approach using the standard Python interpreter. Even though the code in the
`setuptools`-based versions still includes the option to use the wrapper script
approach, it does not work in the tested version 20.1.0.
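The practical difference between the two modes shows mainly in the launch line. A minimal sketch, assuming a Slurm system and that `gpaw-python` is only present in wrapper-mode builds:

```shell
# Pick the launch line based on which flavour is installed: gpaw-python only
# exists in wrapper-mode (distutils-era) builds.
if command -v gpaw-python >/dev/null 2>&1 ; then
    launcher="gpaw-python"   # wrapper executable provides the MPI functionality
else
    launcher="python3"       # setuptools era: MPI support lives in _gpaw.so
fi
echo "launch line: srun $launcher input.py"
```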
### Available instructions
The [GPAW wiki](https://wiki.fysik.dtu.dk/gpaw/) only contains the
[installation instructions](https://wiki.fysik.dtu.dk/gpaw/install.html) for the current version.
For the installation instructions with a list of dependencies for older versions,
download the code (see below) and look for the file `doc/install.rst` or go to the
[GPAW GitLab](https://gitlab.com/gpaw), select the tag for the desired version and
view the file `doc/install.rst`.
The [GPAW wiki](https://wiki.fysik.dtu.dk/gpaw/) also provides some
[platform specific examples](https://wiki.fysik.dtu.dk/gpaw/platforms/platforms.html).
### List of dependencies
GPAW is Python code but it also contains some C code for some performance-critical
parts and to interface to a number of libraries on which it depends.
Hence GPAW has the following requirements:
* BLAS, LAPACK, BLACS and ScaLAPACK. ScaLAPACK is optional for GPAW, but mandatory
  for the UEABS benchmarks. It is used by the medium and large cases and optional
  for the small case.
* Python. GPAW 1.5.2 requires Python 2.7 or 3.4-3.7, GPAW 19.8.1 requires
  Python 3.4-3.7, GPAW 20.1.0 Python 3.5-3.8 and GPAW 20.10.0 Python 3.6-3.9.
* Mandatory Python packages:
  * [NumPy](https://pypi.org/project/numpy/) 1.9 or later (for GPAW 1.5.2/19.8.1/20.1.0/20.10.0)
  * [SciPy](https://pypi.org/project/scipy/) 0.14 or later (for GPAW 1.5.2/19.8.1/20.1.0/20.10.0)
* [FFTW](http://www.fftw.org) is highly recommended. As long as the optional libvdwxc
  component is not used, the MKL FFTW wrappers can also be used. Recent versions of
  GPAW also show good performance using just the NumPy-provided FFT routines, provided
  that NumPy has been built with a highly optimized FFT library.
* [LibXC](https://www.tddft.org/programs/libxc/) 2.X or newer for GPAW 1.5.2,
  3.X or 4.X for GPAW 19.8.1, 20.1.0 and 20.10.0. LibXC is a library
  of exchange-correlation functionals for density-functional theory. None of these
  versions currently mentions LibXC 5.X as officially supported.
* [ASE, Atomic Simulation Environment](https://wiki.fysik.dtu.dk/ase/), a Python package
  from the same group that develops GPAW.
  * Check the release notes of GPAW as the releases of ASE and GPAW should match.
  [LCAO mode](https://wiki.fysik.dtu.dk/gpaw/documentation/lcao/lcao.html)
In addition, the GPU version needs:
* NVIDIA CUDA toolkit
* [PyCUDA](https://pypi.org/project/pycuda/)
Installing GPAW also requires a number of standard build tools on the system, including:
* [GNU autoconf](https://www.gnu.org/software/autoconf/), needed to generate the
  configure script for libxc
* [GNU Libtool](https://www.gnu.org/software/libtool/). If it is not found,
  the configure process of libxc produces very misleading error messages that do
  not immediately point to libtool missing.
* [GNU make](https://www.gnu.org/software/make/)
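A quick sanity check for these tools before starting a build might look like the following (tool names as invoked on a typical Linux system; `libtoolize` is the executable GNU Libtool installs):

```shell
# Report which of the required build tools are on PATH.
for tool in autoreconf libtoolize make ; do
    if command -v "$tool" >/dev/null 2>&1 ; then
        echo "$tool: found"
    else
        echo "$tool: MISSING"
    fi
done | tee buildtools.txt
```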
### Download of GPAW
### Install
Crucial for the configuration of GPAW is a proper `customize.py` (GPAW 19.8.1 and
earlier) or `siteconfig.py` (GPAW 20.1.0 and later) file. The defaults used by GPAW
may not offer optimal performance and the automatic detection of the libraries also
fails on some systems.
The UEABS repository contains additional instructions:
* [general instructions](build/build-cpu.md)
* [GPGPUs](build/build-cuda.md) - To check
Example [build scripts](build/examples/) are also available for some PRACE and non-PRACE
systems.
### Running the benchmarks
#### Using the `gpaw-python` wrapper script

This is the default approach for versions up to and including 19.8.1 of GPAW.
These versions of GPAW come with their own wrapper executable, `gpaw-python`,
to start an MPI-based GPAW run.
```
srun gpaw-python input.py
```
#### Using the regular Python interpreter and parallel GPAW shared library

This is the default method for GPAW 20.1.0 (and likely later).
The wrapper executable `gpaw-python` is no longer available in the default parallel
build of GPAW. There are now two different ways to start GPAW.
Example [job scripts](scripts/) (`scripts/job-*.sh`) are provided for
different PRACE systems that may offer a helpful starting point.

*TODO: Update the examples as testing on other systems goes on.*
## Verification of Results
### Case M: Copper filament

TODO. Convergence problems.

### Case L: Silicon cluster

TODO. Get the medium case to run before spending time on the large one.
# Detailed GPAW installation instructions on non-accelerated systems
These instructions are in addition to the brief instructions in [README.md](../README.md).
## Detailed dependency list
### Libraries and Python interpreter
GPAW needs (for the UEABS benchmarks)
* [Python](https://www.python.org/): GPAW 1.5.2 supports Python 2.7 and 3.4-3.7.
GPAW 19.8.1 needs Python 3.4-3.7 and GPAW 20.1.0 requires Python 3.5-3.8.
* [MPI library](https://www.mpi-forum.org/)
* [LibXC](https://www.tddft.org/programs/libxc/). GPAW 1.5.2 requires LibXC 2.X
  or later. GPAW 19.8.1 and 20.1.0 need LibXC 3.x or 4.x.
* (Optimized) [BLAS](http://www.netlib.org/blas/) and
[LAPACK](http://www.netlib.org/lapack/) libraries.
There are both commercial and free and open source versions of these libraries.
Using the [reference implementation of BLAS from netlib](http://www.netlib.org/blas/)
will give very poor performance. Most optimized LAPACK libraries actually only
optimize a few critical routines while the remaining routines are compiled from
the reference version. Most processor vendors for HPC machines and system vendors
offer optimized versions of these libraries.
* [ScaLAPACK](http://www.netlib.org/scalapack/) and the underlying communication
layer [BLACS](http://www.netlib.org/blacs/).
* [FFTW](http://www.fftw.org/) or compatible FFT library.
For the UEABS benchmarks, the double precision, non-MPI version is sufficient.
GPAW also works with the
[Intel MKL](https://software.intel.com/content/www/us/en/develop/tools/math-kernel-library.html)
FFT routines when using the FFTW wrappers provided with that product.
For the GPU version, the following packages are needed in addition to the packages
above:
* CUDA toolkit
* [PyCUDA](https://pypi.org/project/pycuda/)
Optional components of GPAW that are not used by the UEABS benchmarks:
* [libvdwxc](https://gitlab.com/libvdwxc/libvdwxc), a portable C library
of density functionals with van der Waals interactions for density functional theory.
This library does not work with the MKL FFTW wrappers as it needs the MPI version
of the FFTW libraries too.
* [ELPA](https://elpa.mpcdf.mpg.de/),
which should improve performance for large systems when GPAW is used in
[LCAO mode](https://wiki.fysik.dtu.dk/gpaw/documentation/lcao/lcao.html)
### Python packages
GPAW needs the following Python packages:
* [wheel](https://pypi.org/project/wheel/), needed in most (if not all) ways of
  installing the packages from source.
* [NumPy](https://pypi.org/project/numpy/) 1.9 or later (for GPAW 1.5.2/19.8.1/20.1.0/20.10.0)
* Installing NumPy from source will also require
[Cython](https://pypi.org/project/Cython/)
* GPAW 1.5.2 is not fully compatible with NumPy 1.19.x. Warnings about the use
of deprecated constructs will be shown.
* [SciPy](https://pypi.org/project/scipy/) 0.14 or later (for GPAW 1.5.2/19.8.1/20.1.0/20.10.0)
* [ASE, Atomic Simulation Environment](https://wiki.fysik.dtu.dk/ase/), a Python package
from the same group that develops GPAW. Required versions are 3.17.0 or later for
GPAW 1.5.2 and 3.18.0 or later for GPAW 19.8.1 or 20.1.0.
ASE has a couple of dependencies
that are not needed for running the UEABS benchmarks. However, several Python
package install methods will trigger the installation of those packages, which
in turn may require a chain of system libraries.
* ASE does need NumPy and SciPy, but these are needed anyway for GPAW.
* [matplotlib](https://pypi.org/project/matplotlib/), at least version 2.0.0.
This package is optional and not really needed to run the benchmarks.
Matplotlib pulls in a lot of other dependencies. When installing ASE with pip,
it will try to pull in matplotlib and its dependencies:
* [pillow](https://pypi.org/project/Pillow/) needs several external
libraries. During the development of the benchmarks, we needed at least
zlib, libjpeg-turbo (or a compatible libjpeg library) and freetype. Even
though the pillow documentation claimed that libjpeg was optional,
it refused to install without it.
* [kiwisolver](https://pypi.org/project/kiwisolver/): Contains C++-code
* [pyparsing](https://pypi.org/project/pyparsing/)
* [Cycler](https://pypi.org/project/Cycler/), which requires
* [six](https://pypi.org/project/six/)
* [python-dateutil](https://pypi.org/project/python-dateutil/), which also
requires
* [six](https://pypi.org/project/six/)
* [Flask](https://pypi.org/project/Flask/) is an optional dependency of ASE
that is not automatically pulled in by `pip` in versions of ASE tested during
the development of this version of the UEABS. It has a number of dependencies
too:
* [Jinja2](https://pypi.org/project/Jinja2/)
* [MarkupSafe](https://pypi.org/project/MarkupSafe/), contains some C
code
* [itsdangerous](https://pypi.org/project/itsdangerous/)
* [Werkzeug](https://pypi.org/project/Werkzeug/)
* [click](https://pypi.org/project/click/)
## Tested configurations
* Python
* Libraries used during the installation of Python:
* ncurses 6.2
* libreadline 8.0, as it makes life easier when using the command line
interface of Python (and, in the case of an EasyBuild Python, because EasyBuild
requires it)
* libffi 3.3
* zlib 1.2.11
* OpenSSL 1.1.1g, but only when EasyBuild was used, as EasyBuild requires it.
* SQLite 3.33.0, as one of the tests in some versions of GPAW requires it to
succeed.
* Python will of course pick up several other libraries that it might find on
the system. The benchmark installation was tested on a system with very few
the system. The benchmark installation was tested on a system with very few
development packages of libraries installed in the system image. The Tcl/Tk
and SQLite3 development packages in particular were not installed, so the
standard Python library packages sqlite3 and tkinter were not fully functional.
* Python packages
* wheel
* Cython
* NumPy
* SciPy
* ASE
* GPAW
The table below gives the combinations of the major packages Python, NumPy, SciPy, ASE and
GPAW that were tested:
| Python | NumPy | SciPy | ASE | GPAW |
|:-------|:-------|:------|:-------|:--------|
| 3.7.9 | 1.18.5 | 1.4.1 | 3.17.0 | 1.5.2 |
| 3.7.9 | 1.18.5 | 1.4.1 | 3.18.2 | 19.8.1 |
| 3.8.6 | 1.18.5 | 1.4.1 | 3.19.3 | 20.1.0 |
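For quick reference in scripts, the tested combinations can be captured in a small helper function; this is purely a restatement of the table above:

```shell
# Lookup of the tested package combinations, keyed by GPAW version
# (data copied verbatim from the table above).
tested_combo() {
    case "$1" in
        1.5.2)  echo "Python 3.7.9, NumPy 1.18.5, SciPy 1.4.1, ASE 3.17.0" ;;
        19.8.1) echo "Python 3.7.9, NumPy 1.18.5, SciPy 1.4.1, ASE 3.18.2" ;;
        20.1.0) echo "Python 3.8.6, NumPy 1.18.5, SciPy 1.4.1, ASE 3.19.3" ;;
        *)      echo "not tested" ;;
    esac
}
tested_combo 20.1.0
```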
## Installing all prerequisites
We do not include the optimized mathematical libraries (BLAS, LAPACK,
FFT library, ...) in the instructions, as these libraries should be standard on any
optimized HPC system. The instructions below will also need to be adapted to the
specific libraries that are being used.
Other prerequisites:
* libxc
* Python interpreter
* Python package NumPy
* Python package SciPy
* Python package ase
### Installing libxc
* Installing libxc requires GNU autoconf and GNU libtool besides GNU make and a
C compiler. The build process is the usual GNU configure - make - make install
cycle, but the `configure` script first needs to be generated with autoreconf.
* Download libxc:
* The latest version of libxc can be downloaded from
[the libxc download page](https://www.tddft.org/programs/libxc/download/).
However, that version may not be officially supported by GPAW.
* It is also possible to download all recent versions of libxc from
[the libxc GitLab](https://gitlab.com/libxc/libxc)
* Select the tag corresponding to the version you want to download in the
branch/tag selection box.
* Then use the download button and select the desired file type.
* Download URLs look like `https://gitlab.com/libxc/libxc/-/archive/4.3.4/libxc-4.3.4.tar.bz2`.
* Untar the file in the build directory.
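The overall cycle sketched above could then look as follows. This is a dry run that only prints the steps, since the actual build needs the libxc sources; the version number and install prefix are examples, and compiler flags are omitted:

```shell
# Dry run of the libxc build cycle: autoreconf, then configure/make/make install.
LIBXC_VERSION=4.3.4
PREFIX="$HOME/apps/libxc/$LIBXC_VERSION"
for step in "tar -xf libxc-$LIBXC_VERSION.tar.bz2" \
            "autoreconf -i" \
            "./configure --prefix=$PREFIX" \
            "make" "make check" "make install" ; do
    echo "  $step"
done > libxc-steps.txt
cat libxc-steps.txt
```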
### Installing Python from scratch
The easiest way to get Python on your system is to download an existing distribution
(one will likely already be installed on your system). Python itself does have a lot
of dependencies though, definitely in its Standard Python Library. Many of the
standard packages are never needed when executing the benchmark cases. Isolating
them to compile a Python with minimal dependencies is beyond the scope of these
instructions, though. We did compile Python without the libraries needed for the
standard library packages sqlite3 and tkinter (the latter needing Tcl/Tk).
Even though GPAW contains a lot of Python code, the Python interpreter is not the main
performance-determining factor in the GPAW benchmark. Having a properly optimized installation
of NumPy, SciPy and GPAW itself proves much more important.
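A quick check of which interpreter a build environment will pick up, to compare against the supported ranges listed in the dependency section:

```shell
# Print the version of the python3 on PATH; compare it against the range
# required by the chosen GPAW release.
python3 -c 'import sys; print("Python %d.%d" % sys.version_info[:2])' > pyver.txt
cat pyver.txt
```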
### Installing NumPy
* As NumPy relies on optimized libraries for its performance, one should carefully
select which NumPy package to download, or install NumPy from sources. How crucial
this is, depends on the version of GPAW and the options selected when building
GPAW.
* Given that GPAW also uses optimized libraries, it is generally advised to install
  NumPy from sources, to ensure that it uses the same libraries as GPAW and to
  prevent conflicts between libraries that might otherwise occur.
* In most cases, NumPy will need a `site.cfg` file to point to the optimized libraries.
See the examples for various systems and the file `site.cfg.example` included in
the NumPy sources.
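As an illustration, a hypothetical `site.cfg` for a system with OpenBLAS; the section name follows NumPy's `site.cfg.example`, while the paths are assumptions that must match the actual installation:

```shell
# Write a minimal site.cfg pointing NumPy at an (assumed) OpenBLAS install.
cat > site.cfg <<'EOF'
[openblas]
libraries = openblas
library_dirs = /opt/openblas/lib
include_dirs = /opt/openblas/include
EOF
grep -c '\[openblas\]' site.cfg
```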
### Installing SciPy
* Just as NumPy, SciPy relies on optimized libraries for its performance. It should
be installed after NumPy as it does get the information about which libraries to
use from NumPy. Hence, when installing pre-built binaries, make sure they match
the NumPy binaries used.
* Just as is the case for NumPy, it may be better to install SciPy from sources.
[Instructions for installing SciPy from source can be found on the SciPy GitHub
site](https://github.com/scipy/scipy/blob/master/INSTALL.rst.txt).
### Installing ase
* Just as for any user-installed Python package, make sure you have created a
directory to install Python packages to and have added it to the front of PYTHONPATH.
* ase is [available on PyPi](https://pypi.org/project/ase/). It is also possible
to [see a list of previous releases](https://pypi.org/project/ase/#history).
* The easiest way to install ase is using `pip`, which will automatically download
  the requested version.
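A sketch of such a user-level install; the package directory and the pinned version are examples, and the `pip` line is left commented since it needs network access:

```shell
# Create a user package directory (path and Python version are examples) and
# put it at the front of PYTHONPATH, as recommended above.
pkgdir="$HOME/python-pkgs"
mkdir -p "$pkgdir/lib/python3.7/site-packages"
export PYTHONPATH="$pkgdir/lib/python3.7/site-packages${PYTHONPATH:+:$PYTHONPATH}"
# Then install a release matching the GPAW version, e.g. for GPAW 1.5.2:
# pip install --prefix="$pkgdir" ase==3.17.0
echo "${PYTHONPATH%%:*}"
```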
## Configuring and installing GPAW
### GPAW 1.5.2
* GPAW 1.5.2 uses `distutils`. Customization of the installation process is possible
through the `customize.py` file.
* The FFT library: According to the documentation, the following strategy is used:
  * The compile process searches (in this order) for ``libmkl_rt.so``,
    ``libmkl_intel_lp64.so`` and ``libfftw3.so``. The first one found will be
    loaded.
* If none is found, the built-in FFT from NumPy will be used. This does not need
to be a problem if NumPy provides a properly optimized FFT library.
  * The choice can also be overridden using the `GPAW_FFTWSO` environment variable.
* With certain compilers, the GPAW test suite produced crashes in `xc/xc.py`. The
patch for GPAW 1.5.2 included in the [patches](patches) subdirectory solved these
problems on the systems tested.
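The run-time FFT override mentioned above can be exercised as follows; `libfftw3.so` is just one valid value, and the chosen library must be resolvable by the dynamic linker when GPAW starts:

```shell
# Force GPAW 1.5.2 to load FFTW at run time instead of what it would
# auto-detect; the value is a shared-library name, not a path.
export GPAW_FFTWSO="libfftw3.so"
echo "FFT override: $GPAW_FFTWSO"
```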
### GPAW 19.8.1
* GPAW 19.8.1 uses `distutils`. Customization of the installation process is possible
through a `customize.py` file.
* The selection process of the FFT library has changed from version 1.5.2. It is
now possible to specify the FFT library in `customize.py` or to simply select to
use the NumPy FFT routines.
### GPAW 20.1.0 and 20.10.0
* GPAW 20.1.0 uses `setuptools`. Customization of the installation process is possible
through the `siteconfig.py` file.
* The selection process of the FFT library is the same as in version 19.8.1, except
  that the settings are now in `siteconfig.py` rather than `customize.py`.
### All versions
* GPAW also needs a number of so-called "Atomic PAW Setup" files. The latest files
can be found on the [GPAW website, Atomic PAW Setups page](https://wiki.fysik.dtu.dk/gpaw/setups/setups.html).
  For the testing we used [`gpaw-setups-0.9.20000.tar.gz`](https://wiki.fysik.dtu.dk/gpaw-files/gpaw-setups-0.9.20000.tar.gz)
  for all versions of GPAW. The easiest way to install these files is to simply untar
  the file and set the environment variable `GPAW_SETUP_PATH` to point to that directory.
In the examples provided we use the `share/gpaw-setups` subdirectory of the install
directory for this purpose.
* Up to and including version 20.1.0, GPAW comes with a test suite which can be
  used after installation.
* Running the sequential tests:
gpaw test
Help is available through
gpaw test -h
* Running those tests, but using multiple cores (e.g., 4):
gpaw test -j 4
    We did experience that crashes causing segmentation faults go unnoticed
    in this setup: they are not reported as failed tests.
* Running the parallel benchmarks on a SLURM cluster will depend on the version of GPAW.
* Versions that build the parallel interpreter (19.8.1 and older):
srun -n 4 gpaw-python -m gpaw test
* Versions with the parallel so library using the regular Python interpreter (20.1.0 and above):
srun -n 4 python -m gpaw test
* Depending on the Python installation, some tests may fail with error messages that point
to a package in the Standard Python Library that is not present. Some of these errors have no
influence on the benchmarks as that part of the code is not triggered by the benchmark.
* The full test suite is missing in GPAW 20.10.0. There is a brief sequential test
that can be run with
gpaw test
and a parallel one that can be run with
gpaw -P 4 test
* Multiple versions of GPAW likely contain a bug in `c/bmgs/fd.c` (around line 44
in GPAW 1.5.2). The code enforces vectorization on OpenMP 4 compilers by using
`#pragma omp simd`. However, it turns out that the data is not always correctly
aligned, so if the reaction of the compiler to `#pragma omp simd` is to fully vectorize
and use load/store instructions for aligned data, crashes may occur. It did happen
during the benchmark development when compiling with the Intel C compiler. The
solution for that compiler is to add `-qno-openmp-simd` to the compiler flags.
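For the `distutils`-based versions, that workaround can be applied through `customize.py`; `extra_compile_args` is assumed to be the flag list that GPAW's template `customize.py` defines:

```shell
# Append the Intel workaround flag to the GPAW build configuration
# (extra_compile_args is assumed to already exist in customize.py).
cat >> customize.py <<'EOF'
extra_compile_args += ['-qno-openmp-simd']
EOF
grep 'openmp-simd' customize.py
```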
## Problems observed during testing
* On AMD Epyc systems, there seems to be a bug in the Intel MKL FFT libraries/FFTW
wrappers in the 2020 compilers. Downgrading to the MKL libraries of the 2018
compilers or using the FFTW libraries solves the problem.
This has been observed not only in GPAW, but also in some other DFT packages.
* The GPAW test code in versions 1.5.2 up to 20.1.0 detects that matplotlib is not installed
  and will skip the tests that need it. We did however observe a failed test when Python could
  not find the SQLite package, as the Python standard library sqlite3 package is used.
#!/bin/bash
#
# Installation script for GPAW 1.5.2:
# * Using the existing IntelPython3 module on the system which has an optimized
# NumPy and SciPy included.
# * Using the matching version of ase, 3.17.0
# * Compiling with the Intel compilers
#
# The FFT library is discovered at runtime. With the settings used in this script
# this should be MKL FFT, but it is possible to change this at runtime to either
# MKL, FFTW or the built-in NumPy FFT routines, see the installation instructions
# (link below).
#
# The original installation instructions for GPAW can be found at
# https://gitlab.com/gpaw/gpaw/-/blob/1.5.2/doc/install.rst
#
packageID='1.5.2-IntelPython3-icc'
install_root=$VSC_SCRATCH/UEABS
systemID=CalcUA-vaughan-rome
download_dir=$install_root/Downloads
install_dir=$install_root/$systemID/Packages/GPAW-manual/$packageID
modules_dir=$install_root/$systemID/Modules/GPAW-manual
build_dir="/dev/shm/$USER/GPAW-manual/$packageID"
patch_dir=$VSC_DATA/Projects/PRACE/GPAW-experiments/UEABS/build/patches
libxc_version='4.3.4'
ase_version='3.17.0'
GPAW_version='1.5.2'
GPAWsetups_version='0.9.20000' # Check version on https://wiki.fysik.dtu.dk/gpaw/setups/setups.html
py_maj_min='3.7'
################################################################################
#
# Prepare the system
#
#
# Load modules
#
module purge
module load calcua/2020a
module load intel/2020a
module load IntelPython3/2020a
module load buildtools/2020a
#
# Create the directories and make sure they are clean if that matters
#
/usr/bin/mkdir -p $download_dir
/usr/bin/mkdir -p $install_dir
/usr/bin/rm -rf $install_dir
/usr/bin/mkdir -p $install_dir
/usr/bin/mkdir -p $modules_dir