# GPAW - A Projector Augmented Wave code

## Summary version

1.0

## Purpose of the benchmark

[GPAW](https://wiki.fysik.dtu.dk/gpaw/) is a density-functional theory (DFT) program for ab initio electronic structure calculations using the projector augmented wave method. It uses a uniform real-space grid representation of the electronic wavefunctions, which allows for excellent computational scalability and systematic convergence properties.

The GPAW benchmark tests MPI parallelization and the quality of the provided mathematical libraries, including BLAS, LAPACK, ScaLAPACK, and an FFTW-compatible FFT library. There is also a CUDA-based implementation for GPU systems.

## Characteristics of the benchmark

GPAW is written mostly in Python, but it also includes computational kernels written in C and leverages external libraries such as NumPy, BLAS and ScaLAPACK. Parallelisation is based on message passing using MPI, with no support for multithreading.

Various ports for GPGPUs and MICs have been developed in the past, using either CUDA or pyMIC/libxstream, but many of those branches are no longer actively developed. The relevant CUDA version for this benchmark is available in a [separate GitLab for CUDA development, cuda branch](https://gitlab.com/mlouhivu/gpaw/tree/cuda). There is currently no active support for non-CUDA accelerator platforms.

For the UEABS benchmark version 2.2, the following versions of GPAW were tested:

* CPU-based:
  * Version 20.1.0, as this is the version on which the most recent GPU commits are based.
  * Version 20.10.0, as it was the most recent version during the development of the UEABS 2.2 benchmark suite.
* GPU-based: There is no official release of the GPU version, and at the time of the UEABS 2.2 release it was under heavy development to also support AMD GPUs. Hence the GPU version ([the cuda branch of the GitLab for CUDA development](https://gitlab.com/mlouhivu/gpaw/tree/cuda)) is not officially supported in UEABS version 2.2.

Versions 1.5.2 and 19.8.1 were also considered but are not compatible with the regular input files provided here, so support for those versions of GPAW was dropped in this version of the UEABS.

There are three benchmark cases, denoted S, M and L.

### Case S: Carbon nanotube

A ground state calculation for a carbon nanotube in vacuum. By default it uses a 6-6-10 nanotube with 240 atoms (freely adjustable) and serial LAPACK, with an option to use ScaLAPACK. Expected to scale up to 10 nodes and/or 100 MPI tasks. This benchmark runs fast: expect execution times of around 1 minute on 100 cores of a modern x86 cluster.

Input file: [benchmark/1_S_carbon-nanotube/input.py](benchmark/1_S_carbon-nanotube/input.py)

This input file still works with versions 1.5.2 and 19.8.1 of GPAW.

### Case M: Copper filament

A ground state calculation for a copper filament in vacuum. By default it uses a 3x4x4 FCC lattice with 71 atoms (freely adjustable through the variables `x`, `y` and `z` in the input file) and ScaLAPACK for parallelisation. Expected to scale up to 100 nodes and/or 1000 MPI tasks.

Input file: [benchmark/2_M_copper-filament/input.py](benchmark/2_M_copper-filament/input.py)

This input file does not work with GPAW 1.5.2 and 19.8.1; it requires GPAW 20.1.0 or 20.10.0. Please try older versions of the UEABS if you want to use those versions of GPAW.

The benchmark runs best when using full nodes; expect a performance drop on other configurations.

### Case L: Silicon cluster

A ground state calculation for a silicon cluster in vacuum. By default the cluster has a radius of 15 Å (freely adjustable) and consists of 702 atoms, and ScaLAPACK is used for parallelisation. Expected to scale up to 1000 nodes and/or 10000 MPI tasks.

Input file: [benchmark/3_L_silicon-cluster/input.py](benchmark/3_L_silicon-cluster/input.py)

This input file does not work with GPAW 1.5.2 and 19.8.1; it requires GPAW 20.1.0 or 20.10.0. Please try older versions of the UEABS if you want to use those versions of GPAW.
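All three cases follow the same pattern: the input file builds an atomic structure with ASE, attaches a GPAW calculator to it and runs a ground-state calculation. The snippet below is only a minimal sketch of that pattern, not one of the benchmark inputs (the structure size, grid spacing and output file are arbitrary choices here); for actual benchmark runs, use the unmodified input files linked above.

```python
from ase.build import nanotube
from gpaw import GPAW
from gpaw.mpi import world

# Build a small (6,6) carbon nanotube with ASE and surround it with vacuum.
# The length and the vacuum width are arbitrary, not the benchmark settings.
atoms = nanotube(6, 6, length=4)
atoms.center(vacuum=4.0)

# Real-space (finite-difference) ground-state calculation; parallelisation
# settings are left at GPAW's defaults in this sketch.
atoms.calc = GPAW(mode='fd', h=0.2, txt='output.txt')
energy = atoms.get_potential_energy()

if world.rank == 0:
    print('Ground-state energy (eV):', energy)
```

Such a script is run in parallel exactly as described in the section on running the benchmarks below, e.g. with `srun gpaw python input.py`.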
## Mechanics of building the benchmark

Installing and running GPAW has changed a lot since the previous versions of the UEABS.

GPAW version numbering changed in 2019. Version 1.5.3 is the last version with the old numbering. In 2019 the development team switched to a version numbering scheme based on year, month and patch level, e.g., 19.8.1 for the second version released in August 2019.

Another change is in the Python packages used to install GPAW. Versions up to and including 19.8.1 use the `distutils` package, while versions 20.1.0 and later are based on `setuptools`. This does affect the installation process.

Running GPAW is also no longer done via a wrapper executable `gpaw-python` that replaces the Python interpreter (it internally links to the libpython library) and provides the MPI functionality. Since version 20.1.0, the standard Python interpreter is used and the MPI functionality is included in the `_gpaw.so` shared library.

### Available instructions

The [GPAW wiki](https://wiki.fysik.dtu.dk/gpaw/) only contains the [installation instructions](https://wiki.fysik.dtu.dk/gpaw/install.html) for the current version. For the installation instructions with a list of dependencies for older versions, download the code (see below) and look for the file `doc/install.rst`, or go to the [GPAW GitLab](https://gitlab.com/gpaw), select the tag for the desired version and view the file `doc/install.rst` there.

The [GPAW wiki](https://wiki.fysik.dtu.dk/gpaw/) also provides some [platform specific examples](https://wiki.fysik.dtu.dk/gpaw/platforms/platforms.html).

### List of dependencies

GPAW is mostly Python code, but it also contains C code for some performance-critical parts and to interface to a number of libraries on which it depends. Hence GPAW has the following requirements:

* C compiler with MPI support
* BLAS, LAPACK, BLACS and ScaLAPACK. ScaLAPACK is optional for GPAW, but mandatory for the UEABS benchmarks. It is used by the medium and large cases and is optional for the small case.
* Python. GPAW 20.1.0 requires Python 3.5-3.8 and GPAW 20.10.0 Python 3.6-3.9.
* Mandatory Python packages:
  * [NumPy](https://pypi.org/project/numpy/) 1.9 or later (for GPAW 20.1.0/20.10.0). GPAW versions before 20.10.0 produce warnings when used with NumPy 1.19.x.
  * [SciPy](https://pypi.org/project/scipy/) 0.14 or later (for GPAW 20.1.0/20.10.0)
* [FFTW](http://www.fftw.org) is highly recommended. As long as the optional libvdwxc component is not used, the MKL FFTW wrappers can also be used. Recent versions of GPAW also show good performance using just the NumPy-provided FFT routines, provided that NumPy has been built with a highly optimized FFT library.
* [LibXC](https://www.tddft.org/programs/libxc/) 3.X or 4.X for GPAW 20.1.0 and 20.10.0. LibXC is a library of exchange-correlation functionals for density-functional theory. None of these versions currently mentions LibXC 5.X as officially supported.
* [ASE, Atomic Simulation Environment](https://wiki.fysik.dtu.dk/ase/), a Python package from the same group that develops GPAW.
  * Check the release notes of GPAW, as the releases of ASE and GPAW should match. The benchmarks were tested using ASE 3.19.3 with GPAW 20.1.0 and ASE 3.20.1 with GPAW 20.10.0.
  * ASE has some optional dependencies that are not needed for the benchmarking: Matplotlib (2.0.0 or newer), tkinter (Tk interface, part of the Python Standard Library) and Flask.
* Optional components of GPAW that are not used by the UEABS benchmarks:
  * [libvdwxc](https://gitlab.com/libvdwxc/libvdwxc), a portable C library of density functionals with van der Waals interactions for density functional theory. This library does not work with the MKL FFTW wrappers.
  * [ELPA](https://elpa.mpcdf.mpg.de/), which should improve performance for large systems when GPAW is used in [LCAO mode](https://wiki.fysik.dtu.dk/gpaw/documentation/lcao/lcao.html).

In addition, the GPU version needs:

* NVIDIA CUDA toolkit
* [PyCUDA](https://pypi.org/project/pycuda/)

Installing GPAW also requires a number of standard build tools on the system, including:

* [GNU autoconf](https://www.gnu.org/software/autoconf/), needed to generate the configure script for LibXC
* [GNU Libtool](https://www.gnu.org/software/libtool/). If it is not found, the configure process of LibXC produces very misleading error messages that do not immediately point to Libtool missing.
* [GNU make](https://www.gnu.org/software/make/)
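On most systems the compilers, MPI and the mathematical libraries will come from the module system, while the Python packages can be collected in a virtual environment. The sketch below is only an illustration of the Python side; the package versions and the choice of Python interpreter are placeholders, and the build instructions and example scripts referenced further down describe the procedure actually used for the benchmarks.

```bash
# Sketch only: collect the Python-side dependencies in a virtual environment.
python3 -m venv gpaw-venv
source gpaw-venv/bin/activate

# Pin ASE to the release matching the targeted GPAW version
# (here ASE 3.20.1 for GPAW 20.10.0, as noted above).
pip install numpy scipy ase==3.20.1

# Quick sanity check of the versions that the GPAW build will pick up.
python3 -c "import numpy, scipy, ase; print(numpy.__version__, scipy.__version__, ase.__version__)"
```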
### Download of GPAW

GPAW is freely available under the GPL license. The source code of the CPU version can be downloaded from the [GitLab repository](https://gitlab.com/gpaw/gpaw) or as a tar package for each release from [PyPI](https://pypi.org/simple/gpaw/). For example, to get version 20.1.0 using git:

```bash
git clone -b 20.1.0 https://gitlab.com/gpaw/gpaw.git
```

The CUDA development version is available in [the cuda branch of a separate GitLab repository](https://gitlab.com/mlouhivu/gpaw/tree/cuda). To get the current development version using git:

```bash
git clone -b cuda https://gitlab.com/mlouhivu/gpaw.git
```

### Install

Crucial for the configuration of GPAW is a proper `siteconfig.py` file (GPAW 20.1.0 and later; earlier versions used `customize.py` instead). The defaults used by GPAW may not offer optimal performance, and the automatic detection of the libraries also fails on some systems.

The UEABS repository contains additional instructions:

* [general instructions](build/build-CPU.md)

Example [build scripts](build/examples/) are also available.

## Mechanics of Running the Benchmark

### Download of the benchmark sets

As each benchmark case has only a single input file, these can be downloaded right from this repository:

1. [Testcase S: Carbon nanotube input file](benchmark/1_S_carbon-nanotube/input.py)
2. [Testcase M: Copper filament input file](benchmark/2_M_copper-filament/input.py)
3. [Testcase L: Silicon cluster input file](benchmark/3_L_silicon-cluster/input.py)

### Running the benchmarks

These instructions are exclusively for GPAW 20.1.0 and later.

There are two different ways to start GPAW. One way is through `mpirun`, `srun` or an equivalent process starter and the `gpaw python` command:

```
srun gpaw python input.py
```

The second way is to simply use the `-P` flag of the `gpaw` command and let it invoke a process starter internally:

```
gpaw -P 100 python input.py
```

This will run on 100 cores.
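Either way of starting GPAW combines naturally with the batch system. As an illustration, a Slurm job script for the medium case could look like the sketch below; the module name, partition and the node and task counts are placeholders that have to be adapted to the target system and test case.

```bash
#!/bin/bash
#SBATCH --job-name=GPAW-M
#SBATCH --partition=standard      # placeholder partition name
#SBATCH --nodes=8                 # use full nodes, as recommended above
#SBATCH --ntasks-per-node=64      # adapt to the number of cores per node
#SBATCH --time=01:00:00

# Placeholder: load whatever provides GPAW and its dependencies on the system.
module load GPAW/20.10.0

# GPAW itself is not multithreaded; avoid oversubscription by threaded BLAS.
export OMP_NUM_THREADS=1

srun gpaw python input.py
```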
There is a third, but not recommended, option:

```
srun python3 input.py
```

That option, however, does not do the imports in the same way the `gpaw` script would.

## Verification of Results

The results of the benchmarks can be verified with the following piece of code:

```bash
bmtime=$(grep "Total:" output.txt | sed -e 's/Total: *//' | cut -d " " -f 1)
iterations=$(grep "Converged after" output.txt | cut -d " " -f 3)
dipole=$(grep "Dipole" output.txt | cut -d " " -f 5 | sed -e 's/)//')
fermi=$(grep "Fermi level:" output.txt | cut -d ":" -f 2 | sed -e 's/ //g')
energy=$(grep "Extrapolated: " output.txt | cut -d ":" -f 2 | sed -e 's/ //g')
echo -e "\nResult information:\n" \
  " * Time: $bmtime s\n" \
  " * Number of iterations: $iterations\n" \
  " * Dipole (3rd component): $dipole\n" \
  " * Fermi level: $fermi\n" \
  " * Extrapolated energy: $energy\n"
```

The time is the time measured by GPAW itself for the main part of the computations and is what is used as the benchmark result. The other numbers can serve as verification of the results and were obtained with GPAW 20.1.0, 20.10.0 and 21.1.0.

### Case S: Carbon nanotube

The expected values are:

* Number of iterations: 12
* Dipole (3rd component): between -115.17 and -115.16
* Fermi level: between -4.613 and -4.611
* Extrapolated energy: between -2397.63 and -2397.62

### Case M: Copper filament

The expected values are:

* Number of iterations: 19
* Dipole (3rd component): between -80.51 and -80.50
* Fermi level: between -4.209 and -4.207
* Extrapolated energy: between -473.5 and -473.3

### Case L: Silicon cluster

With this test case, some of the results differ between versions 20.1.0 and 20.10.0 on the one hand and version 21.1.0 on the other hand.

For versions 20.1.0 and 20.10.0, the expected values are:

* Number of iterations: between 30 and 35
* Dipole (3rd component): between -0.493 and -0.491
* Fermi level: between -2.67 and -2.66
* Extrapolated energy: between -3784 and -3783

Note: though not used for the benchmarking in the final report, some testing was also done with version 21.1.0. In that version, some external library routines were replaced by new internal implementations, which causes changes in some results. For 21.1.0, the expected values are:

* Number of iterations: between 30 and 35
* Dipole (3rd component): between -0.462 and -0.461
* Fermi level: between -2.59 and -2.58
* Extrapolated energy: between -3784 and -3783
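To make the comparison with the expected values above mechanical, the variables set by the verification snippet can be checked against the ranges, for example for case S. The `check_range` helper below is only a sketch and not part of the benchmark suite.

```bash
# Sketch: check_range LABEL VALUE LOWER UPPER reports whether VALUE lies in [LOWER, UPPER].
check_range () {
    if awk -v v="$2" -v lo="$3" -v hi="$4" 'BEGIN { exit !(v >= lo && v <= hi) }'; then
        echo "OK   $1 = $2"
    else
        echo "FAIL $1 = $2 (expected between $3 and $4)"
    fi
}

# Expected ranges for case S (carbon nanotube); see the lists above for cases M and L.
check_range "Number of iterations"   "$iterations"      12       12
check_range "Dipole (3rd component)" "$dipole"        -115.17  -115.16
check_range "Fermi level"            "$fermi"           -4.613   -4.611
check_range "Extrapolated energy"    "$energy"       -2397.63 -2397.62
```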