diff --git a/gpaw/README.md b/gpaw/README.md index f39b861188e2b2f20f8fc8be5dced89534c8de75..32e4a9ee6ef4618517559d4b7c135543deea3161 100644 --- a/gpaw/README.md +++ b/gpaw/README.md @@ -6,15 +6,18 @@ ## Purpose of the benchmark -[GPAW](https://wiki.fysik.dtu.dk/gpaw/) is a density-functional theory (DFT) -program for ab initio electronic structure calculations using the projector -augmented wave method. It uses a uniform real-space grid representation of the -electronic wavefunctions that allows for excellent computational scalability -and systematic converge properties. +[GPAW](https://wiki.fysik.dtu.dk/gpaw/) is an efficient program package for electronic structure calculations based on the density functional theory (DFT) and the time-dependent density functional theory (TD-DFT). The density-functional theory allows studies of ground state properties such as energetics and equilibrium geometries, while the time-dependent density functional theory can be used for calculating excited state properties such as optical spectra. The program package includes two complementary implementations of time-dependent density functional theory: a linear response formalism and a time-propagation in real time. + +GPAW uses the projector augmented wave (PAW) method that allows one to get rid of the core electrons and work with soft pseudo valence wave functions. The PAW method can be applied on the same footing to all elements; for example, it provides a reliable description of the transition metal elements and the first row elements with open p-shells that are often problematic for standard pseudopotentials. A further advantage of the PAW method is that it is an all-electron method (frozen core approximation) and there is a one-to-one transformation between the pseudo and all-electron quantities. + +The equations of the (time-dependent) density functional theory within the PAW method are discretized using finite differences and uniform real-space grids.
The real-space representation allows flexible boundary conditions, as the system can be finite or periodic in one, two or three dimensions (e.g. cluster, slab, bulk). The accuracy of the discretization is controlled basically by a single parameter, the grid spacing. The real-space representation also allows efficient parallelization with domain decomposition. + +GPAW offers several parallelization levels. The most basic parallelization strategy is domain decomposition over the real-space grid. In magnetic systems it is possible to parallelize over spin, and in systems that have k-points (surfaces or bulk systems) parallelization over k-points is also possible. Furthermore, parallelization over electronic states is possible in DFT and in real-time TD-DFT calculations. GPAW is written in Python and C and parallelized with MPI. The GPAW benchmark tests MPI parallelization and the quality of the provided mathematical libraries, including BLAS, LAPACK, ScaLAPACK, and FFTW-compatible library. There is -also a CUDA-based implementation for GPU systems. +also a CUDA-based implementation for GPU systems, though it had no official +releases at the time of the development of this version of the UEABS. ## Characteristics of the benchmark @@ -47,10 +50,10 @@ Versions 1.5.2 and 19.8.1 were also considered but are not compatible with the r input files provided here. Hence support for those versions of GPAW was dropped in this version of the UEABS. -There are three benchmark cases, denotes S, M and L. +There are three benchmark cases, denoted A (small), B (medium) and C (large). -### Case S: Carbon nanotube +### Case A (small): Carbon nanotube A ground state calculation for a carbon nanotube in vacuum. By default uses a 6-6-10 nanotube with 240 atoms (freely adjustable) and serial LAPACK with an @@ -58,19 +61,19 @@ option to use ScaLAPACK. Expected to scale up to 10 nodes and/or 100 MPI tasks. This benchmark runs fast.
Expect execution times around 1 minute on 100 cores of a modern x86 cluster. -Input file: [benchmark/1_S_carbon-nanotube/input.py](benchmark/1_S_carbon-nanotube/input.py) +Input file: [benchmark/A_carbon-nanotube/input.py](benchmark/A_carbon-nanotube/input.py) This input file still works with versions 1.5.2 and 19.8.1 of GPAW. -### Case M: Copper filament +### Case B (medium): Copper filament A ground state calculation for a copper filament in vacuum. By default uses a 3x4x4 FCC lattice with 71 atoms (freely adjustable through the variables `x`, `y` and `z` in the input file) and ScaLAPACK for parallelisation. Expected to scale up to 100 nodes and/or 1000 MPI tasks. -Input file: [benchmark/2_M_copper-filament/input.py](benchmark/2_M_copper-filament/input.py) +Input file: [benchmark/B_copper-filament/input.py](benchmark/B_copper-filament/input.py) This input file does not work with GPAW 1.5.2 and 19.8.1. It requires GPAW 20.1.0 or 20.10.0. Please try older versions of the UEABS if you want to use @@ -80,14 +83,14 @@ The benchmark runs best when using full nodes. Expect a performance drop on other configurations. -### Case L: Silicon cluster +### Case C (large): Silicon cluster A ground state calculation for a silicon cluster in vacuum. By default the cluster has a radius of 15Å (freely adjustable) and consists of 702 atoms, and ScaLAPACK is used for parallelisation. Expected to scale up to 1000 nodes and/or 10000 MPI tasks. -Input file: [benchmark/3_L_silicon-cluster/input.py](benchmark/3_L_silicon-cluster/input.py) +Input file: [benchmark/C_silicon-cluster/input.py](benchmark/C_silicon-cluster/input.py) This input file does not work with GPAW 1.5.2 and 19.8.1. It requires GPAW 20.1.0 or 20.10.0. Please try older versions of the UEABS if you want to use @@ -138,16 +141,16 @@ Hence GPAW has the following requirements: for the small case. * Python. GPAW 20.1.0 requires Python 3.5-3.8 and GPAW 20.10.0 Python 3.6-3.9.
* Mandatory Python packages: - * [NumPY](https://pypi.org/project/numpy/) 1.9 or later (for GPAW 20.1.0/20.10.0) + * [NumPy](https://pypi.org/project/numpy/) 1.9 or later (for GPAW 20.1.0/20.10.0) GPAW versions before 20.10.0 produce warnings when used with NumPy 1.19.x. * [SciPy](https://pypi.org/project/scipy/) 0.14 or later (for GPAW 20.1.0/20.10.0) * [FFTW](http://www.fftw.org) is highly recommended. As long as the optional libvdwxc component is not used, the MKL FFTW wrappers can also be used. Recent versions of GPAW also show good performance using just the NumPy-provided FFT routines provided that NumPy has been built with a highly optimized FFT library. - * [LibXC](https://www.tddft.org/programs/libxc/) 3.X or 4.X for GPAW 20.1.0 and 20.10.0. - LibXC is a library of exchange-correlation functions for density-functional theory. - None of the versions currently mentions LibXC 5.X as officially supported. + * [Libxc](https://www.tddft.org/programs/libxc/) 3.X or 4.X for GPAW 20.1.0 and 20.10.0. + Libxc is a library of exchange-correlation functionals for density-functional theory. + None of the versions currently mentions libxc 5.X as officially supported. * [ASE, Atomic Simulation Environment](https://wiki.fysik.dtu.dk/ase/), a Python package from the same group that develops GPAW * Check the release notes of GPAW as the releases of ASE and GPAW should match. @@ -169,9 +172,9 @@ In addition, the GPU version needs: Installing GPAW also requires a number of standard build tools on the system, including * [GNU autoconf](https://www.gnu.org/software/autoconf/) is needed to generate the - configure script for LibXC + configure script for libxc * [GNU Libtool](https://www.gnu.org/software/libtool/) is needed. If not found, - the configure process of LibXC produces very misleading + the configure process of libxc produces very misleading error messages that do not immediately point to libtool missing.
* [GNU make](https://www.gnu.org/software/make/) @@ -217,9 +220,9 @@ Example [build scripts](build/examples/) are also available. As each benchmark has only a single input file, these can be downloaded right from this repository. - 1. [Testcase S: Carbon nanotube input file](benchmark/1_S_carbon-nanotube/input.py) - 2. [Testcase M: Copper filament input file](benchmark/2_M_copper-filament/input.py) - 3. [Testcase L: Silicon cluster input file](benchmark/3_L_silicon-cluster/input.py) + 1. [Test Case A: Carbon nanotube input file](benchmark/A_carbon-nanotube/input.py) + 2. [Test Case B: Copper filament input file](benchmark/B_copper-filament/input.py) + 3. [Test Case C: Silicon cluster input file](benchmark/C_silicon-cluster/input.py) ### Running the benchmarks @@ -272,7 +275,7 @@ and is what is used as the benchmark result. The other numbers can serve as verification of the results and were obtained with GPAW 20.1.0, 20.10.0 and 21.1.0. -### Case S: Carbon nanotube +### Case A (small): Carbon nanotube The expected values are: * Number of iterations: 12 @@ -281,7 +284,7 @@ The expected values are: * Extrapolated energy: Between -2397.63 and -2397.62 -### Case M: Copper filament +### Case B (medium): Copper filament The expected values are: * Number of iterations: 19 @@ -290,7 +293,7 @@ The expected values are: * Extrapolated energy: Between -473.5 and -473.3 -### Case L: Silicon cluster +### Case C (large): Silicon cluster With this test case, some of the results differ between version 20.1.0 and 20.10.0 on one hand and version 21.1.0 on the other hand. 
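The verification numbers listed above (iteration count, extrapolated energy, and so on) are read from the GPAW text output. A small helper like the following can extract them for an automated check. This is a hedged sketch, not part of the benchmark: the function name `parse_gpaw_log` is ours, and the regular expressions assume the conventional GPAW log layout (one `iter:` line per SCF step and a final `Extrapolated:` energy line), which may differ between GPAW versions.

```python
import re

def parse_gpaw_log(text):
    """Extract SCF iteration count and extrapolated energy from GPAW output.

    Assumes the usual GPAW log layout: one 'iter: N ...' line per SCF
    step and an 'Extrapolated:  <energy>' line near the end. Adjust the
    patterns if your GPAW version formats its log differently.
    """
    iters = re.findall(r"^iter:\s*(\d+)", text, flags=re.MULTILINE)
    energy = re.search(r"Extrapolated:\s*(-?\d+\.\d+)", text)
    return {
        "iterations": int(iters[-1]) if iters else None,
        "extrapolated_energy": float(energy.group(1)) if energy else None,
    }

# Tiny fabricated log fragment, for illustration only (values chosen to
# fall in the Case A range quoted in the README):
sample = """
iter: 1   12:00:00  -2390.1
iter: 12  12:01:00  -2397.6
Extrapolated:  -2397.625
"""
result = parse_gpaw_log(sample)
print(result)
```

The returned dictionary can then be compared against the per-case reference ranges given in the sections below this one.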
diff --git a/gpaw/benchmark/1_S_carbon-nanotube/input.py b/gpaw/benchmark/A_carbon-nanotube/input.py similarity index 100% rename from gpaw/benchmark/1_S_carbon-nanotube/input.py rename to gpaw/benchmark/A_carbon-nanotube/input.py diff --git a/gpaw/benchmark/2_M_copper-filament/input.py b/gpaw/benchmark/B_copper-filament/input.py similarity index 100% rename from gpaw/benchmark/2_M_copper-filament/input.py rename to gpaw/benchmark/B_copper-filament/input.py diff --git a/gpaw/benchmark/3_L_silicon-cluster/input.py b/gpaw/benchmark/C_silicon-cluster/input.py similarity index 100% rename from gpaw/benchmark/3_L_silicon-cluster/input.py rename to gpaw/benchmark/C_silicon-cluster/input.py diff --git a/gpaw/build/build-CPU.md b/gpaw/build/build-CPU.md index a71399ebad6028c77fd3b27475d31e74595cbd62..0ca156c7b32f77179c5d9a901fd41a37fc525259 100644 --- a/gpaw/build/build-CPU.md +++ b/gpaw/build/build-CPU.md @@ -1,4 +1,4 @@ -# Detailed GPAW installation instructions on non-accelerated systems +# Detailed GPAW installation instructions on non-accelerated systems These instructions are in addition to the brief instructions in [README.md](../README.md). @@ -13,8 +13,8 @@ GPAW needs (for the UEABS benchmarks) * [Python](https://www.python.org/): GPAW 20.1.0 requires Python 3.5-3.8, and GPAW 20.10.0 and 21.1.0 require Python 3.6-3.9. * [MPI library](https://www.mpi-forum.org/) - * [LibXC](https://www.tddft.org/programs/LibXC/). GPAW 20.1.0, - 20.10.0 and 21.1.0 all need LibXC 3.x or 4.x. + * [Libxc](https://www.tddft.org/programs/libxc/). GPAW 20.1.0, + 20.10.0 and 21.1.0 all need libxc 3.x or 4.x. * (Optimized) [BLAS](http://www.netlib.org/blas/) and [LAPACK](http://www.netlib.org/lapack/) libraries. There are both commercial and free and open source versions of these libraries. @@ -22,7 +22,7 @@ GPAW needs (for the UEABS benchmarks) will give very poor performance.
Most optimized LAPACK libraries actually only optimize a few critical routines while the remaining routines are compiled from the reference version. Most processor vendors for HPC machines and system vendors - offer optimized versions of these libraries. + offer optimized versions of these libraries. * [ScaLAPACK](http://www.netlib.org/scalapack/) and the underlying communication layer [BLACS](http://www.netlib.org/blacs/). * [FFTW](http://www.fftw.org/) or compatible FFT library. @@ -60,7 +60,7 @@ GPAW needs * [ASE, Atomic Simulation Environment](https://wiki.fysik.dtu.dk/ase/), a Python package from the same group that develops GPAW. The required version is 3.18.0 or later for GPAW 20.1.0, 20.10.0 and 21.1.0. - ASE has a couple of dependencies that are not needed for running the UEABS benchmarks. However, several Python package install methods will trigger the installation of those packages, and with them may require a chain of system libraries. @@ -69,7 +69,7 @@ GPAW needs This package is optional and not really needed to run the benchmarks. Matplotlib pulls in a lot of other dependencies. When installing ASE with pip, it will try to pull in matplotlib and its dependencies - * [pillow](https://pypi.org/project/Pillow/) needs several external libraries. During the development of the benchmarks, we needed at least zlib, libjpeg-turbo (or compatible libjpeg library) and freetype. Even though the pillow documentation claimed that libjpeg was optional, @@ -90,7 +90,7 @@ GPAW needs code * [itsdangerous](https://pypi.org/project/itsdangerous/) * [Werkzeug](https://pypi.org/project/Werkzeug/) - * [click](https://pypi.org/project/click/) + * [click](https://pypi.org/project/click/) ## Tested configurations @@ -145,28 +145,28 @@ Also, the instructions below will need to be adapted to the specific libraries that are being used.
Other prerequisites: - * LibXC + * libxc * Python interpreter * Python package NumPy * Python package SciPy * Python package ase -### Installing LibXC +### Installing libxc - * Installing LibXC requires GNU automake and GNU buildtool besides GNU make and a + * Installing libxc requires GNU automake and GNU libtool besides GNU make and a C compiler. The build process is the usual GNU configure - make - make install cycle, but the `configure` script still needs to be generated with autoreconf. - * Download LibXC: - * The latest version of LibXC can be downloaded from - [the LibXC download page](https://www.tddft.org/programs/libxc/download/). + * Download libxc: + * The latest version of libxc can be downloaded from + [the libxc download page](https://www.tddft.org/programs/libxc/download/). However, that version may not be officially supported by GPAW. - * It is also possible to download all recent versions of LibXC from - [the LibXC GitLab](https://gitlab.com/libxc/libxc) + * It is also possible to download all recent versions of libxc from + [the libxc GitLab](https://gitlab.com/libxc/libxc) * Select the tag corresponding to the version you want to download in the branch/tag selection box. * Then use the download button and select the desired file type. - * Download URLs look like `https://gitlab.com/libxc/libxc/-/archive/4.3.4/libxc-4.3.4.tar.bz2`. + * Download URLs look like `https://gitlab.com/libxc/libxc/-/archive/4.3.4/libxc-4.3.4.tar.bz2`. * Untar the file in the build directory. @@ -228,8 +228,8 @@ of NumPy, SciPy and GPAW itself proves much more important. use the NumPy FFT routines. * GPAW also needs a number of so-called "Atomic PAW Setup" files. The latest files can be found on the [GPAW website, Atomic PAW Setups page](https://wiki.fysik.dtu.dk/gpaw/setups/setups.html). - For the testing we used [`gpaw-setups-0.9.20000.tar.gz`](https://wiki.fysik.dtu.dk/gpaw-files/gpaw-setups-0.9.20000.tar.gz) - for all versions of GPAW.
The easiest way to install these files is to simply untar + For the testing we used [`gpaw-setups-0.9.20000.tar.gz`](https://wiki.fysik.dtu.dk/gpaw-files/gpaw-setups-0.9.20000.tar.gz) + for all versions of GPAW. The easiest way to install these files is to simply untar the file and set the environment variable GPAW_SETUP_PATH to point to that directory. In the examples provided we use the `share/gpaw-setups` subdirectory of the install directory for this purpose.
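The untar-and-export step described above can be sketched as a short shell snippet. The install prefix is an example, the download line is shown only as a comment, and a placeholder tarball with a dummy setup file is fabricated so the sketch is self-contained; in a real installation you would download the actual `gpaw-setups-0.9.20000.tar.gz` from the GPAW setups page instead.

```shell
#!/bin/sh
set -e
# Example install prefix; adjust to your site layout.
PREFIX="${GPAW_PREFIX:-$PWD/gpaw-install}"
TARBALL=gpaw-setups-0.9.20000.tar.gz
# Real run: fetch the setups tarball from the GPAW website, e.g.
#   wget https://wiki.fysik.dtu.dk/gpaw-files/gpaw-setups-0.9.20000.tar.gz
# Self-contained stand-in so this sketch runs without network access:
if [ ! -f "$TARBALL" ]; then
    mkdir -p gpaw-setups-0.9.20000
    touch gpaw-setups-0.9.20000/H.PBE.gz   # placeholder setup file
    tar -czf "$TARBALL" gpaw-setups-0.9.20000
fi
# Untar under the install prefix and rename to share/gpaw-setups,
# matching the layout used in the provided examples.
mkdir -p "$PREFIX/share"
tar -xzf "$TARBALL" -C "$PREFIX/share"
mv "$PREFIX/share/gpaw-setups-0.9.20000" "$PREFIX/share/gpaw-setups"
# Point GPAW at the setups directory.
export GPAW_SETUP_PATH="$PREFIX/share/gpaw-setups"
echo "GPAW_SETUP_PATH=$GPAW_SETUP_PATH"
```

In batch scripts, the `export GPAW_SETUP_PATH=...` line is the only part GPAW itself needs at run time.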