# Quantum Espresso in the United European Applications Benchmark Suite (UEABS)

## Document Author: A. Emerson (a.emerson@cineca.it), Cineca.

## Introduction

Quantum Espresso is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials. Full documentation is available from the project website [QuantumEspresso](https://www.quantum-espresso.org/).

In this README we give information relevant for its use in the UEABS.

### Standard CPU version

For the UEABS activity we have mainly used version v6.0, but later versions are now available.

### GPU version

The GPU port of Quantum Espresso is a version of the program which has been completely re-written in CUDA FORTRAN by Filippo Spiga. The version of the program used in these experiments is v6.0, although further versions became available later during the activity.

## Installation and requirements

### Standard

The Quantum Espresso source can be downloaded from the project's GitHub repository: [QE](https://github.com/QEF/q-e/tags). The requirements are listed on the website, but you will need a good FORTRAN and C compiler with an MPI library and, optionally (but highly recommended), an optimised linear algebra library.

### GPU version

For complete build requirements and information see the following GitHub site: [QE-GPU](https://github.com/fspiga/qe-gpu).

A short summary is given below.

Essential:

* The PGI compiler, version 17.4 or above.
* NVIDIA Tesla GPUs such as Kepler (K20, K40, K80), Pascal (P100) or Volta (V100). No other cards are supported. NVIDIA Tesla P100 and V100 are strongly recommended for their on-board memory capacity and double precision performance.

Optional:

* A parallel linear algebra library such as ScaLAPACK, Intel MKL or IBM ESSL. If none is available on your system, the installation can use a version supplied with the distribution.

## Downloading the software

### Standard

From the website, for example:

```bash
wget https://github.com/QEF/q-e/releases/download/qe-6.3/qe-6.3.tar.gz
```

### GPU

Available from the website given above. You can use, for example, `git clone` to download the software:

```bash
git clone https://github.com/fspiga/qe-gpu.git
```

## Compiling and installing the application

### Standard installation

Installation follows the usual `configure, make, make install` procedure. However, it is recommended that the user checks the __make.inc__ file created by this procedure before performing the make. For example, using the Intel compilers:

```bash
module load intel intelmpi
CC=icc FC=ifort MPIF90=mpiifort ./configure --enable-openmp --with-scalapack=intel
```

Assuming the __make.inc__ file is acceptable, the user can then do:

```bash
make; make install
```

### GPU

Check the __README.md__ file in the downloaded files since the procedure varies from distribution to distribution. Most distributions do not have a `configure` command. Instead you copy a __make.inc__ file from the __install__ directory and modify it directly before running make. A number of templates are available in the distribution:

- make.inc_x86-64
- make.inc_CRAY_PizDaint
- make.inc_POWER_DAVIDE
- make.inc_POWER_SUMMITDEV

The second and third are particularly relevant in the PRACE infrastructure (i.e. for CSCS Piz Daint and CINECA DAVIDE). Run __make__ to see the options available.
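As a minimal sketch of this step (assuming, purely for illustration, that the Piz Daint template is the closest match to your system; substitute the template and the edits appropriate to your machine):

```bash
# Copy the template closest to your system from the install directory
# (here the Piz Daint one, as an example) into the top-level directory
cp install/make.inc_CRAY_PizDaint make.inc
# Edit make.inc to match your compilers, libraries and GPU architecture,
# then run make as described below
```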
For the UEABS you should select the pw program (the only module currently available):

```bash
make pw
```

The QE-GPU executable will appear in the directory `GPU/PW` and is called `pw-gpu.x`.

## Running the program - general procedure

Of course you need some input before you can run calculations. The input files are of two types:

1. A control file, usually called `pw.in`.
2. One or more pseudopotential files with the extension `.UPF`.

The pseudopotential files are placed in a directory specified in the control file with the tag `pseudo_dir`. Thus if we have

```shell
pseudo_dir=./
```

then QE-GPU will look for the pseudopotential files in the current directory.

If using the PRACE benchmark suite, the data files can be downloaded from the QE website or the PRACE repository. For example:

```shell
wget http://www.prace-ri.eu/UEABS/Quantum_Espresso/QuantumEspresso_TestCaseA.tar.gz
```

Once uncompressed, you can then run the program like this (e.g. using MPI over 16 cores):

```shell
mpirun -n 16 pw-gpu.x -input pw.in
```

but check your system documentation since `mpirun` may be replaced by `mpiexec`, `runjob`, `aprun`, `srun`, etc. Note also that normally you are not allowed to run MPI programs interactively without using the batch system.

### Parallelisation options

Quantum Espresso uses various levels of parallelisation, the most important being MPI parallelisation over the *k-points* available in the input system. This is achieved with the `-npool` program option. Thus for the AUSURF input, which has 2 k-points, we can run:

```bash
srun -n 64 pw.x -npool 2 -input pw.in
```

which would allocate 32 MPI tasks per k-point. The number of MPI tasks must be a multiple of the number of k-points. For the TA2O5 input, which has 26 k-points, we could try:

```bash
srun -n 52 pw.x -npool 26 -input pw.in
```

but we may wish to use fewer pools with more tasks per pool:

```bash
srun -n 52 pw.x -npool 13 -input pw.in
```

It is also possible to control the number of MPI tasks used in the diagonalisation of the subspace Hamiltonian. This is done with the `-ndiag` parameter, which must be a square number. For example, with the AUSURF input (2 k-points) we can assign 4 processes to the Hamiltonian diagonalisation:

```bash
srun -n 64 pw.x -npool 2 -ndiag 4 -input pw.in
```

### Hints for running the GPU version

#### Memory limitations

The GPU port of Quantum Espresso runs almost entirely in the GPU memory. This means that jobs are restricted by the memory of the GPU device, normally 16-32 GB, regardless of the main node memory. Thus, unless many nodes are used, the user is likely to see job failures due to lack of memory, even for small datasets. For example, on the CSCS Piz Daint supercomputer each node has only 1 NVIDIA Tesla P100 (16 GB), which means that you will need at least 4 nodes to run even the smallest dataset (AUSURF in the UEABS).
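Purely as an illustrative sketch (not an official UEABS job script), a SLURM batch file for such a 4-node GPU run on a machine like Piz Daint might look like the following; the constraint, account and module names are placeholders that must be adapted to your site:

```shell
#!/bin/bash
#SBATCH --nodes=4                 # at least 4 x 16 GB P100 GPUs for AUSURF
#SBATCH --ntasks-per-node=1       # one MPI task per node, i.e. one per GPU
#SBATCH --time=01:00:00
#SBATCH --constraint=gpu          # placeholder: request GPU nodes on your system
#SBATCH --account=<your_account>  # placeholder: your project account

module load daint-gpu             # placeholder: load your site's GPU environment

srun pw-gpu.x -npool 2 -input pw.in > pw.out
```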
## Execution

In the UEABS repository you will find a directory for each computer system tested, together with installation instructions and job scripts. In the following we describe in detail the execution procedure for the Marconi computer system.

### Execution on the Cineca Marconi KNL system

Quantum Espresso has already been installed for the KNL nodes of Marconi and can be accessed via a specific module:

```shell
module load profile/knl
module load autoload qe/6.0_knl
```

On Marconi the default is to use the MCDRAM as cache and to set the cluster mode to quadrant. Other settings for the KNLs on Marconi (e.g. flat mode) have not been substantially tested for Quantum Espresso, but significant differences in performance for most inputs are not expected.

An example SLURM batch script for the A2 partition is given below:

```shell
#!/bin/bash
#SBATCH -N 2
#SBATCH --tasks-per-node=64
#SBATCH -A <account_name>
#SBATCH -t 1:00:00

module purge
module load profile/knl
module load autoload qe/6.0_knl

export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=${OMP_NUM_THREADS}

srun pw.x -npool 2 -ndiag 16 -input file.in > file.out
```

With the SLURM directives above we have asked for 2 KNL nodes (each with 68 cores) in cache/quadrant mode with 93 GB of main memory each. We are running QE in MPI-only mode, using 64 MPI processes per node with the k-points in 2 pools; the diagonalisation of the Hamiltonian will be done by 16 (4x4) tasks.

Note that this script needs to be submitted using the KNL scheduler as follows:

```shell
module load env-knl
sbatch myjob
```

Please check the Cineca documentation for information on using the [Marconi KNL partition](https://wiki.u-gov.it/confluence/display/SCAIUS/UG3.1%3A+MARCONI+UserGuide#UG3.1:MARCONIUserGuide-SystemArchitecture).

## UEABS test cases

| UEABS name | QE name | Description | k-points | Notes |
|------------|---------|-------------|----------|-------|
| Small test case | AUSURF | 112 atoms | 2 | < 4-8 nodes on most systems |
| Medium test case | TA2O5 | Tantalum oxide | 26 | Medium scaling, often 20 nodes |
| Large test case | CNT | Carbon nanotube | | Large scaling runs only. Memory and time requirements high |

__Last updated: 14-January-2019__