# CP2K

## Summary Version

1.0

## Purpose of Benchmark

CP2K is a freely available quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid-state, liquid, molecular, periodic, material, crystal, and biological systems.

## Characteristics of Benchmark

CP2K can be used to perform DFT calculations using the QuickStep algorithm, which applies mixed Gaussian and plane waves approaches (such as GPW and GAPW). Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, ...), and classical force fields (AMBER, CHARMM, ...). CP2K can perform simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimisation, and transition state optimisation using the NEB or dimer method.

CP2K is written in Fortran 2008 and can be run in parallel using a combination of multi-threading, MPI, and CUDA. All of CP2K is MPI-parallelised, with some additional loops also being OpenMP-parallelised. It is therefore most important to take advantage of MPI parallelisation; however, running one MPI rank per CPU core often leads to memory shortages. In that case OpenMP threads can be used to utilise all CPU cores without incurring an overly large memory footprint. The optimal ratio between MPI ranks and OpenMP threads depends on the type of simulation and the system in question.

CP2K supports CUDA, allowing it to offload some linear algebra operations, including sparse matrix multiplications, to the GPU through its DBCSR acceleration layer. FFTs can optionally also be offloaded to the GPU. GPU offloading may improve performance, depending on the type of simulation and the system in question.

## Mechanics of Building Benchmark

GNU make and Python 2.x are required for the build process, as are a Fortran 2008 compiler and a matching C compiler, e.g. gcc/gfortran (gcc >= 4.6 works, but a later version is recommended).

CP2K can benefit from a number of external libraries for improved performance. It is advised to use vendor-optimised versions of these libraries. If these are not available on your machine, freely available implementations exist.

Overview of the build process:

1. Install Libint.
2. Install Libxc.
3. Install the FFTW library (or use MKL's FFTW3 interface).
4. Check whether LAPACK, BLAS, ScaLAPACK, and BLACS are provided, and install them if not.
5. Install optional libraries: ELPA, libxsmm, libgrid.
6. Build CP2K and link it to Libint, Libxc, FFTW, LAPACK, BLAS, ScaLAPACK, and BLACS, and to the relevant CUDA libraries if building for GPU.

### Download the source code

```
wget https://github.com/cp2k/cp2k/releases/download/v7.1.0/cp2k-7.1.tar.bz2
bunzip2 cp2k-7.1.tar.bz2
tar xvf cp2k-7.1.tar
cd cp2k-7.1
```

### Install or locate libraries

**LIBINT**

The following commands will uncompress and install the LIBINT library required for the UEABS benchmarks:

```
wget https://github.com/cp2k/libint-cp2k/releases/download/v2.6.0/libint-v2.6.0-cp2k-lmax-4.tgz
tar zxvf libint-v2.6.0-cp2k-lmax-4.tgz
cd libint-v2.6.0-cp2k-lmax-4
./configure CC=cc CXX=CC FC=ftn --enable-fortran --prefix=install_path   # install_path must not be this directory
make
make install
```

Note: the environment variables ``CC`` and ``CXX`` are optional and can be used to specify the C and C++ compilers to use for the build (the example above is configured to use the compiler wrappers ``cc`` and ``CC`` found on Cray systems).

**LIBXC**

Libxc v4.0.3 or later: http://www.tddft.org/programs/octopus/wiki/index.php/Libxc
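Libxc releases in the 4.x series ship with an autotools build system, so installation typically mirrors the Libint steps above. The sketch below is illustrative only: the tarball name/version, compilers, and install prefix are placeholders to adapt to your system.

```
# Sketch of a Libxc 4.x autotools build (version, compilers, and prefix are placeholders)
tar xzf libxc-4.3.4.tar.gz
cd libxc-4.3.4
./configure CC=cc FC=ftn --prefix=install_path   # install_path must not be this directory
make
make install
```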
**FFTW**

FFTW3: http://www.fftw.org, or provided as an FFTW3 interface by MKL

**LAPACK & BLAS**

Can be provided by:

* netlib: http://netlib.org/lapack and http://netlib.org/blas
* MKL: part of the Intel MKL installation
* LibSci: installed on Cray platforms
* ATLAS: http://math-atlas.sf.net
* OpenBLAS: http://www.openblas.net
* clBLAS: http://gpuopen.com/compute-product/clblas/

**SCALAPACK and BLACS**

Can be provided by:

* netlib: http://netlib.org/scalapack/
* MKL: part of the Intel MKL installation
* LibSci: installed on Cray platforms

**Optional libraries**

* ELPA: https://elpa.mpcdf.mpg.de/elpa-tar-archive
* libgrid: within the CP2K distribution - cp2k/tools/autotune_grid
* libxsmm: https://github.com/hfp/libxsmm

### Create the arch file

Before compiling, the choice of compilers, the library locations, and the compilation and linker flags need to be specified. This is done in an arch (architecture) file. Example arch files for a number of common architectures can be found inside the ``cp2k/arch`` directory. The names of these files match the pattern architecture.version (e.g., Linux-x86-64-gfortran.sopt). The version ``psmp`` corresponds to the hybrid MPI + OpenMP build that you should use to run the UEABS benchmarks. Machine-specific examples can be found in the relevant machine subdirectory.

In most cases you need to create a custom arch file, either from scratch or by modifying an existing one that roughly fits the CPU type, compiler, and installation paths of libraries on your system. You can also consult https://dashboard.cp2k.org, which provides sample arch files as part of the testing reports for some platforms (click on the status field for a platform and search for 'ARCH-file' in the resulting output).

As a guide, for GNU compilers the following should be included in the arch file.

**Specification of which compiler and linker commands to use:**

```
CC = gcc
FC = mpif90
LD = mpif90
AR = ar -r
```

CP2K is primarily a Fortran code, so only the Fortran compiler needs to be MPI-enabled.

**Specification of the ``DFLAGS`` variable, which should include:**

```
-D__parallel \
-D__SCALAPACK \
-D__LIBINT \
-D__FFTW3 \
-D__LIBXC \
```

Optional DFLAGS for linking performance libraries:

```
-D__LIBXSMM \
-D__ELPA=201911 \
-D__HAS_LIBGRID \
-D__SIRIUS \
-D__MKL            # if relying on MKL for ScaLAPACK and/or an FFTW interface
```

**Specification of compiler flags ``FCFLAGS`` (for gfortran):**

```
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp                                   # Required
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp -O3 -ffast-math -funroll-loops   # Recommended
```

If you want to link any libraries containing header files, pass the path to the directory containing these to ``FCFLAGS`` in the format ``-I/path_to_include_dir``, for example:

```
-I$(path_to_libint)/include
```

**Specification of libraries to link to:**

```
LIBS = -L$(path_to_libint)/lib -lint2 \
       -L$(path_to_libxc)/lib -lxcf90 -lxcf03 -lxc \
       -lfftw3 -lfftw3_threads \
       -lz -ldl -lstdc++
```

Here ``-lint2`` is required for LIBINT and ``-lxcf90 -lxcf03 -lxc`` are required for LIBXC. If you use MKL to provide ScaLAPACK and/or an FFTW interface, the ``LIBS`` variable should be used to pass the relevant flags provided by the MKL Link Line Advisor (https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor), which you should use carefully in order to generate the right options for your system.
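Putting the pieces above together, a complete ``psmp`` arch file for a generic GNU/Linux system might look like the sketch below. This is not one of the shipped arch files: the file name and all library paths are placeholders, and the ScaLAPACK/LAPACK/BLAS link line assumes the netlib reference libraries (substitute the MKL or LibSci flags if you use those instead).

```
# Hypothetical arch file, e.g. cp2k/arch/Linux-x86-64-gnu-example.psmp
# All *_DIR paths are placeholders for your library installation directories.
LIBINT_DIR = /path/to/libint
LIBXC_DIR  = /path/to/libxc
FFTW_DIR   = /path/to/fftw

CC      = gcc
FC      = mpif90
LD      = mpif90
AR      = ar -r

DFLAGS  = -D__parallel -D__SCALAPACK -D__LIBINT -D__FFTW3 -D__LIBXC
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp -O3 -ffast-math -funroll-loops \
          -I$(LIBINT_DIR)/include -I$(LIBXC_DIR)/include -I$(FFTW_DIR)/include
LDFLAGS = $(FCFLAGS)
LIBS    = -L$(LIBXC_DIR)/lib -lxcf90 -lxcf03 -lxc \
          -L$(LIBINT_DIR)/lib -lint2 \
          -L$(FFTW_DIR)/lib -lfftw3 -lfftw3_threads \
          -lscalapack -llapack -lblas \
          -lz -ldl -lstdc++
```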
### Build the executable

To build the hybrid MPI+OpenMP executable ``cp2k.psmp`` using ``your_arch_file.psmp``, run make in the cp2k directory:

```
make -j N ARCH=your_arch_file VERSION=psmp   # parallel build on N threads
make ARCH=your_arch_file VERSION=psmp        # serial build
```

The executable ``cp2k.psmp`` will then be located in ``cp2k/exe/your_arch_file``.

### Compiling CP2K for CUDA-enabled GPUs

The arch files for compiling CP2K for CUDA-enabled GPUs can be found under the relevant machine example. The additional steps for CUDA compilation are:

1. Load the CUDA module.
2. Ensure that the ``CUDA_PATH`` variable is set.
3. Add the CUDA-specific options to the arch file.

**Additional required compiler and linker commands:**

```
NVCC = nvcc
```

**Additional ``DFLAGS``:**

```
-D__ACC -D__DBCSR_ACC -D__PW_CUDA
```

**Set ``NVFLAGS``:**

```
NVFLAGS = $(DFLAGS) -O3 -arch sm_60
```

**Additional required libraries to link to:**

```
-lcudart -lcublas -lcufft -lrt
```

## Mechanics of Running Benchmark

The general way to run the benchmarks with the hybrid parallel executable is:

```
export OMP_NUM_THREADS=X
parallel_launcher launcher_options path_to_/cp2k.psmp -i inputfile.inp -o logfile
```

Where:

* The environment variable for the number of threads must be set before calling the executable.
* The parallel_launcher is ``mpirun``, ``mpiexec``, or some variant such as ``aprun`` on Cray systems or ``srun`` when using Slurm.
* launcher_options specifies parallel placement in terms of the total number of nodes, MPI ranks/tasks, tasks per node, and OpenMP threads per task (which should be equal to the value given to ``OMP_NUM_THREADS``). This is not necessary if the parallel runtime options are picked up by the launcher from the job environment.
* The input file usually has the extension ``.inp`` and may specify within it further required files (such as basis sets, potentials, etc.).

You can try any combination of tasks per node and OpenMP threads per task to investigate absolute performance and scaling on the machine of interest. For tier-1 systems the best performance is usually obtained with pure MPI, while for tier-0 systems the best performance is typically obtained using 1 MPI task per node with the number of threads equal to the number of cores per node.

### UEABS benchmarks

**A) H2O-512**

* Ab initio molecular dynamics simulation of 512 water molecules (10 MD steps)
* Uses the Born-Oppenheimer approach via Quickstep DFT

**B) LiH-HFX**

* DFT energy calculation for a 216-atom LiH crystal
* Requires generation of an initial wave function (.wfn file) prior to the run
* Run ``input_bulk_B88_3.inp`` to generate the wave function and then rename the resulting wfn file: ``cp LiH_bulk_3-RESTART.wfn B88.wfn``

**C) H2O-DFT-LS**

* Single energy calculation of 2048 water molecules
* Uses linear scaling DFT

| Test Case | System     | Number of Atoms | Run type      | Description                                          | Location                        |
|-----------|------------|-----------------|---------------|------------------------------------------------------|---------------------------------|
| a         | H2O-512    | 1536            | MD            | Uses the Born-Oppenheimer approach via Quickstep DFT | ``/tests/QS/benchmark/``        |
| b         | LiH-HFX    | 216             | Single-energy | GAPW with hybrid Hartree-Fock exchange               | ``/tests/QS/benchmark_HFX/LiH`` |
| c         | H2O-DFT-LS | 6144            | Single-energy | Uses linear scaling DFT                              | ``/tests/QS/benchmark_DM_LS``   |

More information, in the form of a README and an example job script, is included in each benchmark tar file.

## Verification of Results

The run walltime is reported near the end of the logfile:

```
grep "CP2K " logfile | awk -F ' ' '{print $7}'
```
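Putting the running and verification steps together, a Slurm batch script along the following lines could be used for the H2O-512 case with the hybrid executable. The node count, tasks per node, threads per task, and paths are placeholders to be tuned for the machine under test; the README and example job script included in each benchmark tar file remain the authoritative reference.

```
#!/bin/bash
# Hypothetical Slurm job script - adjust node/task/thread counts and paths to your system.
#SBATCH --job-name=cp2k_h2o-512
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=32
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

# OpenMP threads per MPI task must match the value requested from the scheduler
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Run the benchmark (input file taken from the H2O-512 benchmark tar file)
srun /path/to/cp2k/exe/your_arch_file/cp2k.psmp -i H2O-512.inp -o H2O-512.log

# Extract the run walltime from the log, as described under "Verification of Results"
grep "CP2K " H2O-512.log | awk -F ' ' '{print $7}'
```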