## Overview
This README, together with the benchmark input files, CP2K build configuration ("arch") files and other details provided in each machine-specific subdirectory, specifies how the CP2K benchmarking results presented in the PRACE-5IP deliverable D7.5 ("Evaluation of Accelerated and Non-accelerated Benchmarks") were obtained, and provides general guidance on building CP2K and running the UEABS benchmarks.
In short, the procedure followed to generate CP2K UEABS benchmark results for the D7.5 deliverable on a given machine was:
1. Compile Libint
2. Compile Libxc
3. Compile FFTW library (or use MKL's FFTW3 interface)
4. Compile CP2K and link to Libint, Libxc, FFTW, LAPACK, BLAS, SCALAPACK and BLACS, and to relevant CUDA libraries if building for GPU
5. Run the benchmarks, namely:
- Test Case A: H2O-1024
- Test Case B: LiH-HFX (adjusting the MAX_MEMORY input parameter to take into account available on-node memory)
- Test Case C: H2O-DFT-LS
GCC (gfortran) was used to compile the libraries and CP2K itself, with the exception of the Frioul machine, on which the Intel compiler was used to target the Knights Landing processor. In general it is recommended to use an MPI library built with the same compiler used to build CP2K. General information about building CP2K and the libraries it depends on can be found in INSTALL.md, included in the CP2K source distribution.
The reported walltime for a given run is obtained by querying the resulting .log file for CP2K's internal timing, as follows:
```
grep "CP2K " *.log
```
### Optional
Optional performance libraries including ELPA, libgrid, and libsmm/libxsmm can and should be built and linked against when compiling a CP2K executable for maximum performance in production use. This was not done for the results presented in deliverable D7.5 due to the effort and complexity involved in doing so for the range of machines on which benchmark results were generated, which included PRACE Tier-0 production machines as well as pre-production prototypes. Information about these libraries can be found in INSTALL.md included in the CP2K source distribution.
## 1. Compile Libint
The Libint library can be obtained from <http://sourceforge.net/projects/libint/files/v1-releases/libint-1.1.4.tar.gz>
The specific commands used to build version 1.1.4 of Libint using GCC on a number of different machines can be found in the machine-specific subdirectories accompanying this README. By default the build process only creates static (.a) libraries. If you want to be able to link dynamically to Libint when building CP2K, pass the flag ``--enable-shared`` to ``./configure`` in order to produce shared libraries (.so).
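For example, a minimal sketch of a shared-library build of Libint 1.1.4 (the installation prefix is a placeholder to adapt to your system):
```
tar xzf libint-1.1.4.tar.gz
cd libint-1.1.4
# --enable-shared produces .so libraries in addition to the static ones
./configure --prefix=$HOME/libs/libint/1.1.4 --enable-shared
make
make install
```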
If you can, it is easiest to build Libint on the same processor architecture on which you will run CP2K. This typically corresponds to being able to compile directly on the relevant compute nodes of the machine. If this is not possible and you are forced instead to compile on nodes with a different processor architecture to the compute nodes on which CP2K will eventually run, see the section below on cross-compiling Libint.
More information about Libint can be found inside the CP2K distribution base directory in
```
/tools/hfx_tools/libint_tools/README_LIBINT
```
### Cross-compiling Libint for compute nodes
If you are forced to cross-compile Libint for the compute nodes on build nodes that have a different processor architecture, follow these instructions. They assume you will be able to call a parallel application launcher like ``mpirun`` or ``mpiexec`` during your build process in order to run compiled code on the compute nodes.
In ``/src/lib/`` edit the files ``MakeRules`` and ``MakeRules.in``.
On the last line of each file, replace
```
cd $(LIBSRCLINK); $(TOPOBJDIR)/src/bin/$(NAME)/$(COMPILER)
```
with
```
cd $(LIBSRCLINK); $(PARALLEL_LAUNCHER_COMMAND) $(TOPOBJDIR)/src/bin/$(NAME)/$(COMPILER)
```
Then run
```
export PARALLEL_LAUNCHER_COMMAND="mpirun -n 1"
```
replacing ``mpirun`` with a different parallel application launcher such as ``mpiexec`` (or ``aprun`` if applicable). When proceeding with the configure stage, include the configure flag ``cross-compiling=yes``.
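Put together, the preparation for a cross-compiled build might look like the sketch below. The ``sed`` command is just one possible way of applying the ``MakeRules`` edit described above, and the launcher and installation prefix are placeholders:
```
# From the Libint source directory: prepend the launcher variable to the last line
cd src/lib
sed -i 's|\$(TOPOBJDIR)/src/bin/\$(NAME)/\$(COMPILER)|$(PARALLEL_LAUNCHER_COMMAND) $(TOPOBJDIR)/src/bin/$(NAME)/$(COMPILER)|' MakeRules MakeRules.in
cd ../..

# Launcher used to execute the compiled generator programs on a compute node
export PARALLEL_LAUNCHER_COMMAND="mpirun -n 1"

# Configure with the cross-compiling flag, then build and install as usual
./configure --prefix=$HOME/libs/libint/1.1.4 cross-compiling=yes
make
make install
```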
## 2. Compile Libxc
The Libxc library can be obtained from <https://tddft.org/programs/libxc/download/previous/>. Version 4.2.3 was used for deliverable D7.5.
The specific commands used to build Libxc using GCC on a number of different machines can be found in the machine-specific subdirectories accompanying this README.
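Libxc 4.2.3 follows the usual autotools build pattern; a minimal sketch, with placeholder prefix and compiler choices:
```
tar xzf libxc-4.2.3.tar.gz
cd libxc-4.2.3
./configure --prefix=$HOME/libs/libxc/4.2.3 CC=gcc FC=gfortran
make
make install
```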
## 3. Compile FFTW
If FFTW (FFTW3) is not already available preinstalled on your system it can be obtained from <http://www.fftw.org/download.html>
The specific commands used to build FFTW using GCC on a number of different machines can be found in the machine-specific subdirectories accompanying this README.
Alternatively, you can use MKL's FFTW3 interface.
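If building FFTW from source, a minimal sketch is shown below (the version number and installation prefix are placeholders; ``--enable-openmp`` additionally builds the threaded library, which is useful for the hybrid psmp version of CP2K):
```
tar xzf fftw-3.3.8.tar.gz
cd fftw-3.3.8
./configure --prefix=$HOME/libs/fftw/3.3.8 --enable-openmp
make
make install
```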
## 4. Compile CP2K
The CP2K source code can be downloaded from <https://github.com/cp2k/cp2k/releases/>. Version 6.1 of CP2K was used to generate the results reported in D7.5.
The general procedure is to create a custom so-called arch (architecture) file inside the ``arch`` directory in the CP2K distribution, which includes examples for a number of common architectures. The arch file specifies build parameters such as the choice of compilers, library locations and compilation and linker flags. Building the hybrid MPI + OpenMP ("psmp") version of CP2K (most convenient for running benchmarks) in accordance with a given arch file is then accomplished by entering the ``makefiles`` directory in the distribution and running
```
make -j number_of_cores_available_to_you ARCH=arch_file_name VERSION=psmp
```
If the build is successful, the resulting executable ``cp2k.psmp`` can be found inside ``/exe/arch_file_name/`` in the CP2K base directory.
Detailed information about arch files, library options and the overall build procedure can be found in the ``INSTALL.md`` file included in the CP2K distribution. You can also consult https://dashboard.cp2k.org, which provides sample arch files as part of the testing reports for some platforms (click on the status field for a platform, and search for 'ARCH-file' in the resulting output).
Specific arch files used to build CP2K for deliverable D7.5 can be found in the machine-specific subdirectories in this repository.
### Linking to MKL
If you are linking to Intel's MKL library to provide LAPACK, BLAS, SCALAPACK and BLACS (and possibly FFTW3) you should choose linking options using the [MKL Link Line Advisor tool](https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor), carefully selecting the options relevant on your machine environment.
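Purely as an illustration of the kind of output the tool produces, a link line for GNU compilers with Intel MPI on 64-bit Linux might resemble the following; this is an assumption about one typical configuration, and you should always generate the line for your own compiler, MPI library and MKL version:
```
LIBS = -L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_gf_lp64 \
       -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -ldl
```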
### Building a CUDA-enabled version of CP2K
See the ``PizDaint`` subdirectory for an example arch file that enables GPU acceleration with CUDA. The NVIDIA ``-arch`` flag should be adjusted to match the particular NVIDIA GPU architecture in question. For example ``-arch sm_35`` matches the Tesla K40 and K80 GPU architectures.
## 5. Running the UEABS benchmarks
**The exact input files used to generate the results presented in deliverable D7.5 are included in the machine-specific subdirectories accompanying this README.**
After building the hybrid MPI+OpenMP version of CP2K you have an executable ending in `.psmp`. The general way to run the benchmarks with the hybrid parallel executable is, e.g. for 2 threads per rank:
```
export OMP_NUM_THREADS=2
parallel_launcher launcher_options path_to_cp2k.psmp -i inputfile -o logfile
```
Where:
- The parallel_launcher is mpirun, mpiexec, or some variant such as aprun on Cray systems or srun when using Slurm.
- launcher\_options specifies parallel placement in terms of total numbers of nodes, MPI ranks/tasks, tasks per node, and OpenMP threads per task (which should be equal to the value given to OMP\_NUM\_THREADS). This is not necessary if parallel runtime options are picked up by the launcher from the job environment.
You can try any combination of tasks per node and OpenMP threads per task to investigate absolute performance and scaling on the machine of interest.
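As a concrete illustration, a hypothetical Slurm run of the H2O-512 benchmark (Test Case A) on 4 nodes with 12 MPI tasks per node and 2 OpenMP threads per task might look like the following; node counts, paths and the input file name are placeholders to adapt:
```
export OMP_NUM_THREADS=2
srun --nodes=4 --ntasks-per-node=12 --cpus-per-task=2 \
     /path/to/cp2k.psmp -i H2O-512.inp -o H2O-512.log
```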
### Test Case A: H2O-512
Test Case A is the H2O-512 benchmark included in the CP2K distribution (in ``/tests/QS/benchmark/``) and consists of ab-initio molecular dynamics simulation of 512 liquid water molecules using the Born-Oppenheimer approach via Quickstep DFT.
### Test Case B: LiH-HFX
Test Case B is the LiH-HFX benchmark included in the CP2K distribution (in ``/tests/QS/benchmark_HFX/LiH``) and consists of a DFT energy calculation for a 216 atom LiH crystal.
Provided input files include:
- ``input_bulk_HFX_3.inp``: input for the actual benchmark HFX run
- ``input_bulk_B88_3.inp``: needed to generate an initial wave function (wfn file) for the HFX run - this should be run once before running the actual HFX benchmark
After the B88 run has been performed once, you should rename the resulting wavefunction file for use in the HFX benchmark runs as follows:
```
cp LiH_bulk_3-RESTART.wfn B88.wfn
```
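The full Test Case B sequence therefore looks roughly like the sketch below; the launcher command and its placement options are placeholders:
```
# Step 1: generate the initial wavefunction with the B88 functional
parallel_launcher launcher_options /path/to/cp2k.psmp -i input_bulk_B88_3.inp -o B88.log

# Step 2: rename the restart wavefunction so the HFX input can read it
cp LiH_bulk_3-RESTART.wfn B88.wfn

# Step 3: run the actual HFX benchmark
parallel_launcher launcher_options /path/to/cp2k.psmp -i input_bulk_HFX_3.inp -o LiH-HFX.log
```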
#### Notes
The amount of memory available per MPI process must be altered according to the number of MPI processes being used. If this is not done the benchmark will crash with an out of memory (OOM) error. The input file keyword ``MAX_MEMORY`` in ``input_bulk_HFX_3.inp`` needs to be changed as follows:
```
MAX_MEMORY 14000
```
should be changed to
```
MAX_MEMORY new_value
```
The new value of ``MAX_MEMORY`` is chosen by dividing the total amount of memory available on a node by the number of MPI processes being used per node.
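For example, on a hypothetical node with 192 GB of memory running 48 MPI processes per node, the value would become roughly 192000 / 48 = 4000 (``MAX_MEMORY`` is specified in MB):
```
MAX_MEMORY 4000
```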
If a shorter runtime is desirable, the following line in ``input_bulk_HFX_3.inp``:
```
MAX_SCF 20
```
may be changed to
```
MAX_SCF 1
```
in order to reduce the maximum number of SCF cycles and hence the execution time.
If the runtime or required memory needs to be reduced so the benchmark can run on a smaller number of nodes, the OPT1 basis set can be used instead of the default OPT2. To this end, the line
```
BASIS_SET OPT2
```
in ``input_bulk_B88_3.inp`` and in ``input_bulk_HFX_3.inp`` should be changed to
```
BASIS_SET OPT1
```
### Test Case C: H2O-DFT-LS
Test Case C is the H2O-DFT-LS benchmark included in the CP2K distribution (in ``tests/QS/benchmark_DM_LS``) and consists of a single-point energy calculation using linear-scaling DFT of 2048 water molecules.
# CP2K
## Summary Version
1.0
## Purpose of Benchmark
CP2K is a freely available quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems.
## Characteristics of Benchmark
CP2K can be used to perform DFT calculations using the Quickstep algorithm, which applies mixed Gaussian and plane-wave approaches (GPW and GAPW). Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, …), and classical force fields (AMBER, CHARMM, …). CP2K can perform simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimisation, and transition state optimisation using NEB or the dimer method.
CP2K is written in Fortran 2008 and can be run in parallel using a combination of multi-threading, MPI, and CUDA. All of CP2K is MPI parallelised, with some additional loops also being OpenMP parallelised. It is therefore most important to take advantage of MPI parallelisation; however, running one MPI rank per CPU core often leads to memory shortage. At this point OpenMP threads can be used to utilise all CPU cores without suffering an overly large memory footprint. The optimal ratio between MPI ranks and OpenMP threads depends on the type of simulation and the system in question. CP2K supports CUDA, allowing it to offload some linear algebra operations, including sparse matrix multiplications, to the GPU through its DBCSR acceleration layer. FFTs can optionally also be offloaded to the GPU. GPU offloading may improve performance, depending on the type of simulation and the system in question.
## Mechanics of Building Benchmark
GNU make and Python 2.x are required for the build process, as are a Fortran 2003 compiler and matching C compiler, e.g. gcc/gfortran (gcc >= 4.6 works; a later version is recommended).
CP2K can benefit from a number of external libraries for improved performance. It is advised to use vendor-optimized versions of these libraries. If these are not available on your machine, there exist freely available implementations of these libraries including but not limited to those listed below.
### Download the source code
Download a CP2K release from https://sourceforge.net/projects/cp2k/files/ or follow instructions at https://www.cp2k.org/download to check out the relevant branch of the CP2K GitHub repository.
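A minimal sketch for unpacking a downloaded release (the tarball name assumes the 6.1 release used for D7.5; adapt it to the release you actually download):
```
# Assumes the release tarball has already been downloaded from one of the sources above
tar xjf cp2k-6.1.tar.bz2
cd cp2k-6.1
```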
### Install or locate required libraries
**LAPACK & BLAS**
Can be provided from:
- netlib: http://netlib.org/lapack & http://netlib.org/blas
- MKL: part of the Intel MKL installation
- LibSci: installed on Cray platforms
- ATLAS: http://math-atlas.sf.net
- OpenBLAS: http://www.openblas.net
- clBLAS: http://gpuopen.com/compute-product/clblas/
**SCALAPACK and BLACS**
Can be provided from:
- netlib: http://netlib.org/scalapack/
- MKL: part of the Intel MKL installation
- LibSci: installed on Cray platforms
**LIBINT**
Available from <https://www.cp2k.org/static/downloads/libint-1.1.4.tar.gz>
The following commands will uncompress and install the LIBINT library required for the UEABS benchmarks:
```
tar xzf libint-1.1.4.tar.gz
cd libint-1.1.4
./configure CC=cc CXX=CC --prefix=install_path    # install_path must not be this directory
make
make install
```
Note: The environment variables ``CC`` and ``CXX`` are optional and can be used to specify the C and C++ compilers to use for the build (the example above is configured to use the compiler wrappers ``cc`` and ``CC`` used on Cray systems).
### Install optional libraries
- FFTW3: http://www.fftw.org or provided as an interface by MKL
- Libxc: http://www.tddft.org/programs/octopus/wiki/index.php/Libxc
- ELPA: https://www.cp2k.org/static/downloads/elpa-2016.05.003.tar.gz
- libgrid: within the CP2K distribution - cp2k/tools/autotune_grid
- libxsmm: https://www.cp2k.org/static/downloads/libxsmm-1.4.4.tar.gz
### Compile CP2K
Before compiling, the choice of compilers, the library locations, and the compilation and linker flags need to be specified. This is done in an arch (architecture) file. Example arch files for a number of common architectures can be found inside the ``cp2k/arch`` directory. The names of these files match the pattern architecture.version (e.g., Linux-x86-64-gfortran.sopt). The case "version=psmp" corresponds to the hybrid MPI + OpenMP version that you should build to run the UEABS benchmarks. Machine-specific examples can be found in the relevant subdirectories.
In most cases you need to create a custom arch file, either from scratch or by modifying an existing one that roughly fits the cpu type, compiler, and installation paths of libraries on your system. You can also consult https://dashboard.cp2k.org, which provides sample arch files as part of the testing reports for some platforms (click on the status field for a platform, and search for 'ARCH-file' in the resulting output).
As a guide for GNU compilers the following should be included in the ``arch`` file:
**Specification of which compiler and linker commands to use:**
```
CC = gcc
FC = mpif90
LD = mpif90
AR = ar -r
```
CP2K is primarily a Fortran code, so only the Fortran compiler needs to be MPI-enabled.
**Specification of the ``DFLAGS`` variable, which should include:**
```
-D__parallel            : to build a parallel CP2K executable
-D__SCALAPACK           : to link to ScaLAPACK
-D__LIBINT              : to link to LIBINT
-D__MKL                 : if relying on MKL for ScaLAPACK and/or an FFTW interface
-D__HAS_NO_SHARED_GLIBC : for convenience on HPC systems, see the INSTALL.md file
```
Additional DFLAGS which are needed to link to performance libraries, such as -D__FFTW3 to link to FFTW3, are listed in the INSTALL file.
**Specification of compiler flags ``FCFLAGS`` (for gfortran):**
```
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp                                 : Required
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp -O3 -ffast-math -funroll-loops : Recommended
```
If you want to link any libraries containing header files you should pass the path to the directory containing these to FCFLAGS in the format -I/path_to_include_dir.
**Specification of libraries to link to:**
```
-L{path_to_libint}/lib -lderiv -lint    : Required for LIBINT
```
If you use MKL to provide ScaLAPACK and/or an FFTW interface the LIBS variable should be used to pass the relevant flags provided by the MKL Link Line Advisor (https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor), which you should use carefully in order to generate the right options for your system.
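As an illustration of how the pieces above fit together, a minimal hypothetical arch file for GNU compilers linked against Libint, FFTW3 and generic (netlib-style) BLAS/LAPACK/ScaLAPACK might look like the sketch below. All paths are placeholders, optional libraries are omitted, and the MKL link line should replace the generic one where MKL is used:
```
CC      = gcc
FC      = mpif90
LD      = mpif90
AR      = ar -r
DFLAGS  = -D__parallel -D__SCALAPACK -D__LIBINT -D__FFTW3
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp -O3 -ffast-math -funroll-loops \
          -I/path/to/fftw/include
LDFLAGS = $(FCFLAGS)
LIBS    = -L/path/to/libint/lib -lderiv -lint \
          -L/path/to/fftw/lib -lfftw3 -lfftw3_omp \
          -lscalapack -llapack -lblas
```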
#### Building the executable
To build the hybrid MPI+OpenMP executable ``cp2k.psmp`` using *your_arch_file.psmp* run make in the ``cp2k/makefiles`` directory for v4-6 (or in the top-level cp2k directory for v7+).
```
make -j N ARCH=your_arch_file VERSION=psmp    : on N threads
make ARCH=your_arch_file VERSION=psmp         : serially
```
The executable ``cp2k.psmp`` will then be located in:
```
cp2k/exe/your_arch_file
```
### Compiling CP2K for CUDA-enabled GPUs
An example arch file for compiling CP2K for CUDA-enabled GPUs can be found in the ``PizDaint`` subdirectory.
In general the main steps are:
1. Load the cuda module.
2. Ensure that the ``CUDA_PATH`` variable is set.
3. Add the following to the arch file:
**Additional required compiler and linker commands**
```
NVCC = nvcc
```
**Additional ``DFLAGS``**
```
-D__ACC -D__DBCSR_ACC -D__PW_CUDA
```
**Set ``NVFLAGS``**
```
NVFLAGS = $(DFLAGS) -O3 -arch sm_60
```
**Additional required libraries**
```
-lcudart -lcublas -lcufft -lrt
```
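Put together, the CUDA-related additions to an arch file might look like the following sketch (``sm_60`` targets Pascal GPUs and should be adjusted to your hardware, as discussed earlier for the ``-arch`` flag):
```
NVCC     = nvcc
DFLAGS  += -D__ACC -D__DBCSR_ACC -D__PW_CUDA
NVFLAGS  = $(DFLAGS) -O3 -arch sm_60
LIBS    += -lcudart -lcublas -lcufft -lrt
```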
## Mechanics of Running Benchmark
The general way to run the benchmarks with the hybrid parallel executable is:
```
export OMP_NUM_THREADS=X
parallel_launcher launcher_options path_to/cp2k.psmp -i inputfile.inp -o logfile
```
Where:
* The environment variable for the number of threads must be set before calling the executable.
* The parallel_launcher is mpirun, mpiexec, or some variant such as aprun on Cray systems or srun when using Slurm.
* launcher_options specifies parallel placement in terms of total numbers of nodes, MPI ranks/tasks, tasks per node, and OpenMP threads per task (which should be equal to the value given to OMP_NUM_THREADS). This is not necessary if parallel runtime options are picked up by the launcher from the job environment.
* You can try any combination of tasks per node and OpenMP threads per task to investigate absolute performance and scaling on the machine of interest.
* The inputfile usually has the extension .inp, and may specify within it further required files (such as basis sets, potentials, etc.)
For tier-1 systems the best performance is usually obtained with pure MPI, while for tier-0 systems the best performance is typically obtained using 1 MPI task per node with the number of threads being equal to the number of cores per node.
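As an illustration, a hypothetical Slurm batch script for Test Case C on 8 nodes with 12 tasks per node and 2 OpenMP threads per task (all resource values, paths and the input file name are placeholders to adapt):
```
#!/bin/bash
#SBATCH --job-name=cp2k-h2o-dft-ls
#SBATCH --nodes=8
#SBATCH --ntasks-per-node=12
#SBATCH --cpus-per-task=2
#SBATCH --time=01:00:00

# One OpenMP thread per CPU allocated to each task
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun /path/to/cp2k.psmp -i H2O-DFT-LS.inp -o H2O-DFT-LS.log
```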
**UEABS benchmarks:**
Test Case | System | Number of Atoms | Run type | Description | Location |
----------|------------|-----------------|---------------|------------------------------------------------------|---------------------------------|
A | H2O-512 | 1536 | MD | Uses the Born-Oppenheimer approach via Quickstep DFT | ``/tests/QS/benchmark/`` |
B | LiH-HFX | 216 | Single-energy | Must create wavefunction first - see benchmark README | ``/tests/QS/benchmark_HFX/LiH`` |
C | H2O-DFT-LS | 6144 | Single-energy | Uses linear-scaling DFT | ``/tests/QS/benchmark_DM_LS`` |
More information in the form of a README and an example job script is included in each benchmark tar file.
## Verification of Results
The run walltime is reported near the end of the logfile and can be extracted with:
grep "CP2K " logfile | awk -F ' ' '{print $7}'