Commit 5b328137 authored by Valeriu Codreanu

Merge branch 'r2.1-dev' into 'master'

R2.1 dev

See merge request UEABS/ueabs!3
parents 74c3b0e4 070f6323
# UEABS Releases
## Version 2.1 (PRACE-5IP, April 30, 2019)
* Updated the benchmark suite to the status as used for the PRACE-5IP benchmarking deliverable D7.5 "Evaluation of Accelerated and Non-accelerated Benchmarks" (April 18, 2019)
* Removed GENE
## Version 2.0 (PRACE-5IP MS31, May 31, 2018)
* Reconstructed this versioned git repository from a "flat" web site representation
...
* GPAW: new dataset with reduced runtime.
## Version 1.0 (PRACE-2IP, October 31, 2013)
* Initial Release
Build instructions for CP2K.
2016-10-26
===== 1. Get the code =====
Download a CP2K release from https://sourceforge.net/projects/cp2k/files/ or follow instructions at
https://www.cp2k.org/download to check out the relevant branch of the CP2K SVN repository. These
build instructions and the accompanying benchmark run instructions have been tested with release
4.1.
===== 2. Prerequisites & Libraries =====
GNU make and Python 2.x are required for the build process, as are a Fortran 2003 compiler and
matching C compiler, e.g. gcc/gfortran (gcc >= 4.6 works; a later version is recommended).
CP2K can benefit from a number of external libraries for improved performance. It is advised to
use vendor-optimized versions of these libraries. If these are not available on your machine,
freely available implementations include, but are not limited to, those listed below.
Required:
---------
The minimum set of libraries required to build a CP2K executable that will run the UEABS benchmarks
is:
1. LAPACK & BLAS, as provided by, for example:
netlib : http://netlib.org/lapack & http://netlib.org/blas
MKL : part of your Intel MKL installation, if available
LibSci : installed on Cray platforms
ATLAS : http://math-atlas.sf.net
OpenBLAS : http://www.openblas.net
clBLAS : http://gpuopen.com/compute-product/clblas/
2. SCALAPACK & BLACS, as provided by, for example:
netlib : http://netlib.org/scalapack/
MKL : part of your Intel MKL installation, if available
LibSci : installed on Cray platforms
3. LIBINT, available from https://www.cp2k.org/static/downloads/libint-1.1.4.tar.gz
(see build instructions in section 2.1 below)
Optional:
---------
The following libraries are optional but give a significant performance benefit:
4. FFTW3, available from http://www.fftw.org or provided as an interface by MKL
5. ELPA, available from https://www.cp2k.org/static/downloads/elpa-2016.05.003.tar.gz
6. libgrid, available from inside the distribution at cp2k/tools/autotune_grid
7. libxsmm, available from https://www.cp2k.org/static/downloads/libxsmm-1.4.4.tar.gz
More information can be found in the INSTALL file in the CP2K distribution.
2.1 Building LIBINT
-------------------
The following commands will uncompress and install the LIBINT library required for the UEABS
benchmarks:
tar xzf libint-1.1.4.tar.gz
cd libint-1.1.4
./configure CC=cc CXX=CC --prefix=some_path_other_than_this_directory
make
make install
The environment variables CC and CXX are optional and can be used to specify the C and C++ compilers
to use for the build (the example above is configured to use the compiler wrappers cc and CC used on
Cray systems). By default the build process only creates static libraries (ending in .a). If you
want to be able to link dynamically to LIBINT when building CP2K you can pass the flag
--enable-shared to ./configure in order to produce shared libraries (ending in .so). In that case
you will need to ensure that the library is located in a place that is accessible at runtime and that
the LD_LIBRARY_PATH environment variable includes the LIBINT installation directory.
For more build options see ./configure --help.
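For example, if a shared LIBINT was installed under a prefix in your home directory (a hypothetical path; substitute the --prefix you actually passed to ./configure), the runtime environment could be set up as:

```shell
# Hypothetical install prefix; substitute your own --prefix value.
LIBINT_PREFIX=$HOME/cp2k/libs/libint/1.1.4
export LD_LIBRARY_PATH=$LIBINT_PREFIX/lib:$LD_LIBRARY_PATH
```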
===== 3. Building CP2K =====
If you have downloaded a tarball of the release, uncompress the file by running:
tar xf cp2k-4.1.tar.bz2
If necessary you can find additional information about building CP2K in the INSTALL file located in
the root directory of the CP2K distribution.
==== 3.1 Create or choose an arch file ====
Before compiling, the choice of compilers, the library locations, and the compilation and linker
flags need to be specified in an arch (architecture) file. Example arch files for a number of
common architectures can be found inside cp2k/arch. The names of these files match the pattern
architecture.version (e.g., Linux-x86-64-gfortran.sopt). The case "version=psmp" corresponds
to the hybrid MPI + OpenMP version that you should build to run the UEABS benchmarks.
In most cases you need to create a custom arch file, either from scratch or by modifying an existing
one that roughly fits the cpu type, compiler, and installation paths of libraries on your
system. You can also consult https://dashboard.cp2k.org, which provides sample arch files as part of
the testing reports for some platforms (click on the status field for a platform, and search for
'ARCH-file' in the resulting output).
As a guided example, the following should be included in your arch file if you are compiling with
GNU compilers:
(a) Specification of which compiler and linker commands to use:
CC = gcc
FC = mpif90
LD = mpif90
CP2K is primarily a Fortran code, so only the Fortran compiler needs to be MPI-enabled.
(b) Specification of the DFLAGS variable, which should include:
-D__parallel (to build parallel CP2K executable)
-D__SCALAPACK (to link to ScaLAPACK)
-D__LIBINT (to link to LIBINT)
-D__MKL (if relying on MKL to provide ScaLAPACK and/or an FFTW interface)
-D__HAS_NO_SHARED_GLIBC (for convenience on HPC systems, see INSTALL file)
Additional DFLAGS needed to link to performance libraries, such as -D__FFTW3 to link to FFTW3,
are listed in the INSTALL file.
(c) Specification of compiler flags:
Required (for gfortran):
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp
Recommended additional flags (for gfortran):
FCFLAGS += -O3 -ffast-math -funroll-loops
If you want to link any libraries containing header files you should pass the path to the
directory containing these to FCFLAGS in the format -I/path_to_include_dir.
(d) Specification of linker flags:
LDFLAGS = $(FCFLAGS)
(e) Specification of libraries to link to:
Required (LIBINT):
-L/home/z01/z01/UEABS/CP2K/libint/1.1.4/lib -lderiv -lint
If you use MKL to provide ScaLAPACK and/or an FFTW interface the LIBS variable should be used
to pass the relevant flags provided by the MKL Link Line Advisor (https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor),
which you should use carefully in order to generate the right options for your system.
(f) AR = ar -r
As an example, a simple arch file is shown below for ARCHER (http://www.archer.ac.uk), a Cray system
that uses compiler wrappers cc and ftn to compile C and Fortran code respectively, and which has
LIBINT installed in /home/z01/z01/user/cp2k/libs/libint/1.1.4. On Cray systems the compiler wrappers
automatically link in Cray's LibSci library which provides ScaLAPACK, hence there is no need for
explicit specification of the library location and library names in LIBS or relevant include paths
in FCFLAGS. This would not be the case if MKL was used instead.
#=============================================================================================
#
# Ensure the following environment modules are loaded before starting the build:
#
# PrgEnv-gnu
# cray-libsci
#
CC = cc
CPP =
FC = ftn
LD = ftn
AR = ar -r
DFLAGS = -D__parallel \
-D__SCALAPACK \
-D__LIBINT \
-D__HAS_NO_SHARED_GLIBC
CFLAGS = $(DFLAGS)
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp
FCFLAGS += -O3 -ffast-math -funroll-loops
LDFLAGS = $(FCFLAGS)
LIBS = -L/home/z01/z01/user/cp2k/libint/1.1.4/lib -lderiv -lint
#=============================================================================================
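If MKL were used instead of LibSci, the LIBS variable would carry the output of the MKL Link Line Advisor. As a sketch only (for gfortran with Intel MPI and sequential MKL; verify the exact flags for your MKL and MPI versions with the Link Line Advisor):

```
LIBS = -L${MKLROOT}/lib/intel64 -Wl,--no-as-needed \
       -lmkl_scalapack_lp64 -lmkl_gf_lp64 -lmkl_sequential \
       -lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -ldl
```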
==== 3.2 Compile ====
Change directory to cp2k-4.1/makefiles.
There is no configure stage. If the arch file for your machine is called
SomeArchitecture_SomeCompiler.psmp, then issue the following command to compile:
make ARCH=SomeArchitecture_SomeCompiler VERSION=psmp
or, if you are able to run in parallel with N threads:
make -j N ARCH=SomeArchitecture_SomeCompiler VERSION=psmp
There is also no "make install" stage. If the build succeeds, you will find the executable
cp2k.psmp in the directory cp2k-4.1/exe/SomeArchitecture_SomeCompiler.
CP2K can be downloaded from: http://www.cp2k.org/download
It is free for all users under the GPL license;
see the Obtaining CP2K section on the download page.
In UEABS (2IP) the 2.3 branch was used, which can be downloaded from:
http://sourceforge.net/projects/cp2k/files/cp2k-2.3.tar.bz2
Data files are compatible with at least the 2.4 branch.
The Tier-0 data set requires the libint-1.1.4 library. If libint version 1 is not available on
your machine, it can be downloaded from:
http://sourceforge.net/projects/libint/files/v1-releases/libint-1.1.4.tar.gz
Run instructions for CP2K.
2016-10-26
After building the hybrid MPI+OpenMP version of CP2K you have an executable
called cp2k.psmp. The general way to run the benchmarks is:
export OMP_NUM_THREADS=##
parallel_launcher launcher_options path_to_cp2k.psmp -i inputfile -o logfile
Where:
o The parallel_launcher is mpirun, mpiexec, or some variant such as aprun on
Cray systems or srun when using Slurm.
o The launcher_options include the parallel placement in terms of total numbers
of nodes, MPI ranks/tasks, tasks per node, and OpenMP threads per task (which
should be equal to the value given to OMP_NUM_THREADS)
You can try any combination of tasks per node and OpenMP threads per task to
investigate absolute performance and scaling on the machine of interest.
For tier-1 systems the best performance is usually obtained with pure MPI, while
for tier-0 systems the best performance is typically obtained using 1 MPI task
per node with the number of threads being equal to the number of cores per node.
More information in the form of a README and an example job script is included
in each benchmark tar file.
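As an illustration, a minimal Slurm batch script for a hybrid run might look as follows. This is a sketch only: node counts, walltime, and the input file name benchmark.inp are placeholders, and your site's launcher options may differ.

```
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=2
#SBATCH --cpus-per-task=12
#SBATCH --time=01:00:00

# Match the OpenMP thread count to the cores allocated per MPI task.
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./cp2k.psmp -i benchmark.inp -o benchmark.log
```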
The run walltime is reported near the end of the logfile:
grep "CP2K " logfile | awk -F ' ' '{print $7}'
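To illustrate the field extraction, here is the command applied to a synthetic line mimicking the final timing-report entry (the exact column layout may differ between CP2K versions):

```shell
# Synthetic stand-in for the last line of CP2K's timing report.
printf ' CP2K    1  1.0  0.02  0.02  123.45  123.45\n' > logfile
# awk's default whitespace splitting makes the walltime field $7.
grep "CP2K " logfile | awk -F ' ' '{print $7}'   # prints 123.45
```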
#
# module load openmpi cuda blas lapack
#
#
CC = gcc
CPP =
FC = mpif90
LD = mpif90
AR = ar -r
CPPFLAGS =
DFLAGS = -D__FFTW3 \
-D__LIBINT \
-D__LIBXC \
-D__parallel \
-D__SCALAPACK
SCALAPACK_LIB = /davide/home/userexternal/$USER/scalapack/scalapack
LIBINT_HOME = /davide/home/userexternal/$USER/cp2k/libs/libint/1.1.4
LIBXC_HOME = /davide/home/userexternal/$USER/cp2k/libs/libxc/4.1.1
FFTW_HOME = /davide/home/userexternal/$USER/fftw/3.3.8
FFTW_INC = $(FFTW_HOME)/include
FFTW_LIB = $(FFTW_HOME)/lib
FCFLAGS = $(DFLAGS) -O3 -ffast-math -ffree-form -funroll-loops -mcpu=power8\
-I$(FFTW_INC) -I$(LIBINT_HOME)/include -I$(LIBXC_HOME)/include
LDFLAGS = $(FCFLAGS) #-Wl,--start-group #-static
#LIBS = -L$(SCALAPACK_LIB) -lscalapack\
# -L$(LAPACK_LIB) -llapack\
# -L$(BLAS_LIB) -lblas\
# -L$(FFTW_LIB) -lfftw3\
# -L$(FFTW_LIB) -lfftw3_threads\
# -L$(LIBINT_HOME)/lib -lderiv\
# -L$(LIBINT_HOME)/lib -lint\
# -L$(LIBINT_HOME)/lib -lr12\
# -L$(LIBXC_HOME)/lib -lxcf03\
# -L$(LIBXC_HOME)/lib -lxc
LIBS = $(SCALAPACK_LIB)/libscalapack.a\
$(LAPACK_LIB)/liblapack.a\
$(BLAS_LIB)/libblas.a\
$(FFTW_LIB)/libfftw3.a\
$(FFTW_LIB)/libfftw3_threads.a\
$(LIBINT_HOME)/lib/libderiv.a\
$(LIBINT_HOME)/lib/libint.a\
$(LIBINT_HOME)/lib/libr12.a\
$(LIBXC_HOME)/lib/libxcf03.a\
$(LIBXC_HOME)/lib/libxc.a\
-ldl\
# -Wl,--end-group
Building FFTW 3.3.8 (serial and threaded versions; installation paths are site-specific):
```
module load gnu
./configure --prefix=/homeb/prcdeep/prdeep04/fftw/3.3.8 CFLAGS="-O3 -mcpu=power8"
make
make install
make clean
./configure --prefix=/homeb/prcdeep/prdeep04/fftw/3.3.8 CFLAGS="-O3 -mcpu=power8" --enable-threads
make
make install
```
Building LIBINT 1.1.4:
```
module load gnu
export CC=gcc
export CXX=g++
CFLAGS="-O2 -ftree-vectorize -g -fno-omit-frame-pointer -mcpu=power8 -ffast-math"
CXXFLAGS=$CFLAGS
./configure --build=ppc64le-linux --prefix=/davide/home/userexternal/$USER/cp2k/libs/libint/1.1.4 --with-cc-optflags="${CFLAGS}" --with-cxx-optflags="${CXXFLAGS}"
make
make install
```
Building libxc 4.1.1:
```
module load gnu
./configure --build=ppc64le CFLAGS="-mcpu=power8" --prefix=/davide/home/userexternal/$USER/cp2k/libs/libxc/4.1.1
make
make install
```
#
# Example environment setups (choose the one matching your system):
#
# export MODULEPATH=/usr/local/modulefiles/MISC:$MODULEPATH
# module load parastation/5.2.0-1
# export MKLROOT=/usr/local/intel/compilers_and_libraries_2018.0.128/linux/mkl
# export LD_LIBRARY_PATH=${MKLROOT}/lib
#
# module load Intel/2018.2.199-GCC-5.5.0 ParaStationMPI/5.2.1-1 imkl/2018.2.199
#
# export MODULEPATH=/usr/local/software/haswell/Stages/2018a/modules/all/Compiler/mpi/intel/2018.2.199-GCC-5.5.0/:$MODULEPATH
# module load ParaStationMPI/5.2.1-1
CC = gcc
CPP =
FC = mpif90
LD = mpif90
AR = ar -r
FFTW_LIB = /p/home/jusers/$USER/deep/fftw/3.3.8/lib
FFTW_INC = /p/home/jusers/$USER/deep/fftw/3.3.8/include
LIB_DIR = $(HOME)/cp2k/libs/
LIBINT_INC = $(LIB_DIR)/libint/1.1.4/include
LIBINT_LIB = $(LIB_DIR)/libint/1.1.4/lib
LIBXC_INC = $(LIB_DIR)/libxc/4.2.3/include
LIBXC_LIB = $(LIB_DIR)/libxc/4.2.3/lib
DFLAGS = -D__MKL -D__FFTW3 -D__LIBINT -D__LIBXC -D__parallel -D__SCALAPACK #-D__HAS_NO_SHARED_GLIBC
CPPFLAGS =
FCFLAGS = $(DFLAGS) -O3 -ffast-math -ffree-form -ffree-line-length-none\
-fopenmp -ftree-vectorize -funroll-loops -march=core-avx2\
-m64\
-I$(FFTW_INC) -I$(LIBINT_INC) -I$(LIBXC_INC) -I${MKLROOT}/include
LDFLAGS = $(FCFLAGS) #-static
LIBS = $(FFTW_LIB)/libfftw3.a\
$(FFTW_LIB)/libfftw3_threads.a\
-L$(MKLROOT)/lib/intel64\
-Wl,--no-as-needed\
-lmkl_scalapack_lp64\
-lmkl_gf_lp64\
-lmkl_sequential\
-lmkl_core\
-lmkl_blacs_intelmpi_lp64\
-lpthread -lm -ldl\
$(LIBXC_LIB)/libxcf03.a\
$(LIBXC_LIB)/libxc.a\
$(LIBINT_LIB)/libderiv.a\
$(LIBINT_LIB)/libint.a
H OPT2
12
1 0 0 1 1
27.463675 1.
1 0 0 1 1
6.855925825156615 1.
1 0 0 1 1
1.7679972172969496 1.
1 0 0 1 1
0.5118184226579885 1.
1 0 0 1 1
0.2016754773220471 1.
1 1 1 1 1
2.1240864600000005 1.
1 1 1 1 1
1.0736811657505927 1.
1 1 1 1 1
0.5683866184932796 1.
1 2 2 1 1
0.9283383999999999 1.
1 2 2 1 1
0.4958300002580446 1.
1 3 3 1 1
1.207348 1.
1 0 0 3 1
3079.70000000 0.00023473
461.52000000 0.00182450
105.06000000 0.00959330
Li OPT2
19
1 0 0 1 1
1336.0340589999998 1.
1 0 0 1 1
444.29981699128217 1.
1 0 0 1 1
147.79701849115634 1.
1 0 0 1 1
49.209451280132136 1.
1 0 0 1 1
16.4289566009762 1.
1 0 0 1 1
5.529399366643602 1.
1 0 0 1 1
1.9052823706290967 1.
1 0 0 1 1
0.700258742749093 1.
1 0 0 1 1
0.29958681484335464 1.
1 0 0 1 1
0.16636287624379748 1.
1 1 1 1 1
1.5709110000000002 1.
1 1 1 1 1
0.7487586392445541 1.
1 1 1 1 1
0.3861408827552758 1.
1 1 1 1 1
0.22620503193500366 1.
1 2 2 1 1
0.7792082 1.
1 2 2 1 1
0.40789925215656075 1.
1 3 3 1 1
0.737063 1.
1 0 0 3 1
70681.00000000 0.00000544
13594.00000000 0.00003328
3100.40000000 0.00019175
1 1 1 2 1
28.50000000 0.00036754
6.64000000 0.00322359
H OPT1
7
1 0 0 1 1
10.803149999999999 1.
1 0 0 1 1
2.6055222312529733 1.
1 0 0 1 1
0.6865228134990986 1.
1 0 0 1 1
0.23730032264884945 1.
1 1 1 1 1
2.155972 1.
1 1 1 1 1
1.0857651842749134 1.
1 2 2 1 1
1.0611532000000001 1.
Li OPT1
13
1 0 0 1 1
771.2361000000001 1.
1 0 0 1 1
225.91017430581485 1.
1 0 0 1 1
66.22352193085302 1.
1 0 0 1 1
19.462812884677934 1.
1 0 0 1 1
5.769972074371827 1.
1 0 0 1 1
1.760326085658556 1.
1 0 0 1 1
0.5861898214946101 1.
1 0 0 1 1
0.2423699504372975 1.
1 0 0 1 1
0.14168989536886728 1.
1 1 1 1 1
0.6572392 1.
1 1 1 1 1
0.4007350544328856 1.
1 1 1 1 1
0.26230296247060625 1.
1 2 2 1 1
0.9706849 1.
&FORCE_EVAL
METHOD Quickstep
&DFT
BASIS_SET_FILE_NAME ./BASIS_OPT
POTENTIAL_FILE_NAME ./POTENTIAL
&PRINT
&BASIS_MOLOPT_QUANTITIES
&END BASIS_MOLOPT_QUANTITIES
&END
&MGRID
CUTOFF 300
REL_CUTOFF 50
&END MGRID
&QS
METHOD GAPW
EPS_DEFAULT 1.0E-12
EPS_PGF_ORB 1.0E-16
EPS_FILTER_MATRIX 0.0e0
&END QS
&SCF
EPS_SCF 1.0E-7
MAX_SCF 10
SCF_GUESS ATOMIC
&OT
PRECONDITIONER FULL_ALL
&END
&OUTER_SCF
EPS_SCF 1.0E-7
MAX_SCF 10
&END
&END SCF
&XC
&XC_FUNCTIONAL
&BECKE88
&END
&END XC_FUNCTIONAL
&END
&END DFT
&SUBSYS
&CELL
ABC 12.252 12.252 12.252
&END CELL
&COORD
Li 0 0 0
Li 2.042 2.042 0
Li 2.042 0 2.042
Li 0 2.042 2.042
H 0 2.042 0
H 0 0 2.042
H 2.042 0 0
H 2.042 2.042 2.042
Li 0 0 4.084
Li 2.042 2.042 4.084
Li 2.042 0 6.126
Li 0 2.042 6.126
H 0 2.042 4.084
H 0 0 6.126
H 2.042 0 4.084
H 2.042 2.042 6.126
Li 0 0 8.168
Li 2.042 2.042 8.168
Li 2.042 0 10.21
Li 0 2.042 10.21
H 0 2.042 8.168
H 0 0 10.21
H 2.042 0 8.168
H 2.042 2.042 10.21
Li 0 4.084 0
Li 2.042 6.126 0
Li 2.042 4.084 2.042
Li 0 6.126 2.042
H 0 6.126 0
H 0 4.084 2.042
H 2.042 4.084 0
H 2.042 6.126 2.042
Li 0 4.084 4.084
Li 2.042 6.126 4.084
Li 2.042 4.084 6.126
Li 0 6.126 6.126
H 0 6.126 4.084
H 0 4.084 6.126
H 2.042 4.084 4.084
H 2.042 6.126 6.126
Li 0 4.084 8.168
Li 2.042 6.126 8.168
Li 2.042 4.084 10.21
Li 0 6.126 10.21
H 0 6.126 8.168
H 0 4.084 10.21
H 2.042 4.084 8.168
H 2.042 6.126 10.21
Li 0 8.168 0
Li 2.042 10.21 0
Li 2.042 8.168 2.042
Li 0 10.21 2.042
H 0 10.21 0
H 0 8.168 2.042
H 2.042 8.168 0
H 2.042 10.21 2.042
Li 0 8.168 4.084
Li 2.042 10.21 4.084
Li 2.042 8.168 6.126
Li 0 10.21 6.126
H 0 10.21 4.084
H 0 8.168 6.126
H 2.042 8.168 4.084
H 2.042 10.21 6.126
Li 0 8.168 8.168