Commit 7239fb83 authored by Valeriu Codreanu

added v1.3 of UEABS

parent d5c40bfe
# Unified European Applications Benchmark Suite, version 1.3
The Unified European Application Benchmark Suite (UEABS) is a set of 12 application codes taken from the pre-existing PRACE and DEISA application benchmark suites to form a single suite, with the objective of providing a set of scalable, currently relevant and publicly available codes and datasets, of a size which can realistically be run on large systems, and maintained into the future. This work has been undertaken by Task 7.4 "Unified European Applications Benchmark Suite for Tier-0 and Tier-1" in the PRACE Second Implementation Phase (PRACE-2IP) project and will be updated and maintained by subsequent PRACE Implementation Phase projects.
......
Running a case is described on the following page:
http://www.code-saturne.org
Two test cases are available: the first uses a hexahedral grid
and the second a tetrahedral grid.
Test Case A deals with the flow in a bundle of tubes.
A larger mesh (51M cells) is built from an original mesh of 13M cells.
The original mesh_input file (already preprocessed for Code_Saturne)
is to be found under MESH.
The user subroutines are under XE6_INTERLAGOS/SRC.
The test case has been set up to run for 10 time-steps.
Test Case B models a lid-driven cavity and the cells are all tetras.
The total number of cells is about 110M.
The mesh is called mesh_input and the user subroutines are available under SRC_UEABS.
10 time-steps are run.
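As an illustration, a possible sequence for setting up and launching one of the
test cases with the code_saturne front-end is sketched below. The study and case
names are placeholders, and the exact options and directory layout depend on the
Code_Saturne version, so check the page referenced above:
# create a study containing one case (option names may vary by version)
code_saturne create -s UEABS_TCA -c CASE1
# place the preprocessed mesh in the study-level MESH directory
cp MESH/mesh_input UEABS_TCA/MESH/
# copy the benchmark user subroutines into the case
cp XE6_INTERLAGOS/SRC/* UEABS_TCA/CASE1/SRC/
# adapt the generated run script to your batch system, then launch
cd UEABS_TCA/CASE1/SCRIPTS && ./runcase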
Build instructions for CP2K.
2014-04-09 : ntell@iasa.gr
CP2K needs a number of external libraries and a thread-enabled MPI implementation.
These are: BLAS/LAPACK, BLACS/SCALAPACK, LIBINT, and FFTW3.
It is advised to use vendor-optimized versions of these libraries.
If some of these are not available on your machine, freely available
implementations exist; some are listed below.
1. BLAS/LAPACK :
netlib BLAS/LAPACK : http://netlib.org/lapack/
ATLAS : http://math-atlas.sf.net/
GotoBLAS : http://www.tacc.utexas.edu/tacc-projects
MKL : refer to your Intel MKL installation, if available
ACML : refer to your ACML installation if available
2. BLACS/SCALAPACK : http://netlib.org/scalapack/
MKL : refer to your Intel MKL installation for its BLACS/ScaLAPACK implementation, if available
3. LIBINT : http://sourceforge.net/projects/libint/files/v1-releases/
4. FFTW3 : http://www.fftw.org/
The directory cp2k-VERSION/arch contains arch files describing how to build CP2K;
for each architecture/compiler there are a few such files.
Select one of the .psmp files that fits your architecture/compiler.
cd to cp2k-VERSION/makefiles
If the arch file for your machine is called SOMEARCH_SOMECOMPILER.psmp,
issue: make ARCH=SOMEARCH_SOMECOMPILER VERSION=psmp
If everything goes fine, you'll find the executable cp2k.psmp in the directory
cp2k-VERSION/exe/SOMEARCH_SOMECOMPILER
In most cases you need to create a custom arch file that fits the CPU type,
compiler, and the installation paths of the external libraries.
As an example, below is the arch file for a machine with mpif90/gcc/gfortran that supports SSE2, has
all the external libraries installed under /usr/local/, and uses ATLAS with full
LAPACK support for BLAS/LAPACK, ScaLAPACK-2 for BLACS/ScaLAPACK, FFTW3, and libint-1.1.4:
#=======================================================================================================
CC = gcc
CPP =
FC = mpif90 -fopenmp
LD = mpif90 -fopenmp
DFLAGS = -D__GFORTRAN -D__FFTSG -D__parallel -D__BLACS -D__SCALAPACK -D__FFTW3 -D__LIBINT -I/usr/local/fftw3/include -I/usr/local/libint-1.1.4/include
CPPFLAGS =
FCFLAGS = $(DFLAGS) -O3 -msse2 -funroll-loops -finline -ffree-form
FCFLAGS2 = $(DFLAGS) -O3 -msse2 -funroll-loops -finline -ffree-form
LDFLAGS = $(FCFLAGS)
LIBS = /usr/local/Scalapack/lib/libscalapack.a \
/usr/local/Atlas/lib/liblapack.a \
/usr/local/Atlas/lib/libf77blas.a \
/usr/local/Atlas/lib/libcblas.a \
/usr/local/Atlas/lib/libatlas.a \
/usr/local/fftw3/lib/libfftw3_threads.a \
/usr/local/fftw3/lib/libfftw3.a \
/usr/local/libint-1.1.4/lib/libderiv.a \
/usr/local/libint-1.1.4/lib/libint.a \
-lstdc++ -lpthread
OBJECTS_ARCHITECTURE = machine_gfortran.o
#=======================================================================================================
2016-10-26
===== 1. Get the code =====
Download a CP2K release from https://sourceforge.net/projects/cp2k/files/ or follow instructions at
https://www.cp2k.org/download to check out the relevant branch of the CP2K SVN repository. These
build instructions and the accompanying benchmark run instructions have been tested with release
4.1.
===== 2. Prerequisites & Libraries =====
GNU make and Python 2.x are required for the build process, as are a Fortran 2003 compiler and
matching C compiler, e.g. gcc/gfortran (gcc >= 4.6 works; a later version is recommended).
CP2K can benefit from a number of external libraries for improved performance. It is advised to use
vendor-optimized versions of these libraries. If these are not available on your machine, there
exist freely available implementations of these libraries including but not limited to those listed
below.
Required:
---------
The minimum set of libraries required to build a CP2K executable that will run the UEABS benchmarks
is:
1. LAPACK & BLAS, as provided by, for example:
netlib : http://netlib.org/lapack & http://netlib.org/blas
MKL : part of your Intel MKL installation, if available
LibSci : installed on Cray platforms
ATLAS : http://math-atlas.sf.net
OpenBLAS : http://www.openblas.net
clBLAS : http://gpuopen.com/compute-product/clblas/
2. SCALAPACK & BLACS, as provided by, for example:
netlib : http://netlib.org/scalapack/
MKL : part of your Intel MKL installation, if available
LibSci : installed on Cray platforms
3. LIBINT, available from https://www.cp2k.org/static/downloads/libint-1.1.4.tar.gz
(see build instructions in section 2.1 below)
Optional:
---------
The following libraries are optional but give a significant performance benefit:
4. FFTW3, available from http://www.fftw.org or provided as an interface by MKL (a sample source build is sketched below)
5. ELPA, available from https://www.cp2k.org/static/downloads/elpa-2016.05.003.tar.gz
6. libgrid, available from inside the distribution at cp2k/tools/autotune_grid
7. libxsmm, available from https://www.cp2k.org/static/downloads/libxsmm-1.4.4.tar.gz
More information can be found in the INSTALL file in the CP2K distribution.
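As an illustration of building one of the optional libraries from source, a typical FFTW3
build with threading support might look like the following; the version number and
installation prefix are placeholders:
tar xzf fftw-3.3.4.tar.gz
cd fftw-3.3.4
./configure --prefix=/path/to/fftw3 --enable-threads --enable-openmp
make
make install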
2.1 Building LIBINT
-------------------
The following commands will uncompress and install the LIBINT library required for the UEABS
benchmarks:
tar xzf libint-1.1.4.tar.gz
cd libint-1.1.4
./configure CC=cc CXX=CC --prefix=some_path_other_than_this_directory
make
make install
The environment variables CC and CXX are optional and can be used to specify the C and C++ compilers
to use for the build (the example above is configured to use the compiler wrappers cc and CC used on
Cray systems). By default the build process only creates static libraries (ending in .a). If you
want to be able to link dynamically to LIBINT when building CP2K you can pass the flag
--enable-shared to ./configure in order to produce shared libraries (ending in .so). In that case
you will need to ensure that the library is located in a place that is accessible at runtime and that
the LD_LIBRARY_PATH environment variable includes the LIBINT installation directory.
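For example, assuming LIBINT was installed under /path/to/libint (a placeholder path), the
library directory can be made visible at runtime with:
export LD_LIBRARY_PATH=/path/to/libint/lib:$LD_LIBRARY_PATH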
For more build options see ./configure --help.
===== 3. Building CP2K =====
If you have downloaded a tarball of the release, uncompress the file by running
tar xf cp2k-4.1.tar.bz2.
If necessary you can find additional information about building CP2K in the INSTALL file located in
the root directory of the CP2K distribution.
==== 3.1 Create or choose an arch file ====
Before compiling, the choice of compilers, the library locations, and the compilation and linker
flags need to be specified. This is done in an arch (architecture) file. Example arch files for a number
of common architecture examples can be found inside cp2k/arch. The names of these files match the
pattern architecture.version (e.g., Linux-x86-64-gfortran.sopt). The case "version=psmp" corresponds
to the hybrid MPI + OpenMP version that you should build to run the UEABS benchmarks.
In most cases you need to create a custom arch file, either from scratch or by modifying an existing
one that roughly fits the cpu type, compiler, and installation paths of libraries on your
system. You can also consult https://dashboard.cp2k.org, which provides sample arch files as part of
the testing reports for some platforms (click on the status field for a platform, and search for
'ARCH-file' in the resulting output).
As a guided example, the following should be included in your arch file if you are compiling with
GNU compilers:
(a) Specification of which compiler and linker commands to use:
CC = gcc
FC = mpif90
LD = mpif90
CP2K is primarily a Fortran code, so only the Fortran compiler needs to be MPI-enabled.
(b) Specification of the DFLAGS variable, which should include:
-D__parallel (to build parallel CP2K executable)
-D__SCALAPACK (to link to ScaLAPACK)
-D__LIBINT (to link to LIBINT)
-D__MKL (if relying on MKL to provide ScaLAPACK and/or an FFTW interface)
-D__HAS_NO_SHARED_GLIBC (for convenience on HPC systems, see INSTALL file)
Additional DFLAGS needed to link to performance libraries, such as -D__FFTW3 to link to FFTW3,
are listed in the INSTALL file.
(c) Specification of compiler flags:
Required (for gfortran):
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp
Recommended additional flags (for gfortran):
FCFLAGS += -O3 -ffast-math -funroll-loops
If you want to link any libraries containing header files you should pass the path to the
directory containing these to FCFLAGS in the format -I/path_to_include_dir.
(d) Specification of linker flags:
LDFLAGS = $(FCFLAGS)
(e) Specification of libraries to link to:
Required (LIBINT):
-L/home/z01/z01/UEABS/CP2K/libint/1.1.4/lib -lderiv -lint
If you use MKL to provide ScaLAPACK and/or an FFTW interface, the LIBS variable should be used
to pass the relevant flags provided by the MKL Link Line Advisor (https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor),
which you should use carefully in order to generate the right options for your system
(an illustrative LIBS line is sketched after this list).
(f) AR = ar -r
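For illustration only, a LIBS line for MKL-provided ScaLAPACK when building with GNU
compilers and Intel MPI might look like the sketch below; the exact library list depends on
your compiler, MPI library and threading model, so generate and verify it with the Link
Line Advisor:
LIBS = -L${MKLROOT}/lib/intel64 -lmkl_scalapack_lp64 -lmkl_gf_lp64 \
       -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 \
       -lpthread -lm -ldl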
As an example, a simple arch file is shown below for ARCHER (http://www.archer.ac.uk), a Cray system
that uses compiler wrappers cc and ftn to compile C and Fortran code respectively, and which has
LIBINT installed in /home/z01/z01/user/cp2k/libs/libint/1.1.4. On Cray systems the compiler wrappers
automatically link in Cray's LibSci library which provides ScaLAPACK, hence there is no need for
explicit specification of the library location and library names in LIBS or relevant include paths
in FCFLAGS. This would not be the case if MKL was used instead.
#=============================================================================================
#
# Ensure the following environment modules are loaded before starting the build:
#
# PrgEnv-gnu
# cray-libsci
#
CC = cc
CPP =
FC = ftn
LD = ftn
AR = ar -r
DFLAGS = -D__parallel \
-D__SCALAPACK \
-D__LIBINT \
-D__HAS_NO_SHARED_GLIBC
CFLAGS = $(DFLAGS)
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp
FCFLAGS += -O3 -ffast-math -funroll-loops
LDFLAGS = $(FCFLAGS)
LIBS = -L/home/z01/z01/user/cp2k/libint/1.1.4/lib -lderiv -lint
#=============================================================================================
==== 3.2 Compile ====
Change directory to cp2k-4.1/makefiles
There is no configure stage. If the arch file for your machine is called
SomeArchitecture_SomeCompiler.psmp, then issue the following command to compile:
make ARCH=SomeArchitecture_SomeCompiler VERSION=psmp
or, if you are able to run in parallel with N threads:
make -j N ARCH=SomeArchitecture_SomeCompiler VERSION=psmp
There is also no "make install" stage. If everything goes fine, you'll find the executable cp2k.psmp
in the directory cp2k-4.1/exe/SomeArchitecture_SomeCompiler
Run instructions for CP2K.
2013-08-13 : ntell@iasa.gr
2016-10-26
After building the hybrid MPI+OpenMP version of CP2K you have an executable
called cp2k.psmp. The benchmark input file is H2O-1024.inp for Tier-1 systems
and input_bulk_HFX_3.inp for Tier-0 systems.
The Tier-0 case requires a converged wavefunction file, which can be obtained
by running with any number of cores (1024-2048 cores are suggested), using the
general run command described below:
parallel_launcher launcher_options path_to_cp2k.psmp -i input_bulk_B88_3.inp -o input_bulk_B88_3.log
When this run finishes, move (mv) the saved restart file LiH_bulk_3-RESTART.wfn
to B88.wfn.
The general way to run the benchmarks is:
export OMP_NUM_THREADS=##
parallel_launcher launcher_options path_to_cp2k.psmp -i inputfile -o logfile
Where:
o The parallel_launcher is mpirun, mpiexec, or some variant such as aprun on
Cray systems or srun when using Slurm.
o The launcher_options include the parallel placement in terms of total numbers
of nodes, MPI ranks/tasks, tasks per node, and OpenMP threads per task (which
should be equal to the value given to OMP_NUM_THREADS).
The launcher and its options depend on the system and batch scheduler. A few
common launcher/options pairs are:
CRAY : aprun -n TASKS -N TASKSPERNODE -d THREADSPERTASK
Curie : ccc_mrun with no options - obtained from the batch system
Juqueen : runjob --np TASKS --ranks-per-node TASKSPERNODE --exp-env OMP_NUM_THREADS
Slurm : srun with no options, obtained from Slurm if the variables below are set:
#SBATCH --nodes=NODES
#SBATCH --ntasks-per-node=TASKSPERNODE
#SBATCH --cpus-per-task=THREADSPERTASK
You can try any combination of tasks per node and OpenMP threads per task to
investigate absolute performance and scaling on the machine of interest.
For tier-1 systems the best performance is usually obtained with pure MPI, while
for tier-0 systems the best performance is typically obtained using 1 MPI task
per node with the number of threads being equal to the number of cores per node.
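For illustration, a minimal Slurm batch script for a hybrid run might look like the sketch
below; the node counts, input file and executable path are placeholders to be adapted to
your system:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=6
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}
srun /path/to/cp2k.psmp -i H2O-1024.inp -o H2O-1024.log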
More information in the form of a README and an example job script is included
in each benchmark tar file.
The run walltime is reported near the end of logfile:
grep "CP2K " logfile | awk -F ' ' '{print $7}'
Quantum Espresso in the Unified European Application Benchmark Suite (UEABS)
Document Author: A. Emerson (a.emerson@cineca.it), Cineca.
Last update: 29 May 2017
Introduction
Quantum ESPRESSO (usually abbreviated as QE) is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials.
Software Requirements
Essential
* Quantum ESPRESSO 6.0. At the time of writing a later version (6.1) is available but the release notes report bug fixes and new functionality, rather than performance enhancements.
* A FORTRAN compiler
Optional
* A parallel linear algebra library such as Scalapack or Intel MKL. If none is available on your system then the installation can use a version supplied with the distribution.
Downloading the software
QE distribution
Many packages are available from the download page, but since you need only the main base package for the benchmark suite, the qe-6.0.tar.gz file will be sufficient. This can be downloaded from:
http://qe-forge.org/gf/download/frsrelease/224/1044/qe-6.0.tar.gz
Compiling the application
The QE documentation gives more details but for the benchmark suite this general procedure is followed.
1. Uncompress the main QE distribution:
tar zxvf qe-6.0.tar.gz
cd qe-6.0
2. Run configure and make:
./configure --enable-parallel --enable-openmp --with-scalapack=intel
make pw
In the above, it is assumed that an Intel compiler is used for compilation so that the program can be linked with the MKL library. In the UEABS the pw.x program has been selected for the benchmarks so this is the one selected in the make.
Alternatively, you can do a
make all
which will compile all the programs available in the QE package.
A link to the QE executable will appear in the directory bin/ and is called pw.x.
Examples
We now give a run example for one of the PRACE architectures.
Marconi, KNL-partition, Cineca.
The latest version is available as a module within the Marconi KNL environment, which can be loaded with the following commands:
module load env-knl
module load profile/knl
module load autoload qe/6.0_knl
A typical batch job for QE on the KNLs is as follows:
#!/bin/bash
#PBS -l walltime=06:00:00
#PBS -l select=2:mpiprocs=34:ncpus=68
#PBS -A <your account_no>
#PBS -N jobname
module purge
module load profile/knl
module load autoload qe/6.0_knl
cd ${PBS_O_WORKDIR}
export OMP_NUM_THREADS=4
export MKL_NUM_THREADS=${OMP_NUM_THREADS}
mpirun pw.x -npool 4 -input file.in > file.out
This job will run on 2 KNL nodes, with 34 MPI processes per node and 4 OpenMP threads per task. Note that on Marconi the only memory configuration available is cache/quadrant.
The Quantum Espresso package can be freely downloaded from the following URL:
http://www.quantum-espresso.org/download/
13/08/2013
Running The Quantum Espresso Test Cases
---------------------------------------
1. Unpack the tar file containing the input files (command file .in and
pseudopotentials .UPF) in the directory where you want to run the program. For example,
tar zxvf QuantumEspresso_TestCaseA.tar.gz
2. Find the command file cp.in (test case A) or pw.in (test case B) and check
that the variable pseudo_dir is set to the location of the UPF files, for
example the current directory
pseudo_dir = './'
3. Create a batch file and include in the file the command to launch MPI jobs
(this is system dependent). The benchmark data have been collected varying the
number of MPI tasks only so if the Quantum Espresso version has been compiled
with OpenMP support you should set the number of OpenMP threads to 1. Most
batch scripts will thus contain lines such as:
export OMP_NUM_THREADS=1
mpirun path-to-executable/pw.x < pw.in (or path-to-executable/cp.x < cp.in for test case A)
but check your local documentation.
4. The output including timing information will be sent to standard output.
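For example, the final timing line printed by pw.x (total CPU and wall time) can usually be
extracted with a command such as the following, assuming the output was redirected to
file.out:
grep "PWSCF" file.out | tail -1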
Cineca 13/08/2013