Commit 5b328137 authored by Valeriu Codreanu

Merge branch 'r2.1-dev' into 'master'

R2.1 dev

See merge request UEABS/ueabs!3
parents 74c3b0e4 070f6323
# Unified European Applications Benchmark Suite
The Unified European Application Benchmark Suite (UEABS) is a set of currently 13 application codes taken from the pre-existing PRACE and DEISA application benchmark suites, and extended with the PRACE Accelerator Benchmark Suite. The objective is to provide a single benchmark suite of scalable, currently relevant, and publicly available application codes and datasets, of a size that can realistically be run on large systems, and that can be maintained into the future.
The UEABS activity was started during the PRACE-PP project and was publicly released by the PRACE-2IP project.
The PRACE "Accelerator Benchmark Suite" was a PRACE-4IP activity.
@@ -13,7 +13,7 @@ Contacts: Valeriu Codreanu <mailto:valeriu.codreanu@surfsara.nl> or Walter Lioen
Current Release
---------------
The current release is Version 2.1 (April 30, 2019).
See also the [release notes and history](RELEASES.md).
Running the suite
@@ -21,7 +21,7 @@ Running the suite
Instructions to run each test case of each code can be found in the subdirectories of this repository.
For more details of the codes and datasets, and sample results, please see the PRACE-5IP benchmarking deliverable D7.5 "Evaluation of Accelerated and Non-accelerated Benchmarks" (April 18, 2019) at http://www.prace-ri.eu/public-deliverables/ .
The application codes that constitute the UEABS are:
---------------------------------------------------
@@ -30,15 +30,14 @@ The application codes that constitute the UEABS are:
- [Code_Saturne](#saturne)
- [CP2K](#cp2k)
- [GADGET](#gadget)
- [GPAW](#gpaw)
- [GROMACS](#gromacs)
- [NAMD](#namd)
- [NEMO](#nemo)
- [PFARM](#pfarm)
- [QCD](#qcd)
- [Quantum Espresso](#espresso)
- [SHOC](#shoc)
- [SPECFEM3D](#specfem3d)
# ALYA <a name="alya"></a>
@@ -46,10 +45,10 @@ The application codes that constitute the UEABS are:
The Alya System is a Computational Mechanics code capable of solving different physics, each one with its own modelization characteristics, in a coupled way. Among the problems it solves are: convection-diffusion reactions, incompressible flows, compressible flows, turbulence, bi-phasic flows and free surface, excitable media, acoustics, thermal flow, quantum mechanics (DFT) and solid mechanics (large strain). ALYA is written in Fortran 90/95 and parallelized using MPI and OpenMP.
- Web site: https://www.bsc.es/computer-applications/alya-system
- Code download: https://repository.prace-ri.eu/ueabs/ALYA/1.1/alya3226.tar.gz
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/alya/ALYA_Build_README.txt
- Test Case A: https://repository.prace-ri.eu/ueabs/ALYA/1.3/ALYA_TestCaseA.tar.bz2
- Test Case B: https://repository.prace-ri.eu/ueabs/ALYA/1.3/ALYA_TestCaseB.tar.bz2
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/alya/ALYA_Run_README.txt
# Code_Saturne <a name="saturne"></a>
@@ -59,53 +58,38 @@ Code_Saturne&#174; is a multipurpose Computational Fluid Dynamics (CFD) software
Code_Saturne&#174; is based on a co-located finite volume approach that can handle three-dimensional meshes built with any type of cell (tetrahedral, hexahedral, prismatic, pyramidal, polyhedral) and with any type of grid structure (unstructured, block structured, hybrid). The code is able to simulate either incompressible or compressible flows, with or without heat transfer, and has a variety of models to account for turbulence. Dedicated modules are available for specific physics such as radiative heat transfer, combustion (e.g. with gas, coal and heavy fuel oil), magneto-hydrodynamics, compressible flows, and two-phase flows. The software comprises around 350,000 lines of source code, with about 37% written in Fortran 90, 50% in C and 15% in Python. The code is parallelised using MPI with some OpenMP.
- Web site: http://code-saturne.org
- Code download: http://code-saturne.org/cms/download or https://repository.prace-ri.eu/ueabs/Code_Saturne/1.3/Code_Saturne-4.0.6_UEABS.tar.gz
- Disclaimer: please note that by downloading the code from this website, you agree to be bound by the terms of the GPL license.
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/code_saturne/Code_Saturne_Build_Run_4.0.6.pdf
- Test Case A: https://repository.prace-ri.eu/ueabs/Code_Saturne/1.3/Code_Saturne_TestCaseA.tar.gz
- Test Case B: https://repository.prace-ri.eu/ueabs/Code_Saturne/1.3/Code_Saturne_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/code_saturne/Code_Saturne_Build_Run_4.0.6.pdf
# CP2K <a name="cp2k"></a>
CP2K is a freely available quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid-state, liquid, molecular, periodic, material, crystal, and biological systems. CP2K provides a general framework for different modelling methods such as DFT using the mixed Gaussian and plane waves approaches GPW and GAPW. Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, ...), and classical force fields (AMBER, CHARMM, ...). CP2K can perform simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimisation, and transition state optimisation using the NEB or dimer method.
CP2K provides state-of-the-art methods for efficient and accurate atomistic simulations; its sources are freely available and actively improved. It has an active international development team, with the unofficial headquarters at the University of Zürich.
CP2K is written in Fortran 2008 and can be run in parallel using a combination of multi-threading, MPI, and CUDA. All of CP2K is MPI-parallelised, with some additional loops also being OpenMP-parallelised. It is therefore most important to take advantage of the MPI parallelisation; however, running one MPI rank per CPU core often leads to memory shortage. At that point OpenMP threads can be used to utilise all CPU cores without suffering an overly large memory footprint. The optimal ratio between MPI ranks and OpenMP threads depends on the type of simulation and the system in question. CP2K supports CUDA, allowing it to offload some linear algebra operations, including sparse matrix multiplications, to the GPU through its DBCSR acceleration layer. FFTs can optionally also be offloaded to the GPU. GPU offloading may yield improved performance depending on the type of simulation and the system in question.
- Web site: https://www.cp2k.org/
- Code download: https://github.com/cp2k/cp2k/releases
- [Build & run instructions, details about benchmarks](./cp2k/README.md)
- Benchmarks:
- [Test Case A](./cp2k/benchmarks/TestCaseA_H2O-512)
- [Test Case B](./cp2k/benchmarks/TestCaseB_LiH-HFX)
- [Test Case C](./cp2k/benchmarks/TestCaseC_H2O-DFT-LS)
# GADGET <a name="gadget"></a>
GADGET is a freely available code for cosmological N-body/SPH simulations on massively parallel computers with distributed memory, written by Volker Springel, Max Planck Institute for Astrophysics, Garching, Germany. GADGET is written in C and uses an explicit communication model that is implemented with the standardized MPI communication interface. The code can be run on essentially all supercomputer systems presently in use, including clusters of workstations or individual PCs. GADGET computes gravitational forces with a hierarchical tree algorithm (optionally in combination with a particle-mesh scheme for long-range gravitational forces) and represents fluids by means of smoothed particle hydrodynamics (SPH). The code can be used for studies of isolated systems, or for simulations that include the cosmological expansion of space, either with or without periodic boundary conditions. In all these types of simulations, GADGET follows the evolution of a self-gravitating collisionless N-body system, and allows gas dynamics to be optionally included. Both the force computation and the time stepping of GADGET are fully adaptive, with a dynamic range that is, in principle, unlimited. GADGET can therefore be used to address a wide array of interesting astrophysical problems, ranging from colliding and merging galaxies, to the formation of large-scale structure in the Universe. With the inclusion of additional physical processes such as radiative cooling and heating, GADGET can also be used to study the dynamics of the gaseous intergalactic medium, or to address star formation and its regulation by feedback processes.
- Web site: http://www.mpa-garching.mpg.de/gadget/
- Code download: https://repository.prace-ri.eu/ueabs/GADGET/gadget3_Source.tar.gz
- Disclaimer: please note that by downloading the code from this website, you agree to be bound by the terms of the GPL license.
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/gadget/gadget3_Build_README.txt
- Test Case A: https://repository.prace-ri.eu/ueabs/GADGET/gadget3_TestCaseA.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/gadget/gadget3_Run_README.txt
# GPAW <a name="gpaw"></a>
GPAW is an efficient program package for electronic structure calculations based on the density functional theory (DFT) and the time-dependent density functional theory (TD-DFT). The density-functional theory allows studies of ground state properties such as energetics and equilibrium geometries, while the time-dependent density functional theory can be used for calculating excited state properties such as optical spectra. The program package includes two complementary implementations of time-dependent density functional theory: a linear response formalism and a time-propagation in real time.
@@ -118,9 +102,13 @@ The program offers several parallelization levels. The most basic parallelizatio
- Web site: https://wiki.fysik.dtu.dk/gpaw/
- Code download: https://gitlab.com/gpaw/gpaw
- Build instructions: [gpaw/README.md#install](gpaw/README.md#install)
- Benchmarks:
- [Case S: Carbon nanotube](gpaw/benchmark/carbon-nanotube)
- [Case M: Copper filament](gpaw/benchmark/copper-filament)
- [Case L: Silicon cluster](gpaw/benchmark/silicon-cluster)
- Run instructions:
[gpaw/README.md#running-the-benchmarks](gpaw/README.md#running-the-benchmarks)
# GROMACS <a name="gromacs"></a>
@@ -149,8 +137,8 @@ Instructions:
- Web site: http://www.gromacs.org/
- Code download: http://www.gromacs.org/Downloads. The UEABS benchmark cases require the 5.1.x branch or newer; the latest 2016 version is suggested.
- Test Case A: https://repository.prace-ri.eu/ueabs/GROMACS/1.2/GROMACS_TestCaseA.tar.gz
- Test Case B: https://repository.prace-ri.eu/ueabs/GROMACS/1.2/GROMACS_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/gromacs/GROMACS_Run_README.txt
@@ -168,30 +156,82 @@ NAMD is written in C++ and parallelised using Charm++ parallel objects, which ar
- Web site: http://www.ks.uiuc.edu/Research/namd/
- Code download: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/namd/NAMD_Download_README.txt
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/namd/NAMD_Build_README.txt
- Test Case A: https://repository.prace-ri.eu/ueabs/NAMD/1.2/NAMD_TestCaseA.tar.gz
- Test Case B: https://repository.prace-ri.eu/ueabs/NAMD/1.2/NAMD_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/namd/NAMD_Run_README.txt
# NEMO <a name="nemo"></a>
NEMO (Nucleus for European Modelling of the Ocean) [22] is a mathematical modelling framework for research activities and prediction services in ocean and climate sciences, developed by a European consortium. It is intended as a tool for studying the ocean and its interaction with the other components of the Earth's climate system over a large range of space and time scales. It comprises the core engines OPA (ocean dynamics and thermodynamics), SI3 (sea-ice dynamics and thermodynamics), TOP (oceanic tracers) and PISCES (biogeochemical processes).
Prognostic variables in NEMO are the three-dimensional velocity field, a linear or non-linear sea surface height, the temperature and the salinity.
In the horizontal direction, the model uses a curvilinear orthogonal grid and, in the vertical direction, a full or partial step z-coordinate, an s-coordinate, or a mixture of the two. The variables are distributed on a three-dimensional Arakawa C-type grid in most cases.
The model is implemented in Fortran 90, with preprocessing (C preprocessor). It is optimized for vector computers and parallelized by domain decomposition with MPI. It supports modern C/C++ and Fortran compilers. All input and output is done with the third-party software XIOS, which depends on NetCDF (Network Common Data Format) and HDF5. It is highly scalable and well suited to measuring supercomputer performance in terms of compute capacity, memory subsystem, I/O and interconnect performance.
### Test Case Description
The GYRE configuration has been built to model the seasonal cycle of a double-gyre box model. It consists of an idealized domain over which seasonal forcing is applied. This allows a large number of interactions, and their combined contribution to the large-scale circulation, to be studied.
The domain geometry is rectangular, bounded by vertical walls and a flat bottom. The configuration is meant to represent an idealized North Atlantic or North Pacific basin. The circulation is forced by analytical profiles of wind and buoyancy fluxes.
The wind stress is zonal and its curl changes sign at 22°N and 36°N. It forces a subpolar gyre in the north, a subtropical gyre in the wider part of the domain, and a small recirculation gyre in the southern corner. The net heat flux takes the form of a restoring toward a zonal apparent air-temperature profile.
The portion of the net heat flux that comes from solar radiation is allowed to penetrate into the water column. The freshwater flux is also prescribed and varies zonally. It is chosen such that, at each time step, the basin-integrated flux is zero.
The basin is initialized at rest, with vertical profiles of temperature and salinity applied uniformly to the whole domain. The GYRE configuration is set through the namelist_cfg file.
The horizontal resolution is determined by setting jp_cfg as follows:
`jpiglo = 30 x jp_cfg + 2`
`jpjglo = 20 x jp_cfg + 2`
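For example, Test Case A below uses jp_cfg = 128, which gives jpiglo = 30 x 128 + 2 = 3842 and jpjglo = 20 x 128 + 2 = 2562 horizontal grid points.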
In this configuration, we use the default value of 30 ocean levels (jpk=31). The GYRE configuration is an ideal benchmark case, as it is very simple to increase the resolution and perform both weak and strong scalability experiments using the same input files. We use the following two configurations:
**Test Case A**:
* jp_cfg = 128 suitable up to 1000 cores
* Number of Days: 20
* Number of Time steps: 1440
* Time step size: 20 mins
* Number of seconds per time step: 1200
**Test Case B**
* jp_cfg = 256 suitable up to 20,000 cores.
* Number of Days (real): 80
* Number of time steps: 4320
* Time step size (real): 20 mins
* Number of seconds per time step: 1200
* Web site: <http://www.nemo-ocean.eu/>
* Download, Build and Run Instructions : <https://repository.prace-ri.eu/git/UEABS/ueabs/tree/master/nemo>
# PFARM <a name="pfarm"></a>
PFARM is part of a suite of programs based on the ‘R-matrix’ ab-initio approach to the variational solution of the many-electron Schrödinger
equation for electron-atom and electron-ion scattering. The package has been used to calculate electron collision data for astrophysical
applications (such as: the interstellar medium, planetary atmospheres) with, for example, various ions of Fe and Ni and neutral O, plus
other applications such as data for plasma modelling and fusion reactor impurities. The code has recently been adapted to form a compatible
interface with the UKRmol suite of codes for electron (positron) molecule collisions thus enabling large-scale parallel ‘outer-region’
calculations for molecular systems as well as atomic systems.
The PFARM outer-region application code EXDIG is dominated by the assembly of sector Hamiltonian matrices and their subsequent eigensolutions.
The code is written in Fortran 2003 (or Fortran 2003-compliant Fortran 95), is parallelised using MPI and OpenMP and is designed to take
advantage of highly optimised numerical library routines. Hybrid MPI/OpenMP parallelisation has also been introduced into the code via
shared memory enabled numerical library kernels.
Accelerator-based versions of EXDIG have been implemented, using off-loading (MKL or CuBLAS/CuSolver) for the standard (dense) eigensolver calculations that dominate the overall run-time.
- CPU Code download: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r2.1-dev/pfarm/RMX_MAGMA_CPU_mol.tar.gz
- GPU Code download: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r2.1-dev/pfarm/RMX_MAGMA_GPU_mol.tar.gz
- Build & Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r2.1-dev/pfarm/PFARM_Build_Run_README.txt
- Test Case A: https://repository.prace-ri.eu/UEABS/ueabs/blob/r2.1-dev/pfarm/PFARM_TestCaseA.tar.bz2
- Test Case B: https://repository.prace-ri.eu/UEABS/ueabs/blob/r2.1-dev/pfarm/PFARM_TestCaseB.tar.bz2
# QCD <a name="qcd"></a>
@@ -237,7 +277,7 @@ Wilson fermions. The default lattice size is 16x16x16x16 for the
small test case and 64x64x64x32 for the medium test case.
- Code download: https://repository.prace-ri.eu/ueabs/QCD/1.3/QCD_Source_TestCaseA.tar.gz
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/qcd/QCD_Build_README.txt
- Test Case A: included with source download
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/qcd/QCD_Run_README.txt
@@ -252,10 +292,24 @@ QUANTUM ESPRESSO is written mostly in Fortran90, and parallelised using MPI and
- Web site: http://www.quantum-espresso.org/
- Code download: http://www.quantum-espresso.org/download/
- Build instructions: http://www.quantum-espresso.org/wp-content/uploads/Doc/user_guide/
- Test Case A: https://repository.prace-ri.eu/ueabs/Quantum_Espresso/QuantumEspresso_TestCaseA.tar.gz
- Test Case B: https://repository.prace-ri.eu/ueabs/Quantum_Espresso/QuantumEspresso_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/quantum_espresso/QE-guide.txt
# SHOC <a name="shoc"></a>
The Scalable HeterOgeneous Computing (SHOC) benchmark suite is a collection of benchmark programs testing the performance and stability of systems using computing devices with non-traditional architectures
for general purpose computing. It serves as a synthetic benchmark suite in the UEABS context. Its initial focus is on systems containing Graphics Processing Units (GPUs) and multi-core processors, featuring implementations using both CUDA and OpenCL. It can be used on clusters as well as individual hosts.
Also, SHOC includes an Offload branch for the benchmarks that can be used to evaluate the Intel Xeon Phi x100 family.
The SHOC benchmark suite currently contains benchmark programs categorized by complexity. Some measure low-level "feeds and speeds" behavior (Level 0), some measure the performance of a higher-level operation such as a Fast Fourier Transform (FFT) (Level 1), and the others measure real application kernels (Level 2).
- Web site: https://github.com/vetter/shoc
- Code download: https://github.com/vetter/shoc/archive/master.zip
- Build instructions: https://repository.prace-ri.eu/git/ueabs/ueabs/blob/r2.1-dev/shoc/README_ACC.md
- Run instructions: https://repository.prace-ri.eu/git/ueabs/ueabs/blob/r2.1-dev/shoc/README_ACC.md
# SPECFEM3D <a name="specfem3d"></a>
The software package SPECFEM3D simulates three-dimensional global and regional seismic wave propagation based upon the spectral-element method (SEM). All SPECFEM3D_GLOBE software is written in Fortran90 with full portability in mind, and conforms strictly to the Fortran95 standard. It uses no obsolete or obsolescent features of Fortran77. The package uses parallel programming based upon the Message Passing Interface (MPI).
@@ -267,9 +321,9 @@ In many geological models in the context of seismic wave propagation studies (ex
- Web site: http://geodynamics.org/cig/software/specfem3d_globe/
- Code download: http://geodynamics.org/cig/software/specfem3d_globe/
- Build instructions: http://www.geodynamics.org/wsvn/cig/seismo/3D/SPECFEM3D_GLOBE/trunk/doc/USER_MANUAL/manual_SPECFEM3D_GLOBE.pdf?op=file&rev=0&sc=0
- Test Case A: https://repository.prace-ri.eu/git/UEABS/ueabs/tree/r2.1-dev/specfem3d/test_cases/SPECFEM3D_TestCaseA
- Test Case B: https://repository.prace-ri.eu/git/UEABS/ueabs/tree/r2.1-dev/specfem3d/test_cases/SPECFEM3D_TestCaseB
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r2.1-dev/specfem3d/README.md
# UEABS Releases
## Version 2.1 (PRACE-5IP, April 30, 2019)
* Updated the benchmark suite to the status as used for the PRACE-5IP benchmarking deliverable D7.5 "Evaluation of Accelerated and Non-accelerated Benchmarks" (April 18, 2019)
* Removed GENE
## Version 2.0 (PRACE-5IP MS31, May 31, 2018)
* Reconstructed this versioned git repository from a "flat" web site representation
@@ -39,4 +44,4 @@ This started as a fork / subset of UEABS Version 1.2:
* GPAW: new dataset with reduced runtime.
## Version 1.0 (PRACE-2IP, October 31, 2013)
* Initial Release
Build instructions for CP2K.
2016-10-26
===== 1. Get the code =====
Download a CP2K release from https://sourceforge.net/projects/cp2k/files/ or follow instructions at
https://www.cp2k.org/download to check out the relevant branch of the CP2K SVN repository. These
build instructions and the accompanying benchmark run instructions have been tested with release
4.1.
===== 2. Prerequisites & Libraries =====
GNU make and Python 2.x are required for the build process, as are a Fortran 2003 compiler and
matching C compiler, e.g. gcc/gfortran (gcc >= 4.6 works; a later version is recommended).
CP2K can benefit from a number of external libraries for improved performance. It is advised to use
vendor-optimized versions of these libraries. If these are not available on your machine, there
exist freely available implementations of these libraries including but not limited to those listed
below.
Required:
---------
The minimum set of libraries required to build a CP2K executable that will run the UEABS benchmarks
is:
1. LAPACK & BLAS, as provided by, for example:
netlib : http://netlib.org/lapack & http://netlib.org/blas
MKL : part of your Intel MKL installation, if available
LibSci : installed on Cray platforms
ATLAS : http://math-atlas.sf.net
OpenBLAS : http://www.openblas.net
clBLAS : http://gpuopen.com/compute-product/clblas/
2. SCALAPACK & BLACS, as provided by, for example:
netlib : http://netlib.org/scalapack/
MKL : part of your Intel MKL installation, if available
LibSci : installed on Cray platforms
3. LIBINT, available from https://www.cp2k.org/static/downloads/libint-1.1.4.tar.gz
(see build instructions in section 2.1 below)
Optional:
---------
The following libraries are optional but give a significant performance benefit:
4. FFTW3, available from http://www.fftw.org or provided as an interface by MKL
5. ELPA, available from https://www.cp2k.org/static/downloads/elpa-2016.05.003.tar.gz
6. libgrid, available from inside the distribution at cp2k/tools/autotune_grid
7. libxsmm, available from https://www.cp2k.org/static/downloads/libxsmm-1.4.4.tar.gz
More information can be found in the INSTALL file in the CP2K distribution.
2.1 Building LIBINT
-------------------
The following commands will uncompress and install the LIBINT library required for the UEABS
benchmarks:
tar xzf libint-1.1.4.tar.gz
cd libint-1.1.4
./configure CC=cc CXX=CC --prefix=some_path_other_than_this_directory
make
make install
The environment variables CC and CXX are optional and can be used to specify the C and C++ compilers
to use for the build (the example above is configured to use the compiler wrappers cc and CC used on
Cray systems). By default the build process only creates static libraries (ending in .a). If you
want to be able to link dynamically to LIBINT when building CP2K you can pass the flag
--enable-shared to ./configure in order to produce shared libraries (ending in .so). In that case
you will need to ensure that the library is located in a place that is accessible at runtime and that
the LD_LIBRARY_PATH environment variable includes the LIBINT installation directory.
For more build options see ./configure --help.
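For illustration only, a shared-library build and installation might look like the following (the installation prefix is a placeholder to be adapted to your own system):

./configure CC=cc CXX=CC --prefix=$HOME/libs/libint/1.1.4 --enable-shared
make
make install
export LD_LIBRARY_PATH=$HOME/libs/libint/1.1.4/lib:$LD_LIBRARY_PATH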
===== 3. Building CP2K =====
If you have downloaded a tarball of the release, uncompress the file by running
tar xf cp2k-4.1.tar.bz2.
If necessary you can find additional information about building CP2K in the INSTALL file located in
the root directory of the CP2K distribution.
==== 3.1 Create or choose an arch file ====
Before compiling, the choice of compilers, the library locations, and the compilation and linker flags
need to be specified. This is done in an arch (architecture) file. Example arch files for a number
of common architecture examples can be found inside cp2k/arch. The names of these files match the
pattern architecture.version (e.g., Linux-x86-64-gfortran.sopt). The case "version=psmp" corresponds
to the hybrid MPI + OpenMP version that you should build to run the UEABS benchmarks.
In most cases you need to create a custom arch file, either from scratch or by modifying an existing
one that roughly fits the cpu type, compiler, and installation paths of libraries on your
system. You can also consult https://dashboard.cp2k.org, which provides sample arch files as part of
the testing reports for some platforms (click on the status field for a platform, and search for
'ARCH-file' in the resulting output).
As a guided example, the following should be included in your arch file if you are compiling with
GNU compilers:
(a) Specification of which compiler and linker commands to use:
CC = gcc
FC = mpif90
LD = mpif90
CP2K is primarily a Fortran code, so only the Fortran compiler needs to be MPI-enabled.
(b) Specification of the DFLAGS variable, which should include:
-D__parallel (to build parallel CP2K executable)
-D__SCALAPACK (to link to ScaLAPACK)
-D__LIBINT (to link to LIBINT)
-D__MKL (if relying on MKL to provide ScaLAPACK and/or an FFTW interface)
-D__HAS_NO_SHARED_GLIBC (for convenience on HPC systems, see INSTALL file)
Additional DFLAGS needed to link to performance libraries, such as -D__FFTW3 to link to FFTW3,
are listed in the INSTALL file.
(c) Specification of compiler flags:
Required (for gfortran):
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp
Recommended additional flags (for gfortran):
FCFLAGS += -O3 -ffast-math -funroll-loops
If you want to link any libraries containing header files you should pass the path to the
directory containing these to FCFLAGS in the format -I/path_to_include_dir.
(d) Specification of linker flags:
LDFLAGS = $(FCFLAGS)
(e) Specification of libraries to link to:
Required (LIBINT):
-L/home/z01/z01/UEABS/CP2K/libint/1.1.4/lib -lderiv -lint
If you use MKL to provide ScaLAPACK and/or an FFTW interface the LIBS variable should be used
to pass the relevant flags provided by the MKL Link Line Advisor (https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor),
which you should use carefully in order to generate the right options for your system.
(f) AR = ar -r
As an example, a simple arch file is shown below for ARCHER (http://www.archer.ac.uk), a Cray system
that uses compiler wrappers cc and ftn to compile C and Fortran code respectively, and which has
LIBINT installed in /home/z01/z01/user/cp2k/libs/libint/1.1.4. On Cray systems the compiler wrappers
automatically link in Cray's LibSci library which provides ScaLAPACK, hence there is no need for
explicit specification of the library location and library names in LIBS or relevant include paths
in FCFLAGS. This would not be the case if MKL was used instead.
#=============================================================================================
#
# Ensure the following environment modules are loaded before starting the build:
#
# PrgEnv-gnu
# cray-libsci
#
CC = cc
CPP =
FC = ftn
LD = ftn
AR = ar -r
DFLAGS = -D__parallel \
-D__SCALAPACK \
-D__LIBINT \
-D__HAS_NO_SHARED_GLIBC
CFLAGS = $(DFLAGS)
FCFLAGS = $(DFLAGS) -ffree-form -fopenmp
FCFLAGS += -O3 -ffast-math -funroll-loops
LDFLAGS = $(FCFLAGS)
LIBS = -L/home/z01/z01/user/cp2k/libint/1.1.4/lib -lderiv -lint
#=============================================================================================
==== 3.2 Compile ====
Change directory to cp2k-4.1/makefiles
There is no configure stage. If the arch file for your machine is called
SomeArchitecture_SomeCompiler.psmp, then issue the following command to compile:
make ARCH=SomeArchitecture_SomeCompiler VERSION=psmp
or, if you are able to run in parallel with N threads:
make -j N ARCH=SomeArchitecture_SomeCompiler VERSION=psmp
There is also no "make install" stage. If everything goes fine, you'll find the executable cp2k.psmp
in the directory cp2k-4.1/exe/SomeArchitecture_SomeCompiler
CP2K can be downloaded from: http://www.cp2k.org/download
It is free for all users under the GPL license;
see the Obtaining CP2K section on the download page.
In UEABS (PRACE-2IP) the 2.3 branch was used, which can be downloaded from:
http://sourceforge.net/projects/cp2k/files/cp2k-2.3.tar.bz2
Data files are compatible with at least the 2.4 branch.
The Tier-0 data set requires the libint-1.1.4 library. If libint version 1
is not available on your machine, it can be downloaded from:
http://sourceforge.net/projects/libint/files/v1-releases/libint-1.1.4.tar.gz
Run instructions for CP2K.
2016-10-26
After building the hybrid MPI+OpenMP version of CP2K you have an executable
called cp2k.psmp. The general way to run the benchmarks is:
export OMP_NUM_THREADS=##
parallel_launcher launcher_options path_to_cp2k.psmp -i inputfile -o logfile
Where:
o The parallel_launcher is mpirun, mpiexec, or some variant such as aprun on
Cray systems or srun when using Slurm.
o The launcher_options include the parallel placement in terms of total numbers
of nodes, MPI ranks/tasks, tasks per node, and OpenMP threads per task (which
should be equal to the value given to OMP_NUM_THREADS)
You can try any combination of tasks per node and OpenMP threads per task to
investigate absolute performance and scaling on the machine of interest.
For tier-1 systems the best performance is usually obtained with pure MPI, while
for tier-0 systems the best performance is typically obtained using 1 MPI task
per node with the number of threads being equal to the number of cores per node.
More information in the form of a README and an example job script is included
in each benchmark tar file.
The run walltime is reported near the end of logfile:
grep "CP2K " logfile | awk -F ' ' '{print $7}'
#
# module load openmpi cuda blas lapack
#
#
CC = gcc
CPP =
FC = mpif90
LD = mpif90
AR = ar -r
CPPFLAGS =
DFLAGS = -D__FFTW3 \
-D__LIBINT \
-D__LIBXC \
-D__parallel \
-D__SCALAPACK
SCALAPACK_LIB = /davide/home/userexternal/$USER/scalapack/scalapack
LIBINT_HOME = /davide/home/userexternal/$USER/cp2k/libs/libint/1.1.4
LIBXC_HOME = /davide/home/userexternal/$USER/cp2k/libs/libxc/4.1.1
FFTW_HOME = /davide/home/userexternal/$USER/fftw/3.3.8
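# NOTE: LAPACK_LIB and BLAS_LIB are referenced in LIBS below but are not set in
# this file; they must point to the directories containing liblapack.a and
# libblas.a on the target system.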
FFTW_INC = $(FFTW_HOME)/include
FFTW_LIB = $(FFTW_HOME)/lib
FCFLAGS = $(DFLAGS) -O3 -ffast-math -ffree-form -funroll-loops -mcpu=power8\
-I$(FFTW_INC) -I$(LIBINT_HOME)/include -I$(LIBXC_HOME)/include
LDFLAGS = $(FCFLAGS) #-Wl,--start-group #-static
#LIBS = -L$(SCALAPACK_LIB) -lscalapack\
# -L$(LAPACK_LIB) -llapack\
# -L$(BLAS_LIB) -lblas\
# -L$(FFTW_LIB) -lfftw3\
# -L$(FFTW_LIB) -lfftw3_threads\
# -L$(LIBINT_HOME)/lib -lderiv\
# -L$(LIBINT_HOME)/lib -lint\
# -L$(LIBINT_HOME)/lib -lr12\
# -L$(LIBXC_HOME)/lib -lxcf03\
# -L$(LIBXC_HOME)/lib -lxc
LIBS = $(SCALAPACK_LIB)/libscalapack.a\
$(LAPACK_LIB)/liblapack.a\
$(BLAS_LIB)/libblas.a\
$(FFTW_LIB)/libfftw3.a\
$(FFTW_LIB)/libfftw3_threads.a\
$(LIBINT_HOME)/lib/libderiv.a\
$(LIBINT_HOME)/lib/libint.a\
$(LIBINT_HOME)/lib/libr12.a\
$(LIBXC_HOME)/lib/libxcf03.a\
$(LIBXC_HOME)/lib/libxc.a\
        -ldl
# -Wl,--end-group
```
module load gnu
./configure --prefix=/homeb/prcdeep/prdeep04/fftw/3.3.8 CFLAGS="-O3 -mcpu=power8"
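# Note (assumption): if the threaded FFTW library (libfftw3_threads.a) referenced
# in the arch file above is needed, add --enable-threads to the configure line.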
make
make install
```