The Unified European Application Benchmark Suite (UEABS) is a set of currently 13 application codes taken from the pre-existing PRACE and DEISA application benchmark suites, and extended with the PRACE Accelerator Benchmark Suite. The objective is providing a single benchmark suite of scalable, currently relevant and publicly available application codes and datasets, of a size which can realistically be run on large systems, and maintained into the future.
The UEABS activity was started during the PRACE-PP project and was publicly released by the PRACE-2IP project.
The PRACE "Accelerator Benchmark Suite" was a PRACE-4IP activity.
...
...
Contacts: Valeriu Codreanu <mailto:valeriu.codreanu@surfsara.nl> or Walter Lioen
Current Release
---------------
The current release is Version 2.1 (April 30, 2019).
See also the [release notes and history](RELEASES.md).
Running the suite
...
...
Instructions to run each test case of each code can be found in the subdirectories of this repository.
For more details of the codes and datasets, and sample results, please see the PRACE-5IP benchmarking deliverable D7.5 "Evaluation of Accelerated and Non-accelerated Benchmarks" (April 18, 2019) at http://www.prace-ri.eu/public-deliverables/ .
The application codes that constitute the UEABS are:
- [ALYA](#alya)
- [Code_Saturne](#saturne)
- [CP2K](#cp2k)
- [GADGET](#gadget)
- [GPAW](#gpaw)
- [GROMACS](#gromacs)
- [NAMD](#namd)
- [NEMO](#nemo)
- [PFARM](#pfarm)
- [QCD](#qcd)
- [Quantum Espresso](#espresso)
- [SHOC](#shoc)
- [SPECFEM3D](#specfem3d)
# ALYA <a name="alya"></a>
...
...
@@ -46,10 +45,10 @@ The application codes that constitute the UEABS are:
The Alya System is a Computational Mechanics code capable of solving different physics problems, each with its own modelling characteristics, in a coupled way. Among the problems it solves are: convection-diffusion reactions, incompressible flows, compressible flows, turbulence, bi-phasic flows and free surface, excitable media, acoustics, thermal flow, quantum mechanics (DFT) and solid mechanics (large strain). ALYA is written in Fortran 90/95 and parallelized using MPI and OpenMP.
- Web site: https://www.bsc.es/computer-applications/alya-system
- Test Case A: https://repository.prace-ri.eu/ueabs/ALYA/1.3/ALYA_TestCaseA.tar.bz2
- Test Case B: https://repository.prace-ri.eu/ueabs/ALYA/1.3/ALYA_TestCaseB.tar.bz2
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/alya/ALYA_Run_README.txt
# Code_Saturne <a name="saturne"></a>
...
...
Code_Saturne® is a multipurpose Computational Fluid Dynamics (CFD) software.
Code_Saturne® is based on a co-located finite volume approach that can handle three-dimensional meshes built with any type of cell (tetrahedral, hexahedral, prismatic, pyramidal, polyhedral) and with any type of grid structure (unstructured, block structured, hybrid). The code is able to simulate either incompressible or compressible flows, with or without heat transfer, and has a variety of models to account for turbulence. Dedicated modules are available for specific physics such as radiative heat transfer, combustion (e.g. with gas, coal and heavy fuel oil), magneto-hydrodynamics, compressible flows, and two-phase flows. The software comprises around 350,000 lines of source code, with about 37% written in Fortran90, 50% in C and 15% in Python. The code is parallelised using MPI with some OpenMP.
- Web site: http://code-saturne.org
- Code download: http://code-saturne.org/cms/download or https://repository.prace-ri.eu/ueabs/Code_Saturne/1.3/Code_Saturne-4.0.6_UEABS.tar.gz
- Disclaimer: please note that by downloading the code from this website, you agree to be bound by the terms of the GPL license.
- Test Case A: https://repository.prace-ri.eu/ueabs/Code_Saturne/1.3/Code_Saturne_TestCaseA.tar.gz
- Test Case B: https://repository.prace-ri.eu/ueabs/Code_Saturne/1.3/Code_Saturne_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/code_saturne/Code_Saturne_Build_Run_4.0.6.pdf
# CP2K <a name="cp2k"></a>
CP2K is a freely available quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. CP2K provides a general framework for different modelling methods such as DFT using the mixed Gaussian and plane waves approaches GPW and GAPW. Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, ...), and classical force fields (AMBER, CHARMM, ...). CP2K can do simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimisation, and transition state optimisation using NEB or dimer method.
CP2K provides state-of-the-art methods for efficient and accurate atomistic simulations; its sources are freely available and actively improved. It has an active international development team, with the unofficial headquarters at the University of Zürich.
CP2K is written in Fortran 2008 and can be run in parallel using a combination of multi-threading, MPI, and CUDA. All of CP2K is MPI parallelised, with some additional loops also being OpenMP parallelised. It is therefore most important to take advantage of MPI parallelisation; however, running one MPI rank per CPU core often leads to memory shortage, at which point OpenMP threads can be used to utilise all CPU cores without suffering an overly large memory footprint. The optimal ratio between MPI ranks and OpenMP threads depends on the type of simulation and the system in question. CP2K supports CUDA, allowing it to offload some linear algebra operations, including sparse matrix multiplications, to the GPU through its DBCSR acceleration layer. FFTs can optionally also be offloaded to the GPU. GPU offloading may improve performance, depending on the type of simulation and the system in question.
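As a rough illustration of the ranks-versus-threads trade-off, here is a minimal batch-script sketch for a hybrid run, assuming a Slurm system; the resource flags, the 32x4 split, and the input file name (borrowed from Test Case A) are placeholders to adapt to the target machine and simulation:

```bash
#!/bin/bash
# Hypothetical hybrid MPI+OpenMP launch of CP2K (cp2k.psmp is the
# conventional name of the MPI+OpenMP build). Here 128 cores are used
# as 32 MPI ranks x 4 OpenMP threads each, trading ranks for threads
# to reduce the per-rank memory footprint.
#SBATCH --ntasks=32          # MPI ranks
#SBATCH --cpus-per-task=4    # OpenMP threads per rank

export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-4}

srun cp2k.psmp -i H2O-512.inp -o H2O-512.out
```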
- [Build & run instructions, details about benchmarks](./cp2k/README.md)
- Benchmarks:
  - [Test Case A](./cp2k/benchmarks/TestCaseA_H2O-512)
  - [Test Case B](./cp2k/benchmarks/TestCaseB_LiH-HFX)
  - [Test Case C](./cp2k/benchmarks/TestCaseC_H2O-DFT-LS)
# GADGET <a name="gadget"></a>
GADGET is a freely available code for cosmological N-body/SPH simulations on massively parallel computers with distributed memory, written by Volker Springel, Max Planck Institute for Astrophysics, Garching, Germany. GADGET is written in C and uses an explicit communication model implemented with the standardized MPI communication interface. The code can be run on essentially all supercomputer systems presently in use, including clusters of workstations or individual PCs. GADGET computes gravitational forces with a hierarchical tree algorithm (optionally in combination with a particle-mesh scheme for long-range gravitational forces) and represents fluids by means of smoothed particle hydrodynamics (SPH). The code can be used for studies of isolated systems, or for simulations that include the cosmological expansion of space, either with, or without, periodic boundary conditions. In all these types of simulations, GADGET follows the evolution of a self-gravitating collisionless N-body system, and allows gas dynamics to be optionally included. Both the force computation and the time stepping of GADGET are fully adaptive, with a dynamic range that is, in principle, unlimited. GADGET can therefore be used to address a wide array of astrophysically interesting problems, ranging from colliding and merging galaxies to the formation of large-scale structure in the Universe. With the inclusion of additional physical processes such as radiative cooling and heating, GADGET can also be used to study the dynamics of the gaseous intergalactic medium, or to address star formation and its regulation by feedback processes.
- Web site: http://www.mpa-garching.mpg.de/gadget/
- Test Case A: https://repository.prace-ri.eu/ueabs/GADGET/gadget3_TestCaseA.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/gadget/gadget3_Run_README.txt
# GPAW <a name="gpaw"></a>
GPAW is an efficient program package for electronic structure calculations based on the density functional theory (DFT) and the time-dependent density functional theory (TD-DFT). The density-functional theory allows studies of ground state properties such as energetics and equilibrium geometries, while the time-dependent density functional theory can be used for calculating excited state properties such as optical spectra. The program package includes two complementary implementations of time-dependent density functional theory: a linear response formalism and a time-propagation in real time.
...
...
The program offers several parallelization levels.
- Code download: http://www.gromacs.org/Downloads . The UEABS benchmark cases require the 5.1.x or newer branch; the latest 2016 version is suggested.
- Test Case A: https://repository.prace-ri.eu/ueabs/GROMACS/1.2/GROMACS_TestCaseA.tar.gz
- Test Case B: https://repository.prace-ri.eu/ueabs/GROMACS/1.2/GROMACS_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/gromacs/GROMACS_Run_README.txt
...
...
NAMD is written in C++ and parallelised using Charm++ parallel objects.
- Test Case A: https://repository.prace-ri.eu/ueabs/NAMD/1.2/NAMD_TestCaseA.tar.gz
- Test Case B: https://repository.prace-ri.eu/ueabs/NAMD/1.2/NAMD_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/namd/NAMD_Run_README.txt
# NEMO <a name="nemo"></a>
NEMO (Nucleus for European Modelling of the Ocean) [22] is a mathematical modelling framework for research activities and prediction services in ocean and climate sciences, developed by a European consortium. It is intended to be a tool for studying the ocean and its interaction with the other components of the Earth's climate system over a large number of space and time scales. It comprises the core engines OPA (ocean dynamics and thermodynamics), SI3 (sea-ice dynamics and thermodynamics), TOP (oceanic tracers) and PISCES (biogeochemical processes).
Prognostic variables in NEMO are the three-dimensional velocity field, a linear or non-linear sea surface height, the temperature and the salinity.
In the horizontal direction, the model uses a curvilinear orthogonal grid and, in the vertical direction, a full or partial step z-coordinate, or s-coordinate, or a mixture of the two. The distribution of variables is a three-dimensional Arakawa C-type grid in most cases.
The model is implemented in Fortran 90, with preprocessing (C preprocessor). It is optimized for vector computers and parallelized by domain decomposition with MPI. It supports modern C/C++ and Fortran compilers. All input and output is handled by third-party software called XIOS, which depends on NetCDF (Network Common Data Format) and HDF5. NEMO is highly scalable and a perfect application for measuring supercomputing performance in terms of compute capacity, memory subsystem, I/O, and interconnect performance.
### Test Case Description
The GYRE configuration has been built to model the seasonal cycle of a double-gyre box model. It consists of an idealized domain over which seasonal forcing is applied. This allows for studying a large number of interactions and their combined contribution to the large-scale circulation.
The domain geometry is a rectangular basin bounded by vertical walls and a flat bottom. The configuration is meant to represent an idealized North Atlantic or North Pacific basin. The circulation is forced by analytical profiles of wind and buoyancy fluxes.
The wind stress is zonal and its curl changes sign at 22° and 36° of latitude. It forces a subpolar gyre in the north, a subtropical gyre in the wider part of the domain, and a small recirculation gyre in the southern corner. The net heat flux takes the form of a restoring toward a zonal apparent air temperature profile.
A portion of the net heat flux, which comes from the solar radiation, is allowed to penetrate within the water column. The fresh water flux is also prescribed and varies zonally. It is chosen such that, at each time step, the basin-integrated flux is zero.
The basin is initialized at rest with vertical profiles of temperature and salinity uniformly applied to the whole domain. The GYRE configuration is set through the namelist_cfg file.
The horizontal resolution is determined by setting jp_cfg as follows:
`jpiglo = 30 x jp_cfg + 2`
`jpjglo = 20 x jp_cfg + 2`
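For example, jp_cfg = 128 (Test Case A) gives a horizontal grid of `jpiglo = 30 x 128 + 2 = 3842` by `jpjglo = 20 x 128 + 2 = 2562` points, while jp_cfg = 256 (Test Case B) doubles this to `7682 x 5122` points.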
In this configuration, we use the default value of 30 ocean levels (jpk=31). The GYRE configuration is an ideal case for benchmark tests as it is very simple to increase the resolution and perform both weak and strong scalability experiments using the same input files. We use the following two configurations:
**Test Case A**:
* jp_cfg = 128, suitable up to 1,000 cores
* Number of days: 20
* Number of time steps: 1440
* Time step size: 20 mins
* Number of seconds per time step: 1200
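As a consistency check, the simulated period follows directly from these settings: `1440 time steps x 1200 s/step = 1,728,000 s = 20 days`.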
**Test Case B**:
* jp_cfg = 256, suitable up to 20,000 cores
* Number of days (real): 80
* Number of time steps: 4320
* Time step size (real): 20 mins
* Number of seconds per time step: 1200
* Web site: <http://www.nemo-ocean.eu/>
* Download, Build and Run Instructions : <https://repository.prace-ri.eu/git/UEABS/ueabs/tree/master/nemo>
# PFARM <a name="pfarm"></a>
PFARM is part of a suite of programs based on the ‘R-matrix’ ab-initio approach to the variational solution of the many-electron Schrödinger
equation for electron-atom and electron-ion scattering. The package has been used to calculate electron collision data for astrophysical
applications (such as: the interstellar medium, planetary atmospheres) with, for example, various ions of Fe and Ni and neutral O, plus
other applications such as data for plasma modelling and fusion reactor impurities. The code has recently been adapted to form a compatible
interface with the UKRmol suite of codes for electron (positron) molecule collisions thus enabling large-scale parallel ‘outer-region’
calculations for molecular systems as well as atomic systems.
The PFARM outer-region application code EXDIG is dominated by the assembly of sector Hamiltonian matrices and their subsequent eigensolutions.
The code is written in Fortran 2003 (or Fortran 2003-compliant Fortran 95), is parallelised using MPI and OpenMP and is designed to take
advantage of highly optimised numerical library routines. Hybrid MPI/OpenMP parallelisation has also been introduced into the code via
shared memory enabled numerical library kernels.
Accelerator-based versions of EXDIG have been implemented, using off-loading (MKL or cuBLAS/cuSolver) for the standard (dense) eigensolver calculations that dominate the overall run-time.
- CPU Code download: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r2.1-dev/pfarm/RMX_MAGMA_CPU_mol.tar.gz
- Build & Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r2.1-dev/pfarm/PFARM_Build_Run_README.txt
- Test Case A: https://repository.prace-ri.eu/UEABS/ueabs/blob/r2.1-dev/pfarm/PFARM_TestCaseA.tar.bz2
- Test Case B: https://repository.prace-ri.eu/UEABS/ueabs/blob/r2.1-dev/pfarm/PFARM_TestCaseB.tar.bz2
- Test Case A: https://repository.prace-ri.eu/ueabs/Quantum_Espresso/QuantumEspresso_TestCaseA.tar.gz
- Test Case B: https://repository.prace-ri.eu/ueabs/Quantum_Espresso/QuantumEspresso_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/quantum_espresso/QE-guide.txt
# SHOC <a name="shoc"></a>
The Scalable HeterOgeneous Computing (SHOC) benchmark suite is a collection of benchmark programs testing the performance and stability of systems using computing devices with non-traditional architectures
for general-purpose computing. It serves as a synthetic benchmark suite in the UEABS context. Its initial focus is on systems containing Graphics Processing Units (GPUs) and multi-core processors, featuring implementations using both CUDA and OpenCL. It can be used on clusters as well as individual hosts.
Also, SHOC includes an Offload branch for the benchmarks that can be used to evaluate the Intel Xeon Phi x100 family.
The SHOC benchmark suite currently contains benchmark programs categorized by complexity. Some measure low-level "feeds and speeds" behavior (Level 0), some measure the performance of a higher-level operation such as a Fast Fourier Transform (FFT) (Level 1), and the others measure real application kernels (Level 2).
- Run instructions: https://repository.prace-ri.eu/git/ueabs/ueabs/blob/r2.1-dev/shoc/README_ACC.md
# SPECFEM3D <a name="specfem3d"></a>
The software package SPECFEM3D simulates three-dimensional global and regional seismic wave propagation based upon the spectral-element method (SEM). All SPECFEM3D_GLOBE software is written in Fortran90 with full portability in mind, and conforms strictly to the Fortran95 standard. It uses no obsolete or obsolescent features of Fortran77. The package uses parallel programming based upon the Message Passing Interface (MPI).
...
...
- Web site: http://geodynamics.org/cig/software/specfem3d_globe/
## Version 2.1 (April 30, 2019)
* Updated the benchmark suite to the status as used for the PRACE-5IP benchmarking deliverable D7.5 "Evaluation of Accelerated and Non-accelerated Benchmarks" (April 18, 2019)
* Removed GENE
## Version 2.0 (PRACE-5IP MS31, May 31, 2018)
* Reconstructed this versioned git repository from a "flat" web site representation
...
...
This started as a fork / subset of UEABS Version 1.2: