# Unified European Applications Benchmark Suite, version 1.3
The Unified European Application Benchmark Suite (UEABS) is a set of 12 application codes taken from the pre-existing PRACE and DEISA application benchmark suites to form a single suite, with the objective of providing a set of scalable, currently relevant and publicly available codes and datasets, of a size which can realistically be run on large systems, and maintained into the future. This work has been undertaken by Task 7.4 "Unified European Applications Benchmark Suite for Tier-0 and Tier-1" in the PRACE Second Implementation Phase (PRACE-2IP) project and will be updated and maintained by subsequent PRACE Implementation Phase projects.
For more details of the codes and datasets, and sample results, please see http://www.prace-ri.eu/IMG/pdf/d7.4_3ip.pdf
Release notes for version 1.3, released on 1st June 2018 as a result of PRACE-4IP activities.
Changes from version 1.2 are as follows:
- CP2K: updated build instructions and datasets
- QCD: new version of code and dataset
- ALYA: new datasets
- Code_Saturne: new code version, build instructions, and datasets
- Quantum Espresso: new build/run instructions
Release notes for version 1.2, released on 31st October 2016 as a result of PRACE-4IP activities.
Changes from version 1.1 are as follows:
- GENE: new version of code and additional new dataset.
- GPAW: new version of code and new dataset.
- GROMACS: new version of code and updated dataset.
- NAMD: new version of code and minor build and run instructions updates.
- NEMO: new version of code and replaced dataset.
Release notes for version 1.1, released on 31st May 2014 as a result of PRACE-3IP activities.
Changes from version 1.0 are as follows:
- ALYA: new version of code and new datasets.
- Code_Saturne: additional large dataset, using tetrahedral elements.
- CP2K: new build instructions.
- GPAW: new dataset with reduced runtime.
The codes composing the UEABS are:
- [ALYA](#alya)
- [Code_Saturne](#saturne)
- [CP2K](#cp2k)
- [GADGET](#gadget)
- [GENE](#gene)
- [GPAW](#gpaw)
- [GROMACS](#gromacs)
- [NAMD](#namd)
- [NEMO](#nemo)
- [QCD](#qcd)
- [Quantum Espresso](#espresso)
- [SPECFEM3D](#specfem3d)
# ALYA <a name="alya"></a>
The Alya System is a Computational Mechanics code capable of solving different physics problems, each with its own modelling characteristics, in a coupled way. Among the problems it solves are: convection-diffusion reactions, incompressible flows, compressible flows, turbulence, bi-phasic flows and free surface, excitable media, acoustics, thermal flow, quantum mechanics (DFT) and solid mechanics (large strain). ALYA is written in Fortran 90/95 and parallelized using MPI and OpenMP.
- Web site: https://www.bsc.es/computer-applications/alya-system
- Code download: http://www.prace-ri.eu/UEABS/ALYA/1.1/alya3226.tar.gz
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/alya/ALYA_Build_README.txt
- Test Case A: http://www.prace-ri.eu/UEABS/ALYA/1.3/ALYA_TestCaseA.tar.bz2
- Test Case B: http://www.prace-ri.eu/UEABS/ALYA/1.3/ALYA_TestCaseB.tar.bz2
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/alya/ALYA_Run_README.txt
# Code_Saturne <a name="saturne"></a>
Code_Saturne® is a multipurpose Computational Fluid Dynamics (CFD) software package, which has been developed by EDF (France) since 1997. The code was originally designed for industrial applications and research activities in several fields related to energy production; typical examples include nuclear power thermal-hydraulics, gas and coal combustion, turbo-machinery, heating, ventilation, and air conditioning. In 2007, EDF released the code as open source, allowing both industry and academia to benefit from its extensive pedigree. Code_Saturne®’s open-source status allows for answers to specific needs that cannot easily be made available in commercial “black box” packages. It also makes it possible for industrial users and for their subcontractors to develop and maintain their own independent expertise and to fully control the software they use.
Code_Saturne® is based on a co-located finite volume approach that can handle three-dimensional meshes built with any type of cell (tetrahedral, hexahedral, prismatic, pyramidal, polyhedral) and with any type of grid structure (unstructured, block structured, hybrid). The code is able to simulate either incompressible or compressible flows, with or without heat transfer, and has a variety of models to account for turbulence. Dedicated modules are available for specific physics such as radiative heat transfer, combustion (e.g. with gas, coal and heavy fuel oil), magneto-hydrodynamics, compressible flows and two-phase flows. The software comprises around 350 000 lines of source code, with about 37% written in Fortran90, 50% in C and 15% in Python. The code is parallelised using MPI with some OpenMP.
- Code download: http://code-saturne.org/cms/download or http://www.prace-ri.eu/UEABS/Code_Saturne/1.3/Code_Saturne-4.0.6_UEABS.tar.gz
- Disclaimer: please note that by downloading the code from this website, you agree to be bound by the terms of the GPL license.
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/code_saturne/Code_Saturne_Build_Run_4.0.6.pdf
- Test Case A: http://www.prace-ri.eu/UEABS/Code_Saturne/1.3/Code_Saturne_TestCaseA.tar.gz
- Test Case B: http://www.prace-ri.eu/UEABS/Code_Saturne/1.3/Code_Saturne_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/code_saturne/Code_Saturne_Build_Run_4.0.6.pdf
# CP2K <a name="cp2k"></a>
CP2K is a freely available (GPL) program to perform atomistic and molecular simulations of solid state, liquid, molecular and biological systems. It provides a general framework for different methods, such as density functional theory (DFT) using a mixed Gaussian and plane waves approach (GPW), and classical pair and many-body potentials. It is written in well-structured, standards-conforming Fortran 95, parallelized with MPI and, in parts, with hybrid OpenMP+MPI as an option.
CP2K provides state-of-the-art methods for efficient and accurate atomistic simulations; its sources are freely available and actively improved. It has an active international development team, with its unofficial headquarters at the University of Zürich.
- Web site: https://www.cp2k.org/
- Code download: https://www.cp2k.org/download
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/cp2k/CP2K_Build_README.txt
- Test Case A: http://www.prace-ri.eu/UEABS/CP2K/1.3/CP2K_TestCaseA.tar.gz
- Test Case B: http://www.prace-ri.eu/UEABS/CP2K/1.3/CP2K_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/cp2k/CP2K_Run_README.txt
# GADGET <a name="gadget"></a>
GADGET is a freely available code for cosmological N-body/SPH simulations on massively parallel computers with distributed memory written by Volker Springel, Max Planck Institute for Astrophysics, Garching, Germany. GADGET is written in C and uses an explicit communication model that is implemented with the standardized MPI communication interface. The code can be run on essentially all supercomputer systems presently in use, including clusters of workstations or individual PCs. GADGET computes gravitational forces with a hierarchical tree algorithm (optionally in combination with a particle-mesh scheme for long-range gravitational forces) and represents fluids by means of smoothed particle hydrodynamics (SPH). The code can be used for studies of isolated systems, or for simulations that include the cosmological expansion of space, either with, or without, periodic boundary conditions. In all these types of simulations, GADGET follows the evolution of a self-gravitating collisionless N-body system, and allows gas dynamics to be optionally included. Both the force computation and the time stepping of GADGET are fully adaptive, with a dynamic range that is, in principle, unlimited. GADGET can therefore be used to address a wide array of astrophysically interesting problems, ranging from colliding and merging galaxies, to the formation of large-scale structure in the Universe. With the inclusion of additional physical processes such as radiative cooling and heating, GADGET can also be used to study the dynamics of the gaseous intergalactic medium, or to address star formation and its regulation by feedback processes.
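For illustration only (none of this is GADGET code), the short NumPy sketch below evaluates softened gravitational accelerations by direct summation; its O(N²) cost is exactly what the hierarchical tree (optionally combined with the particle-mesh scheme) is designed to avoid.

```python
import numpy as np

def direct_accelerations(pos, mass, G=1.0, softening=0.05):
    """Softened gravitational accelerations by direct O(N^2) summation.

    pos  : (N, 3) particle positions
    mass : (N,)   particle masses
    GADGET avoids this brute-force sum by using a hierarchical tree,
    optionally combined with a particle-mesh scheme for long-range forces.
    """
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]    # r_j - r_i
    dist2 = np.sum(diff**2, axis=-1) + softening**2
    inv_d3 = dist2**-1.5
    np.fill_diagonal(inv_d3, 0.0)                            # no self-force
    return G * np.einsum('ij,ijk->ik', mass * inv_d3, diff)

# Example: accelerations for 1000 randomly placed unit-mass particles
rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(1000, 3))
acc = direct_accelerations(pos, np.ones(1000))
```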
- Web site: http://www.mpa-garching.mpg.de/gadget/
- Code download: http://www.prace-ri.eu/UEABS/GADGET/gadget3_Source.tar.gz
- Disclaimer: please note that by downloading the code from this website, you agree to be bound by the terms of the GPL license.
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/gadget/gadget3_Build_README.txt
- Test Case A: http://www.prace-ri.eu/UEABS/GADGET/gadget3_TestCaseA.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/gadget/gadget3_Run_README.txt
# GENE <a name="gene"></a>
GENE is a gyrokinetic plasma turbulence code which has been developed since the late 1990s and is physically very comprehensive and flexible as well as computationally very efficient and highly scalable. Originally used for flux-tube simulations, today GENE also operates as a global code, either gradient- or flux-driven. An arbitrary number of gyrokinetic particle species can be taken into account, including electromagnetic effects and collisions. GENE is, in principle, able to cover the widest possible range of scales, all the way from the system size (where nonlocal effects or avalanches can play a role) down to sub-ion-gyroradius scales (where ETG or micro-tearing modes may contribute to the transport), depending on the available computer resources. Moreover, there exist interfaces to various MHD equilibrium codes. GENE has been carefully benchmarked against theoretical results and other codes.
The GENE code is written in Fortran 90 and C and is parallelized with pure MPI. It strongly relies on a Fast Fourier Transform library and has built-in support for FFTW, MKL or ESSL. It also uses LAPACK and ScaLAPACK routines for LU decomposition and solution of a linear system of equations of moderate size (up to 1000 unknowns).
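As a sketch of that last dependency only (this is not GENE code, and the matrix below is an arbitrary stand-in), the LAPACK-backed factorise-once, solve-many pattern looks like this in Python/SciPy:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 1000                                           # "moderate size", as in the text above
rng = np.random.default_rng(42)
A = rng.standard_normal((n, n)) + n * np.eye(n)    # well-conditioned stand-in matrix
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

lu, piv = lu_factor(A)          # one O(n^3) LU factorisation (LAPACK getrf)
x1 = lu_solve((lu, piv), b1)    # cheap O(n^2) solves reusing the factors (getrs)
x2 = lu_solve((lu, piv), b2)

assert np.allclose(A @ x1, b1) and np.allclose(A @ x2, b2)
```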
- Web site: http://gene.rzg.mpg.de/
- Code download: http://www.prace-ri.eu/UEABS/GENE/1.2/GENE.tar.gz
- Disclaimer: please note that by downloading the code from this website, you agree to be bound by the terms of the GPL license.
- Build instructions: included with code download
- Test Case A: included with code download
- Test Case B: included with code download
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/gene/GENE_Run_README.txt
# GPAW <a name="gpaw"></a>
GPAW is an efficient program package for electronic structure calculations based on the density functional theory (DFT) and the time-dependent density functional theory (TD-DFT). The density-functional theory allows studies of ground state properties such as energetics and equilibrium geometries, while the time-dependent density functional theory can be used for calculating excited state properties such as optical spectra. The program package includes two complementary implementations of time-dependent density functional theory: a linear response formalism and a time-propagation in real time.
The program uses the projector augmented wave (PAW) method that allows one to get rid of the core electrons and work with soft pseudo valence wave functions. The PAW method can be applied on the same footing to all elements, for example, it provides a reliable description of the transition metal elements and the first row elements with open p-shells that are often problematic for standard pseudopotentials. A further advantage of the PAW method is that it is an all-electron method (frozen core approximation) and there is a one to one transformation between the pseudo and all-electron quantities.
The equations of the (time-dependent) density functional theory within the PAW method are discretized using finite differences and uniform real-space grids. The real-space representation allows flexible boundary conditions, as the system can be finite or periodic in one, two or three dimensions (e.g. cluster, slab, bulk). The accuracy of the discretization is controlled basically by a single parameter, the grid spacing. The real-space representation also allows efficient parallelization with domain decomposition.
The program offers several parallelization levels. The most basic parallelization strategy is domain decomposition over the real-space grid. In magnetic systems it is possible to parallelize over spin, and in systems that have k-points (surfaces or bulk systems) parallelization over k-points is also possible. Furthermore, parallelization over electronic states is possible in DFT and in real-time TD-DFT calculations. GPAW is written in Python and C and parallelized with MPI.
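For orientation, a minimal GPAW calculation written against its Python interface might look as follows; this is an illustration only, not one of the UEABS benchmark inputs, and the molecule, vacuum width and output file name are arbitrary. The grid spacing discussed above enters as the single parameter h.

```python
# Minimal finite-difference GPAW calculation (illustration only, not a UEABS input).
# The grid spacing h (in Angstrom) is the single parameter controlling the
# accuracy of the real-space discretisation described above.
from ase.build import molecule
from gpaw import GPAW

atoms = molecule('H2O')
atoms.center(vacuum=4.0)            # finite (non-periodic) boundary conditions

calc = GPAW(mode='fd',              # finite-difference real-space mode
            h=0.20,                 # uniform real-space grid spacing
            xc='PBE',
            txt='h2o_gs.txt')
atoms.calc = calc
energy = atoms.get_potential_energy()   # ground-state DFT total energy (eV)
print(energy)
```

Run under MPI (through GPAW's parallel Python interpreter or mpirun, depending on how GPAW was built), the same script exercises the domain decomposition and the other parallelization levels described above.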
- Web site: https://wiki.fysik.dtu.dk/gpaw/
- Code download: https://gitlab.com/gpaw/gpaw
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/gpaw/GPAW_Build_README.txt
- Test Case A: http://www.prace-ri.eu/UEABS/GPAW/GPAW_benchmark.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/gpaw/GPAW_Run_README.txt
# GROMACS <a name="gromacs"></a>
GROMACS is a versatile package to perform molecular dynamics, i.e. simulate the Newtonian equations of motion for systems with hundreds to millions of particles.
It is primarily designed for biochemical molecules like proteins, lipids and nucleic acids that have a lot of complicated bonded interactions, but since GROMACS is extremely fast at calculating the nonbonded interactions (that usually dominate simulations) many groups are also using it for research on non-biological systems, e.g. polymers.
GROMACS supports all the usual algorithms you expect from a modern molecular dynamics implementation, (check the online reference or manual for details), but there are also quite a few features that make it stand out from the competition:
- GROMACS provides extremely high performance compared to all other programs. A lot of algorithmic optimizations have been introduced in the code; we have for instance extracted the calculation of the virial from the innermost loops over pairwise interactions, and we use our own software routines to calculate the inverse square root. In GROMACS 4.6 and up, on almost all common computing platforms, the innermost loops are written in C using intrinsic functions that the compiler transforms to SIMD machine instructions, to utilize the available instruction-level parallelism. These kernels are available in both single and double precision, and support all the different kinds of SIMD instructions found in x86-family (and other) processors.
- Also since GROMACS 4.6, we have excellent CUDA-based GPU acceleration on GPUs that have Nvidia compute capability >= 2.0 (e.g. Fermi or later).
- GROMACS is user-friendly, with topologies and parameter files written in clear text format. There is a lot of consistency checking, and clear error messages are issued when something is wrong. Since a C preprocessor is used, you can have conditional parts in your topologies and include other files. You can even compress most files and GROMACS will automatically pipe them through gzip upon reading.
- There is no scripting language – all programs use a simple interface with command line options for input and output files. You can always get help on the options by using the -h option, or use the extensive manuals provided free of charge in electronic or paper format.
- As the simulation is proceeding, GROMACS will continuously tell you how far it has come, and what time and date it expects to be finished.
- Both run input files and trajectories are independent of hardware endian-ness, and can thus be read by any version of GROMACS, even if it was compiled using a different floating-point precision.
- GROMACS can write coordinates using lossy compression, which provides a very compact way of storing trajectory data. The accuracy can be selected by the user.
- GROMACS comes with a large selection of flexible tools for trajectory analysis – you won’t have to write any code to perform routine analyses. The output is further provided in the form of finished Xmgr/Grace graphs, with axis labels, legends, etc. already in place!
- A basic trajectory viewer that only requires standard X libraries is included, and several external visualization tools can read the GROMACS file formats.
- GROMACS can be run in parallel, using either the standard MPI communication protocol, or via our own “Thread MPI” library for single-node workstations.
- GROMACS contains several state-of-the-art algorithms that make it possible to extend the time steps in simulations significantly, and thereby further enhance performance without sacrificing accuracy or detail.
- The package includes a fully automated topology builder for proteins, even multimeric structures. Building blocks are available for the 20 standard amino acid residues as well as some modified ones, the 4 nucleotide and 4 deoxynucleotide residues, several sugars and lipids, and some special groups like hemes and several small molecules.
- There is ongoing development to extend GROMACS with interfaces both to Quantum Chemistry and Bioinformatics/databases.
- GROMACS is Free Software, available under the GNU Lesser General Public License (LGPL), version 2.1. You can redistribute it and/or modify it under the terms of the LGPL as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.
Instructions:
- Web site: http://www.gromacs.org/
- Code download: http://www.gromacs.org/Downloads (the UEABS benchmark cases require the 5.1.x or newer branch; the latest 2016 release is suggested)
- Test Case A: http://www.prace-ri.eu/UEABS/GROMACS/1.2/GROMACS_TestCaseA.tar.gz
- Test Case B: http://www.prace-ri.eu/UEABS/GROMACS/1.2/GROMACS_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/gromacs/GROMACS_Run_README.txt
# NAMD <a name="namd"></a>
NAMD is a widely used molecular dynamics application designed to simulate bio-molecular systems on a wide variety of compute platforms. NAMD is developed by the “Theoretical and Computational Biophysics Group” at the University of Illinois at Urbana-Champaign. In the design of NAMD particular emphasis has been placed on scalability when utilizing a large number of processors. The application can read a wide variety of file formats commonly used in bio-molecular science, for example force fields and protein structures.
A NAMD license can be applied for on the developer’s website free of charge. Once the license has been obtained, binaries for a number of platforms and the source can be downloaded from the website.
Deployment areas of NAMD include pharmaceutical research by academic and industrial users. NAMD is particularly suitable when the interaction between a number of proteins or between proteins and other chemical substances is of interest. Typical examples are vaccine research and transport processes through cell membrane proteins.
NAMD is written in C++ and parallelised using Charm++ parallel objects, which are implemented on top of MPI.
- Web site: http://www.ks.uiuc.edu/Research/namd/
- Code download: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/namd/NAMD_Download_README.txt
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/namd/NAMD_Build_README.txt
- Test Case A: http://www.prace-ri.eu/UEABS/NAMD/1.2/NAMD_TestCaseA.tar.gz
- Test Case B: http://www.prace-ri.eu/UEABS/NAMD/1.2/NAMD_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/namd/NAMD_Run_README.txt
# NEMO <a name="nemo"></a>
NEMO (Nucleus for European Modelling of the Ocean) is a state-of-the-art modelling framework for oceanographic research, operational oceanography, seasonal forecasting and climate studies. Prognostic variables are the three-dimensional velocity field, a linear or non-linear sea surface height, the temperature and the salinity. In the horizontal direction, the model uses a curvilinear orthogonal grid and in the vertical direction, a full or partial step z-coordinate, or s-coordinate, or a mixture of the two. Variables are distributed on a three-dimensional Arakawa C-type grid. Within NEMO, the ocean is interfaced with a sea-ice model (LIM v2 and v3), passive tracer and biogeochemical models (TOP) and, via the OASIS coupler, with several atmospheric general circulation models. It also supports two-way grid embedding via the AGRIF software.
The framework includes five major components:
- the blue ocean (ocean dynamics, NEMO-OPA)
- the white ocean (sea-ice, NEMO-LIM)
- the green ocean (biogeochemistry, NEMO-TOP)
- the adaptive mesh refinement software (AGRIF)
- the assimilation component (NEMO_TAM)
NEMO is used by a large community: 240 projects in 27 countries (14 in Europe, 13 elsewhere) and 350 registered users (numbers for year 2008). The code is available under the CeCILL license (public license). The latest stable version is 3.6. NEMO is written in Fortran90 and parallelized with MPI.
- Web site: http://www.nemo-ocean.eu/
- Code download: http://www.prace-ri.eu/UEABS/NEMO/NEMO_Source.tar.gz
- Disclaimer: please note that by downloading the code from this website, you agree to be bound by the terms of the GPL license.
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/nemo/NEMO_Build_README.txt
- Test Case A: http://www.prace-ri.eu/UEABS/NEMO/NEMO_TestCaseA.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/nemo/NEMO_Run_README.txt
# QCD <a name="qcd"></a>
The QCD benchmark is, unlike the other benchmarks in the PRACE application benchmark suite, not a full application but a set of 5 kernels which are representative of some of the most compute-intensive parts of QCD calculations.
Each of the 5 kernels has one test case:
Kernel A is derived from BQCD (Berlin Quantum ChromoDynamics program), a hybrid Monte-Carlo code that simulates Quantum Chromodynamics with dynamical standard Wilson fermions. The computations take place on a four-dimensional regular grid with periodic boundary conditions. The kernel is a standard conjugate gradient solver with even/odd pre-conditioning. Lattice size is 32² x 64².
Kernel B is derived from SU3_AHiggs, a lattice quantum chromodynamics (QCD) code intended for computing the conditions of the Early Universe. Instead of “full QCD”, the code applies an effective field theory, which is valid at high temperatures. In the effective theory, the lattice is 3D. Lattice size is 256³.
Kernel C: lattice size is 8⁴. Note that Kernel C can only be run in a weak scaling mode, where each CPU stores the same local lattice size, regardless of the number of CPUs. Ideal scaling for this kernel therefore corresponds to constant execution time, and performance is simply the reciprocal of the execution time.
Kernel D consists of the core matrix-vector multiplication routine for standard Wilson fermions. The lattice size is 64⁴.
Kernel E consists of a full conjugate gradient solution using Wilson fermions. Lattice size is 64³ x 3.
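Kernels A and E are built around conjugate gradient solvers. Purely to illustrate that algorithm (the real kernels apply Wilson-fermion operators on 4D lattices rather than dense matrices, and this sketch is not taken from the benchmark sources), a plain conjugate gradient iteration in Python/NumPy looks like this:

```python
import numpy as np

def conjugate_gradient(apply_A, b, tol=1e-8, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite operator A.

    apply_A is a function returning A @ x, mirroring how lattice QCD codes
    apply the (preconditioned) fermion matrix without ever storing it densely.
    """
    x = np.zeros_like(b)
    r = b - apply_A(x)            # initial residual
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Small dense test problem (the benchmark kernels work on 4D lattices instead)
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)          # symmetric positive definite
b = rng.standard_normal(200)
x = conjugate_gradient(lambda v: A @ v, b)
assert np.allclose(A @ x, b, atol=1e-5)
```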
- Code download: http://www.prace-ri.eu/UEABS/QCD/1.3/QCD_Source_TestCaseA.tar.gz
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/qcd/QCD_Build_README.txt
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/qcd/QCD_Run_README.txt
# Quantum Espresso <a name="espresso"></a>
QUANTUM ESPRESSO is an integrated suite of computer codes for electronic-structure calculations and materials modeling, based on density-functional theory, plane waves, and pseudopotentials (norm-conserving, ultrasoft, and projector-augmented wave). QUANTUM ESPRESSO stands for opEn Source Package for Research in Electronic Structure, Simulation, and Optimization. It is freely available to researchers around the world under the terms of the GNU General Public License. QUANTUM ESPRESSO builds upon newly restructured electronic-structure codes that have been developed and tested by some of the original authors of novel electronic-structure algorithms and applied in the last twenty years by some of the leading materials modeling groups worldwide. Innovation and efficiency are still its main focus, with special attention paid to massively parallel architectures, and a great effort being devoted to user friendliness. QUANTUM ESPRESSO is evolving towards a distribution of independent and inter-operable codes in the spirit of an open-source project, where researchers active in the field of electronic-structure calculations are encouraged to participate in the project by contributing their own codes or by implementing their own ideas into existing codes.
QUANTUM ESPRESSO is written mostly in Fortran90, and parallelised using MPI and OpenMP.
- Web site: http://www.quantum-espresso.org/
- Code download: http://www.quantum-espresso.org/download/
- Build instructions: http://www.quantum-espresso.org/wp-content/uploads/Doc/user_guide/
- Test Case A: http://www.prace-ri.eu/UEABS/Quantum_Espresso/QuantumEspresso_TestCaseA.tar.gz
- Test Case B: http://www.prace-ri.eu/UEABS/Quantum_Espresso/QuantumEspresso_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/quantum_espresso/QE-guide.txt
# SPECFEM3D <a name="specfem3d"></a>
The software package SPECFEM3D simulates three-dimensional global and regional seismic wave propagation based upon the spectral-element method (SEM). All SPECFEM3D_GLOBE software is written in Fortran90 with full portability in mind, and conforms strictly to the Fortran95 standard. It uses no obsolete or obsolescent features of Fortran77. The package uses parallel programming based upon the Message Passing Interface (MPI).
The SEM was originally developed in computational fluid dynamics and has been successfully adapted to address problems in seismic wave propagation. It is a continuous Galerkin technique, which can easily be made discontinuous; it is then close to a particular case of the discontinuous Galerkin technique, with optimized efficiency because of its tensorized basis functions. In particular, it can accurately handle very distorted mesh elements. It has very good accuracy and convergence properties. The spectral element approach admits spectral rates of convergence and allows exploiting hp-convergence schemes. It is also very well suited to parallel implementation on very large supercomputers as well as on clusters with GPU accelerator cards. Tensor products inside each element can be optimized to reach very high efficiency, and mesh point and element numbering can be optimized to reduce processor cache misses and improve cache reuse. The SEM can also handle triangular (in 2D) or tetrahedral (3D) elements as well as mixed meshes, although with increased cost and reduced accuracy in these elements, as in the discontinuous Galerkin method.
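As a sketch of what the tensorized basis functions mentioned above buy (an illustration of the idea only, not SPECFEM3D code; the differentiation matrix here is a crude stand-in), derivatives of a field sampled on the tensor-product point grid of a single element reduce to three small 1D matrix contractions:

```python
import numpy as np

def element_gradient(field, D):
    """Reference-coordinate gradient of a field sampled on an (n, n, n)
    tensor-product grid inside one spectral element.

    D is the (n, n) 1D differentiation matrix associated with the element's
    1D basis/quadrature points. Because the 3D basis functions are tensor
    products of 1D functions, each derivative costs three small matrix
    contractions (O(n^4) work) instead of one dense 3D operator (O(n^6)).
    """
    d_dxi   = np.einsum('ia,ajk->ijk', D, field)   # derivative along 1st index
    d_deta  = np.einsum('ja,iak->ijk', D, field)   # derivative along 2nd index
    d_dzeta = np.einsum('ka,ija->ijk', D, field)   # derivative along 3rd index
    return d_dxi, d_deta, d_dzeta

# Example with a hypothetical degree-4 element (5 points per direction)
n = 5
x = np.linspace(-1.0, 1.0, n)              # stand-in for Gauss-Lobatto-Legendre points
D = np.gradient(np.eye(n), x, axis=0)      # crude finite-difference stand-in for the
                                           # exact Lagrange-derivative matrix SEM codes use
field = np.random.default_rng(0).standard_normal((n, n, n))
gx, gy, gz = element_gradient(field, D)
```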
In many geological models in the context of seismic wave propagation studies (except, for instance, for fault dynamic rupture studies, in which very high frequencies of supershear rupture need to be modeled near the fault), a continuous formulation is sufficient because material property contrasts are not drastic and thus conforming mesh doubling bricks can efficiently handle mesh size variations. This is particularly true at the scale of the full Earth. Effects due to lateral variations in compressional-wave speed, shear-wave speed, density, a 3D crustal model, ellipticity, topography and bathymetry, the oceans, rotation, and self-gravitation are included. The package can accommodate full 21-parameter anisotropy as well as lateral variations in attenuation. Adjoint capabilities and finite-frequency kernel simulations are also included.
- Web site: http://geodynamics.org/cig/software/specfem3d_globe/
- Code download: http://geodynamics.org/cig/software/specfem3d_globe/
- Build instructions: http://www.geodynamics.org/wsvn/cig/seismo/3D/SPECFEM3D_GLOBE/trunk/doc/USER_MANUAL/manual_SPECFEM3D_GLOBE.pdf?op=file&rev=0&sc=0
- Test Case A: http://www.prace-ri.eu/UEABS/SPECFEM3D/SPECFEM3D_TestCaseA.tar.gz
- Test Case B: http://www.prace-ri.eu/UEABS/SPECFEM3D/SPECFEM3D_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/v1.3/specfem3d/SPECFEM3D_Run_README.txt