# ALYA
## Summary Version
1.0
## Purpose of Benchmark
The Alya System is a Computational Mechanics code capable of solving different physics, each with its own modeling characteristics, in a coupled way. Among the problems it solves are: convection-diffusion reactions, incompressible flows, compressible flows, turbulence, bi-phasic flows and free surface, excitable media, acoustics, thermal flow, quantum mechanics (DFT) and solid mechanics (large strain). Alya is written in Fortran 90/95 and parallelized using MPI and OpenMP.
* Web site: https://www.bsc.es/computer-applications/alya-system
* Test Case A: https://gitlab.com/bsc-alya/benchmarks/sphere-16M
* Test Case B: https://gitlab.com/bsc-alya/benchmarks/sphere-132M
## Mechanics of Building Benchmark
You can compile Alya using CMake. It follows the standard CMake workflow, except that compiler management has been customized by the developers.
### Creation of the build directory
In your Alya directory, create a new build directory, for example:
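```bash
# any name works; "build" is the conventional choice
mkdir build
cd build
```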
To configure CMake from the command line, run the following from the build directory:
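```bash
# point CMake at the Alya source tree (the parent directory here)
cmake ..
```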
If you want to customize the build options, use -DOPTION=value. For example, to enable GPU support you would pass something like the following (the exact option name may vary between Alya versions; see the installation documentation linked below):
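```bash
# WITH_GPU is a placeholder flag name: check the installation wiki
# below for the exact option your Alya version defines.
cmake .. -DWITH_GPU=ON
```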
For more information: https://gitlab.com/bsc-alya/alya/-/wikis/Documentation/Installation
## Mechanics of Running Benchmark
### Datasets
The parameters used in the datasets are chosen to represent typical industrial runs as closely as possible, so that the measured speedups are representative. For example, the iterative solvers are never converged to machine accuracy, but only to a percentage of the initial residual.
The different datasets are:
* Test Case A: SPHERE_16.7M, a 16.7M sphere mesh
* Test Case B: SPHERE_132M, a 132M sphere mesh
### How to execute Alya with a given dataset
In order to run Alya, you need at least the following input files per execution:
* X.dom.dat
* X.ker.dat
* X.nsi.dat
* X.dat

In our case, X=sphere.
To execute a simulation, go into the input directory and submit a job that runs:
```bash
mpirun Alya.x sphere
```
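On clusters with a batch scheduler, that command is normally wrapped in a job script. A minimal SLURM sketch, with placeholder resources and paths and no site-specific module loads (adapt everything to your machine):

```bash
#!/bin/bash
#SBATCH --job-name=alya-sphere
#SBATCH --nodes=2              # placeholder: vary for the scaling study
#SBATCH --ntasks-per-node=48   # placeholder: match your cores per node
#SBATCH --time=01:00:00

# Load your site's compiler/MPI environment here (e.g. module load ...).
cd /path/to/input/directory    # placeholder: the dataset directory
srun Alya.x sphere             # or: mpirun Alya.x sphere
```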
### How to measure the speedup
There are many ways to compute the scalability of the Nastin module, Alya's incompressible Navier-Stokes module (a small extraction sketch is given after this list).
1. **For the complete cycle, including element assembly + boundary assembly + subgrid scale assembly + solvers, etc.**
> In the *.nsi.cvg file, use column 30, "Elapsed CPU time".
2. **For single kernels: element assembly, boundary assembly, subgrid scale assembly, solvers.** Average and maximum times are reported in *.nsi.cvg at each iteration of each time step:
> Element assembly: columns 19 (Ass. ave cpu time) and 20 (Ass. max cpu time)
>
> Boundary assembly: columns 33 (Bou. ave cpu time) and 34 (Bou. max cpu time)
>
> Subgrid scale assembly: columns 31 (SGS ave cpu time) and 32 (SGS max cpu time)
>
> Iterative solvers: columns 21 (Sol. ave cpu time) and 22 (Sol. max cpu time)
>
> Note that when Runge-Kutta time integration is used (as it is for the
> sphere), the element and boundary assembly times are those of the last
> assembly of the current time step (out of three for third order).
3. **Using overall times**.
> At the end of the *.log file, total timings are shown for all modules.
> In this case, we use the first value reported for the NASTIN module.
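As an illustration, here is a minimal Python sketch (not part of the benchmark distribution) that sums column 30 of a *.nsi.cvg file. It assumes data rows are whitespace-separated numbers and that header lines start with '$'; verify both assumptions against the files your Alya version writes.

```python
#!/usr/bin/env python3
"""Sum the "Elapsed CPU time" column (column 30, 1-indexed) of an
Alya *.nsi.cvg convergence file.

Assumptions to verify against your Alya version: data rows are
whitespace-separated numbers, and header rows start with '$'.
"""
import sys

def elapsed_cpu_times(path, column=30):
    """Yield the chosen column of every data row in a .nsi.cvg file."""
    with open(path) as f:
        for line in f:
            fields = line.split()
            # Skip empty lines, (assumed) '$' headers, and short rows.
            if not fields or fields[0].startswith("$") or len(fields) < column:
                continue
            try:
                yield float(fields[column - 1])
            except ValueError:
                continue  # non-numeric row, e.g. an unexpected header

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "sphere.nsi.cvg"
    print(f"Total elapsed CPU time: {sum(elapsed_cpu_times(path)):.3f} s")
```

Dividing the totals from two runs at different core counts gives the speedup for the complete Nastin cycle; if your Alya version reports a cumulative elapsed time rather than a per-iteration one, take the last value instead of the sum.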
Contact
-------
If you have any questions regarding the runs, please feel free to contact Guillaume Houzeaux: guillaume.houzeaux@bsc.es