# Alya - Large Scale Computational Mechanics

Alya is a simulation code for high performance computational mechanics. Alya solves coupled multiphysics problems using high performance computing techniques for distributed and shared memory supercomputers, together with vectorization and optimization at the node level.

Homepage: https://www.bsc.es/research-development/research-areas/engineering-simulations/alya-high-performance-computational

Alya is available to collaborating projects, and a specific version is distributed as part of the PRACE Unified European Applications Benchmark Suite (http://www.prace-ri.eu/ueabs/#ALYA)


## Building Alya for GPU accelerators


The library currently supports four solvers: GMRES, Deflated Conjugate Gradient, Conjugate Gradient, and Pipelined Conjugate Gradient.
The only preconditioner supported at the moment is 'diagonal'.

Keywords to use the solvers:

```shell
NINJA GMRES               : GGMR
NINJA Deflated CG         : GDECG
NINJA CG                  : GCG
NINJA Pipelined CG        : GPCG

PRECONDITIONER            : DIAGONAL
```
The other options are the same as for the CPU-based solvers.

### GPGPU Building

This version was tested with the Intel Compilers 2017.1, bullxmpi-1.2.9.1 and NVIDIA CUDA 7.5. Ensure that the wrappers `mpif90` and `mpicc` point to the correct binaries and that `$CUDA_HOME` is set.
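Before building, it may help to sanity-check the toolchain. The following sketch is illustrative only (not part of the Alya distribution); it warns if the MPI wrappers, `nvcc`, or `$CUDA_HOME` are missing:

```shell
# Illustrative pre-build check: warn about missing tools and variables.
for tool in mpif90 mpicc nvcc; do
  command -v "$tool" >/dev/null 2>&1 || echo "warning: $tool not found in PATH"
done
[ -n "$CUDA_HOME" ] || echo "warning: CUDA_HOME is not set"
```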

Alya can be used with just MPI or hybrid MPI-OpenMP parallelism. Standard execution mode is to rely on MPI only.

 - Uncompress the source and configure the Metis library dependency and the Alya build options:

```shell
   tar xvf  alya-prace-acc.tar.bz2
```
 - Edit the file `Alya/Thirdparties/metis-4.0/Makefile.in` to select the compiler and target platform. Uncomment the appropriate lines and add optimization parameters, e.g.

```shell
  OPTFLAGS = -O3 -xCORE-AVX2
```

 - Then build Metis4:

```shell
  $ cd Alya/Executables/unix
  $ make metis4
```

 - For Alya there are several example configurations; copy one, e.g. for the Intel Compilers:

```shell
  $ cp configure.in/config_ifort.in config.in
```

 - Edit `config.in`: add the corresponding platform optimization flags to `FCFLAGS`, e.g.

```shell
  FCFLAGS  = -module $O -c -xCORE-AVX2
```
 - MPI: no changes in the configure file are necessary. By default metis4 and 4-byte integers are used.
 - MPI-hybrid (with OpenMP): uncomment the following lines for the OpenMP version (use `-fopenmp` instead of `-qopenmp` for the GCC compilers):

```shell
              CSALYA     := $(CSALYA)   -qopenmp
              EXTRALIB   := $(EXTRALIB) -qopenmp
```
 - Configure and build Alya (`-x` for the Release version; `-g` for the Debug version, which also requires uncommenting the debug and checking flags in `config.in`):

```shell
 ./configure -x nastin parall
 make NINJA=1 -j num_processors
```

### GPGPU Usage

Each problem needs a `GPUconfig.dat` file. A sample is available in `Alya/Thirdparties/ninja` and needs to be copied to the working directory. A README file in the same location provides further information.

Extract the small one-node test case:

 1. `tar xvf cavity1_hexa_med.tar.bz2 && cd cavity1_hexa_med`
 2. Copy the GPUconfig file to your working directory: `cp ../Alya/Thirdparties/ninja/GPUconfig.dat .`
 3. To use the GPU, replace 'GMRES' by 'GGMR' and 'DEFLATED_CG' by 'GDECG', both in `cavity1_hexa.nsi.dat`.
 4. Edit the job script to submit the calculation to the batch system:
    - `job.sh`: modify the path to your `Alya.x` (compiled with the MPI options), then run `sbatch job.sh`.
    - Alternatively, execute directly: `OMP_NUM_THREADS=4 mpirun -np 16 Alya.x cavity1_hexa`

Runtimes on a 16-core Xeon E5-2630 v3 @ 2.40 GHz:

 - with 2 NVIDIA K80 GPUs: ~1:30 min
 - without GPU: ~2:00 min
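For reference, the `job.sh` mentioned above might resemble the following minimal SLURM sketch. The GPU request syntax, the resource numbers, and the `ALYA_BIN` path are site-specific placeholders, not part of the Alya distribution:

```shell
#!/bin/bash
#SBATCH --job-name=cavity1_hexa
#SBATCH --ntasks=16             # MPI ranks
#SBATCH --cpus-per-task=4       # OpenMP threads per rank (hybrid build)
#SBATCH --gres=gpu:2            # GPU request; syntax is site-specific
#SBATCH --time=00:10:00

# Placeholder: adjust to where your Alya.x was built.
ALYA_BIN=$HOME/Alya/Executables/unix/Alya.x

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
mpirun -np "$SLURM_NTASKS" "$ALYA_BIN" cavity1_hexa
```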


## Building Alya for Intel Xeon Phi Knights Landing (KNL)


The Xeon Phi processor version of Alya currently relies on compiler-assisted optimization for AVX-512. Porting of performance-critical kernels to the new assembly instructions is underway. There will not be a version for first-generation Xeon Phi Knights Corner coprocessors.

### KNL Building


This version was tested with the Intel Compilers 2017.1 and Intel MPI 2017.1. Ensure that the wrappers `mpif90` and `mpicc` point to the correct binaries.

Alya can be used with just MPI or hybrid MPI-OpenMP parallelism. Standard execution mode is to rely on MPI only.

 - Uncompress the file `alya-prace-acc.tar.bz2`.
 - `cd Alya/Thirdparties/metis-4.0/`, then edit the file `Makefile.in` to select the compiler and target platform: uncomment the appropriate lines and add optimization parameters, e.g. `CC = icc -xMIC-AVX512`. Then execute `make`.
 - `cd Alya/Executables/unix`
 - There are several example configurations; copy one, e.g. for the Intel Compilers: `cp configure.in/config_ifort.in config.in`
 - Edit `config.in`: add the corresponding platform optimization flags to `FCFLAGS`, e.g. `-module $O -c -xMIC-AVX512`.
 - MPI: no changes in the configure file are necessary. By default metis4 and 4-byte integers are used.
 - MPI-hybrid (with OpenMP): uncomment the following lines for the OpenMP version (use `-fopenmp` instead of `-qopenmp` for the GCC compilers):

```shell
              CSALYA     := $(CSALYA)   -qopenmp
              EXTRALIB   := $(EXTRALIB) -qopenmp
```

 - Run `./configure -x nastin parall` (`-x` for the Release version; `-g` for the Debug version, which also requires uncommenting the debug and checking flags in `config.in`).
 - `make -j num_processors`


### KNL Usage

Extract the small one-node test case:

 1. `tar xvf cavity1_hexa_med.tar.bz2 && cd cavity1_hexa_med`
 2. Edit the job script to submit the calculation to the batch system:
    - `job.sh`: modify the path to your `Alya.x`, then run `sbatch job.sh`.
    - Alternatively, execute directly: `OMP_NUM_THREADS=4 mpirun -np 16 Alya.x cavity1_hexa`

Runtime on a 68-core Xeon Phi(TM) CPU 7250 @ 1.40 GHz: ~3:00 min


## Remarks


If the number of elements is too low for a scalability analysis, Alya includes a mesh multiplication technique. It is enabled through an input option in the `ker.dat` file that sets the number of mesh multiplication levels (0 meaning no mesh multiplication). At each multiplication level the number of elements is multiplied by 8, so a very large mesh can be generated automatically in order to study the scalability of the code on different architectures. Note that the mesh multiplication is carried out in parallel and thus should not noticeably impact the duration of the simulation process.
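The growth per level is easy to estimate; the sketch below illustrates it (the starting element count is an arbitrary example, not taken from Alya):

```shell
# Each mesh multiplication level multiplies the element count by 8.
base_elements=100000     # arbitrary example starting mesh size
count=$base_elements
for level in 0 1 2 3; do
  echo "level $level: $count elements"
  count=$(( count * 8 ))
done
```

With only three levels, the 100,000-element example mesh already grows to 51,200,000 elements.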