Commit bdf0cd80 authored by Valeriu Codreanu

Merge branch 'r2.1-dev' into 'master'

R2.1 dev

See merge request UEABS/ueabs!4
parents 5b328137 21a3f3ed
iwoph17_paper:
  image: aergus/latex
  script:
    - mkdir -vp doc/build/tex
    - latexmk -help
    - cd doc/iwoph17/ && latexmk -output-directory=../build/tex -pdf t72b.tex
  artifacts:
    paths:
      - doc/build/tex/t72b.pdf

pages:
  image: alpine
  script:
    - apk --no-cache add py2-pip python-dev
    - pip install sphinx
    - apk --no-cache add make
    - ls -al
    - pwd
    - cd doc/sphinx && make html && cd -
    - mv doc/build/sphinx/html public
  only:
    - master
  artifacts:
    paths:
      - public
.PHONY: doc clean
doc:
	$(MAKE) html -C doc/sphinx/
clean:
	$(MAKE) clean -C doc/sphinx/
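With this Makefile the documentation can also be built locally; a minimal usage sketch (run from the repository root, assuming Sphinx is installed):

```shell
make doc    # runs `make html` in doc/sphinx/; output lands under doc/build/sphinx/html
make clean  # removes the generated documentation
```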
@@ -53,21 +53,22 @@ The Alya System is a Computational Mechanics code capable of solving different p
# Code_Saturne <a name="saturne"></a>
Code_Saturne&#174; is a multipurpose Computational Fluid Dynamics (CFD) software package, which has been developed by EDF (France) since 1997. The code was originally designed for industrial applications and research activities in several fields related to energy production; typical examples include nuclear power thermal-hydraulics, gas and coal combustion, turbo-machinery, heating, ventilation, and air conditioning. In 2007, EDF released the code as open source, allowing both industry and academia to benefit from its extensive pedigree. Code_Saturne&#174;’s open-source status allows for answers to specific needs that cannot easily be made available in commercial “black box” packages. It also makes it possible for industrial users and for their subcontractors to develop and maintain their own independent expertise and to fully control the software they use.
Code_Saturne is an open-source, multi-purpose CFD software package, primarily developed and maintained by EDF R&D. It relies on the Finite Volume method and a collocated arrangement of unknowns to solve the Navier-Stokes equations for incompressible or compressible, laminar or turbulent flows, and for Newtonian or non-Newtonian fluids. A highly parallel coupling library (Parallel Locator Exchange - PLE) is also available in the distribution to account for other physics, such as conjugate heat transfer and structural mechanics. For the incompressible solver, the pressure is solved using an integrated Algebraic Multi-Grid algorithm and the scalars are computed by conjugate gradient methods or Gauss-Seidel/Jacobi.
Code_Saturne&#174; is based on a co-located finite volume approach that can handle three-dimensional meshes built with any type of cell (tetrahedral, hexahedral, prismatic, pyramidal, polyhedral) and with any type of grid structure (unstructured, block structured, hybrid). The code is able to simulate either incompressible or compressible flows, with or without heat transfer, and has a variety of models to account for turbulence. Dedicated modules are available for specific physics such as radiative heat transfer, combustion (e.g. with gas, coal and heavy fuel oil), magneto-hydrodynamics, compressible flows, and two-phase flows. The software comprises around 350 000 lines of source code, with about 37% written in Fortran 90, 50% in C and 15% in Python. The code is parallelised using MPI with some OpenMP.
The original version of the code is written in C for pre- and post-processing, I/O handling, parallelisation handling, linear solvers and gradient computation, and in Fortran 95 for most of the physics implementation. MPI is used on distributed-memory machines, and OpenMP pragmas have been added to the most costly parts of the code to exploit shared memory. The version used in this work (also freely available) also relies on CUDA to take advantage of potential GPU acceleration.
- Web site: http://code-saturne.org
- Code download: http://code-saturne.org/cms/download or https://repository.prace-ri.eu/ueabs/Code_Saturne/1.3/Code_Saturne-4.0.6_UEABS.tar.gz
The equations are solved iteratively using time-marching algorithms, and most of the time spent during a time step is usually due to the computation of the velocity-pressure coupling, for simple physics. For this reason, the two test cases chosen for the benchmark suite have been designed to assess the velocity-pressure coupling computation, and rely on the same configuration, with a mesh 8 times larger for Test Case B than for Test Case A, the time step being halved to ensure a correct Courant number.
- Web site: https://code-saturne.org
- Code download: https://repository.prace-ri.eu/ueabs/Code_Saturne/2.1/CS_5.3_PRACE_UEABS.tar.gz
- Disclaimer: please note that by downloading the code from this website, you agree to be bound by the terms of the GPL license.
- Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/code_saturne/Code_Saturne_Build_Run_4.0.6.pdf
- Test Case A: https://repository.prace-ri.eu/ueabs/Code_Saturne/1.3/Code_Saturne_TestCaseA.tar.gz
- Test Case B: https://repository.prace-ri.eu/ueabs/Code_Saturne/1.3/Code_Saturne_TestCaseB.tar.gz
- Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/code_saturne/Code_Saturne_Build_Run_4.0.6.pdf
- Build and Run instructions: [code_saturne/Code_Saturne_Build_Run_5.3_UEABS.pdf](code_saturne/Code_Saturne_Build_Run_5.3_UEABS.pdf)
- Test Case A: https://repository.prace-ri.eu/ueabs/Code_Saturne/2.1/CS_5.3_PRACE_UEABS_CAVITY_13M.tar.gz
- Test Case B: https://repository.prace-ri.eu/ueabs/Code_Saturne/2.1/CS_5.3_PRACE_UEABS_CAVITY_111M.tar.gz
# CP2K <a name="cp2k"></a>
CP2K is a freely available quantum chemistry and solid-state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. CP2K provides a general framework for different modelling methods such as DFT using the mixed Gaussian and plane waves approaches GPW and GAPW. Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, ...), and classical force fields (AMBER, CHARMM, ...). CP2K can do simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimisation, and transition state optimisation using NEB or dimer method.
CP2K is written in Fortran 2008 and can be run in parallel using a combination of multi-threading, MPI, and CUDA. All of CP2K is MPI-parallelised, with some additional loops also being OpenMP-parallelised. It is therefore most important to take advantage of MPI parallelisation; however, running one MPI rank per CPU core often leads to memory shortages. In that case, OpenMP threads can be used to utilise all CPU cores without suffering an overly large memory footprint. The optimal ratio between MPI ranks and OpenMP threads depends on the type of simulation and the system in question. CP2K supports CUDA, allowing it to offload some linear algebra operations, including sparse matrix multiplications, to the GPU through its DBCSR acceleration layer. FFTs can optionally also be offloaded to the GPU. GPU offloading may improve performance, depending on the type of simulation and the system in question.
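As an illustration of the MPI/OpenMP trade-off described above, a minimal hybrid-launch sketch (the rank/thread counts, the generic `mpirun` launcher and the input file name `H2O-64.inp` are placeholders to adapt to your machine; `cp2k.psmp` is the mixed MPI+OpenMP binary):

```shell
# 32 MPI ranks x 4 OpenMP threads = 128 cores, reducing the per-rank memory footprint
export OMP_NUM_THREADS=4
mpirun -np 32 cp2k.psmp -i H2O-64.inp -o H2O-64.out
```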
In order to build ALYA (Alya.x), please follow these steps:
- Go to: Thirdparties/metis-4.0 and build the Metis library (libmetis.a) using 'make'
- Go to the directory: Executables/unix
- Adapt the file: configure-marenostrum-mpi.txt to your own MPI wrappers and paths
- Build the Metis library (libmetis.a) using "make metis4"
- Adapt the file: configure.in to your own MPI wrappers and paths (examples are provided alongside configure.in)
- Execute:
./configure -x -f=configure-marenostrum-mpi.txt nastin parall
./configure -x nastin parall
make
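A minimal consolidated sketch of the configure.in-based sequence above (run from the repository root, assuming configure.in has already been adapted to your compilers and MPI wrappers):

```shell
cd Executables/unix
make metis4                    # build the bundled Metis library (libmetis.a)
./configure -x nastin parall   # configure the nastin and parall modules
make                           # build Alya.x
```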
Data sets
---------
The parameters used in the datasets aim to reproduce typical industrial runs as closely as possible, in order to obtain representative speedups. For example, the iterative solvers are never converged to machine accuracy, but only to a fraction of the initial residual, since the system solution sits inside a non-linear loop.
The cavity datasets represent the solution of the cavity flow at Re=100. A small mesh of 10M elements should be used for Tier-1 supercomputers, while a 30M element mesh is specifically designed to run on Tier-0 supercomputers.
The number of elements can be increased by using the mesh multiplication option in the file *.ker.dat (DIVISION=0,2,3...). The mesh multiplication is carried out in parallel and the number of elements is multiplied by 8 at each of these levels; "0" means no mesh multiplication.
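As a quick check of the resulting problem size, each mesh multiplication level multiplies the element count by 8; a small sketch for the 10M-element cavity mesh:

```shell
# element count after 'level' mesh multiplications: N * 8^level
for level in 0 1 2 3; do
  echo "level=$level -> $((10000000 * 8**level)) elements"
done
```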
The different datasets are:
cavity10_tetra ... 10M tetrahedra mesh
cavity30_tetra ... 30M tetrahedra mesh
SPHERE_16.7M ... 16.7M sphere mesh
SPHERE_132M .... 132M sphere mesh
How to execute Alya with a given dataset
----------------------------------------
@@ -20,30 +14,44 @@ How to execute Alya with a given dataset
In order to run ALYA, you need at least the following input files per execution:
X.dom.dat
X.typ.dat
X.geo.dat
X.bcs.dat
X.inflow_profile.bcs
X.ker.dat
X.nsi.dat
X.dat
In our case there are several inputs: X={cavity10_tetra, cavity30_tetra} for the cavity meshes, and X=sphere for the sphere meshes.
To execute a simulation, you must be inside the input directory and you should submit a job like:
mpirun Alya.x cavity10_tetra
or
mpirun Alya.x cavity30_tetra
or
mpirun Alya.x sphere
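For batch execution, a minimal SLURM job script sketch (the scheduler directives, node counts and the path to Alya.x are assumptions to adapt to your machine; replace sphere by the dataset you are running):

```shell
#!/bin/bash
#SBATCH --job-name=alya_sphere
#SBATCH --nodes=4              # adjust to the target core count
#SBATCH --ntasks-per-node=32
#SBATCH --time=01:00:00

# submit this script from inside the input directory (where sphere.dom.dat etc. live)
mpirun /path/to/Alya.x sphere
```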
How to measure the speedup
--------------------------
1. Edit the fensap.nsi.cvg file
2. You will see ten rows, each one corresponds to one simulation timestep
3. Go to the second row, it starts with a number 2
4. Get the last number of this row, which corresponds to the elapsed CPU time of this timestep
5. Use this value in order to measure the speedup
There are many ways to compute the scalability of the Nastin module.
1. For the complete cycle including: element assembly + boundary assembly + subgrid scale assembly + solvers, etc.
2. For single kernels: element assembly, boundary assembly, subgrid scale assembly, solvers
3. Using overall times
1. In *.nsi.cvg file, column "30. Elapsed CPU time"
2. Single kernels. Here, average and maximum times are indicated in *.nsi.cvg at each iteration of each time step:
   Element assembly:       columns 19 (Ass. ave cpu time) and 20 (Ass. max cpu time)
   Boundary assembly:      columns 33 (Bou. ave cpu time) and 34 (Bou. max cpu time)
   Subgrid scale assembly: columns 31 (SGS ave cpu time) and 32 (SGS max cpu time)
   Iterative solvers:      columns 21 (Sol. ave cpu time) and 22 (Sol. max cpu time)
Note that in the case of using Runge-Kutta time integration (the case of the sphere), the element and boundary assembly times are those of the last assembly of the current time step (out of three for third order).
3. At the end of *.log file, total timings are shown for all modules. In this case we use the first value of the NASTIN MODULE.
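As an illustration of option 1 above, a minimal sketch for pulling the elapsed CPU time per time step out of the convergence file (this assumes the *.nsi.cvg file is whitespace-separated, that data rows start with the time step number, and that column 30 holds the elapsed CPU time as stated above; adapt the file name to your case):

```shell
# print time step number (column 1) and elapsed CPU time (column 30)
awk '$1 ~ /^[0-9]+$/ { print $1, $30 }' sphere.nsi.cvg
```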
Contact
-------
@@ -160,29 +160,6 @@ Alya can be used with just MPI or hybrid MPI-OpenMP parallelism. Standard execut
make -j num_processors
```
### KNL Usage
- Extract the small one node test case.
```shell
$ tar xvf cavity1_hexa_med.tar.bz2 && cd cavity1_hexa_med
$ cp ../Alya/Thirdparties/ninja/GPUconfig.dat .
```
- Edit the job script to submit the calculation to the batch system.
```shell
# job.sh: set the path to your Alya.x binary (compiled with MPI support)
sbatch job.sh
```
Alternatively execute directly:
```shell
OMP_NUM_THREADS=4 mpirun -np 16 Alya.x cavity1_hexa
```
<!-- Runtime on 68-core Xeon Phi(TM) CPU 7250 1.40GHz: ~3:00 min -->
## Remarks
# Code_Saturne
Code_Saturne is an open-source, multi-purpose CFD software package, primarily developed and maintained by EDF R&D. It relies on the Finite Volume method and a collocated arrangement of unknowns to solve the Navier-Stokes equations for incompressible or compressible, laminar or turbulent flows, and for Newtonian or non-Newtonian fluids. A highly parallel coupling library (Parallel Locator Exchange - PLE) is also available in the distribution to account for other physics, such as conjugate heat transfer and structural mechanics. For the incompressible solver, the pressure is solved using an integrated Algebraic Multi-Grid algorithm and the scalars are computed by conjugate gradient methods or Gauss-Seidel/Jacobi.
The original version of the code is written in C for pre- and post-processing, I/O handling, parallelisation handling, linear solvers and gradient computation, and in Fortran 95 for most of the physics implementation. MPI is used on distributed-memory machines, and OpenMP pragmas have been added to the most costly parts of the code to exploit shared memory. The version used in this work (also freely available) also relies on CUDA to take advantage of potential GPU acceleration.
The equations are solved iteratively using time-marching algorithms and, for simple physics, most of the time spent during a time step is usually due to the computation of the velocity-pressure coupling. For this reason, the two test cases chosen for the benchmark suite ([CS_5.3_PRACE_UEABS_CAVITY_13M.tar.gz](https://repository.prace-ri.eu/ueabs/Code_Saturne/2.1/CS_5.3_PRACE_UEABS_CAVITY_13M.tar.gz) and [CS_5.3_PRACE_UEABS_CAVITY_111M.tar.gz](https://repository.prace-ri.eu/ueabs/Code_Saturne/2.1/CS_5.3_PRACE_UEABS_CAVITY_111M.tar.gz)) have been designed to assess the velocity-pressure coupling computation, and rely on the same configuration, with a mesh 8 times larger for CAVITY_111M than for CAVITY_13M, the time step being halved to ensure a correct Courant number.
## Building and running
Building and running the code is described in [Code_Saturne_Build_Run_5.3_UEABS.pdf](Code_Saturne_Build_Run_5.3_UEABS.pdf).
## Test cases
The test cases are to be found under:
- https://repository.prace-ri.eu/ueabs/Code_Saturne/2.1/CS_5.3_PRACE_UEABS_CAVITY_111M.tar.gz
- https://repository.prace-ri.eu/ueabs/Code_Saturne/2.1/CS_5.3_PRACE_UEABS_CAVITY_13M.tar.gz
## Distribution
The distribution is to be found under:
- https://repository.prace-ri.eu/ueabs/Code_Saturne/2.1/CS_5.3_PRACE_UEABS.tar.gz
# Code_Saturne
## GPU Version
Code_Saturne 4.2.2 is linked to the developer's version of PETSC, in order to benefit from
its GPU implementation. Note that the normal release of PETSC does not support GPUs.
### Installation
The version has been tested for K80s, and with the following settings:
* OPENMPI 2.0.2
* GCC 4.8.5
* CUDA 7.5
To install Code_Saturne 4.2.2, four libraries are required: BLAS, LAPACK, SOWING and CUSP.
The tests have been carried out with lapack-3.6.1 for BLAS and LAPACK, sowing-1.1.23-p1
for SOWING and cusplibrary-0.5.1 for CUSP.
PETSC is installed first: `PATH_TO_PETSC`, `PATH_TO_CUSP`, `PATH_TO_SOWING` and `PATH_TO_LAPACK`
have to be updated in `INSTALL_PETSC_GPU_sm37` under `petsc-petsc-a31f61e8abd0`.
PETSC is then configured for K80s by running `./INSTALL_PETSC_GPU_sm37` from `petsc-petsc-a31f61e8abd0`,
and finally compiled and installed by typing `make` and `make install`.
Before installing Code_Saturne, adapt `PATH_TO_PETSC` in `InstallHPC.sh` under `SATURNE_4.2.2`
and type `./InstallHPC.sh`.
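A compressed shell sketch of that sequence (the relative directory layout is an assumption, and the placeholder paths in the install scripts are assumed to have been edited beforehand):

```shell
# 1. Configure, build and install PETSC with GPU (CUSP) support
cd petsc-petsc-a31f61e8abd0
./INSTALL_PETSC_GPU_sm37   # uses PATH_TO_PETSC, PATH_TO_CUSP, PATH_TO_SOWING, PATH_TO_LAPACK
make
make install

# 2. Build Code_Saturne 4.2.2 against that PETSC installation
cd ../SATURNE_4.2.2
./InstallHPC.sh            # PATH_TO_PETSC must point to the installation above
```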
The code should then be installed, and the `code_saturne` command be found under:
```
PATH_TO_CODE_SATURNE/SATURNE_4.2.2/code_saturne-4.2.2/arch/Linux/bin/code_saturne
```
Running it should return:
```
Usage: ./code_saturne <topic>
Topics:
help
autovnv
bdiff
bdump
compile
config
create
gui
info
run
salome
submit
Options:
-h, --help show this help message and exit
```
### Test case - Cavity 13M
In `CAVITY_13M.tar.gz` are found the mesh and its partitions, together with two sets of subroutines, one for CPU and
one for GPU, i.e.:
```
CAVITY_13M/PETSC_CPU/SRC/*
CAVITY_13M/PETSC_GPU/SRC/*
CAVITY_13M/MESH/mesh_input_13M
CAVITY_13M/MESH/partition_METIS_5.1.0/*
```
To prepare a run, it is required to set up a "study" with two directories, one for CPU and the other for GPU,
as, for instance:
```
PATH_TO_CODE_SATURNE/SATURNE_4.2.2/code_saturne-4.2.2/arch/Linux/bin/code_saturne create --study NEW_CAVITY_13M PETSC_CPU
cd NEW_CAVITY_13M
PATH_TO_CODE_SATURNE/SATURNE_4.2.2/code_saturne-4.2.2/arch/Linux/bin/code_saturne create --case PETSC_GPU
```
The mesh has to be copied from `CAVITY_13M/MESH/mesh_input_13M` into `NEW_CAVITY_13M/MESH/`.
And the same has to be done for `partition_METIS_5.1.0`.
The subroutines contained in `CAVITY_13M/PETSC_CPU/SRC` should be copied into `NEW_CAVITY_13M/PETSC_CPU/SRC` and
the subroutines contained in `CAVITY_13M/PETSC_GPU/SRC` should be copied into `NEW_CAVITY_13M/PETSC_GPU/SRC`.
In each DATA subdirectory of `NEW_CAVITY_13M/PETSC_CPU` and `NEW_CAVITY_13M/PETSC_GPU`, the path
to the mesh+partition has to be set as:
```
cd DATA
cp REFERENCE/cs_user_scripts.py .
```
Then edit `cs_user_scripts.py`:
* At line 138, change `None` to `../MESH/mesh_input_13M`
* At line 139, change `None` to `../MESH/partition_METIS_5.1.0`
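A shell sketch of the copy and edit steps above (run from the directory that contains both CAVITY_13M and NEW_CAVITY_13M; the `sed` edits assume that lines 138 and 139 of `cs_user_scripts.py` hold the literal `None` placeholders mentioned above):

```shell
# copy mesh, partitions and user subroutines into the new study
cp    CAVITY_13M/MESH/mesh_input_13M        NEW_CAVITY_13M/MESH/
cp -r CAVITY_13M/MESH/partition_METIS_5.1.0 NEW_CAVITY_13M/MESH/
cp CAVITY_13M/PETSC_CPU/SRC/* NEW_CAVITY_13M/PETSC_CPU/SRC/
cp CAVITY_13M/PETSC_GPU/SRC/* NEW_CAVITY_13M/PETSC_GPU/SRC/

# point both cases to the mesh (line 138) and partition (line 139)
for case in PETSC_CPU PETSC_GPU; do
  ( cd NEW_CAVITY_13M/$case/DATA
    cp REFERENCE/cs_user_scripts.py .
    sed -i '138s|None|"../MESH/mesh_input_13M"|'        cs_user_scripts.py
    sed -i '139s|None|"../MESH/partition_METIS_5.1.0"|' cs_user_scripts.py )
done
```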
At this stage, everything is set to run both simulations, one for the CPU and the other one for the GPU.
```
cd NEW_CAVITY_13M/PETSC_CPU
PATH_TO_CODE_SATURNE/SATURNE_4.2.2/code_saturne-4.2.2/arch/Linux/bin/code_saturne run --initialize
cd RESU/YYYYMMDD-HHMM
```
Then submit the job.
```
cd NEW_CAVITY_13M/PETSC_GPU
PATH_TO_CODE_SATURNE/SATURNE_4.2.2/code_saturne-4.2.2/arch/Linux/bin/code_saturne run --initialize
cd RESU/YYYYMMDD-HHMM
```
Then submit the job.
## KNL Version
Code_Saturne 4.2.2 is installed for KNLs. It is also linked to PETSC, but the
default linear solvers are the native ones.
### Installation
The installation script is:
```
SATURNE_4.2.2/InstallHPC_with_PETSc.sh
```
The path to PETSC (official released version, and therefore assumed to be installed
on the machine) should be added to the aforementioned script.
After typing `./InstallHPC_with_PETSc.sh`, the code should be installed and the `code_saturne` command be found under:
```
PATH_TO_CODE_SATURNE/SATURNE_4.2.2/code_saturne-4.2.2/arch/Linux/bin/code_saturne
```
Running it should return:
```
Usage: ./code_saturne <topic>
Topics:
help
autovnv
bdiff
bdump
compile
config
create
gui
info
run
salome
submit
Options:
-h, --help show this help message and exit
```
### Running the code
Two cases are dealt with: `TGV_256_CS_OPENMP.tar.gz` to test the native solvers, and
`CAVITY_13M_FOR_KNLs_WITH_PETSC.tar.gz` to test Code_Saturne with PETSC on KNLs.
#### First test case: TGV_256_CS_OPENMP
In `TGV_256_CS_OPENMP.tar.gz` are found the mesh and the set of subroutines for Code_Saturne on KNLs, i.e.:
```
TGV_256_CS_OPENMP/MESH/mesh_input_256by256by256
TGV_256_CS_OPENMP/ARCHER_KNL/SRC/*
```
To prepare a run, it is required to set up a "study" as, for instance:
```
PATH_TO_CODE_SATURNE/SATURNE_4.2.2/code_saturne-4.2.2/arch/Linux/bin/code_saturne create --study NEW_TGV_256_CS_OPENMP KNL
```
The mesh has to be copied from `TGV_256_CS_OPENMP/MESH/mesh_input_256by256by256` into `NEW_TGV_256_CS_OPENMP/MESH/`.
The subroutines contained in `TGV_256_CS_OPENMP/KNL/SRC` should be copied into `NEW_TGV_256_CS_OPENMP/KNL/SRC`.
In the `DATA` subdirectory of `NEW_TGV_256_CS_OPENMP/KNL` the path to the mesh has to be set as:
```
cd DATA
cp REFERENCE/cs_user_scripts.py .
```
Then edit `cs_user_scripts.py`:
At line 138, change `None` to `../MESH/mesh_input_256by256by256`
At this stage, everything is set to run the simulation:
```
cd NEW_TGV_256_CS_OPENMP/KNL
PATH_TO_CODE_SATURNE/SATURNE_4.2.2/code_saturne-4.2.2/arch/Linux/bin/code_saturne run --initialize
cd RESU/YYYYMMDD-HHMM
```
And then submit the job.
#### Second test case: CAVITY_13M_FOR_KNLs_WITH_PETSC
In `CAVITY_13M_FOR_KNLs_WITH_PETSC.tar.gz` are found the mesh and the set of subroutines for PETSC on KNLs, i.e.:
```
CAVITY_13M/MESH/mesh_input
CAVITY_13M/KNL/SRC/*
```
To prepare a run, it is required to set up a "study" as, for instance:
```
PATH_TO_CODE_SATURNE/SATURNE_4.2.2/code_saturne-4.2.2/arch/Linux/bin/code_saturne create --study NEW_CAVITY_13M PETSC_KNL
```