The original version of the code is written in C for pre-/post-processing, I/O handling, parallelisation handling, linear solvers and gradient computation, and in Fortran 95 for some of the physics-related implementation. Python is used to manage the simulations. MPI is used on distributed-memory machines, and OpenMP pragmas have been added to the most costly parts of the code for use on shared-memory architectures. The version used in this work relies on external libraries (AMGx, PETSc) to take advantage of potential GPU acceleration.
The equations are solved iteratively using time-marching algorithms and, for simple physics, most of the time spent during a time step goes into the computation of the velocity-pressure coupling. For this reason, the test cases chosen for the benchmark suite have been designed to assess the velocity-pressure coupling computation, and they all rely on the same configuration, a 3-D lid-driven cavity meshed with tetrahedral cells. The first case mesh contains over 13 million cells. The larger test cases are modular in the sense that mesh multiplication is used on the fly to increase their mesh size, using several levels of refinement.
## Building Code_Saturne v7.0.0
Version 7.0.0 of Code_Saturne can be downloaded [here](https://www.code-saturne.org/cms/sites/default/files/releases/code_saturne-7.0.0.tar.gz).
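Below is a minimal sketch of downloading, unpacking and building the release, assuming GNU autotools, an MPI compiler wrapper and an out-of-tree build; the install prefix, compiler names and the optional `--with-petsc` flag are placeholders to adapt to the target machine (check `./configure --help` for the exact option names supported by this version).

```bash
# Download and unpack the 7.0.0 release (URL as given above)
wget https://www.code-saturne.org/cms/sites/default/files/releases/code_saturne-7.0.0.tar.gz
tar xzf code_saturne-7.0.0.tar.gz

# Out-of-tree build; prefix, compilers and the optional PETSc path are illustrative
mkdir build && cd build
../code_saturne-7.0.0/configure \
    --prefix=$HOME/code_saturne/7.0.0 \
    CC=mpicc FC=mpif90 \
    --with-petsc=$PETSC_DIR
make -j 8 && make install
```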
...
...
This should create a folder RESU/YYYYMMDD-HHMM, which should contain the following:
- summary
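As a quick check that a run completed, the most recent run directory under RESU/ can be listed; the commands below are a generic illustration and simply assume the layout described above.

```bash
# Show the most recently created run directory and its contents
latest=$(ls -dt RESU/*/ | head -1)
ls -l "$latest"
```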
## Running Code_Saturne v7.0.0
The name of the executable is ./cs_solver, and the code should be run as mpirun/mpiexec/poe/aprun/srun ./cs_solver.
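As an illustration, a Slurm batch script for a 2-node run might look like the sketch below; the node count, tasks per node and wall-time are placeholders, and srun can be swapped for any of the launchers listed above.

```bash
#!/bin/bash
#SBATCH --nodes=2              # illustrative: 2 nodes, as in the CAVITY_13M timing below
#SBATCH --ntasks-per-node=128  # illustrative: set to the number of cores per node
#SBATCH --time=01:00:00

# Launch the solver from inside the run directory
srun ./cs_solver
```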
## Example of timing
A script is used to compute the average time per time step, e.g. [_CS_collect_timing.sh_](https://repository.prace-ri.eu/git/UEABS/ueabs/-/blob/r2.2-dev/code_saturne/CS_collect_timing.sh), which returns:
...
...
for the CAVITY_13M case, run on 2 nodes of a Cray AMD (Rome) system.
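For the benchmark, the linked CS_collect_timing.sh should be used as-is. Purely as an illustration of the kind of post-processing it performs, the sketch below averages per-time-step wall-clock times, assuming a hypothetical log format in which each time step writes its elapsed time in seconds as the last field of a line containing "time step"; the solver log file name (run_solver.log) and the pattern are assumptions to adapt to the actual output.

```bash
# Hypothetical format: average the last field (seconds) of lines matching
# "time step" in the solver log; adapt the pattern to the real log layout.
awk '/time step/ { sum += $NF; n++ }
     END { if (n) printf "average time per time step: %.3f s over %d steps\n", sum/n, n }' run_solver.log
```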
The same steps are carried out for the larger cases, using the CS_7.0.0_PRACE_UEABS_CAVITY_XXXM.tar.gz file.
These cases are built by mesh multiplication (also called global refinement) of the mesh used for CAVITY_13M.
If 1 (resp. 2 or 3) level(s) of refinement is/are used, the mesh contains over 111M (resp. 889M or 7112M) cells. The third mesh (level 3) is suitable for runs using over 100,000 MPI tasks.
To make sure that the simulations are stable, the time step is adjusted depending on the refinement level used.
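The cell counts above are consistent with each level of mesh multiplication splitting every tetrahedron into eight smaller ones, so the mesh grows by roughly a factor of 8 per level while the characteristic cell size is halved. The sketch below estimates the cell count per level under that assumption and illustrates a CFL-style halving of the time step per level; the base cell count and the time-step scaling are illustrative, not the exact values shipped with the benchmark inputs.

```bash
# Estimate the mesh size per refinement level, assuming 8x cells per level
# (tetrahedra split into 8) and a time step halved per level to keep a
# similar CFL number -- both are illustrative assumptions.
base_cells=13900000   # approximate CAVITY_13M size (illustrative)
for level in 0 1 2 3; do
    cells=$(( base_cells * 8 ** level ))
    printf "level %d: ~%d M cells, dt ~ reference / %d\n" \
           "$level" $(( cells / 1000000 )) $(( 2 ** level ))
done
```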