# Code_Saturne

[Code_Saturne](https://www.code-saturne.org/cms/) is an open-source, multi-purpose CFD software package, primarily developed and maintained by EDF R&D. It relies on the Finite Volume method and a collocated arrangement of unknowns to solve the Navier-Stokes equations, for incompressible or compressible, laminar or turbulent, and Newtonian or non-Newtonian flows. A new discretisation based on the Compatible Discrete Operator (CDO) approach can be used for some physics. A highly parallel coupling library (Parallel Locator Exchange - PLE) is also available in the distribution to couple other software with different physics, e.g. for conjugate heat transfer and structural mechanics.

For the incompressible solver, the pressure is solved using an integrated Algebraic Multi-Grid algorithm, and the velocity components/scalars are computed by conjugate gradient methods or Gauss-Seidel/Jacobi. The original version of the code is written in C for pre-/post-processing, I/O handling, parallelisation handling, linear solvers and gradient computation, and in Fortran 95 for some of the physics-related implementation. Python is used to manage the simulations. MPI is used on distributed-memory machines, and OpenMP pragmas have been added to the most costly parts of the code for use on shared-memory architectures. The version used in this work relies on external libraries (AMGx, PETSc) to take advantage of potential GPU acceleration.

The equations are solved iteratively using time-marching algorithms and, for simple physics, most of the time spent during a time step goes into the computation of the velocity-pressure coupling. For this reason, the two test cases chosen for the benchmark suite have been designed to assess the velocity-pressure coupling computation. Both rely on the same configuration, the 3-D lid-driven cavity, using tetrahedral-cell meshes. The first case uses a mesh of over 13 million cells. The second test case is modular, in the sense that mesh multiplication can be used to increase the mesh size on the fly.

## Building Code_Saturne v7.0.0

Version 7.0.0 of Code_Saturne can be found [here](https://www.code-saturne.org/cms/sites/default/files/releases/code_saturne-7.0.0.tar.gz). A simple installer, [_InstallHPC.sh_](https://repository.prace-ri.eu/git/UEABS/ueabs/-/blob/r2.2-dev/code_saturne/InstallHPC.sh), is made available for this version. The last lines of the installer (meant for the GNU compilers and MPI-OpenMP in this example) read:

```
$KERSRC/configure \
  --disable-shared \
  --disable-nls \
  --without-modules \
  --disable-gui \
  --enable-long-gnum \
  --disable-mei \
  --enable-debug \
  --prefix=$KEROPT \
  CC="mpicc" CFLAGS="-O3" FC="mpif90" FCFLAGS="-O3" CXX="mpicxx" CXXFLAGS="-O3"
#
make -j 8
make install
```

CC, FC, CFLAGS, FCFLAGS, LDFLAGS and LIBS might have to be tailored to your machine, compilers, MPI installation, etc.
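For instance, on a machine with the Intel compilers and Intel MPI, the compiler variables could be overridden as in the following sketch (the `mpiicc`/`mpiifort`/`mpiicpc` wrapper names and the `-xHost` flag are assumptions that depend on the local toolchain):

```
# Hypothetical variant for Intel compilers + Intel MPI; adjust wrappers and flags locally.
$KERSRC/configure \
  --disable-shared --disable-nls --without-modules --disable-gui \
  --enable-long-gnum --disable-mei --prefix=$KEROPT \
  CC="mpiicc" CFLAGS="-O3 -xHost" \
  FC="mpiifort" FCFLAGS="-O3 -xHost" \
  CXX="mpiicpc" CXXFLAGS="-O3 -xHost"
```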
More information concerning the options can be found by typing:

```
./configure --help
```

Assuming that CS_7.0.0_PRACE_UEABS is the current directory, the tarball is untarred there as:

```
tar zxvf code_saturne-7.0.0.tar.gz
```

and the code is then installed as:

```
cd CS_7.0.0_PRACE_UEABS
./InstallHPC.sh
```

If the installation is successful, typing:

```
YOUR_PATH/CS_7.0.0_PRACE_UEABS/code_saturne-7.0.0/arch/Linux/bin/code_saturne
```

should return the usage message of the **code_saturne** command:

```
Usage: ./code_saturne

Topics:
  help
  studymanager
  smgr
  bdiff
  bdump
  compile
  config
  cplgui
  create
  gui
  parametric
  studymanagergui
  smgrgui
  trackcvg
  update
  up
  info
  run
  submit
  symbol2line

Options:
  -h, --help  show this help message and exit
```

## Preparing a simulation

Two archives are used, namely [**CS_7.0.0_PRACE_UEABS_CAVITY_13M.tar.gz**](https://repository.prace-ri.eu/ueabs/Code_Saturne/2.2/CS_7.0.0_PRACE_UEABS_CAVITY_13M.tar.gz) and [**CS_7.0.0_PRACE_UEABS_CAVITY_XXXM.tar.gz**](https://repository.prace-ri.eu/ueabs/Code_Saturne/2.2/CS_7.0.0_PRACE_UEABS_CAVITY_XXXM.tar.gz), which contain the information required to run both test cases: the mesh_input.csm file (for the mesh) and the user subroutines in _src_.

Taking the example of CAVITY_13M, from the working directory WORKDIR (different from CS_7.0.0_PRACE_UEABS), a 'study' has to be created (CAVITY_13M, for instance) as well as a 'case' (MACHINE, for instance) as:

```
YOUR_PATH/CS_7.0.0_PRACE_UEABS/code_saturne-7.0.0/arch/Linux/bin/code_saturne create --study CAVITY_13M --case MACHINE --copy-ref
```

The directory **CAVITY_13M** contains three directories: MACHINE, MESH and POST. The directory **MACHINE** contains three directories: DATA, RESU and SRC. The file mesh_input.csm should be copied into the MESH directory, and the user subroutines (cs_user* files) contained in _src_ should be copied into SRC.

The file _cs_user_scripts.py_ is used to manage the simulation. It has to be copied to DATA as:

```
cd DATA
cp REFERENCE/cs_user_scripts.py .
```

At Line 89 of this file, change None to the local path of the mesh, i.e. "../MESH/mesh_input.csm".

To finalise the preparation, go to the folder MACHINE and type:

```
YOUR_PATH/CS_7.0.0_PRACE_UEABS/code_saturne-7.0.0/arch/Linux/bin/code_saturne run --initialize
```

This should create a folder RESU/YYYYMMDD-HHMM, which should contain the following files:

- compile.log
- cs_solver
- cs_user_scripts.py
- listing
- mesh_input.csm
- run.cfg
- run_solver
- setup.xml
- src
- summary

## Running Code_Saturne v7.0.0

The name of the executable is ./cs_solver, and the code should be run with the MPI launcher available on the machine (mpirun, mpiexec, poe or aprun), e.g.:

```
mpirun ./cs_solver
```
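As an illustration, a minimal SLURM batch script for the CAVITY_13M case might look as follows. This is a sketch only: the scheduler directives, node and task counts, and the use of srun are assumptions to be adapted to your machine.

```
#!/bin/bash
# Hypothetical SLURM script; partition, node/task counts and launcher
# must be adapted to the local machine.
#SBATCH --job-name=CS_CAVITY_13M
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --time=01:00:00

# Run from the results directory created by 'code_saturne run --initialize'.
cd YOUR_PATH/WORKDIR/CAVITY_13M/MACHINE/RESU/YYYYMMDD-HHMM
srun ./cs_solver
```

## Example of timing

A script, e.g. [_CS_collect_timing.sh_](https://repository.prace-ri.eu/git/UEABS/ueabs/-/blob/r2.2-dev/code_saturne/CS_collect_timing.sh), is used to compute the average time per time step. For the CAVITY_13M case run on 2 nodes of a Cray AMD (Rome) machine, it returns:

```
Averaged timing for the 97 entries: 2.82014432989690721649
```

## Larger cases

The same steps are carried out for the larger cases, using the CS_7.0.0_PRACE_UEABS_CAVITY_XXXM.tar.gz file. These cases are built by mesh multiplication (also called global refinement) of the mesh used for CAVITY_13M. With 1, 2 or 3 levels of refinement, the mesh grows to over 111M, 889M or 7112M cells, respectively. The third mesh (level 3) is suitable for runs using over 100,000 MPI tasks.

To make sure that the simulations are stable, the time step is adjusted depending on the refinement level used.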
The number of levels of refinement is set at Line 152 of the _cs_user_mesh.c_ file, by choosing tot_nb_mm as 1, 2 or 3. The time step is set at Line 248 of the _cs_user_parameter.f90_ file, by choosing 0.01d0 / 3.d0 (level 1), 0.01d0 / 9.d0 (level 2) or 0.01d0 / 27.d0 (level 3). The table below recalls the correct settings.

| Refinement | At Line 152 of _cs_user_mesh.c_ | At Line 248 of _cs_user_parameter.f90_ |
| ------ | ------ | ------ |
| Level 1 | tot_nb_mm = 1 | dtref = 0.01d0 / 3.d0 |
| Level 2 | tot_nb_mm = 2 | dtref = 0.01d0 / 9.d0 |
| Level 3 | tot_nb_mm = 3 | dtref = 0.01d0 / 27.d0 |
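If these edits are scripted, the two settings can be patched in place, for instance with sed. The following is a sketch only: it assumes the level-1 defaults (tot_nb_mm = 1 and dtref = 0.01d0 / 3.d0) appear verbatim in the distributed sources, and the study/case path is illustrative; check the files before running it.

```
# Hypothetical helper: switch the XXXM case from level 1 to level 3.
# Assumes the level-1 defaults appear verbatim in the provided sources.
cd YOUR_PATH/WORKDIR/CAVITY_XXXM/MACHINE/SRC   # illustrative study/case path
sed -i 's/tot_nb_mm = 1/tot_nb_mm = 3/' cs_user_mesh.c
sed -i 's|dtref = 0.01d0 / 3.d0|dtref = 0.01d0 / 27.d0|' cs_user_parameter.f90
```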