# NEMO

## Summary Version

1.1

## Purpose of Benchmark

NEMO (Nucleus for European Modelling of the Ocean) is a mathematical modelling framework for research activities and prediction services in ocean and climate sciences, developed by a European consortium. It is intended as a tool for studying the ocean and its interaction with the other components of the Earth's climate system over a wide range of space and time scales. It comprises the core engines OPA (ocean dynamics and thermodynamics), SI3 (sea-ice dynamics and thermodynamics), TOP (oceanic tracers) and PISCES (biogeochemical processes). The prognostic variables in NEMO are the three-dimensional velocity field, a linear or non-linear sea surface height, the temperature and the salinity. In the horizontal direction the model uses a curvilinear orthogonal grid, and in the vertical direction a full or partial step z-coordinate, an s-coordinate, or a mixture of the two. In most cases the variables are distributed on a three-dimensional Arakawa C-type grid.

## Characteristics of Benchmark

The model is implemented in Fortran 90 with C-preprocessor directives and supports modern C/C++ and Fortran compilers. It is optimised for vector computers and parallelised by domain decomposition with MPI. All input and output is handled by the third-party XIOS library, which depends on NetCDF (Network Common Data Form) and HDF5. NEMO is highly scalable and therefore well suited to measuring supercomputer performance in terms of compute capacity, memory subsystem, I/O and interconnect performance.

## Mechanics of Building Benchmark

### Building XIOS

1. Download the XIOS source code:

   ```
   svn co https://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5
   ```

2. The known architectures can be listed with the following command:

   ```
   ./make_xios --avail
   ```

   If the target architecture is a known one, XIOS can be built with, for example:

   ```
   ./make_xios --arch X64_CURIE
   ```

   Otherwise, `arch-local.env`, `arch-local.fcm` and `arch-local.path` files should be prepared for the target architecture and placed in the `arch` folder. Then build with:

   ```
   ./make_xios --arch local
   ```

   Files for the PRACE Tier-0 systems are available under the [architecture_files](architecture_files) folder. These files should be used as a starting point, i.e. updates might be required after system upgrades etc. Note that XIOS requires NetCDF4; please load the appropriate HDF5 and NetCDF4 modules. If the paths to these libraries are not set by the modules, you might have to adjust the paths in the architecture files.

### Building NEMO

1. Download the NEMO source code:

   ```
   svn co https://forge.ipsl.jussieu.fr/nemo/svn/NEMO/releases/release-4.0
   ```

2. Copy and set up the appropriate architecture file in the `arch` folder. Files for the PRACE Tier-0 systems are available under the [architecture_files](architecture_files) folder. These files should be used as a starting point, i.e. updates might be required after system upgrades etc. The following changes are recommended for the GNU compilers:

   a. add the `-lnetcdff` and `-lstdc++` flags to the NetCDF flags;
   b. use `mpif90`, which is an MPI binding of `gfortran-4.9`;
   c. add `-cpp` and `-ffree-line-length-none` to the Fortran flags.

3. Apply the patch described at https://software.intel.com/en-us/articles/building-and-running-nemo-on-xeon-processors to measure the step time. You may also use the provided [nemogcm.F90](nemogcm.F90), replacing `src/OCE/nemogcm.F90` with it. A minimal sketch of this timing instrumentation is shown below.
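   The patch simply times each model time step. The sketch below illustrates the idea only and is not the exact patch: `istp`, `nitend`, `nstop`, `stp`, `lwp` and `numout` are NEMO's existing time-stepping variables, while the accumulator names `t_step`, `t_total` and `n_steps` are illustrative. It assumes MPI is initialised so that `MPI_Wtime()` is available.

   ```
   ! Illustrative sketch: time each call to stp() in the time-stepping loop of nemogcm.F90
   REAL(KIND=8) :: t_step, t_total = 0.0d0   ! per-step and accumulated wall-clock time
   INTEGER      :: n_steps = 0               ! number of timed steps
   !...
   DO WHILE( istp <= nitend .AND. nstop == 0 )
      t_step = MPI_Wtime()                   ! start of this time step
      CALL stp( istp )                       ! one model time step
      IF( istp < nitend ) THEN               ! accumulate up to the second-to-last step
         t_total = t_total + ( MPI_Wtime() - t_step )
         n_steps = n_steps + 1
      ENDIF
      istp = istp + 1
   END DO
   IF( lwp ) WRITE(numout,*) 'average step time (s): ', t_total / REAL( n_steps, 8 )
   ```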
4. Add the line `GYRE_testing OCE TOP` to the `refs_cfg.txt` file under the `cfgs` folder. Then go to the `cfgs` folder and:

   ```
   mkdir GYRE_testing
   rsync -arv GYRE_PISCES/* GYRE_testing/
   mv GYRE_testing/cpp_GYRE_PISCES.fcm GYRE_testing/cpp_GYRE_testing.fcm
   sed -i 's/key_top/key_nosignedzero/g' GYRE_testing/cpp_GYRE_testing.fcm
   ```

5. Then build the executable with the following command, where `MY_CONFIG` is the name of the architecture file prepared in step 2:

   ```
   ./makenemo -m MY_CONFIG -r GYRE_testing
   ```

## Mechanics of Running Benchmark

### Prepare input files

```
cd GYRE_testing/EXP00
sed -i '/using_server/s/false/true/' iodef.xml
sed -i '/ln_bench/s/false/true/' namelist_cfg
```

### Run the experiment interactively

```
mpirun -n 4 nemo : -n 2 $PATH_TO_XIOS/bin/xios_server.exe
```

### GYRE configuration with higher resolution

Modify the configuration (for example, for Test Case A):

```
rm -f time.step solver.stat output.namelist.dyn ocean.output slurm-* GYRE_*
sed -i -r \
    -e 's/^( *nn_itend *=).*/\1 101/' \
    -e 's/^( *nn_write *=).*/\1 4320/' \
    -e 's/^( *nn_GYRE *=).*/\1 48/' \
    -e 's/^( *rn_rdt *=).*/\1 1200/' \
    namelist_cfg
```

## Verification of Results

The GYRE configuration is set through the `namelist_cfg` file. The horizontal resolution is determined by setting `nn_GYRE` as follows:

```
Jpiglo = 30 × nn_GYRE + 2
Jpjglo = 20 × nn_GYRE + 2
```

In this configuration, we use the default of 30 ocean levels, corresponding to `jpkglo = 31`. The GYRE configuration is an ideal case for benchmark tests, as it is very simple to increase the resolution and perform both weak and strong scalability experiments using the same input files. We use two configurations, as follows:

Test Case A:

```
nn_GYRE = 48, suitable up to 1,000 cores
Number of time steps: 101
Time step size: 20 mins
Number of seconds per time step: 1200
```

Test Case B:

```
nn_GYRE = 192, suitable up to 20,000 cores
Number of time steps: 101
Time step size: 20 mins
Number of seconds per time step: 1200
```

We report the performance in terms of total time to solution, as well as total energy to solution whenever possible. This allows systems to be compared in a standard manner across all combinations of system architectures.

NEMO supports both attached and detached modes of the IO server. In attached mode all cores perform both computation and IO, whereas in detached mode each core performs either computation or IO. NEMO is reported to perform better in detached mode, especially for large numbers of cores, so we run the benchmarks in both modes. For detached mode we use a 15:1 ratio of compute to IO cores: the 1024 cores of Test Case A are split into 960 compute cores and 64 IO cores, and the 10240 cores of Test Case B into 9600 compute cores and 640 IO cores.

The performance comparison between Test Cases A and B, run on 1024 and 10240 processors respectively, can be considered as something between weak and strong scaling: the number of processors increases ten-fold, whereas the mesh size increases roughly sixteen-fold when going from Test Case A to Test Case B (see the check below).

We use the total time reported by the XIOS server. In addition, to measure the step time, we apply a patch that inserts an `MPI_Wtime()` call in [nemogcm.F90](nemogcm.F90) for each step and cumulatively accumulates the step times up to the second-to-last step. We then divide the total accumulated time by the number of time steps to average out any overhead.
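The sixteen-fold mesh-size increase quoted above can be checked directly from the `Jpiglo`/`Jpjglo` formulas of this section; the short calculation below is illustrative and not part of the NEMO documentation:

```
Test Case A (nn_GYRE = 48):  Jpiglo = 30 × 48  + 2 = 1442,  Jpjglo = 20 × 48  + 2 =  962,  mesh = 1442 × 962  ≈ 1.39 million points
Test Case B (nn_GYRE = 192): Jpiglo = 30 × 192 + 2 = 5762,  Jpjglo = 20 × 192 + 2 = 3842,  mesh = 5762 × 3842 ≈ 22.1 million points
Ratio of horizontal mesh sizes: 22.1 / 1.39 ≈ 16, compared with a ten-fold increase in cores (1024 to 10240)
```

## Sources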