NEMO (Nucleus for European Modelling of the Ocean) is a mathematical modelling framework for research activities and prediction services in ocean and climate sciences, developed by a European consortium. It is intended as a tool for studying the ocean and its interactions with the other components of the Earth's climate system over a wide range of space and time scales. It comprises the core engines OPA (ocean dynamics and thermodynamics), SI3 (sea-ice dynamics and thermodynamics), TOP (oceanic tracers) and PISCES (biogeochemical processes).
The prognostic variables in NEMO are the three-dimensional velocity field, a linear or non-linear sea surface height, temperature and salinity. In the horizontal direction, the model uses a curvilinear orthogonal grid; in the vertical direction, a full or partial step z-coordinate, an s-coordinate, or a mixture of the two. The variables are distributed on a three-dimensional Arakawa C-type grid in most cases.
## Characteristics of Benchmark
The model is implemented in Fortran 90, with pre-processing (C pre-processor). It is optimised for vector computers and parallelised by domain decomposition with MPI, and it supports modern C/C++ and Fortran compilers. All input and output is done through the third-party XIOS library, which depends on NetCDF (Network Common Data Form) and HDF5. NEMO is highly scalable and well suited to measuring supercomputer performance in terms of compute capacity, memory subsystem, I/O and interconnect behaviour.
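As a rough illustration of what domain decomposition with MPI means in practice, the sketch below splits a hypothetical global horizontal grid across a 2-D Cartesian process topology. It is not NEMO source code; the global grid size, periodicity and variable names are assumptions for the example only.

```
! Illustrative sketch (not NEMO source) of 2-D horizontal domain
! decomposition with MPI, the parallelisation strategy described above.
program domain_decomp_sketch
   use mpi
   implicit none
   integer, parameter :: jpiglo = 362, jpjglo = 292   ! hypothetical global grid size
   integer :: ierr, nproc, rank, comm_cart
   integer :: dims(2), coords(2)
   logical :: periods(2)
   integer :: ni_local, nj_local

   call MPI_Init(ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)

   ! factorise the process count into a 2-D grid of subdomains
   dims = 0
   call MPI_Dims_create(nproc, 2, dims, ierr)
   periods = (/ .true., .false. /)    ! e.g. zonally periodic, closed north-south
   call MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, .true., comm_cart, ierr)
   call MPI_Comm_rank(comm_cart, rank, ierr)
   call MPI_Cart_coords(comm_cart, rank, 2, coords, ierr)

   ! each rank owns roughly (jpiglo/dims(1)) x (jpjglo/dims(2)) grid points,
   ! plus halo rows exchanged with its Cartesian neighbours at every time step
   ni_local = (jpiglo + dims(1) - 1) / dims(1)
   nj_local = (jpjglo + dims(2) - 1) / dims(2)
   if (rank == 0) print *, 'decomposition ', dims(1), ' x ', dims(2), &
                           ', local subdomain ~', ni_local, ' x ', nj_local

   call MPI_Finalize(ierr)
end program domain_decomp_sketch
```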
## Mechanics of Building Benchmark
...
...
Test Case B:
```
Number of seconds per time step: 1200
```
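For reference, this time-step length is set in the NEMO namelist. The minimal sketch below assumes the NEMO 3.6/4.0 naming (`rn_rdt` in the `&namdom` block of `namelist_cfg`); the parameter name may differ in other NEMO versions.

```
!-----------------------------------------------------------------------
&namdom        !   space and time domain (parameter name assumed, see above)
!-----------------------------------------------------------------------
   rn_rdt      = 1200.     !  time step for the dynamics (and tracers) [s]
/
```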
We performed scalability tests on 512 and 1024 cores for Test Case A, and on 4096, 8192 and 16384 cores for Test Case B.
Together, these two test cases give a good picture of node performance and interconnect behaviour.
We report performance in terms of total time to solution and, whenever possible, total energy to solution.
This allows systems to be compared in a standard manner across different architectures.
NEMO supports both attached and detached modes of the XIOS IO server. In attached mode every core performs both computation and IO,
whereas in detached mode each core performs either computation or IO.
NEMO is reported to perform better in detached mode, especially at large core counts.
We therefore ran benchmarks in both attached and detached modes.
For the detached mode we use a 15:1 compute-to-IO core ratio: the 1024 cores of Test Case A are split into 960 compute cores and 64 IO cores,
and the 10240 cores of Test Case B into 9600 compute cores and 640 IO cores.
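In detached mode the XIOS servers run as a separate executable launched alongside NEMO in MPMD fashion. As an illustration only, assuming the executables are named `nemo` and `xios_server.exe` (names may differ per installation) and that the IO server is enabled in the XIOS configuration (`using_server` set to `true`), a Test Case A run could be launched along these lines:

```
# hypothetical MPMD launch: 960 compute ranks + 64 XIOS server ranks (15:1)
mpirun -np 960 ./nemo : -np 64 ./xios_server.exe
```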
Comparing Test Cases A and B, run on 1024 and 10240 processors respectively,
can be regarded as something between weak and strong scaling:
the number of processors increases tenfold while the mesh size grows approximately sixteenfold,
so the workload per processor is about 1.6 times larger for Test Case B.
We use the total time reported by the XIOS server.
In addition, to measure the step time we apply a patch to [nemogcm.F90](nemogcm.F90) that inserts an `MPI_Wtime()` call for each step
and accumulates the step times up to the second-to-last step.
We then divide the accumulated time by the number of time steps to average out any overhead.
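The listing below is a minimal, self-contained sketch of this timing pattern; it uses a hypothetical `step_kernel()` placeholder instead of the actual NEMO step routine and is not the patch itself.

```
! Minimal sketch of the MPI_Wtime()-based step timing described above;
! step_kernel() is a hypothetical stand-in for one NEMO time step.
program step_timer
   use mpi
   implicit none
   integer, parameter :: nsteps = 100      ! hypothetical number of time steps
   integer :: ierr, istp
   double precision :: t0, t1, tcum

   call MPI_Init(ierr)
   tcum = 0.0d0

   do istp = 1, nsteps
      t0 = MPI_Wtime()
      call step_kernel()                   ! placeholder for one model time step
      t1 = MPI_Wtime()
      ! accumulate step times up to the second-to-last step, as described above
      if (istp < nsteps) tcum = tcum + (t1 - t0)
   end do

   ! average over the accumulated steps to smooth out per-step overhead
   print *, 'mean step time (s): ', tcum / dble(nsteps - 1)

   call MPI_Finalize(ierr)

contains

   subroutine step_kernel()
      ! stand-in workload for a single time step
      integer :: ji
      double precision :: zsum
      zsum = 0.0d0
      do ji = 1, 1000000
         zsum = zsum + dble(ji)
      end do
   end subroutine step_kernel

end program step_timer
```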
<!--We switch off the generation of mesh files by setting the `flag nn_mesh = 0` in the `namelist_ref` file.
Also `using_server = false` is defined in `io_server` file.-->
We also report the performance as step time, which is the total computational time averaged over the number of time steps for each test case.
Reporting the time per computational time step makes the results more reproducible and comparable:
since NEMO supports both weak and strong scaling, Test Cases A and B can be scaled down to run on smaller numbers of processors
while keeping the memory per processor constant, giving similar step times.