Commit 02de1278 authored by Ozan Karsavuran's avatar Ozan Karsavuran

Updated readme for NEMO and updated architecture files

parent 4b35f309
...@@ -10,10 +10,9 @@
NEMO (Nucleus for European Modelling of the Ocean) is a mathematical modelling framework for research activities and prediction services in ocean and climate sciences, developed by a European consortium. It is intended to be a tool for studying the ocean and its interaction with the other components of the Earth's climate system over a wide range of space and time scales. It comprises the core engines OPA (ocean dynamics and thermodynamics), SI3 (sea ice dynamics and thermodynamics), TOP (oceanic tracers) and PISCES (biogeochemical processes).
Prognostic variables in NEMO are the three-dimensional velocity field, a linear or non-linear sea surface height, the temperature and the salinity. In the horizontal direction, the model uses a curvilinear orthogonal grid; in the vertical direction, a full or partial step z-coordinate, or s-coordinate, or a mixture of the two. The distribution of variables is a three-dimensional Arakawa C-type grid in most cases.
## Characteristics of Benchmark
The model is implemented in Fortran 90, with pre-processing (C-pre-processor). It is optimised for vector computers and parallelised by domain decomposition with MPI. It supports modern C/C++ and Fortran compilers. All input and output is done with third-party software called XIOS, with a dependency on NetCDF (Network Common Data Form) and HDF5. It is highly scalable and a perfect application for measuring supercomputing performance in terms of compute capacity, memory subsystem, I/O and interconnect performance.
## Mechanics of Building Benchmark
...@@ -117,20 +116,39 @@ Test Case B:
Number of seconds per time step: 1200
```
We performed scalability tests on 512 and 1024 cores for Test Case A, and on 4096, 8192 and 16384 cores for Test Case B.
Together these test cases give a good picture of node performance and interconnect behaviour. We report the performance in terms of total time to solution, as well as total consumed energy to solution whenever possible.
This allows systems to be compared in a standard manner across all combinations of system architectures.
NEMO supports both attached and detached modes of the IO server. In attached mode, every core performs both computation and IO,
whereas in detached mode each core performs either computation or IO.
NEMO is reported to perform better in detached mode, especially at large core counts.
Therefore, we ran benchmarks in both attached and detached modes.
For the detached mode we use a 15:1 compute-to-IO ratio: 1024 cores are divided into 960 compute cores and 64 IO cores for Test Case A,
and 10240 cores into 9600 compute cores and 640 IO cores for Test Case B.
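For a given total core count, the 15:1 split above can be computed directly. A minimal shell sketch (the variable names are illustrative, not part of NEMO or XIOS):

```shell
# Split a total core count into compute and IO cores using the
# 15:1 ratio described in the text (assumes the total is divisible by 16).
total=1024
io=$(( total / 16 ))        # 1 part out of 16 goes to XIOS IO servers
compute=$(( total - io ))   # the remaining 15 parts do computation
echo "$compute compute cores, $io IO cores"
# → 960 compute cores, 64 IO cores
```

Running the same arithmetic with `total=10240` reproduces the 9600/640 split used for Test Case B.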
A performance comparison between Test Cases A and B, run on 1024 and 10240 processors respectively,
sits somewhere between weak and strong scaling:
the number of processors increases tenfold, while the mesh size grows roughly 16-fold,
when going from Test Case A to B.
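As a quick sanity check of the ratio above (numbers taken from the text, nothing NEMO-specific): with roughly 16 times more mesh points spread over 10 times more processors, the per-core workload grows by about 1.6x.

```shell
# Per-core workload growth when going from Test Case A to B:
# ~16x more mesh points divided over 10x more processors.
cores_ratio=10
mesh_ratio=16
awk -v c="$cores_ratio" -v m="$mesh_ratio" \
    'BEGIN { printf "work per core grows by %.1fx\n", m / c }'
# → work per core grows by 1.6x
```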
We use the total time reported by the XIOS server.
To also measure the step time, we applied a patch that inserts an `MPI_Wtime()` function call in the [nemogcm.F90](nemogcm.F90) file
at each step, cumulatively accumulating the step time up to the second-to-last step.
We then divide the total cumulative time by the number of time steps to average out any overhead.
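The averaging can be reproduced from the patched output. A sketch with made-up numbers: `tot_time` stands for the cumulative step time accumulated by the patched nemogcm.F90, `nsteps` for the number of steps counted; neither value is from a real run.

```shell
# Average step time = cumulative step time / number of counted steps.
# Values below are illustrative, not from a real run.
tot_time=50.0   # seconds of cumulative step time (hypothetical)
nsteps=1000     # time steps counted, up to the second-to-last step
awk -v t="$tot_time" -v n="$nsteps" \
    'BEGIN { printf "average step time: %.6f s\n", t / n }'
# → average step time: 0.050000 s
```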
<!--We performed scalability test on 512 cores and 1024 cores for test case A. We performed scalability test for 4096 cores, 8192 cores and 16384 cores for test case B.
Both these test cases can give us quite good understanding of node performance and interconnect behavior. -->
<!--We switch off the generation of mesh files by setting the `flag nn_mesh = 0` in the `namelist_ref` file.
Also `using_server = false` is defined in `io_server` file.-->
<!--We report the performance in step time which is the total computational time averaged over the number of time steps for different test cases.
This helps us to compare systems in a standard manner across all combinations of system architectures.
The other main reason for reporting time per computational time step is to make sure that results are more reproducible and comparable.
Since NEMO supports both weak and strong scalability,
test case A and test case B both can be scaled down to run on smaller number of processors while keeping the memory per processor constant achieving similar
results for step time. -->
## Sources
<https://forge.ipsl.jussieu.fr/nemo/chrome/site/doc/NEMO/guide/html/install.html>
...@@ -139,5 +157,3 @@ It is also possible to use total time reported by XIOS server.
<https://nemo-related.readthedocs.io/en/latest/compilation_notes/nemo37.html>
...@@ -35,8 +35,8 @@
%NCDF_HOME2 /opt/hlrs/spack/rev-004_2020-06-17/netcdf-fortran/4.5.2-gcc-9.2.0-lxinqb3c/
%HDF5_HOME /opt/hlrs/spack/rev-004_2020-06-17/hdf5/1.10.5-gcc-9.2.0-fsds2dq4/
%XIOS_HOME /zhome/academic/HLRS/pri/iprceayk/data/NEMO/NEMO_F/xios-2.5
###%XIOS_HOME /lustre/cray/ws9/6/ws/iprceayk-nemo/xios-2.5/
%OASIS_HOME /not/defined
...
# Curie SKYLAKE at TGCC
#
# NCDF_HOME root directory containing lib and include subdirectories for netcdf4
# HDF5_HOME root directory containing lib and include subdirectories for HDF5
# XIOS_HOME root directory containing lib for XIOS
# OASIS_HOME root directory containing lib for OASIS
#
# NCDF_INC netcdf4 include file
# NCDF_LIB netcdf4 library
# XIOS_INC xios include file (taken into account only if key_iomput is activated)
# XIOS_LIB xios library (taken into account only if key_iomput is activated)
# OASIS_INC oasis include file (taken into account only if key_oasis3 is activated)
# OASIS_LIB oasis library (taken into account only if key_oasis3 is activated)
#
# FC Fortran compiler command
# FCFLAGS Fortran compiler flags
# FFLAGS Fortran 77 compiler flags
# LD linker
# LDFLAGS linker flags, e.g. -L<lib dir> if you have libraries
# FPPFLAGS pre-processing flags
# AR assembler
# ARFLAGS assembler flags
# MK make
# USER_INC complete list of include files
# USER_LIB complete list of libraries to pass to the linker
# CC C compiler used to compile conv for AGRIF
# CFLAGS compiler flags used with CC
#
# Note that:
# - unix variables "$..." are accepted and will be evaluated before calling fcm.
# - fcm variables are starting with a % (and not a $)
#
%NCDF_HOME /ccc/products/netcdf-c-4.6.0/intel--20.0.0__openmpi--4.0.1/hdf5__parallel
%NCDF_HOME2 /ccc/products/netcdf-fortran-4.4.4/intel--20.0.0__openmpi--4.0.1/hdf5__parallel
%HDF5_HOME /ccc/products/hdf5-1.8.20/intel--20.0.0__openmpi--4.0.1/parallel
%XIOS_HOME /ccc/cont005/home/uniankar/aykanatc/work/NEMO_F/SKY/xios-2.5
%OASIS_HOME /not/defined
#%CURL .
%HDF5_LIB -L%HDF5_HOME/lib -L%CURL -lhdf5_hl -lhdf5
%GCCLIB .
%NCDF_INC -I%NCDF_HOME/include -I%NCDF_HOME2/include -I%HDF5_HOME/include
%NCDF_LIB -L%NCDF_HOME/lib %HDF5_LIB -L%CURL -L%NCDF_HOME2/lib -L%GCCLIB -lnetcdff -lnetcdf -lstdc++ -lz -lcurl
##-lgpfs
%XIOS_INC -I%XIOS_HOME/inc
%XIOS_LIB -L%XIOS_HOME/lib -L%GCCLIB -lxios -lstdc++
%OASIS_INC -I%OASIS_HOME/build/lib/mct -I%OASIS_HOME/build/lib/psmile.MPI1
%OASIS_LIB -L%OASIS_HOME/lib -lpsmile.MPI1 -lmct -lmpeu -lscrip
%CPP icc -E
%FC mpifort
%FCFLAGS -O3 -r8 -funroll-all-loops -traceback
%FFLAGS %FCFLAGS
%LD mpifort
%LDFLAGS -lstdc++ -lifcore -O3 -traceback
%FPPFLAGS -P -C -traditional
%AR ar
%ARFLAGS -r
%MK make
%USER_INC %XIOS_INC %OASIS_INC %NCDF_INC
%USER_LIB %XIOS_LIB %OASIS_LIB %NCDF_LIB
%CC cc
%CFLAGS -O0
##%CPP cpp
##%FC mpif90 -c -cpp
##%FCFLAGS -i4 -r8 -O3 -fp-model precise -xCORE-AVX512 -fno-alias
##%FFLAGS %FCFLAGS
##%LD mpif90
##%LDFLAGS
##%FPPFLAGS -P -traditional
##%AR ar
##%ARFLAGS rs
##%MK gmake
##%USER_INC %XIOS_INC %OASIS_INC %NCDF_INC
##%USER_LIB %XIOS_LIB %OASIS_LIB %NCDF_LIB
##%CC cc
##%CFLAGS -O0
...@@ -31,15 +31,14 @@
# - fcm variables are starting with a % (and not a $)
#
%NCDF_HOME /p/software/juwels/stages/2020/software/netCDF/4.7.4-ipsmpi-2021
%NCDF_HOME2 /p/software/juwels/stages/2020/software/netCDF-Fortran/4.5.3-ipsmpi-2021
%HDF5_HOME /p/software/juwels/stages/2020/software/HDF5/1.10.6-ipsmpi-2021
%XIOS_HOME /p/home/jusers/aykanat1/juwels/data/prpb86/NEMO_F/xios-2.5/
%OASIS_HOME /not/defined
%CURL /p/software/juwels/stages/2020/software/cURL/7.71.1-GCCcore-10.3.0/lib/
%HDF5_LIB -L%HDF5_HOME/lib -L%CURL -lhdf5_hl -lhdf5
%GCCLIB .
%NCDF_INC -I%NCDF_HOME/include -I%NCDF_HOME2/include -I%HDF5_HOME/include
%NCDF_LIB -L%NCDF_HOME/lib %HDF5_LIB -L%CURL -L%NCDF_HOME2/lib -L%GCCLIB -lnetcdff -lnetcdf -lstdc++ -lz -lcurl -lgpfs
...
...@@ -35,11 +35,19 @@
#%NCDF_HOME2 /cineca/prod/opt/libraries/netcdff/4.5.2--spectrum_mpi--10.4.0/hpc-sdk--2021--binary
#%HDF5_HOME /cineca/prod/opt/libraries/hdf5/1.12.0--spectrum_mpi--10.3.1/pgi--19.10--binary
#%NCDF_HOME /cineca/prod/opt/libraries/netcdf/4.7.3/gnu--8.4.0
#%NCDF_HOME2 /cineca/prod/opt/libraries/netcdff/4.5.2/gnu--8.4.0
#%HDF5_HOME /cineca/prod/opt/libraries/hdf5/1.12.0/gnu--8.4.0
#%XIOS_HOME /m100/home/userexternal/mkarsavu/data/nemo_test/xios-2.5
%NCDF_HOME /m100_work/PROJECTS/spack/spack-0.14/install/linux-rhel8-power9le/gcc-8.4.0/netcdf-c-4.7.3-gygambvobvqmkmstxe4pf4fjv6mjjc7m
%NCDF_HOME2 /m100_work/PROJECTS/spack/spack-0.14/install/linux-rhel7-power9le/gcc-8.4.0/netcdf-fortran-4.5.2-tbo5mgy3yxinef4ap7rirsmfzdcvhucf
%HDF5_HOME /m100_work/PROJECTS/spack/spack-0.14/install/linux-rhel8-power9le/gcc-8.4.0/hdf5-1.12.0-5a3psyfeiuv6d5hrn4mrgcbxttp6nqze
%XIOS_HOME /m100/home/userexternal/mkarsavu/data/NEMO_F/xios-2.5
%OASIS_HOME /not/defined
%HDF5_LIB -L%HDF5_HOME/lib -lhdf5_hl -lhdf5
...
# generic ifort compiler options for MareNostrum4
#
# NCDF_HOME root directory containing lib and include subdirectories for netcdf4
# HDF5_HOME root directory containing lib and include subdirectories for HDF5
# XIOS_HOME root directory containing lib for XIOS
# OASIS_HOME root directory containing lib for OASIS
#
# NCDF_INC netcdf4 include file
# NCDF_LIB netcdf4 library
# XIOS_INC xios include file (taken into account only if key_iomput is activated)
# XIOS_LIB xios library (taken into account only if key_iomput is activated)
# OASIS_INC oasis include file (taken into account only if key_oasis3 is activated)
# OASIS_LIB oasis library (taken into account only if key_oasis3 is activated)
#
# FC Fortran compiler command
# FCFLAGS Fortran compiler flags
# FFLAGS Fortran 77 compiler flags
# LD linker
# LDFLAGS linker flags, e.g. -L<lib dir> if you have libraries
# FPPFLAGS pre-processing flags
# AR assembler
# ARFLAGS assembler flags
# MK make
# USER_INC complete list of include files
# USER_LIB complete list of libraries to pass to the linker
# CC C compiler used to compile conv for AGRIF
# CFLAGS compiler flags used with CC
#
# Note that:
# - unix variables "$..." are accepted and will be evaluated before calling fcm.
# - fcm variables are starting with a % (and not a $)
#
%NCDF_HOME /apps/NETCDF/4.4.1.1/INTEL/IMPI/
%NCDF_HOME2 /apps/NETCDF/4.4.1.1/INTEL/IMPI/
%HDF5_HOME /apps/HDF5/1.8.19/INTEL/IMPI/
%XIOS_HOME /home/pr1ena00/pr1ena01/data/NEMO_F/NEMO_F/xios-2.5
%OASIS_HOME /not/defined
%CURL .
#/gpfs/software/juwels/stages/2019a/software/cURL/7.64.1-GCCcore-8.3.0/lib/
%HDF5_LIB -L%HDF5_HOME/lib -L%CURL -lhdf5_hl -lhdf5
%GCCLIB .
%NCDF_INC -I%NCDF_HOME/include -I%NCDF_HOME2/include -I%HDF5_HOME/include
%NCDF_LIB -L%NCDF_HOME/lib %HDF5_LIB -L%CURL -L%NCDF_HOME2/lib -L%GCCLIB -lnetcdff -lnetcdf -lstdc++ -lz -lcurl -lgpfs
%XIOS_INC -I%XIOS_HOME/inc
%XIOS_LIB -L%XIOS_HOME/lib -L%GCCLIB -lxios -lstdc++
%OASIS_INC -I%OASIS_HOME/build/lib/mct -I%OASIS_HOME/build/lib/psmile.MPI1
%OASIS_LIB -L%OASIS_HOME/lib -lpsmile.MPI1 -lmct -lmpeu -lscrip
%CPP icc -E -xCORE-AVX512 -mtune=skylake
%FC mpiifort
%FCFLAGS -O3 -r8 -funroll-all-loops -traceback
%FFLAGS %FCFLAGS
%LD %FC
%LDFLAGS -lstdc++ -lifcore -O3 -traceback
%FPPFLAGS -P -C -traditional
%AR ar
%ARFLAGS -r
%MK make
%USER_INC %XIOS_INC %OASIS_INC %NCDF_INC
%USER_LIB %XIOS_LIB %OASIS_LIB %NCDF_LIB
%CC cc
%CFLAGS -O0
...@@ -31,15 +31,16 @@
# - fcm variables are starting with a % (and not a $)
#
%NCDF_HOME /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/skylake_avx512/netcdf-hdf5-all/4.7_hdf5-1.10-intel-vd6s5so
%NCDF_HOME2 /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/skylake_avx512/netcdf-hdf5-all/4.7_hdf5-1.10-intel-vd6s5so
%HDF5_HOME /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/skylake_avx512/netcdf-hdf5-all/4.7_hdf5-1.10-intel-vd6s5so
%XIOS_HOME /dss/dsshome1/03/di67wat/data/NEMO_F/NEMO_F/xios-2.5/
%OASIS_HOME /not/defined
%CURL /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/x86_64/curl/7.68.0-gcc-b2wrnof/lib/
%HDF5_LIB -L%HDF5_HOME/lib -L%CURL -lhdf5_hl -lhdf5
%GCCLIB .
%NCDF_INC -I%NCDF_HOME/include -I%NCDF_HOME2/include -I%HDF5_HOME/include
...
module load GCC
##module load PGI/19.10-GCC-8.3.0
module load Intel
module load ParaStationMPI
module load HDF5
module load netCDF
module load netCDF-Fortran
module load cURL
module load Perl
################################################################################
################### Projet XIOS ###################
################################################################################
%CCOMPILER mpicc
%FCOMPILER mpif90
%LINKER mpif90 -nofor-main
%BASE_CFLAGS -ansi -w -xCORE-AVX512 -mtune=skylake
%PROD_CFLAGS -O3 -DBOOST_DISABLE_ASSERTS
%DEV_CFLAGS -g -O2
%DEBUG_CFLAGS -g
%BASE_FFLAGS -D__NONE__ -ffree-line-length-none
%PROD_FFLAGS -O3
%DEV_FFLAGS -g -O2
%DEBUG_FFLAGS -g
%BASE_INC -D__NONE__
%BASE_LD -lstdc++
%CPP cpp
%FPP cpp -P
%MAKE make
NETCDF_INCDIR="-I$NETCDF_INC_DIR -I$NETCDFF_INC_DIR -I/apps/NETCDF/4.4.1.1/INTEL/IMPI/include/"
NETCDF_LIBDIR="-Wl,--allow-multiple-definition -L$NETCDF_LIB_DIR -L$NETCDFF_LIB_DIR -L/apps/NETCDF/4.4.1.1/INTEL/IMPI/lib/"
NETCDF_LIB="-lnetcdff -lnetcdf"
MPI_INCDIR=""
MPI_LIBDIR=""
MPI_LIB=""
HDF5_INCDIR="-I$HDF5_INC_DIR -I/apps/HDF5/1.8.19/INTEL/IMPI/include/"
HDF5_LIBDIR="-L$HDF5_LIB_DIR -L/apps/HDF5/1.8.19/INTEL/IMPI/lib/"
HDF5_LIB="-lhdf5_hl -lhdf5 -lz -lcurl"
BOOST_INCDIR="-I $BOOST_INC_DIR"
BOOST_LIBDIR="-L $BOOST_LIB_DIR"
BOOST_LIB=""
OASIS_INCDIR="-I$PWD/../../oasis3-mct/BLD/build/lib/psmile.MPI1"
OASIS_LIBDIR="-L$PWD/../../oasis3-mct/BLD/lib"
OASIS_LIB="-lpsmile.MPI1 -lscrip -lmct -lmpeu"
module load slurm_setup
module load netcdf-hdf5-all/4.7_hdf5-1.10-intel19-impi
###module load netcdf-hdf5-all/
##module load hdf5
##module load netcdf
##module load netcdf-fortran
...@@ -8,7 +8,7 @@ MPI_LIB=""
HDF5_INCDIR="-I $HDF5_INC_DIR"
HDF5_LIBDIR="-L $HDF5_LIB_DIR"
HDF5_LIB="-L/dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/x86_64/curl/7.68.0-gcc-b2wrnof/lib/ -lhdf5_hl -lhdf5 -lz -lcurl"
BOOST_INCDIR="-I $BOOST_INC_DIR"
BOOST_LIBDIR="-L $BOOST_LIB_DIR"
...
module unload netcdf-c netcdf-fortran hdf5 flavor perl hdf5 boost blitz mpi
module load mpi/openmpi/4.0.5.2
module load flavor/hdf5/parallel
module load netcdf-fortran/4.4.4
module load hdf5/1.8.20
module load boost
module load blitz
module load feature/bridge/heterogenous_mpmd
################################################################################
################### Projet XIOS ###################
################################################################################
%CCOMPILER mpicc
%FCOMPILER mpif90
%LINKER mpif90 -nofor-main
%BASE_CFLAGS -diag-disable 1125 -diag-disable 279 -D BOOST_NO_CXX11_DEFAULTED_FUNCTIONS -D BOOST_NO_CXX11_DELETED_FUNCTIONS
%PROD_CFLAGS -O3 -D BOOST_DISABLE_ASSERTS
#%DEV_CFLAGS -g -traceback
%DEV_CFLAGS -g
%DEBUG_CFLAGS -DBZ_DEBUG -g -traceback -fno-inline
%BASE_FFLAGS -D__NONE__
%PROD_FFLAGS -O3
#%DEV_FFLAGS -g -traceback
%DEV_FFLAGS -g
%DEBUG_FFLAGS -g -traceback
%BASE_INC -D__NONE__
%BASE_LD -lstdc++
%CPP mpicc -EP
%FPP cpp -P
%MAKE gmake
NETCDF_INCDIR="-I $NETCDFC_INCDIR -I $NETCDFFORTRAN_INCDIR"
NETCDF_LIBDIR="-L $NETCDFC_LIBDIR -L $NETCDFFORTRAN_LIBDIR"
NETCDF_LIB="-lnetcdf -lnetcdff"
MPI_INCDIR=""
MPI_LIBDIR=""
MPI_LIB=""
HDF5_INCDIR="-I$HDF5_INCDIR"
HDF5_LIBDIR="-L$HDF5_LIBDIR"
HDF5_LIB="-lhdf5_hl -lhdf5 -lz -lcurl"
BOOST_INCDIR="-I $BOOST_INCDIR"
BOOST_LIBDIR="-L $BOOST_LIBDIR"
BOOST_LIB=""
BLITZ_INCDIR="-I $BLITZ_INCDIR"
BLITZ_LIBDIR="-L $BLITZ_LIBDIR"
BLITZ_LIB=""
OASIS_INCDIR="-I$PWD/../../oasis3-mct/BLD/build/lib/psmile.MPI1"
OASIS_LIBDIR="-L$PWD/../../oasis3-mct/BLD/lib"
OASIS_LIB="-lpsmile.MPI1 -lscrip -lmct -lmpeu"
# only for MEMTRACK debugging: developer only
ADDR2LINE_LIBDIR="-L${WORKDIR}/ADDR2LINE_LIB"
ADDR2LINE_LIB="-laddr2line"
...@@ -224,7 +224,7 @@ CONTAINS
!CALL MPI_REDUCE(step1time+tot_time,galltime, 1, mpi_double_precision, MPI_MAX, 0, mpi_comm_world,ierror)
!IF (rank == 0 ) print *, "BENCH DONE ",istp," " ,gstep1time," ", gssteptime , " " , gtot_time ," ",gelapsed_time, " ",galltime," s."
print *, "BENCH DONE\t",istp,"\t" ,step1time,"\t", smstime , "\t" , tot_time ,"\t",elapsed_time
!
ELSE !== diurnal SST time-steeping only ==!
!
...