diff --git a/nemo/README.md b/nemo/README.md
index 9d9bb326a04631d56f1a7fa2cb1123a036c22949..377a615713b46e12919f184ed0209caf9906c50e 100644
--- a/nemo/README.md
+++ b/nemo/README.md
@@ -10,10 +10,9 @@
NEMO (Nucleus for European Modelling of the Ocean) is a mathematical modelling framework for research activities and prediction services in ocean and climate sciences developed by a European consortium. It is intended to be a tool for studying the ocean and its interaction with the other components of the earth climate system over a large number of space and time scales. It comprises of the core engines namely OPA (ocean dynamics and thermodynamics), SI3 (sea ice dynamics and thermodynamics), TOP (oceanic tracers) and PISCES (biogeochemical process). Prognostic variables in NEMO are the three-dimensional velocity field, a linear or non-linear sea surface height, the temperature and the salinity. In the horizontal direction, the model uses a curvilinear orthogonal grid and in the vertical direction, a full or partial step z-coordinate, or s-coordinate, or a mixture of the two. The distribution of variables is a three-dimensional Arakawa C-type grid for most of the cases.
-
## Characteristics of Benchmark
-The model is implemented in Fortran 90, with pre-processing (C-pre-processor). It is optimized for vector computers and parallelized by domain decomposition with MPI. It supports modern C/C++ and FORTRAN compilers. All input and output is done with third party software called XIOS with a dependency on NetCDF (Network Common Data Format) and HDF5. It is highly scalable and a perfect application for measuring supercomputing performances in terms of compute capacity, memory subsystem, I/O and interconnect performance.
+The model is implemented in Fortran 90, with pre-processing (C-pre-processor). It is optimised for vector computers and parallelised by domain decomposition with MPI. It supports modern C/C++ and Fortran compilers. All input and output is handled by a third-party library called XIOS, which depends on NetCDF (Network Common Data Form) and HDF5. It is highly scalable and well suited to measuring supercomputing performance in terms of compute capacity, memory subsystem, I/O and interconnect performance.

## Mechanics of Building Benchmark
@@ -117,20 +116,39 @@
Test Case B:
Number of seconds per time step: 1200
```
-We performed scalability test on 512 cores and 1024 cores for test case A. We performed scalability test for 4096 cores, 8192 cores and 16384 cores for test case B.
-Both these test cases can give us quite good understanding of node performance and interconnect behavior.
+
+We report the performance in terms of total time to solution, as well as total energy to solution whenever possible.
+This helps us compare systems in a standard manner across all combinations of system architectures.
+
+NEMO supports both attached and detached modes of the IO server. In the attached mode, all cores perform both computation and IO,
+whereas in the detached mode, each core performs either computation or IO.
+It is reported that NEMO performs better in detached mode, especially for large numbers of cores.
+Therefore, we performed benchmarks for both attached and detached modes.
+We utilise a 15:1 ratio for the detached mode. That is, we divide 1024 cores into 960 compute cores and 64 IO cores for Test Case A,
+whereas we divide 10240 cores into 9600 compute cores and 640 IO cores for Test Case B.
+
+Comparing Test Cases A and B, run on 1024 and 10240 processors respectively,
+falls somewhere between weak and strong scaling:
+the number of processors increases ten times, whereas the mesh size grows approximately 16 times
+when going from Test Case A to B.
+
+We use the total time reported by the XIOS server.
+In addition, to measure the step time, we inserted a patch which adds an `MPI_Wtime()` function call in the [nemogcm.F90](nemogcm.F90) file
+for each step and cumulatively adds up the step times until the second-to-last step.
+We then divide the total cumulative time by the number of time steps to average out any overhead.
+
+<!--We performed scalability test on 512 cores and 1024 cores for test case A. We performed scalability test for 4096 cores, 8192 cores and 16384 cores for test case B.
+Both these test cases can give us quite good understanding of node performance and interconnect behavior. -->
<!--We switch off the generation of mesh files by setting the `flag nn_mesh = 0` in the `namelist_ref` file.
Also `using_server = false` is defined in `io_server` file.-->
-We report the performance in step time which is the total computational time averaged over the number of time steps for different test cases.
+<!--We report the performance in step time which is the total computational time averaged over the number of time steps for different test cases.
This helps us to compare systems in a standard manner across all combinations of system architectures. The other main reason for reporting
time per computational time step is to make sure that results are more reproducible and comparable. Since NEMO supports
both weak and strong scalability, test case A and test case B both can be scaled down to run on smaller number of processors while keeping the memory per processor constant achieving similar
-results for step time. To measure the step time, we inserted a patch which includes the `MPI_Wtime()` functional call in [nemogcm.F90](nemogcm.F90) file
-for each step which also cumulatively adds the step time until the second last step.
-We then divide the total cumulative time by the number of time steps to average out any overhead.
-It is also possible to use total time reported by XIOS server.
+results for step time.
+-->

## Sources
<https://forge.ipsl.jussieu.fr/nemo/chrome/site/doc/NEMO/guide/html/install.html>
@@ -139,5 +157,3 @@ It is also possible to use total time reported by XIOS server.
<https://nemo-related.readthedocs.io/en/latest/compilation_notes/nemo37.html>
-
-
diff --git a/nemo/architecture_files/NEMO/arch-HAWK.fcm b/nemo/architecture_files/NEMO/arch-HAWK.fcm
index b73878cf171a63a319bdeb7723a366e48440675d..9ba73925da5b541a02f0a97f0f651692bdeac114 100644
--- a/nemo/architecture_files/NEMO/arch-HAWK.fcm
+++ b/nemo/architecture_files/NEMO/arch-HAWK.fcm
@@ -35,8 +35,8 @@
%NCDF_HOME2 /opt/hlrs/spack/rev-004_2020-06-17/netcdf-fortran/4.5.2-gcc-9.2.0-lxinqb3c/
%HDF5_HOME /opt/hlrs/spack/rev-004_2020-06-17/hdf5/1.10.5-gcc-9.2.0-fsds2dq4/
-
-%XIOS_HOME /lustre/cray/ws9/6/ws/iprceayk-nemo/xios-2.5/
+%XIOS_HOME /zhome/academic/HLRS/pri/iprceayk/data/NEMO/NEMO_F/xios-2.5
+###%XIOS_HOME /lustre/cray/ws9/6/ws/iprceayk-nemo/xios-2.5/
%OASIS_HOME /not/defined
diff --git a/nemo/architecture_files/NEMO/arch-Irene.fcm b/nemo/architecture_files/NEMO/arch-Irene.fcm
new file mode 100644
index 0000000000000000000000000000000000000000..168fa1c9b806027bc4f2aa4eec3e836cd9af8653
--- /dev/null
+++ b/nemo/architecture_files/NEMO/arch-Irene.fcm
@@ -0,0 +1,91 @@
+# Curie SKYLAKE at TGCC
+#
+# NCDF_HOME root directory containing lib and include subdirectories for netcdf4
+# HDF5_HOME root directory containing lib and include subdirectories for HDF5
+# XIOS_HOME root directory containing lib for XIOS
+# OASIS_HOME root directory containing lib for OASIS
+#
+# NCDF_INC netcdf4 include file
+# NCDF_LIB netcdf4 library
+# XIOS_INC xios include file (taken into account only if key_iomput is activated)
+# XIOS_LIB xios library (taken into account only if key_iomput is activated)
+# OASIS_INC oasis include file (taken into account only if key_oasis3 is activated)
+# OASIS_LIB oasis library (taken into account only if key_oasis3 is activated)
+#
+# FC Fortran compiler command
+# FCFLAGS Fortran compiler flags
+# FFLAGS Fortran 77 compiler flags
+# LD linker
+# LDFLAGS linker flags, e.g. -L<lib dir> if you have libraries
+# FPPFLAGS pre-processing flags
+# AR assembler
+# ARFLAGS assembler flags
+# MK make
+# USER_INC complete list of include files
+# USER_LIB complete list of libraries to pass to the linker
+# CC C compiler used to compile conv for AGRIF
+# CFLAGS compiler flags used with CC
+#
+# Note that:
+# - unix variables "$..." are accepted and will be evaluated before calling fcm.
+# - fcm variables are starting with a % (and not a $)
+#
+
+%NCDF_HOME /ccc/products/netcdf-c-4.6.0/intel--20.0.0__openmpi--4.0.1/hdf5__parallel
+%NCDF_HOME2 /ccc/products/netcdf-fortran-4.4.4/intel--20.0.0__openmpi--4.0.1/hdf5__parallel
+%HDF5_HOME /ccc/products/hdf5-1.8.20/intel--20.0.0__openmpi--4.0.1/parallel
+%XIOS_HOME /ccc/cont005/home/uniankar/aykanatc/work/NEMO_F/SKY/xios-2.5
+%OASIS_HOME /not/defined
+#%CURL .
+
+%HDF5_LIB -L%HDF5_HOME/lib -L%CURL -lhdf5_hl -lhdf5
+%GCCLIB .
+ +%NCDF_INC -I%NCDF_HOME/include -I%NCDF_HOME2/include -I%HDF5_HOME/include +%NCDF_LIB -L%NCDF_HOME/lib %HDF5_LIB -L%CURL -L%NCDF_HOME2/lib -L%GCCLIB -lnetcdff -lnetcdf -lstdc++ -lz -lcurl + +##-lgpfs + +%XIOS_INC -I%XIOS_HOME/inc +%XIOS_LIB -L%XIOS_HOME/lib -L%GCCLIB -lxios -lstdc++ + +%OASIS_INC -I%OASIS_HOME/build/lib/mct -I%OASIS_HOME/build/lib/psmile.MPI1 +%OASIS_LIB -L%OASIS_HOME/lib -lpsmile.MPI1 -lmct -lmpeu -lscrip + + +%CPP icc -E +%FC mpifort +%FCFLAGS -O3 -r8 -funroll-all-loops -traceback + +%FFLAGS %FCFLAGS +%LD mpifort +%LDFLAGS -lstdc++ -lifcore -O3 -traceback +%FPPFLAGS -P -C -traditional + +%AR ar +%ARFLAGS -r + +%MK make +%USER_INC %XIOS_INC %OASIS_INC %NCDF_INC +%USER_LIB %XIOS_LIB %OASIS_LIB %NCDF_LIB + +%CC cc +%CFLAGS -O0 + + + +##%CPP cpp +##%FC mpif90 -c -cpp +##%FCFLAGS -i4 -r8 -O3 -fp-model precise -xCORE-AVX512 -fno-alias +##%FFLAGS %FCFLAGS +##%LD mpif90 +##%LDFLAGS +##%FPPFLAGS -P -traditional +##%AR ar +##%ARFLAGS rs +##%MK gmake +##%USER_INC %XIOS_INC %OASIS_INC %NCDF_INC +##%USER_LIB %XIOS_LIB %OASIS_LIB %NCDF_LIB + +##%CC cc +##%CFLAGS -O0 diff --git a/nemo/architecture_files/NEMO/arch-JUWELS.fcm b/nemo/architecture_files/NEMO/arch-JUWELS.fcm index 7ebd788d1faf39e078ea4fd6ec72cfcbd84d0b90..b92179981d35fd66ed2a8ac179e96df42666fe8c 100644 --- a/nemo/architecture_files/NEMO/arch-JUWELS.fcm +++ b/nemo/architecture_files/NEMO/arch-JUWELS.fcm @@ -31,15 +31,14 @@ # - fcm variables are starting with a % (and not a $) # -%NCDF_HOME /gpfs/software/juwels/stages/2019a/software/netCDF/4.6.3-ipsmpi-2019a.1/ -%NCDF_HOME2 /gpfs/software/juwels/stages/2019a/software/netCDF-Fortran/4.4.5-ipsmpi-2019a.1/ -%HDF5_HOME /gpfs/software/juwels/stages/2019a/software/HDF5/1.10.5-ipsmpi-2019a.1/ -%XIOS_HOME /p/project/prpb86/nemo2/xios-2.5/ +%NCDF_HOME /p/software/juwels/stages/2020/software/netCDF/4.7.4-ipsmpi-2021 +%NCDF_HOME2 /p/software/juwels/stages/2020/software/netCDF-Fortran/4.5.3-ipsmpi-2021 +%HDF5_HOME /p/software/juwels/stages/2020/software/HDF5/1.10.6-ipsmpi-2021 +%XIOS_HOME /p/home/jusers/aykanat1/juwels/data/prpb86/NEMO_F/xios-2.5/ %OASIS_HOME /not/defined -%CURL /gpfs/software/juwels/stages/2019a/software/cURL/7.64.1-GCCcore-8.3.0/lib/ - +%CURL /p/software/juwels/stages/2020/software/cURL/7.71.1-GCCcore-10.3.0/lib/ %HDF5_LIB -L%HDF5_HOME/lib -L%CURL -lhdf5_hl -lhdf5 -%GCCLIB +%GCCLIB . 
%NCDF_INC -I%NCDF_HOME/include -I%NCDF_HOME2/include -I%HDF5_HOME/include
%NCDF_LIB -L%NCDF_HOME/lib %HDF5_LIB -L%CURL -L%NCDF_HOME2/lib -L%GCCLIB -lnetcdff -lnetcdf -lstdc++ -lz -lcurl -lgpfs
diff --git a/nemo/architecture_files/NEMO/arch-M100.fcm b/nemo/architecture_files/NEMO/arch-M100.fcm
index cef39235e5682cb4e290fa93e629edb1395a271b..e545ac9f63d2a9a62be7a1a821e582c9c4d57ada 100644
--- a/nemo/architecture_files/NEMO/arch-M100.fcm
+++ b/nemo/architecture_files/NEMO/arch-M100.fcm
@@ -35,11 +35,19 @@
#%NCDF_HOME2 /cineca/prod/opt/libraries/netcdff/4.5.2--spectrum_mpi--10.4.0/hpc-sdk--2021--binary
#%HDF5_HOME /cineca/prod/opt/libraries/hdf5/1.12.0--spectrum_mpi--10.3.1/pgi--19.10--binary
-%NCDF_HOME /cineca/prod/opt/libraries/netcdf/4.7.3/gnu--8.4.0
-%NCDF_HOME2 /cineca/prod/opt/libraries/netcdff/4.5.2/gnu--8.4.0
-%HDF5_HOME /cineca/prod/opt/libraries/hdf5/1.12.0/gnu--8.4.0
+#%NCDF_HOME /cineca/prod/opt/libraries/netcdf/4.7.3/gnu--8.4.0
+#%NCDF_HOME2 /cineca/prod/opt/libraries/netcdff/4.5.2/gnu--8.4.0
+#%HDF5_HOME /cineca/prod/opt/libraries/hdf5/1.12.0/gnu--8.4.0
+#%XIOS_HOME /m100/home/userexternal/mkarsavu/data/nemo_test/xios-2.5
+
+
+%NCDF_HOME /m100_work/PROJECTS/spack/spack-0.14/install/linux-rhel8-power9le/gcc-8.4.0/netcdf-c-4.7.3-gygambvobvqmkmstxe4pf4fjv6mjjc7m
+%NCDF_HOME2 /m100_work/PROJECTS/spack/spack-0.14/install/linux-rhel7-power9le/gcc-8.4.0/netcdf-fortran-4.5.2-tbo5mgy3yxinef4ap7rirsmfzdcvhucf
+%HDF5_HOME /m100_work/PROJECTS/spack/spack-0.14/install/linux-rhel8-power9le/gcc-8.4.0/hdf5-1.12.0-5a3psyfeiuv6d5hrn4mrgcbxttp6nqze
+%XIOS_HOME /m100/home/userexternal/mkarsavu/data/NEMO_F/xios-2.5
+
+
-%XIOS_HOME /m100/home/userexternal/mkarsavu/data/nemo_test/xios-2.5
%OASIS_HOME /not/defined
%HDF5_LIB -L%HDF5_HOME/lib -lhdf5_hl -lhdf5
diff --git a/nemo/architecture_files/NEMO/arch-Mare.fcm b/nemo/architecture_files/NEMO/arch-Mare.fcm
new file mode 100644
index 0000000000000000000000000000000000000000..b46dc9d18a6b15480a24cc5fb09c227eabc24a7f
--- /dev/null
+++ b/nemo/architecture_files/NEMO/arch-Mare.fcm
@@ -0,0 +1,79 @@
+# generic ifort compiler options for MareNostrum4
+#
+# NCDF_HOME root directory containing lib and include subdirectories for netcdf4
+# HDF5_HOME root directory containing lib and include subdirectories for HDF5
+# XIOS_HOME root directory containing lib for XIOS
+# OASIS_HOME root directory containing lib for OASIS
+#
+# NCDF_INC netcdf4 include file
+# NCDF_LIB netcdf4 library
+# XIOS_INC xios include file (taken into account only if key_iomput is activated)
+# XIOS_LIB xios library (taken into account only if key_iomput is activated)
+# OASIS_INC oasis include file (taken into account only if key_oasis3 is activated)
+# OASIS_LIB oasis library (taken into account only if key_oasis3 is activated)
+#
+# FC Fortran compiler command
+# FCFLAGS Fortran compiler flags
+# FFLAGS Fortran 77 compiler flags
+# LD linker
+# LDFLAGS linker flags, e.g. -L<lib dir> if you have libraries
+# FPPFLAGS pre-processing flags
+# AR assembler
+# ARFLAGS assembler flags
+# MK make
+# USER_INC complete list of include files
+# USER_LIB complete list of libraries to pass to the linker
+# CC C compiler used to compile conv for AGRIF
+# CFLAGS compiler flags used with CC
+#
+# Note that:
+# - unix variables "$..." are accepted and will be evaluated before calling fcm.
+# - fcm variables are starting with a % (and not a $) +# + +%NCDF_HOME /apps/NETCDF/4.4.1.1/INTEL/IMPI/ +%NCDF_HOME2 /apps/NETCDF/4.4.1.1/INTEL/IMPI/ +%HDF5_HOME /apps/HDF5/1.8.19/INTEL/IMPI/ +%XIOS_HOME /home/pr1ena00/pr1ena01/data/NEMO_F/NEMO_F/xios-2.5 + + + + +%OASIS_HOME /not/defined +%CURL . +#/gpfs/software/juwels/stages/2019a/software/cURL/7.64.1-GCCcore-8.3.0/lib/ + + +%HDF5_LIB -L%HDF5_HOME/lib -L%CURL -lhdf5_hl -lhdf5 +%GCCLIB . + +%NCDF_INC -I%NCDF_HOME/include -I%NCDF_HOME2/include -I%HDF5_HOME/include +%NCDF_LIB -L%NCDF_HOME/lib %HDF5_LIB -L%CURL -L%NCDF_HOME2/lib -L%GCCLIB -lnetcdff -lnetcdf -lstdc++ -lz -lcurl -lgpfs + +%XIOS_INC -I%XIOS_HOME/inc +%XIOS_LIB -L%XIOS_HOME/lib -L%GCCLIB -lxios -lstdc++ + +%OASIS_INC -I%OASIS_HOME/build/lib/mct -I%OASIS_HOME/build/lib/psmile.MPI1 +%OASIS_LIB -L%OASIS_HOME/lib -lpsmile.MPI1 -lmct -lmpeu -lscrip + +%CPP icc -E -xCORE-AVX512 -mtune=skylake +%FC mpiifort +%FCFLAGS -O3 -r8 -funroll-all-loops -traceback + +%FFLAGS %FCFLAGS +%LD %FC +%LDFLAGS -lstdc++ -lifcore -O3 -traceback +%FPPFLAGS -P -C -traditional + +%AR ar +%ARFLAGS -r + +%MK make +%USER_INC %XIOS_INC %OASIS_INC %NCDF_INC +%USER_LIB %XIOS_LIB %OASIS_LIB %NCDF_LIB + + + +%CC cc +%CFLAGS -O0 + diff --git a/nemo/architecture_files/NEMO/arch-SuperMUC.fcm b/nemo/architecture_files/NEMO/arch-SuperMUC.fcm index e072dca525904c903310499e8381ca6dc67228e0..7fa921884450b84754dfbc382a198e0ed66c99fa 100644 --- a/nemo/architecture_files/NEMO/arch-SuperMUC.fcm +++ b/nemo/architecture_files/NEMO/arch-SuperMUC.fcm @@ -31,15 +31,16 @@ # - fcm variables are starting with a % (and not a $) # -%NCDF_HOME /dss/dsshome1/lrz/sys/spack/release/19.2/opt/x86_avx512/netcdf/4.6.1-intel-rdopmwr/ -%NCDF_HOME2 /dss/dsshome1/lrz/sys/spack/release/19.2/opt/x86_avx512/netcdf-fortran/4.4.4-intel-mq54rwz/ -%HDF5_HOME /dss/dsshome1/lrz/sys/spack/release/19.2/opt/x86_avx512/hdf5/1.10.2-intel-726msh6/ -%XIOS_HOME /hppfs/work/pn68so/di67wat/NEMO/xios-2.5/ +%NCDF_HOME /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/skylake_avx512/netcdf-hdf5-all/4.7_hdf5-1.10-intel-vd6s5so +%NCDF_HOME2 /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/skylake_avx512/netcdf-hdf5-all/4.7_hdf5-1.10-intel-vd6s5so +%HDF5_HOME /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/skylake_avx512/netcdf-hdf5-all/4.7_hdf5-1.10-intel-vd6s5so +%XIOS_HOME /dss/dsshome1/03/di67wat/data/NEMO_F/NEMO_F/xios-2.5/ %OASIS_HOME /not/defined -%CURL /dss/dsshome1/lrz/sys/spack/release/19.2/opt/x86_avx512/curl/7.60.0-gcc-u7vewcb/lib/ +%CURL /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/x86_64/curl/7.68.0-gcc-b2wrnof/lib/ + %HDF5_LIB -L%HDF5_HOME/lib -L%CURL -lhdf5_hl -lhdf5 -%GCCLIB +%GCCLIB . 
%NCDF_INC -I%NCDF_HOME/include -I%NCDF_HOME2/include -I%HDF5_HOME/include diff --git a/nemo/architecture_files/XIOS/arch-JUWELS.env b/nemo/architecture_files/XIOS/arch-JUWELS.env index 1d32bbef4691ecd8f4c8e3388f7ee12cab6ad919..ede260e30193aac3947ecece22f93d51a642ea47 100644 --- a/nemo/architecture_files/XIOS/arch-JUWELS.env +++ b/nemo/architecture_files/XIOS/arch-JUWELS.env @@ -1,9 +1,10 @@ -module load GCC/8.3.0 -module load PGI/19.10-GCC-8.3.0 -module load Intel/2019.5.281-GCC-8.3.0 -module load ParaStationMPI/5.4 -module load HDF5/1.10.5 -module load netCDF/4.6.3 -module load netCDF-Fortran/4.4.5 +module load GCC +##module load PGI/19.10-GCC-8.3.0 +module load Intel +module load ParaStationMPI +module load HDF5 +module load netCDF +module load netCDF-Fortran module load cURL module load Perl + diff --git a/nemo/architecture_files/XIOS/arch-Mare.env b/nemo/architecture_files/XIOS/arch-Mare.env new file mode 100644 index 0000000000000000000000000000000000000000..c051cf62ea4b1f0beff581b2ef0cbb76f1d2bad8 --- /dev/null +++ b/nemo/architecture_files/XIOS/arch-Mare.env @@ -0,0 +1,3 @@ +module load perl +module load hdf5 +module load netcdf diff --git a/nemo/architecture_files/XIOS/arch-Mare.fcm b/nemo/architecture_files/XIOS/arch-Mare.fcm new file mode 100644 index 0000000000000000000000000000000000000000..9bf32988feb649f480db08e443349397718a3f93 --- /dev/null +++ b/nemo/architecture_files/XIOS/arch-Mare.fcm @@ -0,0 +1,24 @@ +################################################################################ +################### Projet XIOS ################### +################################################################################ + +%CCOMPILER mpicc +%FCOMPILER mpif90 +%LINKER mpif90 -nofor-main + +%BASE_CFLAGS -ansi -w -xCORE-AVX512 -mtune=skylake +%PROD_CFLAGS -O3 -DBOOST_DISABLE_ASSERTS +%DEV_CFLAGS -g -O2 +%DEBUG_CFLAGS -g + +%BASE_FFLAGS -D__NONE__ -ffree-line-length-none +%PROD_FFLAGS -O3 +%DEV_FFLAGS -g -O2 +%DEBUG_FFLAGS -g + +%BASE_INC -D__NONE__ +%BASE_LD -lstdc++ + +%CPP cpp +%FPP cpp -P +%MAKE make diff --git a/nemo/architecture_files/XIOS/arch-Mare.path b/nemo/architecture_files/XIOS/arch-Mare.path new file mode 100644 index 0000000000000000000000000000000000000000..d2f8e317e27d462e8fcabcec2fb3f49a96fb25e4 --- /dev/null +++ b/nemo/architecture_files/XIOS/arch-Mare.path @@ -0,0 +1,21 @@ +NETCDF_INCDIR="-I$NETCDF_INC_DIR -I$NETCDFF_INC_DIR -I/apps/NETCDF/4.4.1.1/INTEL/IMPI/include/" +NETCDF_LIBDIR="-Wl,--allow-multiple-definition -L$NETCDF_LIB_DIR -L$NETCDFF_LIB_DIR -L/apps/NETCDF/4.4.1.1/INTEL/IMPI/lib/" +NETCDF_LIB="-lnetcdff -lnetcdf" + +MPI_INCDIR="" +MPI_LIBDIR="" +MPI_LIB="" + +HDF5_INCDIR="-I$HDF5_INC_DIR -I/apps/HDF5/1.8.19/INTEL/IMPI/include/" +HDF5_LIBDIR="-L$HDF5_LIB_DIR -L/apps/HDF5/1.8.19/INTEL/IMPI/lib/" +HDF5_LIB="-lhdf5_hl -lhdf5 -lz -lcurl" + +BOOST_INCDIR="-I $BOOST_INC_DIR" +BOOST_LIBDIR="-L $BOOST_LIB_DIR" +BOOST_LIB="" + +OASIS_INCDIR="-I$PWD/../../oasis3-mct/BLD/build/lib/psmile.MPI1" +OASIS_LIBDIR="-L$PWD/../../oasis3-mct/BLD/lib" +OASIS_LIB="-lpsmile.MPI1 -lscrip -lmct -lmpeu" + + diff --git a/nemo/architecture_files/XIOS/arch-SuperMUC.env b/nemo/architecture_files/XIOS/arch-SuperMUC.env index d6ee46303436e187dcc847876abcdcecf9a7176e..ebb2e5121469d20c89f1ef45add3ed4aebcae875 100644 --- a/nemo/architecture_files/XIOS/arch-SuperMUC.env +++ b/nemo/architecture_files/XIOS/arch-SuperMUC.env @@ -1,4 +1,7 @@ module load slurm_setup -module load hdf5 -module load netcdf -module load netcdf-fortran +module load netcdf-hdf5-all/4.7_hdf5-1.10-intel19-impi 
+###module load netcdf-hdf5-all/
+
+##module load hdf5
+##module load netcdf
+##module load netcdf-fortran
diff --git a/nemo/architecture_files/XIOS/arch-SuperMUC.path b/nemo/architecture_files/XIOS/arch-SuperMUC.path
index 10aba70fac68db891c2d0e647600cb351ddcba58..7abe2079ef65bf57608cd723c49684031487c51f 100644
--- a/nemo/architecture_files/XIOS/arch-SuperMUC.path
+++ b/nemo/architecture_files/XIOS/arch-SuperMUC.path
@@ -8,7 +8,7 @@
MPI_LIB=""
HDF5_INCDIR="-I $HDF5_INC_DIR"
HDF5_LIBDIR="-L $HDF5_LIB_DIR"
-HDF5_LIB="-lhdf5_hl -lhdf5 -lz -lcurl"
+HDF5_LIB="-L/dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/x86_64/curl/7.68.0-gcc-b2wrnof/lib/ -lhdf5_hl -lhdf5 -lz -lcurl"
BOOST_INCDIR="-I $BOOST_INC_DIR"
BOOST_LIBDIR="-L $BOOST_LIB_DIR"
diff --git a/nemo/architecture_files/XIOS/arch-X64_IRENE.env b/nemo/architecture_files/XIOS/arch-X64_IRENE.env
new file mode 100644
index 0000000000000000000000000000000000000000..a563785325f9f995079a5fd53bf49e5b72f783a4
--- /dev/null
+++ b/nemo/architecture_files/XIOS/arch-X64_IRENE.env
@@ -0,0 +1,8 @@
+module unload netcdf-c netcdf-fortran hdf5 flavor perl hdf5 boost blitz mpi
+module load mpi/openmpi/4.0.5.2
+module load flavor/hdf5/parallel
+module load netcdf-fortran/4.4.4
+module load hdf5/1.8.20
+module load boost
+module load blitz
+module load feature/bridge/heterogenous_mpmd
diff --git a/nemo/architecture_files/XIOS/arch-X64_IRENE.fcm b/nemo/architecture_files/XIOS/arch-X64_IRENE.fcm
new file mode 100644
index 0000000000000000000000000000000000000000..5b3fe5b0d88af53dd423ac68a1fb60dfe4dc3aa3
--- /dev/null
+++ b/nemo/architecture_files/XIOS/arch-X64_IRENE.fcm
@@ -0,0 +1,26 @@
+################################################################################
+################### Projet XIOS ###################
+################################################################################
+
+%CCOMPILER mpicc
+%FCOMPILER mpif90
+%LINKER mpif90 -nofor-main
+
+%BASE_CFLAGS -diag-disable 1125 -diag-disable 279 -D BOOST_NO_CXX11_DEFAULTED_FUNCTIONS -D BOOST_NO_CXX11_DELETED_FUNCTIONS
+%PROD_CFLAGS -O3 -D BOOST_DISABLE_ASSERTS
+#%DEV_CFLAGS -g -traceback
+%DEV_CFLAGS -g
+%DEBUG_CFLAGS -DBZ_DEBUG -g -traceback -fno-inline
+
+%BASE_FFLAGS -D__NONE__
+%PROD_FFLAGS -O3
+#%DEV_FFLAGS -g -traceback
+%DEV_FFLAGS -g
+%DEBUG_FFLAGS -g -traceback
+
+%BASE_INC -D__NONE__
+%BASE_LD -lstdc++
+
+%CPP mpicc -EP
+%FPP cpp -P
+%MAKE gmake
diff --git a/nemo/architecture_files/XIOS/arch-X64_IRENE.path b/nemo/architecture_files/XIOS/arch-X64_IRENE.path
new file mode 100644
index 0000000000000000000000000000000000000000..4e9e3a2b99cccdbe51bd62d0e35b21965afd482d
--- /dev/null
+++ b/nemo/architecture_files/XIOS/arch-X64_IRENE.path
@@ -0,0 +1,28 @@
+NETCDF_INCDIR="-I $NETCDFC_INCDIR -I $NETCDFFORTRAN_INCDIR"
+NETCDF_LIBDIR="-L $NETCDFC_LIBDIR -L $NETCDFFORTRAN_LIBDIR"
+NETCDF_LIB="-lnetcdf -lnetcdff"
+
+MPI_INCDIR=""
+MPI_LIBDIR=""
+MPI_LIB=""
+
+HDF5_INCDIR="-I$HDF5_INCDIR"
+HDF5_LIBDIR="-L$HDF5_LIBDIR"
+HDF5_LIB="-lhdf5_hl -lhdf5 -lz -lcurl"
+
+BOOST_INCDIR="-I $BOOST_INCDIR"
+BOOST_LIBDIR="-L $BOOST_LIBDIR"
+BOOST_LIB=""
+
+BLITZ_INCDIR="-I $BLITZ_INCDIR"
+BLITZ_LIBDIR="-L $BLITZ_LIBDIR"
+BLITZ_LIB=""
+
+OASIS_INCDIR="-I$PWD/../../oasis3-mct/BLD/build/lib/psmile.MPI1"
+OASIS_LIBDIR="-L$PWD/../../oasis3-mct/BLD/lib"
+OASIS_LIB="-lpsmile.MPI1 -lscrip -lmct -lmpeu"
+
+
+#only for MEMTRACK debugging : developer only
+ADDR2LINE_LIBDIR="-L${WORKDIR}/ADDR2LINE_LIB"
+ADDR2LINE_LIB="-laddr2line"
diff --git a/nemo/nemogcm.F90 b/nemo/nemogcm.F90
index
a67f1e37e6c22a82f8a9540c3dcf04053b29f974..b039a3422b65b42e132ae7aa6e72efa57d25f982 100644
--- a/nemo/nemogcm.F90
+++ b/nemo/nemogcm.F90
@@ -224,7 +224,7 @@ CONTAINS
!CALL MPI_REDUCE(step1time+tot_time,galltime, 1, mpi_double_precision, MPI_MAX, 0, mpi_comm_world,ierror)
!IF (rank == 0 ) print *, "BENCH DONE ",istp," " ,gstep1time," ", gssteptime , " " , gtot_time ," ",gelapsed_time, " ",galltime," s."
- print *, "BENCH DONE ",istp," " ,step1time," ", smstime , " " , tot_time ," ",elapsed_time, " ",step1time+tot_time," s."
+ print *, "BENCH DONE", achar(9), istp, achar(9), step1time, achar(9), smstime, achar(9), tot_time, achar(9), elapsed_time   ! achar(9) = tab; the "\t" escape is not interpreted by standard Fortran
!
ELSE !== diurnal SST time-steeping only ==!
!
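The `nemogcm.F90` hunk above prints the cumulative `MPI_Wtime()` measurements described in the README. The following standalone Fortran program is only a minimal sketch of that measurement idea, not the actual NEMO code: the program name, the step count, the dummy workload and all variable names are invented for illustration.

```fortran
! Sketch of the per-step timing described in the README: MPI_Wtime() is
! sampled around every time step, the step times are accumulated up to the
! second-to-last step, and the sum is divided by the number of accumulated
! steps to obtain an average step time.
program step_timing_sketch
   use mpi
   implicit none
   integer :: ierr, rank, istp, nitend
   double precision :: t_start, tot_time, avg_step

   call MPI_Init(ierr)
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

   nitend   = 100          ! stand-in for the real number of time steps
   tot_time = 0.0d0

   do istp = 1, nitend
      t_start = MPI_Wtime()
      call do_one_step()                                               ! placeholder for one NEMO step
      if (istp < nitend) tot_time = tot_time + (MPI_Wtime() - t_start) ! skip the final step
   end do

   avg_step = tot_time / dble(nitend - 1)
   if (rank == 0) print *, 'average step time (s): ', avg_step

   call MPI_Finalize(ierr)

contains

   subroutine do_one_step()
      ! dummy workload standing in for a NEMO time step
      integer          :: i
      double precision :: x
      x = 0.0d0
      do i = 1, 100000
         x = x + sqrt(dble(i))
      end do
      if (x < 0.0d0) print *, x   ! keeps the loop from being optimised away
   end subroutine do_one_step

end program step_timing_sketch
```

Built with, for example, `mpif90 step_timing_sketch.f90` and run on any number of ranks, it reports an averaged step time in the same spirit as the `BENCH DONE` line above.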