diff --git a/README.md b/README.md
index 178e3a0a411724810982ea973249ae05e2329a30..213b5a379cc0ef8813461294b2b5cb2f8a30260e 100644
--- a/README.md
+++ b/README.md
@@ -45,10 +45,10 @@ The application codes that constitute the UEABS are:
 The Alya System is a Computational Mechanics code capable of solving different physics, each one with its own modelization characteristics, in a coupled way. Among the problems it solves are: convection-diffusion reactions, incompressible flows, compressible flows, turbulence, bi-phasic flows and free surface, excitable media, acoustics, thermal flow, quantum mechanics (DFT) and solid mechanics (large strain). ALYA is written in Fortran 90/95 and parallelized using MPI and OpenMP.
 
 - Web site: https://www.bsc.es/computer-applications/alya-system
-- Code download: https://repository.prace-ri.eu/ueabs/ALYA/1.1/alya3226.tar.gz
+- Code download: https://repository.prace-ri.eu/ueabs/ALYA/2.1/Alya.tar.gz
 - Build instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/alya/ALYA_Build_README.txt
-- Test Case A: https://repository.prace-ri.eu/ueabs/ALYA/1.3/ALYA_TestCaseA.tar.bz2
-- Test Case B: https://repository.prace-ri.eu/ueabs/ALYA/1.3/ALYA_TestCaseB.tar.bz2
+- Test Case A: https://repository.prace-ri.eu/ueabs/ALYA/2.1/TestCaseA.tar.gz
+- Test Case B: https://repository.prace-ri.eu/ueabs/ALYA/2.1/TestCaseB.tar.gz
 - Run instructions: https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r1.3/alya/ALYA_Run_README.txt
 
 # Code_Saturne
diff --git a/alya/ALYA_Build_README.txt b/alya/ALYA_Build_README.txt
index bd971b53b3abf7fcabc62a991e71da3fe0c51d52..7b2e2713583a0fb191ccc49221d87137a3cd28cf 100644
--- a/alya/ALYA_Build_README.txt
+++ b/alya/ALYA_Build_README.txt
@@ -1,8 +1,10 @@
-In order to build ALYA (Alya.x), please follow these steps:
+Alya builds its makefile from the compilation options defined in config.in. In order to build ALYA (Alya.x), please follow these steps:
 
-- Go to the directory: Executables/unix
-- Build the Metis library (libmetis.a) using "make metis4"
-- Adapt the file: configure.in to your own MPI wrappers and paths (examples on the configure.in folder)
-- Execute:
-  ./configure -x nastin parall
-  make
+  - Go to the directory: Executables/unix
+  - Edit config.in (some default config.in files can be found in the directory configure.in):
+    - Select your own MPI wrappers and paths
+    - Select the size of integers: the default is 4 bytes; for 8 bytes, select -DI8
+    - Choose your Metis version: metis-4.0, or metis-5.1.0_i8 for 8-byte integers
+  - Configure Alya: ./configure -x nastin parall
+  - Compile Metis: make metis4 or make metis5
+  - Compile Alya: make
diff --git a/alya/ALYA_Run_README.txt b/alya/ALYA_Run_README.txt
index ee01b98e941d6fecf9730303cf9735ff78a22b5a..942bc31ba33f54e3bdfc8da5037144deb98fb0cd 100644
--- a/alya/ALYA_Run_README.txt
+++ b/alya/ALYA_Run_README.txt
@@ -1,59 +1,59 @@
-Data sets
----------
-
-The parameters used in the datasets try to represent at best typical industrial runs in order to obtain representative speedups. For example, the iterative solvers are never converged to machine accuracy, but only as a percentage of the initial residual.
-
-The different datasets are:
-
-SPHERE_16.7M ... 16.7M sphere mesh
-SPHERE_132M .... 132M sphere mesh
-
-How to execute Alya with a given dataset
-----------------------------------------
-
-In order to run ALYA, you need at least the following input files per execution:
-
-X.dom.dat
-X.ker.dat
-X.nsi.dat
-X.dat
-
-In our case X=sphere
-
-To execute a simulation, you must be inside the input directory and you should submit a job like:
-
-mpirun Alya.x sphere
-
-How to measure the speedup
---------------------------
-
-There are many ways to compute the scalability of Nastin module.
-
-1. For the complete cycle including: element assembly + boundary assembly + subgrid scale assembly + solvers, etc.
-
-2. For single kernels: element assembly, boundary assembly, subgrid scale assembly, solvers
-
-3. Using overall times
-
-
-1. In *.nsi.cvg file, column "30. Elapsed CPU time"
-
-
-2. Single kernels. Here, average and maximum times are indicated in *.nsi.cvg at each iteration of each time step:
-
-Element assembly: 19. Ass. ave cpu time 20. Ass. max cpu time
-
-Boundary assembly: 33. Bou. ave cpu time 34. Bou. max cpu time
-
-Subgrid scale assembly: 31. SGS ave cpu time 32. SGS max cpu time
-
-Iterative solvers: 21. Sol. ave cpu time 22. Sol. max cpu time
-
+Data sets
+---------
+
+The parameters used in the datasets are chosen to represent typical industrial runs as closely as possible, in order to obtain representative speedups. For example, the iterative solvers are never converged to machine accuracy, but only to a given percentage of the initial residual.
+
+The different datasets are:
+
+SPHERE_16.7M ... 16.7M sphere mesh
+SPHERE_132M .... 132M sphere mesh
+
+How to execute Alya with a given dataset
+----------------------------------------
+
+In order to run ALYA, you need at least the following input files per execution:
+
+X.dom.dat
+X.ker.dat
+X.nsi.dat
+X.dat
+
+In our case X=sphere
+
+To execute a simulation, you must be inside the input directory, and you should submit a job like:
+
+mpirun Alya.x sphere
+
+How to measure the speedup
+--------------------------
+
+There are many ways to compute the scalability of the Nastin module:
+
+1. For the complete cycle, including element assembly + boundary assembly + subgrid scale assembly + solvers, etc.
+
+2. For single kernels: element assembly, boundary assembly, subgrid scale assembly, solvers
+
+3. Using overall times
+
+
+1. In the *.nsi.cvg file, column "30. Elapsed CPU time"
+
+
+2. Single kernels. Here, average and maximum times are indicated in *.nsi.cvg at each iteration of each time step:
+
+Element assembly:       19. Ass. ave cpu time   20. Ass. max cpu time
+
+Boundary assembly:      33. Bou. ave cpu time   34. Bou. max cpu time
+
+Subgrid scale assembly: 31. SGS ave cpu time    32. SGS max cpu time
+
+Iterative solvers:      21. Sol. ave cpu time   22. Sol. max cpu time
+
-Note that in the case of using Runge-Kutta time integration (the case of the sphere), the element and boundary assembly times are this of the last assembly of current time step (out of three for third order).
+Note that in the case of using Runge-Kutta time integration (the case of the sphere), the element and boundary assembly times are those of the last assembly of the current time step (out of three for third order).
-
-3. At the end of *.log file, total timings are shown for all modules. In this case we use the first value of the NASTIN MODULE.
-
-Contact
--------
-
-If you have any question regarding the runs, please feel free to contact Guillaume Houzeaux: guillaume.houzeaux@bsc.es
+
+3. At the end of the *.log file, total timings are shown for all modules. In this case, we use the first value of the NASTIN MODULE.
+
+Contact
+-------
+
+If you have any questions regarding the runs, please feel free to contact Guillaume Houzeaux: guillaume.houzeaux@bsc.es
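
For reviewers who want to exercise the updated instructions end to end, here is a minimal shell sketch that strings together the commands from ALYA_Build_README.txt and ALYA_Run_README.txt above. The source and dataset paths and the MPI process count are illustrative placeholders, not values prescribed by the benchmark suite:

    # Build, following ALYA_Build_README.txt:
    cd Executables/unix
    # edit config.in: MPI wrappers and paths, integer size (-DI8 for 8-byte
    # integers), and the matching Metis version (metis-4.0 or metis-5.1.0_i8)
    ./configure -x nastin parall
    make metis4              # or: make metis5 (with metis-5.1.0_i8 and -DI8)
    make

    # Run Test Case A, following ALYA_Run_README.txt (X=sphere); replace the
    # placeholder paths and process count with your site-specific values
    cd /path/to/SPHERE_16.7M
    mpirun -np 256 /path/to/Executables/unix/Alya.x sphere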
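
For the speedup measurement based on overall times, a small sketch like the one below can pull the "30. Elapsed CPU time" value from the last data line of the *.nsi.cvg file of two runs and form their ratio. It assumes whitespace-separated columns, header lines starting with '#', and hypothetical directory names ref_run/ and scaled_run/; adjust it to the actual layout of your output files:

    # speedup = T(reference run) / T(scaled run), using field 30 of the last data line
    t_ref=$(awk '$1 !~ /^#/ && NF >= 30 { t = $30 } END { print t }' ref_run/sphere.nsi.cvg)
    t_new=$(awk '$1 !~ /^#/ && NF >= 30 { t = $30 } END { print t }' scaled_run/sphere.nsi.cvg)
    awk -v a="$t_ref" -v b="$t_new" 'BEGIN { printf "speedup = %.2f\n", a / b }'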