Commit db1b4481 authored by Miguel Avillez

Update gadget/4.0/README.md

parent 3dd79bd0
@@ -146,18 +146,20 @@ sbatch slurm_script.sh
where the slurm_script.sh has the form (for a run with 1024 cores):
```
#!/bin/bash
#SBATCH --time=02:00:00
#SBATCH --account=XXXXX
#SBATCH --job-name=DM_L50-N512
#SBATCH --output=g_%j.out
#SBATCH --error=g_%j.error
#SBATCH --nodes=32
#SBATCH --ntasks=1056
#SBATCH --cpus-per-task=1
#SBATCH --ntasks-per-socket=17
#SBATCH --ntasks-per-node=33
#SBATCH --exclusive
#SBATCH --partition=batch
#SBATCH --qos=prace
#SBATCH --workdir=.
echo
echo "Running on hosts: $SLURM_NODELIST"
@@ -166,8 +168,9 @@ echo "Running on $SLURM_NPROCS processors."
echo "Current working directory is `pwd`"
echo
srun ./gadget4_exe param.txt
```
Where:
* gadget4_exe is the executable.
* param.txt is the input parameter file.
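
For completeness, a minimal submit-and-monitor sequence might look like the sketch below. It assumes gadget4_exe and param.txt sit in the submission directory; the job ID 12345 is only a placeholder, and squeue/tail are the standard SLURM and shell tools, not anything specific to Gadget-4.
```
# Submit the job script shown above; sbatch prints the assigned job ID.
sbatch slurm_script.sh

# Check the state of your jobs in the queue (PD = pending, R = running).
squeue -u $USER

# Follow the run output; the file name comes from --output=g_%j.out,
# with %j replaced by the job ID (12345 is a placeholder).
tail -f g_12345.out
```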
@@ -181,10 +184,9 @@ must allocate 33 cores per compute node. For a run with 1024 cores in 32 nodes w...
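
As a cross-check of the task counts, the short sketch below reproduces the arithmetic with the numbers from this example (the variable names are illustrative only): Gadget-4 sets aside one MPI rank per shared-memory node for one-sided communication, which is why 33 ranks per node on 32 nodes leave 1024 compute ranks.
```
# Task bookkeeping for this example: 33 MPI ranks per node on 32 nodes.
NODES=32
RANKS_PER_NODE=33
NTASKS=$(( NODES * RANKS_PER_NODE ))        # 1056, the value passed to --ntasks
HELPER_RANKS=$NODES                         # 1 rank per shared-memory node assists
                                            # one-sided communication (see output below)
COMPUTE_RANKS=$(( NTASKS - HELPER_RANKS ))  # 1024 ranks are left for the simulation
echo "${NTASKS} tasks in total, ${COMPUTE_RANKS} compute ranks"
```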
##### OUTPUT of a run with 1024 cores
```
Running on hosts: jwc03n[082-097,169-184]
Running on 32 nodes.
Running on 1056 processors.
Current working directory is XXXXXXXXX
Shared memory islands host a minimum of 33 and a maximum of 33 MPI ranks.
We shall use 32 MPI ranks in total for assisting one-sided communication (1 per shared memory node).
@@ -197,20 +199,26 @@ We shall use 32 MPI ranks in total for assisting one-sided communication (1 per shared memory node).
This is Gadget, version 4.0.
Git commit 8ee7f358cf43a37955018f64404db191798a32a3, Tue Jun 15 15:10:36 2021 +0200
Code was compiled with the following compiler and flags:
...
Code was compiled with the following settings:
ASMTH=2.0
CREATE_GRID
DOUBLEPRECISION=2
FOF
IDS_32BIT
LEAN
NGENIC=512
NGENIC_2LPT
NSOFTCLASSES=1
NTYPES=2
PERIODIC
PMGRID=768
POSITIONS_IN_32BIT
POWERSPEC_ON_OUTPUT
RANDOMIZE_DOMAINCENTER
SELFGRAVITY
TREE_NUM_BEFORE_NODESPLIT=4
TREEPM_NOTIMESPLIT
Running on 1024 MPI tasks.
...
```
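
For orientation, a Config.sh with the compile-time switches reported in this output might look like the sketch below. The selection simply mirrors the list printed by the code; the file name Config.sh follows the usual Gadget-4 convention, the brief annotations are indicative only, and the Gadget-4 build documentation should be consulted for the meaning of each switch and the exact build procedure on your system.
```
# Config.sh sketch mirroring the compile-time settings reported above.
ASMTH=2.0                      # PM/tree force-split scale
CREATE_GRID                    # build the unperturbed particle grid for the ICs
DOUBLEPRECISION=2
FOF                            # friends-of-friends group finding
IDS_32BIT
LEAN                           # memory-lean mode for DM-only runs
NGENIC=512                     # IC generation on a 512^3 particle grid
NGENIC_2LPT                    # second-order Lagrangian perturbation theory ICs
NSOFTCLASSES=1
NTYPES=2                       # number of particle types
PERIODIC
PMGRID=768                     # PM mesh size
POSITIONS_IN_32BIT
POWERSPEC_ON_OUTPUT
RANDOMIZE_DOMAINCENTER
SELFGRAVITY
TREE_NUM_BEFORE_NODESPLIT=4
TREEPM_NOTIMESPLIT
```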