Commit 752b2f58
Authored Dec 10, 2018 by Cedric Jourdain

Add slurm script for Occigen
Parent dbc76fe4
Changes 2
specfem3d/job_script/job_occigen_test_case_A.slurm
0 → 100644
#!/bin/bash
#SBATCH -J Test_case_A
#SBATCH --nodes=24
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=6
#SBATCH --time=00:30:00
#SBATCH --mem=118000
#SBATCH --output specfem_small_118G_%j.output
#SBATCH -C HSW24

set -e
source ./env/env_occigen
cd $install_dir/specfem3d_globe

export I_MPI_DOMAIN=auto
export I_MPI_PIN_RESPECT_CPUSET=0
export I_MPI_DEBUG=4

# Make sure that OMP_NUM_THREADS / KMP_HW_SUBSET = cpus-per-task
export KMP_HW_SUBSET=2T
export OMP_NUM_THREADS=12

ulimit -s unlimited

MESHER_EXE=./bin/xmeshfem3D
SOLVER_EXE=./bin/xspecfem3D

# backup files used for this simulation
cp DATA/Par_file OUTPUT_FILES/
cp DATA/STATIONS OUTPUT_FILES/
cp DATA/CMTSOLUTION OUTPUT_FILES/

##
## mesh generation
##
sleep 2
echo
echo `date`
echo "starting MPI mesher"
echo

MPI_PROCESS=`echo "$SLURM_NNODES*$SLURM_NTASKS_PER_NODE" | bc -l`
echo "SLURM_NTASKS_PER_NODE = " $SLURM_NTASKS_PER_NODE
echo "SLURM_CPUS_PER_TASKS = " $SLURM_CPUS_PER_TASK
echo "SLURM_NNODES=" $SLURM_NNODES
echo "MPI_PROCESS $MPI_PROCESS"

time mpirun -n ${MPI_PROCESS} ${MESHER_EXE}

echo " mesher done: `date`"
echo

##
## forward simulation
##
sleep 2
echo
echo `date`
echo starting run in current directory $PWD
echo
#unset FORT_BUFFERED
time mpirun -n ${MPI_PROCESS} ${SOLVER_EXE}

echo "finished successfully"
echo `date`
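For reference (not part of the committed file), the process and thread layout implied by the Test Case A header can be sanity-checked outside the batch system. The values below are simply copied from the #SBATCH directives above; KMP_HW_SUBSET=2T places two hyperthreads per core, which is why OMP_NUM_THREADS is twice --cpus-per-task.

# Sanity-check sketch; node/task/core counts assumed from the #SBATCH header above
NODES=24
TASKS_PER_NODE=4
CPUS_PER_TASK=6
THREADS_PER_CORE=2
echo "MPI ranks:           $((NODES * TASKS_PER_NODE))"              # 96, matches MPI_PROCESS
echo "OpenMP threads/rank: $((CPUS_PER_TASK * THREADS_PER_CORE))"    # 12, matches OMP_NUM_THREADS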
specfem3d/job_script/job_occigen_test_case_B.slurm
0 → 100644
#!/bin/bash
#SBATCH -J Large_case_B
#SBATCH --nodes=384
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=6
#SBATCH --time=00:29:59
#SBATCH --mem=110GB
#SBATCH --output specfem_large_NPROC_XI_16_NEX_XI_384_6omp_10min_%j.output
#SBATCH -C HSW24

set -e
source ./env/env_occigen
cd $install_dir/specfem3d_globe

export I_MPI_DOMAIN=auto
export I_MPI_PIN_RESPECT_CPUSET=0
export I_MPI_DEBUG=4

# Make sure that OMP_NUM_THREADS / KMP_HW_SUBSET = cpus-per-task
export KMP_HW_SUBSET=2T
export OMP_NUM_THREADS=12

ulimit -s unlimited

MESHER_EXE=./bin/xmeshfem3D
SOLVER_EXE=./bin/xspecfem3D

# backup files used for this simulation
cp DATA/Par_file OUTPUT_FILES/
cp DATA/STATIONS OUTPUT_FILES/
cp DATA/CMTSOLUTION OUTPUT_FILES/

##
## mesh generation
##
sleep 2
echo
echo `date`
echo "starting MPI mesher"
echo

MPI_PROCESS=`echo "$SLURM_NNODES*$SLURM_NTASKS_PER_NODE" | bc -l`
echo "$SLURM_NNODES*$SLURM_CPUS_PER_TASK"
echo "SLURM_NTASKS_PER_NODE = " $SLURM_NTASKS_PER_NODE
echo "SLURM_CPUS_PER_TASKS = " $SLURM_CPUS_PER_TASK
echo "SLURM_NNODES=" $SLURM_NNODES
echo "MPI_PROCESS $MPI_PROCESS"

time mpirun -n ${MPI_PROCESS} ${MESHER_EXE}

echo " mesher done: `date`"
echo

##
## forward simulation
##
sleep 2
echo
echo `date`
echo starting run in current directory $PWD
echo
#unset FORT_BUFFERED
time mpirun -n ${MPI_PROCESS} ${SOLVER_EXE}

echo "finished successfully"
echo `date`
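As a usage note (an assumption, not part of the commit): both scripts source ./env/env_occigen and rely on $install_dir being defined there, so they would typically be submitted with sbatch from a directory where those relative paths resolve. A minimal sketch, assuming that layout:

# Hypothetical submission; paths assumed to resolve from the current directory
sbatch specfem3d/job_script/job_occigen_test_case_A.slurm   # 24 nodes  x 4 ranks/node =   96 MPI processes
sbatch specfem3d/job_script/job_occigen_test_case_B.slurm   # 384 nodes x 4 ranks/node = 1536 MPI processes
squeue -u $USER                                             # check the state of the submitted jobs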