Commit 83e85d3d, authored Dec 18, 2018 by Cedric Jourdain
ADD job script for Daint
parent 3624b8ac
4 changed files
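All four scripts follow the same pattern: source a Daint environment file that defines $install_dir, cd into the test-case build of specfem3d_globe, derive the MPI rank count from the SLURM allocation, then run the mesher (xmeshfem3D) followed by the solver (xspecfem3D) with srun. A minimal usage sketch, assuming the scripts are submitted from specfem3d/job_script so that the relative ../env paths resolve:

    cd specfem3d/job_script
    sbatch job_daint-cpu-only_test_case_A.slurm
    squeue -u $USER   # stdout ends up in specfem3D_test_case_A_daint-cpu-<jobid>.output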
specfem3d/job_script/job_daint-cpu-only_test_case_A.slurm  (new file, mode 100644)
#!/bin/bash -l
#SBATCH --job-name=specfem3D_test_case_A
#SBATCH --time=01:30:00
#SBATCH --nodes=24
#SBATCH --ntasks-per-core=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=6
#SBATCH --partition=normal
#SBATCH --constraint=gpu
#SBATCH --output=specfem3D_test_case_A_daint-cpu-%j.output

set -e
source ../env/env_daint-cpu-only
cd $install_dir/TestCaseA/specfem3d_globe

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
ulimit -s unlimited

MESHER_EXE=./bin/xmeshfem3D
SOLVER_EXE=./bin/xspecfem3D

# backup files used for this simulation
cp DATA/Par_file OUTPUT_FILES/
cp DATA/STATIONS OUTPUT_FILES/
cp DATA/CMTSOLUTION OUTPUT_FILES/

##
## mesh generation
##
sleep 2
echo
echo `date`
echo "starting MPI mesher"
echo

MPI_PROCESS=`echo "$SLURM_NNODES * $SLURM_NTASKS_PER_NODE" | bc -l`
echo "SLURM_NTASKS_PER_NODE = " $SLURM_NTASKS_PER_NODE
echo "SLURM_CPUS_PER_TASK = " $SLURM_CPUS_PER_TASK
echo "SLURM_NNODES = " $SLURM_NNODES
echo "MPI_PROCESS = $MPI_PROCESS"

time srun -n ${MPI_PROCESS} ${MESHER_EXE}
echo "mesher done: `date`"
echo

##
## forward simulation
##
sleep 2
echo
echo `date`
echo "starting run in current directory $PWD"
echo

time srun -n ${MPI_PROCESS} ${SOLVER_EXE}

echo "finished successfully"
echo `date`
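With --nodes=24 and --ntasks-per-node=4, MPI_PROCESS evaluates to 24 * 4 = 96 MPI ranks, each running 6 OpenMP threads (--cpus-per-task=6 with --ntasks-per-core=2 fills the 24 logical CPUs of a Daint GPU node's 12-core host processor). SPECFEM3D_GLOBE expects this rank count to match NPROC_XI * NPROC_ETA * NCHUNKS in DATA/Par_file. The bc pipeline could equally be written with plain shell arithmetic, for example:

    MPI_PROCESS=$(( SLURM_NNODES * SLURM_NTASKS_PER_NODE ))   # 24 * 4 = 96 for test case A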
specfem3d/job_script/job_daint-cpu-only_test_case_B.slurm  (new file, mode 100644)
#!/bin/bash -l
#SBATCH --job-name=specfem3D_test_case_B
#SBATCH --time=00:30:00
#SBATCH --nodes=384
#SBATCH --ntasks-per-core=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=6
#SBATCH --partition=normal
#SBATCH --constraint=gpu
#SBATCH --output=specfem3D_test_case_B_daint-cpu-%j.output

set -e
source ../env/env_daint-cpu-only
cd $install_dir/TestCaseB/specfem3d_globe

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
ulimit -s unlimited

MESHER_EXE=./bin/xmeshfem3D
SOLVER_EXE=./bin/xspecfem3D

# backup files used for this simulation
cp DATA/Par_file OUTPUT_FILES/
cp DATA/STATIONS OUTPUT_FILES/
cp DATA/CMTSOLUTION OUTPUT_FILES/

##
## mesh generation
##
sleep 2
echo
echo `date`
echo "starting MPI mesher"
echo

MPI_PROCESS=`echo "$SLURM_NNODES * $SLURM_NTASKS_PER_NODE" | bc -l`
echo "SLURM_NTASKS_PER_NODE = " $SLURM_NTASKS_PER_NODE
echo "SLURM_CPUS_PER_TASK = " $SLURM_CPUS_PER_TASK
echo "SLURM_NNODES = " $SLURM_NNODES
echo "MPI_PROCESS = $MPI_PROCESS"

time srun -n ${MPI_PROCESS} ${MESHER_EXE}
echo "mesher done: `date`"
echo

##
## forward simulation
##
sleep 2
echo
echo `date`
echo "starting run in current directory $PWD"
echo

time srun -n ${MPI_PROCESS} ${SOLVER_EXE}

echo "finished successfully"
echo `date`
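The test case B variant changes only the scale: 384 nodes with the same 4 tasks per node give 384 * 4 = 1536 MPI ranks, within a 30-minute wall-time request.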
specfem3d/job_script/job_daint-gpu_test_case_A.slurm  (new file, mode 100644)
#!/bin/bash -l
#SBATCH --job-name=specfem3D_test_case_A
#SBATCH --time=01:00:00
#SBATCH --nodes=24
#SBATCH --ntasks-per-core=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=6
#SBATCH --partition=normal
#SBATCH --constraint=gpu
#SBATCH --output=specfem3D_test_case_A_daint-gpu-%j.output

set -e
source ../env/env_daint-gpu
cd $install_dir/TestCaseA/specfem3d_globe

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export CRAY_CUDA_MPS=1
ulimit -s unlimited

MESHER_EXE=./bin/xmeshfem3D
SOLVER_EXE=./bin/xspecfem3D

# backup files used for this simulation
cp DATA/Par_file OUTPUT_FILES/
cp DATA/STATIONS OUTPUT_FILES/
cp DATA/CMTSOLUTION OUTPUT_FILES/

##
## mesh generation
##
sleep 2
echo
echo `date`
echo "starting MPI mesher"
echo

MPI_PROCESS=`echo "$SLURM_NNODES * $SLURM_NTASKS_PER_NODE" | bc -l`
echo "SLURM_NTASKS_PER_NODE = " $SLURM_NTASKS_PER_NODE
echo "SLURM_CPUS_PER_TASK = " $SLURM_CPUS_PER_TASK
echo "SLURM_NNODES = " $SLURM_NNODES
echo "MPI_PROCESS = $MPI_PROCESS"

time srun -n ${MPI_PROCESS} ${MESHER_EXE}
echo "mesher done: `date`"
echo

##
## forward simulation
##
sleep 2
echo
echo `date`
echo "starting run in current directory $PWD"
echo

#unset FORT_BUFFERED
time srun -n ${MPI_PROCESS} ${SOLVER_EXE}

echo "finished successfully"
echo `date`
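The GPU scripts differ from the CPU-only ones in two respects: they source ../env/env_daint-gpu instead of the CPU-only environment, and they export CRAY_CUDA_MPS=1. Since each Daint GPU node carries a single P100 while the scripts keep 4 MPI ranks per node, the ranks have to share the device; on Cray systems CRAY_CUDA_MPS=1 enables the CUDA Multi-Process Service so that concurrent ranks can use the GPU at the same time.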
specfem3d/job_script/job_daint-gpu_test_case_B.slurm  (new file, mode 100644)
#!/bin/bash -l
#SBATCH --job-name=specfem3D_test_case_B
#SBATCH --time=01:00:00
#SBATCH --nodes=384
#SBATCH --ntasks-per-core=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=6
#SBATCH --partition=normal
#SBATCH --constraint=gpu
#SBATCH --output=specfem3D_test_case_B_daint-gpu-%j.output

set -e
source ../env/env_daint-gpu
cd $install_dir/TestCaseB/specfem3d_globe

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export CRAY_CUDA_MPS=1
ulimit -s unlimited

MESHER_EXE=./bin/xmeshfem3D
SOLVER_EXE=./bin/xspecfem3D

# backup files used for this simulation
cp DATA/Par_file OUTPUT_FILES/
cp DATA/STATIONS OUTPUT_FILES/
cp DATA/CMTSOLUTION OUTPUT_FILES/

##
## mesh generation
##
sleep 2
echo
echo `date`
echo "starting MPI mesher"
echo

MPI_PROCESS=`echo "$SLURM_NNODES * $SLURM_NTASKS_PER_NODE" | bc -l`
echo "SLURM_NTASKS_PER_NODE = " $SLURM_NTASKS_PER_NODE
echo "SLURM_CPUS_PER_TASK = " $SLURM_CPUS_PER_TASK
echo "SLURM_NNODES = " $SLURM_NNODES
echo "MPI_PROCESS = $MPI_PROCESS"

time srun -n ${MPI_PROCESS} ${MESHER_EXE}
echo "mesher done: `date`"
echo

##
## forward simulation
##
sleep 2
echo
echo `date`
echo "starting run in current directory $PWD"
echo

#unset FORT_BUFFERED
time srun -n ${MPI_PROCESS} ${SOLVER_EXE}

echo "finished successfully"
echo `date`