Commit 6abed49d authored by Andrew Emerson

New job files

parent a961cabe
#!/bin/bash
# script for DAVIDE 16 cores/node
# - 13 nodes, 4 tasks/node, 4 OMP threads/task
# Below <account> represents the budget
#SBATCH -N13
#SBATCH --gres=gpu:4
#SBATCH -A <account>
#SBATCH --tasks-per-node=4
#SBATCH --cpus-per-task=4   # reserve 4 cores per task for the 4 OpenMP threads
#SBATCH -p dvd_usr_prod
#SBATCH -t 1:00:00
export OMP_NUM_THREADS=4
srun -v -n 52 ./pw.x -input ./Ta2O5.in -npool 26
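The script above requests 13 x 4 = 52 MPI tasks, so `-npool 26` places 2 tasks in each k-point pool. A minimal submission sketch follows; the filename `job_davide.sh` is an assumption for illustration, not part of the commit:

``` shell
# Submit the DAVIDE batch script and monitor it in the queue
sbatch job_davide.sh   # script name is illustrative
squeue -u $USER
```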
#!/bin/bash
# Batch file for using the Intel APS trace facility
# on Marconi KNL
#
#SBATCH -N 4
#SBATCH --ntasks-per-node=64
#SBATCH --error=ta205-err.%j
#SBATCH --mem=80GB
#SBATCH --time=00:30:00
#SBATCH -A cin_staff
#SBATCH -p knl_usr_prod
start_time=$(date +"%s")
module purge
module load profile/knl
module load autoload qe/6.3_knl
export OMP_NUM_THREADS=1
source $INTEL_HOME/performance_snapshots/apsvars.sh
srun --cpu-bind=cores aps pw.x -npool 2 -ndiag 32 -input pw.in
end_time=$(date +"%s")
walltime=$(($end_time-$start_time))
echo "walltime $walltime"
@@ -189,7 +189,7 @@ An example SLURM batch script for the A2 partition is given below:
 ``` shell
 #!/bin/bash
 #SBATCH -N2
-#SBATCH --tasks-per-node=34
+#SBATCH --tasks-per-node=64
 #SBATCH -A <accountno>
 #SBATCH -t 1:00:00
@@ -199,18 +199,17 @@ module load profile/knl
 module load autoload qe/6.0_knl
-export OMP_NUM_THREADS=4
+export OMP_NUM_THREADS=1
 export MKL_NUM_THREADS=${OMP_NUM_THREADS}
-mpirun pw.x -npool 4 -input file.in > file.out
+srun pw.x -npool 2 -ndiag 16 -input file.in > file.out
 ```
 In the above with the SLURM directives we have asked for 2 KNL nodes (each with 68 cores) in
-cache/quadrant mode and 93 Gb main memory each. We are running QE in
-hybrid mode using 34 MPI processes/node, each with 4 OpenMP
-threads/process and distributing the k-points in 4 pools; the Intel
-MKl library will also use 4 OpenMP threads/process.
+cache/quadrant mode and 93 GB main memory each. We are running QE in MPI-only
+mode using 64 MPI processes/node with the k-points in 2 pools; the diagonalisation of the Hamiltonian
+will be done by 16 (4x4) tasks.
 Note that this script needs to be submitted using the KNL scheduler as follows:
@@ -233,4 +232,4 @@ Please check the Cineca documentation for information on using the
 | Large test case | CNT | Carbon nanotube | | Large scaling runs only. Memory and time requirements high|
-Last updated: 14-January-2019
+__Last updated: 14-January-2019__
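For reference, the `-npool`/`-ndiag` arithmetic behind the updated script in the diff above: `-ndiag` defines a square process grid for the ScaLAPACK diagonalisation and must not exceed the number of tasks in one pool.

``` shell
# 2 nodes x 64 tasks/node = 128 MPI ranks in total
# -npool 2  -> 128 / 2 = 64 ranks per k-point pool
# -ndiag 16 -> 4x4 grid for the Hamiltonian diagonalisation (16 <= 64)
srun pw.x -npool 2 -ndiag 16 -input file.in > file.out
```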