but check your system documentation since mpirun may be replaced by
`mpiexec, runjob, aprun, srun,` etc. Note also that normally you are not
allowed to run MPI programs interactively without using the
batch system.
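As a rough illustration only, a launch line for the `pw.x` executable might look like the sketch below; the launcher name, process count, pool count and input file name are placeholders that must be adapted to your system and dataset (and the pool count must not exceed the number of k-points).

```bash
# Hedged sketch of an MPI launch of Quantum Espresso's pw.x.
# "mpirun", the rank count and the input/output file names are assumptions;
# on many systems the launcher is srun, mpiexec, aprun, etc.
mpirun -np 64 pw.x -npool 2 -input ausurf.in > ausurf.out
```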
A couple of examples for PRACE systems are given in the next section.
### Hints for running the GPU version
The GPU port of Quantum Espresso runs almost entirely in GPU memory. This means that jobs are restricted
by the memory of the GPU device, normally 16-32 GB, regardless of the main node memory. Thus, unless many nodes are used, the user is likely to see job failures due to lack of memory, even for small datasets.
For example, on the CSCS Piz Daint supercomputer each node has only 1 NVIDIA Tesla P100 (16GB) which means that you will need at least 4 nodes to run even the smallest dataset (AUSURF in the UEABS).
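The following is a hedged sketch of what a multi-node GPU job submission could look like on a Piz Daint-like system with one GPU per node; the partition/constraint names, walltime and file names are assumptions, not verified settings.

```bash
#!/bin/bash
# Sketch of a 4-node GPU job (1 MPI rank per node, matching 1 GPU per node).
# The SLURM options below are placeholders for a Piz Daint-like configuration.
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=1
#SBATCH --constraint=gpu
#SBATCH --time=01:00:00

# Pool count is an assumption; it must not exceed the number of k-points.
srun pw.x -npool 2 -input ausurf.in > ausurf.out
```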
## 6. Examples
We now give one build example and two run examples.
Example job scripts for various supercomputer systems in PRACE are available in the repository.
### Computer System: DAVIDE P100 cluster, CINECA
...
...
haven't been substantially tested for Quantum Espresso (e.g. flat
mode) but significant differences in performance for most inputs are
not expected.
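As a hedged illustration, on KNL systems the memory and cluster modes are typically selected through SLURM node constraints; the exact constraint names below are assumptions and vary between sites.

```bash
# Sketch: requesting a KNL memory/cluster mode via SLURM constraints.
# The feature names ("cache"/"flat", "quadrant") are assumptions and
# depend on how the site has configured its KNL nodes.
#SBATCH --constraint="cache,quadrant"
```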
An example SLURM batch script for the A2 partition is given below: