The application codes that constitute the UEABS are:
- [Quantum Espresso](#espresso)
- [SHOC](#shoc)
- [SPECFEM3D](#specfem3d)
- [TensorFlow](#tensorflow)
# ALYA <a name="alya"></a>
The SHOC benchmark suite currently contains benchmark programs, categorized based
| **General information** | **Scientific field** | **Language** | **MPI** | **OpenMP** | **GPU** | **LoC** | **Code description** |
|------------------|----------------------|--------------|---------|------------|---------------------|---------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
| [- Website](https://geodynamics.org/cig/software/specfem3d_globe/) <br>[- Source](https://github.com/geodynamics/specfem3d_globe.git) <br>[- Bench](https://repository.prace-ri.eu/git/UEABS/ueabs/tree/r2.1-dev/specfem3d) <br>[- Summary](https://repository.prace-ri.eu/git/UEABS/ueabs/blob/r2.1-dev/specfem3d/PRACE_UEABS_Specfem3D_summary.pdf) | Geodynamics | Fortran | yes | yes | Yes (CUDA) | 140000 | The software package SPECFEM3D simulates three-dimensional global and regional seismic wave propagation based upon the spectral-element method (SEM). |
# TensorFlow <a name="tensorflow"></a>
TensorFlow (https://www.tensorflow.org) is a popular open-source library for symbolic math and linear algebra, with particular optimizations for neural-network-based machine learning workflows. Maintained by Google, it is widely used for research and production in both academia and industry.
TensorFlow supports a wide variety of hardware platforms (CPUs, GPUs, TPUs), and can be scaled up to utilize multiple compute devices on a single compute node or across multiple nodes. The main objective of this benchmark is to profile the scaling behavior of TensorFlow on different hardware, and thereby provide a reference baseline of its performance for applications of different sizes.
There are many open-source datasets available for benchmarking TensorFlow, such as `mnist`, `fashion_mnist`, `cifar`, and `imagenet`. This benchmark suite, however, focuses on a scientific research use case: `DeepGalaxy`, a code built with TensorFlow that uses a deep neural network to classify galaxy mergers observed by the Hubble Space Telescope and the Sloan Digital Sky Survey.
- Website: https://github.com/maxwelltsai/DeepGalaxy
- Code download: https://github.com/maxwelltsai/DeepGalaxy
- [Prerequisites installation](tensorflow/prerequisites-installation.md)
- [Test Case A](tensorflow/Testcase_A/)
- [Test Case B](tensorflow/Testcase_B/)
- [Test Case C](tensorflow/Testcase_C/)
TensorFlow
===
TensorFlow (https://www.tensorflow.org) is a popular open-source library for symbolic math and linear algebra, with particular optimizations for neural-network-based machine learning workflows. Maintained by Google, it is widely used for research and production in both academia and industry.
TensorFlow supports a wide variety of hardware platforms (CPUs, GPUs, TPUs), and can be scaled up to utilize multiple compute devices on a single compute node or across multiple nodes. The main objective of this benchmark is to profile the scaling behavior of TensorFlow on different hardware, and thereby provide a reference baseline of its performance for applications of different sizes.
DeepGalaxy
===
There are many open-source datasets available for benchmarking TensorFlow, such as `mnist`, `fashion_mnist`, `cifar`, and `imagenet`. This benchmark suite, however, focuses on a scientific research use case: `DeepGalaxy`, a code built with TensorFlow that uses a deep neural network to classify galaxy mergers observed by the Hubble Space Telescope and the Sloan Digital Sky Survey.
- Website: https://github.com/maxwelltsai/DeepGalaxy
- Code download: https://github.com/maxwelltsai/DeepGalaxy
- [Prerequisites installation](#prerequisites-installation)
- [Test Case A](Testcase_A/README.md)
- [Test Case B](Testcase_B/README.md)
- [Test Case C](Testcase_C/README.md)
## Prerequisites Installation
The prerequisites consist of the Python packages listed below. It is recommended to create a Python virtual environment (with either `pyenv` or `conda`). The packages can then be installed using the `pip` package management tool:
```
pip install tensorflow
pip install horovod
pip install scikit-learn
pip install scikit-image
pip install pandas
```
Note: there is no guarantee of optimal performance when `tensorflow` is installed using `pip`. It is better to compile `tensorflow` from source, in which case the compiler can take advantage of the advanced instruction sets supported by the processor (e.g., AVX-512). Official build instructions can be found at https://www.tensorflow.org/install/source. An HPC center may also provide a TensorFlow module optimized for its hardware, in which case the `pip install tensorflow` line can be replaced with a line like `module load <name_of_the_tensorflow_module>`.
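To check which kind of build is actually in use, a small helper like the following can be run (an illustrative sketch, not part of DeepGalaxy; `tf.sysconfig.get_build_info()` is available in TensorFlow 2.x):

```python
# Report which TensorFlow installation is active, to help distinguish a
# generic pip wheel from an optimized site build. Returns a message either way.
def tf_build_summary():
    try:
        import tensorflow as tf
    except ImportError:
        return "tensorflow is not installed in this environment"
    try:
        info = tf.sysconfig.get_build_info()  # available in TensorFlow 2.x
        cuda = info.get("is_cuda_build", False)
    except AttributeError:
        cuda = "unknown (TF < 2.x)"
    return "TF %s, CUDA build: %s" % (tf.__version__, cuda)

print(tf_build_summary())
```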
## How to benchmark the throughput of an HPC system
**Step 1**: Download the benchmark code:
```
git clone https://github.com/maxwelltsai/DeepGalaxy.git
```
This should clone the full benchmark code to a local directory called `DeepGalaxy`. Enter this directory with `cd DeepGalaxy`.
**Step 2**: Download the training dataset.
In the `DeepGalaxy` directory, download the training dataset. Depending on the benchmark size, there are three datasets available:
- (512, 512) pixels: https://edu.nl/r3wh3 (2GB)
- (1024, 1024) pixels: https://edu.nl/gcy96 (6.1GB)
- (2048, 2048) pixels: https://edu.nl/bruf6 (14GB)
**Step 3**: Run the code on different numbers of workers. For example, the following command executes the code on `np = 4` workers:
```
mpirun -np 4 python dg_train.py -f output_bw_512.hdf5 --epochs 20 --noise 0.1 --batch-size 4 --arch EfficientNetB4
```
where `output_bw_512.hdf5` is the training dataset downloaded in the previous step; please change the file name if necessary. The other parameters, such as `--epochs`, `--batch-size`, and `--arch`, can also be changed according to the size of the benchmark. For example, the `EfficientNetB0` deep neural network is suited to small HPC systems, `EfficientNetB4` to medium-sized ones, and `EfficientNetB7` to large systems. If plenty of memory is available, increasing `--batch-size` can improve the throughput; if `--batch-size` is too large, an out-of-memory error will occur.
It is wise to save the output of the `mpirun` command to a text file, for example, `DeepGalaxy.np_4.out`.
**Step 4**: Repeat Step 3 with different `np`.
Once all the desired `np` settings have been run, there should be a set of output files in the local directory, for example `DeepGalaxy.np_4.out`, `DeepGalaxy.np_8.out`, and so on. The throughput can then be extracted using the following command:
```
grep sample DeepGalaxy.np_4.out
```
A sample output looks like this:
```
7156/7156 [==============================] - 1435s 201ms/sample - loss: 5.9885 - sparse_categorical_accuracy: 0.0488 - val_loss: 5.8073 - val_sparse_categorical_accuracy: 0.1309
7156/7156 [==============================] - 1141s 160ms/sample - loss: 3.0371 - sparse_categorical_accuracy: 0.3376 - val_loss: 2.0614 - val_sparse_categorical_accuracy: 0.5666
7156/7156 [==============================] - 1237s 173ms/sample - loss: 0.5927 - sparse_categorical_accuracy: 0.8506 - val_loss: 0.0503 - val_sparse_categorical_accuracy: 0.9835
7156/7156 [==============================] - 1123s 157ms/sample - loss: 0.0245 - sparse_categorical_accuracy: 0.9963 - val_loss: 0.0033 - val_sparse_categorical_accuracy: 0.9994
7156/7156 [==============================] - 1236s 173ms/sample - loss: 0.0026 - sparse_categorical_accuracy: 0.9998 - val_loss: 9.3778e-07 - val_sparse_categorical_accuracy: 1.0000
```
The throughput can be read from the timing here, such as `173ms/sample`. This number is usually a bit larger in the first epoch, because `TensorFlow` needs to do some initialization there. So we could pick the number from the 3rd or even 5th epoch, once it has stabilized.
Extract this number for different `np`, and see how it changes as a function of `np`. In a system with perfect (i.e., linear) scaling, this number should remain constant; in reality, it increases due to communication overhead. The growth of this number as a function of `np` therefore tells us something about the scaling efficiency of the underlying system.
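This post-processing step can be sketched in a few lines of Python (the log line and the per-`np` timings below are illustrative examples, not measured values):

```python
import re

# Extract the per-sample time (in ms) from a Keras progress line such as
#   "7156/7156 [=====] - 1237s 173ms/sample - loss: ..."
def ms_per_sample(line):
    m = re.search(r"(\d+)ms/sample", line)
    return int(m.group(1)) if m else None

line = "7156/7156 [====] - 1237s 173ms/sample - loss: 0.5927"
print(ms_per_sample(line))  # 173

# With perfect scaling the per-sample time stays constant, so a simple
# relative efficiency is t_base / t_np. These timings are hypothetical.
times = {4: 173, 8: 181, 16: 198}  # np -> ms/sample
base = times[min(times)]
for np_, t in sorted(times.items()):
    print(f"np={np_}: efficiency {base / t:.2f}")
```

In a real run, the input would be the lines selected by `grep sample` from each `DeepGalaxy.np_*.out` file, skipping the first epochs as noted above.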
## Test Case A
This test case is designed to benchmark TensorFlow with small-to-medium-sized datasets using a medium-sized deep neural network (DNN). The image resolution is set at (512, 512) px. The training can be carried out using one or more nodes. The DNN is relatively small (about 17 million parameters), which would fit into most GPUs when using a `batch_size` of 8 or even 16.
The dataset can be downloaded at: https://surfdrive.surf.nl/files/index.php/s/Mzm28FQ1udG3FG7 (2GB)
If the training is done on a single node, running the following command (after necessary allocation of the compute resources) would be enough:
```
python dg_train.py -f output_bw_512.hdf5 --arch EfficientNetB4 --epochs 10 --noise 0.3 --batch-size 4
```
Please replace `output_bw_512.hdf5` with the actual dataset file name, and modify other parameters whenever necessary. A `--batch-size` of 8 may be used if the GPU has 32 GB of memory.
If multiple nodes are used, it is necessary to run the code with `mpirun` or `mpiexec`. For example, train the DNN on 2 nodes, each with 4 GPUs:
```
mpirun -np 8 python dg_train.py -f output_bw_512.hdf5 --arch EfficientNetB4 --epochs 10 --noise 0.3 --batch-size 4
```
If NVIDIA GPUs are used, `DeepGalaxy` can automatically bind an MPI process to a GPU, so no explicit specification of `CUDA_VISIBLE_DEVICES` is needed.
## Test Case B
This test case is designed to benchmark TensorFlow with small-to-medium-sized datasets using a large deep neural network (DNN). The image resolution is set at (512, 512) px. The training can be carried out using one or more nodes. The DNN is moderately large (about 64 million parameters); in comparison, the popular `ResNet50` CNN has about 23 million parameters. With a network of this size, a batch size of 2 or 4 is recommended for most GPUs.
The dataset can be downloaded at: https://surfdrive.surf.nl/files/index.php/s/Mzm28FQ1udG3FG7 (2GB)
If the training is done on a single node, running the following command (after necessary allocation of the compute resources) would be enough:
```
python dg_train.py -f output_bw_512.hdf5 --arch EfficientNetB7 --epochs 10 --noise 0.3 --batch-size 4
```
Please replace `output_bw_512.hdf5` with the actual dataset file name, and modify other parameters whenever necessary. A `--batch-size` of 8 may be used if the GPU has 32 GB of memory.
If multiple nodes are used, it is necessary to run the code with `mpirun` or `mpiexec`. For example, train the DNN on 2 nodes, each with 4 GPUs:
```
mpirun -np 8 python dg_train.py -f output_bw_512.hdf5 --arch EfficientNetB7 --epochs 10 --noise 0.3 --batch-size 4
```
If NVIDIA GPUs are used, `DeepGalaxy` can automatically bind an MPI process to a GPU, so no explicit specification of `CUDA_VISIBLE_DEVICES` is needed.
## Test Case C
This test case aims to stress the underlying hardware with high-resolution images and a large deep neural network (DNN). For example, a run on a Tier-0 cluster using 256 nodes (each with 2 CPU sockets) would look like this (please allocate the compute resources first):
```
mpirun -np 512 python dg_train.py -f output_bw_2048.hdf5 --arch EfficientNetB7 --epochs 10 --noise 0.3 --batch-size 1
```
Here, `--batch-size 1` is used because the DNN is so large that it requires up to 160 GB of memory. This exceeds the memory of any currently available GPU, so in principle the training should be performed on the CPU. On some large-memory nodes, a `batch-size` of 2 or even 4 might be possible.
As a workaround, it is still possible to perform the training on the GPUs by using CUDA's unified memory, although this will likely introduce a performance penalty. As an example, if one wishes to allocate 5x the size of the GPU memory, and carry out the training on 256 nodes (each with 4 GPUs), the command line would be:
```
mpirun -np 1024 python dg_train.py -f output_bw_2048.hdf5 --arch EfficientNetB7 --epochs 10 --noise 0.3 --batch-size 1 --gpu-mem-frac 5
```
Please note that the surplus memory requirement is offset by the host memory; in this example, the host memory is expected to provide (160 - 32) * 4 = 512 GB per node. Please choose `--gpu-mem-frac` according to the actual hardware specs to ensure that the model has enough memory to operate.
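This sizing can be reproduced with a back-of-the-envelope calculation (using the 160 GB model footprint and 32 GB of GPU memory quoted above; substitute your own hardware figures):

```python
# Figures quoted in the text above; adjust to the actual hardware.
model_mem_gb = 160   # approximate memory footprint of the DNN at batch size 1
gpu_mem_gb = 32      # memory of a single GPU
gpus_per_node = 4

# Host memory that must back the unified-memory shortfall, per node.
host_offset_gb = (model_mem_gb - gpu_mem_gb) * gpus_per_node
print(host_offset_gb)  # 512

# --gpu-mem-frac is the multiple of GPU memory the process may allocate,
# so it must be at least this ratio for the model to fit.
min_gpu_mem_frac = model_mem_gb / gpu_mem_gb
print(min_gpu_mem_frac)  # 5.0
```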
#!/bin/bash
echo "no need to compile; it's Python code with calls to TensorFlow and Horovod"
echo 'To install the required libraries, please run the `install_prerequisites.sh` script.'
#!/bin/bash
git clone https://github.com/maxwelltsai/DeepGalaxy.git
# please create a virtual environment and activate it if applicable.
pip install tensorflow
pip install horovod
pip install scikit-learn
pip install scikit-image
pip install h5py
#!/bin/bash
#SBATCH --job-name=dg_512_1_4_4_3_100 # <image_size>,<nb_node>,<nb_MPI_task>,<nb_gpu_per_node>,<number_position>,<number_epoch>
#SBATCH --ntasks=4 # number of tasks
#SBATCH -N 4 # number of nodes
#SBATCH --gres=gpu:4 # number of GPU per node
#SBATCH --cpus-per-task=10 # number of CPU cores
#SBATCH --hint=nomultithread # use physical core
#SBATCH --time=03:00:00 # walltime max
#SBATCH --exclusive
#SBATCH -A qbg@gpu
#SBATCH --output=dg.out # output filename
#SBATCH --error=dg.err # error filename
set -x
source env_bench
srun /gpfslocalsup/pub/idrtools/bind_gpu.sh python $DG_TRAIN
module purge
# modules load
module load openmpi/4.0.2-cuda
module load tensorflow-gpu/py3/2.1.0+hvd-0.19
module load git
GIT_ROOT=`git rev-parse --show-toplevel`
DG_DIR=$GIT_ROOT/Deepgalaxy/
DG_TRAIN=$DG_DIR/DeepGalaxy-master/dg_train.py
#!/bin/bash
#SBATCH --job-name=dg_512_1_4_4_3_100 # <image_size>,<nb_node>,<nb_MPI_task>,<nb_gpu_per_node>,<number_position>,<number_epoch>
#SBATCH --ntasks=4 # number of tasks
#SBATCH -N 4 # number of nodes
#SBATCH --gres=gpu:4 # number of GPU per node
#SBATCH --cpus-per-task=10 # number of CPU cores
#SBATCH --hint=nomultithread # use physical core
#SBATCH --time=03:00:00 # walltime max
#SBATCH --exclusive
#SBATCH -A qbg@gpu
#SBATCH --output=dg.out # output filename
#SBATCH --error=dg.err # error filename
set -x
source env_bench
srun /gpfslocalsup/pub/idrtools/bind_gpu.sh python $DG_TRAIN
module purge
# modules load
module load 2020
module load TensorFlow/2.1.0-foss-2019b-Python-3.7.4-CUDA-10.1.243
#!/bin/bash -l
#SBATCH -J DeepGalaxyTrain
#SBATCH -o DeepGalaxyTrain_out.txt
#SBATCH -e DeepGalaxyTrain_err.txt
#SBATCH -t 1:00:00
#SBATCH --partition gpu_titanrtx
#SBATCH --nodes 8
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4
module purge
module load 2020
module load TensorFlow/2.1.0-foss-2019b-Python-3.7.4-CUDA-10.1.243
# module load TensorFlow/1.15.0-foss-2019b-Python-3.7.4-10.1.243
mpirun -np 32 python dg_train.py -f output_bw_512.hdf5 --arch EfficientNetB4 --epochs 10 --noise 0.3 --batch-size 4
batch_medium.slurm
efn_b4.h5
env_bench
model_hvd_bw_512_B4_with_noise_n_p_4.h5
output_bw_512.hdf5
results-DG-medium/
train_log.txt
Medium test case presentation
-----------------------------
This test case performs a training run using 512x512 images, with 3 positions per image, as input.
Reference time on Jean Zay with 4 nodes, 16 MPI processes, 16 GPUs, 3 positions, and 100 epochs:
* For 100 epochs: ~67 ms/sample, with a time to solution of 32 min 30 s