###
### README - QCD Accelerator Benchmarksuite Part 2
###
### 2017 - Jacob Finkenrath - CaSToRC - The Cyprus Institute (j.finkenrath@cyi.ac.cy)
###
The QCD Accelerator Benchmark suite Part 2 consists of two kernels,
based on the QUDA and the QPhix libraries. The QUDA library is based
on CUDA and optimized for running on NVIDIA GPUs. The QPhix library
consists of routines that are optimized to use Intel intrinsics for
multiple vector lengths, including optimized routines for KNC and
KNL. In both QUDA and QPhix, this benchmark uses the Conjugate
Gradient solvers implemented within the libraries.
[1] R. Babich, M. Clark and B. Joo, "Parallelizing the QUDA Library for Multi-GPU Calculations
in Lattice Quantum Chromodynamics", SC 10 (Supercomputing 2010)
[2] B. Joo, D. D. Kalamkar, K. Vaidyanathan, M. Smelyanskiy, K. Pamnany, V. W. Lee, P. Dubey,
W. Watson III, "Lattice QCD on Intel Xeon Phi", International Supercomputing Conference (ISC'13), 2013
###
### Table of Contents
###
GPU - BENCHMARK SUITE (QUDA)
1. Compile and Run the GPU-Benchmark Suite
1.1 Compile
1.2 Run
1.2.1 Main-script: "run_ana.sh"
1.2.2 Main-script: "prepare_submit_job.sh"
1.2.3 Main-script: "submit_job.sh.template"
1.3 Example Benchmark results
XEONPHI - BENCHMARK SUITE (QPHIX)
2. Compile and Run the XeonPhi-Benchmark Suite
2.1 Compile
2.1.1 Example compilation on PRACE machines
2.1.1.1 BSC - Marenostrum III Hybrid partitions
2.1.1.2 CINES - Frioul
2.2 Run
2.2.1 Main-script: "run_ana.sh"
2.2.2 Main-script: "prepare_submit_job.sh"
2.2.3 Main-script: "submit_job.sh.template"
2.3 Example Benchmark Results
###
###
### GPU - BENCHMARK SUITE
###
###
##
## 1. Compile and Run the GPU-Benchmark Suite
##
##
## 1.1 Compile
##
Download CMake and QUDA.
General information on how to build QUDA with CMake can be found at
"https://github.com/lattice/quda/wiki/Building-QUDA-with-cmake". Here
we just give a short overview:
Build CMake: (./QCD_Accelerator_Benchmarksuite_Part2/GPUs/src/cmake-3.7.0.tar.gz)
CMake can be downloaded as source from
https://cmake.org/download/. This guide uses version 3.7.0. The build
instructions can be found in the main directory under "README.rst":
run "./configure", then "gmake" to compile.
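A minimal sketch of that CMake build, assuming the provided tar-ball
is used (paths are illustrative):
    tar xzf cmake-3.7.0.tar.gz   # unpack the provided source
    cd cmake-3.7.0
    ./configure                  # configure for the local system
    gmake                        # build; the cmake binary ends up in ./bin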
Build QUDA: (./QCD_Accelerator_Benchmarksuite_Part2/GPUs/src/quda.tar.gz)
Download QUDA, for example by using "git clone
https://github.com/lattice/quda.git". Create a build folder. Use
"cmake" from the build folder; if you compiled CMake from source, the
binary is located under cmake/bin. Execute:
$PATH2CMAKE/cmake $PATH2QUDA -DQUDA_GPU_ARCH=sm_XX -DQUDA_DIRAC_WILSON=ON -DQUDA_DIRAC_TWISTED_MASS=OFF
-DQUDA_DIRAC_DOMAIN_WALL=OFF -DQUDA_HISQ_LINK=OFF -DQUDA_GAUGE_FORCE=OFF -DQUDA_HISQ_FORCE=OFF -DQUDA_MPI=ON
with
PATH2CMAKE=<path to the cmake-executable>
PATH2QUDA=<path to the home dir of QUDA>
Set -DQUDA_GPU_ARCH=sm_XX to the GPU architecture (sm_60 for Pascal, sm_35 for Kepler).
If cmake or the compilation fails, library paths and options can be
set via the text user interface of cmake by using "ccmake". Use
"$PATH2CMAKE/ccmake $PATH2BUILD_DIR" to see and edit the available
options. After successfully configuring the build, run "make". The
QUDA executables needed for the benchmark can then be found in the
test/ folder; their names begin with "invert_".
##
## 1.2 Run
##
The Accelerator QCD-Benchmarksuite Part 2 provides bash-scripts
located in the folder
"./QCD_Accelerator_Benchmarksuite_Part2/GPUs/scripts" to set up the
benchmark runs on the target machines. These bash-scripts are:
run_ana.sh : Main script, sets up the benchmark mode and submits the jobs (analyses the results)
prepare_submit_job.sh : Generates the job-scripts
submit_job.sh.template : Template for the submit script
##
## 1.2.1 Main-script: "run_ana.sh"
##
The path to the executable has to be set via $PATH2EXE. On the first
run, QUDA automatically tunes the GPU kernels by sweeping the number
of threads per block. The optimal setup is saved in the folder pointed
to by the environment variable "QUDA_RESOURCE_PATH". You must set this
variable, otherwise the tuning data will be lost and performance will
be sub-optimal; set it to the folder where the tuning data should be
saved. Strong scaling or weak scaling can be chosen via the variable
sca_mode (="Strong" or ="Weak"). The lattice sizes can be set by "gx"
and "gt". Choose mode="Run" for run mode or mode="Analysis" for
extracting the GFLOPS. Note that the script assumes Slurm is used as
the job scheduler. If not, change the line which includes the "sbatch"
command accordingly.
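For illustration, the relevant settings in "run_ana.sh" for a
strong-scaling run on a 32^3x96 lattice might look like this (the
variable names are taken from the script; the values and the
executable name are examples):
    export QUDA_RESOURCE_PATH=$HOME/quda_tune   # keep the autotuning cache here
    PATH2EXE=$HOME/quda-build/test/invert_test  # assumed executable name
    sca_mode="Strong"                           # or "Weak"
    gx=32                                       # spatial lattice extent
    gt=96                                       # temporal lattice extent
    mode="Run"                                  # or "Analysis" to extract GFLOPS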
##
## 1.2.2 Main-script: "prepare_submit_job.sh"
##
Add additional options if necessary.
##
## 1.2.3 Main-script: "submit_job.sh.template"
##
The submit-template will be edited by "prepare_submit_job.sh" to
generate the final submit-script. The first lines (beginning with
"#SBATCH") depend on the queuing system of the target machine, which in
this case is assumed to be Slurm. These should be changed in case of a
different queuing system.
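For illustration, the first lines of the template could look like the
following sketch (example Slurm directives; job name, partition and
resources must be adapted to the target machine):
    #!/bin/bash
    #SBATCH --job-name=qcd_bench
    #SBATCH --nodes=2
    #SBATCH --time=00:30:00
    #SBATCH --partition=gpu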
##
## 1.3 Example Benchmark results
##
This section shows benchmark results from Piz Daint, located at CSCS
in Switzerland, and from the GPGPU partition of Cartesius at SURFsara
in Amsterdam, the Netherlands. The runs were performed using the
provided bash-scripts. Piz Daint has one Pascal GPU per node; two test
cases are shown, a "Strong-Scaling" mode with a random lattice
configuration of size 32^3x96 and a "Weak-Scaling" mode with a local
lattice size of 48^3x24 per GPU. The GPGPU nodes of Cartesius have two
Kepler GPUs per node, and the "Strong-Scaling" test is shown for the
cases that one card per node and two cards per node are used. The
benchmarks use the Conjugate Gradient solver, which solves a linear
equation, D * x = b, for the unknown solution "x", where "D" is the
clover-improved Wilson Dirac operator and "b" a known right-hand side.
---------------------
Piz Daint - Pascal P100
---------------------
Strong - Scaling:
global lattice size (32x32x32x96)
sloppy-precision: single
precision: single
GPUs GFLOPS sec
1 786.520000 4.569600
2 1522.410000 3.086040
4 2476.900000 2.447180
8 3426.020000 2.117580
16 5091.330000 1.895790
32 8234.310000 1.860760
64 8276.480000 1.869230
sloppy-precision: double
precision: double
GPUs GFLOPS sec
1 385.965000 6.126730
2 751.227000 3.846940
4 1431.570000 2.774470
8 1368.000000 2.367040
16 2304.900000 2.071160
32 4965.480000 2.095180
64 2308.850000 2.005110
Weak - Scaling:
local lattice size (48x48x48x24)
sloppy-precision: single
precision: single
GPUs GFLOPS sec
1 765.967000 3.940280
2 1472.980000 4.004630
4 2865.600000 4.044360
8 5421.270000 4.056410
16 9373.760000 7.396590
32 17995.100000 4.243390
64 27219.800000 4.535410
sloppy-precision: double
precision: double
GPUs GFLOPS sec
1 376.611000 5.108900
2 728.973000 5.190880
4 1453.500000 5.144160
8 2884.390000 5.207090
16 5004.520000 5.362020
32 8744.090000 5.623290
64 14053.000000 5.910520
---------------------
SURFsara - Kepler K20m
---------------------
##
## 1 GPU per Node
##
Strong - Scaling:
global lattice size (32x32x32x96)
sloppy-precision: single
precision: single
GPUs GFLOPS sec
1 243.084000 4.030000
2 478.179000 2.630000
4 939.953000 2.250000
8 1798.240000 1.570000
16 3072.440000 1.730000
32 4365.320000 1.310000
sloppy-precision: double
precision: double
GPUs GFLOPS sec
1 119.786000 6.060000
2 234.179000 3.290000
4 463.594000 2.250000
8 898.090000 1.960000
16 1604.210000 1.480000
32 2420.130000 1.630000
##
## 2 GPU per Node
##
Strong - Scaling:
global lattice size (32x32x32x96)
sloppy-precision: single
precision: single
GPUs GFLOPS sec
2 463.041000 2.720000
4 896.707000 1.940000
8 1672.080000 1.680000
16 2518.240000 1.420000
32 3800.970000 1.460000
64 4505.440000 1.430000
sloppy-precision: double
precision: double
GPUs GFLOPS sec
2 229.579000 3.380000
4 450.425000 2.280000
8 863.117000 1.830000
16 1348.760000 1.510000
32 1842.560000 1.550000
64 2645.590000 1.480000
###
###
### XEONPHI - BENCHMARK SUITE
###
###
##
## 2. Compile and Run the XeonPhi-Benchmark Suite
##
Unpack the provided source tar-file located in
"./QCD_Accelerator_Benchmarksuite_Part2/XeonPhi/src" or clone the
current GitHub branches of the code packages, QMP:
"git clone https://github.com/usqcd-software/qmp"
and for QPhix:
"git clone https://github.com/JeffersonLab/qphix"
Note that the AVX512 instructions, which are needed for an optimal run
on KNLs, are not yet part of the main branch. The AVX512 instructions
are available in the avx512 branch ("git checkout avx512"). The
provided source file uses the avx512 branch (status as of 01/2017).
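For example, to work from the avx512 branch of QPhix (a sketch built
from the commands quoted above):
    git clone https://github.com/JeffersonLab/qphix
    cd qphix
    git checkout avx512    # branch with the AVX512 code for KNL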
##
## 2.1 Compile
##
The QPhix library must be built on top of QMP, a thin communication
layer on top of MPI. Compile QMP first:
./configure --prefix=$QMP_INSTALL_DIR CC=mpiicc CFLAGS=" -mmic/-xAVX512 -std=c99" --with-qmp-comms-type=MPI --host=x86_64-linux-gnu --build=none-none-none
Create the install folder and point $QMP_INSTALL_DIR to it. Use the
compiler flag "-mmic" when compiling for KNC and "-xAVX512" when
compiling for KNL (i.e. pick one of the two flags shown in CFLAGS
above). Then use "make" to compile and "make install" to copy the
necessary files to $QMP_INSTALL_DIR.
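A complete QMP build for KNL might then look like this (a sketch; the
install path is an assumption):
    export QMP_INSTALL_DIR=$HOME/qmp/install
    mkdir -p $QMP_INSTALL_DIR
    ./configure --prefix=$QMP_INSTALL_DIR CC=mpiicc \
        CFLAGS="-xMIC-AVX512 -std=c99" \
        --with-qmp-comms-type=MPI --host=x86_64-linux-gnu --build=none-none-none
    make
    make install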
The QPhix executable can be compiled by using, for KNC:
./configure --enable-parallel-arch=parscalar --enable-proc=MIC --enable-soalen=8 --enable-clover --enable-openmp --enable-cean --enable-mm-malloc CXXFLAGS="-openmp -mmic -vec-report -restrict -mGLOB_default_function_attrs=\"use_gather_scatter_hint=off\" -g -O2 -finline-functions -fno-alias -std=c++0x" CFLAGS="-mmic -vec-report -restrict -mGLOB_default_function_attrs=\"use_gather_scatter_hint=off\" -openmp -g -O2 -fno-alias -std=c99" CXX=mpiicpc CC=mpiicc --host=x86_64-linux-gnu --build=none-none-none --with-qmp=$QMP_INSTALL_DIR
or for KNL:
./configure --enable-parallel-arch=parscalar --enable-proc=AVX512 --enable-soalen=8 --enable-clover --enable-openmp --enable-cean --enable-mm-malloc CXXFLAGS="-qopenmp -xMIC-AVX512 -g -O3 -std=c++14" CFLAGS="-xMIC-AVX512 -qopenmp -O3 -std=c99" CXX=mpiicpc CC=mpiicc --host=x86_64-linux-gnu --build=none-none-none --with-qmp=$QMP_INSTALL_DIR
where QMP_INSTALL_DIR is the previously set variable pointing to the
folder into which the QMP library was installed. The executable
"time_clov_noqdp" should appear in the "./qphix/test" folder. Note
that the avx512 branch will compile an additional executable which has
dependencies on the package QDP (and will therefore generate an error
at the end of the compilation process; "time_clov_noqdp" itself is
built regardless).
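After configuring, a minimal check that the build produced the
benchmark executable (a sketch, run from the qphix folder):
    make
    ls test/time_clov_noqdp    # should list the executable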
##
## 2.1.1 Example compilation on PRACE machines
##
In this subsection we provide example compilations on the PRACE
machines which were used to develop the QCD Benchmark suite Part 2.
##
## 2.1.1.1 BSC - Marenostrum III Hybrid partitions
##
The nodes of the hybrid partition of Marenostrum are equipped with KNC
cards. First load the following modules:
module unload openmpi
module load impi
and then setup the appropriate environment with:
source /opt/intel/impi/4.1.1.036/bin64/mpivars.sh
source /opt/intel/2013.5.192/composer_xe_2013.5.192/bin/compilervars.sh intel64
export I_MPI_MIC=enable
export I_MPI_HYDRA_BOOTSTRAP=ssh
Configure and compile the QMP-library with:
./configure --prefix=$QMP_INSTALL_DIR CC=mpiicc CFLAGS="-mmic -std=c99" --with-qmp-comms-type=MPI --host=x86_64-linux-gnu --build=none-none-none
make
make install
Configure and compile QPhix with:
./configure --enable-parallel-arch=parscalar --enable-proc=MIC --enable-soalen=8 --enable-clover --enable-openmp --enable-cean --enable-mm-malloc CXXFLAGS="-openmp -mmic -vec-report -restrict -mGLOB_default_function_attrs=\"use_gather_scatter_hint=off\" -g -O2 -finline-functions -fno-alias -std=c++0x" CFLAGS="-mmic -vec-report -restrict -mGLOB_default_function_attrs=\"use_gather_scatter_hint=off\" -openmp -g -O2 -fno-alias -std=c99" CXX=mpiicpc CC=mpiicc --host=x86_64-linux-gnu --build=none-none-none --with-qmp=$QMP_INSTALL_DIR
make
##
## 2.1.1.2 CINES - Frioul
##
On a test cluster at CINES, the benchmark suite was tested on KNL
cards. The steps are similar to those for Marenostrum above. First set
up the appropriate environment with:
source /opt/software/intel/composer_xe_2015/bin/compilervars.sh intel64
source /opt/software/intel/impi_5.0.3/bin64/mpivars.sh
Configure and compile QMP with:
./configure --prefix=$QMP_INSTALL_DIR CC=mpiicc CFLAGS="-xMIC-AVX512 -mGLOB_default_function_attrs=\"use_gather_scatter_hint=off\" -openmp -g -O2 -fno-alias -std=c99" --with-qmp-comms-type=MPI --host=x86_64-linux-gnu --build=none-none-none
make
make install
Configure and compile QPhix with:
./configure --enable-parallel-arch=parscalar --enable-proc=AVX512 --enable-soalen=8 --enable-clover --enable-openmp --enable-cean --enable-mm-malloc CXXFLAGS="-qopenmp -xMIC-AVX512 -g -O3 -std=c++14" CFLAGS="-xMIC-AVX512 -qopenmp -O3 -std=c99" CXX=mpiicpc CC=mpiicc --host=x86_64-linux-gnu --build=none-none-none --with-qmp=$QMP_INSTALL_DIR
and then
make
##
## 2.2 Run
##
The Accelerator QCD-Benchmarksuite Part 2 provides bash-scripts to
set up the benchmark runs on the target machines. These are:
run_ana.sh : Main script, sets up the benchmark mode and submits the jobs (analyses the results)
prepare_submit_job.sh : Generates the job-scripts
submit_job.sh.template : Template for the submit script
##
## 2.2.1 Main-script: "run_ana.sh"
##
The path to the executable has to be set via $PATH2EXE. Choose between
strong scaling and weak scaling by setting the variable sca_mode
(="Strong" or ="Weak"). The lattice sizes can be set by "gx" and
"gt". Choose mode="Run" for run mode or mode="Analysis" for extracting
the GFLOPS. Note that the script assumes Slurm is used as the job
scheduler. If not, change the line which includes the "sbatch" command
accordingly.
##
## 2.2.2 Main-script: "prepare_submit_job.sh"
##
Add additional options if necessary.
##
## 2.2.3 Main-script: "submit_job.sh.template"
##
The submit-template will be edited by "prepare_submit_job.sh" to
generate the final submit-script. The first lines (beginning with
"#SBATCH") depend on the queuing system of the target machine, which
in this case is assumed to be Slurm. These should be changed in case
of a different queuing system.
##
## 2.3 Example Benchmark Results
##
The benchmark results for the XeonPhi benchmark suite were obtained on
Frioul, a test cluster at CINES, and on the hybrid partition of
MareNostrum III at BSC. Frioul has one KNL card per node, while the
hybrid partition of MareNostrum III is equipped with two KNCs per
node. The data on Frioul were generated using the bash-scripts
provided by the QCD Accelerator Benchmark suite Part 2, for the two
test cases "Strong-Scaling" with a lattice size of 32^3x96 and
"Weak-Scaling" with a local lattice size of 48^3x24 per card. For
MareNostrum, data for the "Strong-Scaling" mode on a 32^3x96 lattice
are shown. The benchmark uses a random gauge configuration and the
Conjugate Gradient solver to solve a linear equation involving the
clover Wilson Dirac operator.
---------------------
Frioul - KNLs
---------------------
Strong - Scaling:
global lattice size (32x32x32x96)
precision: single
KNLs GFLOPS
1 340.75
2 627.612
4 1111.13
8 1779.34
16 2410.8
precision: double
KNLs GFLOPS
1 328.149
2 616.467
4 1047.79
8 1616.37
Weak - Scaling:
local lattice size (48x48x48x24)
precision: single
KNLs GFLOPS
1 348.304
2 616.697
4 1214.82
8 2425.45
16 4404.63
precision: double
KNLs GFLOPS
1 172.303
2 320.761
4 629.79
8 1228.77
16 2310.63
---------------------
MareNostrum III - KNCs
---------------------
Strong - Scaling:
global lattice size (32x32x32x96)
precision: single - 1 Card per Node
KNCs GFLOPS
2 103.561
4 200.159
8 338.276
16 534.369
32 815.896
precision: single - 2 Cards per Node
KNCs GFLOPS
4 118.995
8 212.558
16 368.196
32 605.882
64 847.566