Running the QCD Benchmarks in the JuBE Framework
================================================

Unpack QCD_Source_TestCaseA.tar.gz into a directory of your choice.
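For example, the archive can be extracted with standard tar (the target directory below is only an example; use any location with enough space that is accessible at runtime):

```shell
# Example only: extract the benchmark archive into a directory of your choice.
mkdir -p "$HOME/qcd-bench"
tar -xzf QCD_Source_TestCaseA.tar.gz -C "$HOME/qcd-bench"
ls "$HOME/qcd-bench"
```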

After unpacking the benchmark, the following directory structure is available:
     PABS/
     applications/
     bench/
     doc/
     platform/
     skel/
     LICENCE

The applications/ subdirectory contains the QCD benchmark
applications. 
The bench/ subdirectory contains the benchmark environment scripts. 
The doc/ subdirectory contains the overall documentation of the
framework and a tutorial. 
The platform/ subdirectory holds the platform definitions as well as
job submission script templates for each defined platform. 
The skel/ subdirectory contains templates for analysis patterns for
text output of different measurement tools.  

Configuration
=============

Definition files are already prepared for many platforms. If you are
running on one of these defined platforms, you can proceed directly;
otherwise, please have a look at QCD_Build_README.txt.

Execution
=========

Assuming the benchmark suite is installed in a directory that can be
used during execution, a typical run of a benchmark application
consists of two steps:
1. Compiling and submitting the benchmark to the system scheduler.
2. Verifying, analysing and reporting the performance data.
 
Compiling and submitting
------------------------

If configured correctly, the application benchmark can be compiled and
submitted on the system (e.g. the IBM BlueGene/Q at Jülich) with
the commands:  
>> cd PABS/applications/QCD
>> perl ../../bench/jube prace-scaling-juqueen.xml

The benchmarking environment will then compile the binary for all
defined node/task/thread combinations, if those parameters need to be
compiled into the binary. It creates a so-called sandbox subdirectory
for each job, ensuring conflict-free operation of the individual
applications at runtime. Any required input files are prepared
automatically as defined.

Each active benchmark in the application’s top-level configuration
file will receive an ID, which is used as a reference by JUBE later
on. 

Verifying, analysing and reporting
----------------------------------

After the benchmark jobs have run, an additional call to jube will
gather the performance data. For this, the options -update and -result
are used.  

>> cd PABS/applications/QCD
>> perl ../../bench/jube -update -result <ID>

The ID is the reference number the benchmarking environment has
assigned to this run. The performance data will then be output to
stdout, and can be post-processed from there.