Running the QCD Benchmarks contained in ./part_cpu in the JuBE Framework
========================================================================

The folder ./part_cpu contains the following subfolders:

  PABS/
  applications/
  bench/
  doc/
  platform/
  skel/
  LICENCE

The applications/ subdirectory contains the QCD benchmark applications.
The bench/ subdirectory contains the benchmark environment scripts.
The doc/ subdirectory contains the overall documentation of the framework
and a tutorial.
The platform/ subdirectory holds the platform definitions as well as job
submission script templates for each defined platform.
The skel/ subdirectory contains templates for analysis patterns for the
text output of different measurement tools.

Configuration
=============

Definition files are already prepared for many platforms. If you are
running on one of these platforms, you can proceed directly; otherwise,
please have a look at QCD_Build_README.txt.

Execution
=========

Assuming the benchmark suite is installed in a directory that can be used
during execution, a typical run of a benchmark application consists of two
steps:

1. Compiling the benchmark and submitting it to the system scheduler.
2. Verifying, analysing and reporting the performance data.

Compiling and submitting
------------------------

If configured correctly, the application benchmark can be compiled and
submitted on the system (e.g. the IBM BlueGene/Q at Jülich) with the
commands:

>> cd PABS/applications/QCD
>> perl ../../bench/jube prace-scaling-juqueen.xml

The benchmarking environment will then compile the binary for all
node/task/thread combinations defined, if those parameters need to be
compiled into the binary. It creates a so-called sandbox subdirectory for
each job, ensuring conflict-free operation of the individual applications
at runtime. If any input files are needed, they are prepared automatically
as defined.
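The sandbox idea can be illustrated with a minimal shell sketch: one isolated working directory per node/task combination, each with its own copy of the input files. The directory names and parameter values below are hypothetical, not JUBE's actual naming scheme:

```shell
# Create one sandbox directory per node/task combination so that
# concurrently running jobs cannot overwrite each other's files.
# (Illustrative only -- JUBE generates these directories itself.)
BENCH_ROOT=$(mktemp -d)
for nodes in 32 64; do
  for tasks in 16 32; do
    sandbox="$BENCH_ROOT/sandbox_n${nodes}_t${tasks}"
    mkdir -p "$sandbox"
    # each sandbox receives its own prepared input file
    printf 'nodes=%s tasks=%s\n' "$nodes" "$tasks" > "$sandbox/input.dat"
  done
done
ls "$BENCH_ROOT"   # four sandbox directories, one per combination
```

Because every job reads and writes only inside its own sandbox, all combinations can run through the scheduler at the same time without interfering.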
Each active benchmark in the application's top-level configuration file
will receive an ID, which JUBE uses later as a reference.

Verifying, analysing and reporting
----------------------------------

After the benchmark jobs have run, an additional call to jube gathers the
performance data. For this, the options -update and -result are used:

>> cd PABS/applications/QCD
>> perl ../../bench/jube -update -result <ID>

Here <ID> is the reference number the benchmarking environment has
assigned to this run. The performance data is then written to stdout and
can be post-processed from there.
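Since the results land on stdout, post-processing can be a simple shell pipeline. The table layout below is an assumed example for illustration, not JUBE's real output format:

```shell
# Assumed result table, as it might be captured from stdout with
# e.g. "perl ../../bench/jube -update -result <ID> > results.txt".
# Column names and values here are made up for the sketch.
cat > results.txt <<'EOF'
id nodes tasks gflops
01 32 512 1843.2
02 64 1024 3590.7
EOF

# Extract node count and performance columns, e.g. for plotting a
# scaling curve (skip the header line).
awk 'NR>1 {print $2, $4}' results.txt
```

Any further analysis (plotting, regression against ideal scaling) can start from such an extracted two-column file.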