# Example build scripts

## BSC-MareNostrum4-skylake

MareNostrum 4 is a Tier-0 cluster installed at the Barcelona Supercomputing
Center. GPAW was installed on the general-purpose block of the cluster,
consisting of 3,456 nodes with dual 24-core Intel Xeon Platinum 8160
"Skylake" CPUs and 96 GB of RAM in most nodes. The interconnect is
Omni-Path.

Environment modules are provided through [Lmod](https://lmod.readthedocs.io/en/latest/).

[Go to the directory](BSC-MareNostrum4-skylake)


## HLRS-Hawk-rome

Hawk is a Tier-0 cluster hosted by HLRS in Germany. The cluster consists of
5,632 compute nodes with dual 64-core AMD EPYC 7742 "Rome" CPUs and 256 GB of
memory. The interconnect is HDR200 InfiniBand in a 9-dimensional hypercube topology.

Most software on the cluster is installed through [Spack](https://spack.io/).
Environment modules are provided through [Lmod](https://lmod.readthedocs.io/en/latest/).

[Go to the directory](HLRS-Hawk-rome)


## JSC-JUWELS-skylake

JUWELS is a Tier-0 system hosted by the Jülich Supercomputing Centre. GPAW was only
installed on the so-called "Cluster Module" consisting of 2,271 compute nodes
with dual 24-core Intel Xeon Platinum 8168 "Skylake" CPUs and 96 GB RAM per node.
The interconnect is EDR InfiniBand.

Most software on the cluster is installed through [EasyBuild](https://easybuild.io/).
Environment modules are provided through [Lmod](https://lmod.readthedocs.io/en/latest/).

[Go to the directory](JSC-JUWELS-skylake)


## LRZ-SuperMUCng-skylake

SuperMUC-NG is a Tier-0 system hosted by LRZ in Germany. The cluster has 6,480
compute nodes with dual 24-core Intel Xeon Platinum 8174 CPUs. Most nodes have 96 GB
of RAM. The interconnect is Omni-Path. The cluster runs a SUSE Linux variant.

Most software on the cluster is installed through [Spack](https://spack.io/).
Environment modules are provided through the TCL-based [Environment Modules
package](https://modules.readthedocs.io/).

[Go to the directory](LRZ-SuperMUCng-skylake)


## TGCC-Irene-rome

Joliot-Curie/Irene is a Tier-0 machine hosted at the TGCC in France. GPAW was installed
on the AMD Rome partition. This partition has 2,292 nodes, each with dual 64-core
AMD EPYC 7H12 "Rome" CPUs and 256 GB of RAM. The interconnect is HDR100 InfiniBand.

Environment modules are provided through the TCL-based [Environment Modules
package](https://modules.readthedocs.io/).

[Go to the directory](TGCC-Irene-rome)


## CalcUA-vaughan-rome

Vaughan is a cluster installed at the University of Antwerp
as one of the VSC (Vlaams Supercomputer Centrum) Tier-2 clusters.
The cluster has 144 nodes with dual 32-core AMD EPYC 7452 "Rome" CPUs and 256 GB of RAM
per node. Nodes are interconnected through an InfiniBand HDR100 network. The system
does not contain accelerators. The cluster uses CentOS 8 as the
operating system.

Most software on the cluster is installed through [EasyBuild](https://easybuild.io/).
Environment modules are provided through [Lmod](https://lmod.readthedocs.io/en/latest/).

[Go to the directory](CalcUA-vaughan-rome)

## CalcUA-leibniz-broadwell

Leibniz is a cluster installed at the University of Antwerp
as one of the VSC (Vlaams Supercomputer Centrum) Tier-2 clusters.
The cluster has 153 regular CPU nodes with dual 14-core Intel Xeon E5-2680v4
"Broadwell" CPUs and 128 or 256 GB of RAM per node.
Nodes are interconnected through an InfiniBand EDR network.
The system also has two nodes with dual NVIDIA P100 GPUs and a node
with dual first-generation NEC SX-Aurora TSUBASA vector boards.
During the testing period the cluster used CentOS 7 as the
operating system.

Most software on the cluster is installed through [EasyBuild](https://easybuild.io/).
Environment modules are provided through [Lmod](https://lmod.readthedocs.io/en/latest/).

[Go to the directory](CalcUA-leibniz-broadwell)

## piz-daint @ CSCS

Build instructions for Piz Daint, a GPU cluster hosted at CSCS.

These instructions have not been tested recently. They still target a build
based on distutils rather than setuptools; distutils support was dropped
towards the end of the development of this version of the benchmark suite.
They are kept here because they are the only still-relevant set of
instructions for a GPU-based machine.

[Go to the directory](piz-daint)