# README - Wireworld Example (C++ Version)
For a general description of *Wireworld* and the file format, see the `README.md` file in the parent directory.
This code sample demonstrates:
* Using **Collective IO** MPI functions for efficient reading and writing from multiple processes to the same file, i.e. `MPI_File_set_view`, `MPI_File_read_all`
* Using MPI Datatypes for IO and Communication, i.e. `MPI_Type_create_subarray`, `MPI_Type_vector`
* Creating a distributed **Graph Topology** for the Halo Exchange, i.e. `MPI_Dist_graph_create_adjacent`
* Two different approaches for the Halo Exchange:
1. Using **collective communication**, i.e. `MPI_Ineighbor_alltoallw`
1. Using **point-to-point communication**, i.e. `MPI_Isend`, `MPI_Irecv`
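The collective-IO idea above can be sketched as follows. This is a simplified, self-contained illustration (not the sample's actual code): the grid sizes, tile sizes, offsets, and the file name `world.wi` are hypothetical placeholders; the real sample derives them from the input file header and the process grid.

```cpp
#include <mpi.h>
#include <cstddef>
#include <vector>

int main(int argc, char** argv) {
	MPI_Init(&argc, &argv);
	int rank;
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);

	// Hypothetical layout: a 2D char grid of gsizes split into 2x2 tiles.
	int gsizes[2] = {128, 128};  // global rows, cols
	int lsizes[2] = {64, 64};    // local tile rows, cols
	int starts[2] = {(rank / 2) * 64, (rank % 2) * 64};  // this rank's tile offset

	// Describe this rank's tile as a subarray of the global grid
	MPI_Datatype tileType;
	MPI_Type_create_subarray(2, gsizes, lsizes, starts,
	                         MPI_ORDER_C, MPI_CHAR, &tileType);
	MPI_Type_commit(&tileType);

	MPI_File fh;
	MPI_File_open(MPI_COMM_WORLD, "world.wi", MPI_MODE_RDONLY,
	              MPI_INFO_NULL, &fh);
	// Each rank sees only its own tile of the shared file...
	MPI_File_set_view(fh, 0 /* header offset */, MPI_CHAR, tileType,
	                  "native", MPI_INFO_NULL);

	// ...and all ranks read their tiles together in one collective call
	std::vector<char> tile(static_cast<std::size_t>(lsizes[0]) * lsizes[1]);
	MPI_File_read_all(fh, tile.data(), static_cast<int>(tile.size()),
	                  MPI_CHAR, MPI_STATUS_IGNORE);

	MPI_File_close(&fh);
	MPI_Type_free(&tileType);
	MPI_Finalize();
}
```

Setting the file view once lets every rank address the file with local offsets, and the `_all` variant allows the MPI library to merge the per-rank requests into fewer, larger file accesses.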
The code sample is structured as follows:
* `Communicator.*`: Creating the graph topology, Datatypes, Communication
* `Configuration.*`: Command line parsing
* `FileIO.*`: Code for reading and writing the Wireworld file
* `MpiEnvironment.*`: Wrapper class for the MPI Environment
* `MpiSubarray.*`: Wrapper class for the MPI Datatype Subarray
* `MpiWireworld.*`: Simulating a generation step, computing next state
* `Tile.*`: Represents a Tile, memory management, debugging
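The graph-topology approach used in `Communicator.*` can be illustrated with a minimal sketch. For brevity this uses a 1D ring of left/right neighbours instead of the sample's 2D grid, and the blocking `MPI_Neighbor_alltoall` instead of `MPI_Ineighbor_alltoallw`; both simplifications are assumptions for illustration only.

```cpp
#include <mpi.h>
#include <array>

int main(int argc, char** argv) {
	MPI_Init(&argc, &argv);
	int rank, size;
	MPI_Comm_rank(MPI_COMM_WORLD, &rank);
	MPI_Comm_size(MPI_COMM_WORLD, &size);

	// 1D ring neighbours as a stand-in for the 2D halo neighbours
	const int left = (rank + size - 1) % size;
	const int right = (rank + 1) % size;
	std::array<int, 2> neighbors{left, right};

	// Build a topology where each rank declares its communication partners
	MPI_Comm graphComm;
	MPI_Dist_graph_create_adjacent(
	    MPI_COMM_WORLD,
	    2, neighbors.data(), MPI_UNWEIGHTED,  // sources
	    2, neighbors.data(), MPI_UNWEIGHTED,  // destinations
	    MPI_INFO_NULL, 0 /* no reorder */, &graphComm);

	// Exchange one value with each neighbour via the topology
	std::array<int, 2> sendBuf{rank, rank};
	std::array<int, 2> recvBuf{};
	MPI_Neighbor_alltoall(sendBuf.data(), 1, MPI_INT,
	                      recvBuf.data(), 1, MPI_INT, graphComm);

	MPI_Comm_free(&graphComm);
	MPI_Finalize();
}
```

The real sample pairs this topology with per-neighbour MPI datatypes (rows via the contiguous layout, columns via `MPI_Type_vector`), which is what the `w` variant of the neighbour collective is for.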
## Release Date
2016-10-24
## Version History
* 2016-10-24: Initial Release on PRACE CodeVault repository
## Contributors
* Thomas Steinreiter - [thomas.steinreiter@risc-software.at](mailto:thomas.steinreiter@risc-software.at)
## Copyright
This code is available under Apache License, Version 2.0 - see also the license file in the CodeVault root directory.
## Languages
This sample is entirely written in C++14.
## Parallelisation
This sample uses MPI-3 for parallelisation.
## Level of the code sample complexity
Intermediate / Advanced
## Compiling
Follow the compilation instructions given in the main directory of the kernel samples directory (`/hpc_kernel_samples/README.md`).
## Running
To run the program, use something similar to
    mpiexec -n [nprocs] ./5_structured_wireworld ../worlds/primes.wi
either on the command line or in your batch script; an input file must be provided.
* `-x [ --nprocs-x ]`: number of processes in x-direction (optional, automatically deduced)
* `-y [ --nprocs-y ]`: number of processes in y-direction (optional, automatically deduced)
* `-g [ --generations ]`: number of generations simulated (default 1000)
* `-m [ --commmode ]`: Communication Mode. `Collective` or `P2P` (default Collective)
* `-f [ --inputfile ]`: path to wireworld input file (mandatory, flag can be omitted). The file dimensions must be divisible by the grid dimensions.
* `-o [ --outputfile ]`: path to wireworld output file (optional)
### Example
If you run
    mpiexec -n 8 ./5_structured_wireworld -g 10000 -f ../worlds/primes.wi -o ../worlds/primes.out.wi -m Collective