# README - Wireworld Example (C++ Version)

## Description

For a general description of *Wireworld* and the file format, see the `README.md` file in the parent directory.

This code sample demonstrates:

* Using **collective I/O** MPI functions for efficiently reading and writing the same file from multiple processes, i.e. `MPI_File_set_view`, `MPI_File_read_all` (a minimal sketch is given at the end of this README)
* Using MPI datatypes for I/O and communication, i.e. `MPI_Type_create_subarray`, `MPI_Type_vector`
* Creating a distributed **graph topology** for the halo exchange, i.e. `MPI_Dist_graph_create_adjacent` (also sketched at the end of this README)
* Two different approaches to the halo exchange:
    1. Using **collective communication**, i.e. `MPI_Ineighbor_alltoallw`
    2. Using **point-to-point communication**, i.e. `MPI_Isend`, `MPI_Irecv`

The code sample is structured as follows:

* `Communicator.*`: creation of the graph topology, datatypes, communication
* `Configuration.*`: command line parsing
* `FileIO.*`: code for reading and writing the Wireworld file
* `MpiEnvironment.*`: wrapper class for the MPI environment
* `MpiSubarray.*`: wrapper class for the MPI subarray datatype
* `MpiWireworld.*`: simulation of a generation step, computation of the next state
* `Tile.*`: representation of a tile, memory management, debugging

## Release Date

2016-10-24

## Version History

* 2016-10-24: Initial release on the PRACE CodeVault repository

## Contributors

* Thomas Steinreiter - [thomas.steinreiter@risc-software.at](mailto:thomas.steinreiter@risc-software.at)

## Copyright

This code is available under the Apache License, Version 2.0 - see also the license file in the CodeVault root directory.

## Languages

This sample is written entirely in C++14.

## Parallelisation

This sample uses MPI-3 for parallelisation.

## Level of the code sample complexity

Intermediate / Advanced

## Compiling

Follow the compilation instructions given in the main directory of the kernel samples (`/hpc_kernel_samples/README.md`).

## Running

To run the program, use something similar to

    mpiexec -n [nprocs] ./5_structured_wireworld ../worlds/primes.wi

either on the command line or in your batch script. An input file must be provided.

### Command line arguments

* `-x [ --nprocs-x ]`: number of processes in the x-direction (optional, deduced automatically if omitted)
* `-y [ --nprocs-y ]`: number of processes in the y-direction (optional, deduced automatically if omitted)
* `-g [ --generations ]`: number of generations to simulate (default: 1000)
* `-m [ --commmode ]`: communication mode, either `Collective` or `P2P` (default: `Collective`)
* `-f [ --inputfile ]`: path to the Wireworld input file (mandatory; the flag itself can be omitted). The file dimensions must be divisible by the process grid dimensions.
* `-o [ --outputfile ]`: path to the Wireworld output file (optional)

### Example

If you run

    mpiexec -n 8 ./5_structured_wireworld -g 10000 -f ../worlds/primes.wi -o ../worlds/primes.out.wi -m Collective

the output should look similar to

    iteration:0
    iteration:1000
    iteration:2000
    iteration:3000
    iteration:4000
    iteration:5000
    iteration:6000
    iteration:7000
    iteration:8000
    iteration:9000
    Execution time:6.66665s
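
The following is a minimal, self-contained sketch of the collective I/O pattern referenced in the description: each rank describes its block of the global grid with a subarray datatype, sets a file view, and all ranks read collectively. It is not the sample's actual code; the grid size, the row-block decomposition, and the file name `world.wi` are illustrative assumptions (the real file format and decomposition are described in the parent `README.md` and implemented in `FileIO.*`).

```cpp
// Hedged sketch: each rank reads its own block of a 2-D character grid
// from a shared file using a subarray datatype and collective I/O.
// Grid size, decomposition, and file name are illustrative assumptions.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, nprocs = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    // Assume a 1-D decomposition of a gsizeY x gsizeX grid into row blocks
    // (and that gsizeY is divisible by the number of processes).
    const int gsizeY = 64, gsizeX = 64;
    const int lsizeY = gsizeY / nprocs, lsizeX = gsizeX;
    const int startY = rank * lsizeY, startX = 0;

    const int sizes[2]    = {gsizeY, gsizeX};
    const int subsizes[2] = {lsizeY, lsizeX};
    const int starts[2]   = {startY, startX};

    // Describe this rank's block of the global grid as it appears in the file.
    MPI_Datatype fileType = MPI_DATATYPE_NULL;
    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_CHAR, &fileType);
    MPI_Type_commit(&fileType);

    MPI_File fh = MPI_FILE_NULL;
    MPI_File_open(MPI_COMM_WORLD, "world.wi",   // hypothetical input file
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    // Every rank sees only its own block; displacement 0 assumes the grid
    // data starts at the beginning of the file.
    MPI_File_set_view(fh, 0, MPI_CHAR, fileType, "native", MPI_INFO_NULL);

    std::vector<char> block(static_cast<std::size_t>(lsizeY) * lsizeX);
    MPI_File_read_all(fh, block.data(), static_cast<int>(block.size()),
                      MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&fileType);
    MPI_Finalize();
    return 0;
}
```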
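
Similarly, here is a hedged sketch of a halo exchange over a distributed graph topology. It uses a 1-D periodic ring instead of the sample's 2-D decomposition, and the blocking `MPI_Neighbor_alltoallw` rather than the non-blocking `MPI_Ineighbor_alltoallw` named above; the neighbour layout and buffer names are simplifications for illustration, not the sample's own.

```cpp
// Hedged sketch: 1-D ring halo exchange via a distributed graph topology
// and a neighborhood collective. Layout and names are illustrative only.
#include <mpi.h>
#include <array>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Each rank talks to its left and right neighbour (periodic ring).
    const int left  = (rank - 1 + size) % size;
    const int right = (rank + 1) % size;
    const std::array<int, 2> neighbours{left, right};

    // Graph communicator whose in- and out-edges are the two neighbours.
    MPI_Comm graphComm = MPI_COMM_NULL;
    MPI_Dist_graph_create_adjacent(
        MPI_COMM_WORLD,
        2, neighbours.data(), MPI_UNWEIGHTED,  // sources (receive from)
        2, neighbours.data(), MPI_UNWEIGHTED,  // destinations (send to)
        MPI_INFO_NULL, /*reorder=*/0, &graphComm);

    // Local line of cells with one ghost cell on each side:
    // [ghostL | interior ... interior | ghostR]
    const int interior = 8;
    std::vector<char> cells(interior + 2, '.');

    // Boundary cells to send and ghost cells to receive; the order of the
    // blocks follows the order of destinations/sources declared above.
    std::array<char, 2> sendHalo{cells[1], cells[interior]};
    std::array<char, 2> recvHalo{'.', '.'};

    const std::array<int, 2> counts{1, 1};
    const std::array<MPI_Aint, 2> displs{0, static_cast<MPI_Aint>(sizeof(char))};
    const std::array<MPI_Datatype, 2> types{MPI_CHAR, MPI_CHAR};

    MPI_Neighbor_alltoallw(sendHalo.data(), counts.data(), displs.data(), types.data(),
                           recvHalo.data(), counts.data(), displs.data(), types.data(),
                           graphComm);

    cells[0] = recvHalo[0];             // ghost cell from the left neighbour
    cells[interior + 1] = recvHalo[1];  // ghost cell from the right neighbour

    MPI_Comm_free(&graphComm);
    MPI_Finalize();
    return 0;
}
```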