Commit 35c0b466 authored by Thomas Ponweiser's avatar Thomas Ponweiser

Halo exchange sample (unstructured grids): updated README.md (known OpenMPI issue #1304)

parent 42023af7
@@ -10,13 +10,13 @@ This code sample demonstrates how to implement halo-exchange for structured or unstructured
* How to use MPI-3's **distributed graph topology** and **neighborhood collectives** (a.k.a. sparse collectives), i.e. `MPI_Dist_graph_create_adjacent` and `MPI_Neighbor_alltoallw`
* How to use MPI-Datatypes to **send and receive non-contiguous data** directly, avoiding send and receive buffer packing and unpacking (a combined sketch of both techniques follows this list).
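
As a rough illustration of how these two techniques combine, the following sketch builds a graph communicator, creates one indexed datatype per neighbor and performs a single exchange with `MPI_Neighbor_alltoallw`. All helper names (`halo_exchange_sketch`, `send_idx`, `recv_cnt`, ...) are hypothetical; the sample's own implementation differs in detail:

    #include <stdlib.h>
    #include <mpi.h>

    /* Sketch only: one halo exchange over a distributed graph topology.
     * `neighbors` holds the ranks of adjacent sub-domains; `send_idx[i]` /
     * `recv_idx[i]` list the cell indices in `field` exchanged with
     * neighbor i (owned cells to send, halo cells to receive). */
    void halo_exchange_sketch(int *field, int nneighbors, const int *neighbors,
                              int **send_idx, const int *send_cnt,
                              int **recv_idx, const int *recv_cnt, MPI_Comm comm)
    {
        MPI_Comm graph_comm;
        MPI_Datatype *types_send = malloc(nneighbors * sizeof(MPI_Datatype));
        MPI_Datatype *types_recv = malloc(nneighbors * sizeof(MPI_Datatype));
        int *ones = malloc(nneighbors * sizeof(int));
        MPI_Aint *zeros = malloc(nneighbors * sizeof(MPI_Aint));

        /* The neighborhood is symmetric, so the same rank list is passed
         * as sources and as destinations. */
        MPI_Dist_graph_create_adjacent(comm,
                                       nneighbors, neighbors, MPI_UNWEIGHTED,
                                       nneighbors, neighbors, MPI_UNWEIGHTED,
                                       MPI_INFO_NULL, 0, &graph_comm);

        /* One indexed datatype per neighbor selects the non-contiguous
         * cells directly in `field`; no pack/unpack buffers are needed. */
        for (int i = 0; i < nneighbors; i++) {
            MPI_Type_create_indexed_block(send_cnt[i], 1, send_idx[i],
                                          MPI_INT, &types_send[i]);
            MPI_Type_commit(&types_send[i]);
            MPI_Type_create_indexed_block(recv_cnt[i], 1, recv_idx[i],
                                          MPI_INT, &types_recv[i]);
            MPI_Type_commit(&types_recv[i]);
            ones[i]  = 1;   /* one datatype instance per neighbor */
            zeros[i] = 0;   /* offsets are encoded in the datatypes */
        }

        /* Sparse collective: communicates with all graph neighbors in one
         * call; send and receive datatypes select disjoint regions of `field`. */
        MPI_Neighbor_alltoallw(field, ones, zeros, types_send,
                               field, ones, zeros, types_recv, graph_comm);

        for (int i = 0; i < nneighbors; i++) {
            MPI_Type_free(&types_send[i]);
            MPI_Type_free(&types_recv[i]);
        }
        MPI_Comm_free(&graph_comm);
        free(types_send); free(types_recv); free(ones); free(zeros);
    }

In a real application the graph communicator and the datatypes would typically be created once and reused across all iterations rather than rebuilt per exchange.
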
For the sake of simplicity, this code sample does not deal with loading and managing any actual mesh data structure. It rather attempts to mimic the typical communication characteristics (i.e. the neighborhood relationships and message size variations between neighbors) for halo-exchange on a 3D unstructured mesh. For this purpose, a simple cube with edge-length *E* is used as global "mesh" domain, consisting of *E*³ regular hexahedral cells. A randomized, iterative algorithm is used for decomposing this cube into irregularly aligned box-shaped sub-domains (a hypothetical sketch of such a decomposition is given below).

Moreover, no actual computation is performed. Only halo-exchange takes place for a configurable number of times (`-i [N]`) and the exchanged information is validated once at the end of the program.
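
For illustration only, one way to obtain such a decomposition is to start from a single box covering the whole cube and repeatedly split the largest remaining box at a random position until the desired number of sub-domains is reached (hypothetical sketch; the sample's actual algorithm lives in `box.c` and may differ):

    #include <stdlib.h>

    typedef struct { int lo[3], hi[3]; } box_t;     /* half-open ranges [lo, hi) */

    /* Split the box with the largest extent along a random plane until
     * `target` boxes exist. `boxes` must have room for `target` entries. */
    static void decompose(box_t *boxes, int *nboxes, int target)
    {
        while (*nboxes < target) {
            int widest = 0, dim = 0, len = 0;
            for (int b = 0; b < *nboxes; b++)
                for (int d = 0; d < 3; d++) {
                    int l = boxes[b].hi[d] - boxes[b].lo[d];
                    if (l > len) { len = l; widest = b; dim = d; }
                }
            if (len < 2) break;                      /* nothing left to split */

            /* random cut position strictly inside the chosen box */
            int cut = boxes[widest].lo[dim] + 1 + rand() % (len - 1);

            boxes[*nboxes] = boxes[widest];          /* upper part: new box */
            boxes[*nboxes].lo[dim] = cut;
            boxes[widest].hi[dim]  = cut;            /* lower part shrinks */
            (*nboxes)++;
        }
    }

Starting from `boxes[0] = {{0,0,0},{E,E,E}}` and `*nboxes = 1`, a single call produces irregularly aligned, non-overlapping boxes that exactly cover the cube.
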
The code sample is structured as follows:
* `box.c`, `box.h`: Simple data structure for box-shaped "mesh" sub-domains and functions for decomposition, intersection, etc.
* `configuration.c`, `configuration.h`: Command-line parsing and basic logging facilities.
* `field.c`, `field.h`: Data structure for mesh-associated data; merely an array of integers in this sample.
* `main.c`: The main program.
@@ -29,7 +29,7 @@ The code sample is structured as follows:
* `mpi_halo_exchange_int_p2p_synchronous`: Halo exchange with *synchronous send*, i.e. `MPI_Irecv` / `MPI_Issend`.
* `mpi_halo_exchange_int_p2p_ready`: Halo exchange with *ready send*, i.e. `MPI_Irecv` / `MPI_Barrier` / `MPI_Irsend` (see the sketch after this list).
* `mpitypes.c`, `mpitypes.h`: Code for initialization of custom MPI-Datatypes.
* `mpitype_indexed_int`: Creates an MPI-Datatype for the transfer of non-contiguous halo data.
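
The ready-send variant relies on the guarantee established by the barrier: once `MPI_Barrier` returns, every rank is known to have posted its receives, so ready-mode sends are legal. A simplified sketch (hypothetical names; the per-neighbor datatypes are assumed to be created as in `mpitype_indexed_int`):

    #include <stdlib.h>
    #include <mpi.h>

    void p2p_ready_sketch(int *field, int nneighbors, const int *neighbors,
                          const MPI_Datatype *types_send,
                          const MPI_Datatype *types_recv, MPI_Comm comm)
    {
        MPI_Request *req = malloc(2 * nneighbors * sizeof(MPI_Request));

        /* post all receives first */
        for (int i = 0; i < nneighbors; i++)
            MPI_Irecv(field, 1, types_recv[i], neighbors[i], 0, comm, &req[i]);

        /* after the barrier, all ranks have posted their receives */
        MPI_Barrier(comm);

        /* ready-mode sends: matching receives are guaranteed to exist */
        for (int i = 0; i < nneighbors; i++)
            MPI_Irsend(field, 1, types_send[i], neighbors[i], 0, comm,
                       &req[nneighbors + i]);

        MPI_Waitall(2 * nneighbors, req, MPI_STATUSES_IGNORE);
        free(req);
    }

Ready-mode sends can avoid some of the rendezvous handshake overhead, at the price of the extra barrier.
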
## Release Date
@@ -71,7 +71,7 @@ Intermediate / Advanced
In order to build the sample, only a working MPI implementation supporting MPI-3 must be available. To compile, simply run:
    make
If you need to use an MPI wrapper compiler other than `mpicc`, e.g. `mpiicc`, type:
@@ -115,7 +115,7 @@ For large numbers as arguments to the options `-i`, `-n` or `-N`, the suffixes '
If you run
    mpirun -n 16 ./haloex -v 2
the command line output should look similar to
    Configuration:
@@ -163,3 +163,15 @@ the command line output should look similar to
    Validating...
    Validation successful.
# Known issues
## OpenMPI issue #1304
There is a [known issue for OpenMPI](https://github.com/open-mpi/ompi/issues/1304) when an MPI Datatype is marked for deallocation (with `MPI_Type_free`) while still in use by non-blocking collective operations. If you are using OpenMPI and get a segmentation fault in `MPI_Bcast`, try re-compiling with:
    make CFLAGS="-DOMPI_BUG_1304" clean all
This just disables two critical calls to `MPI_Type_free`.
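
Conceptually, the guard around those calls looks roughly like the following (illustrative only; the identifiers are placeholders, not the sample's actual code):

    #include <mpi.h>

    /* Hypothetical helper: frees the two exchange datatypes unless the
     * OpenMPI #1304 workaround is active, in which case they are simply
     * leaked for the remaining (short) lifetime of the program. */
    static void free_exchange_types(MPI_Datatype *sendtype, MPI_Datatype *recvtype)
    {
    #ifndef OMPI_BUG_1304
        MPI_Type_free(sendtype);
        MPI_Type_free(recvtype);
    #else
        (void)sendtype;   /* intentionally never freed, since freeing them */
        (void)recvtype;   /* while still in use triggers the OpenMPI bug   */
    #endif
    }
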