Commit f2ade005 authored by Thomas Ponweiser

updated README for DSDE sample (n-body methods)
## Description
*Dynamic Sparse Data Exchange* (DSDE) is a problem central to many HPC applications, such as graph computations (e.g. breadth-first search), sparse matrix computations with sparsity mutations, and **particle codes**.
It occurs in situations where:
* Each process has a set of data items to send to a small number of other processes.
* The destination processes typically do not know how much they will receive from which other process.
* In addition, the send-to relations are changing dynamically and are somewhat localized.
This code sample demonstrates:
* How to implement DSDE using a **non-blocking barrier**, i.e. `MPI_Ibarrier` (introduced in MPI-3), together with non-blocking point-to-point communications (see the sketch after this list).
* How to use MPI-Datatypes to **send and receive structured strided data** directly, avoiding send and receive buffer packing and unpacking.
* How to provide a **custom reduce operation** to `MPI_Reduce`.
* How to use a **Cartesian topology** for MPI communication.
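To make the first item concrete, here is a minimal sketch of the non-blocking-barrier pattern (often called NBX) for plain `int` payloads. The function name `nbx_exchange` and all variable names are illustrative and are not taken from the sample sources, which exchange particle data instead:

```c
#include <mpi.h>
#include <stdlib.h>

/* Minimal NBX-style sketch: each rank sends one int to a few destination
 * ranks; receivers do not know in advance who will send to them. */
void nbx_exchange(const int *dests, int ndests, int payload, MPI_Comm comm)
{
    MPI_Request *sreqs = malloc(ndests * sizeof(MPI_Request));
    MPI_Request barrier_req;
    int barrier_active = 0, done = 0;

    /* Synchronous non-blocking sends: an Issend completes only after the
     * matching receive has started. */
    for (int i = 0; i < ndests; i++)
        MPI_Issend(&payload, 1, MPI_INT, dests[i], 0, comm, &sreqs[i]);

    while (!done) {
        int flag;
        MPI_Status status;

        /* Receive any message that has arrived. */
        MPI_Iprobe(MPI_ANY_SOURCE, 0, comm, &flag, &status);
        if (flag) {
            int data;
            MPI_Recv(&data, 1, MPI_INT, status.MPI_SOURCE, 0, comm,
                     MPI_STATUS_IGNORE);
            /* ... process data ... */
        }

        if (!barrier_active) {
            /* Once all our own sends are matched, enter the barrier. */
            int sends_done;
            MPI_Testall(ndests, sreqs, &sends_done, MPI_STATUSES_IGNORE);
            if (sends_done) {
                MPI_Ibarrier(comm, &barrier_req);
                barrier_active = 1;
            }
        } else {
            /* When the barrier completes, every rank's sends have been
             * matched, so no message can still be in flight. */
            MPI_Test(&barrier_req, &done, MPI_STATUS_IGNORE);
        }
    }
    free(sreqs);
}
```

`MPI_Issend` is essential here: because a synchronous send only completes once the matching receive has started, completion of all sends plus completion of the non-blocking barrier on every rank guarantees termination without anyone knowing the communication pattern up front.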
The code sample is structured as follows:
* `mpicomm.c`, `mpicomm.h`: **Probably the most interesting part**, implementing the core message-passing functionality:
* `mpi_communicate_particles_dynamic`: Implementation of DSDE using `MPI_Ibarrier`, `MPI_Issend`, `MPI_Iprobe` and `MPI_Recv`.
* `mpi_communicate_particles_collective`: Implementation of DSDE using `MPI_Alltoall` (exchange of message sizes) and `MPI_Alltoallw` (exchange of particle data). Expected to be less scalable than the first implementation.
* `mpi_reduce_sim_info`: Reduction for simulation statistics (minimum/maximum particles per processor domain, maximum velocity, etc.); a generic sketch of such a custom reduction follows this list.
* `mpitypes.c`, `mpitypes.h`: Code for initialization of custom MPI-Datatypes.
* `mpitype_part_init_send`: Creates MPI-Datatype for sending structured strided particle data (see the datatype sketch after this list).
* `mpitype_part_init_recv`: Creates MPI-Datatype for receiving structured particle data in contiguous blocks.
* `particles.c`, `particles.h`: Code for maintaining the particle data structure.
* `random.c`, `random.h`: Helper functions for initializing particle data with random values.
* `simulation.c`, `simulation.h`: Demo implementation of the particle simulation.
* `vector.c`, `vector.h`: Helper data structure for arrays of 2-dimensional vectors.
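As an illustration of what `mpitype_part_init_send` does conceptually, here is a generic sketch of a strided send datatype. The `particle` layout and the choice of sending position and mass but not velocity are assumptions made for the example, not the sample's actual data layout:

```c
#include <mpi.h>
#include <stddef.h>

typedef struct { double x[2]; double v[2]; double mass; } particle;

/* Build a datatype that picks position and mass (but not velocity) out of
 * each particle, so strided data can be sent without packing. */
MPI_Datatype make_particle_sendtype(void)
{
    MPI_Datatype t, resized;
    int          blocklens[2] = { 2, 1 };
    MPI_Aint     displs[2]    = { offsetof(particle, x),
                                  offsetof(particle, mass) };
    MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_DOUBLE };

    MPI_Type_create_struct(2, blocklens, displs, types, &t);
    /* Resize so consecutive elements step over a whole particle. */
    MPI_Type_create_resized(t, 0, sizeof(particle), &resized);
    MPI_Type_commit(&resized);
    MPI_Type_free(&t);
    return resized;
}
```

With such a type committed, `MPI_Send(parts, n, sendtype, dest, tag, comm)` transfers `n` strided records directly out of the particle array, with no packing into a temporary buffer.

Similarly, a custom reduction in the spirit of `mpi_reduce_sim_info` can be sketched as follows; the `sim_info` fields are illustrative, and `info_type` is assumed to be a committed MPI struct datatype matching `sim_info`:

```c
#include <mpi.h>

typedef struct { int min_parts; int max_parts; double max_vel; } sim_info;

/* User-defined reduction: elementwise min/max over sim_info records. */
static void reduce_sim_info(void *in, void *inout, int *len, MPI_Datatype *dt)
{
    sim_info *a = in, *b = inout;
    for (int i = 0; i < *len; i++) {
        if (a[i].min_parts < b[i].min_parts) b[i].min_parts = a[i].min_parts;
        if (a[i].max_parts > b[i].max_parts) b[i].max_parts = a[i].max_parts;
        if (a[i].max_vel   > b[i].max_vel)   b[i].max_vel   = a[i].max_vel;
    }
}

/* Register the operation and reduce local statistics to rank 0. */
void reduce_stats(const sim_info *local, sim_info *global,
                  MPI_Datatype info_type, MPI_Comm comm)
{
    MPI_Op op;
    MPI_Op_create(reduce_sim_info, 1 /* commutative */, &op);
    MPI_Reduce(local, global, 1, info_type, op, 0, comm);
    MPI_Op_free(&op);
}
```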
## Release Date
2016-01-18
### Further Command line arguments
* `-v [0-3]`: Specify the output verbosity level - 0: OFF; 1: INFO (default); 2: DEBUG; 3: TRACE.
* `-g [rank]`: Debug MPI process with specified rank. Enables debug output for the specified rank (otherwise only output of rank 0 is written) and, if compiled with `-CFLAGS="-g -DDEBUG_ATTACH"`, enables a waiting loop for the specified rank which allows attaching a debugger.
* `--use-cart-topo`: Use an MPI communicator with Cartesian topology (according to `nx` and `ny`) for all MPI communications. If supported, this allows MPI to map ranks better to the physical hardware topology (a minimal sketch follows this option list).
* `--collective`: Use collective operations for DSDE, i.e. `MPI_Alltoall` for exchanging message sizes and `MPI_Alltoallw` for exchanging particle data.
* `--dynamic`: Use the more sophisticated communication scheme for DSDE, i.e. `MPI_Issend`, `MPI_Iprobe`, `MPI_Ibarrier`.
* `-M [max-mass]`: Maximum particle mass; Default: 1.
* `-F [max-random-force]`: Maximum magnitude of random force applied to each particle in each iteration; Default: 0 (Disabled).
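The following sketch shows the kind of communicator setup that `--use-cart-topo` implies; whether the sample actually uses periodic boundaries is an assumption made here for illustration:

```c
#include <mpi.h>

/* Create an nx-by-ny Cartesian communicator; with reorder = 1 the MPI
 * library may renumber ranks to match the physical hardware topology. */
MPI_Comm make_cart_comm(int nx, int ny)
{
    int dims[2]    = { nx, ny };
    int periods[2] = { 1, 1 };   /* assumed: wrap around in both directions */
    MPI_Comm cart;

    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1 /* reorder */, &cart);

    /* Ranks of the neighbouring processes in x and y direction, as would
     * be needed for exchanging particles between adjacent cells. */
    int left, right, down, up;
    MPI_Cart_shift(cart, 0, 1, &left, &right);
    MPI_Cart_shift(cart, 1, 1, &down, &up);

    return cart;
}
```

With `reorder = 1`, the library is free to renumber ranks so that neighbouring grid cells land on nearby cores or nodes.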
For large numbers as arguments to the options `-i`, `-n` or `-N`, the suffixes 'k' or 'M' may be used. For example, `-n 16k` specifies 16 * 1024 particles per cell; `-N 1M` specifies 1024 * 1024 (~1 million) particles in total.
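A parser for such suffixed arguments can be as small as the following hypothetical helper (`parse_count` is not a name from the sample sources):

```c
#include <stdlib.h>

/* Parse a count such as "16k" or "1M", where k = 1024 and M = 1024 * 1024. */
long parse_count(const char *s)
{
    char *end;
    long n = strtol(s, &end, 10);
    if (*end == 'k')      n *= 1024L;
    else if (*end == 'M') n *= 1024L * 1024L;
    return n;
}
```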
### Example