This operation performs a tree-wise data reduction (here: bit-wise OR) over
all participating processes with MPI_Reduce and then distributes the
result in parts to all participating nodes with MPI_Scatterv. Every node
receives a different part of the result array. This kind of distribution
of the result to all participating nodes is similar to that of
MPI_Reduce_scatter, so it is interesting to compare this operation with
MPI_Reduce_scatter, which distributes the reduced result to all nodes in a
single call. We vary the number of nodes, using a message length of 256
bytes for each node.
Pattern: Collective varied over number of nodes.
Default values: 8 nodes, message length 256 units, max./act. time for suite disabled/0.00 min.
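
The following C sketch illustrates the measured two-step combination
(MPI_Reduce followed by MPI_Scatterv) and the single-call alternative
(MPI_Reduce_scatter). It is not the benchmark's actual code; the chunk
size, the input data, and the choice of MPI_INT with MPI_BOR are
illustrative assumptions.

    /* Sketch: emulate MPI_Reduce_scatter with MPI_Reduce + MPI_Scatterv.
     * Assumptions: equal-sized chunks, MPI_INT data, MPI_BOR reduction. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int chunk = 64;                 /* elements per process (illustrative) */
        int *sendbuf = malloc(size * chunk * sizeof(int));
        int *recvbuf = malloc(chunk * sizeof(int));
        int *counts  = malloc(size * sizeof(int));
        int *displs  = malloc(size * sizeof(int));
        int *reduced = NULL;

        for (int i = 0; i < size * chunk; i++)
            sendbuf[i] = rank + i;            /* arbitrary input data */
        for (int i = 0; i < size; i++) {
            counts[i] = chunk;                /* every node gets one chunk */
            displs[i] = i * chunk;
        }
        if (rank == 0)                        /* full result needed only at root */
            reduced = malloc(size * chunk * sizeof(int));

        /* Step 1: tree-wise bit-wise OR reduction of the whole array onto root. */
        MPI_Reduce(sendbuf, reduced, size * chunk, MPI_INT, MPI_BOR,
                   0, MPI_COMM_WORLD);

        /* Step 2: scatter a different part of the result to every node. */
        MPI_Scatterv(reduced, counts, displs, MPI_INT,
                     recvbuf, chunk, MPI_INT, 0, MPI_COMM_WORLD);

        /* For comparison: the same reduce-and-distribute in one call. */
        MPI_Reduce_scatter(sendbuf, recvbuf, counts, MPI_INT, MPI_BOR,
                           MPI_COMM_WORLD);

        free(sendbuf); free(recvbuf); free(counts); free(displs);
        if (rank == 0) free(reduced);
        MPI_Finalize();
        return 0;
    }

With equal chunk sizes as above, MPI_Reduce_scatter produces the same
per-node result as the two-step version, so the comparison isolates the
cost of combining the reduction and the distribution into one collective.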