
MPI_Reduce_Bcast-nodes

This operation performs a tree-wise reduction (here: bit-wise OR) across all participating processes with MPI_Reduce and then distributes the result to all participating nodes with MPI_Bcast. This distribution of the result to all participating nodes is what distinguishes the pattern from a plain MPI_Reduce, which leaves the result on a single root process only. It is therefore interesting to compare this two-call pattern with MPI_Allreduce, which distributes the result to all nodes in a single call. We vary the number of nodes, using a message length of 256 bytes per node.
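The following minimal sketch illustrates the measured pattern next to the MPI_Allreduce reference; the choice of root rank 0, the buffer initialization, and MPI_COMM_WORLD are illustrative assumptions, while the 256-byte message and the bit-wise OR operation follow the description above.

#include <mpi.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* 256-byte message per node, as in this measurement. */
    unsigned char sendbuf[256], recvbuf[256];
    memset(sendbuf, 0xAA, sizeof(sendbuf));

    /* Pattern under test: bit-wise OR reduction to root 0,
     * then broadcast of the result from root 0 to all processes. */
    MPI_Reduce(sendbuf, recvbuf, 256, MPI_BYTE, MPI_BOR,
               0, MPI_COMM_WORLD);
    MPI_Bcast(recvbuf, 256, MPI_BYTE, 0, MPI_COMM_WORLD);

    /* Reference operation: the same result in a single collective call. */
    MPI_Allreduce(sendbuf, recvbuf, 256, MPI_BYTE, MPI_BOR,
                  MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}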

\epsfig{file=col_MPI_Reduce_Bcast-nodes.eps}

Varied quantity:     number of nodes
X axis scale:        linear
Param. refinement:   no automatic x wide adaption
Argument range:      2 - 8 units (see below)
Argument stepwidth:  1.000000
