
MPI_Allreduce-nodes

This operation performs a tree-wise data reduction (here: bit-wise or) over all participating processes and distributes the result to all of them. This distribution of the result to every participating node is what distinguishes it from the plain MPI_Reduce operation, where the result is stored only at a single root process. It is therefore interesting to compare this operation to the plain MPI_Reduce and to an MPI_Reduce followed by an MPI_Bcast (our measurement MPI_Reduce_Bcast), which also distributes the result to all nodes. We vary the number of nodes at a fixed message length of 256 bytes per node.
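As a minimal sketch (not taken from the benchmark code itself), the two variants compared here could look as follows in C; the buffer contents and the choice of rank 0 as root are illustrative assumptions:

/* Sketch: bit-wise OR reduction over 256 bytes per process,
 * once with MPI_Allreduce and once with MPI_Reduce + MPI_Bcast. */
#include <mpi.h>
#include <string.h>

#define MSG_LEN 256   /* message length in bytes, as in the measurement */

int main(int argc, char **argv)
{
    unsigned char sendbuf[MSG_LEN], recvbuf[MSG_LEN];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(sendbuf, 1 << (rank % 8), MSG_LEN);  /* arbitrary per-rank data */

    /* Variant 1: MPI_Allreduce -- the reduced result ends up on every process. */
    MPI_Allreduce(sendbuf, recvbuf, MSG_LEN, MPI_BYTE, MPI_BOR, MPI_COMM_WORLD);

    /* Variant 2: MPI_Reduce followed by MPI_Bcast -- same final state,
     * but the result is first collected at root 0 and then broadcast. */
    MPI_Reduce(sendbuf, recvbuf, MSG_LEN, MPI_BYTE, MPI_BOR, 0, MPI_COMM_WORLD);
    MPI_Bcast(recvbuf, MSG_LEN, MPI_BYTE, 0, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}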

Figure: col_MPI_Allreduce-nodes.eps (measured time vs. number of nodes)

  X axis              number of nodes
  X axis scale        linear
  Param. refinement   no automatic x wide adaption
  Argument range      2 - 8 units (s. below)
  Argument stepwidth  1.000000
