MPI Messages
Cost of MPI Messages
Over a given network we can estimate the time taken to send a message:
\[\text{message time}\propto k\left(\text{latency}+\frac{\text{message size}}{\text{bandwidth}}\right)\]
For smaller messages, the latency term makes a much larger contribution to the total time.
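For illustration, take a hypothetical network with a latency of 1 µs and a bandwidth of 10 GB/s (these figures are assumptions for the example, not measurements). A 1 kB message costs roughly
\[1\,\mu\text{s}+\frac{10^{3}\,\text{B}}{10^{10}\,\text{B/s}}=1.1\,\mu\text{s},\]
so around 90% of the time is latency. A 1 MB message costs roughly \(1\,\mu\text{s}+100\,\mu\text{s}=101\,\mu\text{s}\), where latency is only about 1% of the cost. This is why batching many small messages into fewer large ones usually pays off.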
MPI_Reduce
C Syntax
```c
#include <mpi.h>

int MPI_Reduce(const void *sendbuf, void *recvbuf, int count,
               MPI_Datatype datatype, MPI_Op op, int root,
               MPI_Comm comm)
```
Every rank in the communicator must call this function, which is what makes it a collective communication; only the root rank receives the reduced result.
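As a minimal sketch (the sum operation and root rank 0 are choices made for this example), every rank contributes one integer and rank 0 receives the total:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank;   /* value contributed by this rank */
    int total = 0;      /* only meaningful on the root rank */

    /* All ranks must make this call; only root (0) gets the result. */
    MPI_Reduce(&local, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d = %d\n", size - 1, total);

    MPI_Finalize();
    return 0;
}
```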
Collective Functions
Name | Function | Type |
---|---|---|
`MPI_Bcast` | Send the same data to all ranks. | One to many. |
`MPI_Scatter` | Distribute data between ranks. | One to many. |
`MPI_Gather` | Collect data together from ranks. | Many to one. |
`MPI_Reduce` | Pull data from all ranks and apply a reduction operation (e.g. summation, product). | Many to one. |
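A hedged sketch of the one-to-many and many-to-one patterns together: the root scatters a fixed-size chunk to each rank, each rank processes its chunk, and the root gathers the results back. The chunk size and buffer sizes are assumptions made for the example.

```c
#include <mpi.h>
#include <stdio.h>

#define PER_RANK 2  /* elements per rank; chosen for this sketch */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int send[64], recv[PER_RANK];        /* send[] assumes size*PER_RANK <= 64 */
    if (rank == 0)                       /* root prepares the full array */
        for (int i = 0; i < size * PER_RANK; i++)
            send[i] = i;

    /* One to many: each rank receives its own PER_RANK-element chunk. */
    MPI_Scatter(send, PER_RANK, MPI_INT, recv, PER_RANK, MPI_INT,
                0, MPI_COMM_WORLD);

    for (int i = 0; i < PER_RANK; i++)   /* work on the local chunk */
        recv[i] *= 2;

    /* Many to one: root collects the processed chunks back in rank order. */
    MPI_Gather(recv, PER_RANK, MPI_INT, send, PER_RANK, MPI_INT,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int i = 0; i < size * PER_RANK; i++)
            printf("send[%d] = %d\n", i, send[i]);

    MPI_Finalize();
    return 0;
}
```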
- There are also `MPI_Allgather` and `MPI_Allreduce`, which behave like `MPI_Gather` and `MPI_Reduce` but broadcast the result to all ranks. `MPI_Scatter`, `MPI_Gather`, and `MPI_Allgather` are also available in the forms `MPI_Scatterv`, `MPI_Gatherv`, and `MPI_Allgatherv`, which allow sending of variable-sized data chunks; see the sketch after this list.
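As an illustration of the variable-count variants, here is a hedged sketch using `MPI_Scatterv`, where rank i receives i+1 integers; the chunk sizes, offsets, and buffer sizes are all choices made for this example.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Arrays sized for up to 64 ranks; 2080 = 1 + 2 + ... + 64. */
    int counts[64], displs[64], send[2080];
    int total = 0;
    for (int i = 0; i < size; i++) {
        counts[i] = i + 1;      /* rank i gets i+1 elements */
        displs[i] = total;      /* offset of rank i's chunk in send[] */
        total += counts[i];
    }
    if (rank == 0)              /* only the root's send buffer is read */
        for (int i = 0; i < total; i++)
            send[i] = i;

    int recv[64];
    /* Unlike MPI_Scatter, each rank can receive a different count. */
    MPI_Scatterv(send, counts, displs, MPI_INT,
                 recv, counts[rank], MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d received %d element(s), first = %d\n",
           rank, counts[rank], recv[0]);

    MPI_Finalize();
    return 0;
}
```

`MPI_Gatherv` and `MPI_Allgatherv` take the counts and displacements on the receiving side instead, but follow the same pattern.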