UoL CS Notes

Message Passing Interface (MPI) Introduction

COMP328 Lectures

Types of Locks

Deadlock
Occurs when two or more tasks wait for each other and each will not resume until some action is taken.
Livelock
Occurs when the tasks involved in a deadlock take action to resolve the original deadlock but in such a way that there is no progress.
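
For example, a classic way to produce a deadlock is for two tasks to acquire two locks in opposite orders. A minimal C sketch of this (using pthreads; not taken from the lecture) hangs forever at the pthread_join calls:

/* Compile with: gcc deadlock.c -pthread */
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

pthread_mutex_t lockA = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lockB = PTHREAD_MUTEX_INITIALIZER;

void *taskOne(void *arg) {
	pthread_mutex_lock(&lockA);
	sleep(1);                    /* give taskTwo time to take lockB */
	pthread_mutex_lock(&lockB);  /* blocks forever: taskTwo holds lockB */
	pthread_mutex_unlock(&lockB);
	pthread_mutex_unlock(&lockA);
	return NULL;
}

void *taskTwo(void *arg) {
	pthread_mutex_lock(&lockB);
	sleep(1);                    /* give taskOne time to take lockA */
	pthread_mutex_lock(&lockA);  /* blocks forever: taskOne holds lockA */
	pthread_mutex_unlock(&lockA);
	pthread_mutex_unlock(&lockB);
	return NULL;
}

int main(void) {
	pthread_t t1, t2;
	pthread_create(&t1, NULL, taskOne, NULL);
	pthread_create(&t2, NULL, taskTwo, NULL);
	pthread_join(t1, NULL);      /* never returns: the two tasks deadlock */
	pthread_join(t2, NULL);
	printf("This line is never reached\n");
	return 0;
}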

Parallel Programming Models

Shared Memory:

  • All memory is accessible
  • Programmed using OpenMP.
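
As an aside, a minimal OpenMP sketch (not from the lecture) shows what the shared-memory model looks like in practice; every thread updates the same variable directly, with no messages involved:

/* Compile with: gcc hello_omp.c -fopenmp */
#include <stdio.h>
#include <omp.h>

int main(void) {
	int counter = 0;            /* one variable, visible to every thread */

	#pragma omp parallel
	{
		printf("Hi from thread %d of %d\n",
		       omp_get_thread_num(), omp_get_num_threads());

		#pragma omp atomic      /* safe concurrent update of shared memory */
		counter++;
	}

	printf("%d threads updated the same variable\n", counter);
	return 0;
}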

Distributed Memory:

  • Sharing of memory must be explicit.
  • Network latency and bandwidth become important.
  • Programmed using MPI:
    • Message passing interface.

Flynn’s Taxonomy

MPMD
Multiple programs multiple data.
SPMD
Single program multiple data. In practice this is usually multiple copies of the same program, each working on its own portion of the data.

Data must be passed explicitly between the processes.

MPI Vocabulary

Process
Each instance of the code runs as an MPI process. Each process has a numbered rank.
Communicator
A collection of processes that can send messages to each other. The default communicator, containing all processes, is MPI_COMM_WORLD.
Rank
A numerical ID of a process within a communicator (indexed from 0).

Example MPI Programmes

#include <stdio.h>
#include <mpi.h>

/* mpiRankWorkToDo() stands in for whatever work each rank performs. */
int main(void){
	MPI_Init(NULL, NULL);   /* set up the MPI environment */
	mpiRankWorkToDo();      /* every process runs this same code */
	MPI_Finalize();         /* shut MPI down before exiting */
	return 0;
}

#include <stdio.h>
#include <mpi.h>

int main(void) {
	int myID;
	MPI_Init(NULL, NULL);
	MPI_Comm_rank(MPI_COMM_WORLD, &myID);   /* ask MPI which rank this process is */
	printf("Hi from process ranked %d\n", myID);
	MPI_Finalize();
	return 0;
}

We can compile this with mpicc and run it with:

$ mpirun -np <num_processes> ./a.out
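
For example, assuming the second program above has been saved as hello.c, a session might look like this (the order in which the ranks print is not deterministic):

$ mpicc hello.c -o hello
$ mpirun -np 4 ./hello
Hi from process ranked 1
Hi from process ranked 3
Hi from process ranked 0
Hi from process ranked 2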

MPI Point-to-Point Communications

  • One sender
  • One receiver

There are two important functions for MPI (taken from man):

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Send(const void *buf, int count, MPI_Datatype datatype, int dest,
            int tag, MPI_Comm comm)
SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
            int source, int tag, MPI_Comm comm, MPI_Status *status)

These functions also take the datatype and the source/destination rank (plus a tag), which MPI uses to match sends with the corresponding receives.
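
For instance, here is a minimal sketch (assuming the job is launched with at least two processes) in which rank 0 sends a single integer to rank 1; the count, datatype and tag on the receive must match the send:

#include <stdio.h>
#include <mpi.h>

int main(void) {
	int myRank, value;
	MPI_Init(NULL, NULL);
	MPI_Comm_rank(MPI_COMM_WORLD, &myRank);

	if (myRank == 0) {
		value = 42;
		/* send one MPI_INT to rank 1 with tag 0 */
		MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
	} else if (myRank == 1) {
		/* receive one MPI_INT from rank 0 with tag 0 */
		MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
		printf("Rank 1 received %d from rank 0\n", value);
	}

	MPI_Finalize();
	return 0;
}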

We need to build in logic so that our code can detect its own rank:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>
const int MAX_STRING = 100;
int main(void) {
	int commSize;
	int myRank;
	
	char greeting[MAX_STRING];
	
	MPI_Init(NULL, NULL);
	MPI_Comm_size(MPI_COMM_WORLD, &commSize);
	MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
	
	char processor_name[MPI_MAX_PROCESSOR_NAME];
	int name_len;
	MPI_Get_processor_name(processor_name, &name_len);
	
	if(myRank != 0){
		/* All ranks except 0 build a greeting and send it to rank 0. */
		sprintf(greeting, "Greetings from processor %s, process %d of %d!", processor_name, myRank, commSize);
		MPI_Send(greeting, strlen(greeting)+1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
	} else {
		/* Rank 0 prints its own greeting, then collects one from every other rank. */
		printf("Greetings from processor %s, process %d of %d!\n", processor_name, myRank, commSize);
		int i;
		for(i = 1; i < commSize; i++){
			MPI_Recv(greeting, MAX_STRING, MPI_CHAR, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
			printf("%s\n", greeting);
		}
	}
	
	MPI_Finalize();
	return 0;
}

Point-to-Point Problems

Both MPI_Recv and MPI_Send are blocking functions:

  • Send will not return until the message has been received (more precisely, until the send buffer can safely be reused).
  • Receive will not return until a matching message has arrived.

This can lead to deadlocks if all ranks are trying to send and receive at the same time.

One solution is to have the odd ranks send while the even ranks receive for one cycle, and then swap the roles if required. This way data flows properly and there are no deadlocks.
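
A minimal sketch of this odd/even pattern (assuming at least two processes arranged in a ring, each passing one integer to its right-hand neighbour):

#include <stdio.h>
#include <mpi.h>

int main(void) {
	int myRank, commSize, fromLeft;
	MPI_Init(NULL, NULL);
	MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
	MPI_Comm_size(MPI_COMM_WORLD, &commSize);

	int right = (myRank + 1) % commSize;            /* neighbour we send to */
	int left  = (myRank - 1 + commSize) % commSize; /* neighbour we receive from */

	if (myRank % 2 == 1) {
		/* Odd ranks send first, then receive. */
		MPI_Send(&myRank, 1, MPI_INT, right, 0, MPI_COMM_WORLD);
		MPI_Recv(&fromLeft, 1, MPI_INT, left, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
	} else {
		/* Even ranks receive first, then send, so no two neighbours
		   are ever stuck in MPI_Send waiting on each other. */
		MPI_Recv(&fromLeft, 1, MPI_INT, left, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
		MPI_Send(&myRank, 1, MPI_INT, right, 0, MPI_COMM_WORLD);
	}

	printf("Rank %d received %d from rank %d\n", myRank, fromLeft, left);
	MPI_Finalize();
	return 0;
}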