Hello, it's me again, Drifter Programming! Today we will implement a **distributed algorithm** that is used for **routing packets through a Distributed System**. I will first give you a quick introduction to what Distributed Systems are, then talk about the algorithm, and finally implement it using OpenMPI!

So, without further ado, let's get started!

## Introduction

Running code on only a **single computing unit** (typically one thread or core) **is not very efficient** and also not fast enough for what we want computers to do. Because of that, we ended up creating **multi-core and multi-threaded CPUs** that let us run multiple tasks/processes at once. This is what we call **parallel computing**.

Even more extreme is the idea of splitting a program into parts that can be executed by completely different machines/computers. This creates **Distributed Systems**, where the different computers **pass messages through the network** that connects them. This connection can be organized using many **architectures**, such as client-server, peer-to-peer and n-tier.

The **difference** between parallel and distributed computing is exactly the way we pass information. In parallel computing (multi-core systems), the different compute units (CPU cores) have **shared memory access** and can communicate very quickly through the bus. In distributed systems we have **distributed memory**: every computer has its own memory, and the only way of passing information is, of course, through messages over the network!

## Message Passing Interface (MPI)

The **communication protocol** most commonly used when coding distributed algorithms is MPI!

MPI is a **library** that provides us with a lot of **functionality**. It contains functions for sending and receiving messages in an abstract, topology-independent way. Plenty of datatypes are supported, and communication becomes very simple!
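To give you a first taste of the API (this is a generic MPI sketch, not yet part of today's algorithm), here is a minimal program where process 0 sends one integer to process 1:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]){
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // who am I?
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs); // how many processes are there?

    if (nprocs >= 2){
        if (rank == 0){
            int value = 42;
            // send one MPI_INT to process 1, with tag 0
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        }
        else if (rank == 1){
            int value;
            // receive one MPI_INT from process 0, with tag 0
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("process 1 received %d\n", value);
        }
    }

    MPI_Finalize();
    return 0;
}
```

Compile with `mpicc` and run with, for example, `mpirun -np 2 ./a.out`.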

Because it would take more than one post to cover MPI alone, and a lot has already been posted online, I suggest you read about it on your own.

Here are some **links**:

## Routing algorithms

In my Java graphs series I already covered some of those algorithms.

**Routing algorithms** are those that give us the **shortest path to a specific destination**. This means we calculate the shortest paths from one or more vertices to all the others, and can then use this **routing matrix** to see which node is the best one to forward a packet to, so that it reaches another node in the most efficient way.

**Algorithms** that start from one (source) vertex are Dijkstra's and Bellman-Ford; I covered both in my Java graphs series.

On the other hand, **Floyd-Warshall** (or even Johnson's algorithm) finds the shortest paths for all pairs. The distributed algorithm that we will talk about today is based on Floyd-Warshall, so I suggest you read about it first!

**Floyd-Warshall** calculates the minimum path from and to every node using the weights on the edges/connections between those nodes. It is, of course, a non-distributed algorithm, and because it finds all the paths it has a complexity of Θ(N^3): N^2 relaxation steps for each of the N intermediate vertices.

## Toueg Algorithm

Toueg's algorithm is:

- A distributed algorithm
- Based on Floyd-Warshall
- An algorithm where each node knows its neighbours and the weights of its connections to them
- An algorithm where N messages are passed through each channel/connection, N*E in total (with E being the number of edges)

Each node u executes the following **algorithm** (pseudocode):

```
Initialize Su := ∅

forall v in V do
    if v == u then Du[v] := 0; Nbu[v] := udef
    else if v in Neighu then Du[v] := w(u,v); Nbu[v] := v
    else Du[v] := inf; Nbu[v] := udef

while Su != V do
    pick w from V \ Su
    if u == w then broadcast table Du
    else receive table Dw
    forall v in V do
        if Du[w] + Dw[v] < Du[v] then
            Du[v] := Du[w] + Dw[v]
            Nbu[v] := Nbu[w]
    Su := Su ∪ {w}
```

**MPI Implementation:**

```c
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MASTER 0

double **graph_initialization(int size);
void print_graph(double **adj, int size);
void print_neighbours(double *d, int *nb, int size);

int main(int argc, char *argv[]){
    int rank, size, retVal, i, j;
    double **adj = NULL; // adjacency matrix (allocated only on the master)
    double *du;          // this process's distance table
    double *dw;          // distance table received from the current pivot
    int *nb;             // next-hop (neighbour) table
    MPI_Status status;

    retVal = MPI_Init(&argc, &argv);
    if (retVal != MPI_SUCCESS){
        printf("Error starting MPI.\n");
        MPI_Abort(MPI_COMM_WORLD, retVal);
    }

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    du = (double*) malloc(size * sizeof(double));
    dw = (double*) malloc(size * sizeof(double));
    nb = (int*) malloc(size * sizeof(int));

    if (rank == MASTER){
        // graph initialization
        adj = graph_initialization(size);
        printf("-------------------------------\n");
        printf("Graph adj matrix: \n");
        print_graph(adj, size);
        printf("-------------------------------\n");

        // send each other process its row of the adjacency matrix
        for (i = 0; i < size; i++){
            if (i == MASTER) continue; // same process

            // prepare row
            for (j = 0; j < size; j++){
                du[j] = adj[i][j];
            }

            // send row
            MPI_Send(&(du[0]), size, MPI_DOUBLE, i, 0, MPI_COMM_WORLD);
        }

        // set own neighbour distances
        for (i = 0; i < size; i++){
            du[i] = adj[MASTER][i];
        }
    }
    else{ // not the master process
        // receive own row from the master
        MPI_Recv(&(du[0]), size, MPI_DOUBLE, MASTER, 0, MPI_COMM_WORLD, &status);
    }

    // initialize next-hop table: every node is initially reached directly
    for (i = 0; i < size; i++){
        nb[i] = i;
    }

    // main Toueg loop: every process plays the pivot w exactly once
    for (i = 0; i < size; i++){ // i plays the role of w
        if (i == rank){
            // this process is the pivot: prepare its table for broadcast
            for (j = 0; j < size; j++){
                dw[j] = du[j];
            }
        }
        // broadcasts dw if i == rank, otherwise receives it into dw
        MPI_Bcast(&(dw[0]), size, MPI_DOUBLE, i, MPI_COMM_WORLD);

        if (i == rank) continue;

        // relaxation: if du[w] + dw[v] < du[v] then du[v] = du[w] + dw[v]
        for (j = 0; j < size; j++){ // j plays the role of v
            if (du[i] + dw[j] < du[j]){
                du[j] = du[i] + dw[j];
                nb[j] = nb[i]; // Nbu[v] := Nbu[w], as in the pseudocode
            }
        }
    }

    printf("process %d best weights: ", rank);
    print_neighbours(du, nb, size);

    MPI_Finalize();
    return 0;
}

// FUNCTIONS
double **graph_initialization(int size){
    double **temp;
    int i, j;

    srand(time(NULL));

    // memory allocation and initialization
    temp = (double**) malloc(size * sizeof(double*));
    for (i = 0; i < size; i++){
        temp[i] = (double*) malloc(size * sizeof(double));
        for (j = 0; j < size; j++){
            if (i == j){
                temp[i][j] = 0;
            }
            else{
                temp[i][j] = 1 + rand()%15 + 0.1*(rand()%10); // random weight 1.0 - 15.9
            }
        }
    }

    return temp;
}

void print_graph(double **adj, int size){
    int i, j;
    for (i = 0; i < size; i++){
        for (j = 0; j < size; j++){
            printf("%g ", adj[i][j]);
        }
        printf("\n");
    }
}

void print_neighbours(double *d, int *nb, int size){
    int i;
    for (i = 0; i < size; i++){
        printf("%g(%d) ", d[i], nb[i]);
    }
    printf("\n");
}
```

You can download the code from here. Compile it with `mpicc` and run it with `mpirun -np <number of processes>`.


I hope that you enjoyed it!

I'm thinking of uploading more implementations of algorithms using MPI and other protocols! Algorithms in general are very interesting and useful, and they seem to be a topic of great interest!

Bye!