
Simulation of a colony of ants

Camille Coti

[email protected]

QosCosGrid Barcelona meeting, 10/25/06

MPI Labs


Introduction to ants

  • How ants find food and how they remember the path

    • Random walk around the source

    • Once they find some food: go back to the source

    • Drop pheromones along this path

  • When they find some pheromones:

    • Follow them

    • Pheromones evaporate, thereby limiting their influence over time


Modelling an ant colony

  • Cellular automata

    • Grid of cells, represented as a matrix

    • State of a cell:

      • It can hold one or several ants

      • Pheromones may have been dropped on it

      • It can also be empty

    • We define a transition rule from time t to time t+1


A picture can make things easier

[Figure: the ant-hill (where the ants live) and the food source; ants spread around the ant-hill, and ants that have found the food drop pheromones on their way back.]


Update algorithm

  • every ant looks around itself

  • if it finds pheromones:

    • follow them

  • if it finds some food:

    • take some

    • go back to the ant-hill, dropping pheromones along the path

  • otherwise:

    • choose a random direction


Parallelisation

  • Share the grid among the processors

    • Each processor computes a part of the calculation

    • Use MPI communication between the processes

    • This is parallel computing ☺

[Figure: the grid is split into four bands, one per process: Proc #0, Proc #1, Proc #2, Proc #3.]


Parallelisation

  • Each processor can compute the transition rule for almost all of the space it is assigned

    • BUT there is a problem near the boundaries: each processor needs to know the state of neighbouring cells that belong to other processors

    • SO each processor has to send the state of its frontiers to its neighbours

  • Overlap computation and communications

    • Start non-blocking communications

    • Compute

    • Wait for the communications to finish (they have often already completed by then)


Algorithm of the parallelisation

  • Initialisation

  • for n iterations do:

    • send/receive frontiers

    • compute the transition rule (except near the frontiers)

    • finish the communications

    • compute the transition rule near the frontiers

    • send the result

    • update the bounds (ants might have walked across the frontiers)
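
Put together, one plausible shape for this loop in MPI is the following sketch. The routine names (compute_interior, compute_frontiers, update_bounds) and the buffer layout are placeholders for the lab's own code, not the provided API:

```c
#include <mpi.h>

/* Placeholders for the lab's own routines. */
void compute_interior(void);
void compute_frontiers(void);
void update_bounds(void);

/* One band per process; my_west/my_east are the boundary columns we
 * own, ghost_west/ghost_east receive the neighbours' boundaries. */
void simulate(int n, char *my_west, char *my_east,
              char *ghost_west, char *ghost_east, int count,
              int west, int east, MPI_Comm comm)
{
    MPI_Request req[4];

    for (int it = 0; it < n; it++) {
        /* send/receive frontiers */
        MPI_Irecv(ghost_west, count, MPI_CHAR, west, 0, comm, &req[0]);
        MPI_Irecv(ghost_east, count, MPI_CHAR, east, 0, comm, &req[1]);
        MPI_Isend(my_west,    count, MPI_CHAR, west, 0, comm, &req[2]);
        MPI_Isend(my_east,    count, MPI_CHAR, east, 0, comm, &req[3]);

        compute_interior();   /* transition rule, except near frontiers */
        MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
        compute_frontiers();  /* now the ghost cells are valid */
        update_bounds();      /* ants may have crossed the frontiers */
    }
}
```

Posting the receives before the sends and computing the interior while messages are in flight is what realises the overlap described above.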


What you have to do

  • We provide you

    • The basic functions

    • The update rule

  • You have to write

    • The MPI communications

    • The creation and declaration of an MPI data type


Some “good” practice rules

  • Initialise the MPI environment

    • MPI_Init(&argc, &argv);

    • MPI_Comm_size(MPI_COMM_WORLD, &size);

    • MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  • Finalise it

    • MPI_Finalize();


Some “good” practice rules

  • Use non-blocking communications rather than blocking ones

    • MPI_Isend() / MPI_Irecv()

    • Wait for completion with MPI_Waitall()

  • So that you can overlap communications with computation


Creating a new MPI data type

  • Declare the types that will be contained

    • MPI_Datatype types[2] = {MPI_INT, MPI_CHAR};

  • Declare the displacement of each field

    • MPI_Aint displ[2] = {0, 4};

  • Create your structure and declare its name

    • MPI_Type_create_struct(...)

  • And commit it

    • MPI_Type_commit(...)
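
The hard-coded displacement {0, 4} above happens to be right for a 4-byte int followed by a char, but it is safer to derive the displacements from the actual struct layout with offsetof. A sketch putting the four steps together (the struct and function name are illustrative):

```c
#include <stddef.h>   /* offsetof */
#include <mpi.h>

/* Illustrative payload: one int and one char per record. */
struct record {
    int  value;
    char flag;
};

MPI_Datatype make_record_type(void)
{
    MPI_Datatype newtype;
    int          blocklens[2] = {1, 1};
    MPI_Datatype types[2]     = {MPI_INT, MPI_CHAR};
    /* Compute displacements from the real layout instead of
     * hard-coding {0, 4}, which breaks if padding differs. */
    MPI_Aint displ[2] = {offsetof(struct record, value),
                         offsetof(struct record, flag)};

    MPI_Type_create_struct(2, blocklens, displ, types, &newtype);
    MPI_Type_commit(&newtype);
    return newtype;
}
```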


Create a topology

  • For example, create a torus

    • void create_comm_torus_1D(){

      • int mpierrno, period, reorder;

      • period=1; reorder=0; /* period=1 makes the ring wrap around, hence a torus */

      • mpierrno=MPI_Cart_create(MPI_COMM_WORLD, 1, &comm_size, &period, reorder, &comm_torus_1D);

      • MPI_Cart_shift(comm_torus_1D,0,1,&my_west_rank, &my_east_rank);

    • }

  • (you won't have to do this for the labs, as this function is provided, but it is good for your general knowledge)


Some collective communications

  • Reductions: sum, min, max...

    • Useful for time measurements or to make a global sum of local results, for example

    • MPI_Reduce(...)

  • Barriers

    • All the processes get synchronised

    • MPI_Barrier(communicator)


Misc

  • Time measurement:

    • t1 = MPI_Wtime();

    • /* ... computation to measure ... */

    • t2 = MPI_Wtime();

    • time_elapsed = t2 - t1;

    • MPI_Wtime() returns the wall-clock time on the calling processor


If you need more

  • www.lri.fr/~coti/QosCosGrid

  • Feel free to ask questions ☺

