
Hardware Environment




  1. Hardware Environment
  • VIA cluster – 8 nodes
    • Two 1.0 GHz VIA-C3 processors per node
    • Connected with Gigabit Ethernet
    • Linux kernel – 2.6.8-1-smp
  • Blade server – 5 nodes
    • Two 3.0 GHz Intel Xeon processors per node
    • Each Xeon processor supports Hyper-Threading
    • Connected with Gigabit Ethernet
    • Linux kernel – 2.6.8-1-smp

  2. MPI – message passing interface
  • Basic data types
    • MPI_CHAR – char
    • MPI_UNSIGNED_CHAR – unsigned char
    • MPI_BYTE – like unsigned char
    • MPI_SHORT – short
    • MPI_LONG – long
    • MPI_INT – int
    • MPI_FLOAT – float
    • MPI_DOUBLE – double
    • …

  3. MPI – message passing interface
  • 6 basic MPI functions
    • MPI_Init – initialize the MPI environment
    • MPI_Finalize – shut down the MPI environment
    • MPI_Comm_size – determine the number of processes
    • MPI_Comm_rank – determine the process rank
    • MPI_Send – blocking data send
    • MPI_Recv – blocking data receive

  4. MPI – message passing interface
  • Initialize MPI
    • MPI_Init(&argc, &argv)
    • First MPI function called by each process
    • Allows the system to do any necessary setup
    • Not necessarily the first executable statement in your code

  5. MPI – message passing interface
  • Communicators
    • Communicator: an opaque object that provides a message-passing environment for processes
    • MPI_COMM_WORLD
      • Default communicator
      • Includes all processes
    • Creating new communicators (a sketch follows below)
      • MPI_Comm_create()
      • MPI_Group_incl()
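(A minimal sketch of building a new communicator, assuming the usual MPI_Comm_group + MPI_Group_incl + MPI_Comm_create pattern; the even-rank membership is an illustrative choice, not from the slides.)

/* Build a communicator containing only the even-ranked processes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int nmembers = (size + 1) / 2;
    int members[nmembers];                  /* ranks 0, 2, 4, ... */
    for (int i = 0; i < nmembers; i++)
        members[i] = 2 * i;

    MPI_Group world_group, even_group;
    MPI_Comm even_comm;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);   /* group of all ranks */
    MPI_Group_incl(world_group, nmembers, members, &even_group);
    MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);

    if (even_comm != MPI_COMM_NULL) {       /* odd ranks get MPI_COMM_NULL */
        int even_rank;
        MPI_Comm_rank(even_comm, &even_rank);
        printf("world rank %d is rank %d in even_comm\n", rank, even_rank);
        MPI_Comm_free(&even_comm);
    }
    MPI_Group_free(&even_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}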

  6. Communicators (figure: a communicator named MPI_COMM_WORLD containing six processes with ranks 0–5)

  7. MPI – message passing interface
  • Shutting down the MPI environment
    • MPI_Finalize()
    • Call after all other MPI function calls
    • Allows the system to free any resources

  8. MPI – message passing interface
  • Determine the number of processes
    • MPI_Comm_size(MPI_COMM_WORLD, &size)
    • First argument is the communicator
    • Number of processes is returned through the second argument

  9. MPI – message passing interface
  • Determine process rank
    • MPI_Comm_rank(MPI_COMM_WORLD, &myid)
    • First argument is the communicator
    • Process rank (in the range 0, 1, 2, …, P−1) is returned through the second argument

  10. Example – hello.c
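(The hello.c listing on this slide was an image and is not preserved in the transcript. Below is a minimal reconstruction consistent with the compile command and the output shown on the following slides; the exact variable names and messages are guesses.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* set up the MPI environment   */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes    */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id, 0..size-1 */

    printf("hello\n");
    printf("rank = %d size = %d\n", rank, size);

    MPI_Finalize();                       /* free MPI resources           */
    return 0;
}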

  11. Example – hello.c (cont'd)
  • Compile MPI programs
    • mpicc -o foo foo.c
    • mpicc – script that compiles and links against the MPI library
    • example: mpicc -o hello hello.c

  12. Example – hello.c (cont'd)
  • Execute MPI programs
    • mpirun -np <p> <exec> <arg1> …
    • -np <p> – number of processes
    • <exec> – executable filename
    • <arg1> … – arguments passed to <exec>
    • example: mpirun -np 4 hello

  13.–17. Example – hello.c (cont'd)
  • Output when run with 4 processes (built up step by step on the original slides; the line order may vary between runs):
    hello
    hello
    hello
    hello
    rank = 0 size = 4
    rank = 1 size = 4
    rank = 2 size = 4
    rank = 3 size = 4

  18. MPI – message passing interface
  • Specify host processors
    • A machine file lists the machines that will run your program
    • What if the # of MPI processes > the # of physical machines?
    • Set up password-less login so processes can be started on remote nodes without prompting
    • mpirun -np <p> -machinefile <filename> <exec>
    • example: machines.LINUX
      # machines.LINUX
      # put machine hostnames below
      node01
      node02
      node03

  19. MPI – message passing interface
  • Blocking send and receive
    • MPI_Send(&buf, count, datatype, dest, tag, MPI_COMM_WORLD)
    • MPI_Recv(&buf, count, datatype, src, tag, MPI_COMM_WORLD, &status)
    • Argument datatype must be an MPI type such as MPI_CHAR, MPI_INT, …
    • For each send–receive pair, the tags must match

  20. MPI – message passing interface
  • Other program notes
    • Variables and functions other than the MPI_Xxx calls are local to each process
    • Output printed by different processes is not ordered
    • example: send_recv.c

  21. MPI – send_recv.c
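(The send_recv.c listing was also an image. Below is a minimal reconstruction of a blocking send/receive pair, to be run with at least 2 processes; the payload value and tag are illustrative.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* dest = 1, tag = 0; the receiver must use the same tag */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        printf("process 0 sent %d\n", value);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}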

  22. Odd-Even Sort
  • Operates in two alternating phases: an even phase and an odd phase
  • Even phase
    • Even-numbered processes exchange numbers with their right neighbors
  • Odd phase
    • Odd-numbered processes exchange numbers with their right neighbors

  23. How to solve this 8-number sorting problem?
  • Sequential program – easy
  • MPI (a sketch follows below)
    • one number per MPI process
    • start the MPI program
    • master sends data to the other processes
    • run odd-even sorting
    • master collects results from the other processes
    • end the MPI program
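(A minimal sketch of the scheme just outlined, with one number per process. Generating random input locally instead of distributing it from the master, the exchange() helper, and the use of MPI_Sendrecv are illustrative choices, not taken from the slides.)

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Swap values with a partner; the lower rank keeps the smaller value. */
static void exchange(int *mine, int partner, int keep_small, MPI_Comm comm)
{
    int theirs;
    MPI_Sendrecv(mine, 1, MPI_INT, partner, 0,
                 &theirs, 1, MPI_INT, partner, 0,
                 comm, MPI_STATUS_IGNORE);
    if (keep_small ? (theirs < *mine) : (theirs > *mine))
        *mine = theirs;
}

int main(int argc, char *argv[])
{
    int rank, size, value, phase;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    srand(rank + 1);
    value = rand() % 100;  /* stand-in for the number the master would send */
    printf("before: rank %d has %d\n", rank, value);

    /* size phases suffice to sort size numbers. */
    for (phase = 0; phase < size; phase++) {
        int partner;
        if (phase % 2 == 0)   /* even phase: pairs (0,1), (2,3), ... */
            partner = (rank % 2 == 0) ? rank + 1 : rank - 1;
        else                  /* odd phase: pairs (1,2), (3,4), ...  */
            partner = (rank % 2 == 0) ? rank - 1 : rank + 1;
        if (partner >= 0 && partner < size)
            exchange(&value, partner, rank < partner, MPI_COMM_WORLD);
    }

    printf("after: rank %d has %d\n", rank, value);
    MPI_Finalize();
    return 0;
}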

  24. Other problems?
  • What if the # of unsorted numbers is not a power of 2?
  • What if the # of unsorted numbers is large?
  • What if the # of unsorted numbers is not divisible by nprocs?

  25. MPI – message passing interface
  • Advanced MPI functions
    • MPI_Bcast – broadcast a message from the source to the other processes
    • MPI_Scatter – scatter values to a group of processes
    • MPI_Gather – gather values from a group of processes
    • MPI_Allgather – gather data from all tasks and distribute it to all
    • MPI_Barrier – block until all processes reach this routine

  26. MPI_Bcast
  MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
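(A minimal usage sketch, assuming rank 0 is the root; the variable name and the value 100 are illustrative.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, n = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        n = 100;                    /* only the root has the value */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    printf("rank %d now has n = %d\n", rank, n);  /* every rank prints 100 */
    MPI_Finalize();
    return 0;
}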

  27. MPI_Scatter
  MPI_Scatter(void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, int root, MPI_Comm comm)

  28. MPI_Gather
  MPI_Gather(void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, int root, MPI_Comm comm)
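(A minimal sketch combining MPI_Scatter and MPI_Gather: the root hands one integer to each process, each process modifies its piece, and the root collects the results. Buffer names and the arithmetic are illustrative; note that sendcnt and recvcnt are per-process counts, not totals.)

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size, mine;
    int *sendbuf = NULL, *recvbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                /* only the root needs the full buffers */
        sendbuf = malloc(size * sizeof(int));
        recvbuf = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++)
            sendbuf[i] = i * 10;
    }

    /* one int per process goes out ... */
    MPI_Scatter(sendbuf, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    mine += rank;                   /* each process works on its piece */
    /* ... and one int per process comes back */
    MPI_Gather(&mine, 1, MPI_INT, recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++)
            printf("recvbuf[%d] = %d\n", i, recvbuf[i]);
        free(sendbuf);
        free(recvbuf);
    }
    MPI_Finalize();
    return 0;
}

(MPI_Allgather on the next slide has the same shape minus the root argument: every process, not just the root, ends up with the full gathered array.)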

  29. MPI_Allgather
  MPI_Allgather(void *sendbuf, int sendcnt, MPI_Datatype sendtype, void *recvbuf, int recvcnt, MPI_Datatype recvtype, MPI_Comm comm)

  30. MPI_Barrier
  MPI_Barrier(MPI_Comm comm)

  31. Extension of MPI_Recv
  • MPI_Recv(void *buffer, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
  • don't-care source – MPI_ANY_SOURCE
  • don't-care tag – MPI_ANY_TAG
  • to retrieve the sender's information:
    typedef struct {
        int count;
        int MPI_SOURCE;
        int MPI_TAG;
        int MPI_ERROR;
    } MPI_Status;
  • use status->MPI_SOURCE to get the sender's rank
  • use status->MPI_TAG to get the message tag
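(A minimal sketch of a wildcard receive: every non-zero rank sends one value using its own rank as the tag, and rank 0 recovers the actual source and tag from the status object. The payload is illustrative.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank != 0) {
        value = rank * rank;                 /* illustrative payload */
        MPI_Send(&value, 1, MPI_INT, 0, rank /* tag */, MPI_COMM_WORLD);
    } else {
        for (int i = 1; i < size; i++) {     /* arrival order is arbitrary */
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("got %d from rank %d (tag %d)\n",
                   value, status.MPI_SOURCE, status.MPI_TAG);
        }
    }
    MPI_Finalize();
    return 0;
}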
