
Distributed System Building Blocks



  1. Distributed System Building Blocks

  2. Outline • Distributed Programming Paradigm • Shared Memory Programming • Message Passing Interface • Networking • Remote Procedure Call

  3. Distributed Programming Paradigm • Based on the memory architecture, programming paradigms can be roughly categorized into different classes • Shared Memory Programming • The processing units share the same memory space • Message Passing Interface Programming • There is no shared memory among the processing units, so they can only communicate by sending and receiving messages

  4. Parallel Processing Idea • Serialized processing with context switch • Parallel processing (Figure: Task 1 and Task 2 interleaved on one processor via context switches vs. Task 1 and Task 2 running simultaneously.)

  5. Shared Memory Programming Model • Multiple processing units connect to the shared memory and share the same memory address space. All the processing units see virtually the same memory. (Figure: processing units connected over memory buses to memory units, forming a single shared memory address space.)

  6. Multi-thread Programming • Shared memory multi-thread programming is the standard for single-machine programming. It can harness the full power of multicore architectures (with careful programming). The programming model is quite simple, but it is hard to program correctly and efficiently. • Windows: WinThread • Linux: pthread • Scientific Computing: OpenMP • We will see some examples of pthread and OpenMP programming and use them to illustrate some important concepts for building our distributed systems.

  7. Process and Thread

  8. Thread creation and termination • Create a thread by providing the entry of the thread (a function) • pthread_create(thread, attr, start_routine, arg) • Wait for a thread to finish. This is a special kind of thread synchronization. • pthread_join • Quit the execution of a thread • pthread_exit • Once created, threads are peers and independent. • pthreadcreate.c
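A minimal sketch of creation and joining (assumed to match the spirit of pthreadcreate.c, which is not reproduced here):

    #include <pthread.h>
    #include <stdio.h>

    /* thread entry: receives the arg pointer passed to pthread_create */
    void *start_routine(void *arg) {
        printf("hello from thread %ld\n", (long)arg);
        pthread_exit(NULL);           /* quit the execution of this thread */
    }

    int main(void) {
        pthread_t thread;
        /* create a peer thread by providing its entry function */
        pthread_create(&thread, NULL, start_routine, (void *)1L);
        /* wait for the thread to finish (a special kind of synchronization) */
        pthread_join(thread, NULL);
        return 0;
    }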

  9. Thread synchronization • As threads share the same memory address space, it is dangerous if shared resources are accessed simultaneously. If this happens, the behavior of the program is undefined. Thus, we need mechanisms to synchronize access to shared resources. Commonly, this is achieved through locks. • Another kind of synchronization is needed when you want to define the order of instruction flow across threads. Since any two threads are independent, some synchronization mechanism must be used to enforce an ordering.

  10. Mutual Exclusion • Providing mutually exclusive access to shared resources. (Figure: multiple threads contending for the same shared resources.)

  11. pthread Mutual Exclusion • Mutex is an abbreviation for "mutual exclusion". Mutex variables are one of the primary means of implementing thread synchronization and of protecting shared data when multiple writes occur. Only one thread can lock (or own) a mutex variable at any given time. • pthread_mutex_init • pthread_mutex_destroy • pthread_mutex_lock • pthread_mutex_unlock • pthreadmutex.c
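A minimal sketch of the mutex pattern (assumed to follow pthreadmutex.c, which is not reproduced here):

    #include <pthread.h>

    pthread_mutex_t mutex;
    long counter = 0;                 /* shared data protected by the mutex */

    void *worker(void *arg) {
        pthread_mutex_lock(&mutex);   /* only one thread may own the mutex */
        counter++;
        pthread_mutex_unlock(&mutex);
        return NULL;
    }

    int main(void) {
        pthread_mutex_init(&mutex, NULL);
        /* ... create worker threads and join them ... */
        pthread_mutex_destroy(&mutex);
        return 0;
    }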

  12. Define execution order among different threads • It is quite common for events to be used as a mechanism for defining the execution order among portions of code located in different threads. • An event means that some threads will not continue execution until something has happened; another thread makes that thing happen. (Figure: a working thread notifies a waiting thread.)

  13. pthread Conditional Variables • Condition variables allow threads to synchronize based upon the actual value of data. • pthread_cond_init(condition, attr) • pthread_cond_destroy(condition) • pthread_cond_wait(condition, mutex) • pthread_cond_signal(condition) • pthread_cond_broadcast(condition) • pthreadcondition.c
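A minimal sketch in the spirit of pthreadcondition.c (assumed; the waiting thread blocks until the working thread sets the data and signals):

    #include <pthread.h>

    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
    int ready = 0;                    /* the data value being waited on */

    void *waiting_thread(void *arg) {
        pthread_mutex_lock(&mutex);
        while (!ready)                /* re-check in a loop: wakeups may be spurious */
            pthread_cond_wait(&cond, &mutex);  /* atomically releases mutex and sleeps */
        pthread_mutex_unlock(&mutex);
        return NULL;
    }

    void *working_thread(void *arg) {
        pthread_mutex_lock(&mutex);
        ready = 1;
        pthread_cond_signal(&cond);   /* wake one waiter; broadcast wakes all */
        pthread_mutex_unlock(&mutex);
        return NULL;
    }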

  14. Race condition • Two threads access a shared resource without synchronization. The behavior of a race condition is undefined and might produce undesirable results.

    int global_counter = 0;

    // thread 1
    for (int i = 0; i < 50; i++)
        global_counter += i;

    // thread 2
    for (int i = 50; i <= 100; i++)
        global_counter += i;

What will be the final value of global_counter after these two code blocks finish?
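Since global_counter += i is a non-atomic read-modify-write, updates can be lost, so the result is not guaranteed to be the expected 5050 (the sum 0 + 1 + ... + 100). A minimal fix, sketched with the mutex introduced earlier:

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    // thread 1 (thread 2 is symmetric, with i = 50..100)
    for (int i = 0; i < 50; i++) {
        pthread_mutex_lock(&m);
        global_counter += i;          // now a proper critical section
        pthread_mutex_unlock(&m);
    }

With the lock in place, every interleaving produces the deterministic result 5050.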

  15. Dead lock • Thread 1 holds A while waiting for B; thread 2 holds B while waiting for A. Neither can proceed.

    // thread 1
    lock(A)
    lock(B)
    do_something()
    unlock(B)
    unlock(A)

    // thread 2
    lock(B)
    lock(A)
    do_someotherthings()
    unlock(A)
    unlock(B)
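A common remedy (a sketch, not from the slides) is to impose a global lock ordering: if every thread acquires A before B, the circular wait can never form.

    // thread 2, rewritten to honor the global order "A before B"
    lock(A)
    lock(B)
    do_someotherthings()
    unlock(B)
    unlock(A)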

  16. Deadlock on the road

  17. Live lock • Threads keep requesting and releasing locks but do nothing useful: each thread grabs one lock, fails to acquire the other, backs off, and retries, possibly forever.

    // thread 1
    while (true) {
        Lock(L1)
        if (!TryLock(L2))
            Release(L1)
        else
            break
    }
    // do something useful here

    // thread 2
    while (true) {
        Lock(L2)
        if (!TryLock(L1))
            Release(L2)
        else
            break
    }
    // do something useful here

  18. Message Passing Interface • With a large number of computing nodes, it is very difficult to build a single shared memory space for all the processing units. Thus, processes exchange information by sending/receiving messages. • MPI is the de facto standard for programming in the cluster environment for scientific computing.

  19. MPI Programs • Each process has its own stack and code segment. Processes exchange information by passing messages. MPI supports both SPMD and MPMD computing. (Figure: an SPMD program, with processes 0, 1, and 2 each running the same Load / Process / Gather / Store stages.)

  20. MPI supports MPMD • (Figure: three MPMD configurations across Nodes 1-3: (a) Master/Worker with prog_a and prog_b; (b) Coupled Analysis with prog_a, prog_b, and prog_c; (c) Streamline, with prog_a, prog_b, and prog_c arranged as a pipeline.)

  21. Create the MPI world

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[]) {
        int rank;
        int size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello world from process %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Output (with 4 processes):

    Hello world from process 0 of 4
    Hello world from process 1 of 4
    Hello world from process 2 of 4
    Hello world from process 3 of 4
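To build and run (a sketch assuming an MPICH- or Open MPI-style toolchain, which the slides do not name):

    mpicc hello.c -o hello
    mpirun -np 4 ./hello

Note that the four output lines may appear in any order, since the processes run concurrently.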

  22. MPI Basic: Point to Point Communication • int MPI_Send(buf, count, datatype, dest, tag, comm) • int MPI_Recv(buf, count, datatype, source, tag, comm, status) • What parameters make the communication happen? • The buffer of the sender or receiver • quantity of data, count • data type • source and destination • tag • the communicators and groups
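A minimal sketch (an illustration, not from the slides): rank 0 sends one integer to rank 1 inside the MPI world created above.

    int value = 42;
    if (rank == 0) {
        MPI_Send(&value, 1, MPI_INT, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        MPI_Recv(&value, 1, MPI_INT, /*source=*/0, /*tag=*/0, MPI_COMM_WORLD, &status);
    }

The send and the matching receive are paired by source/destination rank, tag, and communicator.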

  23. Group Synchronization • MPI_Barrier(comm) • Creates a barrier synchronization in a group. Each task, when reaching the MPI_Barrier call, blocks until all tasks in the group reach the same MPI_Barrier call.

  24. Broadcast

  25. Scatter and Gather • Gather: many to one • Scatter: one to many (Figure: data moving between one root process and many processes.)

  26. Allgather

  27. Alltoall • (Figure: every process sends a distinct block of data to every other process.)
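For reference, a minimal sketch of one of these collectives (an illustration, not from the slides): MPI_Bcast copies the root's value to every rank.

    int data = 0;
    if (rank == 0) data = 123;        /* only the root holds the value initially */
    MPI_Bcast(&data, 1, MPI_INT, /*root=*/0, MPI_COMM_WORLD);
    /* afterwards, every rank in MPI_COMM_WORLD sees data == 123 */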

  28. Network basics • IP, TCP, DNS • Socket • Protocols • programming structures

  29. TCP/IP, DNS • IP: The Internet Protocol (IP) is the principal communications protocol for relaying datagrams (also known as network packets) across an internetwork in the Internet Protocol Suite; it is responsible for routing packets across network boundaries (routing: finding a specified destination on the Internet). • TCP: TCP provides reliable, ordered delivery of a stream of octets from a program on one computer to a program on another computer. • DNS: You cannot easily remember something like 74.125.128.100 (an IP address), but you can easily remember www.google.com. DNS is the system that translates the name of a machine into its IP address.

  30. What makes two processes talk? • The addresses of the two machines and the identification of the process on each machine. • Source IP, Destination IP: the addresses of the two machines • Source Port, Destination Port: the identification of the two processes on the two machines • So, a connection is identified by the following four parameters: • source IP • destination IP • source port • destination port

  31. Socket • A socket is the fundamental programming abstraction for communication between two processes on two different machines. (It is fine to use a socket for communication within the same machine, although this is not the typical mechanism for inter-process communication between two processes on the same machine.) • Client and server use two different types of sockets: • The client creates a client-side socket and makes a connect call to the server socket. After connecting successfully, the client can start sending data to the server. • The server creates a server-side socket and typically listens on it, waiting for clients. When a connection request packet is received, the server makes a new socket to accept the connection and start communication. The original socket can keep waiting for other connections.

  32. Ports • As mentioned before, ports are used to identify a specific process within a machine (which is itself identified by an IP address). • Using different source ports allows multiple clients to connect to a server at once.

  33. Example: Web Server (1/3) The server creates a listener socket attached to a specific port. 80 is the agreed-upon port number for web traffic.

  34. Example: Web Server (2/3) The client-side socket has to use a source port, but the OS chooses a random unused port number. When the client requests a URL (e.g., "www.google.com"), its OS uses the DNS system to find the server's IP address.

  35. Example: Web Server (3/3) The listener is ready for more incoming connections, while the current connection can be processed in parallel.

  36. Example: Web Server

  37. The network packet • Data transfer over the Internet uses data packets. • Packets wrap various pieces of information used for different purposes. For example, addresses are used for routing; sequence numbers and sizes are used for stream control. • Your data can be considered the payload of the packet, which looks like a letter inside an envelope. You should know that there are lower levels of protocols for interoperating with physical devices, such as the MAC layer for Ethernet or 802.11 wireless.

  38. IP: the Internet Protocol • IP mainly focuses on how to find a machine on the Internet. Thus, IP defines the addressing scheme for machines. • An IP packet encapsulates the upper-layer protocol information as well as the data provided by applications. • The IP protocol does not provide reliability; it just includes enough information to tell the routers where the destination is for the data carried in the packet.

  39. TCP: Transport Control • TCP is built on top of IP. • TCP provides a virtual line between two ends. The data is stream oriented instead of message (packet) oriented. • TCP provides reliability and ordering of messages. • TCP is a very important basic building block for upper-layer protocols. For example, HTTP is built on top of TCP.

  40. You and the web you want to access • The Internet is not actually tube-like "underneath the hood" • Unlike the phone system (circuit switched), the packet-switched Internet uses many routes at once

  41. It is difficult to handle network problems • If you cannot receive a message from a specific machine, it is quite difficult, even impossible, to tell whether the node crashed or the network failed. • If you send some data to a machine and the other party's socket disconnects, how can you tell how much data the other side received? • Security problems: during data transfer, can someone in the middle intercept/modify our data? • Performance problems: traffic congestion makes switch/router topology important for efficient throughput

  42. Programming structures for processing network information • fork() based server data processing • multiple threads based • select() based • poll() based • see the bible, UNIX Network Programming

  43. Before you do the data transfer

    // server
    listenfd = socket();
    setsockopt(listenfd);
    bind(listenfd);
    listen(listenfd);
    acceptedfd = accept(listenfd);
    do_various_work_with_acceptedfd();  // see following slides

    // client
    fd = socket();
    setsockopt(fd);
    r = connect(fd, destination);
    read(fd)/write(fd);
    send(fd)/recv(fd);
    sendto(fd)/recvfrom(fd);
    close(fd);
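Following this call sequence, a minimal runnable server-side sketch (assumptions: port 5000, a single one-shot connection, error handling omitted; the client side is symmetric with connect()):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int listenfd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(listenfd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);                     /* assumed example port */

        bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listenfd, 16);

        int acceptedfd = accept(listenfd, NULL, NULL);
        char buf[256];
        ssize_t n = read(acceptedfd, buf, sizeof(buf));  /* receive from client */
        if (n > 0) write(acceptedfd, buf, n);            /* echo it back */
        close(acceptedfd);
        close(listenfd);
        return 0;
    }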

  44. fork based server data processing

    acceptedfd = accept(listenfd, NULL, NULL);
    pid = fork();
    assert(pid >= 0);
    if (pid == 0) {            // child process
        close(listenfd);       // child keeps only the accepted connection
        do_some_thing_with_acceptedfd();
    } else {                   // parent
        close(acceptedfd);     // parent keeps only the listener
        go_back_to_accept();
    }

  45. threads code

    acceptedfd = accept(listenfd, NULL, NULL);
    thread = get_free_thread_from_pool();
    set_thread_data(thread, acceptedfd);
    activate_thread(thread);
    go_back_to_accept();

    // in the thread
    do_something(acceptedfd);
    close(acceptedfd);

  46. select code for multiple sockets • Why? You want to reuse the power of a single thread while processing multiple sockets, and you want to stay in the same thread so you can keep state more conveniently (you don't want to do synchronization among threads).

    // set up the fds you want to monitor:
    // FD_ZERO / FD_SET on fd_set readfds, writefds; track maxfd
    switch (select(maxfd + 1, &readfds, &writefds, NULL, timeout)) {
    case -1: /* something is wrong */ break;
    case 0:  /* timeout expired; rarely what you want here, select again */ break;
    default:
        /* for each monitored fd, if FD_ISSET(fd, &readfds),
           do data transfer with that fd */
        break;
    }
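A fuller sketch of the loop (assumed bookkeeping: client_fds[] holds the accepted connections):

    #include <sys/select.h>

    void event_loop(int listenfd, int *client_fds, int nclients) {
        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(listenfd, &readfds);
            int maxfd = listenfd;
            for (int i = 0; i < nclients; i++) {
                FD_SET(client_fds[i], &readfds);
                if (client_fds[i] > maxfd) maxfd = client_fds[i];
            }
            if (select(maxfd + 1, &readfds, NULL, NULL, NULL) <= 0)
                continue;                   /* -1: error; retry the select */
            if (FD_ISSET(listenfd, &readfds)) {
                /* accept() the new connection and add it to client_fds */
            }
            for (int i = 0; i < nclients; i++) {
                if (FD_ISSET(client_fds[i], &readfds)) {
                    /* read()/write() on client_fds[i] */
                }
            }
        }
    }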

  47. poll code for multiple sockets • Another group proposed the poll function, which uses a similar but different programming interface. • The programming structure can be the same as with select. • int poll(struct pollfd *ufds, unsigned int nfds, int timeout); • POLLIN, POLLOUT, POLLPRI
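The same structure with poll (a sketch; listenfd and clientfd are assumed to be already-open descriptors):

    #include <poll.h>

    struct pollfd fds[2];
    fds[0].fd = listenfd;  fds[0].events = POLLIN;   /* new connections */
    fds[1].fd = clientfd;  fds[1].events = POLLIN;   /* client data */

    int n = poll(fds, 2, /*timeout=*/-1);            /* -1 blocks indefinitely */
    if (n > 0) {
        if (fds[0].revents & POLLIN) { /* accept() here */ }
        if (fds[1].revents & POLLIN) { /* read() here */ }
    }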

  48. If you are using the poll or select version, how can another thread notify the working thread? • Use a FIFO (pipe), and put one of the fd pair in the poll or select list • Otherwise, on Linux you can use eventfd(), which uses only one fd instead of two.
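A Linux-specific sketch of the eventfd approach (an assumption; not from the slides):

    #include <stdint.h>
    #include <sys/eventfd.h>
    #include <unistd.h>

    int efd = eventfd(0, 0);          /* add efd to the select/poll fd list */

    /* notifying thread: make efd readable, waking the select/poll loop */
    uint64_t one = 1;
    write(efd, &one, sizeof(one));

    /* event loop, when efd is reported readable: consume the notification */
    uint64_t count;
    read(efd, &count, sizeof(count));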

  49. RPC • What is remote procedure call? • Why RPC? • The types of RPC • How can we implement RPC (RPC internals)

  50. RPC • A remote procedure call (RPC) is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details of this remote interaction. • That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. • In object-oriented terms, RPC is referred to as remote invocation or remote method invocation.
