
Message Passing Distributed Systems with examples from Socket based Client/Server Systems



  1. Message Passing Distributed Systems with examples from Socket based Client/Server Systems

  2. Overview - Distributed Computing Topologies - Critical Points in C/S Topologies - The Message Passing Model - Delivery Guarantees - Request Ordering and Reliable Broadcast - Programming C/S Systems with Sockets. Starting with the common C/S model, we will develop core properties for distributed protocols.

  3. Distributed Computing Topologies - Client/Server - Hierarchical - Totally Distributed - Bus Topologies

  4. Client/Server Systems (diagram: the client sends a request, the server processes it and returns a response). Clients initiate communication and (typically) block while waiting for the server to process the request. Still the most common DS topology.

  5. Hierarchical Systems. Every node can be both client and server, but some play a special role, e.g. act as a Domain Name System (DNS) server. Reduced communication overhead and options for central control are some of the attractive features of this topology.

  6. Totally Distributed Systems. Every node IS both client and server. Watch out: peer-to-peer systems need not be totally distributed!

  7. Bus Systems. Every node listens for data and posts data in response. This achieves a high degree of separation and independence. Event-driven systems follow this topology.

  8. Critical Points in C/S Topologies - Load Management, Delays and Bottlenecks - Security - Queuing Theory Aspects - Sync. vs. Async. Processing Patterns

  9. Critical Points in C/S Systems. Server side: clustering? single- or multi-threaded? session state? authentication? authorization? privacy? Client side: how to locate the server? how to authenticate? synchronous or asynchronous call? Channel: bandwidth/latency? Scalability, availability and security all depend on the answers to these questions.

  10. Queuing Theory for Multi-tier Process Networks. A modern application server's performance largely relies on the proper configuration of several queues, from network listening to thread pools etc. Queuing theory lets us determine the proper configurations (see resources). In general, architectures like the one above are very sensitive to saturated queues. Good architectures create a funnel from left to right and limit resources such as the maximum number of threads. Caching and batching are directly derived from queuing theory. Picture from: http://publib.boulder.ibm.com/infocenter/wasinfo/v5r1//index.jsp?topic=/com.ibm.websphere.base.doc/info/aes/ae/rprf_queue.html
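As a rough illustration of how queuing formulas drive such configuration decisions, here is a small sketch (not from the slides; the arrival rate and service time are invented numbers) that applies Little's law and the M/M/1 approximation to a single tier:

```java
// QueueSizing.java - back-of-the-envelope M/M/1 estimates for one server tier.
// All input values below are hypothetical; plug in measured rates instead.
public class QueueSizing {
    public static void main(String[] args) {
        double arrivalRate = 80.0;      // lambda: requests per second (assumed)
        double serviceTimeMs = 10.0;    // mean service time per request (assumed)

        double serviceRate = 1000.0 / serviceTimeMs;      // mu: requests per second
        double utilization = arrivalRate / serviceRate;   // rho = lambda / mu

        if (utilization >= 1.0) {
            System.out.println("Tier is saturated (rho >= 1): add capacity or throttle upstream.");
            return;
        }
        // M/M/1: mean response time R = 1 / (mu - lambda); Little's law: L = lambda * R
        double responseTimeSec = 1.0 / (serviceRate - arrivalRate);
        double avgInSystem = arrivalRate * responseTimeSec;

        System.out.printf("utilization = %.2f%n", utilization);
        System.out.printf("mean response time = %.1f ms%n", responseTimeSec * 1000.0);
        System.out.printf("mean requests in system = %.1f%n", avgInSystem);
    }
}
```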

  11. Synchronous I/O (blocking calls). A thread waits for a client command, processes it (e.g. reads a file from the file system), sends the response over the output channel and goes back to waiting; every blocking step forces a thread switch. Many threads are required to stay responsive, many context switches occur, and each thread needs extra memory. See: Aruna Kalaqanan et al., http://www-128.ibm.com/developerworks/java/library/j-javaio
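To make the blocking model concrete, here is a minimal thread-per-connection echo server sketch (my own; the port and the echo protocol are invented). Every connection ties up one thread that blocks in read and write:

```java
// BlockingEchoServer.java - one thread per connection, every I/O call blocks.
import java.io.*;
import java.net.*;

public class BlockingEchoServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9000)) {   // port is arbitrary
            while (true) {
                Socket client = listener.accept();               // blocks until a client connects
                new Thread(() -> handle(client)).start();        // one thread per client
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {             // blocks waiting for the client command
                out.println("ECHO: " + line);                    // blocks until the response is written
            }
        } catch (IOException e) {
            // connection dropped; the thread ends
        }
    }
}
```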

  12. Non-Blocking: Reactor Pattern. “Server applications in a distributed system must handle multiple clients that send them service requests. Before invoking a specific service, however, the server application must demultiplex and dispatch each incoming request to its corresponding service provider. The Reactor pattern serves precisely this function. It allows event-driven applications to demultiplex and dispatch service requests, which are then delivered concurrently to an application from one or more clients. The Reactor pattern is closely related to the Observer pattern in this aspect: all dependents are informed when a single subject changes. The Observer pattern is associated with a single source of events, however, whereas the Reactor pattern is associated with multiple sources of events.” From: Aruna Kalaqanan et al., http://www-128.ibm.com/developerworks/java/library/j-javaio. The downside: all processing needs to be non-blocking, and the threads need to maintain the state of the processing between handler calls (explicit state management vs. implicit state in normal multi-threaded designs).

  13. Reactor Pattern. From Doug Lea (see resources). Note that this is the single-threaded version. Symbian OS uses a similar concept called Active Objects. Handlers are not allowed to block. In many cases a single thread may be enough.
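A minimal single-threaded reactor sketch in the spirit of Doug Lea's version, using java.nio's Selector (simplified to an echo handler; the port is arbitrary). One selector demultiplexes all connections, and the handler must return quickly and never block:

```java
// MiniReactor.java - single-threaded reactor: one Selector demultiplexes all connections,
// handlers run in the same thread and must never block.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class MiniReactor {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9001));           // port is arbitrary
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                              // wait for any ready event
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) accept(selector, key);
                else if (key.isReadable()) echo(key);       // dispatch to the (non-blocking) handler
            }
        }
    }

    private static void accept(Selector selector, SelectionKey key) throws IOException {
        SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
        client.configureBlocking(false);
        client.register(selector, SelectionKey.OP_READ, ByteBuffer.allocate(1024));
    }

    private static void echo(SelectionKey key) throws IOException {
        SocketChannel client = (SocketChannel) key.channel();
        ByteBuffer buf = (ByteBuffer) key.attachment();     // per-connection state lives here
        int n = client.read(buf);                           // non-blocking read
        if (n == -1) { client.close(); return; }
        buf.flip();
        client.write(buf);                                  // echo back what was read
        buf.compact();
    }
}
```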

  14. The Message Passing Model - Modelling and Automata - Async. vs. Sync Systems - Protocol Properties: Correctness, Liveness, Fairness,... - Complexity - Failure Types

  15. Modeling of Distributed Systems (after J. Aspnes). Each process has internal state variables, an inbuf filled by receive and an outbuf emptied by send. A processing function takes the inbuf data from other processes and the internal state variables and computes a new internal state and new outbuf data. Communication is point-to-point and deterministic. A configuration is the state vector of all processes. Events change configurations into new ones. An execution is a sequence of configurations and events: C0, e0, C1, e1, C2, e2, ...
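A toy rendering of this model as code (my own, not from the slides): a process is a step function from internal state and inbuf to a new state and an outbuf, and an execution is a sequence of delivery events applied to it.

```java
// ProcessModel.java - toy version of the Aspnes-style model: a process is a pure
// step function; a configuration is just the collection of all process states.
import java.util.*;

public class ProcessModel {
    /** One process: consumes its inbuf, updates its state, fills its outbuf. */
    interface Process<S, M> {
        S step(S state, List<M> inbuf, List<M> outbuf);
    }

    public static void main(String[] args) {
        // Hypothetical example: a counter process that adds up the integers it receives
        // and puts the running total into its outbuf.
        Process<Integer, Integer> counter = (state, inbuf, outbuf) -> {
            int sum = state;
            for (int m : inbuf) sum += m;
            outbuf.add(sum);          // message to be sent by a later send event
            return sum;               // new internal state
        };

        int state = 0;                                    // initial configuration (one process)
        List<List<Integer>> deliveries = new ArrayList<>(); // sequence of delivery events
        deliveries.add(Arrays.asList(1, 2));
        deliveries.add(Arrays.asList(10));
        deliveries.add(Collections.emptyList());

        for (List<Integer> inbuf : deliveries) {
            List<Integer> outbuf = new ArrayList<>();
            state = counter.step(state, inbuf, outbuf);
            System.out.println("new state=" + state + " outbuf=" + outbuf);
        }
    }
}
```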

  16. Synchronous vs. Asynchronous Systems. Synchronous (lockstep), with e == event and t == time: e0 at t0 ---> delivery at t0+1, e1 at t1 ---> delivery at t1+1, ... Asynchronous (delayed): e0 at t0 ---> delivery at ?, e1 at t1 ---> delivery at t1+?, ... Requirements: infinitely many computation steps are possible, and events will eventually be delivered. Synchronous systems have simpler distributed algorithms, but are harder to build. The reality is asynchronous systems with additional help from failure detectors, randomization etc.

  17. Protocol Properties - Correctness: invariant properties are shown to hold throughout executions - Liveness/Termination: the protocol is shown to make progress in the context of certain failures and in a bounded number of rounds - Fairness: no starvation for anybody - Agreement: e.g. all processes output the same decision - Validity: for the same input x, all processes output according to x (or: there is a possible execution for every possible output value) (after Aspnes).

  18. Complexity - Time complexity: the time of the last event before all processes finish (Aspnes) - Message complexity: the number of messages sent. Message size and the number of rounds needed for termination are important for the scalability of protocols.

  19. Failure Types - Crash failure: a process stops working and stays down - Connectivity failures: network failures, e.g. causing split-brain situations with two separate networks, or node isolation. Typically the time for message propagation is affected. - Message loss: single messages are lost, nodes are up. - Byzantine failures: “evil” nodes violating protocol assumptions and promises, e.g. breaking a promise due to disk failure, configuration failure etc. All protocols are validated with respect to certain failure scenarios!

  20. The Role of Delivery Guarantees - Problem scenario: shop order - TCP communication properties - Communication failures and delivery guarantees

  21. Crash During Shop Order (diagram: a shop user's browser sends an order to the shop, which processes it and returns an order confirmation; failure points include the message being lost in transit, the browser crashing and the server crashing). What happens to the order when certain failure types apply? What kind of guarantees do you have? Does TCP help? What outcomes do you expect?

  22. TCP communication properties • Lost messages are retransmitted • Re-sequencing of out-of-order messages • Sender choke back (flow control) • No message boundary protection. These features form a “reliable communication channel”. This does not include proper behavior in case of connection failures! (timeout problem). (Ken Birman, Building Secure and Reliable Network Applications, chapter 1)

  23. Communication Failure: timeout. Case: the client sends a request and receives a timeout. Failure cases: a) network problem, the server did not receive the request b) server problem: the OS did receive the request but the server crashed while working on it c) OS/network problem: the server finished the request but the response got lost or the OS crashed during send. Client options: drop the request (ok in c), resend the request (ok in a and b), send the request to a different server (ok in a and b). Other client actions lead either to lost or to duplicated requests.
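To make the timeout case concrete, here is a small client sketch (my own; host, port, timeout and retry count are assumptions) that sets a socket read timeout and resends a bounded number of times. As discussed above, resending is only safe for idempotent requests or when the server filters duplicates.

```java
// TimeoutRetryClient.java - blocking client with a read timeout and bounded retries.
// Resending is only safe for idempotent requests or with server-side duplicate detection.
import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

public class TimeoutRetryClient {
    public static void main(String[] args) {
        String request = "GET /index.html HTTP/1.0\r\n\r\n";    // idempotent example request
        int maxAttempts = 3;                                     // assumed retry policy

        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try (Socket socket = new Socket("localhost", 8080)) {   // host/port are assumptions
                socket.setSoTimeout(2000);                           // fail the read after 2 s
                socket.getOutputStream().write(request.getBytes(StandardCharsets.US_ASCII));
                socket.getOutputStream().flush();

                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream()));
                System.out.println("response: " + in.readLine());
                return;                                              // success: stop retrying
            } catch (SocketTimeoutException e) {
                System.err.println("attempt " + attempt + " timed out, retrying...");
            } catch (IOException e) {
                System.err.println("attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        System.err.println("giving up after " + maxAttempts + " attempts");
    }
}
```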

  24. Delivery Guarantees • Best effort (doesn't guarantee anything) • At least once (the same request may be received several times) • At most once (not more than once, but possibly not at all) • Once and only once (simply the best). In case of a channel break-down TCP does NOT make ANY delivery guarantees. This then becomes your job (or better: a job for your middleware).

  25. “At most once” implementation for non-idempotent requests (diagram: the client sends each request with a request number; the server stores the response until the client acknowledges it). By adding a request number to each request the server can detect duplicate requests and throw them away. The server itself needs to store a response until the client acknowledges that it was received. This creates state on the server!
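A minimal, in-memory sketch (my own) of this duplicate detection: the server keeps the stored response per request number, replays it for duplicates instead of re-executing the non-idempotent operation, and releases the state when the client acknowledges.

```java
// DuplicateFilter.java - per-request numbers: execute only unseen requests,
// replay the stored response for duplicates, drop stored state on acknowledgement.
import java.util.*;
import java.util.function.Function;

public class DuplicateFilter {
    private final Map<Long, String> storedResponses = new HashMap<>(); // requestNo -> response

    /** Execute the handler at most once per request number. */
    public synchronized String handle(long requestNo, String request,
                                      Function<String, String> handler) {
        String cached = storedResponses.get(requestNo);
        if (cached != null) {
            return cached;                        // duplicate: replay, do NOT re-execute
        }
        String response = handler.apply(request); // first time: actually do the work
        storedResponses.put(requestNo, response); // keep it until the client acks
        return response;
    }

    /** Client confirmed receipt: the server-side state can be released. */
    public synchronized void acknowledge(long requestNo) {
        storedResponses.remove(requestNo);
    }

    public static void main(String[] args) {
        DuplicateFilter filter = new DuplicateFilter();
        Function<String, String> bookFlight = req -> "booked:" + req;  // hypothetical non-idempotent op

        System.out.println(filter.handle(1, "LH-441", bookFlight)); // executes
        System.out.println(filter.handle(1, "LH-441", bookFlight)); // duplicate: replayed, not re-booked
        filter.acknowledge(1);                                       // state released
    }
}
```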

  26. Idempotent Requests? • Get bank account balance • Push elevator button • Get /index.html …. • Book flight • Cancel flight • …. What kind of delivery guarantee do you need for idempotent service requests?

  27. Request Ordering with Multiple Nodes - Reliable Broadcast - FifoCast - Causal Cast - Absolute Ordered Casts Taken from: C. Karamanoulis and K.Birman

  28. Fault-tolerant Broadcast Model (diagram: processes I and II sit on a middleware layer that offers the broadcast primitives Bcast(m,#) and Deliver(m,#); the middleware itself uses the point-to-point primitives Send(m,#) and Receive(m,#) of the communication layer). Watch out: messages can be delivered without respect to any order, or they can be sorted, kept back at the middleware layer and only delivered when a certain order can be guaranteed. Notice the self-delivery of messages by the sending process.

  29. Reliable Broadcast with no Order (diagram: a client broadcasts order, cancel and rebate; one server sees a cancel request without the previous order). Taken from: C. Karamanoulis, Reliable Broadcasts

  30. Reliable Broadcast with no Order (diagram: the same requests rebate, order and cancel arrive at a server in yet another order). Taken from: C. Karamanoulis

  31. Reliable Broadcast with FIFO Order (diagram: a client broadcasts M1, M2, M3; a server that receives M1 first can deliver it right after reliable delivery, but a server that receives M3 first must delay its FIFO delivery until M1 and M2 have been delivered). Taken from: C. Karamanoulis

  32. Causal Violation with FIFO Order (diagram: Stud1 broadcasts M1 “Lecture cancelled!”, Stud2 reacts with M2 “Let's go somewhere!”, Stud3 receives M2 before M1 and wonders “But we have a lecture???”; only afterwards is M1 FIFO delivered). Taken from: C. Karamanoulis. Local Order: If a process delivers a message m before broadcasting a message m’, then no correct process delivers m’ unless it has previously delivered m.

  33. Solutions for Causal Ordered Broadcasts - Piggyback every message sent with the previous messages: processes which missed a message can learn about it from the next incoming message and then deliver correctly - Send the event history with every message (e.g. using vector clocks) and delay delivery until the order is correct. Taken from: C. Karamanoulis and K.Birman. What are the advantages/disadvantages of both solutions?
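A minimal sketch (my own, single process, no networking) of the second solution: each broadcast carries the sender's vector clock, and the receiver holds a message back until the clock shows that all causally preceding messages have already been delivered locally.

```java
// CausalDelivery.java - delay delivery until a message's vector timestamp shows that
// all causally preceding broadcasts have already been delivered locally.
import java.util.*;

public class CausalDelivery {
    static class Msg {
        final int sender; final int[] vc; final String text;
        Msg(int sender, int[] vc, String text) { this.sender = sender; this.vc = vc; this.text = text; }
    }

    private final int[] delivered;            // how many messages from each sender we have delivered
    private final List<Msg> pending = new ArrayList<>();

    CausalDelivery(int nProcesses) { delivered = new int[nProcesses]; }

    /** Called when a broadcast arrives from the network; delivers whatever has become deliverable. */
    void receive(Msg m) {
        pending.add(m);
        boolean progress = true;
        while (progress) {
            progress = false;
            for (Iterator<Msg> it = pending.iterator(); it.hasNext(); ) {
                Msg p = it.next();
                if (deliverable(p)) {
                    it.remove();
                    delivered[p.sender]++;                 // update the local view
                    System.out.println("deliver " + p.text);
                    progress = true;
                }
            }
        }
    }

    private boolean deliverable(Msg m) {
        for (int k = 0; k < delivered.length; k++) {
            int expected = (k == m.sender) ? delivered[k] + 1 : delivered[k];
            if (m.vc[k] > expected) return false;          // a causally earlier message is still missing
        }
        return true;
    }

    public static void main(String[] args) {
        CausalDelivery p3 = new CausalDelivery(3);         // we play process 3 (index 2)
        // Hypothetical run of the lecture example: M1 from Stud1, M2 from Stud2 (sent after seeing M1).
        Msg m1 = new Msg(0, new int[]{1, 0, 0}, "M1: lecture cancelled");
        Msg m2 = new Msg(1, new int[]{1, 1, 0}, "M2: let's go somewhere");
        p3.receive(m2);   // arrives first, but depends on M1 -> stays pending
        p3.receive(m1);   // now both can be delivered, in causal order
    }
}
```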

  34. Causal Violation with FIFO Order (diagram: p1 broadcasts M1 and later M3, while M2 is broadcast concurrently; at p3, M2 is delivered, but M3 is delayed until M1 has been delivered). Taken from: C. Karamanoulis. P3 has delivered M2 to itself before delivering M1. Is this a problem? Think about causal dependencies and what causes them!

  35. Replication Anomalies with Causal Order (diagram: both replicas start in state 100; replica 1 applies “add 100” and then “multiply by 10” and ends in state 2000, replica 2 applies “multiply by 10” and then “add 100” and ends in state 1100). Taken from: C. Karamanoulis. Total Order: If correct processes p and q both deliver messages m and m’, then p delivers m before m’ if and only if q delivers m before m’.

  36. Solutions for Atomic Broadcasts - All nodes send messages to every other node. - All nodes receive messages, but wait with delivery. - One node has been selected to organize the total order. - This node orders all messages into a total order. - This node sends the total order to all nodes. - All nodes receive the total order and deliver their messages according to this order. Taken from: K.Birman. What are the advantages/disadvantages of this solution?
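A toy single-JVM sketch (my own) of this sequencer idea: one selected node assigns global sequence numbers, and every node holds back messages that arrive early and delivers strictly in sequence-number order.

```java
// SequencerTotalOrder.java - toy sequencer-based atomic broadcast: one node assigns
// global sequence numbers, every node delivers messages strictly in that order.
import java.util.*;

public class SequencerTotalOrder {
    /** The selected sequencer node: in a real system it would also forward the ordered messages. */
    static class Sequencer {
        private long next = 0;
        synchronized long assign() { return next++; }
    }

    /** Any node: buffers out-of-order messages and delivers them in sequence order. */
    static class Node {
        private final String name;
        private long expected = 0;
        private final SortedMap<Long, String> holdBack = new TreeMap<>();
        Node(String name) { this.name = name; }

        void receive(long seq, String msg) {
            holdBack.put(seq, msg);                         // may arrive out of order
            while (holdBack.containsKey(expected)) {        // deliver while the next number is present
                System.out.println(name + " delivers #" + expected + ": " + holdBack.remove(expected));
                expected++;
            }
        }
    }

    public static void main(String[] args) {
        Sequencer sequencer = new Sequencer();
        Node a = new Node("nodeA"), b = new Node("nodeB");

        long sOrder  = sequencer.assign();                  // "order"  gets global number 0
        long sCancel = sequencer.assign();                  // "cancel" gets global number 1

        // The network may reorder: nodeB sees "cancel" first but holds it back.
        a.receive(sOrder, "order");
        a.receive(sCancel, "cancel");
        b.receive(sCancel, "cancel");                        // buffered, not delivered yet
        b.receive(sOrder, "order");                          // now both nodes deliver in the same order
    }
}
```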

  37. Programming Client/Server Systems with Sockets

  38. Overview • Socket primitives • Process Model with sockets • Example of server side socket use • Transparency and socket programming? • Security, Performance, Availability, Flexibility etc. of socket based C/S. • Typical C/S infrastructure (Proxies, Firewalls, LDAP)

  39. Protocol Stack for Sockets (diagram: a socket on host A, port 3500 and a socket on host B, port 80 form a reliable communication channel; the TCP or UDP connection lives at the transport/session layer, above the network, data link and physical layers of both hosts).

  40. Socket Properties • Uses either TCP or UDP connections • Serves as a programming interface • A specification of “host”, “port” and “connection type” • A unique address of a channel endpoint.

  41. Berkeley Sockets (1) Socket primitives for TCP/IP. From: van Steen, Tanenbaum, Distributed Systems

  42. Berkeley Sockets (2) Connection-oriented communication pattern using sockets. From: van Steen, Tanenbaum, Distributed Systems

  43. Server Side Processing using Processes (diagram: the client connects from an arbitrary port C to the server dispatcher process listening on port X; the dispatcher accepts and spawns a server process on port Y, and the connection is established between the client on port C and the server process on port Y). After spawning a new process the dispatcher goes back to listening for new connection requests. This model scales only to some degree (process creation is expensive and only few processes are possible). Example: traditional CGI processing in a web server.

  44. Server Side Processing using Threads (diagram: the client connects from an arbitrary port C to the server dispatcher listening on port X; the dispatcher accepts and spawns a server thread on port Y, and the connection is established between the client on port C and the server thread on port Y). After spawning a new thread the dispatcher goes back to listening for new connection requests. This model scales well (thread creation is expensive, but threads can be pooled, and a larger number of threads is possible). Example: servlet request processing in a servlet engine (aka “web container”).
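A minimal sketch (my own; port, pool size and the one-line HTTP-style response are assumptions) of the dispatcher-plus-pool model: the accept loop only dispatches, and a bounded thread pool does the blocking I/O.

```java
// PooledDispatcher.java - accept loop hands each connection to a bounded thread pool
// instead of spawning a fresh thread or process per request.
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

public class PooledDispatcher {
    public static void main(String[] args) throws IOException {
        ExecutorService pool = Executors.newFixedThreadPool(20);    // pool size is an assumption
        try (ServerSocket listener = new ServerSocket(8080)) {      // port is arbitrary
            while (true) {
                Socket client = listener.accept();                  // dispatcher: accept only
                pool.execute(() -> serve(client));                  // a pooled worker thread does the I/O
            }
        }
    }

    private static void serve(Socket client) {
        try (Socket c = client;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String requestLine = in.readLine();                     // e.g. "GET / HTTP/1.0"
            out.print("HTTP/1.0 200 OK\r\n\r\nyou sent: " + requestLine + "\r\n");
            out.flush();
        } catch (IOException e) {
            // client went away; nothing to do
        }
    }
}
```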

  45. Server Side Concurrency (diagram: process-per-request server vs. threaded server, where two threads concurrently call addMoney(account, value)). In the case of the threaded server the function needs to be re-entrant: no unprotected global variables; keep per-thread state on the stack.
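A small sketch (my own; the account class and amounts are invented) of what re-entrancy means here: the shared balance is updated under a lock, everything else lives on the calling thread's stack.

```java
// AccountService.java - a threaded server's shared handler must protect shared state;
// two concurrent addMoney calls on the same account must not lose an update.
import java.util.concurrent.*;

public class AccountService {
    private long balanceCents = 0;                        // shared state, guarded by 'this'

    public synchronized void addMoney(long amountCents) { // protected: no lost updates
        balanceCents += amountCents;
    }

    public synchronized long balance() { return balanceCents; }

    public static void main(String[] args) throws InterruptedException {
        AccountService account = new AccountService();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
            pool.execute(() -> account.addMoney(1));      // 1000 concurrent deposits of 1 cent
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("balance = " + account.balance());  // 1000 with synchronization
    }
}
```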

  46. Designing a socket based service • Design the message formats to be exchanged (e.g. “HTTP/1.0 200 OK …”). Try to avoid data representation problems on different hardware. • Design the protocol between clients and server: - Will the client wait for an answer? (asynchronous vs. synchronous communication) - Can the server call back? (i.e. the client has server functionality) - Will the connection be permanent or closed after the request? - Will the server hold client related state (aka session)? - Will the server allow concurrent requests?

  47. Stateless or Stateful Service? Stateful: • Allows transactions and delivery guarantees • Can lead to resource exhaustion (e.g. running out of sockets) on a server • Needs somewhat reliable hardware and networks to succeed. Stateless: • Scales extremely well • Makes denial of service attacks harder • Forces new authentication and authorization per request

  48. Server Dangers: Keeping State and Expecting Clients to Behave - TCP SYN Flooding (diagram: for each incoming SYN the server stores client info and answers with SYN,ACK(SYN), waiting for the request; a malicious client never sends the request, only more SYNs). The server buffer gets filled with half-open connections and other clients cannot connect.

  49. A Client using sockets • Define the hostname and port number of the server host • Allocate a socket with host and port parameters • Get the input channel from the socket (messages from the server) • Get the output channel from the socket (this is where the messages to the server will go) • Create a message for the server, e.g. “GET /somefile.html HTTP/1.0” • Write the message into the output channel (the message is sent to the server) • Read the response from the input channel and display it (see the sketch below). A multithreaded client would use one thread to read e.g. from the console and write to the output channel, while another thread reads from the input channel and displays the server messages on the console (or writes them to a file).
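A minimal Java client following these steps (hostname, port and request line are assumptions; error handling is kept to the bare minimum):

```java
// SimpleSocketClient.java - connect, send one request, print the response.
import java.io.*;
import java.net.*;

public class SimpleSocketClient {
    public static void main(String[] args) throws IOException {
        String host = "www.example.org";                   // assumed server host
        int port = 80;                                     // assumed server port

        try (Socket socket = new Socket(host, port);       // allocate a socket with host and port
             BufferedReader in = new BufferedReader(       // input channel: messages from the server
                     new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(            // output channel: messages to the server
                     new OutputStreamWriter(socket.getOutputStream()), true)) {

            out.print("GET /somefile.html HTTP/1.0\r\n");  // create the message for the server
            out.print("Host: " + host + "\r\n\r\n");
            out.flush();                                   // write it into the output channel

            String line;
            while ((line = in.readLine()) != null) {       // read the response and display it
                System.out.println(line);
            }
        }
    }
}
```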

  50. A server using sockets • Define the port number of the service (e.g. 80 for an HTTP server) • Allocate a server socket with the port parameter. The server socket does “bind” and “listen” for new connections. • “Accept” an incoming connection and get a new socket for the client connection • Get the input channel from the socket and parse the client message • Get the output channel from the socket (this is where the messages to the client will go) • Do the request processing (or create a new thread to do it) • Create a response message, e.g. “HTTP/1.0 200 OK \n…” • Write the message into the output channel (the message is sent to the client) • Read the next message from the client channel or close the connection. A bare-bones server. It could be extended through e.g. a command pattern to match requests with processing dynamically; new commands could get loaded dynamically as well (“Application Server”). See the sketch below.
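A minimal sketch (my own; the command names and port are invented) of the command-pattern extension mentioned above: the accept loop parses the first word of the request and looks the handler up in a registry, so new commands can be added without changing the dispatch code.

```java
// CommandServer.java - socket server that dispatches requests to pluggable command handlers.
import java.io.*;
import java.net.*;
import java.util.*;
import java.util.function.Function;

public class CommandServer {
    // Registry of commands: first word of the request line -> handler producing the response.
    private static final Map<String, Function<String, String>> COMMANDS = new HashMap<>();

    public static void main(String[] args) throws IOException {
        COMMANDS.put("TIME", arg -> "OK " + new Date());           // hypothetical commands
        COMMANDS.put("ECHO", arg -> "OK " + arg);

        try (ServerSocket listener = new ServerSocket(9090)) {     // bind + listen (port is arbitrary)
            while (true) {
                try (Socket client = listener.accept();            // accept one connection at a time
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {

                    String request = in.readLine();                 // e.g. "ECHO hello"
                    if (request == null) continue;
                    String[] parts = request.split(" ", 2);
                    Function<String, String> handler =
                            COMMANDS.getOrDefault(parts[0], arg -> "ERROR unknown command");
                    out.println(handler.apply(parts.length > 1 ? parts[1] : ""));
                }
            }
        }
    }
}
```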
