
Pattern-Oriented Software Architecture Concurrent & Networked Objects Friday, October 31, 2014



Presentation Transcript


  1. Pattern-Oriented Software Architecture: Concurrent & Networked Objects
Friday, October 31, 2014
Dr. Douglas C. Schmidt (schmidt@uci.edu)
www.cs.wustl.edu/~schmidt/posa.ppt
Electrical & Computing Engineering Department, The Henry Samueli School of Engineering, University of California, Irvine

  2. The Road Ahead
• CPUs and networks have increased in performance by 3-7 orders of magnitude in the past decade:
  • CPU clock speeds: 10 Megahertz to 1 Gigahertz
  • Network link speeds: 2,400 bits/sec to 1 Gigabit/sec
• These advances stem largely from standardizing hardware & software APIs and protocols, e.g.:
  • Intel x86 & PowerPC chipsets
  • TCP/IP, ATM
  • POSIX & JVMs
  • CORBA ORBs & components
  • Ada, C, C++, RT Java
• Increasing software productivity and QoS depends heavily on COTS
• Extrapolating this trend to 2010 yields:
  • ~100 Gigahertz desktops
  • ~100 Gigabits/sec LANs
  • ~100 Megabits/sec wireless
  • ~10 Terabits/sec Internet backbone
• In general, software has not improved as rapidly or as effectively as hardware

  3. Overview of Patterns and Pattern Languages (www.posa.uci.edu)
• Patterns
  • Present solutions to common software problems arising within a certain context
  • Help resolve key design forces: flexibility, extensibility, dependability, predictability, scalability, & efficiency
  • Capture recurring structures & dynamics among software participants to facilitate reuse of successful designs (e.g., the Proxy pattern)
  • Generally codify expert knowledge & “best practices”
• Pattern Languages
  • Define a vocabulary for talking about software development problems
  • Provide a process for the orderly resolution of these problems
  • Help to generate & reuse software architectures

  4. Overview of Frameworks & Components
• Framework
  • An integrated collection of components that collaborate to produce a reusable architecture for a family of related applications
  • Frameworks facilitate reuse of successful software designs & implementations
  • Applications inherit from and instantiate framework components
• Frameworks differ from conventional class libraries:

  Frameworks                 | Class Libraries
  “Semi-complete” applications | Stand-alone components
  Domain-specific            | Domain-independent
  Inversion of control       | Borrow caller’s thread of control
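Inversion of control is the defining difference in the table above. A minimal sketch (all names illustrative, not from JAWS or ACE) shows a framework that owns the control flow and calls back into application-supplied components:

```cpp
#include <functional>
#include <string>
#include <vector>

// A toy "framework": it owns the event loop and calls back into
// application-supplied components -- inversion of control.
class MiniFramework {
public:
  using Handler = std::function<std::string(const std::string&)>;

  // Applications extend the framework by plugging in components.
  void register_handler(Handler h) { handlers_.push_back(std::move(h)); }

  // The framework, not the application, decides when handlers run.
  std::vector<std::string> run(const std::vector<std::string>& events) {
    std::vector<std::string> results;
    for (const auto& ev : events)
      for (const auto& h : handlers_)
        results.push_back(h(ev));   // framework calls application code
    return results;
  }

private:
  std::vector<Handler> handlers_;
};

// Usage: the application supplies the "what", the framework the "when".
inline std::vector<std::string> demo_framework() {
  MiniFramework fw;
  fw.register_handler([](const std::string& ev) { return "handled:" + ev; });
  return fw.run({"GET", "PUT"});
}
```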

  5. The JAWS Web Server Framework
• Event Dispatcher
  • Accepts client connection request events, receives HTTP GET requests, & coordinates JAWS’s event demultiplexing strategy with its concurrency strategy.
  • As events are processed they are dispatched to the appropriate Protocol Handler.
• Protocol Handler
  • Performs parsing & protocol processing of HTTP request events.
  • JAWS’s Protocol Handler design allows multiple Web protocols, such as HTTP/1.0, HTTP/1.1, & HTTP-NG, to be incorporated into a Web server.
  • To add a new protocol, developers just write a new Protocol Handler component & configure it into the JAWS framework.
• Cached Virtual Filesystem
  • Improves Web server performance by reducing the overhead of file system accesses when processing HTTP GET requests.
  • Various caching strategies, such as least-recently used (LRU) or least-frequently used (LFU), can be selected according to the actual or anticipated workload & configured statically or dynamically.
• Key Sources of Variation
  • Concurrency models, e.g., thread pool vs. thread-per-request
  • Event demultiplexing models, e.g., sync vs. async
  • File caching models, e.g., LRU vs. LFU
  • Content delivery protocols, e.g., HTTP 1.0+1.1, HTTP-NG, IIOP, DICOM

  6. Applying Patterns to Resolve Key JAWS Design Challenges
Patterns help resolve the following common challenges:
• Encapsulating low-level OS APIs
• Decoupling event demultiplexing & connection management from protocol processing
• Scaling up performance via threading
• Implementing a synchronized request queue
• Minimizing server threading overhead
• Using asynchronous I/O effectively
• Efficiently demuxing asynchronous operations & completions
• Enhancing server configurability
• Transparently parameterizing synchronization into components
• Ensuring locks are released properly
• Minimizing unnecessary locking
• Synchronizing singletons correctly

  7. Encapsulating Low-level OS APIs (the Wrapper Facade pattern)
• Context: A Web server must manage a variety of OS services, including processes, threads, Socket connections, virtual memory, & files. Most operating systems provide low-level APIs written in C to access these services.
• Problem: The diversity of hardware and operating systems makes it hard to build portable and robust Web server software by programming directly to low-level operating system APIs, which are tedious, error-prone, & non-portable.
• Solution: Apply the Wrapper Facade design pattern to avoid accessing low-level operating system APIs directly.
• Intent: This pattern encapsulates data & functions provided by existing non-OO APIs within more concise, robust, portable, maintainable, & cohesive OO class interfaces.
(Diagram: the Application calls method1()…methodN() on a Wrapper Facade, whose methods in turn call the underlying API functionA(), functionB(), & functionC().)
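The pattern can be sketched in a few lines of C++. Here a hypothetical Thread_Mutex facade wraps the POSIX pthread mutex API; ACE's real wrapper facades are considerably richer:

```cpp
#include <pthread.h>
#include <stdexcept>

// Wrapper Facade sketch: the non-OO POSIX mutex API is encapsulated
// in a concise, robust C++ class (illustrative, not ACE's actual API).
class Thread_Mutex {
public:
  Thread_Mutex() {
    if (pthread_mutex_init(&mutex_, nullptr) != 0)
      throw std::runtime_error("mutex init failed");
  }
  ~Thread_Mutex() { pthread_mutex_destroy(&mutex_); }

  // No error codes to check at each call site: failures become exceptions.
  void acquire() {
    if (pthread_mutex_lock(&mutex_) != 0)
      throw std::runtime_error("lock failed");
  }
  void release() {
    if (pthread_mutex_unlock(&mutex_) != 0)
      throw std::runtime_error("unlock failed");
  }

  Thread_Mutex(const Thread_Mutex&) = delete;            // non-copyable,
  Thread_Mutex& operator=(const Thread_Mutex&) = delete; // like the handle it owns

private:
  pthread_mutex_t mutex_;  // the encapsulated low-level OS object
};

// A scoped-locking helper built on top of the facade.
class Guard {
public:
  explicit Guard(Thread_Mutex& m) : m_(m) { m_.acquire(); }
  ~Guard() { m_.release(); }
private:
  Thread_Mutex& m_;
};

inline int demo_wrapper_facade() {
  Thread_Mutex m;
  int counter = 0;
  { Guard g(m); ++counter; }  // lock released automatically at scope exit
  return counter;
}
```

Note how the facade also fixes an ownership policy (non-copyable, destroyed with its object) that the raw C API leaves to convention.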

  8. Decoupling Event Demuxing and Connection Management from Protocol Processing
• Context
  • A Web server can be accessed simultaneously by multiple clients, each of which has its own connection to the server.
  • A Web server must therefore be able to demultiplex and process multiple types of indication events that can arrive from different clients concurrently.
  • A common way to demultiplex events in a Web server is to use select().
• Problem
  • Developers often tightly couple a Web server’s event-demultiplexing and connection-management code with its protocol-handling code that performs HTTP 1.0 processing.
  • In such a design, the demultiplexing and connection-management code cannot be reused as black-box components, neither by other HTTP protocols, nor by other middleware and applications, such as ORBs and image servers.
  • Thus, changes to the event-demultiplexing and connection-management code (e.g., porting it to use TLI or WaitForMultipleObjects()) will affect the Web server protocol code directly and may introduce subtle bugs.
• Solution: Apply the Reactor pattern and the Acceptor-Connector pattern to separate the generic event-demultiplexing and connection-management code from the web server’s protocol code.
(Diagram: clients issue HTTP GET & connect requests to the Web server’s Event Dispatcher, which uses select() over a set of socket handles.)

  9. The Reactor Pattern
Intent: The Reactor architectural pattern allows event-driven applications to demultiplex & dispatch service requests that are delivered to an application from one or more clients.
• Structure: a Reactor (handle_events(), register_handler(), remove_handler()) owns a handle set & dispatches to Event Handlers (handle_event(), get_handle()); Concrete Event Handlers A & B implement the handler interface; a Synchronous Event Demuxer (select()) notifies the Reactor.
• Dynamics:
  • Initialize phase: the main program register_handler()s a Concrete Event Handler with the Reactor, whose get_handle() supplies the Handle to wait on; the main program then calls handle_events().
  • Event handling phase: select() waits for events on the handles; when an event arrives, the Reactor dispatches handle_event() on the corresponding handler, which performs its service().
• Observations
  • Note inversion of control
  • Also note how long-running event handlers can degrade the QoS, since callbacks steal the reactor’s thread!
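A single iteration of a Reactor can be sketched with select() and a pipe standing in for a client socket. All class and method names below are illustrative simplifications of the pattern's participants:

```cpp
#include <unistd.h>
#include <sys/select.h>
#include <functional>
#include <map>
#include <string>

// Stripped-down Reactor: handlers register a handle (fd); the reactor
// select()s on the registered handles and dispatches callbacks.
class MiniReactor {
public:
  using EventHandler = std::function<void(int fd)>;

  void register_handler(int fd, EventHandler h) { handlers_[fd] = std::move(h); }
  void remove_handler(int fd) { handlers_.erase(fd); }

  // One iteration of the event loop: demultiplex, then dispatch.
  void handle_events() {
    fd_set read_set;
    FD_ZERO(&read_set);
    int max_fd = -1;
    for (const auto& entry : handlers_) {
      FD_SET(entry.first, &read_set);
      if (entry.first > max_fd) max_fd = entry.first;
    }
    timeval tv{1, 0};  // don't block forever in this sketch
    if (select(max_fd + 1, &read_set, nullptr, nullptr, &tv) <= 0) return;
    for (auto it = handlers_.begin(); it != handlers_.end(); ) {
      auto cur = it++;                     // handler may remove itself
      if (FD_ISSET(cur->first, &read_set)) cur->second(cur->first);
    }
  }

private:
  std::map<int, EventHandler> handlers_;
};

// Usage: a pipe stands in for a client socket connection.
inline std::string demo_reactor() {
  int fds[2];
  if (pipe(fds) != 0) return "";
  MiniReactor reactor;
  std::string received;
  reactor.register_handler(fds[0], [&](int fd) {
    char buf[16];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n > 0) received.assign(buf, static_cast<size_t>(n));
  });
  write(fds[1], "GET", 3);   // "client" sends a request
  reactor.handle_events();   // reactor demuxes & dispatches it
  close(fds[0]); close(fds[1]);
  return received;
}
```

The inversion of control is visible in demo_reactor(): the application never reads the handle itself; the reactor decides when the callback runs.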

  10. The Acceptor-Connector Pattern
Intent: The Acceptor-Connector design pattern decouples the connection & initialization of cooperating peer services in a networked system from the processing performed by the peer services after being connected & initialized.
(Diagram: a Dispatcher (select(), handle_events(), register_handler(), remove_handler()) notifies Acceptors, Connectors, & Service Handlers, each of which uses & owns a Transport Handle. The Acceptor (peer_acceptor_, Accept(), handle_event()) & the Connector (connect(), complete(), handle_event()) each <<create>> & <<activate>> Service Handlers (peer_stream_, open(), handle_event(), set_handle()); Concrete Acceptors, Connectors, & Service Handlers A & B specialize these roles.)
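The Acceptor role can be sketched over loopback TCP: the Acceptor owns the passive-mode endpoint and hands back a connected data-mode handle, while a synchronous connect stands in for the Connector role. Names are illustrative, not ACE's actual Acceptor-Connector API:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>
#include <string>

// Bare-bones Acceptor: owns the passive-mode endpoint & factories a
// connected handle when a peer connects (illustrative sketch only).
class MiniAcceptor {
public:
  // open(): create the passive-mode endpoint on an OS-chosen port.
  bool open() {
    listen_fd_ = socket(AF_INET, SOCK_STREAM, 0);
    if (listen_fd_ < 0) return false;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;  // let the OS pick a free port
    if (bind(listen_fd_, reinterpret_cast<sockaddr*>(&addr), sizeof addr) < 0)
      return false;
    socklen_t len = sizeof addr;
    getsockname(listen_fd_, reinterpret_cast<sockaddr*>(&addr), &len);
    port_ = ntohs(addr.sin_port);
    return listen(listen_fd_, 5) == 0;
  }
  // Factory method: returns a connected (data-mode) handle.
  int accept_connection() { return accept(listen_fd_, nullptr, nullptr); }
  int port() const { return port_; }
  ~MiniAcceptor() { if (listen_fd_ >= 0) close(listen_fd_); }
private:
  int listen_fd_ = -1;
  int port_ = 0;
};

// The Connector role, synchronous case (see slide 12).
inline int mini_connect(int port) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  addr.sin_port = htons(static_cast<uint16_t>(port));
  if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) < 0) {
    close(fd);
    return -1;
  }
  return fd;
}

inline std::string demo_acceptor_connector() {
  MiniAcceptor acceptor;
  if (!acceptor.open()) return "";
  int client = mini_connect(acceptor.port());   // Connector role
  int server = acceptor.accept_connection();    // Acceptor role
  write(client, "hi", 2);
  char buf[8] = {};
  ssize_t n = read(server, buf, sizeof buf);
  close(client); close(server);
  return std::string(buf, n > 0 ? static_cast<size_t>(n) : 0);
}
```

Note the separation the pattern intends: nothing in MiniAcceptor knows what protocol the connected handles will speak afterwards.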

  11. Acceptor Dynamics
• Passive-mode endpoint initialize phase: the application open()s the Acceptor, which register_handler()s itself with the Dispatcher for ACCEPT events on its Handle; the application then calls handle_events().
• Service handler initialize phase: when a connection request arrives, the Acceptor accept()s it, creates & open()s a Service Handler, & register_handler()s the new handler with its data-mode handle.
• Service processing phase: the Dispatcher invokes handle_event() on the Service Handler, which performs its service().
• The Acceptor ensures that passive-mode transport endpoints aren’t used to read/write data accidentally, & vice versa for data transport endpoints
• There is typically one Acceptor factory per-service/per-port
• Additional demuxing can be done at higher layers, a la CORBA

  12. Synchronous Connector Dynamics
• Sync connection initiation phase: the application calls connect() on the Connector with a Service Handler & remote Addr; the Connector obtains the handler’s Handle via get_handle() & establishes the connection synchronously.
• Service handler initialize phase: the Connector open()s the Service Handler, which register_handler()s itself with the Dispatcher.
• Service processing phase: the application calls handle_events(); when events arrive, the Dispatcher invokes handle_event() on the Service Handler, which performs its service().
• Motivation for synchrony:
  • If connection latency is negligible, e.g., connecting with a server on the same host via a ‘loopback’ device
  • If multiple threads of control are available & it is efficient to use a thread-per-connection to connect each service handler synchronously
  • If the services must be initialized in a fixed order & the client can’t perform useful work until all connections are established

  13. Asynchronous Connector Dynamics
• Async connection initiation phase: the application calls connect() on the Connector, which register_handler()s itself with the Dispatcher for a CONNECT event on the Handle & returns immediately.
• Service handler initialize phase: when the connection completes, the Dispatcher notifies the Connector via complete(); the Connector open()s the Service Handler, which register_handler()s itself with the Dispatcher.
• Service processing phase: when events arrive, the Dispatcher invokes handle_event() on the Service Handler, which performs its service().
• Motivation for asynchrony:
  • If the client is establishing connections over high-latency links
  • If the client is a single-threaded application
  • If the client is initializing many peers that can be connected in an arbitrary order

  14. Applying the Reactor and Acceptor-Connector Patterns in JAWS
• The Reactor architectural pattern decouples:
  • JAWS’s generic synchronous event demultiplexing & dispatching logic from
  • The HTTP protocol processing it performs in response to events
• The Acceptor-Connector design pattern can use a Reactor as its Dispatcher in order to help decouple:
  • The connection & initialization of peer client & server HTTP services from
  • The processing activities performed by these peer services once they are connected & initialized
(Diagram: the Reactor structure from slide 9, with HTTP Acceptor & HTTP Handler as the concrete event handlers notified via select().)

  15. The JAWS Web Server Framework
• Event Dispatcher
  • Accepts client connection request events, receives HTTP GET requests, & coordinates JAWS’s event demultiplexing strategy with its concurrency strategy.
  • As events are processed they are dispatched to the appropriate Protocol Handler.
• Protocol Handler
  • Performs parsing & protocol processing of HTTP request events.
  • JAWS’s Protocol Handler design allows multiple Web protocols, such as HTTP/1.0, HTTP/1.1, & HTTP-NG, to be incorporated into a Web server.
  • To add a new protocol, developers just write a new Protocol Handler component & configure it into the JAWS framework.
• Cached Virtual Filesystem
  • Improves Web server performance by reducing the overhead of file system accesses when processing HTTP GET requests.
  • Various caching strategies, such as least-recently used (LRU) or least-frequently used (LFU), can be selected according to the actual or anticipated workload & configured statically or dynamically.
• Key Sources of Variation
  • Concurrency models, e.g., thread pool vs. thread-per-request
  • Event demultiplexing models, e.g., sync vs. async
  • File caching models, e.g., LRU vs. LFU
  • Content delivery protocols, e.g., HTTP 1.0+1.1, HTTP-NG, IIOP, DICOM

  16. The Acceptor-Connector Pattern
Intent: The Acceptor-Connector design pattern decouples the connection & initialization of cooperating peer services in a networked system from the processing performed by the peer services after being connected & initialized.
(Diagram: a Dispatcher (select(), handle_events(), register_handler(), remove_handler()) notifies Acceptors, Connectors, & Service Handlers, each of which uses & owns a Transport Handle. The Acceptor (peer_acceptor_, Accept(), handle_event()) & the Connector (connect(), complete(), handle_event()) each <<create>> & <<activate>> Service Handlers (peer_stream_, open(), handle_event(), set_handle()); Concrete Acceptors, Connectors, & Service Handlers A & B specialize these roles.)

  17. Reactive Connection Management & Data Transfer in JAWS

  18. Scaling Up Performance via Threading
• Context
  • HTTP runs over TCP, which uses flow control to ensure that senders do not produce data more rapidly than slow receivers or congested networks can buffer and process.
  • Since achieving efficient end-to-end quality of service (QoS) is important to handle heavy Web traffic loads, a Web server must scale up efficiently as its number of clients increases.
• Problem
  • Processing all HTTP GET requests reactively within a single-threaded process does not scale up, because each server CPU time-slice spends much of its time blocked waiting for I/O operations to complete.
  • Similarly, to improve QoS for all its connected clients, an entire Web server process must not block while waiting for connection flow control to abate so it can finish sending a file to a client.
• Solution: Apply the Half-Sync/Half-Async architectural pattern to scale up server performance by processing different HTTP requests concurrently in multiple threads.
• This solution yields two benefits:
  • Threads can be mapped to separate CPUs to scale up server performance via multi-processing.
  • Each thread blocks independently, which prevents one flow-controlled connection from degrading the QoS other clients receive.

  19. The Half-Sync/Half-Async Pattern
Intent: The Half-Sync/Half-Async architectural pattern decouples async & sync service processing in concurrent systems, to simplify programming without unduly reducing performance. The pattern introduces two inter-communicating layers, one for async & one for sync service processing.
• This pattern defines two service processing layers (one async, one sync), along with a queueing layer that allows services to exchange messages between the two layers.
• The pattern allows sync services, such as HTTP protocol processing, to run concurrently, relative both to each other and to async services, such as event demultiplexing.
(Diagram: Sync Services 1-3 in the Sync Service Layer read/write a Queue in the Queueing Layer; the Async Service in the Async Service Layer enqueues messages from an External Event Source. Sequence: the External Event Source notifies the Async Service, which read()s the message, work()s on it, & enqueue()s it; a Sync Service then read()s the message & work()s on it.)
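The layering can be sketched with standard C++ threads: a simulated async-layer producer (standing in for the Reactor thread) puts requests on the queueing layer, and sync-layer workers take them off and process them concurrently. Names are illustrative throughout:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// The queueing layer between the async & sync layers.
class RequestQueue {
public:
  void put(std::string req) {
    { std::lock_guard<std::mutex> g(m_); q_.push(std::move(req)); }
    cv_.notify_one();
  }
  std::string get() {  // blocks until a request (or shutdown marker) arrives
    std::unique_lock<std::mutex> g(m_);
    cv_.wait(g, [this] { return !q_.empty(); });
    std::string r = std::move(q_.front());
    q_.pop();
    return r;
  }
private:
  std::mutex m_;
  std::condition_variable cv_;
  std::queue<std::string> q_;
};

inline int demo_half_sync_half_async(int n_requests, int n_workers) {
  RequestQueue queue;
  std::atomic<int> processed{0};

  // Sync service layer: each worker blocks independently on get().
  std::vector<std::thread> workers;
  for (int i = 0; i < n_workers; ++i)
    workers.emplace_back([&] {
      for (;;) {
        std::string req = queue.get();
        if (req == "STOP") break;     // shutdown marker
        ++processed;                  // stands in for HTTP processing
      }
    });

  // Async service layer (simulated): enqueue requests, then stop markers.
  std::thread producer([&] {
    for (int i = 0; i < n_requests; ++i) queue.put("GET /index.html");
    for (int i = 0; i < n_workers; ++i) queue.put("STOP");
  });

  producer.join();
  for (auto& w : workers) w.join();
  return processed.load();
}
```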

  20. Applying the Half-Sync/Half-Async Pattern in JAWS
• JAWS uses the Half-Sync/Half-Async pattern to process HTTP GET requests synchronously from multiple clients, but concurrently in separate threads
• The worker thread that removes the request synchronously performs HTTP protocol processing & then transfers the file back to the client.
• If flow control occurs on its client connection, this thread can block without degrading the QoS experienced by clients serviced by other worker threads in the pool.
(Diagram: Worker Threads 1-3 in the Synchronous Service Layer <<get>> from a Request Queue in the Queueing Layer; HTTP Handlers & the HTTP Acceptor <<put>> into it from the Asynchronous Service Layer, driven by a Reactor over <<ready to read>> Socket Event Sources.)

  21. Implementing a Synchronized Request Queue
• Context
  • The Half-Sync/Half-Async pattern contains a queue.
  • The JAWS Reactor thread is a ‘producer’ that inserts HTTP GET requests into the queue.
  • Worker pool threads are ‘consumers’ that remove & process queued requests.
• Problem
  • A naive implementation of a request queue will incur race conditions or ‘busy waiting’ when multiple threads insert and remove requests.
  • e.g., multiple concurrent producer and consumer threads can corrupt the queue’s internal state if it is not synchronized properly.
  • Similarly, these threads will ‘busy wait’ when the queue is empty or full, which wastes CPU cycles unnecessarily.
• Solution: Apply the Monitor Object pattern to implement a synchronized queue.
  • This design pattern synchronizes concurrent method execution to ensure that only one method at a time runs within an object.
  • It also allows an object’s methods to cooperatively schedule their execution sequences.
(Diagram: a Monitor Object with sync_method1()…sync_methodN() uses a Monitor Lock (acquire(), release()) & Monitor Conditions (wait(), notify(), notify_all()); 2..* Clients invoke the synchronized methods.)

  22. Dynamics of the Monitor Object Pattern
• Synchronized method invocation & serialization: Client Thread1 calls sync_method1(); the Monitor Lock is acquire()d & the thread does its work.
• Synchronized method thread suspension: when the thread must wait for a condition, it calls wait(); the OS thread scheduler atomically releases the monitor lock & automatically suspends the client thread.
• Monitor condition notification: Client Thread2 calls sync_method2(), acquire()s the lock, does its work, notify()s the Monitor Condition, & release()s the lock.
• Synchronized method thread resumption: the OS thread scheduler automatically resumes the client thread & atomically reacquires the monitor lock; the synchronized method finishes its work & release()s the lock.

  23. Applying the Monitor Object Pattern in JAWS
The JAWS synchronized request queue implements the queue’s not-empty and not-full monitor conditions via a pair of ACE wrapper facades for POSIX-style condition variables.
• When a worker thread attempts to dequeue an HTTP GET request from an empty queue, the request queue’s get() method atomically releases the monitor lock and the worker thread suspends itself on the not-empty monitor condition.
• The thread remains suspended until the queue is no longer empty, which happens when an HTTP_Handler running in the Reactor thread inserts a request into the queue.
(Diagram: an HTTP Handler <<put>>s into & a Worker Thread <<get>>s from a Request Queue (put(), get()), which uses a Thread_Mutex (acquire(), release()) & 2 Thread Conditions (wait(), notify(), notify_all()).)
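Assuming std::mutex and std::condition_variable stand in for ACE's wrapper facades, the bounded queue described above can be sketched as a Monitor Object with exactly the two conditions the slide names:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Monitor Object sketch of the bounded request queue: one monitor lock
// plus two monitor conditions (not-empty, not-full), using C++ standard
// primitives instead of ACE's wrapper facades.
class MonitorQueue {
public:
  explicit MonitorQueue(size_t capacity) : capacity_(capacity) {}

  void put(int request) {             // synchronized method
    std::unique_lock<std::mutex> lock(mutex_);
    not_full_.wait(lock, [this] { return q_.size() < capacity_; });
    q_.push(request);
    not_empty_.notify_one();          // wake a blocked consumer
  }

  int get() {                         // synchronized method
    std::unique_lock<std::mutex> lock(mutex_);
    // wait() atomically releases the lock & suspends the caller,
    // exactly as in the Monitor Object dynamics on slide 22.
    not_empty_.wait(lock, [this] { return !q_.empty(); });
    int r = q_.front();
    q_.pop();
    not_full_.notify_one();           // wake a blocked producer
    return r;
  }

private:
  const size_t capacity_;
  std::mutex mutex_;                  // the monitor lock
  std::condition_variable not_empty_; // monitor condition #1
  std::condition_variable not_full_;  // monitor condition #2
  std::queue<int> q_;
};

inline long demo_monitor_queue() {
  MonitorQueue q(4);                  // small capacity exercises both waits
  long sum = 0;
  std::thread consumer([&] {
    for (int i = 0; i < 100; ++i) sum += q.get();  // worker thread role
  });
  for (int i = 0; i < 100; ++i) q.put(1);          // Reactor thread role
  consumer.join();
  return sum;
}
```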

  24. Minimizing Server Threading Overhead
Context: Socket implementations in certain multi-threaded operating systems provide a concurrent accept() optimization to accept client connection requests and improve the performance of Web servers that implement the HTTP 1.0 protocol, as follows:
• The operating system allows a pool of threads in a Web server to call accept() on the same passive-mode socket handle.
• When a connection request arrives, the operating system’s transport layer creates a new connected transport endpoint, encapsulates this new endpoint with a data-mode socket handle, and passes the handle as the return value from accept().
• The operating system then schedules one of the threads in the pool to receive this data-mode handle, which it uses to communicate with its connected client.
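This idiom can be sketched directly with POSIX sockets. As the slide says, only certain operating systems support it; the sketch below assumes Linux, where the kernel hands each arriving connection to exactly one of the threads blocked in accept() on the shared passive-mode handle:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <atomic>
#include <thread>
#include <vector>

// Concurrent-accept() sketch: several threads block in accept() on the
// SAME passive-mode handle; the OS dispatches each new connection to
// exactly one of them (Linux semantics; other platforms vary).
inline int demo_concurrent_accept(int n_clients) {
  int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr{};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  addr.sin_port = 0;                    // OS-chosen port
  bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
  socklen_t len = sizeof addr;
  getsockname(listen_fd, reinterpret_cast<sockaddr*>(&addr), &len);
  listen(listen_fd, 16);

  std::atomic<int> served{0};
  std::vector<std::thread> pool;
  for (int i = 0; i < n_clients; ++i)
    pool.emplace_back([&] {
      // every thread calls accept() on the shared passive-mode handle;
      // the return value is a brand-new data-mode handle
      int data_fd = accept(listen_fd, nullptr, nullptr);
      if (data_fd >= 0) {
        char byte;
        if (read(data_fd, &byte, 1) == 1) ++served;
        close(data_fd);
      }
    });

  for (int i = 0; i < n_clients; ++i) {  // clients connect & send one byte
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr);
    write(fd, "x", 1);
    close(fd);
  }
  for (auto& t : pool) t.join();
  close(listen_fd);
  return served.load();
}
```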

  25. Drawbacks with the Half-Sync/Half-Async Architecture
• Problem: Although the Half-Sync/Half-Async threading model is more scalable than the purely reactive model, it is not necessarily the most efficient design.
• e.g., passing a request between the Reactor thread and a worker thread incurs:
  • Dynamic memory (de)allocation,
  • Synchronization operations,
  • A context switch, &
  • CPU cache updates
• This overhead makes JAWS’ latency unnecessarily high, particularly on operating systems that support the concurrent accept() optimization.
• Solution: Apply the Leader/Followers pattern to minimize server threading overhead.

  26. Dynamics in the Leader/Followers Pattern
• Leader thread demuxing: Threads 1 & 2 join() the Thread Pool; thread 2 sleeps until it becomes the leader, while thread 1 (the leader) calls handle_events() & waits for a new event on the Handle Set.
• Follower thread promotion: when an event arrives, the leader deactivate_handle()s the handle & the pool promote_new_leader()s; thread 2 becomes the new leader & waits for the next event.
• Event handler demuxing & event processing: thread 1 processes the current event by dispatching handle_event() on the Concrete Event Handler, then reactivate_handle()s the handle.
• Rejoining the thread pool: thread 1 join()s the pool again & sleeps until it becomes the leader.

  27. Applying the Leader/Followers Pattern in JAWS
• Two options:
  • If the platform supports the accept() optimization, then the OS itself implements the Leader/Followers pattern
  • Otherwise, this pattern can be implemented as a reusable framework
• Although the Leader/Followers thread pool design is highly efficient, the Half-Sync/Half-Async design may be more appropriate for certain types of servers, e.g.:
  • The Half-Sync/Half-Async design can reorder and prioritize client requests more flexibly, because it has a synchronized request queue implemented using the Monitor Object pattern.
  • It may be more scalable, because it queues requests in Web server virtual memory, rather than the operating system kernel.
(Diagram: a Thread Pool (join(), promote_new_leader(), synchronizer) demultiplexes a Handle Set (handle_events(), deactivate_handle(), reactivate_handle(), select()); Event Handlers (handle_event(), get_handle()) use Handles; HTTP Acceptor & HTTP Handler are the concrete event handlers.)
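A stripped-down Leader/Followers sketch can use a single mutex as the "leader token": the holder demuxes the next event (a byte from a pipe standing in for a socket), promotes a follower by releasing the token, and then processes the event in its own thread, with no queue hand-off. All names are illustrative:

```cpp
#include <unistd.h>
#include <atomic>
#include <mutex>
#include <thread>
#include <vector>

// Leader/Followers in miniature: only the leader (mutex holder) reads
// the event source; it releases the token BEFORE processing, so the
// next event can be demuxed concurrently by the new leader.
inline int demo_leader_followers(int n_events, int n_threads) {
  int fds[2];
  if (pipe(fds) != 0) return -1;

  std::mutex leader_token;
  std::atomic<int> processed{0};
  std::atomic<bool> done{false};

  std::vector<std::thread> pool;
  for (int i = 0; i < n_threads; ++i)
    pool.emplace_back([&] {
      while (!done.load()) {
        std::unique_lock<std::mutex> leader(leader_token);  // become leader
        if (done.load()) break;
        char ev;
        ssize_t n = read(fds[0], &ev, 1);   // leader demuxes one event
        leader.unlock();                    // promote a follower to leader
        if (n == 1) {
          if (ev == 'Q') { done.store(true); break; }  // shutdown marker
          ++processed;                      // process event in this thread
        }
      }
    });

  for (int i = 0; i < n_events; ++i) write(fds[1], "e", 1);
  for (int i = 0; i < n_threads; ++i) write(fds[1], "Q", 1);  // shutdown
  for (auto& t : pool) t.join();
  close(fds[0]); close(fds[1]);
  return processed.load();
}
```

Notice what the sketch avoids relative to Half-Sync/Half-Async: no request queue, no per-request allocation, and no hand-off context switch; the demuxing thread is the processing thread.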

  28. The Proactor Pattern
• Problem: Developing software that achieves the potential efficiency & scalability of async I/O is hard, due to the separation in time & space of async operation invocations and their subsequent completion events.
• Solution: Apply the Proactor architectural pattern to make efficient use of async I/O.
• This pattern allows event-driven applications to efficiently demultiplex & dispatch service requests triggered by the completion of async operations, thereby achieving the performance benefits of concurrency without incurring many of its liabilities.
(Diagram: an Initiator <<invokes>> async_op() on an Asynchronous Operation, which an Asynchronous Operation Processor execute_async_op()s; completions are <<enqueued>> on a Completion Event Queue, <<dequeued>> by an Asynchronous Event Demuxer (get_completion_event()), & <<demultiplexed & dispatched>> by a Proactor (handle_events()) to Completion Handlers (handle_event()), realized by Concrete Completion Handlers.)
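The pattern's flow can be simulated with standard threads: an async operation runs on another thread and posts a completion event to a queue; the application's event loop dequeues completions and dispatches their handlers. All names are illustrative; a production proactor sits on real OS async I/O such as Windows I/O completion ports (slide 30):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

struct Completion {
  std::string result;                               // the operation's Result
  std::function<void(const std::string&)> handler;  // completion handler
};

// Simulated Proactor: the "OS" runs the async op on its own thread and
// enqueues a completion event; handle_events() dequeues & dispatches.
class MiniProactor {
public:
  // Initiator: start an async operation; returns immediately.
  void async_op(std::function<std::string()> op,
                std::function<void(const std::string&)> handler) {
    ++pending_;
    std::thread([this, op = std::move(op), h = std::move(handler)] {
      std::string r = op();                     // executes asynchronously
      std::lock_guard<std::mutex> g(m_);
      queue_.push(Completion{std::move(r), std::move(h)});
      cv_.notify_one();
    }).detach();
  }

  // Event loop step: dequeue one completion & dispatch its handler.
  void handle_events() {
    std::unique_lock<std::mutex> lock(m_);
    cv_.wait(lock, [this] { return !queue_.empty(); });
    Completion c = std::move(queue_.front());
    queue_.pop();
    --pending_;
    lock.unlock();
    c.handler(c.result);                        // completion processing
  }

  int pending() const { return pending_; }      // only the initiator thread reads this

private:
  std::mutex m_;
  std::condition_variable cv_;
  std::queue<Completion> queue_;
  int pending_ = 0;
};

inline std::string demo_proactor() {
  MiniProactor proactor;
  std::string response;
  proactor.async_op(
      [] { return std::string("file contents"); },        // async "read"
      [&](const std::string& r) { response = "sent:" + r; });
  while (proactor.pending() > 0) proactor.handle_events();
  return response;
}
```

Note the separation in time & space the slide mentions: async_op() returns before the operation runs, and the handler executes later, from the event loop.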

  29. Dynamics in the Proactor Pattern
• Initiate operation: the Initiator calls async_operation() on the Asynchronous Operation.
• Process operation: the Asynchronous Operation Processor exec_async_operation()s it.
• Run event loop: the Initiator calls handle_events() on the Proactor.
• Generate & queue completion event: when the operation finishes, its Result is enqueued as an event on the Completion Event Queue.
• Dequeue completion event & perform completion processing: the Proactor dequeues the event & dispatches handle_event() with the Result to the Completion Handler, which performs its service().
• Note similarities & differences with the Reactor pattern, e.g.:
  • Both process events via callbacks
  • However, it’s generally easier to multi-thread a proactor

  30. Applying the Proactor Pattern in JAWS
The Proactor pattern structures the JAWS concurrent server to receive & process requests from multiple clients asynchronously.
• JAWS HTTP components are split into two parts:
  • Operations that execute asynchronously, e.g., to accept connections & receive client HTTP GET requests
  • The corresponding completion handlers that process the async operation results, e.g., to transmit a file back to a client after an async connection operation completes
(Diagram: on Windows NT, the Web Server <<invokes>> Asynchronous Operations (AcceptEx(), ReadFile(), WriteFile()) executed by the operating system; completions are <<enqueued>> on an I/O Completion Port, <<dequeued>> via GetQueuedCompletionStatus(), & dispatched by the Proactor (handle_events()) to the HTTP Acceptor & HTTP Handler completion handlers.)

  31. Proactive Connection Management & Data Transfer in JAWS
