

  1. Model-driven Performance Analysis Methodology for Scalable Performance Analysis of Distributed Systems
  Aniruddha Gokhale (a.gokhale@vanderbilt.edu), Asst. Professor of EECS, Vanderbilt University, Nashville, TN
  Jeff Gray (gray@cis.uab.edu), Asst. Professor of CIS, University of Alabama at Birmingham, Birmingham, AL
  Swapna Gokhale (ssg@engr.uconn.edu), Asst. Professor of CSE, University of Connecticut, Storrs, CT
  Presented at the NSF NGS/CSR PI Meeting, Rhodes, Greece, April 25-26, 2006
  CSR CNS-0406376, CNS-0509271, CNS-0509296, CNS-0509342

  2. Distributed Performance-Sensitive Software Systems • Military/civilian distributed performance-sensitive software systems • Network-centric & large-scale “systems of systems” • Stringent simultaneous QoS demands • e.g., dependability, security, scalability, throughput • Dynamic context

  3. Trends in DPSS Development • Historically developed using low-level APIs • Increasing use of middleware technologies • Standards-based COTS middleware helps to: • Control end-to-end resources & QoS • Leverage hardware & software technology advances • Evolve to new environments & requirements • Middleware helps capture & codify commonalities across applications in different domains by providing reusable & configurable patterns-based building blocks Examples: CORBA, .Net, J2EE, ICE, MQSeries Developers must decide at design-time which blocks to use to obtain desired functionality and performance

  4. The “What If” Performance Analysis Process • Model building (basic characteristics) • Model validation (simulation/measurements) • Model replication • Code generation • Generalization of the model • Model decomposition

  5. Guiding Example: The Reactor Pattern [Class diagram: Reactor (handle_events(), register_handler(), remove_handler()) owns Handles, uses the Synchronous Event Demuxer (select()) to wait on the handle set, and dispatches to abstract Event Handlers (handle_event(), get_handle()) realized by Concrete Event Handlers A & B] The Reactor architectural pattern allows event-driven applications to demultiplex & dispatch service requests that are delivered to an application from one or more clients. • Many networked applications are developed as event-driven programs • Common sources of events in these applications include activity on an IPC stream for I/O operations, POSIX signals, Windows handle signaling, & timer expirations • The Reactor pattern decouples the detection, demultiplexing, & dispatching of events from the handling of events • Participants include the Reactor, Handle, Synchronous Event Demultiplexer, and abstract & concrete event handlers
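The participants above can be sketched as a minimal select()-based reactor in Python. This is an illustrative sketch of the pattern only (names such as EchoHandler are hypothetical), not the middleware implementations analyzed in the talk:

```python
# Minimal sketch of the Reactor pattern using select(); participant names
# (Reactor, EventHandler) mirror the pattern, EchoHandler is a made-up example.
import select
import socket

class EventHandler:
    """Abstract participant: knows its handle and how to service an event."""
    def get_handle(self):
        raise NotImplementedError
    def handle_event(self):
        raise NotImplementedError

class Reactor:
    def __init__(self):
        self._handlers = {}  # handle (file descriptor) -> EventHandler

    def register_handler(self, handler):
        self._handlers[handler.get_handle()] = handler

    def remove_handler(self, handler):
        self._handlers.pop(handler.get_handle(), None)

    def handle_events(self, timeout=None):
        # Synchronous event demultiplexing: one "snapshot" of enabled handles.
        ready, _, _ = select.select(list(self._handlers), [], [], timeout)
        for handle in ready:
            self._handlers[handle].handle_event()

class EchoHandler(EventHandler):
    """Concrete event handler that records whatever arrives on its socket."""
    def __init__(self, sock):
        self.sock = sock
        self.received = []
    def get_handle(self):
        return self.sock.fileno()
    def handle_event(self):
        self.received.append(self.sock.recv(1024))
```

A handler registered with the reactor is invoked once its handle becomes readable, e.g. after the peer end of a socketpair sends data.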

  6. Reactor Dynamics [Sequence diagram: Main Program, Concrete Event Handler, Reactor, Synchronous Event Demultiplexer; messages: register_handler(), get_handle(), handle_events(), select(), handle_event(), service()] • Registration Phase • Event handlers register themselves with the Reactor for an event type (e.g., input, output, timeout, exception event types) • Reactor returns a handle it maintains, which it uses to associate an event type with the registered handler • Snapshot Phase • Main program delegates thread of control to Reactor, which in turn takes a snapshot of the system to determine which events are enabled in that snapshot • For each enabled event, the corresponding event handler is invoked, which services the event • When all events in a snapshot are handled, the Reactor proceeds to the next snapshot

  7. Characteristics of Base Reactor • Single-threaded, select-based Reactor implementation • Reactor accepts two types of input events, with one event handler registered for each event type with the Reactor • Each event type has a separate queue to hold the incoming events. Buffer capacity for events of type one is N1 and of type two is N2. • Event arrivals are Poisson for type one and type two events with rates λ1 and λ2. • Event service times are exponential for type one and type two events with rates μ1 and μ2. • In a snapshot, events are serviced non-deterministically (in no particular order). -- Base model of the prioritized reactor presented in NGS 2005. -- Non-deterministic handling makes the analysis interesting/complicated.

  8. Performance Metrics • Throughput: • -Number of events that can be processed • -Applications such as telecommunications call processing. • Queue length: • -Queuing for the event handler queues. • -Appropriate scheduling policies for applications with real-time requirements. • Total number of events: • -Total number of events in the system. • -Scheduling decisions. • -Resource provisioning required to sustain system demands. • Probability of event loss: • -Events discarded due to lack of buffer space. • -Safety-critical systems. • -Levels of resource provisioning. • Response time: • -Time taken to service the incoming event. • - Important from user’s perspective, real-time systems.
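Several of these metrics are linked by Little's law (L = X·T), which ties the total number of events in the system to throughput and response time. A tiny illustration (the numeric values below are made up for the example):

```python
# Little's law: mean number in system L = throughput X * mean response time T.
def littles_law_response_time(avg_events_in_system, throughput):
    """T = L / X: mean response time from mean population and throughput."""
    return avg_events_in_system / throughput

# Hypothetical numbers: 0.25 events in the system on average at 0.4 events/sec
# throughput gives a mean response time of 0.625 seconds.
```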

  9. SRN Model of the Base Reactor [SRN figure: part (a) transitions A1/A2 and Sr1/Sr2, places B1/B2 and S1/S2; part (b) place StSnpSht, places SnpInProg1/SnpInProg2, transitions T_StSnp1/2, T_EnSnp1/2, T_ProcSnp1/2] • Stochastic Reward Nets (SRNs) – extension of PNs/GSPNs. • Part A: Models arrivals, queuing, and service of events. • Transitions A1 and A2: Event arrivals. • Places B1 and B2: Buffers/queues. • Places S1 and S2: Service of the events. • Transitions Sr1 and Sr2: Service completions. • Inhibitor arcs: place B1 and transition A1 with multiplicity N1 (similarly B2, A2, N2) - Prevents firing of transition A1 when there are N1 tokens in place B1.

  10. SRN Model of the Base Reactor [SRN figure as on slide 9] • Part B: • Process of taking successive snapshots • Non-deterministic service of events. • T_StSnp(i) enabled: token in StSnpSht && tokens in B(i) && no token in S(i). • T_EnSnp(i) enabled: no token in S(i). • T_ProcSnp(i) enabled: token in place S(i) and no token in the other S(i)s.

  11. SRN Model of the Base Reactor [SRN figure as on slide 9] • Reward rates: • Loss probability (Pr(#B(i) == N(i))) • Total number of events (#B(i) + #S(i)) • Throughput (rate(“Sr(i)”)) • Queue length (#B(i)) • Optimistic and pessimistic bounds on the response times using the tagged-customer approach.

  12. SRN Model of the Base Reactor • Validate the performance measures • Simulation implemented in CSIM • Arrival rates: λ1 = λ2 = 0.4/sec • Service rates: μ1 = μ2 = 2.0/sec • The average response time estimate obtained from simulation lies between the pessimistic and optimistic response time estimates obtained from the SRN.
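The CSIM model itself is not reproduced in the transcript. The sketch below is a simplified, hypothetical stand-in that simulates one event type as an M/M/1 queue via the Lindley recursion (the snapshot coupling between event types is deliberately ignored), just to illustrate how such a simulated response-time estimate is obtained:

```python
# Simplified stand-in for the CSIM simulation: one event type as an M/M/1
# queue. Snapshot semantics of the reactor are NOT modeled here.
import random

def simulate_response_time(lam=0.4, mu=2.0, n=200_000, seed=1):
    """Mean sojourn (response) time via the Lindley recursion
    W_{n+1} = max(0, W_n + S_n - A_{n+1})."""
    rng = random.Random(seed)
    wait = 0.0   # waiting time of the current event
    total = 0.0  # accumulated sojourn times
    for _ in range(n):
        service = rng.expovariate(mu)          # exponential service time
        total += wait + service                # sojourn = waiting + service
        interarrival = rng.expovariate(lam)    # Poisson arrivals
        wait = max(0.0, wait + service - interarrival)
    return total / n

# With lam = 0.4/sec and mu = 2.0/sec, M/M/1 theory gives
# E[T] = 1/(mu - lam) = 0.625 sec; the simulated mean should be close.
```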

  13. SRN Model of the Base Reactor [Plots: response time of event #1, response time of event #2] Sensitivity analysis • Vary the arrival rate of type #1 events; remaining parameters same as before. • Response time of type #1, #2 events: approaches the pessimistic response time as the arrival rate becomes higher; actual response time of type #1 events is higher than that of type #2 events. • Loss probabilities of type #1, #2 events: increase with arrival rate.

  14. SRN Model of the Generalized Reactor [SRN figure: generalized model with transitions A1…Am and Sr1…Srm, places B1…Bm (capacities N1…Nm) and S1…Sm, plus snapshot place StSnpSht, places SnpInProg1…SnpInProgm, and transitions TStSnp, TEnSnp, TProcSnp] m event types handled by the reactor.

  15. SRN Model of the Generalized Reactor • CTMC for m = 2, buffer space = 1. Number of states given by: • CTMC for m events, buffer space = 1. Number of states given by: • m = 2: 17 states; m = 3: 97 states; m = 4: 517 states • Dramatic superlinear growth for larger buffer sizes → state-space explosion

  16. Model Decomposition: Tagged Customer Approach [Figure: a tagged event of type #i arrives to find bi events in queue #i, among queues #1 … #m] • Tagged event of type #i finds bi events in the queue of type #i. • Other queues may have more or fewer events. • The current snapshot may be in progress. • The incoming event will be served after bi+1 snapshots. • In each of the bi+1 snapshots, other events may or may not be handled. • Pessimistic case: each of the bi+1 snapshots handles every event type. • Service time of each of the bi+1 snapshots:

  17. Model Decomposition • In the (bi+2)nd snapshot, the tagged event is serviced. • Non-deterministic order of servicing enabled event handles: -- Optimistic case: tagged type #i event is the first to be serviced. -- Pessimistic case: tagged type #i event is the last to be serviced; the service time of the (bi+2)nd snapshot is the same. • Arrival, service, and queuing of event type #i can be approximately represented by an M/G/1/K queuing model. • Impact of other event types on a given event type is obtained by inflating the service time of the event. • With very large buffer space, loss probability is negligible, giving an M/G/1 model. • Provides pessimistic (worst-case) bounds on the performance estimates. • Actual performance estimates (obtained from the SRN) are expected to be lower/better than the ones provided by model decomposition. Kleinrock: “We are willing to accept the raw facts of life, which state that our models are not perfect pictures of the systems we wish to analyze so we should be willing to accept approximations and bounds in our problem solution….” (vol. 2, page 319)

  18. Model Decomposition Event type #i, M/G/1 queue • Expected service time: • Expected number of events: • Variance of the service time: • Response time (Little’s law): • Traffic intensity: • Throughput: • Coefficient of variation: Closed-form solutions for worst-case bounds on the performance estimates
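The closed-form expressions themselves do not survive in this transcript. A sketch using the standard M/G/1 Pollaczek-Khinchine results, which closed forms of this kind are typically built from (the paper's exact equations may differ), looks like:

```python
# Standard M/G/1 results (Pollaczek-Khinchine); a sketch of the kinds of
# closed forms the decomposition yields, not the paper's exact equations.
def mg1_metrics(lam, es, es2):
    """lam: arrival rate; es: E[S]; es2: E[S^2] of the (inflated) service time."""
    rho = lam * es                           # traffic intensity
    wq = lam * es2 / (2.0 * (1.0 - rho))     # mean waiting time (P-K formula)
    t = es + wq                              # mean response time
    l = lam * t                              # mean number in system (Little's law)
    cv2 = (es2 - es * es) / (es * es)        # squared coefficient of variation
    return {"rho": rho, "response_time": t, "num_in_system": l, "cv2": cv2}

# Sanity check: exponential service with mu = 2.0 (E[S] = 0.5, E[S^2] = 0.5)
# and lam = 0.4 reduces to M/M/1, where mean response time is 1/(mu-lam) = 0.625 s.
```

In the decomposition, the pessimistic inflated service time for type #i would be the sum of the per-type service times across all event types (each snapshot serving every type), fed into `es`/`es2` above.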

  19. Model Decomposition • Verification of performance estimates obtained from model decomposition • Parameters: λi = 0.4/sec, μi = 2.0/sec, Ni = 15 • Performance estimates obtained from the SRN are lower than those from model decomposition. • Exact solution for m = 4 took over 12 hours.

  20. Addressing Middleware Variability Challenges Manual design-time performance modeling and analysis is complex and tedious, since middleware building blocks and their compositions incur variabilities that impact performance • Per-Block Configuration Variability • Incurred due to variations in implementations & configurations of a patterns-based building block • E.g., a single-threaded versus thread-pool-based reactor implementation dimension that crosscuts the event demultiplexing strategy (e.g., select, poll, WaitForMultipleObjects) • Compositional Variability • Incurred due to variations in the compositions of these building blocks • Need to address compatibility in the compositions and individual configurations • Dictated by needs of the domain • E.g., Leader/Followers makes no sense in a single-threaded Reactor

  21. Automation Goals for “What if” Analysis Automating design-time performance analysis techniques to estimate the impact of variability in middleware-based DPSS systems • Build and validate performance models for invariant parts of middleware building blocks • Weave variability concerns manifested in a building block into the performance models • Compose and validate performance models of building blocks mirroring the anticipated software design of DRE systems • Estimate end-to-end performance of the composed system • Iterate until the design meets performance requirements [Figure: invariant model of a pattern + woven variability → refined models of patterns, which are combined with workloads into the composed system]

  22. Technology Enabler: Generic Modeling Environment (GME) “Write Code That Writes Code That Writes Code!” [GME architecture figure: GME Editor with Constraint Manager, Browser, Translators, and Add-Ons (COM interfaces); GModel/GMeta core over UML/OCL metamodels; paradigm definitions in XML; storage options via XML and ODBC databases; application developers act as modelers, MDE tool developers as metamodelers] Goal: correct-by-construction DPSS systems www.isis.vanderbilt.edu/Projects/gme/default.htm

  23. Leveraging Our Existing Solution: CoSMIC CoSMIC can be downloaded at www.dre.vanderbilt.edu/cosmic • CoSMIC tools e.g., PICML used to model application components • Captures the data model of the OMG D&C specification • Synthesis of static deployment plans for DPSS applications • New capabilities being added for QoS provisioning (real-time, fault tolerance)

  24. POSAML: Modeling DPSS Composition Process • POSAML – GME-based modeling language for middleware composition • Provides a structural composition model • Captures variability in blocks • Generative programming capabilities to synthesize different artifacts, e.g., benchmarking, configuration, performance modeling.

  25. SRNML: Modeling the “What if” Process • SRNML – GME-based modeling language for modeling performance models of building blocks • Provides behavioral models for building blocks • Developed Reactor and Proactor models to date • Need to apply it to other patterns • Demonstrate model compositions and model solving • Need to address scalability challenges for models

  26. Leveraging Our Solution: C-SAW Model Transformation & Replication Engine [Figure: ECL transformation specifications, handled by the ECL parser and ECL interpreter, apply strategies such as CopyAtom to a source model to produce a target model; both models conform to a metamodel accessed through the GME modeling APIs] • Implemented as a GME plug-in to assist in the rapid adaptation and evolution of models by weaving crosscutting changes into models.

  27. Scaling a Base SRN Model

strategy computeTEnSnpGuard(min_old, min_new, max_new : integer; TEnSnpGuardStr : string)
{
  if (min_old < max_new) then
    computeTEnSnpGuard(min_old + 1, min_new, max_new,
      TEnSnpGuardStr + "(#S" + intToString(min_old) + " == 0) && ");
  else
    addEventswithGuard(min_new, max_new,
      TEnSnpGuardStr + "(#S" + intToString(min_old) + " == 0))?1:0");
  endif;
}

// several strategies not shown here (e.g., addEventswithGuard)

strategy addEvents(min_new, max_new : integer; TEnSnpGuardStr : string)
{
  if (min_new <= max_new) then
    addNewEvent(min_new, TEnSnpGuardStr);
    addEvents(min_new + 1, max_new, TEnSnpGuardStr);
  endif;
}

strategy addNewEvent(event_num : integer; TEnSnpGuardStr : string)
{
  declare start, stTran, inProg, endTran : atom;
  declare TStSnp_guard : string;

  start := findAtom("StSnpSht");
  stTran := addAtom("ImmTransition", "TStSnp" + intToString(event_num));
  TStSnp_guard := "(#S" + intToString(event_num) + " == 1)?1 : 0";
  stTran.setAttribute("Guard", TStSnp_guard);
  inProg := addAtom("Place", "SnpInProg" + intToString(event_num));
  endTran := addAtom("ImmTransition", "TEnSnp" + intToString(event_num));
  endTran.setAttribute("Guard", TEnSnpGuardStr);
  addConnection("InpImmedArc", start, stTran);
  addConnection("OutImmedArc", stTran, inProg);
  addConnection("InpImmedArc", inProg, endTran);
  addConnection("OutImmedArc", endTran, start);
}

strategy connectNewEvents(min_new, max_new : integer)
{
  if (min_new < max_new) then
    connectOneNewEventToOtherNewEvents(min_new, max_new);
    connectNewEvents(min_new + 1, max_new);
  endif;
}

strategy connectOneNewEventToOtherNewEvents(event_num, max_new : integer)
{
  if (event_num < max_new) then
    connectTwoEvents(event_num, max_new);
    connectNewEvents(event_num, max_new - 1);
  endif;
}

strategy connectTwoEvents(first_num, second_num : integer)
{
  declare firstinProg, secondinProg : atom;
  declare secondTProc1, secondTProc2 : atom;
  declare first_numStr, second_numStr, TProcSnp_guard1, TProcSnp_guard2 : string;

  first_numStr := intToString(first_num);
  second_numStr := intToString(second_num);
  TProcSnp_guard1 := "((#S" + first_numStr + " == 0) && (#S" + second_numStr + " == 1))?1 : 0";
  TProcSnp_guard2 := "((#S" + second_numStr + " == 0) && (#S" + first_numStr + " == 1))?1 : 0";
  firstinProg := findAtom("SnpInProg" + first_numStr);
  secondinProg := findAtom("SnpInProg" + second_numStr);
  secondTProc1 := addAtom("ImmTransition", "TProcSnp" + first_numStr + "," + second_numStr);
  secondTProc1.setAttribute("Guard", TProcSnp_guard1);
  secondTProc2 := addAtom("ImmTransition", "TProcSnp" + second_numStr + "," + first_numStr);
  secondTProc2.setAttribute("Guard", TProcSnp_guard2);
  addConnection("InpImmedArc", firstinProg, secondTProc1);
  addConnection("OutImmedArc", secondTProc1, secondinProg);
  addConnection("InpImmedArc", secondinProg, secondTProc2);
  addConnection("OutImmedArc", secondTProc2, firstinProg);
}

  28. Project Status and Work in Progress • Ongoing integration of SRNML (behavioral) and POSAML (structural) • Incorporate SRNML & POSAML in CoSMIC and release the software as open source • Integrate with the C-SAW scalability engine • Performance analysis of different building blocks (patterns): • Non-deterministic reactor (all steps) • Prioritized reactor, Active Object, Proactor (Steps #1, #2: basic model, model validation) • Compose DPSS systems and perform performance analysis (analytical and simulation) of composed systems • Validate via automated empirical benchmarking • Demonstrate on real systems

  29. Selected Publications • U. Praphamontripong, S. Gokhale, A. Gokhale, and J. Gray, “Performance Analysis of an Asynchronous Web Server,” Proc. of 30th COMPSAC, to appear. • J. Gray, Y. Lin, and J. Zhang, "Automating Change Evolution in Model-Driven Engineering," IEEE Computer (Special Issue on Model-Driven Engineering), vol. 39, no. 2, February 2006, pp. 51-58. • Invited (under review): J. Gray, Y. Lin, J. Zhang, S. Nordstrom, A. Gokhale, S. Neema, and S. Gokhale, "Replicators: Transformations to Address Model Scalability," voted one of the Best Papers of MoDELS 2005 and invited for an extended submission to the Journal of Software and Systems Modeling. • S. Gokhale, A. Gokhale, and J. Gray, "Response Time Analysis of an Event Demultiplexing Pattern in Middleware for Network Services," IEEE GlobeCom, St. Louis, MO, December 2005. • Best Paper Award: J. Gray, Y. Lin, J. Zhang, S. Nordstrom, A. Gokhale, S. Neema, and S. Gokhale, "Replicators: Transformations to Address Model Scalability," Model Driven Engineering Languages and Systems (MoDELS) (formerly the UML series of conferences), Springer-Verlag LNCS 3713, Montego Bay, Jamaica, October 2005, pp. 295-308. Voted one of the Best Papers of MoDELS 2005 and invited for an extended submission to the Journal of Software and Systems Modeling. • P. Vandal, U. Praphamontripong, S. Gokhale, A. Gokhale, and J. Gray, "Performance Analysis of the Reactor Pattern in Network Services," 5th International Workshop on Performance Modeling, Evaluation, and Optimization of Parallel and Distributed Systems (PMEO-PDS), held at IPDPS, Rhodes Island, Greece, April 2006. • A. Kogekar, D. Kaul, A. Gokhale, P. Vandal, U. Praphamontripong, S. Gokhale, J. Zhang, Y. Lin, and J. Gray, "Model-driven Generative Techniques for Scalable Performability Analysis of Distributed Systems," Next Generation Software Workshop, held at IPDPS, Rhodes Island, Greece, April 2006. • S. Gokhale, A. Gokhale, and J. Gray, "A Model-Driven Performance Analysis Framework for Distributed Performance-Sensitive Software Systems," Next Generation Software Workshop, held at IPDPS, Denver, CO, April 2005.

  30. Concluding Remarks • DPSS development incurs significant challenges • Model-driven design-time performance analysis is a promising approach • Performance models of basic building blocks built • Scalability demonstrated • Generative programming is key for automation [Process figure: model building (basic characteristics) → model validation (simulation/measurements) → generalization of the model → model decomposition → automatic generation → model replication] Many hard R&D problems with model-driven engineering remain unresolved! • GME is available from www.isis.vanderbilt.edu/Projects/gme/default.htm • CoSMIC is available from www.dre.vanderbilt.edu/cosmic • C-SAW is available from www.gray-area.org/Research/C-SAW
