
Performance Analysis of Software Architectures


Presentation Transcript


  1. Performance Analysis of Software Architectures UNIVERSITÀ DEGLI STUDI DELL’AQUILA Area Informatica, Facoltà di SS.MM.NN. Paola Inverardi http://saladin.dm.univaq.it

  2. Joint work with: • Simonetta Balsamo, Università di Venezia • Group of students over the years: Mangano, Russo, Aquilani, Andolfi

  3. Goal: quantitative analysis of SA descriptions. Introduce the ability to measure architectural choices. Why? And how?

  4. Why? • To validate SA design choices with respect to performance indices • To compare alternative SA designs • To produce feedback at the design level

  5. How? • Introduce quantitative models early in the life cycle • Evaluate performance indices • Add non-functional requirements to maintain the expected performance

  6. Outline of the Talk • Software Architectures • Performance Evaluation • Approaches • Our recipe • Conclusions • References • Advertising

  7. Software Architectures • High level system description in terms of subsystems (components) and the way they interact (connectors) • Static description: Topology • Dynamic description: Behavior

  8. Topology (figure: gzip filter, pseudo-filter, adapter filter)

  9. Behavior (figures: Finite State Automata, MSC)

  10. Static and Dynamic Views (figure: (a) FSA, (b) Topology, (c) MSC)

  11. Quality Attributes and SA • Qualities discernible by observing the system execution: performance, security, availability, functionality, usability • Qualities not discernible at run time: modifiability, portability, reusability, integrability, testability

  12. Quality Attributes at run time • Performance: refers to the responsiveness of the system. It is often a function of how much communication and interaction there is between the components of the system, so it is clearly an architectural issue (communication usually takes longer than computation)

  13. How to measure Performance • Arrival rates and distributions of service requests, processing times, queue sizes, and latency (the time taken to service a request) • Simulate by building a stochastic queueing model of the system based upon anticipated workload scenarios (a sketch follows)
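A minimal sketch of what such a simulation could look like, assuming the simplest possible case of a single service center with Poisson arrivals and exponential service (an M/M/1 queue); the rates and function names are illustrative, not from the talk:

```python
import random

# Minimal sketch: simulate a single FIFO service center (M/M/1) to
# estimate the mean response time under an anticipated workload.
def simulate_mm1(lam, mu, n_requests=100_000, seed=1):
    rng = random.Random(seed)
    clock = 0.0            # arrival clock
    server_free_at = 0.0   # time at which the server next becomes idle
    total_response = 0.0
    for _ in range(n_requests):
        clock += rng.expovariate(lam)       # next arrival time
        start = max(clock, server_free_at)  # wait if the server is busy
        server_free_at = start + rng.expovariate(mu)
        total_response += server_free_at - clock
    return total_response / n_requests

print(simulate_mm1(lam=3.0, mu=5.0))  # close to the analytic 1/(mu - lam) = 0.5
```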

  14. Software Architectures Quality Attributes • Static: can be measured statically (portability, scalability, reusability, …) • Dynamic: can be measured by observing the SA behavior (performance, availability, …)

  15. Software Architectures and Performance Quoting from WOSP 2000 panel introduction on Performance of SA: “the quantitative analysis of a SA allows for the early detection of potential performance problems … Early detection of potential performance problems allows alternative software designs and … … meaning designing a software system and analyzing its performance before the system is implemented …”

  16. Software Architecture • Level of abstraction • Dynamic model • Lack of information • How do we measure? • How do we interpret the measures?

  17. Performance Evaluation • Quantitative analysis of systems, based on models and methods, both deterministic and stochastic • Evaluating the performance of a system means making a quantitative analysis to derive a set of (performance) indices, obtained either as mean or probabilistic figures • Probabilistic distribution/mean of response times, waiting times, queue lengths, delay, resource utilization, throughput, …

  18. PE Models and Techniques • Models are primarily stochastic and can be solved by either analytic or simulation techniques • Analytic techniques can be exact (e.g. numerical), approximate, or bounding • Simulation techniques are more general but expensive

  19. Queueing Network Models • Service centers • service time • buffer space with scheduling policy • number of servers • Customers • Number for closed models, arrival process for open models • Network Topology • models how service centers are interconnected and how customers move among them
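For the simplest such service center, the performance indices can also be computed in closed form. A minimal sketch, assuming an M/M/1 center (Poisson arrivals, one exponential server, infinite buffer); the formulas are the standard ones, the rates are illustrative:

```python
# Closed-form indices of an M/M/1 service center.
# lam = arrival rate, mu = service rate; stability requires lam < mu.
def mm1_indices(lam: float, mu: float) -> dict:
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu         # server utilization
    n = rho / (1 - rho)    # mean number of customers in the center
    r = 1 / (mu - lam)     # mean response time (Little's law: n = lam * r)
    return {"utilization": rho, "mean_customers": n,
            "response_time": r, "waiting_time": r - 1 / mu}

print(mm1_indices(lam=3.0, mu=5.0))
# utilization 0.6, mean customers ~1.5, response time 0.5 s, waiting time 0.3 s
```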

  20. Queueing networks with finite capacity queues • Queueing network models to represent: sharing of resources with finite capacity queues, population constraints, synchronization constraints • n = number of customers in the service center, B = finite capacity of the queue • Blocking ⇒ dependence among nodes, possible deadlock • Solution methods: exact vs. approximate vs. simulation • Various blocking types: different behaviors of customer arrivals at a full node and of servers' activity

  21. Analytical solutions for Q.N. with finite capacity queues • Network model parameters: M number of nodes, N number of customers, µi service rate of node i, service time distribution (M, G, PHn, GE), P = ||pij|| routing matrix, Bi finite capacity of node i • Queue-length probability distribution? • Continuous-time homogeneous Markov chain: S = (S1, S2, ..., SM) network state, state space E, transition rate matrix Q • Steady-state probabilities π(S) • Other average performance indices can be derived from π and depend on the blocking type • Exact solution soon becomes numerically intractable • Product-form solution in special cases ⇒ approximate analysis
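The steady-state probabilities solve the global balance equations πQ = 0 together with the normalization Σ π(S) = 1. A minimal numerical sketch with numpy; the 3-state generator matrix is a toy example of mine, not a model from the talk:

```python
import numpy as np

# Toy 3-state generator matrix Q (rows sum to zero), not from the talk.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  3.0, -3.0]])

n = Q.shape[0]
# Solve pi @ Q = 0 with sum(pi) = 1: replace one balance equation
# by the normalization constraint.
A = np.vstack([Q.T[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)  # steady-state probabilities; average indices follow from pi
```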

  22. Queueing Network Models • "QN modelling is a top-down process. The underlying philosophy is to begin by identifying the principal components of the system and the ways they interact, then supply any details that prove to be necessary." (ref. Lazowska et al., Quantitative System Performance, Prentice Hall, http://www.cs.washington.edu/homes/lazowska/qsp/)

  23. QNM creation • Definition: definition of service centers, their number, classes of customers, and topology • Parameterization: define the alternatives of the study, e.g. by selecting arrival processes and service rates • Evaluation: obtain a quantitative description of system behavior; computation of performance indices like resource utilization, system throughput, and customer response time

  24. Approaches • Software Performance: the whole system life cycle is available; the design is used to incrementally produce a QNM model of the software system • Software Specification: the system behavioral specification is available and modeled by Stochastic Petri Nets or Stochastic Process Algebras

  25. Software Performance • Performance analysis integrated in the software life cycle • Assumes a number of software artifacts, from requirements specifications (Use Cases) to deployment diagrams • QNM models: topology obtained from the information on the physical architecture; information on software components used to define the model workload • References under SP; a (UML-based) survey in BS01

  26. Software Specification • Identify a precise software stage: system design specification • Formal behavioral specification: Stochastic Petri Nets, Stochastically Timed Process Algebras • Behavioral and performance analysis in a single model • References under SS

  27. Our Approach • Not SP: we want to evaluate the performance of the SA description; we do not assume to have an implementation • Not SS: a single model is appealing, but feedback is too difficult; the performance model is too far from the component/connector description

  28. (figure: analysis pipeline) SA Description (CHAM, FSP, UML, WRIGHT, ...) → Behavioral Model (dynamic descriptions, FSM, MSCs, ...) → algorithm → Performance Model (favorite model: QNM, SPN, SPA, ...) → Performance Evaluation (solution method: symbolic, approximation, simulation, ...) → Results and interpretation, with feedback to the SA description

  29. Brief history of our work in SA and PE 1/2 • Formal description of SA via CHAM • Behavioral analysis of the SA • Algebraic analysis and finite state modeling • Validation and quantitative analysis based on FSTM • Global system behavior • Queueing Network Model • Feedback at the design level • Capacity planning and case studies - S. Balsamo, P. Inverardi, C. Mangano, "An Approach to Performance Evaluation of Software Architectures", in IEEE Proc. WOSP '98. - S. Balsamo, P. Inverardi, C. Mangano, L. Russo, "Performance Evaluation of Software Architectures", in IEEE Proc. IWSSD-98.

  30. Brief history of our work in SA and PE 2/2 • Specification of SA via Message Sequence Charts - UML • Event ordering, event sequences, traces of events • Communication types, concurrency, and non-determinism • Trace analysis and model structure identification • Quantitative analysis based on an extended QN model • Scenarios for model parameterization • Feedback at the design level - F. Andolfi, F. Aquilani, S. Balsamo, P. Inverardi, "Deriving Performance Models of Software Architectures from Message Sequence Charts", in Proc. IEEE WOSP 2000. - F. Andolfi, F. Aquilani, S. Balsamo, P. Inverardi, "On using Queueing Network Models with finite capacity queues for Software Architectures performance prediction", in Proc. QNET 2000.

  31. Framework of performance analysis of SA at the design level • Description of SA via LTS - independent of the ADL • Algorithm to derive the performance model structure • Add info on the communication types, state annotation • Identify scenarios for model parameterization • Performance model based on extended Queueing Network models • Analytical solution (symbolic) for simple models, approximation or simulation for complex models • Result interpretation at the software design level - F. Aquilani, S. Balsamo, P. Inverardi, "Performance Analysis at the software architecture design level", TR-SAL-32, Technical Report, Saladin Project, 2000, to appear in Performance Evaluation.

  32. Usual Example: The Multiphase Compiler (figure: concurrent architecture and optimized architecture; phases LEXER → PARSER → SEMANTOR → OPTIMIZER → CODE_GEN, exchanging text, characters, tokens, phrases, corrected phrases, and object_code) • Synchronous communication ⇒ Queueing Network Model with BAS blocking, finite buffers B = 1 between the service centers • Acyclic topology • Solution: approximate analysis

  33. Sequential Architecture • Software Architecture: the same components (lexer, parser, semantor, optimizer, code_gen over the input text) strongly sequentialized, no concurrency • QNM: one single service center sc1

  34. Parameterization and Evaluation • Specify parameters (e.g. arrival rate and mean service time of each center); we keep them symbolic (a sketch follows) • Meaning of the parameters (e.g. service time = execution time of a component, arrival rate = activation of concurrent instances of component execution) • Parameter instantiations identify potential implementation scenarios • In the compiler example, 3 scenarios playing with the mean service times of the concurrent model
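A tiny illustration of keeping parameters symbolic, using sympy (my tooling choice, not the authors'); the M/M/1 response-time formula stands in for the model's actual indices:

```python
import sympy as sp

lam, mu = sp.symbols("lambda mu", positive=True)
R = 1 / (mu - lam)              # e.g. M/M/1 mean response time, kept symbolic

print(R)                        # index as a function of the parameters
print(R.subs({lam: 3, mu: 5}))  # scenario instantiation -> 1/2
```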

  35. How we provide Feedback • Throughput of the two compiler SAs: the concurrent SA performs 5 times better than the sequential SA, in the scenario where the mean service times of the nodes have the same order of magnitude • Enrich performance requirements in the subsequent development steps • A global performance requirement can be broken into requirements on single components
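A back-of-envelope reading of the 5× figure, under the assumption that the five compiler phases have roughly equal mean service time (the "same order of magnitude" scenario): the sequential SA serves each request for the sum of the five phase times, while the concurrent pipeline is limited only by its slowest stage. The numbers are illustrative:

```python
t = 0.1  # illustrative mean service time of each phase, in seconds

# Sequential SA: a single service center performing all five phases.
throughput_seq = 1 / (5 * t)

# Concurrent SA: a pipeline whose throughput is bounded by the slowest stage.
throughput_conc = 1 / t

print(throughput_conc / throughput_seq)  # -> 5.0
```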

  36. (figure: analysis pipeline) SA Dynamic description (Labeled Transition System, Message Sequence Charts; state annotation, communication type) → algorithm → Performance Model (QNM; scenarios for parameterization) → Performance Evaluation → Results and interpretation, with feedback: choice of SA + new requirements on components and connectors

  37. Performance Analysis at the SA design level 1/2 • SA specification: Labeled Transition System ⟨S, L, →, s, P⟩, with S set of states, L set of labels (communication types), → transition relation in (P × L × P), s initial state, P set of state labels • SA components: communicating concurrent subsystems • SA level: consider interaction activities among components • Parallel composition of communicating components • P: states of SA components and connectors described by the LTS

  38. Performance Analysis at the SA design level 2/2 • First model the maximum level of concurrency (each component as an autonomous server) • Then (algorithm) derive a simpler structure of the QNM by analyzing the true level of concurrency and the communication types

  39. Algorithm (a sketch follows) • LTS visit to derive interaction sets formed by interaction pairs (IP): (p1, p2) = flow of data from p1 to p2 • Model connecting elements with buffers • Mark non-deterministic IPs • Examine the sets of IPs to generate the service centers and topology of the QNM
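A minimal sketch of such a visit; the LTS encoding, the label format, and the marking rule for non-determinism (alternative transitions leaving the same state with the same sender) are my assumptions, not the authors' implementation:

```python
from collections import defaultdict

# Each transition is (state, label, next_state); a label ("send", p1, p2)
# denotes a flow of data from component p1 to component p2.
def interaction_pairs(transitions):
    pairs = set()
    alternatives = defaultdict(set)  # (state, sender) -> pairs leaving it
    for state, (_, p1, p2), _next in transitions:
        pairs.add((p1, p2))
        alternatives[(state, p1)].add((p1, p2))
    nondet = {p for alts in alternatives.values() if len(alts) > 1
              for p in alts}
    return pairs, nondet

lts = [("s0", ("send", "P1", "P2"), "s1"),
       ("s0", ("send", "P1", "P3"), "s2"),  # alternative choice from s0
       ("s1", ("send", "P2", "P4"), "s3")]
pairs, nondet = interaction_pairs(lts)
print(pairs)   # {('P1', 'P2'), ('P1', 'P3'), ('P2', 'P4')}
print(nondet)  # {('P1', 'P2'), ('P1', 'P3')} -> marked non-deterministic
```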

  40. SA description: MSCs - From MSCs to QNMs • UML as ADL • A model of all possible system behaviours • State diagrams for "manageable" processes • Implicit parallel notation for composite processes P1 || P2 || … || Pn • No explicit representation due to state explosion • Sequence diagrams/MSCs to describe component interactions • MSCs with state information and iteration blocks; components are the object elements • QNM with blocking, BAS mechanism

  41. MSCs requirements • It is always possible to synthesize a FSM out of a set of MSCs • All MSCs refer to the same initial system configuration • Representative of the major system behaviors • Each system component appears in (at least) one MSC • MSCs contain info about the state of components • Other technical conditions

  42. Extracting from MSCs info about • communication among components, i.e. which components interact • communication types, i.e. synchronous/asynchronous • concurrency, i.e. which components can proceed concurrently • non-determinism, i.e. which components proceed non-deterministically

  43. How do we do that? • MSC encoding ⇒ from a MSC we derive the trace (a set of regular languages) • We analyze the traces to identify the kinds of communication (1-to-2, 2-to-1, concurrent, non-deterministic) and build Interaction Pairs to record this information • We use the IPs to build the QNM topology

  44. Interaction Pairs and QNM (a sketch of these rules follows) • I = (P1,P2)s ⇒ a service center representing a unique service, P1 followed by P2, expressing sequentiality (P1 and P2 are not concurrent) • I = (P1,P2)a ⇒ a service center with an infinite buffer implicitly modelling the communication channel, plus the transition P1 → P2 in the QNM • {(P1,P2)s, (P1,P3)s}ND ⇒ a multi-customer service center • Synchronous communication among concurrent components ⇒ distinct service centers, with a zero-capacity buffer and BAS blocking policy at the sender modelling the synchronization with the receiver
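A sketch of these mapping rules in code; the data representation is an assumption of mine, and the ND rule (merging a set of sequential pairs into one multi-customer center) is left as a comment:

```python
# Map one interaction pair to a QNM element. kind is "s" (synchronous)
# or "a" (asynchronous); sets of "s" pairs marked ND would instead be
# merged into a single multi-customer service center (not shown).
def map_pair(pair, kind, concurrent):
    p1, p2 = pair
    if kind == "a":
        # Infinite buffer models the channel, plus the arc P1 -> P2.
        return {"center": f"{p1}->{p2}", "buffer": "infinite", "arc": (p1, p2)}
    if not concurrent:
        # One center serving P1 followed by P2 (sequentiality).
        return {"center": f"{p1};{p2}", "buffer": None, "arc": None}
    # Synchronous and concurrent: distinct centers, zero-capacity
    # buffer with BAS blocking at the sender.
    return {"center": p2, "buffer": 0, "blocking": "BAS", "arc": (p1, p2)}

print(map_pair(("P1", "P2"), "s", concurrent=False))
print(map_pair(("P1", "P2"), "a", concurrent=False))
print(map_pair(("P1", "P2"), "s", concurrent=True))
```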

  45. Example • Compressing Proxy system • Purpose: improve the performance of Unix-based World Wide Web browsers over slow networks via an HTTP server that compresses and uncompresses data to and from the network • Software Architecture (figure: gzip filter, pseudo-filter, adapter filter) • Synchronous communication ⇒ Queueing Network Model with BAS blocking (BGZIP = 1, BAD = 1; service centers AD and GZIP) • Exact analysis of the underlying Markov chain

  46. MSC to trace: { S(CFu,AD) S(AD,Gzip) S(Gzip,AD) S(AD,CFd) }^N

  47. Trace Analysis • Trace 1: S(Px,Py)c1 … S(Pk,Pz)c2 S(Pi,Pj)c3 S(Ps,Pt)c4 … • Trace 2: S(Px,Py)c1 … S(Pk,Pz)c2 S(Ps,Pt)c4 S(Pi,Pj)c3 … • S(Pi,Pj)c3 and S(Ps,Pt)c4 occur in both relative orders ⇒ Pi and Ps are concurrent
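This check can be mechanized: if two communications occur in both relative orders across the set of traces, the components involved are taken to proceed concurrently. A minimal sketch, with traces encoded as event lists (an assumption of mine):

```python
from itertools import combinations

def concurrent_events(traces):
    """Return pairs of events seen in both relative orders across traces."""
    orders = set()
    for trace in traces:
        for a, b in combinations(trace, 2):  # a precedes b in this trace
            orders.add((a, b))
    return {frozenset(p) for p in orders if (p[1], p[0]) in orders}

t1 = ["S(Pk,Pz)c2", "S(Pi,Pj)c3", "S(Ps,Pt)c4"]
t2 = ["S(Pk,Pz)c2", "S(Ps,Pt)c4", "S(Pi,Pj)c3"]
print(concurrent_events([t1, t2]))
# {frozenset({'S(Pi,Pj)c3', 'S(Ps,Pt)c4'})} => Pi and Ps are concurrent
```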

  48. Conclusion • Derivation of the performance model from the dynamic view of the SA • Finite (incomplete) representation of the SA behavior, i.e. LTS (MSC) • Analysis of the LTS (MSC) to extract the pieces of information relevant to the PM • Performance evaluation at the SA level of abstraction • Feedback on the design process • Case studies • Integration of architectural design tools and performance tools

  49. My opinion • Still an active area of research, with very high industrial and research interest (see the key actions of the new IST European programme call) • PM models close to the SA description. Symbolic evaluation! • Feedback: make explicit the extra info that helps in refining the design steps • Experiment!

  50. ADVERTISING ROME 22-26 JULY 2002 ISSTA and WOSP Together!
