Performance Analysis of Software Architectures

UNIVERSITÀ DEGLI STUDI DELL’AQUILA

Area Informatica, Facoltà di SS.MM.NN.

Paola Inverardi

http://saladin.dm.univaq.it


Joint work with:

  • Simonetta Balsamo, Università di Venezia

  • Group of students over the years: Mangano, Russo, Aquilani, Andolfi


Goal

Quantitative analysis of SA descriptions.

Introduce the ability to measure architectural choices.

Why? And how?



Why?

To validate SA design choices with respect to performance indices

To compare alternative SA designs.

To produce feedback at the design level



How?

Introduce quantitative models early in the life cycle

Evaluate performance indices

Add non-functional requirements to maintain the expected performance


Outline of the Talk

  • Software Architectures

  • Performance Evaluation

  • Approaches

  • Our recipe

  • Conclusions

  • References

  • Advertising


Software Architectures

  • High level system description in terms of subsystems (components) and the way they interact (connectors)

  • Static description: Topology

  • Dynamic description: Behavior


Topology

[Figure: pipe-and-filter topology — Filter → Pseudo Filter (an Adapter wrapping gzip) → Filter]


Behavior

Finite State Automata

MSC


Static and Dynamic Views

(a) FSA, (b) Topology, (c) MSC


Quality Attributes and SA

  • Qualities discernible by observing the system execution: performance, security, availability, functionality, usability

  • Qualities not discernible at run time: modifiability, portability, reusability, integrability, testability.


Quality Attributes at run time

  • Performance: refers to the responsiveness of the system. It is often a function of how much communication and interaction there is between components of the system. It is clearly an architectural issue. (communications usually take longer than computations)


How to measure Performance

  • Arrival rates and distributions of service requests, processing times, queue sizes, and latency (the time taken to service a request)

  • Simulate by building a stochastic queueing model of the system based upon anticipated workload scenarios
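
These indices have closed forms for the simplest models. As a minimal illustration (mine, not from the slides), the sketch below evaluates the standard M/M/1 formulas for a single service center; the workload rates are assumed values:

```python
# Minimal sketch: closed-form performance indices of an M/M/1 service center.
# Assumes Poisson arrivals (rate lam) and exponential service (rate mu), lam < mu.

def mm1_indices(lam: float, mu: float) -> dict:
    if lam >= mu:
        raise ValueError("unstable: arrival rate must be below service rate")
    rho = lam / mu                # utilization
    n = rho / (1 - rho)           # mean number of customers in the center
    r = 1 / (mu - lam)            # mean response time (latency)
    w = rho / (mu - lam)          # mean waiting time in the queue
    return {"utilization": rho, "mean_customers": n,
            "mean_response_time": r, "mean_waiting_time": w}

# Example workload scenario: 8 requests/s arriving, service rate 10 requests/s.
print(mm1_indices(lam=8.0, mu=10.0))
```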


Software Architectures Quality Attributes

  • Static: can be measured statically (portability, scalability, reusability, …)

  • Dynamic: can be measured by observing the SA behavior (performance, availability, …)


Software Architectures and Performance

Quoting from the WOSP 2000 panel introduction on Performance of SA:

“the quantitative analysis of a SA allows for the early detection of potential performance problems … Early detection of potential performance problems allows alternative software designs and …

… meaning designing a software system and analyzing its performance before the system is implemented …”



Software Architecture

  • Level of abstraction

  • Dynamic model

  • Lack of information

How do we measure? How do we interpret the measures?


Performance Evaluation

Quantitative analysis of systems, based on both deterministic and stochastic models and methods.

Evaluating the performance of a system means making a quantitative analysis to derive a set of (performance) indices, obtained either as mean values or as probabilistic figures.

Probability distribution/mean of response times, waiting times, queue lengths, delays, resource utilization, throughput, …


PE Models and Techniques

  • Models are primarily stochastic and can be solved by either analytic or simulation techniques.

  • Analytic techniques can be exact (e.g. numerical), approximate, or based on bounds

  • Simulation techniques are more general but expensive


Queueing Network Models

  • Service centers

    • service time

    • buffer space with scheduling policy

    • number of servers

  • Customers

    • Number for closed models, arrival process for open models

  • Network Topology

    • models how service centers are interconnected and how customers move among them
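
To make the three ingredients concrete, here is a minimal sketch (my illustration; the class names and fields are assumptions, not any particular tool's API) of how a QNM could be represented:

```python
# Minimal sketch of the three ingredients of a queueing network model.
# All names (ServiceCenter, QNM, ...) are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ServiceCenter:
    name: str
    service_rate: float               # mean service rate (customers/unit time)
    servers: int = 1                  # number of servers
    buffer: float = float("inf")      # buffer capacity (finite => blocking)
    discipline: str = "FCFS"          # scheduling policy

@dataclass
class QNM:
    centers: list[ServiceCenter]
    routing: dict[tuple[str, str], float] = field(default_factory=dict)
    arrival_rate: float = 0.0         # external arrivals (open model)
    population: int = 0               # fixed population (closed model)

# Topology: customers leave "cpu" for "disk" with probability 0.4.
net = QNM(centers=[ServiceCenter("cpu", 10.0), ServiceCenter("disk", 5.0)],
          routing={("cpu", "disk"): 0.4, ("disk", "cpu"): 1.0},
          arrival_rate=3.0)
```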


Queueing networks with finite capacity queues

  • Queueing network models to represent

    • sharing of resources with finite capacity queues

    • population constraints

    • synchronization constraints

  • Finite capacity of the queue

    • n = number of customers in the service center

    • B = finite capacity

    • blocking => dependence among service centers; possible deadlock

    • Solution methods: exact vs. approximate analysis, simulation

  • various blocking types:

    • different behaviors of customer arrivals at a full node and of servers' activity


Analytical solutions for Q.N. with finite capacity queues

  • Network model parameters

    • M number of nodes

    • N number of customers

    • µi = service rate of node i

    • Service time distribution: M, G, PHn, GE

    • P = ||pij||, the routing matrix

    • Bi = finite capacity of node i

  • Queue-length probability distribution?

    • Continuous-time homogeneous Markov chain

      • S = (S1,S2,..., SM) network state

    • State space E, transition rate matrix: Q

    • Steady-state probabilities π(S)

  • Other average performance indices can be derived from π and depend on the blocking type

  • The exact solution soon becomes numerically intractable

  • Product-form solutions in special cases; otherwise approximate analysis (a small example follows)
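
To make the state-space construction concrete, here is a small illustrative sketch (an assumption of mine, not code from the talk): two exponential nodes in tandem, where node 2 has finite capacity and, for simplicity, node 1's service is suspended while node 2 is full (a blocking-before-service simplification; BAS would also track the blocked customer). We enumerate the states, build the generator matrix Q, and solve πQ = 0:

```python
# Illustrative sketch: exact CTMC analysis of a two-node tandem network with
# finite buffers. Rates and capacities are assumed values.
import numpy as np

lam, mu1, mu2 = 2.0, 3.0, 4.0   # arrival and service rates
B1, B2 = 2, 1                   # finite buffer capacities

states = [(n1, n2) for n1 in range(B1 + 1) for n2 in range(B2 + 1)]
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))

for (n1, n2) in states:
    i = idx[(n1, n2)]
    if n1 < B1:                 # external arrival (lost when node 1 is full)
        Q[i, idx[(n1 + 1, n2)]] += lam
    if n1 > 0 and n2 < B2:      # node 1 completes; the customer moves to node 2
        Q[i, idx[(n1 - 1, n2 + 1)]] += mu1
    if n2 > 0:                  # node 2 completes; the customer leaves
        Q[i, idx[(n1, n2 - 1)]] += mu2
np.fill_diagonal(Q, -Q.sum(axis=1))   # generator matrix: rows sum to zero

# Steady state: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.append(np.zeros(len(states)), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dict(zip(states, pi.round(4))))
```

The state space grows with the product of the buffer capacities (and the number of nodes), which is exactly why the exact solution soon becomes intractable.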


Queueing Network Models

“QN modelling is a top-down process. The underlying philosophy is to begin by identifying the principal components of the system and the ways they interact, then supply any details that prove to be necessary.”

(Lazowska et al., Quantitative System Performance, Prentice Hall; http://www.cs.washington.edu/homes/lazowska/qsp/)


QNM creation

  • Definition: definition of service centers, their number, classes of customers, and topology

  • Parameterization: define the alternatives under study, e.g. by selecting arrival processes and service rates

  • Evaluation: obtain a quantitative description of system behavior by computing performance indices like resource utilization, system throughput, and customer response time (see the sketch below).
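
As a toy illustration of the evaluation step (a sketch under Jackson-network assumptions, with made-up rates and routing), the traffic equations λ = γ + λP are solved first, then per-center M/M/1 formulas yield the indices:

```python
# Illustrative sketch: evaluating an open network of exponential service
# centers (Jackson assumptions). Rates and routing probabilities are made up.
import numpy as np

names = ["cpu", "disk"]
gamma = np.array([3.0, 0.0])        # external arrival rates
P = np.array([[0.0, 0.4],           # routing: cpu -> disk with prob. 0.4
              [1.0, 0.0]])          # disk -> cpu with prob. 1.0
mu = np.array([10.0, 5.0])          # service rates

lam = np.linalg.solve(np.eye(len(mu)) - P.T, gamma)   # traffic equations
rho = lam / mu                      # per-center utilization
R = 1.0 / (mu - lam)                # per-center mean response time (M/M/1)

for n, l, r, t in zip(names, lam, rho, R):
    print(f"{n}: throughput={l:.2f}, utilization={r:.2f}, response={t:.3f}")
```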


Approaches

  • Software Performance: the whole system life cycle is available; design artifacts are used to incrementally produce a QNM of the software system.

  • Software Specification: the system behavioral specification is available and is modeled by stochastic Petri nets or stochastic process algebras


Software Performance

  • Performance Analysis integrated in the software life cycle.

    • Assume to manage a number of software artifacts, from requirements specifications (Use Cases) to deployment diagrams

  • QNM models

    • Topology obtained from the information on the physical architecture

    • Information on software components is used to define the model workload

References under Software Performance; a (UML-based) survey appears in [BS01]


Software Specification

  • Identify a precise software stage: system design specification

  • Formal behavioral specification: stochastic Petri nets, stochastically timed process algebras

    • Behavioral and performance analysis in a single model

References under Software Specification


Our Approach

  • Not SP: we want to evaluate the performance of the SA description; we do not assume an implementation exists.

  • Not SS: a single model is appealing, but feedback is too difficult; the performance model is too far from the component/connector description.



[Diagram: the analysis process]

SA description (CHAM, FSP, UML, WRIGHT, ...)
→ Behavioral model (dynamic descriptions: FSM, MSCs, ...)
→ (algorithm) → Performance model (favorite model: QNM, SPN, SPA, ...)
→ Performance evaluation (solution method: symbolic, approximation, simulation, ...)
→ Results and interpretation, with feedback to the SA description



Brief history of our work in SA and PE 1/2

  • Formal description of SA via CHAM

  • Behavioral analysis of the SA

  • Algebraic analysis and finite state modeling

  • Validation and quantitative analysis based on FSTM

  • global system behavior

  • Queueing Network Model

  • Feedback at the design level

  • Capacity planning and case studies

- S. Balsamo, P. Inverardi, C. Mangano, "An Approach to Performance Evaluation of Software Architectures", in Proc. IEEE WOSP '98.

- S. Balsamo, P. Inverardi, C. Mangano, L. Russo, "Performance Evaluation of Software Architectures", in Proc. IEEE IWSSD-98.



Brief history of our work in SA and PE 2/2

  • Specification of SA via Message Sequence Charts - UML

  • Event ordering. Event sequence. Trace of events.

  • Communication types, concurrency and non-determinism

  • Trace analysis and model structure identification

  • Quantitative analysis based on extended QN model

  • Scenarios for model parameterization

  • Feedback at the design level

- F. Andolfi, F. Aquilani, S. Balsamo, P. Inverardi, "Deriving Performance Models of Software Architectures from Message Sequence Charts", in Proc. IEEE WOSP 2000.

- F. Andolfi, F. Aquilani, S. Balsamo, P. Inverardi, "On using Queueing Network Models with finite capacity queues for Software Architectures performance prediction", in Proc. QNET 2000.



Framework of performance analysis of SA at the design level

  • Description of SA via LTS - independent of ADL

  • Algorithm to derive the performance model structure

  • Add info on the communication types, state annotation

  • Identify scenarios for model parameterization

  • Performance model based on extended Queueing Network models

  • Analytical solution (symbolic) for simple models, approximation or simulation for complex models

  • Result interpretation at the software design level

F. Aquilani, S. Balsamo, P. Inverardi, "Performance Analysis at the software architecture design level", TR-SAL-32, Technical Report, Saladin Project, 2000; to appear in Performance Evaluation.


Usual Example: The Multiphase Compiler

Software Architecture: the concurrent multiphase compiler architecture, and an optimized architecture.

[Figure: compiler pipeline LEXER → PARSER → SEMANTOR → OPTIMIZER → CODE_GEN (plus TEXT), transforming characters → tokens → phrases → correct phrases → object_code]

Synchronous communication

Queueing Network Model with BAS blocking

[Figure: QNM with service centers L, P, S, O, G in an acyclic topology; each internal buffer has finite capacity B = 1]

Acyclic topology

Solution: approximate analysis

Sequential Architecture

Software Architecture

Same number of components, strongly sequentialized: no concurrency.

[Figure: sequential pipeline — text → lexer → parser → semantor → optimizer → codegen]

QNM: one single service center (sc1)


Parameterization and Evaluation

  • Specify parameters (e.g. the arrival rate and mean service time of each center). We keep them symbolic.

  • Meaning of the parameters (e.g. service time = execution time of a component; arrival rate = rate of activation of concurrent instances of component execution).

  • Parameter instantiations identify potential implementation scenarios

    • In the compiler example, 3 scenarios playing with the mean service times of the concurrent model (see the sketch below)
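
Keeping the parameters symbolic means the indices come out as formulas that the scenarios later instantiate. A minimal sketch of the idea (my illustration using sympy; the M/M/1 response time stands in for whatever index the model actually yields):

```python
# Minimal sketch of symbolic parameterization (illustrative, using sympy).
# The index stays a formula in lam and mu; a scenario instantiates it.
import sympy as sp

lam, mu = sp.symbols("lambda mu", positive=True)
response_time = 1 / (mu - lam)        # e.g. M/M/1 mean response time

scenario = {lam: 8, mu: 10}           # one candidate implementation scenario
print(response_time)                  # symbolic formula
print(response_time.subs(scenario))   # numeric value: 1/2
```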


How we provide Feedback

Throughput of the two compiler SAs: the concurrent SA performs 5 times better than the sequential SA, in a scenario where the mean service times of the nodes have the same order of magnitude.

  • Enrich performance requirements in the subsequent development steps:

    • a global performance requirement can be broken into requirements on single components



[Diagram: the analysis process instantiated]

SA dynamic description (Labeled Transition System, Message Sequence Charts)
→ state annotation, communication type
→ (algorithm) → Performance model: QNM
→ scenarios, parameterization
→ Performance evaluation
→ Results and interpretation, with feedback: choice of SA + new requirements on components and connectors


Performance Analysis at the SA design level 1/2

SA specification: Labeled Transition System

<S, L, →, s, P>, where

S = set of states, L = set of labels (communication types), s = initial state, P = set of state labels

→ = transition relation in (P × L × P)

SA components: communicating concurrent subsystems

SA level: consider interaction activities among components

Parallel composition of communicating components

P = the set of SA components' and connectors' states described by the LTS


Performance Analysis at the SA design level 2/2

First, model the maximum level of concurrency (each component as an autonomous server); then, via the algorithm, derive a simpler QNM structure by analyzing the true level of concurrency and the communication types.


Algorithm

  • Visit the LTS to derive interaction sets formed by interaction pairs (IPs): (p1, p2) denotes a flow of data from p1 to p2

    • model connecting elements with buffers

    • mark non-deterministic IPs

  • Examine the sets of IPs to generate the service centers and the topology of the QNM (a sketch follows)
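
A minimal sketch of such a visit (my illustration; the LTS encoding and the non-determinism test are assumptions, not the authors' implementation):

```python
# Illustrative sketch: visit an LTS whose labels name communications
# (sender, receiver) and collect interaction pairs (IPs). Several outgoing
# transitions from the same sender in one state are treated as a
# non-deterministic choice, and their IPs are marked accordingly.
from collections import defaultdict

# LTS as adjacency: state -> list of ((sender, receiver), next_state)
lts = {
    "s0": [(("lexer", "parser"), "s1")],
    "s1": [(("parser", "semantor"), "s2"),
           (("parser", "optimizer"), "s3")],   # non-deterministic choice
    "s2": [], "s3": [],
}

def interaction_pairs(lts, start="s0"):
    ips, seen, stack = [], set(), [start]
    while stack:                               # depth-first visit
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        by_sender = defaultdict(list)
        for (snd, rcv), nxt in lts[state]:
            by_sender[snd].append((snd, rcv))
            stack.append(nxt)
        for pairs in by_sender.values():
            nondet = len(pairs) > 1            # same sender, several targets
            ips.extend((p, nondet) for p in pairs)
    return ips

print(interaction_pairs(lts))
```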



SA description: MSCs - From MSCs to QNMs

  • UML as ADL

    • a model of all possible system behaviours

    • state diagrams for “manageable” processes

    • implicit parallel notation for composite processes P1||P2||…||Pn

    • no explicit representation due to state explosion

  • Sequence diagrams/MSCs to describe component interactions

  • MSCs with state information and iteration blocks; components are the object elements

  • QNM with blocking, BAS mechanism



MSCs requirements

  • It is always possible to synthesize an FSM out of a set of MSCs

  • all refer to the same initial system configuration

  • representative of major system behaviors

  • Each system component appears in at least one MSC

  • MSCs contain info about the state of components

  • Other technical conditions



Extracting from MSC info about

  • communication among components, i.e. which components interact

  • communication types, i.e. synchronous/asynchronous

  • concurrency, i.e. components can proceed concurrently

  • non-determinism, i.e. components do proceed nondeterministically



How do we do that?

  • MSC encoding => from an MSC we derive its trace (a set of regular languages)

  • We analyze the traces to identify the kinds of communication (1-to-2, 2-to-1, concurrent, non-deterministic) and build Interaction Pairs (IPs) to record this information

  • We use the IPs to build the QNM topology, as sketched below
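
A minimal sketch of this pipeline (illustrative; the S(sender, receiver) trace syntax follows the slides below, everything else is an assumption):

```python
# Illustrative sketch: parse a trace of S(sender, receiver) events derived
# from an MSC, record interaction pairs, and sketch the QNM topology.
import re

trace = "S(Cfu,AD)S(AD,Gzip)S(Gzip,AD)S(AD,CFd)"

# Each event S(p1,p2) becomes an interaction pair (p1, p2).
ips = re.findall(r"S\((\w+),(\w+)\)", trace)
print(ips)   # [('Cfu', 'AD'), ('AD', 'Gzip'), ('Gzip', 'AD'), ('AD', 'CFd')]

# QNM topology: interacting components become service centers, IPs links.
centers = sorted({p for pair in ips for p in pair})
links = sorted(set(ips))
print("service centers:", centers)
print("links:", links)
```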



Interaction Pairs and QNM

  • I = (P1,P2)s => service center representing a unique service P1 followed by P2, expressing sequentiality (P1 and P2 are not concurrent)

  • I = (P1,P2)a => service center with infinite buffer implicitly modelling the communication channel + the transition P1 -> P2 in the QNM

  • {(P1,P2)s, (P1,P3)s }ND => multi-customer service center

  • synchronous communication among concurrent components => distinct service centers; the receiver component gets a zero-capacity buffer, inducing a BAS blocking policy on the sender component (see the sketch below)
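
The four mapping rules could be encoded along these lines (a sketch under my own naming assumptions, not the authors' tool):

```python
# Illustrative sketch: map classified interaction pairs to QNM elements
# following the four rules above. All names are assumptions.

def map_ip_to_qnm(ip: dict) -> dict:
    kind = ip["kind"]
    if kind == "sync":                # (P1,P2)s: sequential components
        return {"center": f"{ip['p1']}+{ip['p2']}"}   # one shared center
    if kind == "async":               # (P1,P2)a: channel as infinite buffer
        return {"center": ip["p2"], "buffer": float("inf"),
                "link": (ip["p1"], ip["p2"])}
    if kind == "nondet":              # {(P1,P2)s, (P1,P3)s}ND
        return {"center": ip["p1"], "multi_customer": True}
    if kind == "sync_concurrent":     # synchronous among concurrent components
        return {"center": ip["p2"], "buffer": 0, "blocking": "BAS",
                "link": (ip["p1"], ip["p2"])}
    raise ValueError(f"unknown interaction kind: {kind}")

print(map_ip_to_qnm({"kind": "sync_concurrent", "p1": "AD", "p2": "Gzip"}))
```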



Example

  • Compressing Proxy system

  • Purpose: improve the performance of Unix-based World Wide Web browsers over slow networks via an HTTP server that compresses and uncompresses data to and from the network

  • Software Architecture

[Figure: pipe-and-filter topology — Filter → Pseudo Filter (Adapter + gzip) → Filter]

Synchronous communication => Queueing Network Model with BAS blocking; exact analysis of the underlying Markov chain.

[Figure: two-center QNM — AD (Adapter) and GZIP, with finite buffers BAD = 1 and BGZIP = 1]


MSC to trace

{S(Cfu,AD)S(AD,Gzip)S(Gzip,AD)S(AD,CFd)}N



Trace Analysis

  • S(Px,Py)c1…S(Pk,Pz)c2 S(Pi,Pj)c3 S(Ps,Pt)c4 …

  • S(Px,Py)c1…S(Pk,Pz)c2 S(Ps,Pt)c4 S(Pi,Pj)c3 …

  • Pi and Ps are concurrent

[Figure: the common prefix S(Px,Py)c1…S(Pk,Pz)c2 branches into the two interleavings S(Pi,Pj)c3 S(Ps,Pt)c4 and S(Ps,Pt)c4 S(Pi,Pj)c3]
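
A minimal sketch of the concurrency check implied above (my illustration): two traces that share a prefix and a suffix but swap two adjacent events reveal that those events can proceed concurrently:

```python
# Illustrative sketch: detect concurrency by comparing two traces that
# differ only by swapping two adjacent events.
import re

def events(trace: str) -> list:
    return re.findall(r"S\(\w+,\w+\)\w*", trace)

def concurrent_pair(t1: str, t2: str):
    e1, e2 = events(t1), events(t2)
    if len(e1) != len(e2):
        return None
    for i in range(len(e1) - 1):
        # same prefix, two adjacent events swapped, same suffix
        if (e1[:i] == e2[:i] and e1[i] == e2[i + 1]
                and e1[i + 1] == e2[i] and e1[i + 2:] == e2[i + 2:]):
            return e1[i], e1[i + 1]
    return None

t1 = "S(Px,Py)c1 S(Pk,Pz)c2 S(Pi,Pj)c3 S(Ps,Pt)c4"
t2 = "S(Px,Py)c1 S(Pk,Pz)c2 S(Ps,Pt)c4 S(Pi,Pj)c3"
print(concurrent_pair(t1, t2))   # the swapped events proceed concurrently
```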



Conclusion

  • Derivation of the performance model from the dynamic view of SA

  • Finite (incomplete) representation of the SA behavior, i.e. LTS (MSC)

  • Analysis of the LTS (MSCs) to extract the pieces of information relevant to the performance model

  • Performance evaluation at the SA level of abstraction

  • Feedback on the design process

  • Case studies

  • Integration of architectural design tools and performance tools


My opinion

  • Still an active area of research, with very high industrial interest; for research interest, see the key actions of the new IST European programme call

  • Performance models close to the SA description. Symbolic evaluation!

  • Feedback: make explicit the extra information needed to help refine the design steps

  • Experiment!



ADVERTISING

ROME 22-26 JULY 2002

ISSTA and WOSP Together!


Selected Bibliography

  • GENERAL SA

    • Shaw, M., Garlan, D., Software Architecture: Perspectives on an Emerging Discipline, Prentice Hall, 1996

    • Bass, L., Clements, P., Kazman, R., Software Architecture in Practice, Addison Wesley, 1998

    • Hofmeister, C., Nord, R., Soni, D., Applied Software Architectures, Addison Wesley, October 1999.

    • http://www.sei.cmu.edu

  • Survey

    • [BS01] S. Balsamo, M. Simeoni, "On Transforming UML Models into Performance Models", Workshop on Transformations in UML, ETAPS 2001, Genova, Italy, April 7th, 2001.

  • Software Specification

    • G. Balbo, G. Conte, and M. A. Marsan, Performance Models of Multiprocessor Systems, Series in Computer Systems, The MIT Press, 1986.

    • R. Pooley and P. King, "Using UML to derive stochastic process algebra models", Proc. 15th UK Performance Engineering Workshop, 1999.

    • P. King and R. Pooley, "Derivation of Petri Net Performance Models from UML specifications", Proc. 11th Int. Conf. on Tools and Techniques for Computer Performance Evaluation, Illinois, 2000.

    • M. Bernardo and R. Gorrieri, "Extended Markovian Process Algebra", in Proc. CONCUR '96, LNCS 1119, Springer-Verlag, 1996, 315-330.

    • M. Bernardo, P. Ciancarini, and L. Donatiello, "ÆMPA: A Process Algebraic Description Language for the Performance Analysis of Software Architectures", in Proc. WOSP 2000.

  • Our Approach

    • S. Balsamo, P. Inverardi, C. Mangano, "An Approach to Performance Evaluation of Software Architectures", in Proc. IEEE WOSP '98.

    • S. Balsamo, P. Inverardi, C. Mangano, L. Russo, "Performance Evaluation of Software Architectures", in Proc. IEEE IWSSD-98.

    • F. Andolfi, F. Aquilani, S. Balsamo, P. Inverardi, "Deriving Performance Models of Software Architectures from Message Sequence Charts", in Proc. IEEE WOSP 2000.

    • F. Andolfi, F. Aquilani, S. Balsamo, P. Inverardi, "On using Queueing Network Models with finite capacity queues for Software Architectures performance prediction", in Proc. QNET 2000.

    • F. Aquilani, S. Balsamo, P. Inverardi, "Performance Analysis at the software architecture design level", TR-SAL-32, Technical Report, Saladin Project, 2000; to appear in Performance Evaluation.


Selected Bibliography

  • Software Performance

    • H. Gomaa and D. Menascé, "A Method for Design and Performance Modeling of Client-Server Systems", IEEE Transactions on Software Engineering, 2000.

    • H. Gomaa and D. Menascé, "Design and Performance Modeling of Component Interconnection Patterns for Distributed Software Architectures", in Proc. WOSP 2000.

    • V. Cortellessa and R. Mirandola, "Deriving a Queueing Network Based Performance Model from UML Diagrams", in Proc. WOSP 2000.

    • D. C. Petriu and X. Wang, "From UML Description of High-level Software Architecture to LQN Performance Models", in AGTIVE '99, LNCS 1779, Springer-Verlag, 2000.

    • D. C. Petriu, C. Shousha, and A. Jalnapurkar, "Architecture-based Performance Analysis Applied to a Telecommunication System", in IEEE Trans. on Software Engineering, 2000.

    • R. Pooley, "Software Engineering and Performance: A Road-map", in The Future of Software Engineering, A. Finkelstein (Ed.), 22nd ICSE.

    • R. Pooley and P. King, "The Unified Modeling Language and Performance Engineering", IEE Proceedings - Software, 146, 1, February 1999.

    • C. U. Smith, Performance Engineering of Software Systems, Addison-Wesley, 1990.

    • C. U. Smith and L. G. Williams, "Software Performance Engineering: A Case Study Including Performance Comparison with Design Alternatives", IEEE Trans. on Software Engineering, Vol. 19, No. 7, 720-741, July 1993.

    • C. U. Smith and L. G. Williams, "Performance Evaluation of Software Architectures", in Proc. WOSP 1998.

    • M. Woodside, C. Hrischuk, B. Selic, and S. Bayarov, "A Wideband Approach to Integrating Performance Prediction into a Software Design Environment", in Proc. WOSP 1998.

    • M. Woodside, "Software Performance Evaluation by Models", in Performance Evaluation (G. Haring, C. Lindemann, M. Reiser, Eds.), LNCS 1769, 283-304, 2000.

