
Unobtrusive Performance Analysis – Where is the QoS in TAPAS?


Presentation Transcript


  1. Unobtrusive Performance Analysis – Where is the QoS in TAPAS? University College London James Skene – j.skene@cs.ucl.ac.uk

  2. Overview • What are we modelling? • Why is performance modelling important? • Why is it important for TAPAS? • What is the state of the art… • In performance modelling? • In system modelling? • What problems must be addressed? • An example approach.

  3. What are we modelling? (Diagram: users reaching beans hosted in containers over the network, with the containers accessing a database, again over the network.)

  4. Why is performance modelling important? • During development: • We’d like to be able to predict when the design will cause performance problems. • We’d like to be able to combine services and components and predict the overall performance. • We’d like to be able to plan the capabilities of our services. • During operation: • We’d like to know what the capacities of our services are in order to determine our ability to provide for new and existing customers.

  5. Example performance scenario 1 (plot: average response time against arrival rate) • Web service modelled as a simple queuing network, infinite population. • Risks: outages due to bursty traffic, under-provisioning.
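
A minimal sketch of scenario 1, assuming the queuing network reduces to a single M/M/1 server (the 10 ms service time and the arrival rates below are illustrative assumptions, not figures from the talk). It shows the characteristic knee: mean response time R = S / (1 − λS) grows without bound as the arrival rate λ approaches the service rate 1/S.

# M/M/1 mean response time as the arrival rate climbs (assumed values).
S = 0.010                          # assumed mean service time: 10 ms
for lam in (10, 50, 80, 95, 99):   # arrival rates, requests per second
    rho = lam * S                  # server utilisation
    R = S / (1 - rho)              # mean response time, in seconds
    print(f"lambda={lam:3d}/s  utilisation={rho:4.2f}  R={R*1000:7.1f} ms")

At 99 requests/s the server is 99% utilised and the response time is a hundred times the service time, which is exactly the bursty-traffic risk the slide flags.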

  6. Example performance scenario 2 (plot: average response time against number of servers) • Web service with multiple servers modelled as a finite queue with multiple servers, infinite population, fixed arrival rate. • Risks: under- or over-provisioning.
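
A hedged sketch of scenario 2. The slide's model has a finite queue; for simplicity this uses the infinite-queue M/M/c variant (the Erlang C formula) to show how response time falls as servers are added at a fixed arrival rate. All rates are assumed values.

import math

def erlang_c(c, a):
    # Probability that an arrival must queue in an M/M/c system,
    # where a = lambda/mu is the offered load (requires a < c).
    block = (a**c / math.factorial(c)) * (c / (c - a))
    return block / (sum(a**k / math.factorial(k) for k in range(c)) + block)

lam, mu = 90.0, 10.0             # assumed arrival rate and per-server service rate
a = lam / mu                     # offered load: nine servers' worth of work
for c in (10, 12, 15, 20):       # candidate numbers of servers
    Wq = erlang_c(c, a) / (c * mu - lam)   # mean time spent queuing
    print(f"servers={c:2d}  mean response time = {(Wq + 1/mu)*1000:6.1f} ms")

Ten servers give roughly 167 ms; beyond about fifteen the curve flattens near the 100 ms service time, which is where the over-provisioning risk begins.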

  7. What properties are of interest? • Generic properties: • Response time – time to process a single request. • Throughput – rate of processing. • Utilisation of resources – normally a percentage. • Reliability – expected time to failure. • Availability – percentage of time spent in an operational state. • Specific properties – SLA parameters. • Different properties require different techniques.
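
Two of the generic properties above reduce to one-line formulas: the utilisation law U = X · S (throughput times service demand) and availability = MTTF / (MTTF + MTTR). A minimal sketch with assumed figures:

throughput = 120.0                         # completed requests per second (assumed)
service_demand = 0.005                     # seconds of CPU per request (assumed)
utilisation = throughput * service_demand  # utilisation law: U = X * S
mttf, mttr = 400.0, 2.0                    # hours to failure / to repair (assumed)
availability = mttf / (mttf + mttr)
print(f"CPU utilisation = {utilisation:.0%}")   # 60%
print(f"availability    = {availability:.2%}")  # 99.50%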

  8. Why is performance modelling important to TAPAS? • It will allow application service providers to predict which service level agreements they can enter into, and which agreements they require. This will increase the feasibility of the SLA approach, with all its concomitant benefits. • Modelling will reduce development risk for these organisations and hasten time to market by reducing rework.

  9. What is the state of the art… In performance modelling? Briefly consider: • Formalisms • Methodologies • Standards

  10. Formalisms • Markov chains: e.g. a three-state chain (states 0, 1, 2) with transition rates λ0,1, λ1,0, λ1,2, λ2,1 and λ2,0. • Queuing networks: e.g. service centres for the CPU and Disk 1, with service times SCPU and SD1. (Diagrams omitted; a worked example follows.)
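
As a concrete illustration of the Markov chain formalism (the rates below are assumed, not taken from the slide's figure), the steady-state probabilities pi of a continuous-time chain satisfy pi Q = 0 with the probabilities summing to one, where Q is the generator matrix assembled from the transition rates:

import numpy as np

# Assumed transition rates for a three-state chain like the one pictured.
l01, l10, l12, l21, l20 = 2.0, 1.0, 3.0, 2.0, 1.0
Q = np.array([                    # generator matrix: each row sums to zero
    [-l01,         l01,          0.0        ],
    [ l10,        -(l10 + l12),  l12        ],
    [ l20,         l21,         -(l20 + l21)],
])
# Solve pi @ Q = 0 subject to sum(pi) = 1 by replacing one balance
# equation with the normalisation constraint.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)                         # long-run probability of each state

The same steady-state vector is what queuing-network solvers compute under the hood for birth-death chains.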

  11. More Formalisms • Stochastic timed Petri nets: e.g. transitions firing at rates λCPU and λDisk (diagram omitted). • Stochastic process algebras, e.g.:

#CLIENT = (think, thinkRate) . (request, T) . (return, T) . CLIENT;
#SERVICE = (request, requestRate) . (serviceA, serviceARate) . (serviceB, serviceBRate) . (serviceC, serviceCRate) . (return, returnRate) . SERVICE;
(CLIENT <> CLIENT <> CLIENT) < request, return > SERVICE

  12. And what else… • Differential equations (control theory) • Execution graphs • Finite state machines (generalised Markov chains) • Generalised queuing theory, layered queuing networks (LQNs) • Probability theory, Bayesian networks • Simulation

  13. Methodologies • Software Performance Engineering (Smith) • Uses a combination of execution graphs and queuing networks. • Advocates a repeated cycle of: • Modelling; • Implementation; • Monitoring; • Model validation and parameterisation. • Advocates separate ‘performance engineer’ role.

  14. Standards – UML Real-time • Properly called: the UML™ Profile for Schedulability, Performance, and Time Specification. • Defines a domain model for performance analysis. • Defines a set of UML stereotypes and tagged values for annotating designs with performance information, as illustrated below. • Relies on the UML action semantics.
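
For illustration, a step in a sequence diagram might be annotated along these lines, using the performance sub-profile's «PAstep» stereotype and PAdemand tagged value (the operation name and the timing figure are invented for the example; the exact tag syntax should be checked against the specification):

«PAstep»
createUser()
{ PAdemand = ('assm', 'mean', (5, 'ms')) }

Here 'assm' marks the 5 ms processing demand as an assumed value rather than a required, predicted, or measured one.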

  15. What is the state of the art… In system modelling? • Pertinent technologies are: • SOAP, WSDL, UDDI • UML Profile for Enterprise Java Beans (JCP draft) • Ultimately we will also consider modelling the network layer.

  16. What are the problems that have to be addressed? • The software performance engineering problem. • The information hiding problem. • Creating feasible models. • Creating valid models.

  17. The software performance engineering problem • Why does the discipline ‘Software Performance Engineering’ even exist? (Menasce) • It is a cost/benefit problem. • Increasing the benefit: SLAs. • Decreasing the cost: Automation and integration – ‘Unobtrusive Performance Analysis’.

  18. The information hiding problem • Middleware obscures the details of distribution. • Application servers obscure the details of persistence and transactions. • Interfaces obscure the details of functionality (polymorphism). Good from an engineering standpoint, but makes performance analysis much harder.

  19. Creating feasible models • E.g. state space explosion is a big problem. Consider the number of states in a system with concurrent processes and pre-emption: • n processes, each sliced up m ways, leads to a state space of size m^n, as worked through below. • My PC at the time of writing is running 47 distinct processes concurrently, including a database and a web server. • E.g. the form of a queuing model can prohibit an exact solution, an analytical solution, or indeed any stable solution.
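
Putting the slide's own numbers together (a back-of-envelope, assuming a mere ten interleavable states per process):

m, n = 10, 47             # 10 states per process, 47 concurrent processes
print(format(m**n, 'e'))  # 1.000000e+47 states -- hopeless to enumerate

Even aggressive abstraction has to discard almost all of that space before any exact technique becomes feasible.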

  20. Creating valid models • Feasible models are not guaranteed to accurately reflect the behaviour of the system under consideration. • Model making requires possibly invalid assumptions. • Formal models may be automatically generated and not inspected by modelling experts. An error margin of up to 30% may be considered acceptable in performance analysis (Menasce).

  21. An example approach • Using the EJB profile: • Model the application. • Using the real-time profile: • Provide deployment information. • Define the workload. • Using a priori knowledge of the architecture: • Refine the model to include container actions. • Determine the resources and the resource demands. • Create and solve a queuing network model.

  22. Model the application

  23. Model the deployment

  24. Define a workload: the ‘create user’ use-case (sequence diagram omitted).

  25. Add container actions

  26. Define a queuing network model • Workload derived from the sequence diagram. • Stations: the Client (think time Sthink), link1:Network (service time Snet), and user:Table (service time Stable). (Diagram omitted; a solver sketch follows.)
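
A minimal sketch of solving the slide's network as an open model, treating each visited resource as an independent M/M/1 station (the arrival rate and service times are assumed; the client's think time shapes the arrival stream and is excluded from the response time):

lam = 100.0                          # assumed arrival rate, requests per second
stations = {"link1:Network": 0.002,  # assumed mean service times, seconds
            "user:Table": 0.008}
R = 0.0
for name, S in stations.items():
    rho = lam * S                    # station utilisation
    Ri = S / (1 - rho)               # M/M/1 residence time at this station
    print(f"{name}: utilisation = {rho:.0%}, residence = {Ri*1000:5.2f} ms")
    R += Ri
print(f"predicted response time = {R*1000:.2f} ms")

With these numbers the table dominates: 40 ms of the 42.5 ms total, at 80% utilisation, which is the kind of result slide 27 proposes feeding back into the development environment.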

  27. Deliver the results • In this example the results will be throughput, response time, and utilisation for the two components modelled. • The results will be delivered into the development environment somehow. Possibly they will be incorporated as an annotation to the model (UML real-time). • Model parameters could be functions, resulting in a range of analyses, the results delivered as a graph.

  28. Where were the numbers in this example? • This example used very few specific quantities, such as resource demands. • Performance characteristics differ by several orders of magnitude between local operations and remote operations. • Similarly between operations requiring persistence and those not. • This may lead to simplifications in the models. • The specific timings may be irrelevant early on. • Using fewer quantities simplifies the technique.

  29. Integration with the SLA language • The SLA language will associate components, assemblies or interfaces with performance constraints or capabilities. • A tool could translate XML structures into UML structures and annotate them according to the performance profile. • The modelling environment would check that the constraints were met and identify spare capacity.

  30. Concluding remarks • Perhaps federated services will provide the motivation for better performance analysis. • Perhaps performance analysis techniques can be incorporated into existing modelling techniques and automated. • This research is particularly useful in the specific context of TAPAS as it supports the construction of service level agreements.
