
Performance and Scalability Testing: Avoiding Performance Problems in Production


Presentation Transcript


  1. Performance and Scalability Testing: Avoiding Performance Problems in Production Steven Haines J2EE Architect and Evangelist Quest Software

  2. Agenda • Speaker introduction • Overview of the problem • Overview of the strategy throughout the development lifecycle • What is Performance and how is it different from Scalability? • How can I measure these important metrics? • The importance of bringing the team together • Solidifying communication between Production and Testing • Questions & Answers

  3. Speaker Introduction • J2EE Architect and Evangelist for Quest Software • Author of Java 2 Primer Plus and Java 2 From Scratch • Co-Author of Java Web Services Unleashed • Java Host and columnist on InformIT.com (Pearson Education) • Java Instructor at the University of California, Irvine (UCI) and previously Learning Tree University (LTU) • Recruited as a J2EE architect in the “real world”

  4. Overview of Performance Measurement

  5. Two Questions • How do we measure the performance of a J2EE Application? • What is the cost of performance measurement?

  6. How do we Measure Performance? • We use tools to measure the following: • Application Performance / Response Time • Platform • Application Server • Operating System • JVM • Dependent Resources • E.g. database

  7. Performance Measurement is Non-Trivial • Too many people measure without thinking • Turn it all on, see what I get • Production != pre-production • Nothing is free • You should know what everything costs • What does the measurement mean? • Interpretation is difficult • Average vs. min/max • Servlet response time vs. request response time

  8. J2EE Performance Measurement • Data != Knowledge • Analogy: weather • Measure temperature, barometric pressure, humidity • Understand how these metrics relate • 28°C + 101.5 kPa and dropping + 100% humidity = THUNDERSTORM • That’s knowledge • Data + Model = Knowledge • Need a model of J2EE application server • You probably already have one in your head

  9. Basic Metrics • Response Time (R) • Throughput (X) • Resource Utilization (U) • Related to one another • R tends to increase with load • X and U increase linearly until U is maxed out • Once U is maximum, R and X plateau or decrease

  10. Basic Metric Behavior • [Chart: Response Time (R), Throughput (X), and Utilization (U) plotted against # Concurrent Users (Load), moving from light load through heavy load to resource saturation and the buckle zone]

  11. A Model for J2EE Systems • [Diagram: client handling (requests, sessions, queues, threads), execution management, applications (Servlets, JSPs, EJBs), services (JDBC, JMS, JCA, other), and the JVM running on CPU, memory, disk, and network, annotated with where R, U, and X are measured]

  12. Response Time • Response Time (R) • Measures the time spent executing response to a request • Good for understanding end user experience • Can vary significantly • Locking, resource contention, container activity • Measure a distribution of response times • Standard deviation • Histogram (buckets) • Most users get 2s, but 20% get 10s • Who’s going to call? 
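
As a rough sketch of the "measure a distribution, not just an average" point, the snippet below buckets a set of response times into a one-second histogram and computes the mean and standard deviation; the sample values are hypothetical.

    public class ResponseTimeHistogram {
        public static void main(String[] args) {
            // Hypothetical response times in milliseconds
            long[] times = {1900, 2100, 2000, 9800, 2050, 10200, 1950, 2000};

            // Mean and standard deviation
            double sum = 0;
            for (long t : times) sum += t;
            double mean = sum / times.length;
            double sqSum = 0;
            for (long t : times) sqSum += (t - mean) * (t - mean);
            double stdDev = Math.sqrt(sqSum / times.length);

            // Histogram with one-second buckets
            int[] buckets = new int[11];
            for (long t : times) {
                int b = (int) Math.min(t / 1000, buckets.length - 1);
                buckets[b]++;
            }

            System.out.printf("mean=%.0f ms, stddev=%.0f ms%n", mean, stdDev);
            for (int i = 0; i < buckets.length; i++) {
                if (buckets[i] > 0) System.out.println(i + "-" + (i + 1) + "s: " + buckets[i]);
            }
        }
    }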

  13. Throughput • Throughput (X) • Measures the number of transactions that are executed by the system over a period of time • e.g. 1200 tps • A measure of the system’s capacity for load • Not a user measurement • Useful in non-interactive systems • X and R can be at odds • In the lab, want highest X • In production, want lowest R • Common goal: maximize X with 95% of requests <= some value of R
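
The common goal above can be checked with a simple calculation. This sketch, with hypothetical sample data, computes throughput over a measurement window and the 95th-percentile response time using the nearest-rank method.

    import java.util.Arrays;

    public class ThroughputCheck {
        public static void main(String[] args) {
            // Hypothetical: response times (ms) collected over a 10-second window
            long[] times = {120, 95, 110, 400, 105, 98, 130, 2200, 115, 101};
            long windowSeconds = 10;

            double throughput = (double) times.length / windowSeconds; // transactions per second

            long[] sorted = times.clone();
            Arrays.sort(sorted);
            // 95th percentile: the value below which 95% of the samples fall
            int index = (int) Math.ceil(0.95 * sorted.length) - 1;
            long p95 = sorted[index];

            System.out.println("X = " + throughput + " tps, 95th percentile R = " + p95 + " ms");
        }
    }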

  14. Resource Utilization • Resource Utilization (U) • Measures use of a resource • Memory, disk, network, CPU • Translates application performance to the lowest level • Helpful for system sizing • U is the easiest measurement to understand • System requires 56% CPU and 256 MB of memory
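
For a rough idea of how U can be sampled from inside the JVM, here is a small sketch using the standard java.lang.management API (heap usage and, where available, the OS load average); utilization figures used for sizing will normally come from OS-level tools instead.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.OperatingSystemMXBean;

    public class UtilizationSample {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

            long usedHeap = memory.getHeapMemoryUsage().getUsed();
            long maxHeap = memory.getHeapMemoryUsage().getMax();
            double load = os.getSystemLoadAverage(); // -1 if not available on this platform

            System.out.printf("Heap: %d of %d MB, load average: %.2f%n",
                    usedHeap / (1024 * 1024), maxHeap / (1024 * 1024), load);
        }
    }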

  15. Overhead of Measurement • Measurements aren’t free • Overhead is the added cost imposed by a measurement • Can be in terms of R, X or U • Service Demand • D = U / X • Utilization normalized for throughput change • Best overhead measurement is D • Second best is R with standard deviation
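
A quick worked example of service demand, with hypothetical numbers: if the system runs at U = 50% CPU while sustaining X = 1,000 tps, then D = 0.50 / 1,000 = 0.0005 CPU-seconds per transaction. If enabling a measurement raises U to 55% at the same throughput, D becomes 0.00055, roughly 10% more work per transaction, which is the overhead of that measurement.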

  16. Overhead of Measurement • Memory impact • Memory rate • More frequent garbage collection • Memory level • More frequent garbage collection • Can be worked around by adjusting heap size • Impact on garbage collection • Frequency • Chews up system resources • Size • Disruptive to response times (R)
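
As an example of the heap-size workaround, the standard JVM options below pin the heap to a fixed size and log garbage collections; the sizes are placeholders, the main class name is hypothetical, and the right values depend on the application and app server.

    java -Xms512m -Xmx512m -verbose:gc com.example.Main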

  17. Performance and Scalability Strategy Throughout the Development Lifecycle

  18. Performance at Every Step in the Development Process • Establish performance requirements in Use Cases • Unit test your components for performance • Both for memory usage and response time • Test your application for performance at every integration milestone • Integration of un-tuned components is analogous to building a car with broken parts! • Performance should never be an afterthought once your product is complete!
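
One way to act on "unit test your components for performance" is to put a response-time budget directly into a test. The sketch below assumes JUnit; the OrderService component, its createOrder call, and the 200 ms budget are all hypothetical placeholders.

    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    public class OrderServicePerformanceTest {
        @Test
        public void createOrderStaysWithinBudget() {
            OrderService service = new OrderService(); // hypothetical component under test

            long start = System.currentTimeMillis();
            service.createOrder("ABC-123", 2);         // hypothetical call being timed
            long elapsed = System.currentTimeMillis() - start;

            assertTrue("createOrder took " + elapsed + " ms", elapsed <= 200);
        }
    }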

  19. What is Performance and how is it different from Scalability?

  20. Performance Isn’t Scalability • Performance is a measure of the capabilities of your application • Measures things like response time, resource utilization, network traffic • Scalability is a measure of the capacity of your application • How can your application scale? • If you need to support 10,000 simultaneous users, can your application run properly in a clustered environment that can support that load?

  21. End-User Response Time • Consider what happens if you measure performance solely by end-user response time • What is faster: a CGI script or a full J2EE application? • Which is easier to represent complex business logic with? • Which is easier to support transactional business processes with? • Which can scale to support thousands of simultaneous users?

  22. J2EE = Scalability • Total application performance is a balance between raw performance and scalability • Infrastructure and layers of abstraction reduce raw performance but increase scalability • J2EE applications built with the proper design patterns = scalability

  23. Measuring Scalability • Unlike performance, we cannot quantify a measurement of the scalability of an application • We can, however, test for scalability • We can measure the capacity on a single environment and then deploy our application to a cluster and test the cumulative capacity • Deploying on two servers != twice the capacity of an individual server
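
For example, with hypothetical numbers: if a single server sustains 1,000 transactions per second but a two-node cluster sustains only 1,700 tps, the cluster delivers 85% scaling efficiency (1,700 / 2,000) rather than double the single-server capacity.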

  24. How can I measure Performance and Scalability?

  25. What do we measure? • Application Performance Metrics • Application Server Performance Metrics • Platform Metrics • Dependency Metrics

  26. Inserted Instrumentation • Extra code inserted into application to measure performance • Custom: inserted by developer • Automatic: inserted by a tool • Nature of J2EE allows instrumentation to be inserted • Byte code insertion • Adds to core app server measurements • For older app servers, BCI used to gather app server metrics • Resident for lifetime of class • [Layer diagram: J2EE / JVM / OS]

  27. Custom Instrumentation • Developer inserts custom measurement code • e.g. ARM (Application Response Measurement) or a manufacturer API • Send results to a central recording mechanism • Which you also have to write…

      void myMethod() {
          long start = System.currentTimeMillis();
          for (int i = 0; i < 10; i++) {
              System.out.println("Wasting time " + i);
          }
          long end = System.currentTimeMillis();
          // Recorder is the central recording mechanism mentioned above
          Recorder.getRecorder().addMethodTime("test.myMethod", end - start);
      }

  28. Custom Instrumentation • Value: • Customized results • Drawbacks: • How is the data gathered? • Log file, expose as MBean • Impact: • Adds overhead to code • More than automatic instrumentation • Bottom Line: • Similar to log analysis • Except log analysis doesn’t require central infrastructure • Good way to measure method timings for a small number of critical methods • Bad for large number of methods • Great if API + infrastructure provided by tool vendor • Less work for you

  29. Automatic Instrumentation • Measurement code is inserted automatically • Sends results to a central recording mechanism

      void myMethod() {
          // Callbacks inserted around the original method body by the tool
          if (recording) {
              Recorder.getRecorder().enterCallback(this);
          }
          for (int i = 0; i < 10; i++) {
              System.out.println("Wasting time " + i);
          }
          if (recording) {
              Recorder.getRecorder().exitCallback(this);
          }
      }

  30. Instrumentation Data • Variety of data can be gathered • Call counts • Method exclusive time • Time spent in the method • Method cumulative time • Time spent in the method and all methods it calls • Good for SLAs, back-end response times • Exceptions thrown • Exceptions are caught by instrumentation, then rethrown • Bytes transferred/serialized in RMI • Stack information • Generate call trees after collection (post-processing) • Averages, min/max, standard deviation
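
To make the exclusive vs. cumulative distinction concrete (hypothetical numbers): if methodA takes 500 ms from entry to exit and spends 420 ms of that inside a call it makes to methodB, then methodA's cumulative time is 500 ms while its exclusive time is 80 ms. Cumulative time is what an SLA or back-end response-time report cares about; exclusive time points at where the work is actually done.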

  31. Instrumentation Levels: Full • Full Instrumentation • Instrument all methods • Filter out application server methods • Allow custom filters to reduce method set • [Call tree diagram: every class in the request path is instrumented, e.g. MainServlet, MyJSP, AttrTag, DetailTag, MyEJBHome, MyEJB, SomeUtil, OtherUtil]

  32. Instrumentation Levels: Component • J2EE perimeter • Instrument only classes that are J2EE objects • Services (JMS, JNDI, JDBC) • EJB home and EJB methods • Servlet and JSP processing • Better for diagnosis • [Call tree diagram: only the J2EE perimeter classes, such as MainServlet, MyJSP, MyEJBHome, and MyEJB, are instrumented; utility classes like SomeUtil and OtherUtil are not]

  33. Application Server Metrics • App Server Metrics • Metrics provided by application server • Varies by application server • Metric Taxonomy • J2EE Components • Servlets, JSPs, EJBs • Utilization • Response times • Services • JDBC, JMS, JCA, JNDI • Utilization • Transactions • Threading/queueing • General • Configuration • JVM • Web • Metric access varies • API, utility program, database table, web console, thick client • [Layer diagram: J2EE / JVM / OS]
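
As one example of the "API" access route: where an application server exposes its metrics over JMX, a remote client can read them with the standard javax.management API. The service URL, MBean name, and attribute below are placeholders; the actual names vary by vendor.

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class AppServerMetricReader {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; real app servers document their own JMX endpoints
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://appserver:9999/jmxrmi");

            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection conn = connector.getMBeanServerConnection();
                // Placeholder MBean and attribute; vendor-specific in practice
                ObjectName pool = new ObjectName("vendor:type=ThreadPool,name=http");
                Object busy = conn.getAttribute(pool, "BusyThreads");
                System.out.println("Busy threads: " + busy);
            } finally {
                connector.close();
            }
        }
    }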

  34. Application Server Metrics + Model • [Diagram: the J2EE system model from slide 11, with client handling, execution management, applications, services, and the JVM over CPU, memory, disk, and network, annotated with the R, U, and X metrics the application server exposes at each layer]

  35. Application Server Metrics • Value: • Under-the-covers look at J2EE • Covers R, U, X in all parts of the model • J2EE components, Services, Transactions, Threading/Queuing, General (Web, JVM) • Component-specific measurements • Different collection techniques • Utility program, console, API • Drawbacks: • Lots of data, hard to navigate • Data not tied to application or to individual requests • Some app servers provide little • No standards (yet) • Impact: • Cost can vary from free to significant

  36. Application Server Metrics • Bottom Line: • Great value in the data • Reasonable coverage on latest app servers • Best way to get JVM data • Trend is for this data to get better • Multiple access points • Standards would be nice • Must understand the cost of these metrics • Don’t necessarily come for free

  37. Other Server Metrics • Other servers have metrics • Database • Web Server • Messaging Server (MQSeries) • SNMP/MIB • Treat like OS metrics • Correlate to J2EE metrics • e.g. JDBC execute time with SQL query time from the database
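
A simple way to get the JDBC side of that correlation is to time the execute call itself; the sketch below uses standard JDBC, with a placeholder connection URL, credentials, and query.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class JdbcTiming {
        public static void main(String[] args) throws Exception {
            // Placeholder URL, credentials, and query for illustration only
            Connection con = DriverManager.getConnection(
                    "jdbc:mydb://dbhost/orders", "user", "password");
            try {
                PreparedStatement ps = con.prepareStatement(
                        "SELECT COUNT(*) FROM orders WHERE status = ?");
                ps.setString(1, "OPEN");

                long start = System.currentTimeMillis();
                ResultSet rs = ps.executeQuery();
                long elapsed = System.currentTimeMillis() - start;

                rs.next();
                System.out.println("Rows: " + rs.getLong(1) + ", JDBC execute time: " + elapsed + " ms");
            } finally {
                con.close();
            }
        }
    }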

  38. Measuring Scalability • The key to measuring how well your application will scale is obtaining representative user transactions • You need to understand how your users are going to use your application • You need to understand how your application server is going to be used when your users are using your application (recall that your applications won’t run in isolation!) • Set up a pre-production environment similar to your production environment • Load test with representative transactions and observe
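
A real load test normally uses a dedicated tool, but as a minimal sketch of the idea (hypothetical URL, arbitrary thread and request counts), concurrent users can be simulated with plain threads issuing HTTP requests and recording each response time.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MiniLoadTest {
        public static void main(String[] args) throws Exception {
            final URL url = new URL("http://appserver:8080/store/checkout"); // placeholder URL
            final int users = 10, requestsPerUser = 20;

            Runnable user = new Runnable() {
                public void run() {
                    for (int i = 0; i < requestsPerUser; i++) {
                        try {
                            long start = System.currentTimeMillis();
                            HttpURLConnection con = (HttpURLConnection) url.openConnection();
                            InputStream in = con.getInputStream();
                            while (in.read() != -1) { /* drain the response */ }
                            in.close();
                            long elapsed = System.currentTimeMillis() - start;
                            System.out.println(Thread.currentThread().getName() + ": " + elapsed + " ms");
                        } catch (Exception e) {
                            System.out.println("Request failed: " + e);
                        }
                    }
                }
            };

            for (int i = 0; i < users; i++) {
                new Thread(user, "user-" + i).start();
            }
        }
    }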

  39. Bringing the Team Together (Solidifying communication between Production and Testing)

  40. Testing in a Production-like Environment • In order for your testing to be successful, your testers must know how the production environment will run • Testing in isolation is not sufficient! • Testing teams and production teams must work together to ensure success • Successful deployments rely on communication between development, QA, and production

  41. PerformaSure

  42. PerformaSure Call Tree • Shows the path of a request through an application • From HTTP to SQL • Cumulative response time shows critical path • Individual response time shows critical point • Popup windows showing details

  43. PerformaSure Metrics View • See application, application server, operating system, database, and web server metrics • Dynamically add metrics to the same graph to see them side-by-side • View metrics as areas, bars, or lines

  44. Attend a PerformaSure Web Cast • Presented every Thursday, 1:00pm PST / 4:00pm EST • http://www.quest.com/events/webcast_index.asp

  45. Thank you http://www.quest.com
