
Performance Testing Process






Presentation Transcript


  1. Performance Testing Process SASQAG March 2007 Emily Ren T-Mobile

  2. Why Do We Need Performance Testing? • Before release, managers need to know: • Do we have enough hardware? • Can we handle the target load? • How many users can we handle? • Is the system fast enough to make customers happy?

  3. Nature of Performance Testing • It is very different from functional testing, and a very challenging job • It requires stellar cooperation and coordination: it is a whole-team effort! • Automation tools are very powerful, but expensive and complex; training is needed • It can be fun too!

  4. Why Do We Need Performance Testing? • The failure of an application can be costly • Assure performance and functionality under real-world conditions • Locate potential problems before our customers do • Reduce development time – multiple rounds of load testing • Reduce infrastructure cost

  5. When We Do It • During design and development • What is the best server to support the target load? • Define system performance requirements • Before release • Is the system reliable enough to go into production? • After functional testing is done • Post-deployment • What is the cause of performance degradation?

  6. What we are doing Performance testing before release: • Application response times - How long does it take to complete a task? • Configuration sizing - Which configuration provides the best performance level? • Capacity planning - How many users can the system handle? • Regression - Does the new version of the software adversely affect response time? • Reliability - How stable is the system under heavy workload?
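Capacity planning of the kind described above is often estimated with Little's Law, which relates concurrent users, throughput, and per-transaction time. A minimal sketch, using made-up illustration numbers (not figures from the slides):

```python
# Capacity estimate via Little's Law: N = X * (R + Z), where
#   N = concurrent users, X = throughput (transactions/sec),
#   R = response time (sec), Z = think time (sec).
# All numeric values below are invented for illustration.

def concurrent_users(throughput_tps: float,
                     response_time_s: float,
                     think_time_s: float) -> float:
    """Concurrent users the system sustains at a given throughput."""
    return throughput_tps * (response_time_s + think_time_s)

# e.g. 50 tx/s with a 2 s response time and 8 s think time:
print(concurrent_users(50, 2, 8))  # 500.0
```

Read the other way around, the same relation answers "how many users can we handle?": measure the sustainable throughput under load, multiply by the full transaction cycle time.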

  7. Load Testing Process Plan Test → Create Scripts → Scenario Creation → Scenario Execution → Result Analysis → Performance Tuning

  8. Perf. Test Planning Documents • Performance Testing Initial Assessment - Pre-test-plan document - Helps the project team brainstorm their test scope • Performance Test Request Form - Detailed information on the whole performance testing process, including goals, environment, business processes, performance requirements (e.g., response time), usage information, internal support team, etc.

  9. What we are doing 1. Test Planning - Before we run load testing - Set goals • Measure application response time • Configuration sizing • Capacity planning • Regression • Reliability - Type of testing • Load Testing (system performance testing with the SLA target load) • Stress Testing (capacity testing to find the breaking point) • Duration Testing (reliability testing of the system under sustained load)

  10. What we are doing – Cont. - Identify usage information - Business Profile • Which business processes to use • BA, Dev team responsible for definition • Isolate peak load and peak time • BA, Dev, application support responsible for definition • Document user actions and input data for each business process • SME/Functional Testing team responsible for creation of business process document

  11. Sample : Business Profile 1- HR App.Business Processes

  12. Sample : Business Profile 2 – eCommerce Business Processes

  13. What we are doing – Cont. - The Business Profile is the basis for load testing • It is the traffic model of the application • The better the documentation of the business processes, the better the test scripts and scenarios • It saves time on script and scenario creation • A good business profile makes it possible to reuse existing load testing scripts and results later

  14. What we are doing – Cont. 2. Create Scripts - Automate business processes in LoadRunner VUGen (Virtual User Generator): - Scripts are C/C++-like code - Scripts differ by protocol/technology - LoadRunner has about 50 protocols, including WAP • Record user actions • Needs assistance from the SME/Functional Testing group • Add programming and test data to the scripts • E.g., add correlation to handle dynamic data such as session IDs • Test data may need a lot of work from the project team
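The "correlation" step above can be pictured outside of VUGen as capture-then-reuse: pull a dynamic value (like a session ID) out of one response and inject it into the next request. A hedged Python sketch with an invented HTML snippet and invented field names, illustrating the idea rather than LoadRunner's actual API:

```python
import re

# Illustration of correlation: capture a dynamic value from a
# response and feed it into the following request. The markup,
# field names, and values below are all invented for this sketch.

def extract_session_id(response_body: str) -> str:
    """Capture the dynamic session id from a server response."""
    match = re.search(r'name="session_id" value="([^"]+)"', response_body)
    if match is None:
        raise ValueError("session_id not found - correlation failed")
    return match.group(1)

def build_next_request(session_id: str, username: str) -> dict:
    """Parameterized request data (the {UserName}-style substitution)
    plus the correlated session id."""
    return {"session_id": session_id, "j_alias": username}

mock_response = '<input name="session_id" value="A1B2C3">'
sid = extract_session_id(mock_response)
print(build_next_request(sid, "user01"))  # {'session_id': 'A1B2C3', 'j_alias': 'user01'}
```

If the capture fails, the script raises immediately; surfacing correlation failures early is exactly why this programming work is added to recorded scripts.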

  15. Sample Script
  web_submit_data("logon.sap",
      "Action=http://watstwscrm02:50000/bd/logon.sap",
      "Method=POST",
      "RecContentType=text/html",
      "Referer=http://watstwscrm02:50000/bd/startEBPP.sap",
      "Snapshot=t3.inf",
      "Mode=HTML",
      ITEMDATA,
      "Name=login_submit", "Value=true", ENDITEM,
      "Name=j_authscheme", "Value=default", ENDITEM,
      "Name=j_alias", "Value={UserName}", ENDITEM,
      "Name=j_password", "Value=coffee@2", ENDITEM,
      "Name=j_language", "Value=EN", ENDITEM,
      "Name=AgreeTerms", "Value=on", ENDITEM,
      "Name=Login", "Value=Log on", ENDITEM,
      LAST);

  16. What we are doing – Cont. 3. Create Test Scenario - Build the test scenario according to the usage information in the Business Profile • Load calculation • Can use rendezvous points, IP spoofing, etc. - Run-time settings • Think time • Pacing • Browser emulation: simulate browser cache, new user each iteration • Browser version, bandwidth, etc.
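Pacing, one of the run-time settings above, ties the load calculation to the scenario: to hit a target transaction rate with a fixed number of virtual users, each vuser must start a new iteration at a computed interval. A small sketch with illustrative numbers (not values from the slides):

```python
# Pacing sketch: with V virtual users targeting X transactions/sec,
# each vuser must begin an iteration every V / X seconds.
# The numbers below are invented for illustration.

def pacing_interval_s(vusers: int, target_tps: float) -> float:
    """Seconds between iteration starts for each virtual user."""
    return vusers / target_tps

# 100 vusers targeting 20 transactions/sec -> one iteration
# every 5 seconds per vuser:
print(pacing_interval_s(100, 20))  # 5.0
```

Pacing set this way keeps the arrival rate steady regardless of response time, whereas think time alone lets throughput drift as the system slows down.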

  17. What we are doing – Cont. 4. Execute Load Testing • Execute test scenarios with automated test scripts in LoadRunner Controller • Isolate top-time transactions under low load • Overdrive test (120% of full load) to isolate SW & HW limitations - Work with the Internal Support Team to monitor the whole system, e.g., web server, DB server, middleware, etc.
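Conceptually, the Controller/Load Generator pair above runs many virtual users in parallel and collects each transaction's response time. A toy stand-in in Python threads, with a dummy transaction body instead of a real protocol script:

```python
import threading
import time

# Toy scenario runner: each "virtual user" is a thread executing a
# dummy business process and recording its response time. This is
# an illustration of the concept, not LoadRunner's mechanism.

results = []                     # per-transaction response times
results_lock = threading.Lock()  # protect shared results list

def business_process():
    time.sleep(0.01)  # stand-in for a real request/response round trip

def vuser(iterations: int):
    for _ in range(iterations):
        start = time.perf_counter()
        business_process()
        elapsed = time.perf_counter() - start
        with results_lock:
            results.append(elapsed)

threads = [threading.Thread(target=vuser, args=(3,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} transactions, avg {sum(results) / len(results):.3f}s")
```

The same skeleton makes the overdrive test concrete: scale the thread count to 120% of the full-load figure and watch where the recorded times degrade.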

  18. Example Parameters to Monitor • System - % total processor time • Memory - page faults/sec • Server work queues - bytes transferred/sec • HTTP responses • Number of connections • The support team will have better ideas of what to monitor • An individual write-up is highly suggested as part of the test report • Need to get CSV files, then import them into LoadRunner
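Before those monitor CSV files go into LoadRunner Analysis, it often helps to sanity-check them. A sketch that averages one counter column from a perfmon-style export; the column names and sample values are invented:

```python
import csv
import io

# Sanity-check a monitor CSV (e.g. a Windows perfmon export) by
# averaging a counter column. Headers and values are invented
# illustration data, not a real export.

perfmon_csv = """timestamp,% Processor Time,Page Faults/sec
10:00:00,35.0,120
10:00:05,55.0,340
10:00:10,60.0,95
"""

def column_average(csv_text: str, column: str) -> float:
    """Mean of one numeric column in a CSV with a header row."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return sum(float(r[column]) for r in rows) / len(rows)

print(column_average(perfmon_csv, "% Processor Time"))  # 50.0
```

A quick pass like this catches truncated exports or mislabeled counters before the data is merged with the load test results.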

  19. What we are doing – Cont. 5. Analyze Test Results - Analysis - Collect statistics and graphs from LoadRunner - Report results - Most commonly requested results: • Transaction response time • Throughput • Hits per second • HTTP responses • Network delay • Server performance - Merge graphs to make them more meaningful: • Transaction response time under load • Response time/Vusers vs. CPU utilization • Cross-scenario graphs
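Of the results listed above, transaction response time is usually reported as a percentile rather than an average, since a few slow outliers matter more to users than the mean. A sketch of the nearest-rank 90th percentile over invented sample timings:

```python
# Result-analysis sketch: percentile response time, one of the most
# commonly reported load test statistics. Sample timings below are
# invented for illustration.

def percentile(samples, pct):
    """Nearest-rank percentile: the value at rank ceil(pct/100 * n)."""
    ordered = sorted(samples)
    rank = -(-len(ordered) * pct // 100)  # ceiling division
    return ordered[max(int(rank), 1) - 1]

response_times = [0.8, 1.1, 0.9, 2.4, 1.0, 1.3, 0.7, 5.2, 1.2, 1.1]
print(percentile(response_times, 90))  # 2.4
```

Here the mean (~1.6 s) hides the 5.2 s outlier, while the 90th percentile of 2.4 s is a more honest number to compare against an SLA.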

  20. What we are doing – Cont. 6. Test Report - Don't send LoadRunner results and graphs directly - Send a summary to the whole team - Report key performance data and back-end performance data - Add notes for each test run - Keep a test history so the team can compare test runs

  21. What we are doing – Cont. 7. Performance Tuning - Help identify the bottlenecks and degradation points to build an optimal system - Hardware, configuration, database, software, etc. - Drill down on transaction details, e.g., webpage breakdown - Diagnostics - Show the Extended Log to the dev team • Data returned by server • Advanced Trace: logs of all Vuser messages and function calls

  22. What we are doing – Cont. 8. Communication Plan - Internal Support Team: PM, BA, environment / development / architect, network, DBA, functional test lead, etc. - Resource plan

  23. Timeline/Activities - Example • Test Planning, Script Creation – 4 weeks • Test Execution – 4 weeks • Trial run - 2 days • Round 1 – Load Testing: response time with SLA target load: 1 week • Round 2 – Stress Testing: find the breaking point: 1 week • Round 3 – Duration (Reliability) test: 2 days • More performance tuning – 3 days • Document and deliver final report – 2-3 days

  24. Projects • All performance testing projects in T-Mobile's IT dept • 40+ projects in <3 years • The standard Performance Testing Process has worked very well on all projects

  25. Automation Tools - Mercury LoadRunner • Scripting: VUGen (Virtual User Generator) • Performance test execution: • Controller – builds test scenarios according to the business profile and load calculation • Load Generator – runs virtual users • Performance test result analysis: • Analysis – provides test reports and graphs, and summarizes system performance

  26. Automation Tools – Performance Center • Web-enabled global load testing tool: the Performance Testing team can manage multiple, concurrent load testing projects across different geographic locations • User Site – conduct and monitor load tests • Privilege Manager – manage user and project access rights • Administration Site – overall resource management and technical supervision

  27. Automation Tools - Diagnostics • Pinpoint root cause • Solve tough problems: • Memory leaks and thrashing • Thread deadlock and synchronization • Instance tracing • Exceptions

  28. Diagnostics Methodology in Pre-production • Start with monitoring of business process • Which transactions are problematic • Eliminate system and network components • Infrastructure monitors and metrics • Isolate application Tier and method • Triage (using Transaction Breakdown) • Correct behavior and re-test

  29. Broad Heterogeneous Platform Support • WebSphere J2EE/Portal Server • WebLogic J2EE/Portal Server • JBoss, Tomcat, JServ • Oracle Application Server J2EE • MS .NET • Generic/Custom Java • SAP NetWeaver J2EE/Portal • Oracle 11i Applications • Siebel

  30. Performance Engineering - Bridge the Gap • 80% of IT organizations experience failures in apps that passed the test phases and rolled into production • HyPerformix – Performance Engineering • Product line: Designer, Optimizer, and Capacity Manager • HyPerformix Optimizer (capacity planning) can bridge the gap between testing and production environments and leverage load test data to accurately show how the application will perform in production

  31. Performance Engineering - HyPerformix Optimizer • Configuration sizing, capacity planning • Create production-scale models – the Perf. Test team and Architect team work together • Load test and production perf. data are seamlessly integrated with Optimizer • Ensure capacity is matched to current and future business requirements • Reduce risk before application deployment

  32. What Can Performance Testing Do for the Business? • Performance testing is critical: competition in the market is high, customer switching cost is low, and the cost of keeping customers is high • Performance testing can protect revenue by helping to isolate and fix problems in the software infrastructure • Improve availability, functionality, and scalability of business-critical applications • Ensure products are delivered to market with high confidence that system performance will be acceptable • Proactive performance testing can decrease the cost of production support and help desk • A good Performance Testing Process is essential to get performance testing done right and on time!

  33. Questions? Emily.Ren@T-Mobile.com EmilyRen2002@yahoo.com Tel: (425)748-6655 (desk) (425)922-7100 (cell)
