
Essence of Performance and Load Testing


Presentation Transcript


  1. Essence of Performance and Load Testing April 30th, 2009 Gihan B Madawala Argus Technologies Ltd, CREO Products Inc, Chancery Software Ltd, Meridian Project Systems Ltd, Modular Mining Systems Canada Ltd.

  2. Topics • Why do we need to test performance • Myths surrounding performance testing • Performance / Scalability testing • Single User Performance testing • Worst-case scenario • Performance bottlenecks • Performance and Scalability Targets • Performance Testing Lab • Performance Team

  3. Topics Contd. • Microsoft Testing Tools • Commercial Performance Testing Tools • Process of analyzing performance and scalability issues • Tool selection strategy • Problems with manual perf testing • Tool Evaluation Process • Some performance-related issues/solutions • Books & Reference materials

  4. Bad comments/Questions/Answers • It’s the first release of this feature, so of course it is going to be slow! • It only takes a couple of seconds, so what is the big deal? • Q: Why is this page so slow? A: It is because we have TWO users in the system • Q: How many users can we support with this system? • A: 20 users per server

  5. Good comments/Questions/Answers • “The technology of the product is based upon a solid, current architecture and is easy to work with.” • “The system is flexible to build upon and can be managed easily by our personnel.” • The product has the architecture required to function in large projects. It not only performs well, but it scales. • Q: How many users can we support with our system? • A: 400 users per server

  6. Why do we need to test performance • Studies show performance is one of the most important qualities of an application • Experience shows performance issues are the most likely to “bite with surprise” when released into live operation • There is a lack of knowledge on performance & scalability testing

  7. Myths surrounding Performance testing • We do not need to measure system performance • Our system is incredibly impressive with all its features; users will not mind if it is a little slow • We do not really need precise measurement goals • Performance testing has to be done late in the project • Anyone can measure performance; it does not require any skills or expertise • Any test lab/tool is about the same as any other one, and you can mimic real-life usage of the app

  8. Performance testing should start early in the project and should be an ongoing process

  9. Performance testing / Scalability Testing • The purpose of performance testing is to measure the system’s performance against agreed requirements • Scalability testing (load testing, stress testing) is the measurement of performance under heavy load: peak or worst-case conditions • Performance is usually measured as a weighted mix of response time, throughput (network bandwidth) and availability • Response time – a measure of how long the system takes to complete a task or group of tasks
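
For illustration only (the presentation itself does not prescribe a tool or language), a minimal Python sketch of measuring response time and throughput against a single endpoint; the URL and request count are hypothetical.

```python
# Minimal sketch: time each operation, then derive average/max response
# time and throughput. Endpoint and request count are illustrative only.
import time
import urllib.request

URL = "http://localhost:8080/report"   # hypothetical endpoint
REQUESTS = 50

durations = []
start = time.perf_counter()
for _ in range(REQUESTS):
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()                    # complete the task before stopping the clock
    durations.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"avg response time: {sum(durations) / len(durations):.3f} s")
print(f"max response time: {max(durations):.3f} s")
print(f"throughput: {REQUESTS / elapsed:.1f} requests/s")
```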

  10. Single user performance testing • How important it is to measure single-user performance • Gives an early indication of system design feasibility • Can be measured with the existing set of tools available • The cost of single-user testing is low, and it can be done prior to multi-user testing

  11. Worst-case scenario • If the tests pass and the system scales well in the worst case, we know for sure that the application will perform better under a lighter load. • Full data population is necessary • Project Ex: (worst-case) - Larger data packet size - More frequent data packets - More client threads
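
A minimal sketch of driving the worst case described above, in Python rather than a commercial load tool; the endpoint, packet size, thread count and send interval are assumed values, not limits taken from the project.

```python
# Illustrative worst-case load: larger payloads, higher send frequency and
# more client threads than the expected production mix. All parameters here
# are assumptions for the sketch.
import threading
import time
import urllib.request

URL = "http://localhost:8080/ingest"      # hypothetical ingest endpoint
PACKET_BYTES = 64 * 1024                  # larger-than-typical data packet
CLIENT_THREADS = 200                      # more clients than the normal case
SEND_INTERVAL = 0.1                       # more frequent than the normal case
RUN_SECONDS = 60

payload = b"x" * PACKET_BYTES

def client():
    deadline = time.time() + RUN_SECONDS
    while time.time() < deadline:
        req = urllib.request.Request(URL, data=payload, method="POST")
        urllib.request.urlopen(req).read()
        time.sleep(SEND_INTERVAL)

threads = [threading.Thread(target=client) for _ in range(CLIENT_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("worst-case run complete")
```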

  12. Performance and Scalability Targets • Achieve the specified Response time for one user • Maintain the response time within a specified range when the system is under load • Maximize the number of active users per available bandwidth • Minimize network bandwidth usage • CPU usage of machines should be < 80% under heavy load

  13. Performance Bottlenecks • Client CPU • Database Server CPU • Client, Server Memory • Network Bandwidth • Disk I/O • Transaction processing speed • Serialization and de-serialization speed • Network latency • Limitations of the 2nd/3rd party servers/add-ins

  14. Performance Testing Lab • Simulation of realistic scenario • Having adequate hardware • Having feature rich testing tools • Test Automation and Unit tests • Performance testing expertise • Creating real test data

  15. Performance Team • Performance Test Engineer • Senior Developer • Database Specialist • Hardware Engineer • Network Specialist

  16. Microsoft Testing Tools • PERFMON • SQL Profiler • ACT (Application Center Test) • WAST (Web Application Stress Tool) • TEAM TEST (Visual Studio Team System) • WinDbg / DDK

  17. Some Industry Leading Performance and Scalability Tools • LoadRunner (Mercury, now HP) * • Silk Performer (Segue, now Borland) * • QA Load • ANTS • OpenSTA (open source) • e-Load (Empirix) • Shunra • Spirent TestCenter

  18. Process of Analyzing Performance and Scalability Issues • Non-invasive actions (low-hanging fruit): 1. Analysis of performance counters 2. Profiling 3. Load-testing tool analysis, in two steps – run concurrent users; run a specific number of users continuously 4. Simulate different network conditions 5. Trace and debug .NET components 6. Use of WinDbg / DDK • Invasive actions: 1. Refactoring 2. Architectural change
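
As an illustration of step 1 above (non-invasive counter analysis), a small Python sketch that samples CPU and memory while a test runs; it uses the psutil library as a stand-in for PerfMon, and the sampling interval and the 80% CPU target (from slide 12) are the only assumed thresholds.

```python
# Sketch of the non-invasive first step: sample basic performance counters
# during a test run and compare the peak against the agreed CPU target.
import psutil

SAMPLE_SECONDS = 60
INTERVAL = 5

samples = []
for _ in range(SAMPLE_SECONDS // INTERVAL):
    cpu = psutil.cpu_percent(interval=INTERVAL)   # % CPU averaged over the interval
    mem = psutil.virtual_memory().percent         # % physical memory in use
    samples.append((cpu, mem))
    print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%")

peak_cpu = max(c for c, _ in samples)
print(f"peak CPU during run: {peak_cpu:.1f}% "
      f"({'OK' if peak_cpu < 80 else 'above the 80% target'})")
```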

  19. Tool selection Strategy

  20. Problems with Manual Performance testing in the Lab • Difficult to mimic concurrent operations • Cannot be repeated easily • Difficult to set up and organize • The lab could be expensive • Very time consuming • Cannot afford to have large-scale performance labs in every development location • Full NFRs cannot be tested

  21. Commercial tools vs Internal tools • Consider the scope of features needed • Estimate the ROI • How soon you need the tool • Does the test team have performance-testing expertise • Customer requirements for benchmarks and proper test results

  22. Tool Evaluation Process • Phase 1 – Fact finding of requirements, suitable vendors and preparing a proposal for project stakeholders – 5 weeks • Phase 2 - Tool demonstrations in lab – 6 weeks (2 weeks for each vendor) • Phase 3 – Complete Evaluation Process – 1 week • Phase 4 – Negotiation and Purchase of a tool – 4 weeks • Phase 5 – Tool deployment – 1 week

  23. Issue # 1 – Testing Non Functional Requirements Unless it is tested, we cannot guarantee that the application is going to work for the stated NFRs. That is, if we say that our application can support 500 mobile clients, we need to test these scenarios before claiming the limits of our system. We cannot test with 50 clients and say that the system should work with 500 clients by extrapolating the performance parameters.

  24. Solution #1: Always test/simulate full NFRs • Don’t extrapolate single-user data to multiple users • Try to test with the worst-case scenario • Different bottlenecks restrain the system from achieving better performance/scalability • Use a proper testing lab • Use tools to simulate and test full NFRs

  25. Issue # 2 – Cost of fixing Performance Defects We need to start testing performance and scalability early in the development life cycle. The earlier you do this, the easier it is to find solutions and the less costly they are to the project.

  26. Solution #2: Reducing Project Costs

  27. Issue # 3 – Unable to gauge performance Performance optimizations shouldn’t be integrated haphazardly. We need to have mechanisms and tools to monitor performance and scalability regularly. If not with weekly builds, we should at least try to monitor this once a month with a set of test scenarios covering a good part of the application’s operations. It would be ideal if we could automate this.
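
A hedged sketch of such a recurring check in Python: run a fixed scenario with each weekly or monthly build, compare against a stored baseline, and fail on regression. The scenario body, baseline file name and 20% threshold are placeholders, not values from the presentation.

```python
# "Heartbeat" performance check: run an agreed scenario, compare its
# duration against a stored baseline, and fail the build on regression.
import json
import time
from pathlib import Path

BASELINE = Path("perf_baseline.json")
REGRESSION_THRESHOLD = 1.20            # fail if more than 20% slower than baseline

def scenario() -> float:
    """Run the agreed test scenario and return its duration in seconds."""
    t0 = time.perf_counter()
    # ... drive the key operations of the application here (placeholder) ...
    return time.perf_counter() - t0

duration = scenario()

if BASELINE.exists():
    baseline = json.loads(BASELINE.read_text())["duration"]
    if duration > baseline * REGRESSION_THRESHOLD:
        raise SystemExit(
            f"performance regression: {duration:.2f}s vs baseline {baseline:.2f}s")
    print(f"within target: {duration:.2f}s (baseline {baseline:.2f}s)")
else:
    BASELINE.write_text(json.dumps({"duration": duration}))
    print(f"baseline recorded: {duration:.2f}s")
```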

  28. Solution #3: Regular Performance Measurements – Heartbeat of the project

  29. Issue # 4 – Unrealistic/Untested NFRs We need to better scrutinize the NFRs coming from Marketing to really understand whether we can support them with the technology and the platform that we select for the product.

  30. Solution #4: Verify NFRs

  31. Issue # 5 – Agile Performance Testing If an Agile method has been adopted for the development process, try to do performance and scalability testing in each sprint, tied to user stories. This gives developers immediate feedback on the performance status of that particular area.

  32. Solution # 5 – Traditional Performance Testing

  33. Solution # 5 – Agile Performance Testing

  34. Issue # 6 – Testing with unrealistic data We need to test systems with data as realistic as possible. We should create different sizes/profiles of databases and benchmark results for each category. We can develop these data sets together with the development team, as this is important for them as well.

  35. Solution # 6 – Data Generation • We need to programmatically generate application data while keeping data integrity • Do this for different profiles (standard, heavy and ceiling) • Update and maintain this data • This is a responsibility of both the test and dev teams
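
A minimal Python/SQLite sketch of generating data for the three profiles while preserving referential integrity; the schema, row counts per profile and the 3-orders-per-customer ratio are illustrative assumptions, not the project's real data model.

```python
# Generate test databases for the standard, heavy and ceiling profiles.
# Each profile uses the same generator, only at a different scale, and
# every order references an existing customer (integrity preserved).
import random
import sqlite3

PROFILES = {"standard": 10_000, "heavy": 100_000, "ceiling": 1_000_000}

def populate(db_path: str, rows: int) -> None:
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")
    con.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, "
                "customer_id INTEGER REFERENCES customers(id), amount REAL)")
    con.executemany("INSERT INTO customers VALUES (?, ?)",
                    ((i, f"customer-{i}") for i in range(rows)))
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    ((i, random.randrange(rows), round(random.uniform(1, 500), 2))
                     for i in range(rows * 3)))          # ~3 orders per customer
    con.commit()
    con.close()

for profile, rows in PROFILES.items():
    populate(f"testdata_{profile}.db", rows)
    print(f"{profile}: {rows} customers generated")
```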

  36. Issue # 7 – Lack of customer feedback till the end of the project In many projects we find major issues because our process does not have room to get customer feedback early in the project.

  37. Solution # 7 – More input from Support Dept and Customers • Provide the Implementation/Support department with the working software/system before the application is transitioned, in order to collect feedback • Get customer feedback before the release if possible • Expose identified dev and test team members to real customer environments if possible

  38. Issue # 8: Growing customer demands • Once a customer buys your product, over time their requirements outgrow the original application requirements. Mostly this has to do with handling an increased amount of data. Some customers may require keeping even 10 years’ worth of data in the running system. This could be a major challenge if not planned for properly.

  39. Solution # 8: Capacity Planning • Collect data from customers/the field • Consolidate this data into 6-month / 1-year targets • Focus more on the most frequent & most important features • Get the Performance Team to start on these targets early • Do capacity planning for both software and hardware/network
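
A back-of-the-envelope Python sketch of turning collected field data into 6-month and 1-year capacity targets; every number in it is a placeholder, not data from the presentation.

```python
# Project user and data growth from observed field data into the
# 6-month and 1-year targets mentioned above. All figures are placeholders.
current_users = 400            # observed today
current_db_gb = 50             # observed today
monthly_user_growth = 0.05     # 5% per month, from customer/field data
monthly_data_growth_gb = 4     # from customer/field data

def project(months: int) -> tuple[float, float]:
    users = current_users * (1 + monthly_user_growth) ** months
    data = current_db_gb + monthly_data_growth_gb * months
    return users, data

for months in (6, 12):
    users, data = project(months)
    print(f"{months:>2} months: ~{users:.0f} users/server target, ~{data:.0f} GB data")
```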

  40. Microsoft Performance Testing Labs • Can be used for benchmarking • Can use testing tools with thousands of user licenses • Can draw on Microsoft domain experts on performance • A worthwhile investment towards improving system performance

  41. Books & Reference materials • MSDN articles • Pattern & Practices (Performance tuning MS .NET Applications) – By MS ACE Team) • Improving .NET Application Performance and Scalability – by Rico Mariani, Scott Barber)

  42. Discussion Q&A
