
Automated Capacity Testing Model

Presentation Transcript


  1. Automated Capacity Testing Model. Jose Fajardo, President, Octane Systems, Inc. Email: jfajardo@octanesystems.net. http://www.octanesystems.net

  2. Summary This presentation delineates a methodology for conducting capacity tests and introduces the stages involved in performing one. The author describes the crucial activities that should be performed and monitored before a capacity test takes place, and lays out a roadmap from the initial through the concluding stages that leads to the successful completion of a capacity test.

  3. Methodology The methodology comprises 14 stages: Stage 1: Identify the need • Stage 2: Specify the types of tests that will be conducted • Stage 3: Assemble the team (all resources) • Stage 4: Appoint an “owner” for the process • Stage 5: Have a kick-off meeting • Stage 6: Select the processes to be recorded • Stage 7: Automate scripts • Stage 8: Review the risk plan • Stage 9: Conduct a TRR (Test Readiness Review) • Stage 10: Execute the test • Stage 11: Analyze and interpret the results • Stage 12: Repeat tests if needed • Stage 13: Document findings in a report • Stage 14: Ensure all “open” issues are resolved

  4. Identify the need Why are you doing a capacity test? Possible reasons: • New hardware or servers • Changes to the underlying database (e.g., new tables, indices) • Newly written or modified interfaces, modified/added SQL queries • End-user complaints about system performance • Reconfigured routers • An upgraded or revamped LAN/WAN • Changes to the application environment (e.g., a new firewall, modified configuration) • Newly documented SLAs (Service Level Agreements) that must be met • Changes at the GUI level (e.g., a different GUI due to a new software version) • More end users will be added to the application in the production environment

  5. Types of testing Decide which type of capacity test is needed: • Performance • Load • Stress • Soak • Benchmark • Volume • Etc.
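To make the distinction between these test types concrete, the sketch below encodes a few illustrative load profiles as plain Python data; the user counts, durations, and ramp-up times are hypothetical assumptions for illustration, not figures from this methodology.

```python
# Illustrative load profiles for common capacity test types.
# All numbers are example assumptions; size them from your own SLAs and usage data.
TEST_PROFILES = {
    "performance": {"virtual_users": 100,  "duration_min": 30,  "ramp_up_min": 5,
                    "goal": "measure response times under expected load"},
    "load":        {"virtual_users": 500,  "duration_min": 60,  "ramp_up_min": 15,
                    "goal": "verify behavior at peak expected concurrency"},
    "stress":      {"virtual_users": 1200, "duration_min": 45,  "ramp_up_min": 10,
                    "goal": "find the breaking point beyond peak load"},
    "soak":        {"virtual_users": 300,  "duration_min": 480, "ramp_up_min": 30,
                    "goal": "detect leaks and degradation over a long run"},
}

def describe(test_type: str) -> str:
    p = TEST_PROFILES[test_type]
    return (f"{test_type}: {p['virtual_users']} users for {p['duration_min']} min "
            f"(ramp-up {p['ramp_up_min']} min) -- {p['goal']}")

if __name__ == "__main__":
    for t in TEST_PROFILES:
        print(describe(t))
```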

  6. Tools/Resources • Do you have automated test tools? To have repeatable tests, an automated software solution is needed; ensure that it supports the AUT (Application Under Test) • Where are the resources (people, equipment)? Assemble the team • If in-house expertise with the test tools is missing, either train resources or hire contractors • Testing resources will need to automate the test scripts and analyze the test results • Other resources will need to be cross-matrixed from other teams (e.g., DBA, network, hardware, SMEs, application experts, middleware engineers) • Is there sufficient hardware (RAM, computers) to emulate the end users? • Involve the monitoring groups in charge of the LAN, servers, etc.
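One way to answer the "is there sufficient hardware to emulate end users" question is a rough sizing calculation, sketched below in Python; the per-virtual-user memory figures and machine RAM values are assumptions that should be replaced with numbers measured for your specific test tool and protocol.

```python
import math

def load_generators_needed(virtual_users: int,
                           mem_per_vuser_mb: float = 10.0,
                           machine_ram_mb: int = 16384,
                           os_overhead_mb: int = 4096) -> int:
    """Rough count of load-generator machines needed to emulate the target users.

    mem_per_vuser_mb varies widely by tool and protocol (GUI-level virtual
    users can need far more memory than protocol-level ones), so measure it
    with a small trial run before trusting the estimate.
    """
    usable_mb = machine_ram_mb - os_overhead_mb
    vusers_per_machine = max(1, usable_mb // mem_per_vuser_mb)
    return math.ceil(virtual_users / vusers_per_machine)

if __name__ == "__main__":
    print(load_generators_needed(virtual_users=1000))           # protocol-level guess
    print(load_generators_needed(1000, mem_per_vuser_mb=150))   # GUI-level guess
```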

  7. Test Lab/Environment • Is the test bed environment production-sized? Is it shared with other teams? • Are scripts going to be developed and replayed in the same environment? • Is there a dedicated lab for the test? Does the lab have connectivity to the same LAN/WAN as the production environment? • Do the computers in the lab have the same configuration and operating environment? Do they have enough RAM to emulate the end users? • Have the AUT and the automated testing software been installed on all computers in the lab?

  8. Ownership and Accountability • Often an unknown: who owns and is responsible for the successful completion of the capacity test? • Appoint a lead with authority over the cross-matrixed resources and monitoring groups during the capacity test. • The lead schedules a meeting after the capacity test is completed to ensure closure of any “open” issues, and facilitates meetings to interpret the results of the test (graphs, charts, etc.). • The lead should hold a kick-off meeting to inform stakeholders of expectations, testing procedures, and the testing schedule (often neglected). • The lead should draft a list of resources and contact information for all participants in the capacity test, and set up a war room during test execution. • The lead should draft the schedule of activities, monitor their completion, and request test re-execution once problems are fixed.

  9. What is automated (the 80:20 rule) • 20% of the processes generate 80% of the traffic • Focus on the most important features/functionality of the application (e.g., most employees do time entry) • Provide templates to collect system traffic and user community profiles • Obtain system usage from historical data, if available • Produce task distribution diagrams for typical daily/weekly demand, seasonal demand, quarterly demand for end-of-quarter activities, peak demand, etc. • Focus on processes that are critical and heavy on server/database activity
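A minimal sketch of the 80:20 selection follows: given historical transaction counts per business process (the process names and counts below are hypothetical), it picks the smallest set of processes covering roughly 80% of the observed traffic, which are the candidates to record and automate first.

```python
# Pick the processes that generate ~80% of traffic from historical usage data.
def processes_to_automate(counts: dict, coverage: float = 0.80) -> list:
    total = sum(counts.values())
    selected, covered = [], 0
    for name, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(name)
        covered += n
        if covered / total >= coverage:
            break
    return selected

if __name__ == "__main__":
    weekly_transactions = {          # hypothetical usage data
        "time_entry": 52000, "timesheet_approval": 9000, "expense_report": 6000,
        "purchase_order": 2500, "vendor_maintenance": 400, "report_generation": 300,
    }
    print(processes_to_automate(weekly_transactions))
    # -> ['time_entry', 'timesheet_approval']: record and automate these first
```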

  10. Before Automation Begins • Ensure that the test bed is stable, all code is frozen, the application has successfully undergone functional testing, and the latest patches are installed • Test scripts are documented, and test script developers can get navigational support for the AUT from SMEs and business analysts • Data is available to parameterize the test scripts • The test script developer has the network topology diagram and the system architecture diagram for the AUT • The test script developer has dummy user IDs created for the emulated end users, with appropriate access/authorization levels • The test script developer understands the procedures for storing test scripts and the naming conventions • Select all processes that need to be recorded and properly sequence the order in which they will be recorded and played back
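Several of these readiness items (dummy user IDs, unique data for parameterization) can be prepared up front with a small script. The sketch below writes a hypothetical CSV data file for data-driven playback; the file name, column names, and ID formats are placeholders to adapt to whatever the chosen test tool expects.

```python
import csv

def write_parameter_file(path: str, num_vusers: int, id_prefix: str = "captest") -> None:
    """Generate dummy user IDs and unique order references for script parameterization.

    The column names and ID formats are placeholders; match them to whatever
    your test tool expects for data-driven playback.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["user_id", "password", "order_ref"])
        for i in range(1, num_vusers + 1):
            writer.writerow([f"{id_prefix}_{i:04d}", "ChangeMe123!", f"ORD-{i:06d}"])

if __name__ == "__main__":
    write_parameter_file("vuser_data.csv", num_vusers=1000)
```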

  11. Script Automation Guidelines • Set up rendezvous points (e.g., multiple end users press the “purchase” button at the same time); this may be needed for stress tests • Prevent data locking and data caching (parameterize with sufficiently many unique data values) • Record the process accurately (e.g., a script that purchases 1000 books and checks out in one iteration is NOT the same as a script that purchases 1000 books and checks out in 1000 iterations) • Correlate scripts (e.g., output data from one process becomes input data for another; the web session ID is maintained throughout a single session) • Emulate throughput consistently (1 emulated user = 1 end user) • Maintain appropriate wait times and think times • Have the necessary roles and authorizations for recording scripts • Record scripts at the GUI level to get end-to-end response times • For scripts recorded at the protocol level, ensure that the AUT’s database is not corrupted by improperly inserted records • Replay scripts with multiple sets of data, and have trial runs (proof-of-concept runs with minimal loads)
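As a concrete illustration of several of these guidelines (think times, parameterization with unique data, one emulated user per end user, and correlated session state), here is a minimal sketch using the open-source Locust load testing tool; the presentation does not prescribe a specific tool, and the /login, /search, and /purchase endpoints, payloads, and data file are hypothetical stand-ins for the AUT.

```python
import csv
import queue

from locust import HttpUser, task, between

# Shared pool of unique test data so concurrent virtual users do not collide
# on the same records (helps avoid data locking and caching effects).
order_data: queue.Queue = queue.Queue()
with open("vuser_data.csv", newline="") as f:
    for row in csv.DictReader(f):
        order_data.put(row)

class ShopUser(HttpUser):
    wait_time = between(3, 8)   # think time between actions, in seconds

    def on_start(self):
        self.data = order_data.get()   # unique credentials per virtual user
        # HttpUser keeps cookies per user, so session state is correlated
        # automatically across the requests below.
        self.client.post("/login", json={"user": self.data["user_id"],
                                         "password": self.data["password"]})

    @task(4)                           # weighted: browsing dominates traffic
    def search_catalog(self):
        self.client.get("/search", params={"q": "books"})

    @task(1)
    def purchase(self):
        self.client.post("/purchase", json={"order_ref": self.data["order_ref"]})
```

A run might then be started with standard Locust options such as `locust -f shop_user.py --users 500 --spawn-rate 10 --host https://aut.example.com`, where the host and user counts are again placeholders.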

  12. Critical Considerations • Contingency and risk plan: what happens if the network or a server crashes? Develop a risk mitigation strategy and review any previous lessons learned. • What will be monitored and how? Which performance monitors will be turned on during the test (network sniffers, etc.)? • How will the DB be reset or the application restarted expeditiously? • What happens if tests need to be repeated? What is the availability of resources and of the test environment? • Is there enough data for processes that have unique data constraints? • What are the hours for executing the test (e.g., the middle of the night)? If so, will there be support from all stakeholders, including the vendor of the automated test tool software? • In which tool will defects be documented, and what is the procedure for assigning resources to defects? • How fast do defects need to be resolved, and do they need approval from the defect management review board?
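The "how will the DB be reset or the application restarted expeditiously" question is worth scripting before test day so that repeated runs start from a known state. The sketch below chains hypothetical reset steps; the SQL file, database name, and service name are placeholders for whatever the environment actually provides.

```python
import subprocess
import sys

# Hypothetical reset steps: reload baseline data, then restart the AUT service.
RESET_STEPS = [
    ["psql", "-f", "reset_test_data.sql", "capacity_test_db"],
    ["systemctl", "restart", "aut-appserver"],
]

def reset_environment() -> None:
    for cmd in RESET_STEPS:
        print("running:", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(f"reset step failed: {' '.join(cmd)}")

if __name__ == "__main__":
    reset_environment()
```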

  13. Results Interpretation • Many graphs, charts, and reports are generated, BUT what do they all mean? What are they saying? • Focus on spikes in the graphs; report and share results from the automated software solution with the other monitoring groups • Draw conclusions and identify patterns from statistically significant sample sizes • Document when a particular problem occurred during testing and compare against the graphs (e.g., after executing the scripts for 1 hour and adding 5 more emulated users, the application “slowed down”) • Compare graphs from all teams: how do the DBA’s graphs compare to the graphs from the test tool? • Some automated test tools with monitoring capabilities provide “white box” measurements, which allow engineers to home in on specific problems
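A small sketch of turning raw response-time samples into discussable numbers is shown below: it reports percentiles per load level and flags a level whose 90th percentile degrades sharply versus the previous one. The sample data and the 1.5x degradation threshold are illustrative assumptions.

```python
# Summarize response times per load level and flag sudden spikes.
def percentile(samples: list, pct: float) -> float:
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(round(pct / 100.0 * (len(ordered) - 1))))
    return ordered[idx]

def summarize(by_load_level: dict, degrade_factor: float = 1.5) -> None:
    prev_p90 = None
    for users, samples in sorted(by_load_level.items()):
        p50, p90 = percentile(samples, 50), percentile(samples, 90)
        note = ""
        if prev_p90 is not None and p90 > degrade_factor * prev_p90:
            note = "  <-- response time spike: correlate with DBA/network graphs"
        print(f"{users:>5} users  p50={p50:.2f}s  p90={p90:.2f}s{note}")
        prev_p90 = p90

if __name__ == "__main__":
    summarize({                      # hypothetical response times in seconds
        100: [0.8, 0.9, 1.0, 1.1, 0.9],
        200: [1.0, 1.1, 1.2, 1.3, 1.2],
        300: [2.9, 3.4, 3.8, 4.1, 3.6],
    })
```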

  14. Report Findings: Finger-Pointing Time • No one wants their baby to be called ugly; identifying problems and assigning resources to resolve them is political • Get screen captures of any problems noticed during the test (e.g., the AUT crashed after 100 users, or the AUT reported cryptic messages after interfaces were launched) • Problems should be tracked and reported in the defect tracking tool • Problems need to be isolated to a particular area based on results from graphs, charts, or an engineer’s intuition • After problems are identified and resolved, REPEAT the test to ensure that they have in fact been corrected; this is why automating a capacity test is critical to repeatability • Set realistic expectations: if the test environment is substantially different from the production environment, the capacity test results may not be valid

  15. Document report sections • Lessons learned • Any problems reported and identified during the test and their resolution (if possible, paste screen captures) • Recommendations and “outstanding” issues • What was tested (e.g., 1000 outline agreements in 2 hours) • Types of tests conducted • A description of the test environment (e.g., a replica of the production environment) • Graphs and charts from the automated test tools and the other monitoring groups • Areas of concern (e.g., the system works with 1000 emulated users, but at 1200 emulated users resource utilization increases to over 90%) • Any workarounds implemented during the test, and anything that prevented the team from continuing the test
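To keep the findings document consistent from one test cycle to the next, the report sections above can be generated as a skeleton automatically; the sketch below emits a Markdown outline, with the section ordering and file name being assumptions rather than part of the methodology.

```python
# Emit a Markdown skeleton with the report sections listed above.
REPORT_SECTIONS = [
    "What was tested",
    "Types of tests conducted",
    "Test environment",
    "Graphs and charts (test tool and monitoring groups)",
    "Problems identified and their resolution",
    "Areas of concern",
    "Workarounds and interruptions during the test",
    "Recommendations and outstanding issues",
    "Lessons learned",
]

def write_report_skeleton(path: str, title: str) -> None:
    with open(path, "w") as f:
        f.write(f"# {title}\n\n")
        for section in REPORT_SECTIONS:
            f.write(f"## {section}\n\nTODO\n\n")

if __name__ == "__main__":
    write_report_skeleton("capacity_test_report.md", "Capacity Test Report")
```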
