
Test process analysis of Gateway GPRS Support Node. Author: Jorma Tuominen. Supervisor: Professor Sven-Gustav Häggman. Nokia Networks.


Presentation Transcript


  1. Test process analysis of Gateway GPRS Support Node. Author: Jorma Tuominen. Supervisor: Professor Sven-Gustav Häggman. Nokia Networks

  2. Agenda • Introduction • GGSN in GPRS and 3G packet core network • GGSN in Nokia Intelligent Content Delivery system • Software development process and testing phases • Research problem • Research method • Results • Conclusions

  3. GPRS and 3G packet core network [Network diagram: 2G radio access (BTS, BSC) and 3G radio access (BS, RNC) connect through the 2G SGSN and 3G SGSN over the IP backbone to the GGSN; supporting elements include the NMS, CG and LIG; firewalls (FW) and a border gateway (BG) link the mobile ISP (news, sports, games, trading, WAP/MMS, intranet services) to the Internet, corporate customers and the inter-PLMN network]

  4. Nokia Intelligent Content Delivery System [System diagram: the GGSN interworks with the ICD elements (OSC, NSM, TA, CA) and with LDAP servers holding user data; charging runs over GTP' to the CG, and authentication over RADIUS or DIAMETER]

  5. SW development process and testing phases

  6. Research problem Testing requires trade-offs between cost, time and quality. How can the available testing time and resources be used as effectively as possible to find and correct the critical defects in the product before it is released?

  7. Research method • Participating in GGSN system testing • Literature study of GPRS and 3G packet core network elements and protocols • Literature study of software process and software testing • Collecting and analyzing testing metrics • Defect analysis • Risk analysis • Planning risk based testing

  8. Testing metrics
  • Testing effort metrics
    • Distribution of working hours by test phase
  • Defect metrics
    • Number of discovered defects per week
    • Percentage of “correction not needed” defect reports
  • Test case metrics
    • Number of test cases
    • Percentage of not relevant test cases
  • Testing effectiveness metrics
    • Hours needed to discover one defect
    • Number of test cases needed to discover one defect
    • Defect Removal Effectiveness
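The effectiveness metrics on this slide can be sketched as two small functions. This is a minimal illustration with made-up numbers, not data from the thesis; the Defect Removal Effectiveness formula used here is the common one (defects found in a phase divided by all defects present at that phase):

```python
# Testing-effectiveness metrics sketch. All figures below are hypothetical.

def hours_per_defect(testing_hours: float, defects_found: int) -> float:
    """Hours of testing effort needed to discover one defect."""
    return testing_hours / defects_found

def defect_removal_effectiveness(found_in_phase: int, found_later: int) -> float:
    """DRE = defects removed in a phase / all defects present at that phase."""
    return found_in_phase / (found_in_phase + found_later)

# Example: system testing finds 80 defects in 400 hours;
# 20 further defects escape and are found later (e.g. by customers).
print(hours_per_defect(400, 80))              # 5.0 hours per defect
print(defect_removal_effectiveness(80, 20))   # 0.8, i.e. 80% DRE
```

Tracking these two numbers per test phase is what turns the raw defect counts above into the "lessons learned" comparisons shown on the next slide.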

  9. Testing metrics examples • Distribution of working hours by test phases (lessons learned) • Number of discovered defects per week (real-time metrics)

  10. Defect analysis
  • Search for the error-prone functionality areas
    • Defects discovered in the earlier product releases
    • Defects discovered in the earlier testing phases
    • Defects discovered by the customers
  • Can be done after product release or during testing
  • The Pareto 80-20 phenomenon
    • Many software phenomena follow a Pareto distribution: 80% of contribution comes from 20% of the contributors [Boehm 1989]
    • Examples:
      • 20% of the software modules contribute 80% of the errors
      • 20% of the errors cause 80% of the down time
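The Pareto check described above amounts to ranking modules by defect count and measuring the share contributed by the top 20%. The sketch below uses invented module names and counts purely for illustration; real data would come from the defect tracking database:

```python
# Pareto (80-20) defect analysis sketch: rank modules by defect count and
# check what share of all defects the top 20% of modules contribute.
# Module names and defect counts are hypothetical.

defects_per_module = {
    "session_mgmt": 41, "charging": 25, "gtp_tunneling": 14,
    "radius_auth": 6, "cli": 5, "o_and_m": 4, "logging": 3,
    "stats": 1, "config": 1, "watchdog": 0,
}

ranked = sorted(defects_per_module.items(), key=lambda kv: kv[1], reverse=True)
total = sum(defects_per_module.values())
top_20pct = ranked[: max(1, len(ranked) // 5)]   # the top 20% of modules
share = sum(count for _, count in top_20pct) / total

print(f"Top 20% of modules contribute {share:.0%} of defects")
```

In this fabricated data set the top two modules account for 66% of the defects; a strongly Pareto-shaped distribution would justify concentrating test effort on those modules.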

  11. Risk analysis
  • System requirements must be prioritised:
    • What is the likelihood that this feature will fail to operate?
    • What would be the impact on the user if this feature fails to operate?
  • The likelihood of failure depends on the following:
    • error proneness of the requirement
    • newness of the requirement
    • complexity of the requirement
  • Likelihood of failure: 1 = very high, ... 5 = very low
  • Impact on the user: 1 = severe, ... 5 = negligible
  • Risk priority = (likelihood of failure) x (impact on the user)
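The scoring scheme above can be applied mechanically: multiply the two 1-5 scores and sort, so that the lowest product (highest risk) comes first. The requirement names and scores below are hypothetical, chosen only to show the mechanics:

```python
# Risk-priority sketch following the slide's scales:
#   likelihood of failure: 1 (very high) .. 5 (very low)
#   impact on the user:    1 (severe)    .. 5 (negligible)
# A LOWER product therefore means a HIGHER-risk requirement.
# Requirement names and scores are hypothetical.

requirements = [
    ("GTP tunnel setup",       1, 1),  # new, complex, severe if broken
    ("RADIUS authentication",  2, 1),
    ("Charging records",       2, 2),
    ("Statistics counters",    4, 4),
    ("CLI help texts",         5, 5),
]

prioritised = sorted(
    ((name, likelihood * impact) for name, likelihood, impact in requirements),
    key=lambda item: item[1],          # lowest product first = highest risk
)

for name, priority in prioritised:
    print(f"{priority:2d}  {name}")
```

Sorting by the product gives the test order: "GTP tunnel setup" (priority 1) is tested first, "CLI help texts" (priority 25) last.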

  12. Risk based testing [Chart: risk coverage (50%, 100%) versus testing effort, comparing risk based testing with traditional testing; effort points E1 and E2 mark where the two approaches reach a given risk coverage]

  13. Results
  • Requirements have been divided into three groups: high, medium and low risk requirements
  • Test case titles will be planned for each requirement
  • Test cases will be designed and executed as follows:
    • High risk requirements: 100% of test cases
    • Medium risk requirements: 70% of test cases
    • Low risk requirements: 30% of test cases
  • Test design and execution effort can be reduced by about 30%
  • The effect on the quality of the product cannot be estimated yet, because the product has not yet been released
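The 100/70/30% selection rule above translates directly into a test-case budget. The planned case counts per risk group below are hypothetical; with a different mix of high-, medium- and low-risk requirements the resulting saving would differ from the thesis's ~30% figure:

```python
# Risk-based test selection sketch: design and execute 100% / 70% / 30% of the
# planned test cases for high / medium / low risk requirements.
# The planned case counts per group are hypothetical.

COVERAGE = {"high": 1.00, "medium": 0.70, "low": 0.30}
planned_cases = {"high": 120, "medium": 200, "low": 180}

selected = {group: round(n * COVERAGE[group]) for group, n in planned_cases.items()}
total_planned = sum(planned_cases.values())
total_selected = sum(selected.values())
reduction = 1 - total_selected / total_planned

print(selected)                              # {'high': 120, 'medium': 140, 'low': 54}
print(f"effort reduced by {reduction:.0%}")  # 37% fewer cases in this example
```

The reduction depends entirely on how many requirements land in each risk group, which is why the risk analysis on slide 11 has to precede the selection.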

  14. Conclusions and future work • The availability of real-time testing metrics should be improved. • Co-operation between product development, testing and customer support should be strengthened. • The most important future work is to harmonize the usage of the requirements management database, the test case database and the defect tracking database.

  15. Thank You! Questions?

  16. Additional slides

  17. Cost of defects

  18. Early involvement of the testers • Important aspects of early involvement of the testers are: “Testers need a solid understanding of the product so they can devise better and more complete test plans, designs, procedures, and cases. Early test-team involvement can eliminate confusion about functional behavior later in the project lifecycle. In addition, early involvement allows the test team to learn over time which aspects of the application are the most critical to the end user and which are the highest-risk elements. This knowledge enables testers to focus on the most important parts of the application first, avoiding over-testing rarely used areas and under-testing the more important ones.” [Dustin 2003].

  19. Thoughts about testing • “Program testing can be used to show the presence of bugs, but never to show their absence!” [Dijkstra 1969]. • Testing completion criteria: Risk of stopping testing = Risk of continuing testing • “In most cases ‘what’ you test in a system is much more important than ‘how much’ you test” [Craig 2002]. • “Prioritise tests so that, whenever you stop testing, you have done the best testing in the time available” [ISEB 2003]. • No risk → No test

  20. References
  • Myers G. The Art of Software Testing, USA, John Wiley & Sons, ISBN 0-471-04328-1, 1979, 177 p.
  • Hetzel B. Complete Guide to Software Testing, USA, John Wiley & Sons, ISBN 0-471-56567-9, 1988, 284 p.
  • Craig R., Jaskiel S. Systematic Software Testing, USA, Artech House Publishers, ISBN 1-58053-508-9, 2002, 536 p.
  • Dijkstra E. W. Structured programming. Proceedings of the Second NATO Conference on Software Engineering Techniques, Rome, Italy, NATO, 1969, pp. 84-88.
  • Dustin E. Effective Software Testing, USA, Addison-Wesley, ISBN 0-201-79429-2, 2003, 271 p.
  • Bender R. Requirements Based Testing presentation. Proceedings of StarEast 2005, Software Testing Analysis & Review Conference, Orlando, USA, Bender RBT Inc., 2005, 232 p.
  • Beizer B. Black-Box Testing: Techniques for Functional Testing of Software and Systems, USA, John Wiley & Sons, ISBN 0-471-12094-4, 1995, 294 p.
  • Koomen T., Pol M. Test Process Improvement: A Practical Step-by-Step Guide to Structured Testing, Great Britain, Addison-Wesley, ISBN 0-201-59624-5, 1999, 218 p.
  • Hutcheson M. Software Testing Fundamentals: Methods and Metrics, USA, John Wiley & Sons, ISBN 0-471-43020-X, 2003, 408 p.
  • Boehm B. W. Software Risk Management, USA, IEEE Press, ISBN 0-8186-8906-4, 1989, 495 p.
  • Grove Consultants, ISEB Testing Foundation course material, Great Britain, Grove Consultants, 2003, 98 p.
