
Leveraging Traceability Recovery in Test Planning and Optimization


Presentation Transcript


1. Leveraging Traceability Recovery in Test Planning and Optimization. Tariq M. King, IBM China Research Lab, June 13, 2008.

2. Agenda
• Motivation
• Storyboard: Overview, Step by Step
• Implementation: Use Case Capturer, Prototype Design
• Evaluation Plan

3. Motivation
• Businesses without mature processes struggle to maintain traceability between software artifacts → poor traceability makes testing more difficult
• Furthermore, many practitioners still view testing as a separate phase that follows implementation → disjointed, program-based testing strategies
• There is much active research in the area of automatically recovering traceability links between artifacts
• Can we leverage traceability recovery in test planning and optimization?

4. Problem Overview
[Diagram: a typical software process with traceability links among requirements artifacts (use cases, formal and informal specifications), design artifacts (architectures, object designs), implementation artifacts (source code, binaries), and test artifacts (test cases, test logs), connecting software design, programming, specification-based testing, and program-based testing activities.]

5. Problem Overview
[Diagram: the same process with the links from requirements artifacts to the design and test artifacts missing; in this case it is no longer possible to trace testing activities to requirements.]
• Question: How do missing links affect testing? Negatively.

6. Problem Overview
[Diagram: the missing links between requirements, design, and test artifacts are recovered, so traceability recovery can be leveraged for test case selection and regression testing.]
• Approach: Recover links and optimize the future testing effort.

7. Storyboard: Roles
CLIENT-SIDE
• Jane, Business Manager from the client side, who is interested in receiving a quality product that maximizes her business value while avoiding unnecessary costs
• Hugo, Chief Test Officer from the client side, who has devised a test strategy for Jane's project but is uncertain as to whether the ongoing testing effort can be improved
PROVIDER
• Tariq, Test Architect from BIM Testing Consultants Inc., who has been contracted to assess the current testing effort and optimize any further testing activities
• Alex, Test Lead from BIM Testing Consultants Inc., who is in charge of realizing an optimized test plan for the client by designing detailed test cases

8. Storyboard: High-Level Overview
1. Acquire Project Test Plan
2. Analyze and Pre-Process Artifacts
3. Build Traceability Matrix Model
4. Identify "Risky" Requirements
5. Create Optimized Test Plan

9. Step 1: Acquire Current Project Test Plan
Description
• A project test plan describes the overall strategy for testing the final application and the products leading up to the completed application [1].
• In this example, Hugo (client-side CTO) provides Tariq with a use-case-driven system test plan based on decision support factors, including criticality and risk estimates.
• [Table of plan entries with columns: Use Case ID, Risk, Criticality, # Tests]
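A minimal sketch of how one row of such a decision-support test plan could be represented; the field names and value encodings are assumptions for illustration, since the actual plan format is only hinted at by the table columns on the slide:

```java
import java.util.List;

/**
 * Hypothetical row of the decision-support test plan (Use Case ID, Risk,
 * Criticality, # Tests). Names and encodings are illustrative only.
 */
public record TestPlanEntry(String useCaseId, int risk, int criticality, int plannedTests) {

    public static void main(String[] args) {
        // Example plan with made-up values, just to show the shape of the data.
        List<TestPlanEntry> plan = List.of(
                new TestPlanEntry("UC-01", 3, 2, 12),
                new TestPlanEntry("UC-02", 1, 3, 5));
        plan.forEach(System.out::println);
    }
}
```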

10. Step 2: Analyze and Pre-Process Artifacts
Description
• Many tools exist for analyzing artifacts such as source code for metrics. Artifacts that are in a standardized format are more amenable to automated techniques.
• In this example, Alex (BIM Test Lead) uses home-grown and third-party tools to analyze and pre-process the following: [table of project artifacts and the tools used to process them]

11. Step 3: Build Traceability Matrix Model
Description
• A traceability matrix records the relationships between various software artifacts.
• In this example, Alex builds a traceability model to support the strategy proposed by Tariq (Test Architect):
1. Decompose use case flows into fine-grain requirements.
2. Apply the approach by Zhao et al. [3] to recover traceability links between source code and fine-grain requirements.
3. Build a traceability matrix that correlates use case flows with their implementation metrics, unit test results, and coverage (one possible shape for this matrix is sketched below).
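The sketch below illustrates one possible shape for such a traceability matrix, correlating use case flows with recovered source units and per-unit metrics. All class, field, and method names here are assumptions made for illustration and do not mirror the prototype's actual model classes:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Minimal sketch of a traceability matrix linking use case flows to the source
 * units recovered for them, plus per-unit metrics from static analysis, unit
 * test results, and coverage reports.
 */
public class TraceabilityMatrix {

    /** Metrics gathered for one source unit (e.g. a class or method). */
    public record UnitMetrics(int complexity, int failedTests, double lineCoverage) { }

    // use case flow ID -> source units recovered for that flow
    private final Map<String, Set<String>> flowToUnits = new HashMap<>();
    // source unit -> its metrics
    private final Map<String, UnitMetrics> unitMetrics = new HashMap<>();

    public void link(String flowId, String unit, UnitMetrics metrics) {
        flowToUnits.computeIfAbsent(flowId, k -> new HashSet<>()).add(unit);
        unitMetrics.put(unit, metrics);
    }

    public Set<String> unitsFor(String flowId) {
        return flowToUnits.getOrDefault(flowId, Set.of());
    }

    public UnitMetrics metricsFor(String unit) {
        return unitMetrics.get(unit);
    }

    public static void main(String[] args) {
        TraceabilityMatrix matrix = new TraceabilityMatrix();
        matrix.link("UC-01/main", "LoginController", new UnitMetrics(3, 2, 0.45));
        System.out.println("UC-01/main -> " + matrix.unitsFor("UC-01/main"));
    }
}
```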

12. Step 4: Identify "Risky" Requirements (1)
Description
• Test plans based on decision support (DS) use estimates of the risks of implementing business requirements.
• However, these estimates may be inaccurate due to over-emphasized or missing factors.
• Source code metrics, unit test results, and coverage information can be mapped to requirements to provide concrete evidence that supports or refutes the DS estimates.
• For example, requirements implemented in units that exhibit a high level of failures can be considered riskier than those implemented in low-failure units.

13. Step 4: Identify "Risky" Requirements (2)
Description (cont'd)
• In this example, Tariq proposes the following method of assigning risk weights to the use case flow models.
• Calculate the weight of each source code unit u, denoted w_u, according to the following formula:
  w_u = Complexity(u) + FailureLevel(u) + Coverage(u)
  where Complexity := High (3) | Medium (2) | Low (1); Failure Level := High (3) | Medium (2) | Low (1); Coverage := Poor (3) | Average (2) | Good (1)
• The weight of a flow event e, denoted w_e, is given by:
  w_e = Σ_{u ∈ e} w_u + |e|
  (Note: the |e| term factors in interaction failures among the units of e.)

14. Step 4: Identify "Risky" Requirements (3)
Description (cont'd)
• Finally, the weight of a use case flow f, denoted w_f, is given by the sum of the weights of all its events:
  w_f = Σ_{e ∈ f} w_e
Rationale (Hypothesis) for Risk Calculation
• Highly complex units are likely to contain more defects (or more critical defects).
• If a large number of defects has been found in a specific unit, then more defects are likely to be found there.
• Code not covered by unit tests poses some level of additional risk to further testing efforts.
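A small worked sketch of the weighting scheme from the two preceding slides, assuming the ordinal levels are encoded as integers 1 to 3 and that |e| is the number of source units the event touches; the class and method names are illustrative and do not reflect the prototype's WeightAlgorithm API:

```java
import java.util.List;

/** Illustrative implementation of the w_u, w_e, and w_f formulas. */
public class RiskWeights {

    /** w_u = Complexity(u) + FailureLevel(u) + Coverage(u), each value in {1, 2, 3}. */
    static int unitWeight(int complexity, int failureLevel, int coverage) {
        return complexity + failureLevel + coverage;
    }

    /** w_e = sum of w_u over the units exercised by event e, plus |e| for interaction failures. */
    static int eventWeight(List<Integer> unitWeights) {
        int sum = unitWeights.stream().mapToInt(Integer::intValue).sum();
        return sum + unitWeights.size();
    }

    /** w_f = sum of w_e over all events in flow f. */
    static int flowWeight(List<Integer> eventWeights) {
        return eventWeights.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        // One event touching two units: one high-risk unit, one low-risk unit.
        int u1 = unitWeight(3, 3, 3);             // 9
        int u2 = unitWeight(2, 1, 1);             // 4
        int e1 = eventWeight(List.of(u1, u2));    // 9 + 4 + 2 = 15
        int wf = flowWeight(List.of(e1));         // 15
        System.out.println("w_f = " + wf);
    }
}
```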

15. Step 5: Create Optimized Test Plan (1)
Description
• Use case flows are then ranked and classified using the risk weights produced by the proposed method.
• In this example, Alex generates the following risk assessment using the data from Hugo's project:
• [Table: Risk Assessment after Unit Testing]
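A hedged sketch of the ranking and classification step: flows are sorted by w_f with a Comparator (in the spirit of the PathComparator and UseCaseComparator classes named in the detailed design) and bucketed into coarse risk classes. The flow IDs, weights, thresholds, and labels are invented for illustration:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Illustrative ranking of use case flows by risk weight. */
public class RiskRanking {

    record FlowRisk(String flowId, int weight) { }

    static String classify(int weight, int mediumThreshold, int highThreshold) {
        if (weight >= highThreshold) return "HIGH";
        if (weight >= mediumThreshold) return "MEDIUM";
        return "LOW";
    }

    public static void main(String[] args) {
        List<FlowRisk> flows = new ArrayList<>(List.of(
                new FlowRisk("UC-01/main", 42),
                new FlowRisk("UC-02/alt1", 17),
                new FlowRisk("UC-03/main", 29)));

        // Sort highest weight first, analogous to sorting with the Java Collections API.
        flows.sort(Comparator.comparingInt(FlowRisk::weight).reversed());

        for (FlowRisk f : flows) {
            System.out.printf("%-12s w_f=%-3d %s%n",
                    f.flowId(), f.weight(), classify(f.weight(), 20, 35));
        }
    }
}
```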

16. Step 5: Create Optimized Test Plan (2)
Description
• Tariq compares the risks from Hugo's initial test plan with the risk assessment derived from unit test feedback (UTF).
• Discrepancies are identified and the testing effort is then updated to reflect the new risk values.
• [Table: Optimized Testing Effort]
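The following sketch shows one way the DS-versus-UTF comparison could be automated: for each use case, the two risk classes are compared and any discrepancy triggers an adjustment of the planned number of tests. The data and the adjustment rule are assumptions made for illustration, not taken from the slides:

```java
import java.util.Map;

/** Illustrative comparison of DS risk estimates with UTF-derived risk classes. */
public class PlanComparison {

    public static void main(String[] args) {
        // Risk classes encoded 1 (low) to 3 (high); values are made up for the example.
        Map<String, Integer> dsRisk       = Map.of("UC-01", 1, "UC-02", 3, "UC-03", 2);
        Map<String, Integer> utfRisk      = Map.of("UC-01", 3, "UC-02", 3, "UC-03", 1);
        Map<String, Integer> plannedTests = Map.of("UC-01", 5, "UC-02", 12, "UC-03", 8);

        for (String uc : dsRisk.keySet()) {
            int delta = utfRisk.get(uc) - dsRisk.get(uc);
            if (delta != 0) {
                // Purely illustrative rule: shift a few test cases toward riskier use cases.
                int adjusted = Math.max(1, plannedTests.get(uc) + 3 * delta);
                System.out.printf("%s: DS=%d UTF=%d, revise planned tests %d -> %d%n",
                        uc, dsRisk.get(uc), utfRisk.get(uc), plannedTests.get(uc), adjusted);
            }
        }
    }
}
```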

17. Step 5: Create Optimized Test Plan (3)
Description (cont'd)
• Alex then designs a set of detailed test cases for an optimized test plan that adheres to the new testing effort.
• Members from both the client side and the service provider meet to review and approve the new test plan.
• Test Plan Approval [reviewer remarks]: "OK, it follows my proposed test strategy", "OK, it is technically sound", "OK, it can be implemented by my testing team", "OK, it addresses business concerns"

18. Use Case Capturer (1) – Screenshot

  19. Use Case Capturer (2) – XML Output

  20. Top-Level Design

21. Detailed Design
Major Classes in Algos Subsystem
• WeightAlgorithm – computes weights for source units, and cumulative weights for use case flows
• PathComparator, UseCaseComparator – used with the Java Collections library to sort use cases by their risk weights
Major Classes in Models Subsystem
• ReqParser, ClassParser, TestParser, CoverageParser
• ReqMBuilder, ClassMBuilder, TestMBuilder
• FlowEvent – a single use case flow event
• FlowList – an entire use case flow
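A rough sketch of how the two Models classes named above might fit together, with FlowEvent holding a single step and its recovered units and FlowList aggregating events into a whole flow; the fields and methods shown are assumptions, and the actual prototype classes may differ:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class FlowEvent {
    final String description;
    final Set<String> tracedUnits = new HashSet<>();  // source units recovered for this event
    int weight;                                        // w_e, filled in by the weight algorithm

    FlowEvent(String description) {
        this.description = description;
    }
}

class FlowList {
    final String useCaseId;
    final List<FlowEvent> events = new ArrayList<>();

    FlowList(String useCaseId) {
        this.useCaseId = useCaseId;
    }

    /** Cumulative flow weight w_f as the sum of its event weights. */
    int weight() {
        return events.stream().mapToInt(e -> e.weight).sum();
    }

    public static void main(String[] args) {
        FlowList flow = new FlowList("UC-01/main");
        FlowEvent step = new FlowEvent("User submits credentials");
        step.tracedUnits.add("LoginController");
        step.weight = 7;
        flow.events.add(step);
        System.out.println(flow.useCaseId + " w_f = " + flow.weight());
    }
}
```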

22. Evaluation Plan
Mutation on Decision Support
• Modify the decision support risk values for an actual project to simulate bad estimates.
• The effectiveness of our approach will be judged by how many of the mutant risk values are "killed" after analysis.
Comparison of Multiple Project Results
• Apply the approach to three (3) software engineering projects.
• Compare test plans based on decision support vs. our approach against the results of system testing.
• Manually determine whether our test plan showed any significant improvement.
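A sketch of the mutation idea under stated assumptions: DS risk values are randomly perturbed to simulate bad estimates, and a mutant counts as "killed" when the analysis-derived risk disagrees with the perturbed estimate. The mutation operator and kill criterion here are invented for illustration and are not taken from the evaluation plan itself:

```java
import java.util.Map;
import java.util.Random;

/** Illustrative "mutation on decision support" evaluation loop. */
public class DsMutationEvaluation {

    public static void main(String[] args) {
        // Risk classes encoded 1 (low) to 3 (high); example values only.
        Map<String, Integer> dsRisk       = Map.of("UC-01", 2, "UC-02", 3, "UC-03", 1);
        Map<String, Integer> analysisRisk = Map.of("UC-01", 2, "UC-02", 3, "UC-03", 1);

        Random rnd = new Random(42);
        int mutants = 0, killed = 0;
        for (Map.Entry<String, Integer> entry : dsRisk.entrySet()) {
            int mutated = 1 + rnd.nextInt(3);          // perturb to a value in {1, 2, 3}
            if (mutated == entry.getValue()) continue;  // unchanged value is not a mutant
            mutants++;
            if (!analysisRisk.get(entry.getKey()).equals(mutated)) {
                killed++;                               // analysis contradicts the bad estimate
            }
        }
        System.out.printf("killed %d of %d mutant risk values%n", killed, mutants);
    }
}
```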

23. References
[1] M. Lormans and A. van Deursen. Reconstructing requirements traceability in design and test using latent semantic indexing. Technical report, Delft University of Technology, April 2007.
[2] Y. Zhang, R. Witte, J. Rilling, and V. Haarslev. An ontology-based approach for the recovery of traceability links. In 3rd Int. Workshop on Metamodels, Schemas, Grammars, and Ontologies for Reverse Engineering (ATEM 2006), Genoa, Italy, October 2006.
[3] W. Zhao, L. Zhang, Y. Liu, J. Luo, and J. Sun. Understanding how the requirements are implemented in source code. In APSEC '03: Proceedings of the Tenth Asia-Pacific Software Engineering Conference, page 68, Washington, DC, USA, 2003. IEEE Computer Society.
[4] E. Gamma and K. Beck. JUnit 3.8.1, 2005. http://www.junit.org/index.htm (June 2008).
[5] M. Doliner, G. Lukasik, and J. Thomerson. Cobertura 1.9, 2002. http://cobertura.sourceforge.net/ (June 2008).
