
Software Verification & Validation

Presentation Transcript


  1. Software Verification & Validation
  Karen Smiley, ABB Inc., US Corporate Research
  Duke SE, Fall 2003

  2. Classifications of Testing
  • Distinguishing Questions:
    • Verification – did we build the thing right? (correctly)
    • Validation – did we build the right thing? (what the customer desired)
  • Customer requirements should be “testable”.
  • Test Categories (contrasted in the sketch below):
    • White (Clear) Box
    • Black Box
    • “Gray” Box
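
A minimal sketch of the black-box vs. white-box distinction (the absolute_value function and test names are invented for illustration, not from the slides): the black-box test is derived only from the specification, while the white-box test is written with knowledge of the code's internal branches.

```python
import unittest

def absolute_value(x):
    # Two internal branches: negative and non-negative input.
    if x < 0:
        return -x
    return x

class BlackBoxTests(unittest.TestCase):
    """Derived only from the spec: |x| is never negative."""
    def test_returns_non_negative(self):
        for x in (-7, 0, 3):
            self.assertGreaterEqual(absolute_value(x), 0)

class WhiteBoxTests(unittest.TestCase):
    """Derived from the code: exercise each branch explicitly."""
    def test_negative_branch(self):
        self.assertEqual(absolute_value(-5), 5)

    def test_non_negative_branch(self):
        self.assertEqual(absolute_value(5), 5)

if __name__ == "__main__":
    unittest.main()
```

A "gray" box test would mix the two: specification-driven cases chosen with some knowledge of the internals.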

  3. Testing Models: Traditional (“V”)

  4. Testing Models: Contemporary (“W”)

  5. Agile Testing Models: Test-Driven Development (TDD)
  [Flowchart] The TDD cycle, starting from a high-level design for the planned features (“user stories”):
  • Add 1 new user story.
  • Write unit test(s) for the new user story.
  • Run all unit tests. Proceed only when all old UTs pass and all new UTs fail; if an old UT fails or a new UT doesn’t fail, fix that first.
  • Write the functionality for the new user story.
  • Run all unit tests again. If any UT fails, return to writing functionality; when all UTs pass (old and new), add the next user story.
  • Stop when all features are done.
  “Write” => detailed design and coding, with “refactoring” as needed.
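
To make the cycle concrete, here is a minimal sketch (the shopping-cart user story and all names are invented, not from the slides). Both steps are shown in one file for brevity: run before the Cart functionality exists, the new test fails; after step 2 it passes along with any older tests.

```python
import unittest

# Step 1: write the unit test for the new user story
# ("a cart reports its total price") before the functionality.
class TestCartTotal(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add("apple", 0.50)
        cart.add("bread", 2.25)
        self.assertAlmostEqual(cart.total(), 2.75)

# Step 2: write just enough functionality to make the new test pass
# while keeping all previously passing tests green.
class Cart:
    def __init__(self):
        self._items = []          # list of (name, price) tuples

    def add(self, name, price):
        self._items.append((name, price))

    def total(self):
        return sum(price for _, price in self._items)

if __name__ == "__main__":
    unittest.main()   # Step 3: run all unit tests, old and new.
```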

  6. V&V in the Software Lifecycle

  7. Cost-Effectiveness of V&V

  8. Types of “Tests”
  • Static
    • Do not involve executing the software under test
    • Examples: inspections, reviews, scenario walkthroughs, tools
  • Integration
    • Check unit-tested components for how well they fit together
    • Examples: SW modules, SW-HW, “smoke”, GUI, I/Fs
  • Functional
    • Execute the software to see how well it performs
    • Examples: unit test, component, system, transaction, compliance
  • Non-functional
    • Verify compliance with non-functional requirements
    • Examples: configuration, usability, security
  • Performance
    • Verify resilience and reliability of the system
    • Examples: stress, reliability/soak, availability, fail-over
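
A hedged illustration of the functional vs. performance distinction above (the process_order function, the workload, and the one-second budget are invented for this sketch): the first test checks behavior, the second checks that the same code stays within a time budget under repeated load.

```python
import time
import unittest

def process_order(items):
    # Hypothetical system under test: totals an order of (name, price) pairs.
    return sum(price for _, price in items)

class FunctionalTest(unittest.TestCase):
    def test_order_total(self):
        self.assertEqual(process_order([("a", 2), ("b", 3)]), 5)

class PerformanceTest(unittest.TestCase):
    def test_10k_orders_within_time_budget(self):
        order = [("item", 1)] * 100
        start = time.perf_counter()
        for _ in range(10_000):
            process_order(order)
        self.assertLess(time.perf_counter() - start, 1.0)

if __name__ == "__main__":
    unittest.main()
```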

  9. Static Verification Methods
  • Reviews
    • Personal reviews
    • Peer reviews
  • Inspections
  • Walkthroughs
  • Automated tools (see the sketch below)
  All are white-box, can be done on any kind of software, and are far more effective than testing at finding defects.
  Note: “Pair Programming” involves continuous peer {review/inspection} of design and code.
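
As one hedged example of the “automated tools” bullet (this checker and its threshold are invented for illustration; in practice a team would more likely use an off-the-shelf linter): a tiny static check that parses source code without executing it and flags functions that look too long for a checklist-style review.

```python
import ast
import sys

MAX_STATEMENTS = 30   # arbitrary threshold chosen for this sketch

def long_functions(source: str):
    """Return (name, line, statement_count) for functions over the threshold.

    The code is only parsed, never executed, which is what makes the
    check 'static'.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Count statement nodes inside the function (minus the def itself).
            count = sum(isinstance(n, ast.stmt) for n in ast.walk(node)) - 1
            if count > MAX_STATEMENTS:
                findings.append((node.name, node.lineno, count))
    return findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            for name, line, count in long_functions(f.read()):
                print(f"{path}:{line}: function '{name}' has {count} statements")
```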

  10. Personal Reviews
  • Use your own personal checklists
    • look for the defects you tend to inject in each type of work product
  • Review code before you compile – why?
    • Have to review to find all of your syntax errors anyhow
    • Reviews are 1-pass; getting to a clean compile can take multiple passes
    • Allows you to use the compiler as an objective way to assess (before testing) how effective your code review was
  Yield = (number of bugs found by your review) / (number of bugs that existed when you started the review*)
  * you don’t know this total until later, when you find bugs that were missed
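
A small numeric sketch of the yield formula above (the numbers are invented): if a code review finds 8 defects and 2 more that were present at review time surface later, the review's yield works out to 80%.

```python
def review_yield(found_in_review: int, found_later: int) -> float:
    """Yield = bugs found by the review / bugs present when the review started.

    The denominator is only known in hindsight, once escaped defects
    have surfaced in compile, test, or the field.
    """
    total_present = found_in_review + found_later
    return found_in_review / total_present

# Invented example: 8 defects found in review, 2 escaped and found later.
print(f"Review yield: {review_yield(8, 2):.0%}")   # -> Review yield: 80%
```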

  11. Peer Reviews
  Asking one or more of your peers to review your work and provide feedback will:
  • Leverage the experience of your colleagues to benefit
    • your work products: authors are inherently biased reviewers
    • you: learn what kinds of problems they have learned, from their experience, to look for
  • Improve a development team’s “truck number”
  Peer reviews can be done:
  • real-time, as a group
  • asynchronously (e.g., via email)
  “Human beings, who are almost unique in having the ability to learn from the experience of others, are also remarkable for their apparent disinclination to do so.” – Douglas Adams

  12. Inspections
  • More formal than peer reviews
    • example: “Fagan inspections”
  • Also use checklists, but apply them more thoroughly
    • line by line, paragraph by paragraph
  • Require advance preparation by:
    • Author (e.g., clean up typos, fix any compilation errors)
    • Multiple reviewers
    • Moderator
  • Take more time (slower ‘review rate’ than reviews)
  • Can deliver excellent overall yield
  • The “capture-recapture” method can predict how many defects are likely to remain (see the sketch below)
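
A hedged sketch of the capture-recapture idea in its simplest two-reviewer (Lincoln-Petersen) form, with invented numbers: if reviewer A finds 12 defects, reviewer B finds 10, and 6 of those are the same defects, the estimated total is about 12 * 10 / 6 = 20, so roughly 4 defects likely remain after the 16 distinct ones found are fixed.

```python
def capture_recapture_estimate(found_by_a: int, found_by_b: int,
                               found_by_both: int) -> float:
    """Two-reviewer (Lincoln-Petersen) estimate of total defects present."""
    if found_by_both == 0:
        raise ValueError("needs at least one defect found by both reviewers")
    return found_by_a * found_by_b / found_by_both

# Invented inspection data for two reviewers of the same work product.
a, b, both = 12, 10, 6
total_estimate = capture_recapture_estimate(a, b, both)
distinct_found = a + b - both
print(f"Estimated defects present:   {total_estimate:.0f}")                    # ~20
print(f"Distinct defects found:      {distinct_found}")                        # 16
print(f"Estimated defects remaining: {total_estimate - distinct_found:.0f}")   # ~4
```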

  13. Key Success Factors for Reviews and Inspections
  • Training in how to do them
  • Proper preparation by participants
  • Good checklists
  • Reasonable review rate (amount of work inspected per hour)
    • Review rate can be a good predictor of yield
    • Optimal: neither too slow (low) nor too fast (high)
  • If review rates are too high, consider re-reviewing or re-inspecting.
  • If modules prove to be more defective in later phases than expected, consider re-reviewing / re-inspecting or redesigning / rewriting the module.

  14. Predicting Software Quality
  • Measure, for each type of development activity:
    • Size, time, and defects
  • Calculate:
    • yields (# of defects found vs. # present)
      • typical yields from reviews and inspections: ~70-90%
      • typical yields from System Test and Acceptance Test: ~50%
    • projected number of defects remaining
      • # of defects remaining after ST/AT ~= the # you found in ST/AT
    • defect densities (# of defects found / size)
      • reviews, inspections, compile, unit test
    • time ratios
      • design review vs. design (>= 100% for Pair Programming?)
      • code review vs. coding (>= 100% for Pair Programming?)
      • design vs. coding
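
A small numeric sketch of these predictors (all measurements invented for a hypothetical 2.0 KLOC component): given size and the defects found in each activity, the defect densities and a rough projection of remaining defects follow directly.

```python
# Invented measurements for one 2.0 KLOC component.
size_kloc = 2.0
defects_found = {
    "design review": 9,
    "code review": 14,
    "compile": 12,
    "unit test": 7,
    "system/acceptance test": 3,
}

# Defect density per activity (# of defects found / size).
for activity, found in defects_found.items():
    print(f"{activity:>22}: {found / size_kloc:.1f} defects/KLOC")

# Rule of thumb from the slide (~50% ST/AT yield): roughly as many defects
# remain after System Test / Acceptance Test as were found there.
remaining_estimate = defects_found["system/acceptance test"]
print(f"Projected defects remaining: ~{remaining_estimate}")
```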

  15. PSP℠/TSP℠ Quality Profile
  [Radar chart of the five Quality Profile dimensions:]
  • Design Time / Code Time (ratio >= 100%)
  • Design Review Time / Design Time (ratio >= 50%)
  • Code Review Time / Code Time (ratio >= 50%)
  • Unit Test Defect Density (< 5 defects/KLOC)
  • Compile Defect Density (< 10 defects/KLOC)
  Each dimension is scaled to 0 at the center and 1 at the outer edge if the goal value is met or bettered. The PQI (product of all 5 values) >= 0.4 indicates a likely high-quality component.
  Source: Software Engineering Institute (http://sei.cmu.edu) at Carnegie Mellon University, TSP2001.04. See http://www.sei.cmu.edu/publications/articles/quality-profile/index.html for further information on the Quality Profile. PSP, TSP, Personal Software Process, and Team Software Process are service marks of CMU.
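
A hedged sketch of how a PQI could be computed from those five dimensions (the scaling used here is a simplification, not the SEI's published formula, and the sample data are invented; see the SEI link above for the definitive definition):

```python
def pqi(design_time, code_time, design_review_time, code_review_time,
        unit_test_defects_per_kloc, compile_defects_per_kloc):
    """Simplified Process Quality Index: product of five values in [0, 1].

    Each component reaches 1.0 when its goal is met or bettered:
      design time / code time          >= 100%
      design review time / design time >=  50%
      code review time / code time     >=  50%
      unit test defect density         <    5 defects/KLOC
      compile defect density           <   10 defects/KLOC
    """
    def clamp(value):
        return max(0.0, min(1.0, value))

    components = [
        clamp(design_time / code_time),                  # goal: >= 100%
        clamp(design_review_time / design_time / 0.5),   # goal: >= 50%
        clamp(code_review_time / code_time / 0.5),       # goal: >= 50%
        clamp((5 - unit_test_defects_per_kloc) / 5),     # goal: < 5 defects/KLOC
        clamp((10 - compile_defects_per_kloc) / 10),     # goal: < 10 defects/KLOC
    ]
    result = 1.0
    for value in components:
        result *= value
    return result

# Invented data for one component (times in hours, densities in defects/KLOC).
print(f"PQI = {pqi(12, 10, 6, 5, 1.0, 3.0):.2f}")   # -> PQI = 0.56 (>= 0.4: likely high quality)
```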

  16. Programming Defensively
  • Date input
    • First UI program – line input overflow
    • Same type of problem is still being exploited by modern viruses!
    • Most security vulnerabilities in software are due to [design] flaws.
    • Many such bugs can be found with a good review or inspection
  • Range checking
    • Dates (Y2K)
  “The best defense is a good offense.”
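
A hedged sketch of defensive input handling (the field, limits, and year range are invented for illustration): length and range are checked at the boundary instead of trusting the input, which is exactly the kind of check a good review or inspection looks for.

```python
from datetime import date

MAX_LINE_LENGTH = 256   # invented limit for this sketch

def read_birth_year(raw_line: str) -> int:
    """Validate one line of user input defensively before using it."""
    # Guard against over-long input instead of assuming a 'reasonable' size.
    if len(raw_line) > MAX_LINE_LENGTH:
        raise ValueError(f"input longer than {MAX_LINE_LENGTH} characters")

    text = raw_line.strip()
    if not text.isdigit():
        raise ValueError(f"not a year: {text!r}")

    # Range-check the value (the Y2K lesson: be explicit about valid ranges).
    year = int(text)
    if not 1900 <= year <= date.today().year:
        raise ValueError(f"year out of range: {year}")
    return year

if __name__ == "__main__":
    for sample in ["1975", "75", "x" * 1000]:
        try:
            print(sample[:20], "->", read_birth_year(sample))
        except ValueError as err:
            print(sample[:20], "-> rejected:", err)
```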

  17. Test Programming & Automation
  • Test software – “quick and dirty” or real?
  • Automation gotchas – e.g., using WinRunner
  • Testing the tests / Use of simulators
  • Building in testability
  • Troubleshooting on regression and soak tests
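
One hedged illustration of “building in testability” and “use of simulators” (the thermometer interface and all names are invented): the code under test depends only on an injected interface, so a simulator can stand in for the real hardware during automated regression runs.

```python
import unittest

class SimulatedThermometer:
    """Simulator standing in for a real hardware thermometer driver."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read_celsius(self):
        return next(self._readings)

def average_temperature(thermometer, samples):
    """Code under test: depends only on the injected thermometer interface."""
    return sum(thermometer.read_celsius() for _ in range(samples)) / samples

class AverageTemperatureTest(unittest.TestCase):
    def test_average_of_three_samples(self):
        sim = SimulatedThermometer([20.0, 22.0, 24.0])
        self.assertAlmostEqual(average_temperature(sim, 3), 22.0)

if __name__ == "__main__":
    unittest.main()
```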

  18. Key Questions in Test Planning
  Why test? And how do you know ...
  • what to test? (and what not to test)
    • Should 100% of the work be reviewed and/or inspected?
    • What level of “test coverage” should be targeted?
    • What should the system be able to do at this point of the project?
  • how to execute the tests?
  • where a bug really is
    • in the product? (which part?)
    • in the test? (procedure or software)
  • when you’ll be done testing? (so the product can be delivered to its customers)

  19. Some Testing Experiences
  • MPLOT – expandability & test automation
  • Space Shuttle database – accommodating spare parts, impact of expanding the database structure, changing dash numbers
  • NavCal – ‘fixed’ bugs resurfacing, pseudo-code “N” error, table automation
  • Fleet Advisor – PCMCIA card removal, scale reading
  • Domino – interactions with French ISDN and Win DLLs
  • Pliant 3000 – integrating host and remote, SW + HW, GR303, SNMP

  20. Contact Information
  Email: karen.smiley@us.abb.com
  Phone: 919-856-3054

  21. References
  • Rick Clements, “Software Testing 101”, http://www.geocities.com/rick_clements/SWtest/test101.htm
  • Tom Rooker, “Unit Testing – Approaches, Tools, and Traps”, Aug. 2003 presentation at RTP SPIN, http://www.rtpspin.org/Information/TomRooker_UT_Presentation030928.ppt
  • Bob Galen, “Software Testing and PR – What They Didn’t Teach You in School”, Dec. 2002 presentation at RTP SPIN, http://www.rgalen.com/t_files/Software_Testing_and_PR.ppt
  • Agile Testing (and links), http://www.testing.com/agile/
  • Kent Beck & Erich Gamma, “Test Infected: Programmers Love Writing Tests”, http://www.junit.org/junit/doc/testinfected/testing.htm
  • Exploratory testing articles, http://www.testingcraft.com/exploratory.html
  • Cem Kaner, James Bach, and Bret Pettichord, “Lessons Learned in Software Testing: A Context-Driven Approach”, John Wiley & Sons, Dec. 2001. See http://www.testinglessons.com/
  • Bret Pettichord’s publications, http://www.io.com/~wazmo/papers/
  • “Software Testing Hotlist”, http://www.testinghotlist.com/
