
CS 501: Software Engineering



Presentation Transcript


  1. CS 501: Software Engineering Lecture 21 Reliability 3

  2. Administration

  3. Security and People People are intrinsically insecure: • Careless (e.g., leave computers logged on, use simple passwords, leave passwords where others can read them) • Dishonest (e.g., stealing from financial systems) • Malicious (e.g., denial of service attacks) Many security problems come from inside the organization: • In a large organization, there will be some disgruntled and dishonest employees • Security relies on trusted individuals. What if they are dishonest?

  4. Design for Security: People • Make it easy for responsible people to use the system • Make it hard for dishonest or careless people (e.g., password management) • Train people in responsible behavior • Test the security of the system • Do not hide violations

  5. Suggested Reading Trust in Cyberspace, Committee on Information Systems Trustworthiness, National Research Council (1999) http://www.nap.edu/readingroom/books/trust/ Fred Schneider, Cornell Computer Science, was the chair of this study.

  6. Validation and Verification Validation: Are we building the right product? Verification: Are we building the product right? In practice, it is sometimes difficult to distinguish between the two. That's not a bug. That's a feature!

  7. The Testing Process Unit, system, and acceptance testing are major parts of a software project: • Testing requires time on the schedule • It may require substantial investment in test data, equipment, and test software • Good testing requires good people! • Documentation, including management and client reports, is an important part of testing What is the definition of "done"?

  8. The Heisenbug

  9. Test Design Testing can never prove that a system is correct. It can only show (a) that a system is correct in a special case, or (b) that it has a fault. • The objective of testing is to find faults. • Testing is never comprehensive. • Testing is expensive.

  10. Testing Strategies • Bottom-up testing. Each unit is tested with its own test environment. • Top-down testing. Large components are tested with dummy stubs (user interfaces, work-flow, client and management demonstrations). • Stress testing. Tests the system at and beyond its limits (real-time systems, transaction processing).
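To make the top-down strategy concrete, here is a minimal sketch of testing a high-level component against a dummy stub, using Python's unittest.mock. The report generator and data store are hypothetical stand-ins, not part of any particular project.

```python
import unittest
from unittest.mock import Mock

# Hypothetical top-level component: a report generator that depends on a
# lower-level data store that has not been built yet.
def generate_report(store):
    rows = store.fetch_rows()   # the lower-level unit is replaced by a stub
    return f"{len(rows)} records processed"

class TopDownReportTest(unittest.TestCase):
    def test_report_with_stubbed_store(self):
        stub_store = Mock()
        stub_store.fetch_rows.return_value = [{"id": 1}, {"id": 2}]
        self.assertEqual(generate_report(stub_store), "2 records processed")

if __name__ == "__main__":
    unittest.main()
```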

  11. Methods of Testing Closed box testing: Testing is carried out by people who do not know the internals of what they are testing. Open box testing: Testing is carried out by people who know the internals of what they are testing. (a) What is the advantage of each approach? (b) In each case, how do you set about selecting test cases?

  12. Stages of Testing Testing is most effective if divided into stages: • Unit testing (unit test) • System testing (integration test, function test, performance test, installation test) • Acceptance testing

  13. Testing: Unit Testing • Tests on small sections of a system, e.g., a single class • Emphasis is on accuracy of actual code • Test data is chosen by developer(s) based on their understanding of the specification and knowledge of the unit • Can be at various levels of granularity • Open box: by the developer(s) of the unit If unit testing is not thorough, system testing becomes almost impossible. If you are working on a project that is behind schedule, do not rush the unit testing.
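A minimal sketch of a unit test in this spirit, assuming a hypothetical Account class as the unit under test; the test data is chosen from the specification, including an invalid input.

```python
import unittest

# Hypothetical unit under test: a small class with a clear specification.
class Account:
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

class AccountUnitTest(unittest.TestCase):
    def test_deposit_increases_balance(self):
        acct = Account(balance=10)
        acct.deposit(5)
        self.assertEqual(acct.balance, 15)

    def test_rejects_non_positive_deposit(self):
        # Chosen from the specification: invalid input must be rejected.
        with self.assertRaises(ValueError):
            Account().deposit(0)

if __name__ == "__main__":
    unittest.main()
```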

  14. Testing: System and Sub-System Testing • Tests on components or the complete system, combining units that have already been thoroughly tested • Emphasis is on integration and interfaces • Uses trial data that is typical of the actual data, and/or stresses the boundaries of the system, e.g., failures, restart • Is carried out systematically, adding components until the entire system is assembled • Open or closed box: by the development team or by special testers System testing is finished fastest if each component is completely debugged before assembling the next.

  15. Testing: Acceptance Testing • Closed box: by the client • The entire system is tested as a whole • The emphasis is on whether the system meets the requirements • Uses real data in realistic situations, with actual users, administrators, and operators The acceptance test must be successfully completed before the new system can go live or replace a legacy system. Completion of the acceptance test may be a contractual requirement before the system is paid for.

  16. Variants of Acceptance Testing Alpha Testing: Clients operate the system in a realistic but non-production environment Beta Testing: Clients operate the system in a carefully monitored production environment Parallel Testing: Clients operate new system alongside old production system with same data and compare results
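A minimal sketch of the parallel-testing variant: each transaction is fed to both systems and any disagreement is logged for investigation. The names legacy_process and new_process are hypothetical stand-ins for the old and new implementations.

```python
# Run every transaction through both the legacy and the new system and
# collect any cases where the two disagree.
def parallel_test(transactions, legacy_process, new_process):
    mismatches = []
    for tx in transactions:
        old_result = legacy_process(tx)
        new_result = new_process(tx)
        if old_result != new_result:
            mismatches.append((tx, old_result, new_result))
    return mismatches

# Example usage with trivial stand-in systems; the second one diverges
# deliberately so that a mismatch is reported.
if __name__ == "__main__":
    legacy = lambda tx: tx * 2
    new = lambda tx: tx * 2 if tx < 100 else tx * 2 + 1
    for tx, old, new_r in parallel_test([1, 50, 100], legacy, new):
        print(f"mismatch on {tx}: legacy={old}, new={new_r}")
```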

  17. Test Cases Test cases are specific tests that are chosen because they are likely to find faults. Test cases are chosen to balance expense against chance of finding serious faults. • Cases chosen by the development team are effective in testing known vulnerable areas. • Cases chosen by experienced outsiders and clients will be effective in finding gaps left by the developers. • Cases chosen by inexperienced users will find other faults.

  18. Test Case Selection: Coverage of Inputs Objective is to test all classes of input • Classes of data -- major categories of transaction and data inputs. Cornell example: (undergraduate, graduate, transfer, ...) by (college, school, program, ...) by (standing) by (...) • Ranges of data -- typical values, extremes • Invalid data, reversals, and special cases.
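A minimal sketch of input-coverage test selection, assuming a hypothetical tuition_rate function in the spirit of the Cornell example. The cases cover each class of input, the extremes and boundaries of each range, and invalid data.

```python
import unittest

# Hypothetical function under test: classify a student's tuition rate.
def tuition_rate(standing, credits):
    if credits < 0:
        raise ValueError("credits cannot be negative")
    if standing == "undergraduate":
        return "full" if credits >= 12 else "part"
    if standing == "graduate":
        return "full" if credits >= 9 else "part"
    raise ValueError(f"unknown standing: {standing}")

class InputCoverageTest(unittest.TestCase):
    def test_classes_and_ranges(self):
        # One case per class of input, plus the extremes of each range.
        cases = [
            ("undergraduate", 12, "full"),   # boundary: exactly full-time
            ("undergraduate", 11, "part"),   # just below the boundary
            ("graduate", 9, "full"),
            ("graduate", 0, "part"),         # extreme low value
        ]
        for standing, credits, expected in cases:
            with self.subTest(standing=standing, credits=credits):
                self.assertEqual(tuition_rate(standing, credits), expected)

    def test_invalid_data(self):
        with self.assertRaises(ValueError):
            tuition_rate("undergraduate", -1)   # out-of-range value
        with self.assertRaises(ValueError):
            tuition_rate("alumni", 12)          # unknown class of data

if __name__ == "__main__":
    unittest.main()
```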

  19. Test Case Selection: Program Objective is to test all functions of each computer program • Paths through the computer programs: draw the program flow graph; check that every path is executed at least once • Dynamic program analyzers: count the number of times each path is executed; highlight or color source code; cannot be used with time-critical software
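A minimal sketch of what a dynamic program analyzer records, with the branch counters written by hand for illustration; real tools (e.g., coverage.py for Python) instrument the program automatically rather than requiring manual counters.

```python
from collections import Counter

# Count how often each branch of a function executes during a test run.
branch_counts = Counter()

def absolute(x):
    if x < 0:
        branch_counts["negative"] += 1
        return -x
    branch_counts["non-negative"] += 1
    return x

if __name__ == "__main__":
    for value in [-3, 0, 7]:
        absolute(value)
    # Every branch should appear in the counts at least once.
    print(dict(branch_counts))   # {'negative': 1, 'non-negative': 2}
```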

  20. Test Strategies: Program (a) Statement analysis (b) Branch testing If every statement and every branch is tested, is the program correct?
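The answer is no, as the following sketch illustrates: two tests can execute every statement and both outcomes of every branch, yet a particular combination of branches (a path) still hides a fault.

```python
# The two asserts below achieve 100% statement and branch coverage,
# but the path a=False, b=True is never exercised.
def f(a, b):
    x = 0
    if a:               # branch 1
        x = 1
    if b:               # branch 2
        return 10 / x   # divides by zero on the path a=False, b=True
    return x

assert f(True, True) == 10    # branch 1 taken, branch 2 taken
assert f(False, False) == 0   # branch 1 skipped, branch 2 skipped
# f(False, True) raises ZeroDivisionError -- a fault on the untested path.
```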

  21. Statistical Testing • Determine the operational profile of the software • Select or generate a profile of test data • Apply test data to system, record failure patterns • Compute statistical values of metrics under test conditions
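A minimal sketch of these four steps, assuming a hypothetical system_under_test: inputs are drawn from a weighted operational profile and failures are tallied to estimate the failure rate under realistic conditions.

```python
import random

# Step 1: the operational profile -- relative frequency of each input class.
OPERATIONAL_PROFILE = {"lookup": 0.70, "update": 0.25, "report": 0.05}

def system_under_test(transaction):
    # Stand-in for the real system; reports an occasional failure.
    return transaction != "report" or random.random() > 0.01

def statistical_test(n_runs, seed=42):
    rng = random.Random(seed)   # fixed seed so the test run is repeatable
    kinds = list(OPERATIONAL_PROFILE)
    weights = list(OPERATIONAL_PROFILE.values())
    failures = 0
    for _ in range(n_runs):
        # Step 2: generate test data matching the profile.
        tx = rng.choices(kinds, weights=weights)[0]
        # Step 3: apply it to the system and record failures.
        if not system_under_test(tx):
            failures += 1
    # Step 4: compute the metric under test conditions.
    return failures / n_runs

if __name__ == "__main__":
    print(f"estimated failure rate: {statistical_test(100_000):.5f}")
```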

  22. Statistical Testing Advantages: • Can test with very large numbers of transactions • Can test with extreme cases (high loads, restarts, disruptions) • Can repeat after system modifications Disadvantages: • Uncertainty in operational profile (unlikely inputs) • Expensive • Can never prove high reliability

  23. Regression Testing Regression testing is one of the key techniques of software engineering. It is applied to modified software to provide confidence that modifications behave as intended and do not adversely affect the behavior of unmodified code. • Basic technique is to repeat the entire testing process after every change, however small.

  24. Regression Testing: Program Testing 1. Collect a suite of test cases, each with its expected behavior. 2. Create scripts to run all test cases and compare with expected behavior. (Scripts may be automatic or have human interaction.) 3. When a change is made, however small (e.g., a bug is fixed), add a new test case that illustrates the change (e.g., a test case that revealed the bug). 4. Before releasing the changed code, rerun the entire test suite.
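A minimal sketch of such a suite, assuming a hypothetical normalize_name unit. The last test case records a (hypothetical) bug fix, as in step 3 above, so the failure can never silently return.

```python
import unittest

# Hypothetical unit under regression test.
def normalize_name(name):
    return " ".join(name.split()).title()

class RegressionSuite(unittest.TestCase):
    def test_basic_case(self):
        self.assertEqual(normalize_name("ada lovelace"), "Ada Lovelace")

    def test_extra_whitespace(self):
        self.assertEqual(normalize_name("  alan   turing "), "Alan Turing")

    def test_bug_empty_string(self):
        # Added when a (hypothetical) crash on empty input was fixed:
        # the failing case becomes a permanent member of the suite.
        self.assertEqual(normalize_name(""), "")

if __name__ == "__main__":
    unittest.main()   # rerun the entire suite before releasing any change
```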

  25. Documentation of Testing Testing should be documented for thoroughness, visibility, and maintenance: (a) Test plan (b) Test specification and evaluation (c) Test description (d) Test analysis report

  26. A Note on User Interface Testing User interfaces need two categories of testing. • During the design phase, user interface testing is carried out with trial users to ensure that the design is usable. This design testing is also used to develop graphical elements and to validate the requirements. • During the implementation phase, the user interface goes through the standard steps of unit and system testing to check the reliability of the implementation. Acceptance testing is then carried out on the complete system.

  27. A CS 501 Project: Methodology • How we’re user testing: • One-on-one, 30-45 minute user tests with staff at all levels • Specific tasks to complete • No prior demonstration or training • Pre-planned questions designed to stimulate feedback • Emphasis on testing the system, not the stakeholder! • Standardized tasks / questions among all testers

  28. A CS 501 Project: Methodology • How we’re user testing: • Types of questions we asked: • Which labels, keywords were confusing? • What was the hardest task? • What did you like, that should not be changed? • If you were us, what would you change? • How does this system compare to your paper-based system? • How useful do you find the new report layout? (admin) • Do you have any other comments or questions about the system? (open ended)

  29. A CS 501 Project: Results What we’ve found: Issue #1, Search Form Confusion!

  30. A CS 501 Project: Results What we’ve found: Issue #2, Inconspicuous Edit/Confirmations!

  31. A CS 501 Project: Results What we’ve found: Issue #3, Confirmation Terms

  32. A CS 501 Project: Results What we’ve found: Issue #4, Entry Semantics

  33. A CS 501 Project: Results, Addressing What we’ve found: Issue #5, Search Results Disambiguation & Semantics

  34. Fixing Bugs • Isolate the bug: intermittent --> repeatable; complex example --> simple example • Understand the bug: root cause, dependencies, structural interactions • Fix the bug: design changes, documentation changes, code changes
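A minimal sketch of the "intermittent --> repeatable" step: pin the source of nondeterminism (here a random seed) so a hypothetical flaky fault reproduces on every run, then shrink toward the simplest failing example.

```python
import random

# Hypothetical buggy unit: the off-by-one index can exceed the list,
# so it fails only on some random draws.
def flaky_pick(items, rng):
    return items[rng.randint(0, len(items))]   # bug: should be len(items) - 1

def reproduce():
    # Search for a seed that triggers the fault deterministically.
    for seed in range(1000):
        rng = random.Random(seed)
        try:
            flaky_pick(["a", "b"], rng)
        except IndexError:
            return seed        # smallest seed that makes the bug repeatable
    return None

if __name__ == "__main__":
    seed = reproduce()
    print(f"repeatable failure with seed {seed}")   # same seed, same failure
```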

  35. Moving the Bugs Around Fixing bugs is an error-prone process! • When you fix a bug, fix its environment • Bug fixes need static and dynamic testing • Repeat all tests that have the slightest relevance (regression testing) Bugs have a habit of returning! • When a bug is fixed, add the failure case to the test suite for the future.
