
Software Engineering Lecture 11




  1. Software Engineering Lecture 11 Vladimir Safonov, Professor, head of laboratory St. Petersburg University Email: v_o_safonov@mail.ru WWW: http://user.rol.ru/~vsafonov

  2. Software testing • Testing is the systematic checking of a program for the purpose of detecting and fixing bugs, and also for investigating its functional characteristics and the resources it consumes • E. Dijkstra: “Testing can prove the presence of bugs only, but can never prove their absence” • The general task of formal verification of programs (instead of testing them) is not yet solved. Verification efforts have succeeded only for certain classes of programs (embedded systems, telecommunication systems) • “Testing vs. debugging”: testing and debugging are not the same (C) Vladimir O. Safonov, 2004

  3. Kinds and methods of testing • Testing all bug fixes (regression testing) • Compatibility testing (testing of conformance to the specification); examples – JCK for Java; BSI 5.5 for the Sun Pascal Compiler • Performance testing, or benchmarking – SPEC, etc.; testing the time / memory resources taken by the program, and the performance it demonstrates • Stress testing – testing for stability under very extensive use: millions of object allocations; millions of requests to the server, etc. (C) Vladimir O. Safonov, 2004
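To make the performance-testing bullet concrete, here is a minimal micro-benchmark sketch in Java. It is not from the lecture; the class and method names are illustrative, and the warm-up loop is a simplifying assumption (real benchmarking tools do much more careful measurement).

```java
// Minimal micro-benchmark sketch: time repeated calls to an operation.
// All names here are illustrative, not part of any real benchmark suite.
public class MicroBenchmark {
    // A hypothetical operation under measurement.
    static long sumTo(int n) {
        long s = 0;
        for (int i = 1; i <= n; i++) s += i;
        return s;
    }

    public static void main(String[] args) {
        // Warm-up runs so the JIT has compiled the method before timing.
        for (int i = 0; i < 1000; i++) sumTo(10_000);

        long start = System.nanoTime();
        long result = 0;
        for (int i = 0; i < 1000; i++) result = sumTo(10_000);
        long elapsed = System.nanoTime() - start;

        System.out.println("result=" + result + ", elapsed ns=" + elapsed);
    }
}
```

A stress test, by contrast, would raise the iteration counts by several orders of magnitude and watch for instability (exhausted memory, degraded latency) rather than a single timing number.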

  4. Testing strategies • White box – the source of the program under test is available. The goal is to achieve maximum test coverage (block coverage, branch coverage, condition coverage, method coverage, etc.) • Black box – the source of the program under test is unavailable. The goal is to achieve 100% method coverage, and to test against a “reasonable maximum” of combinations of argument values (C) Vladimir O. Safonov, 2004
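As a small illustration of the branch-coverage goal in white-box testing, the sketch below (my example, not from the slides) shows a method with two branches; a test set needs at least one input per branch to reach 100% branch coverage of it.

```java
// Branch-coverage sketch (white-box testing). The method has two
// branches, so two well-chosen inputs suffice for full branch coverage.
public class AbsExample {
    static int abs(int x) {
        if (x < 0) {      // branch 1: negative input
            return -x;
        }
        return x;         // branch 2: non-negative input
    }

    public static void main(String[] args) {
        // One test per branch gives 100% branch coverage of abs().
        assert abs(-5) == 5;  // exercises the x < 0 branch
        assert abs(3) == 3;   // exercises the x >= 0 branch
        System.out.println("both branches exercised");
    }
}
```

Note that full branch coverage is still weaker than exhaustive testing: for instance, `abs(Integer.MIN_VALUE)` overflows in two's-complement arithmetic even though both branches are covered.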

  5. Strategies for choosing sets of arguments for testing (G. Myers) • Boundary values – testing at boundary values, like 0, 1, -1, null • Equivalence class partitioning – choose one value from each typical equivalence class (a subset of argument values) such that, for every value from that class, the module under test is supposed to behave the same way. Example: GCD(x, y), where x > 0 is some concrete integer value • For testing P(x, y), M * N sets of argument values are possible in all, where M is the number of selected values for X and N is the number of selected values for Y (C) Vladimir O. Safonov, 2004
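A sketch of how these two Myers strategies might be applied to the GCD example from the slide. The partition into equivalence classes below is my own illustrative choice, not one prescribed by the lecture.

```java
// Test selection sketch for GCD: boundary values (0, 1) plus one
// representative value per assumed equivalence class of inputs.
public class GcdExample {
    // Euclid's algorithm for non-negative arguments.
    static int gcd(int x, int y) {
        while (y != 0) { int t = x % y; x = y; y = t; }
        return x;
    }

    public static void main(String[] args) {
        // Boundary values
        assert gcd(0, 7) == 7;
        assert gcd(1, 7) == 1;
        // Class: coprime pair (expected result 1)
        assert gcd(9, 4) == 1;
        // Class: one argument divides the other
        assert gcd(12, 4) == 4;
        // Class: common factor > 1, neither divides the other
        assert gcd(12, 8) == 4;
        System.out.println("all representative cases pass");
    }
}
```

Choosing one representative per class keeps the M * N combination count manageable while still probing each kind of expected behavior.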

  6. Classification of the kinds of testing • Static and dynamic testing • Static testing (on average helps to find 70% of bugs): reading the code; individual thorough code review, group thorough code review, group inspection – analysis against typical bugs; analysis using a source code verifier (lint, FlexeLint, etc.) • Typical bugs: - using undefined (“garbage”) values of variables - array index out of bounds - pointer bugs (nil/null, etc.) - interface bugs: calling P(y, x) instead of P(x, y), where x and y are of the same type (C) Vladimir O. Safonov, 2004
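Two of the "typical bugs" from the slide can be shown in a few lines of Java. The names below are illustrative; the point is that the swapped-argument interface bug compiles cleanly when both parameters share a type, which is exactly why static review and verifier tools look for it.

```java
// Sketch of two typical bugs: an interface bug (arguments swapped at a
// call site, invisible to the type checker when the types match) and an
// out-of-bounds index, here neutralized by an explicit guard.
public class TypicalBugs {
    // divide(dividend, divisor): swapping the two int arguments at a
    // call site compiles fine but silently computes the wrong value.
    static int divide(int dividend, int divisor) {
        return dividend / divisor;
    }

    // Guarded access instead of a raw a[i], which would throw
    // ArrayIndexOutOfBoundsException for a bad index.
    static int safeGet(int[] a, int i, int fallback) {
        return (i >= 0 && i < a.length) ? a[i] : fallback;
    }

    public static void main(String[] args) {
        assert divide(10, 2) == 5;       // intended call
        assert divide(2, 10) == 0;       // swapped arguments: wrong result, no error
        int[] a = {1, 2, 3};
        assert safeGet(a, 5, -1) == -1;  // out-of-bounds index handled
    }
}
```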

  7. Organization of testing at commercial companies • Nightly, weekly, monthly building / testing (“heartbeat tests”) • Test engineers: SQE (Software Quality Engineers) – developers of tests and testing tools; SQA engineers (Software Quality Assurance) – as a rule, just “testers” (those who “press buttons”, run the ready test suites, test GUIs by hand, etc.) • Test plan – the plan of testing prepared by the SQE manager • Testing tools: - test harness – a tool for running tests and displaying and analyzing the results: JavaTest, JUnit, etc. - test base – a set of test suites consisting of tests and their configuration and other accompanying files: * test – a set of test cases * golden file(s) – files with expected results (BUT: when random testing is necessary, “check correct” functions are used instead of golden files) * configuration files, exclude lists, etc. - test workspace(s) – the workspace for storing the log and the results of testing – KEEP IT APART FROM THE TEST BASE • A typical mistake (bug) of small companies – “no resources for SQE” (causing poor quality of the product) (C) Vladimir O. Safonov, 2004
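The golden-file vs. "check correct" distinction from the slide can be sketched as follows. This is my illustration, not code from any real harness: a deterministic test compares actual output to a stored expected value, while a randomized test, which has no fixed expected output, validates a property of the result instead.

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of two oracle styles: golden-value comparison for
// deterministic tests, and a check-correct predicate for random tests.
public class HarnessSketch {
    // Deterministic case: compare actual output to the "golden" value.
    static boolean matchesGolden(String actual, String golden) {
        return actual.equals(golden);
    }

    // Random-input case: no fixed golden output exists, so a
    // check-correct predicate validates the result instead
    // (here: the array is sorted in non-decreasing order).
    static boolean checkSortedCorrectly(int[] a) {
        for (int i = 1; i < a.length; i++)
            if (a[i - 1] > a[i]) return false;
        return true;
    }

    public static void main(String[] args) {
        assert matchesGolden("42\n", "42\n");

        int[] data = new Random(1).ints(100, 0, 1000).toArray();
        Arrays.sort(data);
        assert checkSortedCorrectly(data);  // property check, no golden file
        System.out.println("both oracle styles pass");
    }
}
```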

  8. Maintenance (sustaining) • The most resource-consuming stage of the software lifecycle • Includes: installing new versions of the product, training users and answering their questions, fixing bugs, enhancing the product based on users’ requests • As a rule, the resources allocated for maintenance are too small (“0.25 of an engineer”) • Please note that a product dies if it is not maintained • Maintenance is very often performed by engineers other than the authors of the original version of the product • Typical technological issue: locating the aspect of the code that implements the functionality to be fixed (C) Vladimir O. Safonov, 2004

  9. Bugs: evaluation and fixing • Bug tracking database – the database containing information on all known bug reports. As a rule it is proprietary and confidential; it is publicly available only for open / shared source products (e.g., Bugzilla) • Bug id – the ordering number of the bug • Synopsis – a brief description of the bug, with the appropriate keywords • Description – a detailed description of the problem • Priority – the urgency of the bug; defined by the customer and possibly decreased by the product manager before shipping the release. As a rule, priority 1 means the bug is extremely urgent; such a bug should be fixed in no more than a week • Severity – the internal functional seriousness of the bug (C) Vladimir O. Safonov, 2004

  10. Bugs: evaluation and fixing (cont.) • The stages of processing a bug: - submitted – the initial stage; performed by a CTE (Customer Technical Escalations) engineer - accepted – initially processed by the responsible manager of the project, who appoints the engineer responsible for fixing the bug - evaluated – the responsible engineer has investigated the problem and entered the result of his evaluation into the bug tracking database, in particular his suggested fix – the proposed correction of the bug (in the form of a collection of updates to the sources) - fixed – the bug fix is entered into the source code workspace and tested - integrated – the fix is integrated into the master source code workspace; for P1 and P2 bugs, a patch should be made (a new version of the product with the appropriate corrections) (C) Vladimir O. Safonov, 2004
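The stages above form a simple forward-moving state machine, which can be sketched as a Java enum. This is an illustrative model only; real bug trackers (Bugzilla and others) define their own state schemas, and the class and method names here are hypothetical.

```java
// Sketch of the bug-processing stages as a one-way state machine:
// submitted -> accepted -> evaluated -> fixed -> integrated.
public class BugReport {
    enum State { SUBMITTED, ACCEPTED, EVALUATED, FIXED, INTEGRATED }

    private State state = State.SUBMITTED;

    State state() { return state; }

    // Advance one stage forward; INTEGRATED is terminal.
    void advance() {
        State[] all = State.values();
        if (state.ordinal() < all.length - 1)
            state = all[state.ordinal() + 1];
    }

    public static void main(String[] args) {
        BugReport bug = new BugReport();
        bug.advance();  // submitted -> accepted
        bug.advance();  // accepted -> evaluated
        assert bug.state() == State.EVALUATED;
        System.out.println("bug is now: " + bug.state());
    }
}
```

A fuller model would add the alternative closing outcomes from the next slide (duplicate, not reproducible, will not fix) as separate terminal states.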

  11. Bugs: the alternative outcomes of their processing • Fixed/integrated/verified – the bug fix is integrated and tested by a release engineer (independent testing) • Closed because: - fixed and verified - the bug is a duplicate of an already known bug - not reproducible - will not fix (no resources to fix the bug); undesirable No matter why the bug is closed, the bug report remains stored in the bug tracking database (C) Vladimir O. Safonov, 2004

  12. Manufacturing the product • Versions: dot-release (2.0, 2.1, …) – contains major new features; dot-dot-release (“bug fix release”) – contains new bug fixes only • Early Access, Alpha, Beta, FCS (First Customer Shipment) – the stages of manufacturing the product; FCS => no P1 bugs; Beta => no P1/P2 bugs • Alpha and Beta testing (by universities and individual volunteers) • Before each release – a QA cycle • Documentation + product notes (the latter is a document issued just before the release that contains the list of known but not-yet-fixed bugs) (C) Vladimir O. Safonov, 2004

  13. Software Process • Chief Programmer’s Team – described by F. P. Brooks (IBM, 1970s): surgeon, copilot, administrator, language expert, tester, toolsmith, archive keeper, editor, administrator’s secretary, editor’s secretary – 10 roles in total (separate persons if possible) • Capability Maturity Model (CMM) – CMU SEI; Motorola (level 5): five levels of software process organization – initial, repeatable, defined, managed, optimizing • Extreme Programming (XP) – pair programming, developing tests before implementing new functionality, refactoring, collective code ownership • Microsoft’s approach (Software Project Survival Guide): tiger teams • Psychological and moral issues (C) Vladimir O. Safonov, 2004

  14. Email as a tool for organizing the software process • Email is a working tool; all project decisions should be discussed by email • To: the specific engineer; Cc: the manager and the technical group • Subject: should be explicitly present and very concrete • Friendly reminders (up to 10-15 times); please be patient • Email should be used for: technical discussions; event notifications; bug report notifications; source code workspace notifications (automated); forwarding working documentation; shipping updates to the product (automated; most important for remote work) • Politeness: Dear John, <content> Thanks, Ivan • No rudeness, no spam. Each email should be concrete and to the point (C) Vladimir O. Safonov, 2004
