Software Testing 2


  1. Software Testing 2 Damian Gordon

  2. Edsger W. Dijkstra • Born May 11, 1930 • Died August 6, 2002 • Born in Rotterdam, Netherlands • A Dutch computer scientist, who received the 1972 Turing Award for fundamental contributions to developing programming languages.

  3. Edsger W. Dijkstra • “Testing shows the presence, not the absence of bugs” • “Program testing can be used to show the presence of bugs, but never to show their absence!”

  4. Principles of Testing • Let’s call that Principle #1: • Testing shows the presence of defects; if no defects are found, that is no proof of correctness.
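
As a minimal sketch of this principle in Python (the function and tests are invented for illustration): a test suite can pass in full while the code is still wrong for inputs the tests never exercise.

```python
def absolute_value(x):
    """Intended to return |x| -- but the implementation is buggy."""
    if x > 0:
        return x
    return -x if x < -1 else x  # bug: values in -1 <= x < 0 are returned unchanged

def test_absolute_value():
    # All three tests pass, yet absolute_value(-0.5) returns -0.5.
    assert absolute_value(5) == 5
    assert absolute_value(-5) == 5
    assert absolute_value(0) == 0

test_absolute_value()  # passes silently: no failures found, but no proof of correctness
```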

  5. Principles of Testing • Principle #2: • Exhaustive testing is impossible: testing all combinations of inputs and preconditions is infeasible, so focus instead on risk analysis and priorities.
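
To make the scale concrete, a back-of-the-envelope calculation (the throughput figure is an assumption):

```python
# A function taking just two 32-bit integer inputs has 2**64 input combinations.
# Even at an optimistic one billion test executions per second, exhaustively
# testing that single function would take centuries.
combinations = 2 ** 64
tests_per_second = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365
print(f"{combinations / (tests_per_second * seconds_per_year):.0f} years")  # ~585
```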

  6. Principles of Testing • Principle #3: • Early testing is important: test as soon as possible, and focus on defined objectives.

  7. Principles of Testing • Principle #4: • Defect clustering: a small section of code may contain most of the defects.
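
A toy illustration in Python (the defect log is invented): tallying defects per module often reveals this clustering.

```python
from collections import Counter

# Hypothetical defect log: each entry names the module where a defect was found.
defect_log = ["billing", "billing", "ui", "billing", "auth",
              "billing", "billing", "ui"]

by_module = Counter(defect_log)
print(by_module.most_common())  # [('billing', 5), ('ui', 2), ('auth', 1)]
# One small part of the codebase ("billing") accounts for most of the defects,
# so it deserves a proportionally larger share of the testing effort.
```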

  8. Principles of Testing • Principle #5: • Pesticide paradox: running the same test cases over and over again will eventually stop finding new defects, so review and revise them regularly.

  9. Principles of Testing • Principle #6: • Testing is context dependent: safety-critical software is tested differently from an e-commerce site.

  10. Principles of Testing • Principle #7: • Absence-of-errors fallacy: even a system with no known defects is not useful if it does not fulfil the users’ needs.

  11. The Test Process

  12.–16. Test Process • Built up stage by stage over five slides, the test process comprises: Planning and Control; Analysis and Design; Implementation and Execution; Evaluating Exit Criteria and Reporting; and Test Closure Activities.

  17. Test Planning and Control • Understanding the goals and objectives of the customers, stakeholders, and the project, and the risks that testing is intended to address. • This gives us the mission of testing or the test assignment.

  18. Test Planning and Control • To help achieve this mission, the test strategy and test policies are created. • Test Strategy: an overall high-level approach, e.g. “system testing is carried out by independent software testers”. • Test Policies: rules for testing, e.g. “we always review design documents”. • From here we can define a test plan.

  19. Test Planning and Control • Test Plan: • Determine the scope and risks and identify the objectives of testing. We consider what software, components, systems or other products are in scope for testing.

  20. Test Planning and Control • Test Plan: • Determine the test approach (techniques, test items, coverage, identifying and interfacing with the teams involved in testing, testware)

  21. Test Planning and Control • Test Plan: • Implement the test policy and/or the test strategy

  22. Test Planning and Control • Test Plan: • Determine the required test resources (e.g. people, test environment, PCs): from the planning we have already done we can now go into detail; we decide on our team make-up and we also set up all the supporting hardware and software we require for the test environment.

  23. Test Planning and Control • Test Plan: • Schedule test analysis and design tasks, test implementation, execution and evaluation: we will need a schedule of all the tasks and activities, so that we can track them and make sure we can complete the testing on time.

  24. Test Planning and Control • Test Plan: • Determine the exit criteria: we need to set criteria, such as coverage criteria (for example, the percentage of statements in the software that must be executed during testing), that will help us track whether we are completing the test activities correctly.
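
As a sketch of how such an exit criterion might be checked (the threshold and figures are hypothetical; in practice a coverage tool supplies the numbers):

```python
def meets_exit_criterion(statements_executed, statements_total, threshold=0.80):
    """Return True if statement coverage reaches the agreed exit criterion."""
    coverage = statements_executed / statements_total
    print(f"Statement coverage: {coverage:.0%} (required: {threshold:.0%})")
    return coverage >= threshold

# Example: 850 of 1,000 statements executed -> 85% coverage, criterion met.
assert meets_exit_criterion(850, 1000)
```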

  25. Test Analysis and Design • This phase focuses on moving from more general objectives to tangible test conditions and test designs.

  26. Test Analysis and Design • We start by designing some Black Box tests based on the existing specifications of the system. This process will often result in the specifications themselves being updated, clarified, and disambiguated.
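
For instance, suppose a specification states that an input field accepts ages from 18 to 65 inclusive. A standard black-box technique such as boundary-value analysis (named here as an illustration, not taken from the slide) derives test conditions from the specification alone:

```python
# Hypothetical specification: valid ages are 18..65 inclusive.
def is_valid_age(age):
    return 18 <= age <= 65

# Test conditions chosen at and around the boundaries, without
# looking at the implementation (black box).
boundary_cases = {17: False, 18: True, 65: True, 66: False}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected, f"age {age}: unexpected result"
```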

  27. Test Analysis and Design • Next we identify and prioritise the tests, selecting representative tests that relate to the aspects of the software that carry risk or are of particular interest.

  28. Test Analysis and Design • Next we identify the data that will be used to test this software. This will include specifically designed test data as well as “like-live” data. • It may be necessary to generate a large volume of “like-live” data to stress-test the system. • It is important that the “like-live” data doesn’t include real customer names, etc., if there are any confidentiality issues.
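
A minimal sketch of anonymising “like-live” records in Python (the field names and masking scheme are assumptions):

```python
import copy

def anonymise(records):
    """Mask personally identifying fields while keeping the shape
    and volume of the live data."""
    safe = []
    for i, record in enumerate(records, start=1):
        clone = copy.deepcopy(record)
        clone["name"] = f"Customer {i:05d}"               # mask the real name
        clone["email"] = f"customer{i:05d}@example.test"  # mask the real address
        safe.append(clone)
    return safe

live = [{"name": "Jane Doe", "email": "jane@real-domain.example", "balance": 120.50}]
print(anonymise(live))
# [{'name': 'Customer 00001', 'email': 'customer00001@example.test', 'balance': 120.5}]
```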

  29. Test Analysis and Design • Finally we design the test environment set-up and identify any required infrastructure and tools, including support tools such as spreadsheets, word processors, project-planning tools, etc.

  30. Test Implementation and Execution • We build the tests based on test cases. • We may set up testware for automated testing. • We need to allow for the fact that set-up and configuration will take significant time.

  31. Test Implementation and Execution • Implementation • Prioritise test cases. • Group similar test cases together into a test suite; the cases in a suite usually share test data. • Verify that the test environment has been set up correctly.
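
A sketch using Python’s unittest module, where related test cases share test data through a common setUp and are grouped into a suite (the function under test is invented):

```python
import unittest

def apply_discount(total, is_member):
    """Hypothetical function under test: members get 10% off."""
    return total * 0.9 if is_member else total

class DiscountTests(unittest.TestCase):
    """Similar test cases grouped together; they share test data."""

    def setUp(self):
        self.order_total = 100.0  # shared test data

    def test_no_discount_for_non_members(self):
        self.assertEqual(apply_discount(self.order_total, is_member=False), 100.0)

    def test_member_discount(self):
        self.assertAlmostEqual(apply_discount(self.order_total, is_member=True), 90.0)

# Group the test cases into a suite and run them together.
suite = unittest.TestLoader().loadTestsFromTestCase(DiscountTests)
unittest.TextTestRunner(verbosity=2).run(suite)
```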

  32. Test Implementation and Execution • Execution • Execute the test suites following the test procedures. • Log the outcome of test execution and record all important information. • Compare actual results with expected results, and report any discrepancies as incidents. • Repeat test activities as a result of action taken for each discrepancy.
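
The execute–log–compare loop could look like this minimal sketch (the test-case format and incident fields are assumptions):

```python
from datetime import datetime

def execute_suite(test_cases):
    """Run each test, log the outcome, and record any discrepancy
    between actual and expected results as an incident."""
    incidents = []
    for name, func, args, expected in test_cases:
        actual = func(*args)
        outcome = "PASS" if actual == expected else "FAIL"
        print(f"{datetime.now().isoformat()} {name}: {outcome}")  # test log entry
        if outcome == "FAIL":
            incidents.append({"test": name, "inputs": args,
                              "expected": expected, "actual": actual})
    return incidents  # each incident triggers follow-up and a re-test

def add(a, b):
    return a - b  # deliberate bug for the demonstration

print(execute_suite([("add_small_numbers", add, (2, 3), 5)]))
```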

  33. Evaluating Exit Criteria and Reporting • Comparing test execution to the stated objectives of the development. • Exit criteria (cf. Fagan inspections) are requirements which must be met to complete a specific process; an example might be “the testing team completes and files the testing report”.

  34. Evaluating Exit Criteria and Reporting • Check the test logs against the exit criteria: is all evidence present and fully documented? • Assess if more tests are needed. • Write a Test Summary Report for stakeholders.
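
A minimal sketch of assembling a Test Summary Report from logged results (the record structure is an assumption):

```python
def summary_report(results):
    """Condense logged test outcomes into a short report for stakeholders."""
    total = len(results)
    passed = sum(1 for r in results if r["passed"])
    return (f"Test Summary Report\n"
            f"  Tests run: {total}\n"
            f"  Passed:    {passed}\n"
            f"  Failed:    {total - passed}\n"
            f"  Pass rate: {passed / total:.0%}")

print(summary_report([{"passed": True}, {"passed": True}, {"passed": False}]))
```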

  35. Test Closure Activities • Test closure activities include: • Check all deliverables have been delivered. • Finalise and archive all testware, such as scripts, infrastructure, etc., for future projects. • Evaluate how the testing went and analyse lessons learned for future projects.

  36. Psychology of Testing • Independent Testing – who is a tester? • Why do we sometimes not get on with the rest of the team?

  37. Psychology of Testing • Independent Testing – who is a tester? • A tester comes to the process from a critical perspective: unlike a designer or developer, who works positively and optimistically towards the project, the tester must always focus on judging and appraising.

  38. Psychology of Testing • Independent Testing – who is a tester? • Suppose you were going to cook a meal to enter in a competition for chefs. You select the menu, collect the ingredients, cook the food, set the table, and serve the meal. If you want to win, you do each task as well as you can. • Suppose instead you are one of the judges evaluating the competition meals. You examine everything critically, including the menu, the ingredients, the methods used, keeping to time and budget allowances, choice of ingredients, the elegance of the table setting and the serving, and the look and taste of the meal. To differentiate between the competition chefs, you'll praise every good aspect of their performances but you'll also note every fault and error each chef made. • So it is with software testing: building the software requires a different mindset from testing the software.

  39. Psychology of Testing • Independent Testing – who is a tester? • This is not to say that a programmer cannot be a tester, or a tester cannot be a programmer; in fact, in many organisations people assume both roles at different times. • The programmer is usually the first line of testing, since they will be testing elements of the program as they write the code, finding and eliminating errors before anyone else sees it.

  40. Psychology of Testing • Independent Testing – who is a tester? • The big problem is that it’s hard to spot issues with your own work, so someone who is independent of the development process is better placed to review and judge it.

  41. Psychology of Testing • Independent Testing – who is a tester? • Several levels of independence can be identified, listed here from the lowest level of independence to the highest: • tests by the person who wrote the item under test • tests by another person within the same team, such as another programmer • tests by a person from a different organizational group, such as an independent test team • tests designed by a person from a different organization or company, such as outsourced testing or certification by an external body

  42. Psychology of Testing • Why do we sometimes not get on with the rest of the team? • Some people on a team want to get the project done and delivered as quickly as possible, whereas others want to ensure that a bug-free system is delivered.

  43. Psychology of Testing • Why do we sometimes not get on with the rest of the team? • If people are unclear as to the amount of testing that has to be done to complete the test strategy, or if development takes longer than anticipated, there may be pressure to truncate the test process.

  44. Psychology of Testing • Why do we sometimes not get on with the rest of the team? • When a tester is checking code and detects errors, they perceive this as a good thing: we have found defects or failures in the system.

  45. Psychology of Testing • Why do we sometimes not get on with the rest of the team? • However, many developers take great pride in their work, and the software they write becomes personal to them; from their point of view, any flaws uncovered in the system could be taken as a personal attack.

  46. Psychology of Testing • Why do we sometimes not get on with the rest of the team? • So.....

  47. Psychology of Testing • Why do we sometimes not get on with the rest of the team? • Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it. For example, write objective and factual incident reports and review findings. • Don't gloat - you are not perfect either! • Don't blame - any mistakes are probably by the group rather than an individual. • Be constructively critical and discuss the defect and how you are going to log it.

  48. Psychology of Testing • Why do we sometimes not get on with the rest of the team? • Explain that by knowing about this now we can work round it or fix it so the delivered system is better for the customer. • Say what you liked and what worked, as well as what didn't work. • Show what the risk is honestly - not everything is high priority. • Don't just see the pessimistic side - give praise as well as criticism. • Show what risks have been uncovered and the benefits of the review or test.

  49. Psychology of Testing • Why do we sometimes not get on with the rest of the team? • Start with collaboration rather than battles. Remind everyone of the common goal of better quality systems. • Be polite and helpful, and collaborate with your colleagues. • Try to understand how the other person feels and why they react as they do. • Confirm that the other person has understood what you have said and vice versa. • Explain how the test or review helps the author – what’s in it for them. • Offer your work to be reviewed, too.

  50. Psychology of Testing • Why do we sometimes not get on with the rest of the team? • Maybe we could use the Six Thinking Hats?
