
Software Quality Assurance








  1. Software Quality Assurance Training Course – DAY 3 Neven Dinev By courtesy of

  2. Review From Day2 • Test Cases • Checklist • Triangle example • Testing types

  3. Unit Testing Definition and Objectives Definition: Unit testing is the testing of an individual unit of software, typically by its developer or by a peer programmer. Objectives: • Enable the identification of unit-level defects by causing corresponding failures to occur. • Identify defects that are not easily identified during other kinds of testing. • Enable the developer to confidently iterate and refactor the software unit, knowing that any defect will be quickly discovered during regression testing. • Enable the developer to determine if the unit: • Is complete. • Fulfills its responsibilities and does not violate its assertions (the oracles for unit testing). • Works as designed. • Is ready to be submitted for integration as part of a code build.
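
To make the definition above concrete, here is a minimal Python unit-test sketch for the Day 2 triangle example; classify_triangle and its expected results are hypothetical stand-ins chosen for illustration, not code from the course.

```python
# Minimal unit-test sketch for the Day 2 triangle example.
# classify_triangle is a hypothetical unit under test; adapt names to your own code.
import unittest

def classify_triangle(a, b, c):
    """Return 'equilateral', 'isosceles', 'scalene', or 'invalid'."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TestClassifyTriangle(unittest.TestCase):
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_isosceles(self):
        self.assertEqual(classify_triangle(3, 3, 5), "isosceles")

    def test_scalene(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

    def test_invalid_violates_triangle_inequality(self):
        self.assertEqual(classify_triangle(1, 2, 10), "invalid")

if __name__ == "__main__":
    unittest.main()
```

Each test exercises one equivalence class of inputs, so a failing test points directly at the defective unit.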

  4. Unit Testing Completion Criteria • Unit testing is typically complete when the following postconditions hold: • A complete test suite of test cases exists for every: • Public software unit. • Domain class and interface. • Object-oriented classes. Unit testing is complete if a minimal test suite of test cases that meets the following coverage criteria has been successfully run for each class: • Each test should evaluate the actual versus expected: • Value (if any) returned by the object under test (OUT). • Outbound exceptions (if any) that can be raised by the OUT. • State of the OUT. • Messages (if any) to be sent by the OUT. • Inbound exceptions (if any) handled by the OUT.

  5. Unit Testing Completion Criteria • Blackbox test coverage criteria: • Every responsibility has at least one test case. • Every individual assertion (i.e., precondition, postcondition, and invariant) has at least two test cases (one for true and one for false). • Every public operation (method) of the class under test that is not a pure getter or setter has at least one test case. This includes tests for: • Newly defined operations (new test cases are developed). • Unmodified inherited operations (existing test cases are rerun as regression tests). • Overridden inherited operations (existing test cases are updated or new ones developed). • Every constructor has at least one test case. • State-based testing. At least one test case for every public operation in every state (i.e., state of the object under test, state of every message parameter, state of every collaborator, and state of every exception handled) ensures coverage of every transition from every state. • Whitebox test coverage criteria: • Statement coverage (absolute minimum, used for simple classes only). • Branch coverage (default for average classes). • Condition/branch coverage (for complex, defect-prone classes). • Every define/use path for every variable and attribute.
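
As an illustration of the "two test cases per assertion" criterion, the sketch below (assuming a hypothetical Stack class) exercises the precondition of pop once where it holds and once where it is violated.

```python
# Sketch of the "two test cases per assertion" rule: the precondition
# "stack must not be empty" of a hypothetical Stack.pop is exercised
# once where it holds and once where it is violated.
import unittest

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:                      # precondition check
            raise IndexError("pop from empty stack")
        return self._items.pop()

class TestStackPopPrecondition(unittest.TestCase):
    def test_pop_when_precondition_holds(self):
        s = Stack()
        s.push(42)
        self.assertEqual(s.pop(), 42)            # precondition true

    def test_pop_when_precondition_violated(self):
        s = Stack()
        with self.assertRaises(IndexError):      # precondition false
            s.pop()

if __name__ == "__main__":
    unittest.main()
```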

  6. Unit Testing Completion Criteria • Procedural functions. Unit testing is complete if: • A test suite containing test cases exists for: • Every state of every parameter. • Every basis path through the function. • HTML webpages. Unit testing is complete if: • Tag testing (e.g., using WebLint) is performed to determine if the HTML is syntactically correct. • A test suite containing test cases exists for: • Every link (to determine if the link is broken). • Every parameter state of every input parameter. • Every client-side code fragment (e.g., buttons that only affect the client and do not communicate with the server). • These test suites execute properly. • No failures are reported.
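
For the "every link" criterion, a broken-link check can be scripted with the Python standard library; the URLs below are placeholders, since a real test would extract them from the page's HTML.

```python
# Sketch of a broken-link check for a list of URLs taken from a page.
# The URLs here are placeholders; a real test would parse them out of the HTML.
import unittest
import urllib.error
import urllib.request

LINKS = ["https://example.com/", "https://example.com/help.html"]  # placeholder links

def link_is_reachable(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False

class TestPageLinks(unittest.TestCase):
    def test_every_link_is_reachable(self):
        for url in LINKS:
            with self.subTest(url=url):
                self.assertTrue(link_is_reachable(url), f"broken link: {url}")

if __name__ == "__main__":
    unittest.main()
```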

  7. Integration Testing Definition, Objectives & Guidelines Definition • Integration testing is the testing of a partially integrated application to identify defects involving the interaction of collaborating components. Objectives • Determine if components will work properly together. • Identify defects that are not easily identified during unit testing. Guidelines • The iterative and incremental development cycle implies that integration testing is regularly performed in an iterative and incremental manner. • Integration testing must be automated if adequate regression testing is to occur.
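
A small sketch of what an integration test looks like in practice: a hypothetical OrderService is exercised together with a real (not mocked) repository to check that the collaborating components work properly together.

```python
# Integration-test sketch: OrderService and InMemoryRepository are
# hypothetical stand-ins for two real, collaborating components.
import unittest

class InMemoryRepository:
    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

    def load(self, order_id):
        return self._orders[order_id]

class OrderService:
    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order_id, items):
        order = {"items": items, "status": "placed"}
        self._repository.save(order_id, order)   # interaction under test
        return order

class TestOrderServiceIntegration(unittest.TestCase):
    def test_placed_order_is_persisted(self):
        repo = InMemoryRepository()
        service = OrderService(repo)
        service.place_order("A-1", ["book"])
        self.assertEqual(repo.load("A-1")["status"], "placed")

if __name__ == "__main__":
    unittest.main()
```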

  8. Define Incremental and Iterative • incremental • a property of a development process whereby units of work are repeated to produce additional new work products or capabilities of work products. Development cycles are typically incremental because applications are too large and complex to be built all at once in a big bang fashion. • iterative • a property of a development process whereby work units are repeated on existing work products to improve them (e.g., to fix defects and adapt them to changes in requirements).

  9. Regression testing A testing technique consisting of the repetition of a test after the work product under test has been iterated. Regression testing is used to identify any defects that were inadvertently introduced (i.e., to determine if the work product has regressed) since the previous test. Note: the iterative and incremental nature of the object-oriented development cycle greatly increases the frequency of regression testing and therefore increases the need to automate regression testing.
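
Automating the regression run can be as simple as rediscovering and re-executing the whole unit-test suite after every change; the sketch below assumes the tests live in a tests/ directory and follow unittest's default test*.py naming.

```python
# Sketch of an automated regression run: the entire unit-test suite is
# rediscovered and re-executed after every change. The tests/ path is an assumption.
import sys
import unittest

def run_regression_suite(test_dir="tests"):
    suite = unittest.defaultTestLoader.discover(test_dir)   # collects all test*.py files
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    return result.wasSuccessful()

if __name__ == "__main__":
    sys.exit(0 if run_regression_suite() else 1)   # non-zero exit fails a CI build
```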

  10. Alpha Testing Definition and Objectives Definition • Alpha testing is the launch testing consisting of the development organization’s initial internal dry runs of the application’s acceptance tests in the production environment. Objectives • Cause failures that only tend to occur in the production environment. • Report these failures to the development teams so that the associated defects can be fixed. • Help determine the extent to which the application is ready for: • Beta testing. • Acceptance testing. • Launch. • Provide input to the defect trend analysis effort.

  11. Alpha Testing Completion Criteria and Guidelines Completion Criteria • Alpha testing is typically complete when the following postconditions hold: • An initial version of the acceptance test suites exists. • The customer representative has approved these acceptance test suites. • The acceptance tests execute on the production environment. • Acceptance testing does not discover any: • Severity one defects. • Severity two defects that do not have an adequate workaround. Guidelines • To the extent practical, reuse the tests from system testing when performing alpha testing rather than producing new tests.

  12. Beta Testing Definition and Objectives Definition • Beta testing is the launch testing of the application in the production environment by a few select users prior to acceptance testing and the release of the application to its entire user community. Objectives • Cause failures that only tend to occur during actual usage by the user community rather than during formal testing. • Report these failures to the development teams so that the associated defects can be fixed. • Obtain additional user community feedback beyond that received during usability testing. • Help determine the extent to which the system is ready for: • Acceptance testing. • Launch. • Provide input to the defect trend analysis effort.

  13. Beta Testing Completion and Guidelines Completion Criteria Beta testing is typically complete when: • The time period scheduled for beta testing ends. • The users have reported any failures observed to the development organization. • These failures have been passed on to the development teams. Guidelines • Limit the user test group to users who are willing to use a lower quality version of the application in exchange for obtaining it early and having input into its iteration. • Beta testing is critical if formal usability testing was not performed during system testing. • Beta testing often uses actual live data rather than data created for testing purposes.

  14. Acceptance Testing Definition and Objectives Definition • Acceptance testing (a.k.a., qualification testing) is the final launch testing of an application in its production environments to determine if it is acceptable to its customer organization. Objectives • Determine whether the application satisfies its acceptance criteria. • Enable the customer organization to determine whether to accept the application. • Determine if the application is ready for deployment to the full user community. • Report any failures to the development teams so that the associated defects can be fixed.

  15. Acceptance Test Completion Criteria Completion Criteria • Acceptance testing is typically complete when: • Test scripts of test cases exist that test the system against its acceptance criteria. • Acceptance test suites execute successfully. • Acceptance testing does not discover any severity one defects. • Acceptance testing does not discover any severity two defects that do not have adequate workarounds.

  16. Acceptance Testing Guidelines • To the extent practical, reuse the tests from system testing when performing acceptance testing rather than producing new tests. • There may be acceptance criteria unrelated to acceptance testing (e.g., completion and acceptability of documentation). • Acceptance testing is either performed in front of the customer or by the customer. • Acceptance testing is often performed using actual data. • Acceptance testing may be performed incrementally. • The decisions resulting from acceptance testing include: • System is acceptable as is. • System is acceptable for use, but some defects must be fixed prior to the next release. • System is acceptable for use, but some defects must be fixed in the next release. • System is not acceptable for use until some defects have been fixed.

  17. Smoke, Sanity and Monkey Tests • Smoke tests get their name from the electronics industry. The circuits are laid out on a breadboard and power is applied. If anything starts smoking, there is a problem. In the software industry, smoke testing is a shallow and wide approach to the application: you test all areas of the application without getting too deep. This is also known as a Build Verification Test (BVT). • In comparison, sanity testing is usually narrow and deep; that is, it looks at only a few areas but covers all aspects of that part of the application. A smoke test is scripted, either using a written set of tests or an automated test, whereas a sanity test is usually unscripted. • A monkey test is also unscripted, but this sort of test is like a room full of monkeys with a typewriter (or computer) placed in front of each of them. The theory is that, given enough time, you could get the works of Shakespeare (or some other document) out of them. This is based on the idea that random activity can create order or cover all options.
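
A monkey test can be sketched in a few lines: random, unscripted input is fed to a hypothetical parse_command function, and the only oracle is that the unit must not crash. The fixed seed keeps a failing run reproducible.

```python
# Monkey-test sketch: random printable strings are thrown at a hypothetical
# parse_command function; any uncaught exception counts as a failure.
import random
import string

def parse_command(text):
    """Hypothetical unit under test: splits a command line into tokens."""
    return text.strip().split()

def monkey_test(iterations=10_000, max_length=200, seed=1234):
    rng = random.Random(seed)                      # fixed seed => reproducible run
    alphabet = string.printable
    for _ in range(iterations):
        junk = "".join(rng.choice(alphabet) for _ in range(rng.randint(0, max_length)))
        try:
            parse_command(junk)                    # any uncaught exception is a failure
        except Exception as exc:
            print(f"FAIL on input {junk!r}: {exc}")
            return False
    return True

if __name__ == "__main__":
    print("monkey test passed" if monkey_test() else "monkey test failed")
```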

  18. Exercise • Define Smoke Tests for Notepad • Prepare checklist only

  19. Performance Testing Definition and Goals Definition • Performance testing is the system testing of an integrated, blackbox, [partial] application against its performance requirements under normal operating circumstances. Goals • Identify inefficiencies and bottlenecks with regard to application performance. • Enable the underlying defects to be identified, analyzed, fixed, and prevented in the future.

  20. Performance Testing Objectives • Partially validate the system (i.e., to determine if it fulfills its performance requirements). • Cause failures relating to performance requirements: • Capacity (the maximum number of objects the application/databases can handle). • Latency (the average and maximum time to complete a system operation). • Response time (the average and the maximum application response times). • Throughput (the maximum transaction rates that the application can handle). • Report these failures to the development teams so that the associated defects can be fixed.
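
A minimal response-time check might look like the sketch below; the search operation and the 200 ms average / 500 ms maximum requirements are assumptions chosen for illustration.

```python
# Response-time sketch: a hypothetical search() operation is timed and
# compared against assumed requirements (200 ms average, 500 ms maximum).
import statistics
import time

def search(query):
    """Hypothetical operation under test; replace with a real system call."""
    time.sleep(0.01)  # stand-in for real work
    return [query]

def measure_response_times(runs=50):
    samples = []
    for i in range(runs):
        start = time.perf_counter()
        search(f"query-{i}")
        samples.append(time.perf_counter() - start)
    return samples

if __name__ == "__main__":
    times = measure_response_times()
    avg_ms = statistics.mean(times) * 1000
    max_ms = max(times) * 1000
    print(f"average {avg_ms:.1f} ms, max {max_ms:.1f} ms")
    assert avg_ms <= 200, "average response time requirement violated"
    assert max_ms <= 500, "maximum response time requirement violated"
```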

  21. Performance Testing Objectives (cont) • Provide information that will assist in performance tuning under various workload conditions, hardware configurations, and database sizes (e.g., by helping identify performance bottlenecks). • Reduce hardware costs by providing information allowing systems engineers to: • Identify the minimum hardware necessary to meet the performance requirements. • Tune the application for maximum performance by identifying the optimal system configuration (e.g., by repeating the test using different configurations). • Help determine the extent to which the system is ready for launch. • Provide input to the defect trend analysis effort.

  22. Performance Testing Guidelines • A system can fulfill its operational requirements and still be a failure if it does not have adequate performance. • The iterative and incremental development cycle implies that performance testing is regularly performed in an iterative and incremental manner. • Performance testing must be automated if adequate regression testing is to occur. • Performance testing can elicit failures prior to launch. • Performance testing can begin prior to the distribution of software components onto system components in order to identify gross performance defects. • To the extent practical, reuse functional test cases as performance test cases.

  23. Load Testing Definition and Goals Definition • Load testing is the system testing of an integrated, blackbox application that attempts to cause failures involving how its performance varies under normal conditions of utilization (e.g., as the load increases and becomes heavy). Goals • Cause the application to fail to scale gracefully under normal conditions so that the underlying defects can be identified, analyzed, fixed, and prevented in the future.

  24. Load Testing Objectives The typical objectives of load testing are to: • Partially validate the application (i.e., to determine if it fulfills its scalability requirements): • Scalability requirements (e.g., the number of users increases). • Distribution and load-balancing mechanisms. • Cause failures concerning the load requirements that help identify defects that are not efficiently found during unit and integration testing. • Report these failures to the development teams so that the associated defects can be fixed. • Determine if the application will support typical production load conditions. • Identify the point at which the load becomes so great that the application fails to meet performance requirements. • Locate performance bottlenecks including those in I/O, CPU, network, and database. • Help determine the extent to which the application is ready for launch. • Provide input to the defect trend analysis effort.
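
A load test can be sketched by driving a hypothetical handle_request entry point with a growing number of simulated concurrent users and reporting throughput at each level; real load tests would use a dedicated tool and real workload profiles.

```python
# Load-test sketch: simulated concurrent users call a hypothetical
# handle_request function and throughput is reported per load level.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Hypothetical system entry point; replace with a real HTTP call."""
    time.sleep(0.005)  # stand-in for server-side work
    return f"ok-{user_id}"

def run_load_level(concurrent_users, requests_per_user=20):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(handle_request, u)
                   for u in range(concurrent_users)
                   for _ in range(requests_per_user)]
        for f in futures:
            f.result()                             # propagate any failure
    elapsed = time.perf_counter() - start
    return (concurrent_users * requests_per_user) / elapsed  # requests per second

if __name__ == "__main__":
    for users in (1, 10, 50, 100):                 # increasing, "normal" loads
        print(f"{users:4d} users: {run_load_level(users):8.1f} req/s")
```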

  25. Load Testing Guidelines • An application can fulfill its operational requirements and still be a failure if it does not scale. • The iterative and incremental development cycle implies that load testing is regularly performed in an iterative and incremental manner. • Load testing must be automated if adequate regression testing is to occur. • Load testing can elicit failures prior to launch. • Perform workload analysis to determine the typical production workloads. • Develop test scripts simulating real-life workloads. • Perform load testing for several minutes to several hours. • To the extent practical, reuse functional test cases as load test cases.

  26. Stress Testing Definition and Goals Definition • Stress testing is the system testing of an integrated, blackbox application that attempts to cause failures involving how its performance varies under extreme but valid conditions (e.g., extreme utilization, insufficient memory, inadequate hardware, and dependency on over-utilized shared resources). Goals • Cause the application to fail to scale gracefully under extreme conditions so that the underlying defects can be identified, analyzed, fixed, and prevented in the future.

  27. Stress Testing Objectives • Partially validate the application (i.e., to determine if it fulfills its scalability requirements). • Determine how an application degrades and eventually fails as conditions become extreme. For example, stress testing could involve an extreme number of simultaneous users, extreme numbers of transactions, queries that return the entire contents of a database, queries with an extreme number of restrictions, or entry of the maximum amount of data in a field. • Report these failures to the development teams so that the associated defects can be fixed. • Determine if the application will support "worst case" production load conditions. • Provide data that will assist systems engineers in making intelligent decisions regarding future scaling needs. • Help determine the extent to which the application is ready for launch. • Provide input to the defect trend analysis effort.
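
One of the stress cases listed above, entry of the maximum amount of data in a field, can be sketched as follows; save_comment and its 10,000-character limit are assumptions chosen for illustration.

```python
# Stress-case sketch: a field filled with the maximum (and just beyond
# maximum) amount of data. save_comment is a hypothetical operation with
# an assumed 10,000-character limit.
import unittest

MAX_COMMENT_LENGTH = 10_000  # assumed requirement

def save_comment(text):
    """Hypothetical unit under test: rejects over-long comments."""
    if len(text) > MAX_COMMENT_LENGTH:
        raise ValueError("comment too long")
    return {"length": len(text)}

class TestCommentFieldUnderStress(unittest.TestCase):
    def test_maximum_length_is_accepted(self):
        result = save_comment("x" * MAX_COMMENT_LENGTH)
        self.assertEqual(result["length"], MAX_COMMENT_LENGTH)

    def test_beyond_maximum_degrades_gracefully(self):
        with self.assertRaises(ValueError):        # fails cleanly, does not crash
            save_comment("x" * (MAX_COMMENT_LENGTH + 1))

if __name__ == "__main__":
    unittest.main()
```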

  28. Stress Testing Guidelines • A system can fulfill its operational requirements and still be a failure if it does not scale. • The iterative and incremental development cycle implies that stress testing is regularly performed in an iterative and incremental manner. • Stress testing must be automated if adequate regression testing is to occur. • Stress testing can elicit failures prior to launch. • Develop test scripts simulating exceptional workloads. • Perform stress testing for several minutes to several hours. • To the extent practical, reuse functional test cases as stress test cases.

  29. Exercise • We have a webmail application which should support 1000 users. Each user has 100 MB of space to use for mail. Define some performance, load, and stress tests.

  30. Usability Definition • usability (a.k.a., user-friendliness) • (1) a user-oriented quality requirement specifying the degree to which an application or component (e.g., user interface, help facilities) and its documentation shall enable a specified set of users to easily and efficiently: • Learn and remember how to use it. • Perform a specified set of their tasks while making a minimum number of errors (e.g., create and enter inputs, obtain and understand outputs). • (2) a quality factor measuring the degree to which an application or component actually enables users to easily and effectively use it.

  31. Usability Testing Objective • The objective of usability testing is to determine how well the user will be able to use and understand the application. This includes the system functions, publications, help text, and procedures that ensure the user interacts comfortably with the system. Usability testing should be performed as early as possible during development, and usability should be designed into the system. Late usability testing may be of little value because the design is already locked in, and correcting serious usability problems often requires a major redesign of the system, which may be economically infeasible.

  32. Usability Testing Items to Look At • Overly complex functions or instructions • Difficult installation procedures • Poor error messages, e.g., “syntax error” • Difficult syntax to understand and use • Nonstandard GUI interfaces • User forced to remember too much information • Difficult login procedures • Help text not context sensitive or not detailed enough • Poor linkage to other systems • Unclear defaults • Interface too simple or too complex • Inconsistency of syntax, format, and definitions • User not provided with clear acknowledgment of all inputs

  33. Usability Lab

  34. GUI Testing Checklist • Checklist for GUI – what is this? • Discussion and presentation of the GUI checklist for MS Windows (separate document)

  35. Exercise • Open MS Word and go through the Tools > Options window. Use the GUI checklist to analyze the Options window.

  36. Security Testing Definition and Goals Definition • Security testing (a.k.a., penetration testing) is the testing of an application against its security requirements and the implementation of its security mechanisms. Goals • Cause the application to fail to meet security requirements or fail to properly implement security mechanisms so that the underlying defects can be identified, analyzed, fixed, and prevented in the future.

  37. Security Testing Objectives. Requirements • Requirements. Partially validate the application (i.e., to determine if it fulfills its security requirements): • Identification • Authentication • Authorization • Content Protection • Integrity • Intrusion Detection • Nonrepudiation • Privacy • System Maintenance

  38. Security Testing Objectives. Mechanisms • Mechanisms. Determine if the system causes any failures concerning its implementation of its security mechanisms: • Encryption and Decryption • Firewalls • Personnel Security: • Passwords • Digital Signatures • Personal Background Checks • Physical Security: • Locked doors for identification, authentication, and authorization • Badges for identification, authentication, and authorization • Cameras for identification, authentication, and authorization

  39. Security Testing Objectives. Cause Failures • Cause Failures. Cause failures concerning the security requirements that help identify defects that are not efficiently found during other types of testing: • The application fails to identify and authenticate a user. • The application allows a user to perform an unauthorized function. • The application fails to protect its content against unauthorized usage. • The application allows the integrity of data or messages to be violated. • The application allows undetected intrusion. • The application fails to ensure privacy by using an inadequate encryption technique.
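
The "unauthorized function" failure above can be checked with a test like the sketch below, where delete_account is a hypothetical operation that only an admin role may invoke.

```python
# Authorization-test sketch: a hypothetical delete_account operation must
# refuse to run for a user who lacks the required role.
import unittest

def delete_account(current_user_role, target_account):
    """Hypothetical operation under test: only admins may delete accounts."""
    if current_user_role != "admin":
        raise PermissionError("not authorized")
    return f"deleted {target_account}"

class TestAuthorization(unittest.TestCase):
    def test_admin_may_delete(self):
        self.assertEqual(delete_account("admin", "bob"), "deleted bob")

    def test_regular_user_may_not_delete(self):
        with self.assertRaises(PermissionError):   # unauthorized function is blocked
            delete_account("user", "bob")

if __name__ == "__main__":
    unittest.main()
```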

  40. Security Testing Objectives • Report Failures. Report these failures to the development teams so that the associated defects can be fixed. • Determine Launch Readiness. Help determine the extent to which the system is ready for launch. • Project Metrics. Help provide project status metrics. • Trend Analysis. Provide input to the defect trend analysis effort.

  41. Security Testing Guidelines • Security tests may test software components or completed systems. • Security tests may be either automated using a security tool or test harness or else performed manually (e.g., tests of physical security). • The scope of security testing includes: • Internet, intranet, or extranet clients • Networks • Data center with servers • Clients • Security testing should detect the following security failures: • The system fails to identify and authenticate a user. • The system allows a user to perform an unauthorized function. • The system fails to protect its content against unauthorized usage. • The system allows the integrity of data or messages to be violated. • The system allows undetected intrusion. • The system fails to ensure privacy by using an inadequate encryption technique. • In a sense, security testing is never complete because security testing should be repeated on a regular basis. • Batch scripts can be applied to perform registry fixes.

  42. Configuration Testing Definition and Goals Definition Configuration testing is the system testing of different variations of an integrated, blackbox application against its configurability requirements. Goals Cause the application to fail to meet its configurability requirements so that the underlying defects can be identified, analyzed, fixed, and prevented in the future.

  43. Configuration Testing Objectives • Partially validate the application (i.e., to determine if it fulfills its configurability requirements). • Cause failures concerning the configurability requirements that help identify defects that are not efficiently found during unit and integration testing: • Functional Variants. • Internationalization (e.g., multiple languages, currencies, taxes and tariffs, time zones, etc.). • Personalization. • Report these failures to the development teams so that the associated defects can be fixed. • Determine the effect of adding or modifying hardware resources such as: • Memory • Disk and tape resources • Processors • Load balancers • Determine an optimal system configuration.
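
A configuration test for the internationalization variant can repeat the same scenario once per supported locale; the locale list, configure_application, and checkout_total below are hypothetical.

```python
# Configuration-test sketch: one checkout scenario is repeated for every
# supported locale. All names and the locale matrix are assumptions.
import unittest

SUPPORTED_LOCALES = ["en_US", "de_DE", "fr_FR", "bg_BG"]   # assumed configuration matrix

def configure_application(locale):
    """Hypothetical configuration step; returns the active settings."""
    return {"locale": locale}

def checkout_total(settings, amount):
    """Hypothetical operation under test; must work under every configuration."""
    return f"{amount:.2f} ({settings['locale']})"

class TestCheckoutPerLocale(unittest.TestCase):
    def test_checkout_in_every_supported_locale(self):
        for locale in SUPPORTED_LOCALES:
            with self.subTest(locale=locale):          # one sub-result per configuration
                settings = configure_application(locale)
                self.assertIn(locale, checkout_total(settings, 1234.5))

if __name__ == "__main__":
    unittest.main()
```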

  44. Configuration Testing Guidelines • The iterative and incremental development cycle implies that configuration testing is regularly performed in an iterative and incremental manner. • Configuration testing must be automated if adequate regression testing is to occur. • To the extent practical, reuse functional test cases as configuration test cases.

  45. Exercise • You are working for a multinational company which has offices in Bulgaria, Germany, France, and the USA. The company was founded 5 years ago and probably has some old but still working desktop machines. • You have a new client-server application. The client part is a desktop application that will be widely used in your organization. There is only one server, and it will be placed in the USA. • Define what configuration tests you should perform. Checklist only.

  46. Installation Testing • Testing of full, partial, or upgrade install/uninstall processes • Configuration vs Installation testing • Configuration and Installation testing • Review Installation checklist
