
TESTABILITY



  1. TESTABILITY • Operability: testing runs quickly and efficiently. • Observability: what is tested is clear, explicit, and traceable. • Controllability: the test and its results can be controlled. • Decomposability: the test can be broken into its components and evaluated. • Simplicity: the test itself and its results are simple and understandable. • Stability: the test is robust, valid, and consistent. • Understandability: the test is understandable.

  2. TESTABILITY • Kaner, Falk, and Nguyen [KAN93] suggest the following attributes of a “good” test: • A good test has a high probability of finding an error. • A good test is not redundant. • A good test should be “best of breed” [KAN93]. • A good test should be neither too simple nor too complex.

  3. WHITE-BOX TESTING • Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. • We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis. • Typographical errors are random.

  4. BASIS PATH TESTING • Flow Graph Notation • Cyclomatic Complexity • Deriving Test Cases • Graph Matrices

  5. Flow Graph Notation • The structured constructs in flow graph form: sequence, if, while, until, and case. • Each circle (flow graph node) represents one or more nonbranching PDL or source code statements. Figure 1.1. Flow graph notation
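
Slide 4 lists cyclomatic complexity, and the flow graph above is its input. As a minimal sketch (not from the slides), assuming the flow graph arrives as an edge list, V(G) = E - N + 2 falls out directly; V(G) is the number of linearly independent paths a basis path test set must cover:

    # Minimal sketch: cyclomatic complexity V(G) = E - N + 2 for a
    # connected flow graph given as an edge list. The function name and
    # example graph are illustrative, not from the slides.

    def cyclomatic_complexity(edges):
        """V(G) = E - N + 2 for a connected flow graph."""
        nodes = {n for edge in edges for n in edge}
        return len(edges) - len(nodes) + 2

    # Flow graph of: if (a) { x() } else { y() }; z()
    edges = [(1, 2), (1, 3), (2, 4), (3, 4)]
    print(cyclomatic_complexity(edges))  # 2 -> basis set of 2 paths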

  6. CONTROL STRUCTURE TESTING • Condition Testing • Data Flow Testing • Loop Testing
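
Loop testing, for a simple loop, prescribes exercising it at specific iteration counts: 0, 1, 2, a typical count m, and n-1, n, and n+1 passes, where n is the maximum allowed. A minimal sketch, with the loop under test (sum_first) invented for illustration:

    # Minimal sketch of simple-loop testing. sum_first() is a stand-in
    # for whatever loop is under test.

    def sum_first(values, k):
        """Sum the first k items -- the loop under test."""
        total = 0
        for i in range(min(k, len(values))):
            total += values[i]
        return total

    n = 10                 # assumed maximum number of loop passes
    m = 5                  # a typical iteration count
    data = list(range(n))  # enough data for n passes
    for passes in [0, 1, 2, m, n - 1, n, n + 1]:
        print(passes, sum_first(data, passes))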

  7. BLACK-BOX TESTING • Graph-Based Testing Methods • Equivalence Partitioning • Boundary Value Analysis • Comparison Testing
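
Equivalence partitioning and boundary value analysis both derive test inputs from the input specification. A minimal sketch for a field specified as an integer range; the 1..100 spec is an assumption for illustration:

    # Minimal sketch: boundary value analysis picks values just below,
    # at, and just above each boundary; equivalence partitioning picks
    # one representative per class (below, inside, above the range).

    def boundary_values(lo, hi):
        return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

    def partition_representatives(lo, hi):
        return [lo - 10, (lo + hi) // 2, hi + 10]

    print(boundary_values(1, 100))            # [0, 1, 2, 99, 100, 101]
    print(partition_representatives(1, 100))  # [-9, 50, 110]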

  8. TESTING FOR SPECIALIZED ENVIRONMENTS AND APPLICATIONS • Testing GUIs • Testing of Client/Server Architectures • Testing Documentation and Help Facilities • Testing for Real-Time Systems

  9. TESTING GUIs (Graphical User Interface) • For windows: • Will the window open properly based on related typed or menu-based commands? • Can the window be resized, moved, and scrolled? • Is all data content contained within the window properly addressable with a mouse, function keys, directional arrows, and the keyboard? • Does the window properly regenerate when it is overwritten and then recalled? • Are all functions that relate to the window available when needed? • Are all functions that relate to the window operational?

  10. TESTING GUIs (Graphical User Interface) • Are all relevant pull-down menus, tool bars, scroll bars, dialog boxes, buttons, icons, and other controls available and properly displayed for the window? • When multiple windows are displayed, is the name of the window properly represented? • Is the active window properly highlighted? • If multitasking is used, are all windows updated at appropriate times? • Do multiple or incorrect mouse picks within the window cause unexpected side effects? • Does the window properly close?

  11. TESTING GUIs (Graphical User Interface) • For pull-down menus and mouse operations: • Is the appropriate menu bar displayed in the appropriate context? • Does the application menu bar display system-related features (e.g., a clock display)? • Do pull-down operations work properly? • Do breakaway menus, palettes, and tool bars work properly? • Are all menu functions and pull-down subfunctions properly listed? • Are all menu functions properly addressable by the mouse? • Are text typeface, size, and format correct? • Are menu functions highlighted (or grayed out) based on the context of current operations within a window?

  12. TESTING GUIs (Graphical User Interface) • Does each menu function perform as advertised? • Are the names of menu functions self-explanatory? • Is help available for each menu item, and is it context sensitive? • Are mouse operations properly recognized throughout the interactive context? • If multiple clicks are required, are they properly recognized in context? • If the mouse has multiple buttons, are they properly recognized in context? • Do the cursor, processing indicator (e.g., an hourglass or clock), and pointer properly change as different operations are invoked?

  13. TESTING GUIs (Graphical User Interface) • Data entry: • Is alphanumeric data entry properly echoed and input to the system? • Do graphical modes of data entry (e.g., a slide bar) work properly? • Is invalid data properly recognized? • Are data input messages intelligible?
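
As a rough illustration of the data-entry checks, the sketch below tests a hypothetical validate_entry routine; the slides name no API, so the function, its rule, and its messages are all invented:

    # Minimal sketch: is invalid data recognized, and is the input
    # message intelligible? validate_entry() is a made-up example.

    def validate_entry(text):
        """Accept alphanumeric input only; return (ok, message)."""
        if text.isalnum():
            return True, "accepted"
        return False, "invalid characters in input"

    assert validate_entry("zone42") == (True, "accepted")
    ok, msg = validate_entry("zone 42!")
    assert not ok and "invalid" in msg
    print("data entry checks passed")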

  14. A STRATEGIC APPROACH TO SOFTWARE TESTING • Verification and Validation • Organizing for Software Testing • A Software Testing Strategy • Criteria for Completion of Testing

  15. Verification and Validation • Software quality is achieved through software engineering methods, formal technical reviews, measurement, standards and procedures, SQA, and testing. Figure 1.2. Achieving software quality

  16. Organizing for Software Testing • Two points of view: • Constructive • Destructive

  17. A Software Testing Strategy • System engineering maps to system testing, requirements to validation testing, design to integration testing, and code to unit testing. Figure 1.3. Testing strategy

  18. A Software Testing Strategy • Testing “direction” runs from code (unit tests) through design (integration tests) to requirements (high-order tests). Figure 1.4. Software testing steps

  19. UNIT TESTING • Unit Test Considerations • Unit Test Procedures

  20. Unit Test Considerations • Unit test cases exercise the module's interface, local data structures, boundary conditions, independent paths, and error-handling paths. Figure 1.5. Unit test

  21. Unit Test Procedures • A driver exercises the module to be tested, stubs replace its subordinate modules, and test cases covering the interface, local data structures, boundary conditions, independent paths, and error-handling paths produce the results. Figure 1.6. Test environment
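
The arrangement in Figure 1.6 can be sketched in code: a driver applies test cases to the module to be tested, and a stub stands in for its subordinate module. Module names and behavior are invented for illustration:

    # Minimal sketch of a unit-test driver and stub.

    def read_sensor_stub(sensor_id):
        """Stub: replaces the real subordinate sensor module."""
        return {1: 20, 2: 350}[sensor_id]    # canned readings

    def check_alarm(sensor_id, limit, read_sensor):
        """Module under test: alarm if the reading exceeds the limit."""
        return read_sensor(sensor_id) > limit

    def driver():
        """Driver: applies test cases and reports the results."""
        cases = [(1, 100, False), (2, 100, True)]
        for sensor_id, limit, expected in cases:
            got = check_alarm(sensor_id, limit, read_sensor_stub)
            print(sensor_id, "ok" if got == expected else "FAIL")

    driver()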

  22. INTEGRATION TESTING • Top-Down Integration • Bottom-Up Integration • Regression Testing

  23. Top-Down Integration Figure 1.7. Top-down integration (control hierarchy of modules M1 through M8, integrated from M1 downward)

  24. Top-Down Integration Figure 1.8. Stubs (Stub A through Stub D; arrows show the direction of data flow)

  25. Bottom-Up Integration Figure 1.9. Bottom-up integration (drivers D1, D2, and D3 exercise clusters 1, 2, and 3, which are then combined under modules Ma, Mb, and Mc)

  26. Bottom-Up Integration • Typical drivers: Driver A sends parameters from a table (or external file); Driver B invokes the subordinate; Driver C displays parameters; Driver D combines drivers B and C. Figure 1.10. Drivers (arrows show the direction of information flow)
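
In code, top-down integration amounts to testing the control module against stubs, then swapping in each real subordinate and rerunning the tests (a regression step). A minimal sketch; the module names echo Figure 1.7, but the behavior is invented:

    # Minimal sketch of incremental top-down integration.

    def m2_stub():
        return 0              # canned value from the stub

    def m2_real():
        return 2              # real subordinate, integrated later

    def m1(subordinate):
        """Top-level control module, parameterized on its subordinate."""
        return 1 + subordinate()

    assert m1(m2_stub) == 1   # step 1: M1 exercised against the stub
    assert m1(m2_real) == 3   # step 2: stub replaced by real M2, retest
    print("integration steps passed")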

  27. VALIDATION TESTING • Validation Test Criteria • Configuration Review • Alpha and Beta Testing

  28. SYSTEM TESTING • Recovery Testing • Security Testing • Stress Testing • Performance Testing

  29. THE ART OF DEBUGGING • The Debugging Process • Psychological Considerations • Debugging Approaches

  30. The Debugging Process • Execution of test cases produces results; where results differ from expectations, debugging identifies suspected causes, confirms them as identified causes, and applies corrections, followed by regression tests and additional tests. Figure 1.11. Debugging
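
One of the debugging approaches, cause elimination, can be sketched as a binary search over the input that makes a failing test case fail. The buggy routine and the data below are invented for illustration:

    # Minimal sketch of cause elimination by bisecting the input.

    def process(items):
        """Hypothetical routine that fails on a -1 item."""
        return [100 // (i + 1) for i in items]

    def fails(items):
        try:
            process(items)
            return False
        except ZeroDivisionError:
            return True

    def isolate(items):
        """Binary-search the smallest failing slice of the input."""
        while len(items) > 1:
            mid = len(items) // 2
            left, right = items[:mid], items[mid:]
            items = left if fails(left) else right
        return items

    print(isolate([3, 7, 2, -1, 9, 4]))  # [-1] -- the suspected cause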

  31. SOFTWARE QUALITY • McCall's Quality Factors • FURPS • The Transition to a Quantitative View

  32. McCall's Quality Factors • Correctness • Reliability • Efficiency • Integrity • Usability • Maintainability • Flexibility • Testability

  33. McCall's Quality Factors • Product operation: correctness, reliability, usability, integrity, efficiency. • Product revision: maintainability, flexibility, testability. • Product transition: portability, reusability, interoperability. Figure 1.12. McCall's software quality factors

  34. Figure 1.13. Quality factors and metrics
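
The transition to a quantitative view relates each quality factor to measurable metrics through a weighted sum, Fq = c1*m1 + c2*m2 + ... + cn*mn. A minimal sketch; the weights and metric values below are assumptions for illustration:

    # Minimal sketch: score a quality factor as a weighted sum of
    # normalized (0..1) metrics. All numbers are invented.

    def quality_factor(weights, metrics):
        """Fq = sum of c_i * m_i over the contributing metrics."""
        return sum(c * m for c, m in zip(weights, metrics))

    weights = [0.4, 0.4, 0.2]  # assumed regression weights c_i
    metrics = [0.8, 0.6, 0.9]  # e.g. modularity, simplicity, documentation
    print(round(quality_factor(weights, metrics), 2))  # 0.74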

  35. A FRAMEWORK FOR TECHNICAL SOFTWARE METRICS • The Challenge of Technical Metrics • Measurement Principles • The Attributes of Effective Software Metrics

  36. Measurement Principles • Formulation • Collection • Analysis • Interpretation • Feedback

  37. The Attributes of Effective Software Metrics • Simple and computable • Empirically and intuitively persuasive • Consistent and objective • Consistent in its use of units and dimensions • Programming language independent • An effective mechanism for quality feedback

  38. METRICS FOR THE ANALYSIS MODEL • Function-Based Metrics • The Bang Metric • Metrics for Specification Quality

  39. Function-Based Metrics Figure 1.14. Part of the analysis model for SafeHome software (a data flow diagram relating the user, the SafeHome user interaction function, and the monitoring & response subsystem through flows such as password, panic button, activate/deactivate, zone inquiry, zone settings, sensor inquiry, sensor status, messages, test sensor, alarm alert, and system configuration data)

  40. Function-Based Metrics • The data flow diagram is evaluated to determine the key measures required for computation of the function point metric: • number of user inputs • number of user outputs • number of user inquiries • number of files • number of external interfaces

  41. Function-Based Metrics • Counts from the data flow diagram, weighted here with the simple weighting factor:

    Measurement parameter          Count   Weight (simple/average/complex)   Total
    number of user inputs            3     3 / 4 / 6                             9
    number of user outputs           2     4 / 5 / 7                             8
    number of user inquiries         2     3 / 4 / 6                             6
    number of files                  1     7 / 10 / 15                           7
    number of external interfaces    4     5 / 7 / 10                           20
    count total                                                                 50

  Figure 1.15. Computing function points: SafeHome user interaction function
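
The count total above feeds the standard function point computation, FP = count total x (0.65 + 0.01 x sum(Fi)), where the Fi are the fourteen complexity adjustment questions answered on a 0..5 scale. A minimal sketch; the Fi answers are assumptions for illustration:

    # Minimal sketch of the function point computation for Figure 1.15.

    simple_weights = {"inputs": 3, "outputs": 4, "inquiries": 3,
                      "files": 7, "interfaces": 5}
    counts = {"inputs": 3, "outputs": 2, "inquiries": 2,
              "files": 1, "interfaces": 4}

    count_total = sum(counts[k] * simple_weights[k] for k in counts)
    print(count_total)            # 50, as in Figure 1.15

    fi = [4, 2, 0, 4, 3, 5, 4, 3, 3, 2, 2, 0, 1, 0]   # assumed answers
    fp = count_total * (0.65 + 0.01 * sum(fi))
    print(round(fp, 1))           # 49.0 with these assumed answers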

  42. METRICS FOR THE DESIGN MODEL • High-Level Design Metrics • Component-Level Design Metrics • Interface Design Metrics

  43. High-Level Design Metrics • Morphology metrics characterize the program architecture as a tree of nodes (labeled a through r in the figure) connected by arcs, measured by depth and width. Figure 1.16. Morphology metrics
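
For morphology metrics, size = n + a, where n is the number of nodes (modules) and a is the number of arcs (connections); depth and width are read off the tree layout. A minimal sketch over an invented architecture tree:

    # Minimal sketch of morphology metrics on a small module tree.

    tree = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f"]}

    nodes = set(tree) | {c for kids in tree.values() for c in kids}
    arcs = sum(len(kids) for kids in tree.values())
    print("size =", len(nodes) + arcs)    # n + a = 6 + 5 = 11

    def levels(root):
        level, out = [root], []
        while level:
            out.append(level)
            level = [c for n in level for c in tree.get(n, [])]
        return out

    lv = levels("a")
    print("depth =", len(lv), "width =", max(len(l) for l in lv))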
