


Presentation Transcript


  1. ECE 453 – CS 447 – SE 465 Software Testing & Quality Assurance Lecture 28 Instructor: Paulo Alencar

  2. Overview • Software Maintenance • Legacy Software • Maintenance Metrics

  3. Legacy Systems • Older software systems that remain vital to an organization • Software systems that are developed specially for an organization have a long lifetime • Many software systems that are still in use were developed many years ago using technologies that are now obsolete • These systems are still business-critical; that is, they are essential for the normal functioning of the business • They have been given the name legacy systems

  4. Legacy Systems Replacement • There is a significant business risk in simply scrapping a legacy system and replacing it with a system that has been developed using modern technology • Legacy systems rarely have a complete specification. During their lifetime they have undergone major changes which may not have been documented • Business processes are reliant on the legacy system • The system may embed business rules that are not formally documented elsewhere • New software development is risky and may not be successful

  5. Legacy Systems Change • Systems must change in order to remain useful • However, changing legacy systems is often expensive • Different parts were implemented by different teams, so there is no consistent programming style • The system may use an obsolete programming language • The system documentation is often out-of-date • The system structure may have been corrupted by many years of maintenance • Techniques to save space or increase speed at the expense of understandability may have been used • The file structures used may be incompatible

  6. The Legacy Dilemma • It is expensive and risky to replace the legacy system • It is expensive to maintain the legacy system • Businesses must weigh up the costs and risks and may choose to extend the system lifetime using techniques such as re-engineering.

  7. Legacy System Structures • Legacy systems can be considered to be socio-technical systems and not simply software systems (e.g., involves issues such as communication, user satisfaction, resistance, group interaction, knowledge) • System hardware - may be mainframe hardware • Support software - operating systems and utilities • Application software - several different programs • Application data - data used by these programs that is often critical business information • Business processes - the processes that support a business objective and which rely on the legacy software and hardware • Business policies and rules - constraints on business operations

  8. Legacy System Components

  9. System Change • In principle, it should be possible to replace a layer in the system leaving the other layers unchanged • In practice, this is usually not possible • Changing one layer introduces new facilities and higher level layers must then also change to make use of these • Changing the software may slow it down so hardware changes are then required • It is often not possible to maintain hardware interfaces because of the wide gap between mainframes and client-server systems

  10. Legacy Application System

  11. Database-Centred System

  12. Legacy Data • The system may be file-based with incompatible files. The change required may be to move to a database-management system • In legacy systems that use a DBMS the database management system may be obsolete and incompatible with other DBMSs used by the business

  13. Legacy System Design • Most legacy systems were designed before object-oriented development was used • Rather than being organised as a set of interacting objects, these systems have been designed using a function-oriented design strategy • Several methods and CASE tools are available to support function-oriented design and the approach is still used for many business applications

  14. Evolving Systems • It is usually more expensive to add functionality after a system has been developed than to design it into the system • Maintenance staff are often inexperienced and unfamiliar with the application domain • Programs may be poorly structured and hard to understand • Changes may introduce new faults, as the complexity of the system makes impact assessment difficult • The structure may be degraded due to continual change • There may be no documentation available to describe the program

  15. Maintenance Management • Maintenance has a poor image amongst development staff as it is not seen as challenging and creative • Maintenance costs increase as the software is maintained • The amount of software which has to be maintained increases with time • Inadequate configuration management often means that the different representations of a system are out of step

  16. Change Processes • Fault repair process • Iterative development process

  17. System Documentation • Requirements document • System architecture description • Program design documentation • Source code listings • Test plans and validation reports • System maintenance guide

  18. Document Production • Structure documents with overviews leading the reader into more detailed technical descriptions • Produce good quality, readable manuals - they may have to last 20 years • Use tool-generated documentation whenever possible

  19. Maintenance Cost Factors • Module independence • It should be possible to change one module without affecting others • Programming language • High-level language programs are easier to maintain • Programming style • Well-structured programs are easier to maintain • Program validation and testing • Well-validated programs tend to require fewer changes due to corrective maintenance

  20. Maintenance Cost Factors • Documentation • Good documentation makes programs easier to understand • Configuration management • Good CM means that links between programs and their documentation are maintained • Application domain • Maintenance is easier in mature and well-understood application domains • Staff stability • Maintenance costs are reduced if the same staff are involved with them for some time

  21. Maintenance Cost Factors • Program age • The older the program, the more expensive it is to maintain (usually) • External environment • If a program is dependent on its external environment, it may have to be changed to reflect environmental changes • Hardware stability • Programs designed for stable hardware will not require changes driven by hardware evolution

  22. Maintenance Metrics • Measurements of program characteristics which would allow maintainability to be predicted • These are essentially technical: how can the technical factors above be quantified? • Any software component whose measurements are out of line with other components may be excessively expensive to maintain. Perhaps perfective maintenance effort should be devoted to these components

  23. Maintenance Metrics • Control complexity Can be measured by examining the conditional statements in the program • Data complexity Complexity of data structures and component interfaces • Length of identifier names Longer names imply better readability • Program comments Perhaps more comments mean easier maintenance

  24. Maintenance Metrics • Coupling How much use is made of other components or data structures • Degree of user interaction The more user I/O, the more likely the component is to require change • Speed and space requirements Require tricky programming, harder to maintain

  25. Maintenance Metrics: Coupling • d(i) – number of input parameters • d(o) – number of output parameters • c(i) – number of input control parameters • c(o) – number of output control parameters • g(d) – number of global variables used as data • g(c) – number of global variables used as control • w – number of modules called (fan-out) • r – number of modules calling this module (fan-in) • m(c) = k / M, where k = 1 (can be adjusted) and • M = d(i) + (k1 * c(i)) + d(o) + (k2 * c(o)) + g(d) + (k3 * g(c)) + w + r • where k1, k2, k3 are constants that may be adjusted • The higher the value of m(c), the weaker the coupling
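The coupling metric above can be sketched as a small function. This is a minimal illustration, assuming k = k1 = k2 = k3 = 1 as the slide suggests; the example parameter counts are hypothetical.

```python
def module_coupling(di, do, ci, co, gd, gc, w, r,
                    k=1.0, k1=1.0, k2=1.0, k3=1.0):
    """Coupling metric m(c) = k / M, where M sums the module's
    parameter, global-variable, and fan-in/fan-out connections.
    A higher m(c) indicates weaker (i.e., better) coupling."""
    M = di + (k1 * ci) + do + (k2 * co) + gd + (k3 * gc) + w + r
    return k / M

# Hypothetical module: 2 input params, 1 output param, 1 global used
# as data, fan-out 2, fan-in 1 -> M = 2 + 1 + 1 + 2 + 1 = 7, m(c) = 1/7
mc = module_coupling(di=2, do=1, ci=0, co=0, gd=1, gc=0, w=2, r=1)
```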

  26. Maintenance Metrics: Maturity Software Maturity Index • M = number of modules in current release • F(c) = number of changed modules in current release • F(a) = number of added modules in current release • F(d) = number of modules removed from the previous release • SMI = [M – (F(a) + F(c) + F(d))] / M • As SMI approaches 1, the product begins to stabilize
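The Software Maturity Index is a direct ratio and can be sketched as follows; the release numbers in the example are made up for illustration.

```python
def software_maturity_index(M, Fc, Fa, Fd):
    """SMI = (M - (Fa + Fc + Fd)) / M.

    M  = modules in the current release
    Fc = changed modules, Fa = added modules,
    Fd = modules removed from the previous release.
    An SMI approaching 1 suggests the product is stabilizing."""
    return (M - (Fa + Fc + Fd)) / M

# Hypothetical release: 100 modules, 5 changed, 3 added, 2 removed
# SMI = (100 - 10) / 100 = 0.9
smi = software_maturity_index(M=100, Fc=5, Fa=3, Fd=2)
```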

  27. Maintenance Metrics: Software Purity Level This metric estimates the relative change in the failure rate from the beginning of a phase (e.g., maintenance) to the time of the fth failure detection: PL = purity level = (Z(t0) – Z(tf)) / Z(t0) where: • t0 = start of the specified phase • tf = length of time in current phase when fth failure detected • f = total number of failures in a given time interval • Z(t) = estimated failure rate at time t
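Purity level is a relative change in the estimated failure rate; a short sketch, with hypothetical failure-rate values:

```python
def purity_level(z_t0, z_tf):
    """PL = (Z(t0) - Z(tf)) / Z(t0): the relative drop in the
    estimated failure rate from the start of the phase (t0) to
    the time of the fth failure detection (tf)."""
    return (z_t0 - z_tf) / z_t0

# Hypothetical: failure rate falls from 0.5 to 0.1 failures/day
# over the maintenance phase -> PL = 0.8 (80% reduction)
pl = purity_level(z_t0=0.5, z_tf=0.1)
```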

  28. Maintenance Metrics: Cost Basili, Vallet, et al. (NASA Goddard Space Flight Center, 1995) developed a predictive cost model for maintenance. By estimating the size of a release, an effort estimate can be determined: Effort in hours = (0.36 * SLOC) + 1040
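The cost model is a simple linear formula; a sketch using the coefficients quoted above, with a hypothetical release size:

```python
def maintenance_effort_hours(sloc):
    """Predicted maintenance effort for a release, per the linear
    model quoted above: hours = 0.36 * SLOC + 1040."""
    return 0.36 * sloc + 1040

# Hypothetical 10,000-SLOC release:
# 0.36 * 10000 + 1040 = 4640 predicted staff-hours
effort = maintenance_effort_hours(10_000)
```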

  29. Maintenance Metrics: Fault Days This metric evaluates the number of days between the time an error is introduced into a system and when the fault is detected and removed: FD = fault days for total system = SUM (FDi) for i=1 to I where • FDi = fault days for ith fault = fout – fin • fin = date error was introduced into the system • fdet = date error was detected • fout = date fault was removed from the system • I = total number of faults found to date

  30. Maintenance Metrics: Fault Days Other similar metrics can be calculated: FDfind = days to find an error = SUM (FDfindi) for i=1 to I where • FDfindi = days to find ith error = fdeti – fini FDfix = days to fix faults = SUM (FDfixi) for i=1 to I where • FDfixi = days to fix ith error = fouti – fdeti

  31. Maintenance Metrics: Fault Days The following averages can be calculated: • average days to find an error = FDfind / I • average days to fix an error = FDfix / I • average error duration = (FDfind + FDfix) / I
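The fault-days metrics from the last three slides can be computed together from one record per fault. A minimal sketch; each fault is a hypothetical (f_in, f_det, f_out) triple of day numbers:

```python
def fault_day_metrics(faults):
    """Fault-days metrics from a list of (f_in, f_det, f_out) tuples:
    f_in  = day the error was introduced,
    f_det = day the error was detected,
    f_out = day the fault was removed."""
    I = len(faults)
    FDfind = sum(fdet - fin for fin, fdet, fout in faults)   # days to find
    FDfix = sum(fout - fdet for fin, fdet, fout in faults)   # days to fix
    FD = sum(fout - fin for fin, fdet, fout in faults)       # total fault days
    return {
        "FD": FD,
        "avg_days_to_find": FDfind / I,
        "avg_days_to_fix": FDfix / I,
        "avg_error_duration": (FDfind + FDfix) / I,
    }

# Two hypothetical faults: introduced on days 0 and 10,
# detected on days 5 and 12, removed on days 7 and 20.
# FD = 7 + 10 = 17; avg find = (5+2)/2 = 3.5; avg fix = (2+8)/2 = 5.0
m = fault_day_metrics([(0, 5, 7), (10, 12, 20)])
```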

  32. Maintenance Metrics: Staff-Hours The staff-hours per major defect detected metric is defined as follows: SH = staff-hours per major defect detected = (SUM (T1 + T2)i (i=1 to I)) / SUM (Si) (i=1 to I) where: • T1 = preparation time spent by the team for the ith inspection • T2 = time spent by the team to conduct the ith inspection • Si = number of nontrivial defects detected during the ith inspection • I = total number of inspections conducted to date
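A sketch of the staff-hours metric, treating each inspection as a hypothetical (T1, T2, Si) record:

```python
def staff_hours_per_major_defect(inspections):
    """SH = sum over inspections of (T1 + T2) / sum of Si.
    Each inspection is (T1, T2, Si): preparation hours,
    meeting hours, and nontrivial defects detected."""
    total_hours = sum(t1 + t2 for t1, t2, s in inspections)
    total_defects = sum(s for t1, t2, s in inspections)
    return total_hours / total_defects

# Two hypothetical inspections: (4h prep, 2h meeting, 3 defects)
# and (3h prep, 2h meeting, 2 defects):
# SH = (6 + 5) / (3 + 2) = 2.2 staff-hours per major defect
sh = staff_hours_per_major_defect([(4, 2, 3), (3, 2, 2)])
```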

  33. Maintenance Metrics: Failure Density SSFD: Software System Failure Density: SSFD = NYF / KLMC WSSFD: Weighted Software System Failure Density WSSFD = WYF / KLMC where: NYF = number of software failures detected during a year of maintenance service WYF = weighted number of yearly software failures detected during a year of maintenance service KLMC = thousands of lines of maintained source code
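Failure density is a per-KLOC ratio; a minimal sketch with hypothetical counts (the weighted variant WSSFD uses the same division with WYF in the numerator):

```python
def failure_density(nyf, klmc):
    """SSFD = NYF / KLMC: software failures detected during a year
    of maintenance, per thousand lines of maintained code."""
    return nyf / klmc

# Hypothetical: 42 failures in a year over 280 KLMC
# -> SSFD = 0.15 failures per KLOC
ssfd = failure_density(nyf=42, klmc=280)
```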

  34. Maintenance Metrics: Failure Density WSSFF: Weighted software system failures per function point WSSFF = WYF / NMFP where: WYF = weighted number of yearly software failures detected during a year of maintenance service NMFP = number of function points designated for the maintained software

  35. Maintenance Metrics: Failure Severity ASSSF: Average severity of software system failures ASSSF = WYF / NYF MRepF: Maintenance repeated repair failure MRepF = RepYF / NYF where: RepYF = number of repeated software failure calls (service failures)
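The two severity metrics are simple ratios over the yearly failure count; a sketch with hypothetical numbers:

```python
def average_failure_severity(wyf, nyf):
    """ASSSF = WYF / NYF: the average severity weight per failure
    (weighted yearly failures over raw yearly failures)."""
    return wyf / nyf

def repeated_repair_rate(rep_yf, nyf):
    """MRepF = RepYF / NYF: the fraction of yearly failure calls
    that are repeated (service) failures."""
    return rep_yf / nyf

# Hypothetical year: 60 failures with weighted total 90, 6 repeat calls
# -> average severity 1.5, repeated-repair rate 0.1
asssf = average_failure_severity(wyf=90, nyf=60)
mrepf = repeated_repair_rate(rep_yf=6, nyf=60)
```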

  36. Maintenance Metrics: Availability FA: Full availability FA = (NYSerH – NYFH) / NYSerH VitA: Vital availability VitA = (NYSerH – NYVitFH) / NYSerH where NYSerH = number of hours software system is in service during one year NYFH = number of hours where at least one function is unavailable (failed) during one year, including total failure NYVitFH = number of hours when at least one vital function is unavailable (failed) during one year NYTFH = number of hours of total failure (all systems functions failed) during one year

  37. Maintenance Metrics: Availability TUA: Total Unavailability TUA = NYTFH / NYSerH where: NYSerH = number of hours software system is in service during one year NYTFH = number of hours of total failure (all systems functions failed) during one year
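The three availability metrics from the last two slides share the same inputs and can be computed together. A sketch with hypothetical yearly hour counts:

```python
def availability_metrics(ny_ser_h, ny_f_h, ny_vit_f_h, ny_t_f_h):
    """FA, VitA, and TUA from yearly service and outage hours:
    ny_ser_h   = hours in service during the year
    ny_f_h     = hours with at least one function unavailable
    ny_vit_f_h = hours with at least one vital function unavailable
    ny_t_f_h   = hours of total failure (all functions down)."""
    return {
        "FA": (ny_ser_h - ny_f_h) / ny_ser_h,        # full availability
        "VitA": (ny_ser_h - ny_vit_f_h) / ny_ser_h,  # vital availability
        "TUA": ny_t_f_h / ny_ser_h,                  # total unavailability
    }

# Hypothetical year: 8760 service hours, 88 hours with some function
# down, 44 with a vital function down, 9 hours of total failure
m = availability_metrics(8760, 88, 44, 9)
```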

  38. Maintenance Metrics: Productivity and Effectiveness CMaiP: Corrective maintenance productivity CMaiP = CMaiYH / KLMC FCMP: Function point corrective maintenance productivity FCMP = CMaiYH /NMFP where CMaiYH = total yearly working hours invested in the corrective maintenance of the software system KLMC = thousands of lines of maintained software code NMFP = number of function points designated for the maintained software

  39. Maintenance Metrics: Productivity and Effectiveness CMaiE: Corrective maintenance effectiveness CMaiE = CMaiYH / NYF where CMaiYH = total yearly working hours invested in the corrective maintenance of the software system NYF = number of software failures detected during a year of maintenance service
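The productivity and effectiveness metrics on the last two slides all divide the same yearly corrective-maintenance hours by a different denominator; a sketch with hypothetical values:

```python
def corrective_maintenance_metrics(cmai_yh, klmc, nmfp, nyf):
    """CMaiP = CMaiYH / KLMC, FCMP = CMaiYH / NMFP, CMaiE = CMaiYH / NYF:
    cmai_yh = yearly hours of corrective maintenance
    klmc    = thousands of lines of maintained code
    nmfp    = function points of the maintained software
    nyf     = failures detected during the year."""
    return {
        "CMaiP": cmai_yh / klmc,  # hours per KLOC
        "FCMP": cmai_yh / nmfp,   # hours per function point
        "CMaiE": cmai_yh / nyf,   # hours per failure
    }

# Hypothetical year: 2100 corrective hours over 300 KLMC,
# 1500 function points, and 42 failures
# -> CMaiP = 7.0 h/KLOC, FCMP = 1.4 h/FP, CMaiE = 50.0 h/failure
m = corrective_maintenance_metrics(2100, 300, 1500, 42)
```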

  40. Maintenance Metrics • Log maintenance effort on a per-component basis • Choose a set of possible metrics which may be related to maintenance • Assess the possible metrics for each maintained component • Look for correlations between maintenance effort and metric values

  41. Examples of Software Metrics
