
MSc Software Maintenance (MS Viðhald hugbúnaðar)


Presentation Transcript


  1. Measurements (mælingar). MSc Software Maintenance (MS Viðhald hugbúnaðar). Lectures 3 & 4: Measurements to Manage Software Maintenance. Dr Andy Brooks

  2. Case Study (Dæmisaga): MWSSS, the Missile Warning and Space Surveillance Sensors program. • Reference • Measurements to Manage Software Maintenance, George E. Stark, CROSSTALK: The Journal of Defense Software Engineering, July 1997 • http://stsc.hill.af.mil/crosstalk/about.html Dr Andy Brooks

  3. In 1994 • The MWSSS program management office took over the maintenance of seven systems. • Software running in 10 locations worldwide. • 8.5 million source lines of code. • Software written in 22 different languages. • The latest system became operational in 1992. • To manage and understand the software maintenance effort, a measurement program was started based on Basili's Goal-Question-Metric paradigm. Dr Andy Brooks

  4. Goal-Question-Metric Dr Andy Brooks

  5. Goal-Question-Metric Dr Andy Brooks

  6. Goal-Question-Metric Also addresses the goal of maximizing customer satisfaction. Dr Andy Brooks

  7. Using metrics information • Attention can be directed to problems that need to be addressed. • Supports making decisions about how to solve problems. • Keeps everyone informed of progress • “how are they doing?” Dr Andy Brooks

  8. MWSSS Software Maintenance Process Overview from the CROSSTALK article. Dr Andy Brooks

  9. Activities in the Maintenance Process • An analyst at the user's location checks problem reports for completeness and duplicates, and verifies that the user has properly understood the system. • Further, this analyst categorizes the problem as a software, hardware, or communications equipment problem. • For software problems, a software change form (SCF) is created: • 18 items including proposed change, justification, and resource estimates. Dr Andy Brooks

  10. Activities in the Maintenance Process • A software maintenance engineer independently evaluates each SCF: • estimates effort based on a taxonomy of change types • estimates computer resources required • estimates impact on quality of service • If the independent effort estimate differs from the original estimate by more than 20%, the user and engineer meet to resolve the difference. • SCFs can be generated for systems by teams who are not directly responsible for these systems but who are otherwise dependent on them. • The User Configuration Control Board categorizes each SCF as either a modification or a fix (fault correction). Dr Andy Brooks
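The 20% reconciliation rule lends itself to a simple check. Below is a minimal Python sketch, assuming both estimates are expressed in staff days; the function name and the example numbers are illustrative, not from the article.

```python
def needs_reconciliation(user_estimate_days: float,
                         engineer_estimate_days: float,
                         tolerance: float = 0.20) -> bool:
    """Return True when the independent estimate differs from the user's
    original estimate by more than the tolerance (20% in the case study),
    in which case the user and engineer meet to resolve the difference."""
    if user_estimate_days <= 0:
        return True  # no usable baseline, so always discuss
    relative_difference = abs(engineer_estimate_days - user_estimate_days) / user_estimate_days
    return relative_difference > tolerance

print(needs_reconciliation(10.0, 13.0))  # True: 30% above the original estimate
print(needs_reconciliation(10.0, 11.5))  # False: within 20%
```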

  11. Activities in the Maintenance Process • The User Configuration Control Board recommends a priority for the SCF: • emergency: to avoid downtime or meet high-priority mission requirements • urgent: if needed in the next delivery or to fix a problem arising from a change in operations • routine: e.g. unit conversions, default value changes, fixing printouts Dr Andy Brooks

  12. Activities in the Maintenance Process • Version content for the next release is negotiated, reviewing release complexity, software reliability, software maintainability, and computer resource utilization. • The Maintenance Configuration Control Board reviews the release plan and schedules the release. • The software engineering team completes the design, code, test, installation, and quality assurance of the release. Dr Andy Brooks

  13. Metric: Current Change Backlog Dr Andy Brooks

  14. Metric: Current Change Backlog • “Managers use this chart to allocate computer and staff resources, plan release content, and track the effect of new tools or other process improvement programs over time.” • It is better to be in a state of equilibrium i.e. incoming change requests are closed in the next release (zero backlog) • Chart indicates that several releases were on schedule. • Chart based on data for one project. Dr Andy Brooks
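To make the backlog metric concrete, here is a minimal sketch of how a backlog curve like the one on the chart could be computed; the per-release counts are made up for illustration.

```python
# Backlog after each release = cumulative change requests received
# minus cumulative change requests closed. Equilibrium (zero backlog)
# means incoming requests are closed in the next release.
received_per_release = [30, 25, 40, 20]   # hypothetical counts
closed_per_release   = [28, 25, 35, 30]

backlog = 0
for release, (received, closed) in enumerate(
        zip(received_per_release, closed_per_release), start=1):
    backlog += received - closed
    print(f"Release {release}: backlog = {backlog}")
```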

  15. Metric: Software Reliability Dr Andy Brooks

  16. Metric: Software Reliability • Downtime incidents as a result of a software failure are counted (as reported in monthly maintenance logs). • Chart shows failure rate for the last nine releases of one product. • If the failure rate is below 4 per 1,000 hours of operation, it might be decided to incorporate more difficult changes in the next release. • If the failure rate is above 4 per 1,000 hours of operation, it might be decided to revert to a previous version or only work on fault correction • postpone making enhancements Dr Andy Brooks

  17. Metric: Software Reliability • Failure rate can be used to determine the probability of the software supporting a complete mission. • the probability of no failures over a week is exp(-168 hrs/week * 0.002 failures/hr) = 0.71 • From the Poisson distribution • Historical failure rate can be used as a quality requirement when trading off the cost and schedule of a major upgrade. Dr Andy Brooks
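The reliability calculation follows directly from the Poisson assumption of a constant failure rate; a minimal Python sketch reproducing the slide's 0.71 figure is shown below.

```python
import math

def mission_success_probability(failure_rate_per_hour: float,
                                mission_hours: float) -> float:
    """Probability of zero failures over the mission, assuming failures
    follow a Poisson process with a constant rate."""
    return math.exp(-failure_rate_per_hour * mission_hours)

# The slide's example: 0.002 failures/hour over a 168-hour week.
print(round(mission_success_probability(0.002, 168.0), 2))  # 0.71
```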

  18. Metric: Change Cycle Time • Chart shows two curves: one measured from the time the user writes the change request, and one from the time the requirement has been approved. Dr Andy Brooks

  19. Metric: Change Cycle Time • The horizontal distance between the two lines on the chart is the time taken to approve the change. • Chart average for this in-process time is 54 days. • 80% of priority change requests are ready within 90 days of user board approval. • 80% of priority change requests are ready within 170 days from when they were written. Dr Andy Brooks
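The 80% figures are percentiles of the observed cycle times. A minimal sketch of such a percentile calculation is shown below; the sample durations are hypothetical, only the 80th-percentile idea comes from the slide.

```python
import math

def percentile(values: list[float], p: float) -> float:
    """Nearest-rank percentile, 0 < p <= 100."""
    ordered = sorted(values)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

days_from_approval = [30, 40, 45, 60, 70, 75, 85, 90, 95, 120]  # hypothetical
print(percentile(days_from_approval, 80))  # 90 -> "80% ready within 90 days"
```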

  20. Metric: Cost Per Delivery Chart used for long-term budget planning. Dr Andy Brooks

  21. Metric: Cost Per Activity • Cost categories: • 1. software development activities • 2. configuration management • 3. quality assurance • 4. security • 5. administrative support • 6. travel • 7. project management • 8. system engineering • 9. hardware system maintenance • 10. system management • 11. finance • Categories 1 and 9 accounted for 88% of the cost of a typical release, i.e. most of the money goes into productive work. Dr Andy Brooks

  22. Software Change Taxonomy (based on 8 releases) • Computational: incorrect operand in equation; incorrect use of parentheses; incorrect/inaccurate equation; rounding or truncation error • Logic: incorrect operand in logical expression; logic out of sequence; wrong variable being checked; missing logic or condition test; loop iterated incorrect number of times • Input: incorrect format; input read from incorrect location; end-of-file missing or encountered prematurely • Data Handling: data file not available; data referenced out-of-bounds; data initialization; variable used as flag or index not set properly; data not properly defined/dimensioned; subscripting error • Output: data written to different location; incorrect format; incomplete or missing output; output garbled or misleading Dr Andy Brooks

  23. Software Change Taxonomy (continued) • Interface: software/hardware interface; software/user interface; software/database interface; software/software interface • Operations: COTS/GOTS software change; configuration control • Performance: time limit exceeded; storage limit exceeded; code or design inefficient; network efficiency • Specification: system/system interface; specification incorrect/inadequate; requirements specification incorrect/inadequate; user manual/training inadequate • Improvement: improve existing function; improve interface • GOTS: Government Off The Shelf Dr Andy Brooks

  24. Metric: Number of Changes by Type (Pareto Diagram) • Data for the last eight releases: 67 modifications and 110 fixes. Dr Andy Brooks
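A Pareto diagram simply orders the change categories by count and tracks the cumulative percentage. The sketch below illustrates the idea; the per-category counts are hypothetical (they merely sum to the slide's 177 changes) and are not the article's actual breakdown.

```python
# Hypothetical counts per taxonomy category, summing to the 177 changes
# (67 modifications + 110 fixes) reported for the last eight releases.
change_counts = {"Specification": 48, "Improvement": 35, "Logic": 30,
                 "Data Handling": 25, "Interface": 20, "Other": 19}

total = sum(change_counts.values())
cumulative = 0
for category, count in sorted(change_counts.items(),
                              key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:15s} {count:3d}  cumulative {100 * cumulative / total:5.1f}%")
```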

  25. Metric: Staff Days Expended/Change Type Dr Andy Brooks

  26. Metric: Staff Days Expended/Change Type • Changes in requirements or interface specifications account for 42% of the total effort. • By categorising change requests, estimates can be made of the staff effort required to design, code, and test individual changes. • Average effort for an interface specification change is 36 days with a standard deviation of 43 days. • Average effort for a requirements change is 22 days with a standard deviation of 24 days. • Taxonomy and cost information is updated at the completion of a release. Dr Andy Brooks
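Given per-type averages, a release estimate is just a weighted sum. The sketch below uses the two averages quoted on the slide; the planned change counts are hypothetical.

```python
# Historical average effort per change type (staff days), from the slide.
average_effort_days = {
    "interface specification": 36.0,   # std. dev. 43 days
    "requirements": 22.0,              # std. dev. 24 days
}
# Hypothetical content of a planned release.
planned_changes = {"interface specification": 3, "requirements": 5}

estimate = sum(count * average_effort_days[change_type]
               for change_type, count in planned_changes.items())
print(f"Estimated staff effort: {estimate:.0f} days")  # 3*36 + 5*22 = 218
```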

  27. Metric: Percentage of Invalid Change Requests On average, 72 change requests are evaluated per quarter. On average, 8% are withdrawn. Estimated loss of $7,500 per year. Dr Andy Brooks

  28. Metric: Complexity Measurement • Spreadsheet-based tool used to calculate release complexity on a scale of 0 to 1. • Set of objective and subjective data contribute to the complexity measure: • Product characteristics (e.g. age and size). • Management processes (e.g. V&V) • Staff experience (e.g. group dynamics) • Environment (e.g. tools) • Aim to keep complexity below 0.5. Dr Andy Brooks
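The article does not give the spreadsheet's formula, but a weighted average over the four factor groups is one way to picture a 0-to-1 score; the weights and factor scores below are assumptions for illustration only.

```python
# Each factor group already normalised to 0 (simple) .. 1 (complex).
factor_scores = {
    "product":     0.6,   # e.g. age and size
    "process":     0.4,   # e.g. V&V practices
    "staff":       0.3,   # e.g. group dynamics, experience
    "environment": 0.5,   # e.g. tools
}
weights = {"product": 0.4, "process": 0.2, "staff": 0.2, "environment": 0.2}  # assumed

complexity = sum(weights[f] * score for f, score in factor_scores.items())
print(f"Release complexity = {complexity:.2f}")  # 0.48, i.e. just below the 0.5 target
```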

  29. Metric: Complexity Measurement • For each release proposal, steps can be taken to reduce complexity and manage the risks associated with complexity. • The best team can be assigned to the most complex work. • An improved process might be introduced. • Tools or better tools might be acquired. Dr Andy Brooks

  30. Metric: Percentage Content Changes By Delivery Dr Andy Brooks

  31. Metric: Percentage Content Changes By Delivery • Once a delivery plan has been agreed, requirement volatility becomes an important factor. • A customer at a preliminary design review might decide to: • Add to the delivery content • Delete some delivery content • Change the scope of some requirements Dr Andy Brooks

  32. Quantifying impact of requirement volatility Dr Andy Brooks

  33. Quantifying impact of requirement volatility • A square root (SQRT) transform is applied to decrease the contribution of large values. • 100% means the schedule was met. Values greater than 100% indicate late delivery. • A least squares fit was used to develop a predictive model of schedule volatility (combining chart data with delivery effort data). • The estimate goes up even if the changes are all deletions, so there is some disagreement regarding model validity. Dr Andy Brooks
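A sketch of the kind of fit described here is shown below: schedule performance (100% = on time) regressed against the square root of the percentage content change. The data points and the resulting coefficients are invented; the article's actual model also incorporates delivery effort data.

```python
import numpy as np

# Hypothetical deliveries: percentage content change vs. schedule performance.
content_change_pct = np.array([0.0, 5.0, 10.0, 20.0, 25.0, 40.0])
schedule_pct       = np.array([100.0, 104.0, 109.0, 115.0, 118.0, 125.0])

# Least-squares fit of schedule performance against sqrt(content change).
slope, intercept = np.polyfit(np.sqrt(content_change_pct), schedule_pct, deg=1)

def predict_schedule(change_pct: float) -> float:
    return intercept + slope * np.sqrt(change_pct)

print(f"Predicted schedule for a 15% content change: {predict_schedule(15.0):.0f}%")
```

A fitted curve of this kind is what allows the what-if answer described on the next slide (an 18% predicted slip for a proposed change in delivery content).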

  34. Application of schedule volatility model • In one case, a version contained 15 planned requirements for delivery in 91 calendar days. • At preliminary design, the customer wanted to drop two planned requirements and change the scope of a third. • Using the model, a schedule slip of 18%, or 16 days, was predicted. • The customer judged the schedule slip unacceptable and asked only for the scope change to be undertaken. • The model provided a basis for objective communication about release plans and so helped with customer relations. Dr Andy Brooks

  35. Limitations • No way found as yet to measure maintainability. • No method as yet to predict computer resource impacts. • Systems are old and require constant capacity planning to keep the CPU less than 100% busy and memory below 98% full. • No account taken of the variation in size of individual changes. • SLOC or function points could be used. Dr Andy Brooks

  36. Objective Communication If you do not have a measurement program, can you really hope to have informed and friendly discussions between the user organisation and the organisation doing the maintenance? Dr Andy Brooks
