
  1. IMPROVING SOFTWARE ECONOMICS Summary: Modern software technologies enable systems to be built with fewer human-generated source lines. Modern software processes are iterative. Modern software development and maintenance environments are the delivery mechanism for process automation.

  2. The key to improvement • Reducing the size or complexity of what needs to be developed. • Improving the development process. • Using more-skilled personnel and better teams (not necessarily the same thing). • Using better environments (tools to automate the process). • Trading off or backing off on quality thresholds.

  3. Important trends in improving software economics. Each cost model parameter pairs with a trend and example technologies: • Size (abstraction and component-based development technologies): higher-order languages (C++, Ada 95); object-oriented analysis, design, and programming; reuse; commercial components. • Process (methods and techniques): iterative development, process maturity levels, architecture-first development, acquisition reform. • Personnel (people factors): training and personnel skill development, teamwork, win-win conditions. • Environment (automation technologies and tools): integrated tools (visual modeling, compiler, editor, debugger, change management), open systems, hardware platform performance, automation of coding, documentation, testing, and analysis. • Quality (performance, reliability, accuracy): hardware platform performance, demonstration-based assessment, statistical quality control.

  4. 1. REDUCING SOFTWARE PRODUCT SIZE • Minimize the amount of human-generated source material through: • reuse of commercial components (OS, DBMS, middleware) • object-oriented technology (UML, visual modeling tools) • automatic code production (GUI builders, visual modeling tools) • higher-order programming languages (C++, Java, VB) • component-based development. • Mature and reliable size-reduction technologies are extremely powerful at producing economic benefits.

  5. Advantages • Reduce size of team • Reduce time for development

  6. Drawbacks • Size-reducing technologies lower the number of human-generated source lines, but all of them tend to increase the amount of computer-processable executable code. • Immature size-reduction technologies may reduce the development size but require so much more investment in achieving the necessary levels of quality and performance that they have a negative impact on overall project performance.

  7. 1.1 LANGUAGES

  8. In modern languages the level of expressiveness is very attractive. Each language has a domain of usage. Visual Basic is very expressive and powerful for building simple interactive applications, but it would not be a wise choice for a real-time, embedded avionics program. Similarly, Ada 95 might be the best language for a catastrophic-cost-of-failure system that controls a nuclear plant, but it would not be the best choice for a highly parallel, scientific, number-crunching program running on a supercomputer. Universal function points can be used to indicate the relative program sizes required to implement a given functionality.
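
To make the function-point idea concrete, here is a minimal Python sketch. The SLOC-per-function-point ratios are illustrative assumptions (published figures vary widely by study), not values from the slides:

```python
# Illustrative sketch: comparing the source size needed to deliver the
# same functionality in different languages. The ratios below are
# assumed for illustration, not measured values.

SLOC_PER_FUNCTION_POINT = {
    "Assembly":     320,
    "C":            128,
    "C++":           53,
    "Ada 95":        49,
    "Visual Basic":  32,
}

def estimated_sloc(function_points: int, language: str) -> int:
    """Rough size estimate: function points scaled by a language ratio."""
    return function_points * SLOC_PER_FUNCTION_POINT[language]

# The same 100 function points of functionality implies very different
# amounts of human-generated source across languages.
for lang in SLOC_PER_FUNCTION_POINT:
    print(f"{lang:13s} ~ {estimated_sloc(100, lang):6,d} SLOC")
```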

  9. Commercial components and automatic code generators (such as CASE tools and GUI builders) can further reduce the size of human-generated source code, which in turn reduces the size of the team and the time needed for development.

  10. OBJECT-ORIENTED METHODS AND VISUAL MODELING • The fundamental impact of object-oriented technology is to reduce the overall size of what needs to be developed, by providing more formalized notations for capturing and visualizing software abstractions.

  11. An object-oriented model of the problem and its solution encourages a common vocabulary between the end users of a system and its developers, thus creating a shared understanding of the problem being solved and improving teamwork and interpersonal communication. • The use of continuous integration creates opportunities to recognize risk early and make incremental corrections without destabilizing the entire development effort. • An object-oriented architecture provides a clear separation of concerns among disparate elements of a system, creating firewalls that prevent a change in one part of the system from rending the fabric of the entire architecture.
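
A minimal sketch of the "firewall" effect, using invented names (Repository, InMemoryRepository): clients depend only on the abstraction, so replacing the concrete implementation does not ripple through the rest of the system:

```python
# Illustrative sketch: an abstract interface as an architectural
# "firewall". Client code depends only on Repository, so swapping
# the concrete storage class does not disturb the rest of the system.
from abc import ABC, abstractmethod

class Repository(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryRepository(Repository):
    def __init__(self) -> None:
        self._data = {}
    def save(self, key: str, value: str) -> None:
        self._data[key] = value
    def load(self, key: str) -> str:
        return self._data[key]

def register_user(repo: Repository, name: str) -> None:
    # Application logic sees only the abstraction; moving from
    # in-memory storage to a database stays behind the interface.
    repo.save(name, "registered")

repo = InMemoryRepository()
register_user(repo, "alice")
print(repo.load("alice"))
```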

  12. Characteristics of successful object-oriented projects (Booch principles) • A ruthless focus on the development of a system that provides a well-understood collection of essential minimal characteristics. • The existence of a culture that is centered on results, encourages communication, and yet is not afraid to fail. • The effective use of object-oriented modeling. • The existence of a strong architectural vision. • The application of a well-managed iterative and incremental development process.

  13. REUSE Software design methods have always dealt implicitly with reuse in order to minimize development costs while achieving all the other required attributes of performance, feature set, and quality. Common architectures, precedent experience, and common environments are all instances of reuse. The key metric for identifying whether a component is truly reusable is whether some organization is making money on it. Without this economic motive, reusable components are rare. “Open” reuse libraries sponsored by nonprofit organizations lack economic motivation, trustworthiness, and accountability for quality, support, improvement, and usability. Truly reusable components have the following characteristics: (1) they have an economic motivation for continued support; (2) they take ownership of improving product quality, adding new features, and transitioning to new technologies; and (3) they have a sufficiently broad customer base to be profitable.

  14. Cost and schedule investments necessary to achieve reusable components: • 1-project solution: $N and M months. • 2-project solution: 50% more cost and 100% more time. • 3-project solution: 125% more cost and 150% more time. • Many-project solution: operating with high value per unit investment, typical of commercial products.
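
A quick back-of-the-envelope reading of these figures (a sketch; costs are normalized so the one-project solution costs 1.0 N) shows why the heavier investment only pays off across several projects:

```python
# Back-of-the-envelope sketch of the investment figures above, with the
# one-project solution normalized to a cost of 1.0 N.
ONE_PROJECT  = 1.00   # $N and M months
MANY_PROJECT = 2.25   # 125% more cost for the reusable solution

def cost_per_use(total_cost: float, projects: int) -> float:
    """Amortized cost of a reusable component across several projects."""
    return total_cost / projects

# The 2.25 N investment beats repeated one-off builds (1.0 N each)
# once the component is reused on three or more projects.
for n in (1, 2, 3, 4):
    print(f"{n} project(s): one-off {ONE_PROJECT:.2f} N each, "
          f"reusable {cost_per_use(MANY_PROJECT, n):.2f} N each")
```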

  15. Reuse is an important discipline that has an impact on the efficiency of all workflows and the quality of most artifacts. There have been very few success stories in software component reuse except for commercial products such as operating systems, DBMSs, middleware, networking software, GUI builders, and other applications. On the other hand, every software success story has probably exploited some key avenues of reuse (without calling it that) to achieve results efficiently. COMMERCIAL COMPONENTS A common approach being pursued today in many domains is to maximize the integration of commercial components and off-the-shelf products. While the use of commercial components is certainly desirable as a means of reducing custom development, it has not proven to be straightforward in practice. 2. IMPROVING SOFTWARE PROCESSES Metaprocess: an organization’s policies, procedures, and practices for pursuing a software-intensive line of business. The focus of the metaprocess is on organizational economics, long-term strategies, and software ROI.

  16. Macroprocess: a project’s policies, procedures, and practices for producing a complete software product within certain cost, schedule, and quality constraints. The focus of the macroprocess is on creating an adequate instance of the metaprocess for a specific set of constraints. Microprocess: a project team’s policies, procedures, and practices for achieving an artifact of the software process. The focus of the microprocess is on achieving an intermediate product baseline with adequate quality and adequate functionality as economically and rapidly as possible. Although these three levels of process overlap somewhat, they have different objectives, audiences, metrics, concerns, and time scales. All project processes consist of productive activities and overhead activities. Productive activities include prototyping, modeling, coding, debugging, and user documentation. Overhead activities include plan preparation, documentation, progress monitoring, risk assessment, financial assessment, configuration control, quality assessment, integration, testing, late scrap and rework, management, personnel training, business administration, and other tasks. The objective of process improvement is to maximize the allocation of resources to productive activities and minimize the impact of overhead activities on resources such as personnel, computers, and schedule.

  17. Early scrap and rework is a productive necessity for most projects, resolving the innumerable unknowns in the solution space; in the later phases of the life cycle, however, it is clearly undesirable, and with a good process it is largely unnecessary. • The quality of the software process strongly affects the required effort, and therefore the schedule, for producing the software product. In practice, the difference between a good process and a bad one will affect overall cost estimates by 50% to 100%, and the reduction in effort will improve the overall schedule. • Schedule improvement has at least three dimensions (illustrated in the sketch after this slide): • We could take an N-step process and improve the efficiency of each step. • We could take an N-step process and eliminate some steps so that it is now only an M-step process. • We could take an N-step process and use more concurrency in the activities being performed or the resources being applied. • The primary focus of process improvement should be on achieving an adequate solution in the minimum number of iterations and eliminating as much downstream scrap and rework as possible.
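
A toy model of the three schedule dimensions, under the simplifying assumptions that every step takes one unit of time and that half the steps can run in parallel:

```python
# Toy model of the three schedule-improvement dimensions for an
# N-step process whose steps each take one unit of time.
# All figures are illustrative assumptions.
N = 10

baseline        = N * 1.0        # original N-step process
efficient_steps = N * 0.8        # dimension 1: each step 20% faster
fewer_steps     = (N - 3) * 1.0  # dimension 2: an M-step process (M = 7)
concurrent      = (N / 2) * 1.0  # dimension 3: pairs of independent
                                 # steps run in parallel (idealized)

print(baseline, efficient_steps, fewer_steps, concurrent)  # 10.0 8.0 7.0 5.0
```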

  18. In a perfect software engineering world – with an immaculate problem description, an obvious solution space, a development team of experienced geniuses, adequate resources, and stakeholders with common goals – we could execute a software development process in one iteration with almost no scrap and rework. Because we work in an imperfect world, however, we need to manage engineering activities so that scrap and rework profiles do not have an impact on the win conditions of any stakeholder.

  19. IMPROVING TEAM EFFECTIVENESS • The first step is to formulate a good team. • Balance and coverage are two of the most important aspects of excellent teams. • Teamwork is much more important than the sum of the individuals.

  20. Some maxims of team management include the following: • A well-managed project can succeed with a nominal engineering team. • A mismanaged project will almost never succeed, even with an expert team of engineers. • A well-architected system can be built by a nominal team of software builders. • A poorly architected system will flounder even with an expert team of builders.

  21. Boehm’s staffing principles • The principle of top talent: Use better and fewer people. • The principle of job matching: Fit the tasks to the skills and motivation of the people available. • The principle of career progression: An organization does best in the long run by helping its people to self-actualize. • The principle of team balance: Select people who will complement and harmonize with one another. • The principle of phaseout: Keeping a misfit on the team doesn’t benefit anyone.

  22. Some crucial attributes of successful software project managers: • Hiring skill: Placing the right person in the right job seems obvious but is surprisingly hard to achieve. • Customer-interface skill: Avoiding adversarial relationships among stakeholders is a prerequisite for success. • Decision-making skill: We all know a good leader when we run into one, and decision-making skill seems obvious despite its intangible definition. • Team-building skill: Teamwork requires that a manager establish trust, motivate progress, exploit eccentric prima donnas, transition average people into top performers, eliminate misfits, and consolidate diverse opinions into a team direction.

  23. Selling skill: Successful project managers must sell all stakeholders (including themselves) on decisions and priorities, and sell candidates on job positions.

  24. IMPROVING AUTOMATION THROUGH SOFTWARE ENVIRONMENTS The tools and environment used in the software process generally have a linear effect on the productivity of the process. Planning tools, requirements management tools, visual modeling tools, compilers, editors, debuggers, quality assurance analysis tools, test tools, and user interface tools provide crucial automation support for evolving the software engineering artifacts.

  25. IMPROVING AUTOMATION THROUGH SOFTWARE ENVIRONMENTS (Cont) An environment that provides semantic integration (in which the environment understands the detailed meaning of the development artifacts) and process automation can improve productivity, improve software quality, and accelerate the adoption of modern techniques. Automation of the design process provides payback in quality, the ability to estimate costs and schedules, and overall productivity using a smaller team.

  26. Round-trip engineering is a term used to describe the key capability of environments that support iterative development. • Round-trip engineering is a functionality of software development tools that provides generation of models from source code and of source code from models; in this way, existing source code can be converted into a model, subjected to software engineering methods, and then converted back.

  27. Forward engineering is the automation of one engineering artifact from another, more abstract representation. For example, compilers and linkers provide automated transition of source code into executable code. • Reverse engineering is the generation or modification of a more abstract representation from an existing artifact.
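
A toy round trip, assuming an invented one-class model format (real tools operate on UML models and full language grammars): forward engineering generates code from the model, and reverse engineering recovers the model from the code:

```python
# Toy round-trip sketch: forward-generate a Python class skeleton from
# a simple model, then reverse-engineer the model back from the source.
import ast

def forward(model: dict) -> str:
    """Forward engineering: model -> source code."""
    methods = "\n".join(
        f"    def {m}(self):\n        pass" for m in model["methods"]
    )
    return f"class {model['name']}:\n{methods}\n"

def reverse(source: str) -> dict:
    """Reverse engineering: source code -> model."""
    tree = ast.parse(source)
    cls = next(n for n in ast.walk(tree) if isinstance(n, ast.ClassDef))
    return {"name": cls.name,
            "methods": [f.name for f in cls.body
                        if isinstance(f, ast.FunctionDef)]}

model = {"name": "Account", "methods": ["deposit", "withdraw"]}
code = forward(model)
assert reverse(code) == model   # the round trip preserves the model
```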

  28. ACHIEVING REQUIRED QUALITY • Key practices that improve overall software quality include the following: • Focusing on driving requirements and critical use cases early in the life cycle, focusing on requirements completeness and traceability late in the life cycle, and focusing throughout the life cycle on a balance between requirements evolution, design evolution, and plan evolution. • Using metrics and indicators to measure the progress and quality of an architecture as it evolves from a high-level prototype into a fully compliant product.

  29. Providing integrated life-cycle environments that support early and continuous configuration control, change management, rigorous design methods, document automation, and regression test automation. • Using visual modeling and higher-level languages that support architectural control, abstraction, reliable programming, reuse, and self-documentation. • Gaining early and continuous insight into performance issues through demonstration-based evaluations.
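
One way to obtain that early insight is to wire a timing assertion on a critical use case into the automated regression suite. A minimal sketch, in which the 50 ms budget and the use-case body are assumptions:

```python
# Sketch of a demonstration-based performance check: time a critical
# use case on every build so that performance erosion is caught early
# rather than at integration and test.
import time

def critical_use_case() -> None:
    # Stand-in for an end-to-end scenario run against the evolving build.
    assert sum(i * i for i in range(100_000)) > 0

def test_use_case_meets_budget(budget_s: float = 0.050) -> None:
    start = time.perf_counter()
    critical_use_case()
    elapsed = time.perf_counter() - start
    assert elapsed < budget_s, f"took {elapsed:.3f}s, budget {budget_s:.3f}s"

test_use_case_meets_budget()
```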

  30. Chronology of events in performance assessment • Project inception. The proposed design was asserted to be low risk with adequate performance margin. • Initial design review. Optimistic assessments of adequate design margin were based mostly on paper analysis or rough simulation of the critical threads. In most cases, the actual application algorithms and database sizes were fairly well understood. However, the infrastructure – including the operating system overhead, the database management overhead, and the inter-process and network communication overhead – and all the secondary threads were typically misunderstood. • Mid-life-cycle design review. The assessments started whittling away at the margin, as early benchmarks and initial tests began exposing the optimism inherent in earlier estimates.

  31. Integration and test. Serious performance problems were uncovered, necessitating fundamental changes in the architecture. The underlying infrastructure was usually the scapegoat, but the real culprit was immature use of the infrastructure, immature architectural solutions, or poorly understood early design trade-offs.

  32. PEER INSPECTIONS: A PRAGMATIC VIEW • Improving software quality remains a key challenge. • Formal peer inspection has emerged as an effective approach to addressing this challenge. • Software peer inspection aims at detecting and removing development defects efficiently and early, while they are less expensive to correct.

  33. Inspections are only one of several mechanisms for assessing quality; others include the following: • Transitioning engineering information from one artifact set to another, thereby assessing the consistency, feasibility, understandability, and technology constraints inherent in the engineering artifacts. • Major milestone demonstrations that force the artifacts to be assessed against tangible criteria in the context of relevant use cases. • Environment tools that ensure representation rigor, consistency, completeness, and change control. • Life-cycle testing for detailed insight into critical trade-offs, acceptance criteria, and requirements compliance. • Change management metrics for objective insight.

  34. Useful guidelines • Keep the review team small. • Find problems during reviews, but don’t try to solve them. • Limit review meetings to about two hours. • Require advance preparation.

  35. A critical component deserves to be inspected by several people, preferably those who have a stake in its quality, performance, or feature set. An inspection focused on resolving an existing issue can be an effective way to determine the cause of a problem or arrive at a resolution once the cause is understood. Random human inspections tend to degenerate into comments on style and first-order semantic issues. They rarely result in the discovery of real performance bottlenecks, serious control issues (such as deadlocks, races, or resource contention), or architectural weaknesses (such as flaws in scalability, reliability, or interoperability).
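
One concrete (invented) instance: the unsynchronized counter below reads correctly line by line, which is exactly the kind of control issue a style-focused inspection passes over:

```python
# Illustrative race of the kind code inspections rarely catch: every
# line looks correct, but `count += 1` is a read-modify-write, so
# concurrent increments can be lost. The fix is a threading.Lock
# around the increment.
import threading

count = 0

def worker(iterations: int) -> None:
    global count
    for _ in range(iterations):
        count += 1  # not atomic: load, add, store

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)  # often less than 400000, depending on thread scheduling
```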

  36. Quality assurance is everyone’s responsibility and should be integral to almost all process activities instead of a separate discipline performed by quality assurance specialists. • Evaluating and assessing the quality of the evolving engineering baselines should be the job of an engineering team that is independent of the architecture and development team. • Their life-cycle assessment of the evolving artifacts would typically include change management, trend analysis, and testing, as well as inspection.
