
Schedule & effort


Presentation Transcript


  1. Schedule & effort http://www.flickr.com/photos/28481088@N00/315671189/sizes/o/

  2. Problem • Our ability to realistically plan and schedule projects depends on our ability to estimate project costs and development efforts • In order to come up with a reliable cost estimate, we need to have a firm grasp on the requirements, as well as our approach to meeting them • Typically costs need to be estimated before these are fully understood

  3. Planning big projects • Figure out what the project entails • Requirements, architecture, design • Figure out dependencies & priorities • What has to be done in what order? • Figure out how much effort it will take • Plan, refine, plan, refine, …

  4. What are project costs? • For most software projects, costs are: • Hardware costs • Travel & training costs • Effort costs

  5. Aggravating & mitigating factors • Market opportunity • Uncertainty/risks • Contractual terms • Requirements volatility • Financial health • Opportunity costs

  6. Cost drivers • Software reliability • Size of application database • Complexity • Analyst capability • Software engineering capability • Applications experience • Programming language expertise • Performance requirements • Memory constraints • Volatility of the virtual machine environment • Use of software tools • Application of software engineering methods • Required development schedule

  7. What are effort costs? • Effort costs are typically the largest of the three cost types (hardware, training, and effort), and the most difficult to estimate. • Effort costs include: • Developer hours • Heating, power, space • Support staff: accountants, administrators, cleaners, management • Networking and communication infrastructure • Central facilities such as a rec room & library • Social security and employee benefits

  8. Software cost estimation – Boehm (1981) • Algorithmic cost modeling • Base estimate on project size (lines of code) • Expert judgment • Ask others • Estimation by analogy • Cost based on experience with similar projects • Parkinson’s Law • Project time will expand to fill time available • Pricing to win • Cost will be whatever customer is willing to pay • Top-down estimation • Estimation based on function/object points • Bottom-up estimation • Estimation based on components

  9. Productivity metrics • Lines of code • Simple, but not a very meaningful metric • Easy to pad, affected by programming language • How to count revisions/debugging etc.? • Function points • Amount of useful code produced (goals/requirements met) • Less volatile, more meaningful, but not perfect

  10. Function points Function points are computed by first calculating an unadjusted function point count (UFC). Counts are made for the following categories (Fenton, 1997): • External inputs – those items provided by the user that describe distinct application-oriented data (such as file names and menu selections) • External outputs – those items provided to the user that generate distinct application-oriented data (such as reports and messages, rather than the individual components of these) • External inquiries – interactive inputs requiring a response • External files – machine-readable interfaces to other systems • Internal files – logical master files in the system Each of these is then assessed for complexity and given a weighting from 3 (for simple external inputs) to 15 (for complex internal files).

  11. Unadjusted Function Point Count (UFC) Each count is multiplied by its corresponding complexity weight and the results are summed to provide the UFC
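The UFC computation above is just a weighted sum. A minimal sketch in Python, with made-up counts; the weights shown are the commonly cited average-complexity weights, whereas a real count assigns each item its own weight from the per-category complexity assessment described on the previous slide:

```python
# Hypothetical counts for each function-point category
counts = {
    "external_inputs": 10,
    "external_outputs": 7,
    "external_inquiries": 5,
    "external_files": 2,
    "internal_files": 3,
}

# Assumed average-complexity weights (illustrative only)
weights = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "external_files": 7,
    "internal_files": 10,
}

# UFC = sum over categories of (count x complexity weight)
ufc = sum(counts[c] * weights[c] for c in counts)
print(ufc)  # 139
```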

  12. Object points Similar to function points (used to estimate projects that rely heavily on reuse, scripting, and adaptation of existing tools) • Number of screens (simple x1, complex x2, difficult x3) • Number of reports (simple x2, complex x5, difficult x8) • Number of custom modules written in languages like Java/C (x10)
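The object-point tally can be written directly from the weights on this slide; the mix of screens, reports, and modules below is invented for illustration:

```python
# Weights from the slide: screens x1/x2/x3, reports x2/x5/x8,
# custom 3GL modules x10
WEIGHTS = {
    ("screen", "simple"): 1, ("screen", "complex"): 2, ("screen", "difficult"): 3,
    ("report", "simple"): 2, ("report", "complex"): 5, ("report", "difficult"): 8,
    ("3gl_module", None): 10,
}

# A hypothetical small project
items = [
    ("screen", "simple"), ("screen", "simple"), ("screen", "complex"),
    ("report", "simple"), ("report", "complex"),
    ("3gl_module", None),
]

object_points = sum(WEIGHTS[item] for item in items)
print(object_points)  # 1 + 1 + 2 + 2 + 5 + 10 = 21
```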

  13. COCOMO II Model • Supports spiral model of development • Supports component composition, reuse, customization • 4 sub-models: • Application composition model – assumes the system is built from components; used for prototypes and development using scripts, databases, etc. (object points) • Early design model – after requirements, used during early stages of design (function points) • Reuse model – integrating and adapting reusable components (LOC) • Post-architecture model – more accurate method, once the architecture has been designed (LOC)

  14. Intermediate COCOMO • Computes software development effort as a function of program size and a set of "cost drivers". • Product attributes • Required software reliability • Size of application database • Complexity of the product • Hardware attributes • Run-time performance constraints • Memory constraints

  15. Intermediate COCOMO • Personnel attributes • Analyst capability • Software engineering capability • Applications experience • Virtual machine experience • Programming language experience • Project attributes • Use of software tools • Application of software engineering methods • Required development schedule

  16. Intermediate COCOMO • Each of the 15 attributes receives a rating on a six-point scale that ranges from "very low" to "extra high" (in importance or value). An effort multiplier from Boehm's table applies to each rating. The product of all effort multipliers yields the effort adjustment factor (EAF). Typical values for EAF range from 0.9 to 1.4.
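The EAF is just a product of the multipliers for the 15 attributes (attributes rated "nominal" contribute 1.0). A sketch with invented multiplier values; the organic-mode constants a = 3.2, b = 1.05 are the usual intermediate-COCOMO values, but a real estimate should take every number from Boehm's published tables:

```python
from math import prod

# Illustrative multipliers for a few non-nominal attributes;
# the remaining attributes are assumed nominal (multiplier 1.0)
multipliers = {
    "RELY": 1.15,  # required reliability: high
    "CPLX": 1.30,  # product complexity: very high
    "ACAP": 0.86,  # analyst capability: high
    "TOOL": 1.10,  # use of software tools: low
}

eaf = prod(multipliers.values())

# Intermediate COCOMO effort in person-months, assuming an
# "organic" mode project: E = a * KLOC**b * EAF
kloc = 20
effort = 3.2 * kloc ** 1.05 * eaf
print(round(eaf, 2), round(effort, 1))
```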

  17. Example: Twitter repression report (use-case diagram). Actors: Repressed citizen, Concerned public. Use cases: UC#1: Report repression; UC#2: Clarify tweet; UC#3: View reports; UC#3a: View on map; UC#3b: View as RSS feed

  18. One possible architecture (diagram). Components: Twitter façade (talks to Twitter), Tweet processor, Geocoder façade (talks to Geocoder), Database (MySQL), Mapping web site (Apache+PHP, Google Maps), RSS web service

  19. Activity graph: shows dependencies of a project's activities (diagram). Activities: Do Twitter façade (1a), Design db (1b), Do geocode façade (1c), Do tweet processor (2), Test & debug components (3), Do map output, Do RSS output, Test & debug map (3a), Test & debug RSS (3b), Advertise (4). Milestone 2: DB contains real data. Milestone 3: DB contains real, reliable data. Milestone 4: Ready for public use.

  20. Activity graph: shows dependencies of a project’s activities • Filled circles for start and finish • One circle for each milestone • Labeled arrows indicate activities • What activity must be performed to get to a milestone? • Dashed arrows indicate “null” activities

  21. Effort • Ways to figure out effort for activities • Expert judgment • Records of similar tasks • Effort-estimation models • Any combination of the above

  22. Effort: expert judgment • Not a terrible way to make estimates, but… • Often vary widely • Often wrong • Can be improved through iteration & discussion • How long to do the following tasks: • Read tweets from Twitter via API? • Send tweets to Twitter via API? • Generate reports with Google maps?

  23. Effort: records of similar tasks • Personal software process (PSP) • Record the size of a component (lines of code) • Breakdown # of lines added, reused, modified, deleted • Record time taken • Breakdown planning, design, implement, test, … • Refer to this data when making future predictions • Can also be done at the team level
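A PSP-style log can be as simple as a list of records. A sketch (component names, sizes, and times are all made up) that uses past entries to predict a new task:

```python
# Minimal PSP-style log: per-component size and time per phase
log = [
    {"component": "twitter_facade", "loc_added": 220,
     "minutes": {"plan": 30, "design": 60, "implement": 240, "test": 90}},
    {"component": "geocoder_facade", "loc_added": 180,
     "minutes": {"plan": 25, "design": 45, "implement": 200, "test": 80}},
]

# Historical rate: total minutes over total lines of code added
total_minutes = sum(sum(entry["minutes"].values()) for entry in log)
total_loc = sum(entry["loc_added"] for entry in log)

# Predict effort for a new component estimated at ~150 lines
estimate = total_minutes * 150 / total_loc
print(round(estimate), "minutes")
```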

  24. Effort: estimation models • Algorithmic (e.g. COCOMO: constructive cost model) • Inputs = description of project + team • Outputs = estimate of effort required • Machine learning (e.g. CBR, case-based reasoning) • Gather descriptions of old projects + time taken • Run a program that creates a model → you now have a custom algorithmic method • Same inputs/outputs as an algorithmic estimation method

  25. Using COCOMO-like models • Assess the system’s complexity • Compute the # of application points • Assess the team’s productivity • Compute the effort

  26. Assessing complexity e.g.: A screen for editing the database involves 6 database tables and has 4 views. This would be a "medium complexity screen". This assessment calls for lots of judgment. (Pfleeger & Atlee)

  27. Computing application points (a.p.) e.g.: A medium complexity screen costs 2 application points. 3GL component = reusable programmatic component that you create Pfleeger & Atlee

  28. Assessing team capabilities e.g.: Productivity with low experience + nominal CASE… productivity = (7+13)/2 = 10 application points per person-month (assuming NO vacation or weekends!) (Pfleeger & Atlee)

  29. CASE (computer-aided software engineering) tools • CASE tools offer many benefits to developers building large-scale systems. • As spiraling user requirements continue to drive system complexity to new levels, CASE tools let engineers abstract away from the entanglement of source code, to a level where architecture & design become apparent and easier to understand and modify. • The larger a project, the more important it is to use a CASE tool in software development.

  30. CASE tools • As developers interact with portions of a system designed by their colleagues, they must quickly find the relevant subset of classes and methods and assimilate an understanding of how to interface with them. • In a similar sense, management must be able, in a timely fashion and from a high level, to look at a representation of a design and understand what's going on. Hence CASE tools are used.

  31. Identify screens, reports, components (architecture diagram as before). 3GL components: Tweet processor, Twitter façade, Geocoder façade. Reports: Mapping web site, RSS web service.

  32. Use complexity to compute application points • 3GL components (Tweet processor, Twitter façade, Geocoder façade): the simple model assumes all 3GL components are 10 application points each, so 3 × 10 = 30 a.p. • Reports (Mapping web site, RSS web service): each displays data from only a few database tables (3? 4?) and neither has multiple sections, so each is probably a "simple" report at 2 application points, so 2 × 2 = 4 a.p. • Total: 30 + 4 = 34 a.p.

  33. Assess the team's productivity & compute effort • Assume at your company the team has… • Extensive experience with websites, XML • But no experience with Twitter or geocoders • Since 30 of the 34 a.p. are for this new technology, assume very low experience • Virtually no CASE support… very low • Therefore look up the team's productivity, in application points per person-month, and divide it into the total application points to get the effort in person-months • Note: this assumes no vacation or weekends
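Putting the pieces together: with the 34 application points from slide 32 and, say, the example productivity of 10 a.p. per person-month computed on slide 28, the effort estimate is a single division:

```python
application_points = 34        # from the complexity assessment (slide 32)
productivity = (7 + 13) / 2    # a.p. per person-month (slide 28 example)

effort = application_points / productivity
print(effort, "person-months")  # ignores vacation and weekends
```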

  34. Distribute the person-months over the activity graph (diagram): Design db (0.25), Do Twitter façade (1.25), Do tweet processor (1.00), Do geocode façade (1.25), Test & debug components (3.75), Do map output (0.25), Do RSS output (0.25), Test & debug map (0.25), Test & debug RSS (0.25), Advertise (1.0?)

  35. The magic behind distributing person-months • Divide person-months between implementation and other activities (design, testing, debugging) • Oops, forgot to include an activity for testing and debugging the components… revise the activity graph • Notice that some activities aren't covered • E.g. advertising: either remove it from the diagram or use other methods of estimation

  36. Do you believe those numbers? • Ways to get more accurate numbers: • Revise numbers based on expert judgment or other methods mentioned. • Perform a “spike”… try something out and actually see how long it takes • Use more sophisticated models to analyze how long components will really take • Use several models and compare • Expect to revise estimates as project proceeds

  37. Further analysis may give revised estimates… (revised activity graph): Design db (0.25), Do Twitter façade (1.50), Do tweet processor (0.50), Do geocode façade (0.75), Test & debug components (4.25), Do map output (0.50), Do RSS output (0.25), Test & debug map (0.25), Test & debug RSS (0.25)

  38. Critical path: longest route through the activity graph • Sort all the milestones in “topological order” • i.e.: sort milestones in terms of dependencies • For each milestone (in order), compute the earliest that the milestone can be reached from its immediate dependencies
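The procedure above can be sketched in a few lines of Python. The graph structure and durations below are an assumed reading of the revised activity graph (slide 37), treating person-months as elapsed time for illustration:

```python
# milestone -> list of (successor milestone, activity duration)
edges = {
    "start": [("1a", 1.50),   # Do Twitter façade
              ("1b", 0.25),   # Design db
              ("1c", 0.75)],  # Do geocode façade
    "1a": [("2", 0.50)],      # Do tweet processor
    "1b": [("2", 0.0)],       # "null" activity
    "1c": [("2", 0.0)],       # "null" activity
    "2":  [("3", 4.25)],      # Test & debug components
    "3":  [("3a", 0.50),      # Do map output
           ("3b", 0.25)],     # Do RSS output
    "3a": [("finish", 0.25)], # Test & debug map
    "3b": [("finish", 0.25)], # Test & debug RSS
}

# Milestones listed in topological (dependency) order
order = ["start", "1a", "1b", "1c", "2", "3", "3a", "3b", "finish"]

# Earliest time each milestone can be reached: the max over its
# immediate predecessors of (predecessor time + activity duration)
earliest = {m: 0.0 for m in order}
for m in order:
    for succ, dur in edges.get(m, []):
        earliest[succ] = max(earliest[succ], earliest[m] + dur)

print(earliest["finish"])  # length of the critical path: 7.0
```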

  39. Example: computing critical path (diagram, using the revised estimates). Earliest milestone times: 1a = 1.50 (Do Twitter façade, 1.50), 2 = 2.00 (Do tweet processor, 0.50), 3 = 6.25 (Test & debug components, 4.25), 3a = 6.75 (Do map output, 0.50), 3b = 6.50 (Do RSS output, 0.25); the project finishes at 7.00 via Test & debug map (0.25), which therefore lies on the critical path.

  40. Example: tightening the critical path (diagram). What if we get started on the reports as soon as we have a (buggy) version of the database and components? Do map output (done at 2.50) and Do RSS output (done at 2.25) then run in parallel with Test & debug components (4.25), pulling the report work off the critical path.

  41. Gantt Chart • Shows activities on a calendar • Useful for visualizing ordering of tasks & slack • Useful for deciding how many people to hire • One bar per activity • Arrows show dependencies between activities • Milestones appear as diamonds

  42. Example Gantt chart (diagram). The Gantt chart quickly reveals that we only need to hire two people (blue & green).

  43. Two ways of scheduling • Scheduling with a set of requirements and an architecture in hand? • In contrast, assume that you are scheduling before you have requirements and an architecture. How different would that be? • What are the pros and cons of each approach?

  44. What's next for you? • Updated vision statement • Your chance for extra credit!! • Thursday presentation: each team is given 15 minutes to present how their vision has become clearer over this time (PowerPoint presentation) • You can include your requirements gathering, constraints, and other details of your work so far. • What are your future plans? • You will receive your midterms back tomorrow.
