
Measuring and Managing Performance in Organizations


Presentation Transcript


  1. Measuring and Managing Performance in Organizations EECS 811: Software Project Management Presenter: Molly Coplen March 3, 2009

  2. Contents • Introduction • terminology • why we measure activity, why we care • Types of measures • Basic problems with measuring activity • Economic models and motivation theory • Software metrics • Case-Study • Conclusion - lessons learned

  3. Management and Measurement • Why do we care? • each organization has a purpose or a goal • we measure <parameter> in hopes of improving <goal>.

  4. Terminology A measure is a way to appraise or determine by comparing to a standard or unit of measurement. The act or process of measuring is referred to as a measurement. A metric is a quantitative measure of the degree to which a component, system, or process possesses a given characteristic or attribute.

  5. Why We Measure Activity • Focus workers’ attention on the task for a desired outcome (motivate workers). • Provide an objective basis for decision making – “You can’t control what you can’t measure.” Tom DeMarco • resource allocation • hold workers accountable for performance.

  6. Contents • Introduction • Types of measures • motivational • informational • Basic problems with measuring activity • Economic models and motivation theory • Software metrics • Conclusion

  7. Types of Measures Motivational measurements - intended to affect the people being measured, to provoke greater expenditure of effort in pursuit of organizational goals. Informational measurements – valued for the logistical, statistical and research information they convey, which allows better short-term management and long-term improvement of organizational processes.

  8. Informational Measurement Process refinement measurement – provides information that yields a detailed understanding of a process, which is useful in redesigning the process. Coordination measure – provides information that allows short-term (real-time) management of organizational flows and schedules.

  9. Informational Measures • More accurate information, advanced warning of upcoming problems. • Dysfunction can arise when rewards are tied to performance. • Aggregating measures makes motivational measurement impossible. • Self-assessment tool, not diagnostic/ analysis tool. • Works well when workers are internally motivated.

  10. Informational Measures – E. Nichols and G. Peterson (2007) • Design-time metrics – identify weaknesses early in the application’s life cycle • Deployment-time metrics – identify the amount of change, uncover patterns over time, and establish baselines for anomaly detection • Run-time metrics – determine the application’s behavior in the production environment

  11. Contents • Introduction • Types of measures • Basic problems with measuring activity • economic costs • difficult to measure some things • measurement dysfunction • Economic models and motivation theory • Software metrics • Case-study • Conclusion – lessons learned

  12. Problems with Measurement Programs • Costs • establishing/maintaining measurement systems • workers take short-cuts which impact the quality, require re-work • Some activities are difficult to measure (i.e., are we using the correct measure?) • Potential for dysfunction

  13. Metrics cost a ton of money. It costs a lot to collect them badly and a lot more to collect them well. Sure, measurement costs money, but it does have the potential to help us work more effectively. At its best, the use of software metrics can inform and guide developers, and help organizations to improve. At its worst, it can do actual harm. And there is an entire range between the two extremes, varying all the way from function to dysfunction. – Tom DeMarco

  14. Measurement Costs - Factors • Repetitiveness of task • offer more opportunities for observing all phases of agent’s work • permit use of statistical sampling techniques • inexpensive measurement enabling full supervision • Complexity of task • overtaxes principal’s cognitive capabilities • Newness of task • principal more likely to be familiar with older tasks • older tasks provide more measurement opportunity over time

  15. Measurement Costs - Factors • Specialized knowledge required by the task • difficult/expensive to find someone qualified to do measurement & interpret results • typically include tasks that are pushing technology beyond boundary • Interdependence and separability of effort • tasks of numerous workers tightly coupled • all have ability to hinder others tasks • impossible to distinguish how much effort each member is contributing

  16. Measurement Dysfunction “Consequences of organizational actions that interfere with the spirit of the stated intentions of the organization.” [Chart: “Measurement indicators” and “True performance” plotted as level of performance over time.]
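The dysfunction chart on this slide can be sketched numerically. Below is a minimal, illustrative simulation (the linear drift model and its rates are assumptions for illustration, not from the slides): an agent splits a fixed effort budget between a measured task and an unmeasured one, and shifts effort toward the measured task each period once rewards are tied to the indicator.

```python
# Illustrative sketch (not from the slides): the indicator rises while
# true performance falls once rewards are tied to a measure that covers
# only one of two tasks the job actually requires.

def simulate(periods=10, shift_per_period=0.08):
    measured_effort = 0.5  # share of effort on the measured task
    history = []
    for t in range(periods):
        indicator = measured_effort  # what the principal sees
        # true output needs both tasks; neglecting either hurts it
        true_value = min(measured_effort, 1 - measured_effort) * 2
        history.append((t, round(indicator, 2), round(true_value, 2)))
        # the rational agent drifts effort toward what is rewarded
        measured_effort = min(1.0, measured_effort + shift_per_period)
    return history

for t, indicator, true_value in simulate():
    print(t, indicator, true_value)
```

Run alone, this prints the two diverging curves of the slide's chart: the indicator climbs from 0.5 toward 1.0 while true performance decays from 1.0 toward 0.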

  17. Characteristics of Measurement Dysfunction • We measure too few things or the wrong things – we reward “A” while hoping for “B”. • key measures do not provide a clear account of performance • measured standards interfere with achievement of the intended effect • employee rewards not associated with performance of the organization

  18. Problems with Measurement – Dekkers, C. and McQuaid, P. (2002) • Basic misunderstanding of what is being measured • Incorrect use of measurement data – leads to additional costs (morale, quality, lost revenues) • Disregard for the human factor – how the cultural change of taking measurements will affect people – and the knowledge intensity of software development

  19. Problems with Measurement – Ordonez, M. and Haddad, H. (2008) • Upfront commitment to gather necessary data for useful metrics, including time, cost, and resource factors • Developers reluctant to collect and archive data that could be (mis)used against the developer and project stakeholders

  20. Contents • Introduction • Types of measures • Basic problems with measuring activity • Economic models and motivation theory • internal and external motivation • theory X and theory Y • R-H and R-M models • Software metrics • Case-study • Conclusion – lessons learned

  21. Motivation External motivation – a tendency to react in response to promises or rewards for performance according to mutually known criteria set in advance Internal motivation – a tendency to act as compelled by private expectations of performance based on own personal criteria

  22. Theory X Managers believe people: • Dislike work and attempt to avoid it • Must be coerced, controlled, directed, threatened • Inherently unambitious and want to avoid responsibility • External motivation is a stronger and more reliable way of motivating employees • http://www.youtube.com/watch?v=nXN5s8fQvHY

  23. Theory Y Managers believe people: • Expenditure of effort as natural as rest or play • Exercise self-direction and self-control • Seek responsibility, exercise responsibility in solving problems Managers concerned with inspiring internal motivation and clearly communicating direction to employees

  24. Internal and External Motivation [figure not captured in the transcript]

  25. Ross – Holmstrom (R-H) Model • Principal • manager/supervisor • motivated by profits • Agent • worker • motivated by fondness for money, dislikes work • possible internal motivation • effort gets more expensive as its level of activity (output) increases

  26. Ross – Holmstrom (R-H) Model • Principal cannot determine the value of the agent’s production directly • Use measurements as a proxy of value • Agent & principal have opposite interests • principal wants maximum value (effort) for the least cost • agent wants maximum value (money) for minimum effort

  27. Ross – Holmstrom (R-H) Model • The optimal payment schedule –payment to the agent increases as revenue increases • justification for merit-pay, pay-for-performance • Assumptions valid? • employees care about more than making money and avoiding work • managers care about more than profit • effort/mix problem
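The agency intuition of slides 25–27 can be made concrete with a toy calculation. The functional forms below (linear pay, quadratic effort cost) are illustrative assumptions, not the model's actual specification:

```python
# Toy principal-agent sketch (functional forms are illustrative assumptions):
# the agent picks effort e to maximize pay minus effort cost, where
#   pay(e)  = alpha + beta * e       (linear pay-for-performance)
#   cost(e) = (k / 2) * e ** 2       (effort grows more expensive with output)

def optimal_effort(beta: float, k: float) -> float:
    """Maximize U(e) = alpha + beta*e - (k/2)*e**2.
    Setting dU/de = beta - k*e = 0 gives e* = beta / k."""
    return beta / k

# A steeper pay-for-performance slope (beta) induces more effort, matching
# the "optimal payment schedule" bullet: payment rising with revenue.
print(optimal_effort(beta=2.0, k=1.0))  # 2.0
print(optimal_effort(beta=1.0, k=1.0))  # 1.0
```

The slide's caveat still applies: this only holds if agents care solely about money and effort, which the following slides question.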

  28. Holmstrom and Milgrom • Effort must be devoted to each task • Internal motivation • finds pleasure in providing value to the customer • might agents be persuaded to enjoy doing more work that is beneficial to the principal without monetary reward? • work towards goals of the company by evoking feelings of identification, belonging

  29. Holmstrom and Milgrom • Contributions • abandoned the assumption that what is measured is a perfect indicator of the true value produced by the agent • true output depends jointly on effort devoted to all tasks • value is provided in a non-additive manner as agents devote effort to tasks • use multiple measures – have all key areas been measured?

  30. Designing Measurement Systems • Comprehensiveness is desirable • must measure all critical dimensions • essential to prevent dysfunction • difficult to verify performance of knowledge workers • quality is notoriously difficult to measure • Motivational uses of measurement • agent cannot be prevented from reacting to the measure • results in competitive dynamics that distort the measure • causes warning signs to be concealed • extremely difficult to achieve informational-only use

  31. Designing Measurement Systems • Use measurements for information only • convince agents that the measure is informational only • promote people over longer time horizons • no sudden rising stars • reduces competition among employees • aggregation of measurement • individuals see measures at their level or higher • need procedural methods to enforce aggregation • gives managers a self-assessment tool • managerial job shifts from evaluation to inspiration and assistance to subordinates

  32. How to Make Jobs More Measurable • Standardization • almost all processes repetitive at some level • identify phases and establish standard methods of execution • Specification • construct detailed model of the process • standardize each step in a process • identify variances from specification

  33. How to Make Jobs More Measurable • Subdivision • decompose jobs into subtasks and group similar tasks • grouping makes repetition & self-similarity visible, making measurement easier • people doing similar activities can be assigned overseers who have the same specialized knowledge as the workers

  34. Delegatory Management • Promote organizational intimacy • make hierarchy flatter & work groups smaller • reconfigure work spaces to promote casual contacts between managers and workers in same team • Promote trust • facilitate mutual trust - employer and employees • investments that suggest permanence of employee base such as investing heavily in training

  35. Delegatory Management • Provide better leadership • skilful expression of vision • management styles that emphasize inspiring and trusting employees rather than analyzing and coercing

  36. Internal Motivation vs External Reward • Offer of an external reward for that which would be provided through internal motivation may be insulting and lower internal motivation • “It’s demeaning... People are motivated by intrinsics and extrinsics and the intrinsics are much more powerful – the pride, the workmanship, the enjoyment of doing things with their colleagues....But the extrinsics shove the intrinsics aside. Say you give me an extrinsic reason to do this. I lose track of whether I’m having a good time working with my colleagues on this goal.” – DeMarco on performance measurement

  37. Contents • Introduction • Types of measures • Basic problems with measuring activity • Economic models and motivation theory • Software metrics • common software metrics • factors to consider when choosing a metric • characteristics of successful programs • human factor – the programmers • Case-study • Conclusion – lessons learned

  38. What is a Software Metric? A software metric is defined as a method of quantitatively determining the extent to which a software process, product, or project possesses a certain attribute. (Daskalantonakis, 1992) Rationale – a metrics program should lead the software organization to more disciplined processes through an efficient feedback mechanism. (Gopal, Krishnan, Mukhopadhyay, and Goldenson, 2002)

  39. Common Software Metrics (1/2) [table not captured in the transcript]

  40. Common Software Metrics (2/2) [table not captured in the transcript]

  41. Common Software Metrics – List of Project Metrics (1/2) • Order of growth • Lines of code • Cyclomatic complexity • Function points • Code coverage • Coupling • Cohesion • Requirements size • Application size • Cost • Schedule • Productivity • Number of software developers

  42. Common Software Metrics – List of Product Metrics (2/2) • Specification quality metrics • System size metrics • Architectural metrics • Length metrics • Complexity metrics • Testing effectiveness
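Several metrics in these lists are directly computable. As a hedged sketch, the fragment below approximates McCabe cyclomatic complexity for Python code using only the standard-library ast module (counting conventions vary between tools; this follows the common "1 + number of decision points" rule):

```python
# Hedged sketch: approximate cyclomatic complexity by counting branch
# points in the abstract syntax tree. Real tools refine these rules.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Return 1 + the number of decision points in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES)
                   for node in ast.walk(tree))

src = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(src))  # 3: one base path + two if-branches
```

This illustrates the deck's earlier warning in miniature: the number is cheap to collect, but deciding what it means (and what behavior rewarding it would provoke) is the hard part.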

  43. Cem Kaner’s Factors to Consider When Choosing a Metric (1/2) • What is the purpose of the metric? • What is the scope of the measure? • What attribute are you trying to measure? • What is the attribute’s natural scale? • What is the attribute’s natural variability? • What instrument are you using to measure the attribute, and what reading do you take from the instrument?

  44. Cem Kaner’s Factors to Consider When Choosing a Metric (2/2) • What is the instrument’s natural scale? • What is the reading’s natural variability (measurement error)? • What is the attribute’s relationship to the instrument? • What are the natural and foreseeable side effects of using this instrument? - Cem Kaner (2002) as cited by Dekkers, C.A. and McQuaid, P.A. (2002)

  45. Basic Guideline – Selecting a Metric – Dekkers and McQuaid • Research and analyze your organization’s measurement goals. • Decide what questions will tell you whether you are progressing towards the goals. • Select the appropriate measures to answer the questions. • Recognize intentional and unintentional side effects of the chosen measures and metrics.

  46. Goal Question Metric (GQM) – Victor Basili and David Weiss • Goal – defines the objectives (scope and purpose) to be addressed by measurement • Question – defines what quantifiable information is required to determine whether there is progress toward the goal(s) • Metric – defines the particular attributes, definitions, observation (collection) frequency for measurement data, measurement theory, statistical design, and applicability of measurement data.
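To illustrate the slide's structure, a GQM tree can be written down as plain data. The goal, questions, and metrics below are hypothetical examples, not drawn from Basili and Weiss:

```python
# Hedged sketch of a GQM tree as a plain data structure. The specific
# goal, questions, and metrics here are invented for illustration.
gqm = {
    "goal": "Improve delivered defect density of the billing module",
    "questions": [
        {
            "question": "How many defects escape to production per release?",
            "metrics": ["post-release defect count", "defects per KLOC"],
        },
        {
            "question": "Where are defects introduced?",
            "metrics": ["defects by phase of injection"],
        },
    ],
}

# The discipline GQM enforces: every metric is collected only because it
# answers a question, and every question exists only to probe the goal.
for q in gqm["questions"]:
    assert q["metrics"], "each question must map to at least one metric"
```

Reading the tree bottom-up also gives a cheap sanity check when pruning a metrics program: a metric with no parent question is a candidate for removal.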

  47. Characteristics of Highly Successful Measurement Programs (1/4) • Set solid measurement objectives and plans • aim to achieve firm objectives • implement as a development project • Make measurement part of the process • measure key success factors • integrate data collection

  48. Characteristics of Highly Successful Measurement Programs (2/4) • Gain a thorough understanding of measurement – use metrics to: • implement corrective actions • measure requirements, not individual performance • Focus on cultural issues • involve developers in the measurement program’s development • recognize that cultural change affects how people view their work

  49. Characteristics of Highly Successful Measurement Programs (3/4) • Create a safe environment to collect and report true data. • “..understand the data that your people take pride in reporting. Don’t ever use it against them. Don’t even hint that you might.” Robert Grady – Hewlett Packard • give people access to the measurement data • collect data using consistent measurement definitions

  50. Characteristics of Highly Successful Measurement Programs (4/4) • Cultivate a predisposition to change • respond to the measurement data • Develop a complementary suite (dashboard) of measures -- Dekkers, C.A. and McQuaid, P.A. (2002)
