
Chidamber & Kemerer Suite of Metrics




1. Chidamber & Kemerer Suite of Metrics
Camargo Cruz Ana Erika
Supervisor: Ochimizu Koichiro
May 2008
Japan Advanced Institute of Science and Technology, School of Information Science

2. CK Metrics: Outline
• Objective
• Definition & Guidelines
• Thresholds
• CK in the literature (other uses)

3. CK Metrics: Objective
CK metrics were designed [1]:
• To measure unique aspects of the OO approach
• To measure the complexity of the design
• To improve the development of the software
• HOW?

4. CK Metrics: Objective - SW Development Improvement
Managers can improve the development of the software by:
• Analysing CK metrics through the identification of outlying values (extreme deviations), which may be a signal of:
• high complexity and/or
• possible design violations
• Taking managerial decisions, such as re-designing and/or assigning extra or higher-skilled resources (to develop, test and maintain the software).

5. CK Metrics: Definition - WMC (Weighted Methods per Class)
• Definition
WMC is the sum of the complexities of the methods of a class (sketch below).
WMC = Number of Methods (NOM) when every method's complexity is taken as unity.
• Viewpoints
WMC is a predictor of how much TIME and EFFORT is required to develop and maintain the class.
The larger the NOM, the greater the impact on children.
Classes with a large NOM are likely to be more application-specific, limiting the possibility of RE-USE and making the EFFORT expended a one-shot investment.
• Objective: Low
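A minimal Python sketch of the unit-complexity case (so WMC = NOM), counting method definitions with the standard ast module; a real tool would weight each method by something like its cyclomatic complexity:

```python
import ast

def wmc_unit(source: str) -> dict[str, int]:
    """WMC per class when every method's complexity is unity (WMC = NOM)."""
    tree = ast.parse(source)
    return {
        node.name: sum(isinstance(n, ast.FunctionDef) for n in node.body)
        for node in ast.walk(tree)
        if isinstance(node, ast.ClassDef)
    }

print(wmc_unit("class A:\n    def f(self): pass\n    def g(self): pass"))
# {'A': 2}
```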

6. CK Metrics: Definition - DIT (Depth of Inheritance Tree)
• Definition
The maximum length from the node to the root of the tree (sketch below).
• Viewpoints
The greater the value of DIT:
• the greater the NOM the class is likely to inherit, making its behaviour more COMPLEX to predict;
• the greater the potential RE-USE of inherited methods.
Small values of DIT in most of the system's classes may be an indicator that designers are forsaking RE-USABILITY for simplicity of UNDERSTANDING.
• Objective: Trade-off
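An illustrative Python sketch that walks __bases__ recursively; note it counts Python's implicit object root as part of the tree, which is one possible convention:

```python
def dit(cls: type) -> int:
    """Depth of Inheritance Tree: longest path from cls up to the root."""
    if not cls.__bases__:          # reached the root (object)
        return 0
    return 1 + max(dit(base) for base in cls.__bases__)

class A: pass        # object -> A
class B(A): pass     # object -> A -> B

print(dit(A), dit(B))  # 1 2
```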

7. CK Metrics: Definition - NOC (Number of Children)
• Definition
The number of immediate subclasses subordinated to a class in the class hierarchy (sketch below).
• Viewpoints
The greater the NOC:
• the greater the RE-USE;
• the greater the probability of improper abstraction of the parent class;
• the greater the requirements for TESTING the methods of that class.
Small values of NOC may be an indicator of a lack of communication between different class designers.
• Objective: Trade-off
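In Python, the immediate subclasses of a loaded class can be counted directly via the built-in __subclasses__() hook; the Shape hierarchy here is a hypothetical example:

```python
class Shape: pass
class Circle(Shape): pass
class Square(Shape): pass

# NOC = number of immediate subclasses subordinated to the class
print(len(Shape.__subclasses__()))   # 2
print(len(Circle.__subclasses__()))  # 0
```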

8. CK Metrics: Definition - CBO (Coupling Between Objects)
• Definition
A count of the number of other classes to which a class is coupled (sketch below).
• Viewpoints
Small values of CBO:
• improve MODULARITY and promote ENCAPSULATION;
• indicate independence of the class, making its RE-USE easier;
• make the class easier to MAINTAIN and to TEST.
• Objective: Low
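A crude static sketch in Python: it treats any mention of another class's name inside a class body (bases, instantiations, references) as coupling. Real CBO tools resolve types properly; this is only an approximation for illustration:

```python
import ast

def cbo(source: str) -> dict[str, int]:
    """Approximate CBO: distinct *other* classes named inside each class body."""
    tree = ast.parse(source)
    class_names = {n.name for n in ast.walk(tree) if isinstance(n, ast.ClassDef)}
    result = {}
    for cls in ast.walk(tree):
        if isinstance(cls, ast.ClassDef):
            used = {n.id for n in ast.walk(cls)
                    if isinstance(n, ast.Name) and n.id in class_names}
            result[cls.name] = len(used - {cls.name})
    return result

src = """
class Engine: pass
class Wheel: pass
class Car:
    def build(self):
        self.engine = Engine()
        self.wheels = [Wheel() for _ in range(4)]
"""
print(cbo(src))  # {'Engine': 0, 'Wheel': 0, 'Car': 2}
```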

9. CK Metrics: Definition - RFC (Response For a Class)
• Definition
The number of methods of the class plus the number of methods called by any of those methods (sketch below).
• Viewpoints
If a large number of methods can be invoked from a class (RFC is high):
• TESTING and MAINTENANCE of the class become more COMPLEX.
• Objective: Low
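An illustrative Python sketch: RFC is taken as the class's own methods plus the distinct method names invoked inside them. Matching calls by name alone over-approximates the true response set, so treat it as a sketch rather than a faithful implementation:

```python
import ast

def rfc(source: str, class_name: str) -> int:
    """Methods of the class plus distinct methods they call (matched by name)."""
    tree = ast.parse(source)
    cls = next(n for n in ast.walk(tree)
               if isinstance(n, ast.ClassDef) and n.name == class_name)
    methods = {n.name for n in cls.body if isinstance(n, ast.FunctionDef)}
    called = set()
    for node in ast.walk(cls):
        if isinstance(node, ast.Call):
            f = node.func
            if isinstance(f, ast.Attribute):
                called.add(f.attr)   # e.g. self.log(...) or other.save(...)
            elif isinstance(f, ast.Name):
                called.add(f.id)     # plain function call, e.g. print(...)
    return len(methods | called)

src = """
class Account:
    def deposit(self, n):
        self.log(n)
    def log(self, n):
        print(n)
"""
print(rfc(src, "Account"))  # {deposit, log} U {log, print} -> 3
```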

10. CK Metrics: Definition - LCOM (Lack of Cohesion of Methods)
• Definition
Measures the dissimilarity of methods in a class via instance variables (sketch below).
• Viewpoints
Large values of LCOM:
• increase COMPLEXITY;
• work against ENCAPSULATION and imply that the class should probably be split into two or more subclasses;
• help to identify low-quality design.
• Objective: Low
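A minimal sketch of CK's pair-counting rule, assuming the set of instance variables each method uses is already known (the usage map below is hypothetical): P is the number of method pairs sharing no instance variable, Q the number sharing at least one, and LCOM = P - Q when positive, else 0.

```python
from itertools import combinations

def lcom(method_vars: dict[str, set[str]]) -> int:
    """LCOM = max(P - Q, 0): P = disjoint method pairs, Q = overlapping pairs."""
    p = q = 0
    for a, b in combinations(method_vars.values(), 2):
        if a & b:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical class: which instance variables does each method touch?
usage = {"open": {"path", "mode"}, "read": {"path"}, "render": {"canvas"}}
print(lcom(usage))  # P = 2 (open/render, read/render), Q = 1 (open/read) -> 1
```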

11. CK Metrics: Guidelines
But how much is low, and how much is high?

12. CK Metrics: Thresholds
Thresholds of the CK metrics [2,3,4]:
• cannot be determined before their use;
• should be derived and used locally for each dataset;
• the 80th and 20th percentiles of the distributions can be used to determine high and low values of the metrics (see the sketch below);
• are not indicators of "badness" but indicators of difference that needs to be investigated.
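A minimal sketch of the percentile rule using Python's statistics module, applied to hypothetical WMC values from a single project:

```python
from statistics import quantiles

# Hypothetical WMC values collected from one project's classes
wmc_values = [3, 5, 6, 7, 8, 9, 11, 12, 15, 34]

deciles = quantiles(wmc_values, n=10)   # 9 cut points
low, high = deciles[1], deciles[7]      # 20th and 80th percentiles
print(f"low < {low}, high > {high}")    # low < 5.2, high > 14.4 here

# Outlying classes worth investigating (not necessarily "bad")
print([v for v in wmc_values if v > high])  # [15, 34]
```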

13. CK in the Literature - CK Metrics & Other Managerial Performance Indicators
Chidamber & Kemerer studied the relation of CK metrics with [2]:
• Productivity: SIZE [LOC] / EFFORT of development [hours]
• Rework effort for re-using classes
• Effort to specify the high-level design of classes

14. CK in the Literature - CK Metrics & Maintenance Effort
Li and Henry (1993) use CK metrics (among others) to predict [5]:
• Maintenance effort, measured as the number of lines changed in a class over the three years during which they collected the measurements.

15. CK in the Literature - DIT & Maintenance Effort
Daly et al. (1996) conclude in their study [5]:
• that subjects maintaining OO software with three levels of inheritance depth performed maintenance tasks significantly more quickly than those maintaining equivalent OO software with no inheritance.

16. CK in the Literature - DIT & Maintenance Effort
However, Harrison et al. (2000) used the DIT metric to demonstrate [5]:
• that systems without inheritance are easier to understand and modify than systems with three or five levels of inheritance.

17. CK in the Literature - DIT & Maintenance Effort
Poels (2001) uses the DIT metric and demonstrates [5]:
• that the extensive use of inheritance leads to models that are more difficult to modify.

18. CK in the Literature - DIT & Maintenance Effort
Prechelt (2003) concludes [5]:
• that programs with less inheritance were faster to maintain, and
• that code maintenance effort is hardly correlated with inheritance depth; it depends instead on other factors such as the number of relevant methods.

19. CK in the Literature - CK Metrics & Fault-Proneness Prediction
[Comparison table of fault-proneness prediction studies (cf. [6-10]) not transcribed.]
Legend: CK = Chidamber & Kemerer; QMOOD = Quality Metrics for Object-Oriented Design

20. Conclusion
• CK metrics measure the complexity of the design.
• There are no pre-defined thresholds for the CK metrics; however, they can be analysed by identifying outlying values.
• CK metrics (when measured from the code) have been related to: fault-proneness, productivity, rework effort, design effort and maintenance effort.

21. References
[1] Chidamber Shyam, Kemerer Chris, "A Metrics Suite for Object Oriented Design", IEEE Transactions on Software Engineering, June 1994.
[2] Chidamber Shyam, Kemerer Chris, Darcy David, "Managerial Use of Metrics for Object-Oriented Software: An Exploratory Analysis", IEEE Transactions on Software Engineering, August 1998.
[3] Linda Rosenberg, "Applying and Interpreting Object Oriented Metrics", Software Assurance Technology Conference, Utah, 1998.
[4] Stephen H. Kan, "Metrics and Models in Software Quality Engineering", Addison-Wesley, 2003.
[5] Genero Marcela, Piattini Mario, Calero Coral, "A Survey of Metrics for UML Class Diagrams", Journal of Object Technology, Nov.-Dec. 2005.

22. References
[6] Victor R. Basili, Lionel C. Briand, Walcelio L. Melo, "A Validation of Object-Oriented Design Metrics as Quality Indicators", IEEE Transactions on Software Engineering, October 1996.
[7] Lionel C. Briand, Jurgen Wust, John W. Daly, D. Victor Porter, "Exploring the Relationships Between Design Measures and Software Quality in Object-Oriented Systems", Journal of Systems and Software, 2000.
[8] Kanmani S., Uthariaraj V. Rymend, "Object Oriented Software Quality Prediction Using General Regression Neural Networks", SIGSOFT Software Engineering Notes, 2004.
[9] Nachiappan Nagappan, Laurie Williams, "Early Estimation of Software Quality Using In-Process Testing Metrics: A Controlled Case Study", Proceedings of the Third Workshop on Software Quality, St. Louis, Missouri, USA, 2005.
[10] Hector M. Olague, Sampson Gholston, Stephen Quattlebaum, "Empirical Validation of Three Software Metrics Suites to Predict Fault-Proneness of Object-Oriented Classes Developed Using Highly Iterative or Agile Software Development Processes", IEEE Transactions on Software Engineering, 2007.
