
Report of results of technical session 2: The ETICS build process and metrics collection

This report discusses the current experience with the ETICS build process and metrics collection in Grid projects. It addresses the importance of testing, the use of tools and methodologies, guidelines and best practices, test plans, unit tests, integration testing, collecting test results, and metrics such as lines of code, code fragility, complexity, and code coverage. The report also mentions available tools and timelines for implementation.


Presentation Transcript


  1. Report of results of technical session 2: The ETICS build process and metrics collection

  2. Current experience in Grid Projects • Developers sometimes don't appreciate the importance of testing from the beginning of a project • Test plans, unit tests and similar practices are often missing • Some developers use their own tools and methodologies for testing (unit testing) • Many free tools are available, but a lot of them are buggy • We need to provide guidelines, good examples and best practices (a unit-test sketch follows below)
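
As an illustration of the kind of good example such guidelines could include, here is a minimal JUnit unit test. The Calculator class and its add method are hypothetical stand-ins for real project code, and the annotation style is just one option a project could standardize on:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical class under test: a trivial Calculator with an add(int, int) method.
    public class CalculatorTest {

        @Test
        public void addReturnsSumOfOperands() {
            Calculator calc = new Calculator();
            assertEquals(5, calc.add(2, 3));
        }

        @Test
        public void addHandlesNegativeOperands() {
            Calculator calc = new Calculator();
            assertEquals(-1, calc.add(2, -3));
        }
    }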

  3. Current ETICS Build Process • ETICS provides templates, examples and guidelines • The build flow runs init -> checkstyle -> compile -> test, driven by the test plan • checkstyle: Sun Java conventions (Checkstyle, Jalopy) plus user-defined rules; the same approach applies to other languages; enforce the rules and report violations (change the code?) • compile: JavaCov adds probes, compiles code and test code, and runs the unit tests (JUnit, GJTester); instrumentation can be enabled/disabled, since instrumented code runs slower • test: test cases (static analysis, dependency analysis); give each project the possibility of expressing its own acceptance criteria • Beyond unit tests: integration testing (EBIT), system integration testing (mock objects)
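
To make the "adds probes" step concrete, the following is a rough conceptual sketch of what coverage instrumentation such as JavaCov does to a method. The CoverageRegistry class and its hit method are invented names for illustration, not JavaCov's actual API:

    // Original method, as written by the developer.
    int max(int a, int b) {
        if (a > b) {
            return a;
        }
        return b;
    }

    // Conceptually equivalent instrumented version: probes record which
    // paths were executed, at the cost of runtime overhead (hence the
    // enable/disable switch in the build).
    int max(int a, int b) {
        CoverageRegistry.hit("max", 1);       // probe: method entered
        if (a > b) {
            CoverageRegistry.hit("max", 2);   // probe: then-branch taken
            return a;
        }
        CoverageRegistry.hit("max", 3);       // probe: fall-through path taken
        return b;
    }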

  4. Collecting test results • We need a common schema to express the results • Result converters provided by ETICS for a few common tools • Developers will provide other converters • The results will be collected and stored in a repository for later processing
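
As an example of what such a converter might look like, here is a minimal sketch that reads a standard JUnit XML report and prints one normalized record per test case. The JUnit attribute names (name, classname, nested failure/error elements) are real, but the "tool|suite|test|status" output format is invented for illustration, since the common schema was still to be defined:

    import java.io.File;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    import org.w3c.dom.NodeList;

    // Converts a JUnit XML report into a hypothetical common result schema:
    // one "tool|suite|test|status" line per test case.
    public class JUnitResultConverter {

        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new File(args[0]));

            // The report root is a <testsuite> element with a name attribute.
            String suite = doc.getDocumentElement().getAttribute("name");
            NodeList cases = doc.getElementsByTagName("testcase");

            for (int i = 0; i < cases.getLength(); i++) {
                Element tc = (Element) cases.item(i);
                // A test case failed or errored if it has a nested
                // <failure> or <error> element; otherwise it passed.
                boolean failed = tc.getElementsByTagName("failure").getLength() > 0
                              || tc.getElementsByTagName("error").getLength() > 0;
                System.out.println("junit|" + suite + "|"
                        + tc.getAttribute("classname") + "."
                        + tc.getAttribute("name") + "|"
                        + (failed ? "FAIL" : "PASS"));
            }
        }
    }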

  5. Metrics (1) • Lines of code, defects per line of code (interactions with bug-tracking systems?) • Code fragility/robustness (number of up-level/down-level dependencies) • Number of external dependencies • Complexity (per package, per class) • Check CMMI requirements • Historical reporting on data, trend analysis • Possible range of values -> recommended values depending on typical scenarios • Compare projects based on metrics, benchmarking, rank projects
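
As a sketch of how one of these metrics could be computed, the following approximates cyclomatic complexity for a method body by counting decision points in the source text. A real tool would analyze the syntax tree rather than match keywords, so treat this only as an illustration of the idea:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Rough cyclomatic complexity estimate for a single method body:
    // 1 plus the number of decision points (branch keywords and
    // short-circuit operators).
    public class ComplexityEstimate {

        private static final Pattern DECISION = Pattern.compile(
                "\\b(if|for|while|case|catch)\\b|&&|\\|\\|");

        public static int complexity(String methodBody) {
            Matcher m = DECISION.matcher(methodBody);
            int count = 1; // one linearly independent path to start with
            while (m.find()) {
                count++;
            }
            return count;
        }

        public static void main(String[] args) {
            String body = "if (a > b && a > c) { return a; } "
                        + "for (int i = 0; i < n; i++) { sum += i; }";
            System.out.println(complexity(body)); // prints 4
        }
    }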

  6. Metrics (2) • Set goals and monitor metrics over time • We need to define a schema to express metrics • This is more feasible with static metrics; dynamic metrics have too many liabilities and/or dependencies • Code coverage is itself a metric: it reflects the quality of the testing more than the quality of the code, though one can infer as a consequence that well-tested code is likely good code
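
Since the schema for expressing metrics was still to be defined, here is one possible shape for it, sketched as a small Java class. The field names and the recommended-range check are assumptions for illustration, not an agreed design:

    // One possible shape for a metric measurement: what was measured,
    // on which module, when, and against which recommended range.
    public class MetricSample {
        final String name;       // e.g. "cyclomatic-complexity"
        final String module;     // package, class or component measured
        final long timestamp;    // when it was collected, for trend analysis
        final double value;
        final double recommendedMin;
        final double recommendedMax;

        MetricSample(String name, String module, long timestamp,
                     double value, double recommendedMin, double recommendedMax) {
            this.name = name;
            this.module = module;
            this.timestamp = timestamp;
            this.value = value;
            this.recommendedMin = recommendedMin;
            this.recommendedMax = recommendedMax;
        }

        // True when the measured value falls inside the recommended range,
        // supporting the "set goals and monitor over time" use case.
        boolean meetsGoal() {
            return value >= recommendedMin && value <= recommendedMax;
        }
    }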

  7. Tools • For Java there are lots of free tools that can be incorporated • For other languages there are not so many tools • Commercial tools (for C/C++, ...) • GrammaTech CodeSurfer (not a free tool) • Testwell Oy CTA++ (unit testing tool) and CTC++ (coverage analysis), gcov • SLOCCount, other line counters

  8. Timelines • Start with JUnit and JavaCov • Provide software and documentation (WP4) - 2 weeks • Design schemas for the results, possibly implement converters if the output format is not suitable (TBD) • Design a generic wrapper with plug-ins for the different tools (TBD - end of July?); a sketch of such an interface follows below • Prototype implementation - September? • Collection of metrics and reporting/analysis is left for next year (it will not be part of the first release) • We should verify how complex it is to call some of the existing tools • The most complex issue is to provide useful targets and interpretations for the metrics
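
A generic wrapper with per-tool plug-ins could take roughly this shape; the interface and its method names are illustrative guesses at the design, not the one the project eventually adopted:

    import java.io.File;

    // Hypothetical plug-in contract for the generic tool wrapper:
    // each supported tool (JUnit, JavaCov, gcov, ...) gets one adapter
    // that knows how to run the tool and convert its native output
    // into the common result schema.
    public interface ToolPlugin {

        // Tool identifier, e.g. "junit" or "gcov".
        String toolName();

        // Runs the tool against the given source/build tree and returns
        // the file holding its native report.
        File run(File projectRoot) throws Exception;

        // Converts the native report into the common schema (here just
        // serialized text; the real schema was still to be defined).
        String convert(File nativeReport) throws Exception;
    }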
