
Towards a Benchmark for Evaluating Design Pattern Miner Tools


Presentation Transcript


  1. Towards a Benchmark for Evaluating Design Pattern Miner Tools Date: 102/1/17 Publisher: IEEE, 12th European Conference on Software Maintenance and Reengineering (CSMR 2008) Authors: Lajos Jenő Fülöp, Rudolf Ferenc and Tibor Gyimóthy Group members: 余世淇, 游家瑋

  2. Introduction • Recovering design pattern usage from source code is a very difficult task. Several tools have been described in the literature for this purpose, but little work has been invested in evaluating them. The main reason for this is the lack of an accepted benchmark for such tools.

  3. Architecture • We use the well-known issue and bug tracking system Trac [34] (version 0.9.6) as the basis of the benchmark. Trac is written in Python and is an easily extensible and customizable plug-in oriented system.
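Since the benchmark is built on Trac's plug-in oriented component model, the snippet below is a minimal sketch of how such an extension is typically declared. The class name, the /benchmark path and the template name are illustrative assumptions of ours, not the paper's actual plug-in, and the exact return value of process_request() varies between Trac versions.

    # Minimal sketch of a Trac plug-in component; names are hypothetical.
    from trac.core import Component, implements
    from trac.web.api import IRequestHandler

    class BenchmarkPlugin(Component):
        """Exposes a hypothetical /benchmark page inside Trac."""
        implements(IRequestHandler)

        # IRequestHandler methods
        def match_request(self, req):
            # Claim only requests for the /benchmark page.
            return req.path_info == '/benchmark'

        def process_request(self, req):
            # Render a placeholder template with data gathered by the
            # benchmark (template name and data dict are placeholders).
            data = {'tools': [], 'patterns': []}
            return 'benchmark.html', data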

  4. Architecture (architecture diagram; not included in the transcript)

  5. Architecture (architecture diagram, continued; not included in the transcript)

  6. Architecture • Initialization • Pattern matching processing on the GPU • Data output

  7. True Positives (TP): true instances found by the tool (correctly). • False Positives (FP): false instances found by the tool (incorrectly). • False Negatives (FN): true instances not found by the tool (incorrectly). • The precision value is defined as TP / (TP + FP), i.e. the ratio of correctly identified instances with respect to all found instances. The recall value is defined as TP / (TP + FN), i.e. the ratio of correctly identified instances with respect to all existing real instances.
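To make the two measures concrete, here is a small helper (our own illustration, not code from the benchmark) that computes them from the three counts:

    def precision_recall(tp, fp, fn):
        """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    # Hypothetical counts: 8 correct hits, 2 false hits, 4 missed instances.
    p, r = precision_recall(tp=8, fp=2, fn=4)
    print('precision = %.2f, recall = %.2f' % (p, r))  # 0.80 and 0.67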

  8. Adding a New Tool • Upload. Uploading the results of a tool requires the name of the tool, the name of the mined software, the programming language of the software, the source location, and information about the found design pattern instances in comma-separated value (CSV) file format.
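The slide does not spell out the CSV schema, so the example below only sketches what an upload file might look like; every column name and row value here is a hypothetical placeholder.

    import csv

    # Hypothetical column layout; the benchmark's real CSV schema is not
    # specified on this slide.
    header = ['pattern', 'role', 'class', 'file', 'line']
    rows = [
        ('AbstractFactory', 'Factory', 'WidgetFactory', 'src/widgets.h', 42),
        ('Adapter', 'Adapter', 'StreamAdapter', 'src/io.cpp', 128),
    ]

    with open('tool_results.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)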

  9. Benchmark Contents (contents overview not included in the transcript)

  10. Conclusion and Future Work • This paper presents work in progress: the benchmark is operational, but it will certainly need further development based on user feedback. A possible extension of the benchmark would be, for instance, a flexible solution for selecting siblings among the pattern instances. Currently, users can add comments to an instance to argue about the judgements. In the future, we want to extend this by categorizing the votes of a user to prevent users from overrating their own tools. • http://www.inf.u-szeged.hu/designpatterns/
