
Benchmarking the interoperability of ontology development tools


Presentation Transcript


  1. Benchmarking the interoperability of ontology development tools Raúl García-Castro, Asunción Gómez-Pérez <rgarcia,asun@fi.upm.es> April 7th 2005

  2. Table of Contents
  • The interoperability problem
  • Benchmarking framework
  • Experiment to perform
  • Participating in the benchmarking

  3. Ontology development tools interoperability problem
  • The problem arises when ontologies are reused across different tools.
  [Figure: five ontology development tools (Tool 1 to Tool 5) connected by arrows, contrasting the tools' potential interoperability functionalities with their real ones.]

  4. Ontology development tools interoperability problem
  • Why is it difficult?
    • Different KR formalisms: frames, description logics, conceptual graphs, first-order logic, semantic networks.
    • Different modelling components inside the same KR formalism.
  • Some results:
    • It is difficult to preserve the semantics and the intended meaning of the ontology.
    • Interoperability decisions are made at many different levels and are usually hidden in the programming code of ontology exporters/importers.
  O. Corcho. A Layered Declarative Approach to Ontology Translation with Knowledge Preservation. Frontiers in Artificial Intelligence and Applications, Volume 116, January 2005.

  5. Protégé-2000 Knowledge Model comparison with RDF(S)
  [Figure: comparison of the Protégé-2000 knowledge model and RDF(S), covering classes, metaclasses, subclass-of, instances, template slots/properties/instance attributes, own slots/class attributes, subproperty-of, data types, constants, literals, PAL constraints/WAB axioms, partitions, disjoint decompositions, exhaustive decompositions, concept groups, synonyms, abbreviations, bibliographic references, relation properties, containers, collections, and statements.]

  6. Exporting and importing elements outside RDF(S)
  Example: a Thesis class with a disjoint subclass decomposition into MSc Thesis and PhD Thesis.
  EXPORT options for knowledge model elements outside RDF(S):
  • Don't export them: total loss (only Thesis survives).
  • Export the subclasses without their disjointness: partial loss.
  • Insert ad-hoc RDF(S):
      <rdfs:Class rdf:about="#Thesis">
        <a:disjoint rdf:resource="#MSc Thesis"/>
        <a:disjoint rdf:resource="#PhD Thesis"/>
      </rdfs:Class>
  IMPORT: a tool that doesn't import elements outside RDF(S) recovers only Thesis.

  7. Example: WebODE and Protégé-2000
  RDF(S) exchanged between WebODE and Protégé-2000:
    <rdf:Description rdf:about='#distanceToSkiResort'>
      <rdf:type rdf:resource='&rdf;Property'/>
      <rdfs:comment>The distance from the hotel to a ski resort</rdfs:comment>
      <rdfs:domain rdf:resource='#accommodation'/>
      <rdfs:range rdf:resource='&rdfs;Literal'/>
    </rdf:Description>

  8. Example: WebODE and Protégé-2000
  Protégé-2000 generates ad hoc RDF(S) code:
    <rdf:Property rdf:about="#distanceToSkiResort"
                  a:maxCardinality="1"
                  a:minValue="0.0"
                  a:range="float"
                  rdfs:comment="The distance from the hotel to a ski resort"
                  rdfs:label="distanceToSkiResort">
      <rdfs:domain rdf:resource="#accommodation"/>
      <rdfs:range rdf:resource="&rdfs;Literal"/>
    </rdf:Property>
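  A quick way to see what such ad-hoc output means to another tool is to parse it with a generic RDF library: the tool-specific attributes simply become triples in an unknown namespace, which a strict RDF(S) importer will parse and then ignore. A minimal sketch in Python with rdflib (the a: namespace URI below is a made-up stand-in for the tool vocabulary):

    import rdflib

    # Ad-hoc RDF(S) similar to the slide; the "a:" URI is a hypothetical
    # stand-in for the tool-specific vocabulary.
    DATA = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
             xmlns:a="http://example.org/tool-vocabulary#">
      <rdf:Property rdf:about="#distanceToSkiResort"
                    a:maxCardinality="1" a:minValue="0.0" a:range="float"
                    rdfs:label="distanceToSkiResort">
        <rdfs:domain rdf:resource="#accommodation"/>
      </rdf:Property>
    </rdf:RDF>"""

    g = rdflib.Graph()
    g.parse(data=DATA, format="xml", publicID="http://example.org/ontology")

    # Every a:* attribute is now an ordinary triple in the tool's namespace;
    # an importer that doesn't know that namespace loses the information.
    for s, p, o in g:
        print(p, o)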

  9. Table of Contents
  • The interoperability problem
  • Benchmarking framework
  • Experiment to perform
  • Participating in the benchmarking

  10. Benchmark and benchmarking
  Benchmarking:
  • Systematic evaluation
  • Comparison with the best tools
  • Extraction of best practices

  11. General framework for benchmarking
  Benchmarking iteration:
  • PLAN PHASE: 1. Benchmarking goals identification; 2. Benchmarking subject identification; 3. Participant identification; 4. Benchmarking proposal writing; 5. Management involvement; 6. Benchmarking partner selection; 7. Benchmarking planning and resource allocation.
  • EXPERIMENT PHASE: 8. Experiment definition; 9. Experiment execution; 10. Experiment results analysis.
  • IMPROVE PHASE: 11. Benchmarking report writing; 12. Benchmarking findings communication; 13. Improvement planning; 14. Improvement; 15. Monitor.
  • Recalibration task.
  General evaluation criteria: interoperability, scalability, robustness.
  Benchmark suites for: interoperability, scalability, robustness.
  Benchmarking supporting tools: testing frameworks, workload generators, monitoring tools, statistical packages.
  García-Castro, Maynard, Wache, Foxvog and González-Cabero. Knowledge Web Deliverable 2.1.4: Specification of a methodology, general criteria, and benchmark suites for benchmarking ontology tools. December 2004.

  12. Benchmarking Plan phase
  [Figure: Plan phase workflow. Tasks: benchmarking goals identification → benchmarking subject identification → participant identification → benchmarking proposal writing → management involvement → benchmarking partner selection → benchmarking planning and resource allocation. Inputs and outputs flowing between tasks include: need for benchmarking; organisation goals and strategies; benchmarking goals, benefits, costs (here: improve the interoperability of ontology development tools); benchmarking subject, tool functionalities, evaluation criteria (here: RDF(S) import and export capabilities of the organisation's tools, identifying the ontology components exported/imported); list of involved members, benchmarking team; benchmarking proposal; organisation planning; management support; tools from outside the organisation; benchmarking partners, updated benchmarking proposal; benchmarking planning.]

  13. Benchmarking Experiment phase
  [Figure: Experiment phase workflow. Inputs: the benchmarking proposal and planning, plus the RDF(S) import and export benchmark suites (each suite a list of tests: test 1, test 2, test 3, …). Tasks: experiment definition (producing the experiment definition and experimentation planning) → experiment execution (each test scored OK or NO) → experiment results analysis (producing the experiment results and the experiment report).]
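  To make the execution step concrete, here is a minimal sketch of a harness that scores each benchmark OK or NO, assuming hypothetical file names and that each test pairs a tool export with an expected RDF(S) file (Python with rdflib, used here purely for illustration):

    import rdflib
    from rdflib.compare import isomorphic

    def run_benchmark(exported_path, expected_path):
        """Score one benchmark: OK if the tool's output matches the expected graph."""
        got = rdflib.Graph().parse(exported_path, format="xml")
        want = rdflib.Graph().parse(expected_path, format="xml")
        return "OK" if isomorphic(got, want) else "NO"

    # Hypothetical suite layout: one (tool output, expected result) pair per test.
    suite = [("export_test1.rdf", "expected_test1.rdf"),
             ("export_test2.rdf", "expected_test2.rdf")]

    for exported, expected in suite:
        print(exported, run_benchmark(exported, expected))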

  14. Benchmarking Improve phase
  Comparative analysis:
  • Compliance with standards
  • Weaknesses
  • Recommendations on tools
  • Recommendations on practices
  [Figure: Improve phase workflow. Inputs: the experiment report and the updated benchmarking proposal. Tasks: benchmarking report writing → benchmarking findings communication (with organisation support, producing the benchmarking report and its updated version) → improvement planning → improvement (producing the necessary changes, improvement planning, improvement forecast, and the improved tool) → monitor (producing the monitoring report).]

  15. Table of Contents
  • The interoperability problem
  • Benchmarking framework
  • Experiment to perform
  • Participating in the benchmarking

  16. Benchmarking goals
  • Goal 1: To assess and improve the interoperability of ontology development tools using RDF(S) for ontology exchange.
  • Goal 2: To identify the subset of RDF(S) elements that ontology development tools can use to correctly interoperate.
  • Goal 3: Next step: OWL.

  17. Experiment to perform
  1. Export to RDF(S)
  • Check if ontology development tools can export the core elements of RDF(S).
  • Check if ontology development tools can export other elements of their knowledge models.
  2. Import from RDF(S)
  • Check if ontology development tools can import the core elements of RDF(S).
  • Check if ontology development tools can import the non-core elements of RDF(S).
  • Check if ontology development tools can import the other elements of the knowledge model the tools exported to RDF(S).
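  One way to automate the first check is to scan a tool's RDF(S) export for occurrences of the core constructs. A minimal sketch, assuming a hypothetical output file name (Python with rdflib, for illustration only):

    import rdflib
    from rdflib import RDF, RDFS

    CORE_PREDICATES = {RDFS.subClassOf, RDFS.subPropertyOf, RDFS.domain, RDFS.range}
    CORE_TYPES = {RDFS.Class, RDF.Property}

    g = rdflib.Graph()
    g.parse("tool_export.rdf", format="xml")  # hypothetical tool output

    # Count how often each core RDF(S) construct appears in the export.
    counts = {}
    for s, p, o in g:
        if p in CORE_PREDICATES:
            counts[p] = counts.get(p, 0) + 1
        elif p == RDF.type and o in CORE_TYPES:
            counts[o] = counts.get(o, 0) + 1

    for construct, n in sorted(counts.items()):
        print(construct, n)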

  18. Sample export benchmarks
  • Core benchmarks: common to every tool.
  • Extended benchmarks: particular to each tool.
  [Figure: an example benchmark ontology containing Concept 1, Concept 2 and Concept 3.]

  19. Sample export benchmarks
  [Figure: another example benchmark ontology containing Concept 1, Concept 2 and Concept 3.]

  20. Sample export benchmarks
  [Figure: further example benchmark ontologies combining Concept 1 to Concept 4 in different class hierarchies.]

  21. Export process
  • Load the ontology into the tool (Concept 1 with subclass Concept 2).
  • Export the ontology to an RDF(S) file:
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
               xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
        <rdfs:Class rdf:about="#Concept 1"/>
        <rdfs:Class rdf:about="#Concept 2">
          <rdfs:subClassOf rdf:resource="#Concept 1"/>
        </rdfs:Class>
      </rdf:RDF>
  • Compare the result with the expected RDF(S): equal? YES / NO.
  Export strategy: minimal knowledge loss in exports.
  Steps can be manual or automatic.
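  Note that the "compare result with expected" step is best done on the RDF graph rather than on the XML text: two serializations can differ in prefixes, attribute order, or element style and still describe the same graph. A minimal sketch of a graph-level check (URIs are illustrative):

    import rdflib
    from rdflib.compare import isomorphic

    BASE = "http://example.org/ontology"

    # The same two-class hierarchy serialized in two different ways.
    EXPORTED = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
      <rdfs:Class rdf:about="#Concept1"/>
      <rdfs:Class rdf:about="#Concept2">
        <rdfs:subClassOf rdf:resource="#Concept1"/>
      </rdfs:Class>
    </rdf:RDF>"""

    EXPECTED = """<r:RDF xmlns:r="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:s="http://www.w3.org/2000/01/rdf-schema#">
      <s:Class r:about="#Concept2">
        <s:subClassOf>
          <s:Class r:about="#Concept1"/>
        </s:subClassOf>
      </s:Class>
    </r:RDF>"""

    got = rdflib.Graph().parse(data=EXPORTED, format="xml", publicID=BASE)
    want = rdflib.Graph().parse(data=EXPECTED, format="xml", publicID=BASE)

    # The XML differs, but the graphs are identical.
    print("OK" if isomorphic(got, want) else "NO")  # -> OK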

  22. Export results

  23. Sample import benchmarks
  • Core benchmarks: common to every tool (RDF(S)).
  • Extended benchmarks: common to every tool, derived from the export extended benchmarks (RDF(S)).
  [Figure: an example RDF(S) benchmark graph containing Class 1, Class 2 and Class 3.]

  24. Sample import benchmarks
  [Figure: an example RDF(S) benchmark graph where a Person class is related to a Book class through an is_author property.]
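  Expected graphs like this one can also be built programmatically and serialized to produce the benchmark input; a minimal sketch (the base URI is illustrative):

    import rdflib
    from rdflib import RDF, RDFS, Namespace

    EX = Namespace("http://example.org/benchmark#")  # illustrative base URI

    g = rdflib.Graph()
    g.bind("ex", EX)

    # A Person class related to a Book class through an is_author property.
    g.add((EX.Person, RDF.type, RDFS.Class))
    g.add((EX.Book, RDF.type, RDFS.Class))
    g.add((EX.is_author, RDF.type, RDF.Property))
    g.add((EX.is_author, RDFS.domain, EX.Person))
    g.add((EX.is_author, RDFS.range, EX.Book))

    # Serialize as RDF/XML to obtain the import benchmark file.
    print(g.serialize(format="xml"))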

  25. Import process
  • Load the RDF(S) graph:
      <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
               xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
        <rdfs:Class rdf:about="#Class 1"/>
        <rdfs:Class rdf:about="#Class 2">
          <rdfs:subClassOf rdf:resource="#Class 1"/>
        </rdfs:Class>
      </rdf:RDF>
  • Import the graph into the tool (Class 1 with subclass Class 2).
  • Compare the result with the expected ontology: equal? YES / NO.
  Steps can be manual or automatic.
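  When the comparison is automated, the pass/fail decision can be as simple as asserting that the expected triples survive the round trip through the tool. A minimal sketch, assuming the tool re-exports what it imported to a hypothetical file:

    import rdflib
    from rdflib import RDFS, URIRef

    BASE = "http://example.org/benchmark#"
    class1 = URIRef(BASE + "Class1")
    class2 = URIRef(BASE + "Class2")

    g = rdflib.Graph()
    g.parse("reexported_by_tool.rdf", format="xml")  # hypothetical file name

    # The import benchmark passes only if the subclass link survived.
    print("OK" if (class2, RDFS.subClassOf, class1) in g else "NO")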

  26. Import results

  27. Table of Contents
  • The interoperability problem
  • Benchmarking framework
  • Experiment to perform
  • Participating in the benchmarking

  28. Benchmarking benefits
  For the participants:
  • To know in detail the interoperability of their ODTs.
  • To know the set of terms in which interoperability between their ODTs can be achieved.
  • To show the rest of the world that their ODTs are able to interoperate and are among the best ODTs.
  For the Semantic Web community:
  • To obtain a significant improvement in the interoperability of ODTs.
  • To learn the best practices applied when implementing interoperability in ontology development tools.
  • To obtain instruments to assess the interoperability of ODTs.
  • To know the best-in-class ODTs regarding interoperability.

  29. Participating in the benchmarking
  • Every organisation is invited to participate:
    • If you are a developer, with your own tool.
    • If you are a user, with your preferred tool.
  • Supported by the Knowledge Web NoE.
  • The results will be presented at the EON 2005 workshop.

  30. Timeline
  If you want to participate in the benchmarking, or have any further questions or comments, please contact:
  Raúl García-Castro <rgarcia@fi.upm.es>

  31. KW Deliverable 1.2.2
  Deliverable 1.2.2: "Semantic Web Framework Requirements Analysis"
  • Analyse applications and tools to identify the set of requirements for the interoperation and exchange of ontologies.
  • Identify the main components that a unified Semantic Web framework should have.
  It should contain:
  • The main systems developed in the field.
  • For each application or system: its architecture, design criteria, main components, and how it interoperates with other systems or exchanges ontologies.
  These studies should make it possible to identify:
  • The main and additional requirements for each type of tool/system.
  • The main and additional functionalities.
  • Results on evaluating the requirements for interoperation and exchange.
  • Results on evaluating other criteria, such as scalability.
  • A summary table of these criteria, to give a clear picture of the field.
  This deliverable needs contributions from tool developers in order to obtain accurate descriptions of their tools.

  32. Benchmarking the interoperability of ontology development tools Raúl García-Castro, Asunción Gómez-Pérez <rgarcia,asun@fi.upm.es> April 7th 2005
