
OMEN: A Probabilistic Ontology Mapping Tool



Presentation Transcript


  1. OMEN: A Probabilistic Ontology Mapping Tool Mitra et al.

  2. The Problem [figure: mapping of two different ontologies] • We need to map databases or ontologies

  3. The Problem • Mapping is difficult • Most mapping tools are imprecise • Even experts could be uncertain • We deal with probabilistic mappings

  4. The Solution • Infer mappings based on previous ones • We use Bayesian Nets for inference • We use other tools for initial distributions • Preliminary results are encouraging

  5. Basic Concepts • Bayesian network: a probabilistic graphical model that represents random variables and their dependencies • Evidence nodes: nodes whose value is given

  6. Bayesian Network • Conditional Probability tables (CPT)
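The CPT idea above can be illustrated with a minimal sketch (not OMEN's code; the node names and probabilities are invented):

```python
# A conditional probability table (CPT) stored as a dict keyed by the
# parents' truth values. Here: one parent, so keys are 1-tuples.
# All numbers are illustrative assumptions.
cpt = {
    (True,):  0.8,   # parent concepts match -> child match more likely
    (False,): 0.3,   # parent concepts don't match -> less likely
}

def p_child(parent_values):
    """Look up P(child = True) given an assignment to the parents."""
    return cpt[tuple(parent_values)]

print(p_child([True]))   # 0.8
print(p_child([False]))  # 0.3
```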

  7. Bayesian Nets in our approach [figure: a BN node m(C1,C1’) linking class C1 in Ontology 1 to class C1’ in Ontology 2] • How do we build the Bayesian Net? • Nodes are property or class matches • Classes are concepts • Properties are attributes of classes
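The node construction can be sketched as follows: each BN node represents a candidate match m(c, c’) between a class (or property) of ontology 1 and one of ontology 2. The class names here are invented for illustration:

```python
from itertools import product

# Illustrative ontologies (assumed, not from the paper)
ontology1_classes = ["Person", "Paper"]
ontology2_classes = ["Author", "Article"]

# One candidate match node per cross-ontology class pair
nodes = [("m", c1, c2) for c1, c2 in product(ontology1_classes, ontology2_classes)]
print(nodes)
# [('m', 'Person', 'Author'), ('m', 'Person', 'Article'),
#  ('m', 'Paper', 'Author'), ('m', 'Paper', 'Article')]
```

As the next slide notes, enumerating every such pair blows up quickly, which is why pruning is needed.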

  8. Building Bayesian Nets

  9. Our Bayesian Nets • All combinations of nodes are too many • We generate only “useful” nodes • The cutoff is distance k from evidence nodes • Up to 10 parents per node • Cycles are avoided (confidence ~.5)
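The distance-k cutoff can be sketched as a breadth-first search from the evidence nodes; the graph and the value of k below are illustrative assumptions:

```python
from collections import deque

def useful_nodes(graph, evidence, k):
    """Return all nodes within k hops of any evidence node (BFS)."""
    dist = {n: 0 for n in evidence}
    queue = deque(evidence)
    while queue:
        n = queue.popleft()
        if dist[n] == k:          # reached the cutoff: stop expanding
            continue
        for nb in graph.get(n, ()):
            if nb not in dist:
                dist[nb] = dist[n] + 1
                queue.append(nb)
    return set(dist)

graph = {"e": ["a"], "a": ["b"], "b": ["c"]}
print(useful_nodes(graph, ["e"], 2))  # {'e', 'a', 'b'}: 'c' is pruned
```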

  10. Our Bayesian Nets • We need evidence nodes and CPTs • Evidence nodes come from initialization • CPTs come from Meta-rules

  11. Meta-rules [figure: basic meta-rule, inferring m(C2,C2’) with P2 = x + c from m(C1,C1’) with P1 = x, where properties q and q’ relate C1 to C2 and C1’ to C2’] • Describe how other rules should be used • Basic Meta-rule

  12. Other Meta-rules • Range: Restriction of property values • Mappings between properties and ranges of properties • Single range • Specialization

  13. Other Meta-rules • Mappings between super classes • Whether children match depends on whether their parents match • Fixed Influence Method (FI): P = .9 • Initial Probability Method (AP): P = y + c • Parent Probability Method (PP): P = x + c
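The three parent-influence methods named on the slide can be sketched as below; the constant c and the example probabilities are assumptions, not values from the paper:

```python
def child_prob(method, parent_prob=None, initial_prob=None, c=0.05):
    """Probability assigned to a child match, per the slide's three methods."""
    if method == "FI":   # Fixed Influence: a constant probability
        return 0.9
    if method == "AP":   # Initial Probability: boost the initial estimate y
        return min(1.0, initial_prob + c)
    if method == "PP":   # Parent Probability: boost the parents' match prob x
        return min(1.0, parent_prob + c)
    raise ValueError(f"unknown method: {method}")

print(child_prob("FI"))
print(child_prob("AP", initial_prob=0.6))
print(child_prob("PP", parent_prob=0.7))
```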

  14. Probability Distribution [figure: probability distribution for the mapping between C and C’]

  15. Combining Influences • We assume that the parents are conditionally independent • P[C|A,B] = P[C|A] × P[C|B] • Fixing this assumption is left for future work
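The combination rule on the slide amounts to multiplying the per-parent influences; a minimal sketch with illustrative values:

```python
from math import prod

def combine(influences):
    """Approximate P[C|A,B,...] as the product of the P[C|parent] terms,
    per the slide's conditional-independence assumption."""
    return prod(influences)

print(combine([0.5, 0.5]))  # 0.25
```

Note that this product rule is exactly the assumption the slide flags as a known weakness to be fixed in future work.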

  16. Results • 2 Sets of 11 and 19 nodes • Predicate matching was manual • Thresholds were .85 and .15
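The thresholding step can be sketched as follows; the two thresholds come from the slide, while the example probabilities are invented:

```python
UPPER, LOWER = 0.85, 0.15  # thresholds from the slide

def decide(p):
    """Accept, reject, or leave undecided an inferred mapping probability."""
    if p >= UPPER:
        return "accept"
    if p <= LOWER:
        return "reject"
    return "undecided"

print(decide(0.9))   # accept
print(decide(0.1))   # reject
print(decide(0.5))   # undecided
```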

  17. Results

  18. Strengths • Innovative research • Published at ISWC • Mathematically oriented

  19. Weaknesses • Lots of typos • No comparison with current methods • Little literature review • Could use better explanation of basic concepts

  20. Future Work • Handling conditional dependency of parent nodes • Handling the matching of predicates • Automatic pruning and building of the network

  21. ?
