
Semantic Web Service Discovery in a Multi-Ontology Environment




  1. Semantic Web Service Discovery in a Multi-Ontology Environment Swapna Oundhakar Large Scale Distributed Information Systems (LSDIS) Lab, Department of Computer Science, The University of Georgia

  2. Acknowledgements • Advisory Committee • Dr. Amit P. Sheth (Major Advisor) • Dr. Hamid Arabnia • Dr. Liming Cai • LSDIS Student Members • Kunal Verma

  3. Outline • Web services – Introduction • Multi-Ontology Environment • Approach • Testing • Contributions • Future Work • References

  4. WWW – Past & Present

  5. Web Service Promise • Easier Service Interoperation • Reduced Costs • Increased Efficiency

  6. Web Services – Definition • Web services are a new breed of Web application. They are self-contained, self-describing, modular applications that can be published, located, and invoked across the Web. (IBM web service tutorial) • A Web service is a software application identified by a URI, whose interfaces and bindings are capable of being defined, described and discovered by XML artifacts and supports direct interactions with other software applications using XML based messages via Internet-based protocols. (W3C definition)

  7. Web Services – Standards • Web Services revolve around UDDI, WSDL, SOAP • WSDL • Is a standard for describing a Web service. • It is an XML grammar for specifying properties of a Web service such as what it does, where it is located and how it is invoked. • Does not support semantic description of web services • SOAP • SOAP is an XML based messaging protocol. • It defines a set of rules for structuring the messages being exchanged. • UDDI • UDDI is a registry or a directory where Web service descriptions can be advertised and discovered • By itself does not support semantic description of services and depends on the functionality of the content language

  8. Web Services – Architecture

  9. Web Services – Problems • Heterogeneity and Autonomy • Syntactic description cannot deliver all the intended semantics • Solution – Machine-processable descriptions • Dynamic nature of business interactions • Demands – Efficient Discovery, Composition, etc. • Scalability (Enterprises → Web) • Needs – Automated service discovery/selection and composition

  10. Web Service Discovery – Problems • Current Discovery Approach • UDDI Search • Keyword based – checks for service names containing the keyword • Browse Categories • Problem • Once a set of services is returned, the user has to browse manually to find the required service • Or has to know the category in advance

  11. Bringing the web to its full potential • [Figure: quadrant diagram from static to dynamic – WWW (URI, HTML, HTTP), Semantic Web (RDF, RDF(S), OWL), Web Services (UDDI, WSDL, SOAP), Intelligent Web Services] • Solution • Semantic Web – keyword-based search improved by adding semantics to web pages • Apply semantics to the service web – Semantic Web services • D. Fensel, C. Bussler, "The Web Service Modeling Framework WSMF", Technical Report, Vrije Universiteit Amsterdam

  12. What are Ontologies? • Ontology as “an explicit specification of a conceptualization” - [Gruber, 1993] • Ontology as a “shared understanding of some domain of interest” – [Uschold and Gruninger, 1996] • An ontology provides - • A common vocabulary of terms • Some specification of the meaning of the terms (semantics) • An agreement for shared understanding for people and machines.

  13. Single Global Ontology • Web services adhere to one large, single ontology • Not practical • Why? • Variety of applications and the intent behind their development • The real world cannot be based on a single ontology • In a single domain, different domain experts, users, research groups and organizations can conceptualize the same real-world entities differently, leading to multiple domain ontologies

  14. Single Global Ontology (Continued) • Why? • The scope of an ontology, or for that matter a domain, is usually arbitrary and cannot be formally defined • Multiple ontologies for the same function • Example – Travel syndicators like Travelocity and Expedia partner with different airlines, which may have different ontologies • Multiple ontologies for an activity • Example – A travel planning service from an organization may require ontologies from the hospitality domain and the travel domain • Coexistence of independent organizations and organizational changes • Example – Cisco acquired 40 companies in a year

  15. Multiple Domain Ontologies • This leads to differences and overlaps between the models used for ontologies. • A more practical approach would be for Web services to conform to local, domain or application specific ontologies • In such an environment where the service requests and advertisements adhere to different ontologies, there arises a need for a matching algorithm which can deal with the domain semantics modeled by multiple ontologies

  16. Relationship to Previous Works on Multi-ontology Environment • In context of Data / Information • First reported effort – OBSERVER@LSDIS [Mena et al., 1996] • Several other subsequent efforts on ontology mapping / alignment / merging [Kalfoglou and Schorlemmer, 2003] • In context of Services • Cardoso's Ph.D. thesis [Cardoso 2002] – presents preliminary work on service discovery in a multiple-ontology environment • This thesis advances [Cardoso 2002] by introducing new measures: Context, Coverage • Higher quality matches

  17. Semantic Web Service Discovery – Approach • First the user's service requirement is captured in a semantic search template • This search template is then matched against a set of candidate Web services • The result of this operation is then returned to the user as a set of ranked services

  18. Semantic Web Service Discovery – Service Template • Service Template (ST) • ST = <N_ST, D_ST, Ops_ST<N_OP, D_OP, Os_ST, Is_ST>> • N_ST : Name of the Web service to be found • D_ST : Textual description of the Web service • Ops_ST : Set of operations of the required Web service • The operations in turn are specified as • Op = <N_OP, D_OP, Os_ST, Is_ST> • Where, • N_OP : Name of the operation • D_OP : Description of the operation • Os_ST : Outputs of the operation • Is_ST : Inputs of the operation
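The template structure above can be rendered as a pair of simple data classes. This is an illustrative Python sketch, not code from the thesis, and the example values (service name, "TickerSymbol" input) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OperationTemplate:
    name: str                                     # N_OP: name of the operation
    description: str                              # D_OP: textual description
    outputs: List[str] = field(default_factory=list)  # Os_ST: expected outputs
    inputs: List[str] = field(default_factory=list)   # Is_ST: expected inputs

@dataclass
class ServiceTemplate:
    name: str                                     # N_ST: name of required service
    description: str                              # D_ST: textual description
    operations: List[OperationTemplate] = field(default_factory=list)  # Ops_ST

# Hypothetical example request built around the getStockQuote operation
# used in the later slides.
st = ServiceTemplate(
    name="StockQuoteService",
    description="Returns the current quote for a stock symbol",
    operations=[OperationTemplate("getStockQuote",
                                  "Look up a quote by ticker symbol",
                                  outputs=["StockQuote"],
                                  inputs=["TickerSymbol"])],
)
```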

  19. Semantic Web Service Discovery – Service Template • Example Service Template

  20. Semantic Web Service Discovery – Candidate Service • Example Candidate Service (CS)

  21. Semantic Web Service Discovery – Matching • Matching ST and CS • The search template is matched against a set of candidate Web services in a registry • Match scores are calculated for each (ST, CS) pair • These match scores are normalized over the interval 0 to 1 • Finally the pairs are ranked in descending order of the match score
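The match-and-rank procedure above can be sketched as follows; `match_score` stands in for the overall similarity function described on the later slides, and min-max scaling is an assumption about how the normalization to [0, 1] is done (the slide only states that scores are normalized).

```python
def rank_candidates(st, candidates, match_score):
    """Score each (ST, CS) pair, normalize to [0, 1], and rank descending.

    Assumes a non-empty candidate list; `match_score` is any pairwise
    similarity function.
    """
    scores = [(cs, match_score(st, cs)) for cs in candidates]
    low = min(s for _, s in scores)
    high = max(s for _, s in scores)
    span = (high - low) or 1.0           # avoid division by zero if all equal
    normalized = [(cs, (s - low) / span) for cs, s in scores]
    return sorted(normalized, key=lambda pair: pair[1], reverse=True)
```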

  22. Algorithm

  23. Semantic Web Service Discovery – Service Similarity • Matching ST and CS • The overall service similarity is the weighted average of Syntactic and Functional similarities
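Since the overall service similarity, and several of the component similarities on the following slides, are all weighted averages, one helper covers them; the concrete weights in the example call are assumptions, as the slides do not give values.

```python
def weighted_average(values_and_weights):
    """Generic weighted average over (value, weight) pairs.

    Used here for the overall service similarity (syntactic + functional);
    the same shape recurs for syntactic and operation similarity.
    """
    total_weight = sum(w for _, w in values_and_weights)
    return sum(v * w for v, w in values_and_weights) / total_weight

# Assumed equal weighting of syntactic (0.8) and functional (0.6) scores.
sim = weighted_average([(0.8, 0.5), (0.6, 0.5)])
```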

  24. Semantic Web Service Discovery – Syntactic Similarity • Syntactic Similarity • Relies on the name and description of ST and CS • It is calculated as the weighted average of the name similarity (NameMatch (ST, CS)) and the description similarity (DescrMatch (ST, CS))

  25. Syntactic Similarity - Example • Name Similarity using NGram algorithm
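A minimal sketch of character n-gram name matching; the slide names the NGram algorithm but not its exact variant, so this Dice-coefficient version over trigrams is an assumption.

```python
def ngrams(s, n=3):
    """All distinct contiguous character n-grams of s (lowercased)."""
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def ngram_similarity(a, b, n=3):
    """Dice-style n-gram overlap between two names, in [0, 1]."""
    grams_a, grams_b = ngrams(a, n), ngrams(b, n)
    if not grams_a or not grams_b:
        return 0.0
    return 2 * len(grams_a & grams_b) / (len(grams_a) + len(grams_b))
```

For example, `ngram_similarity("getStockQuote", "getQuote")` scores well above zero because the two names share the `get` prefix and the `quote` suffix trigrams, even though they are not equal.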

  26. Semantic Web Service Discovery – Functional Similarity • Functional Similarity • Calculated as the average match score of the operations of ST and CS

  27. Semantic Web Service Discovery – Matching Two Operations • Matching two operations • Weighted average of syntactic similarity, conceptual similarity and IO similarity

  28. Functional Similarity - Example • Matching two operations • getStockQuote is matched with all three operations of CS individually • Best Match • As the "fs" value for getQuote is maximum, it is picked as the matching operation
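The best-match selection described above, computing "fs" for every CS operation and keeping the maximum, can be sketched as below; the numeric fs values in the demo are hypothetical, as this slide does not give them.

```python
def best_operation_match(st_op, cs_ops, op_similarity):
    """Match one ST operation against every CS operation, keep the best.

    `op_similarity` is any pairwise operation-similarity ("fs") function;
    returns the (cs_op, score) pair with the maximum score.
    """
    return max(((cs_op, op_similarity(st_op, cs_op)) for cs_op in cs_ops),
               key=lambda pair: pair[1])

# Hypothetical fs values for getStockQuote vs. three CS operations.
fs_values = {"getQuote": 0.8, "getHistory": 0.2, "getNews": 0.1}
best = best_operation_match("getStockQuote",
                            ["getQuote", "getHistory", "getNews"],
                            lambda st_op, cs_op: fs_values[cs_op])
```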

  29. Semantic Web Service Discovery – IO Similarity • Similarity between the inputs and outputs of operations • Calculated as the geometric mean of • Similarity of inputs, i.e. InputSim • Similarity of outputs, i.e. OutputSim
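The geometric mean stated on the slide is straightforward to compute:

```python
import math

def io_similarity(input_sim, output_sim):
    # Geometric mean of input and output similarity, per the slide.
    return math.sqrt(input_sim * output_sim)
```

A design note: unlike an arithmetic mean, the geometric mean drives the IO similarity to zero whenever either the inputs or the outputs fail to match at all, so a service cannot compensate for unusable outputs with well-matched inputs.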

  30. Semantic Web Service Discovery – Concept Matching • Concept Matching Algorithm • Inputs, Outputs and Operations of a Service Template and Candidate Web service are annotated with Ontological Concepts • Similarity of an individual input or output pair is calculated as the Similarity of the concepts they are annotated with • Concept similarity has four dimensions • Syntactic Similarity – Names and Descriptions of the concepts • Feature or property similarity – Most important • Coverage similarity – Signifies the abstraction level of the concept • Context similarity – Helps to understand concept better • Calculated as the weighted average of the above four values
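The four-dimension combination can be sketched as a weighted average; the slide says property similarity is the most important dimension, so it gets the largest weight here, but the concrete weight values are assumptions.

```python
def concept_similarity(syn, prop, cvrg, ctx,
                       weights=(0.2, 0.4, 0.2, 0.2)):
    """Weighted average of syntactic, property, coverage and context
    similarity. Default weights are illustrative only; property
    similarity is weighted highest, as the slide calls it the most
    important dimension."""
    w_syn, w_prop, w_cvrg, w_ctx = weights
    total = w_syn + w_prop + w_cvrg + w_ctx
    return (w_syn * syn + w_prop * prop + w_cvrg * cvrg + w_ctx * ctx) / total
```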

  31. Semantic Web Service Discovery – Property Similarity • Property Similarity • A concept is defined using its properties; hence matching these properties is most important while matching two concepts • Syntactic Similarity – The syntactic information of the property, i.e. name and description • Range Similarity – Similarity of the values the property can take • Cardinality Similarity – How many values the property can take • "c" – Constant (explained later) • The property similarity is also penalized by 0.05 for each unmatched property
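A sketch of the unmatched-property penalty; the 0.05 figure is from the slide, while averaging the matched pairs first is an assumption about the aggregation step.

```python
def property_similarity(matched_scores, num_unmatched, penalty=0.05):
    """Average per-property match scores, then subtract 0.05 per
    unmatched property. Can go negative when most properties of the
    requirement are unsatisfied, as the testing slides note."""
    base = sum(matched_scores) / len(matched_scores) if matched_scores else 0.0
    return base - penalty * num_unmatched
```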

  32. Semantic Web Service Discovery – Property Similarity (Continued) • Property Similarity – Constant "c" • Calculated based on whether the properties being mapped are inverse functional properties or not • Inverse functional property is an OWL construct which states that the property value is unique for every instance of that concept • Example – SSN is unique for a person and a stock symbol is unique for every stock. No two stocks can have the same stock symbol and no two persons can have the same SSN. • Such information gives more insight into the real-world entity captured by the concept being matched • For non-OWL ontologies the second case of Equation 11 is considered

  33. Semantic Web Service Discovery – Range Similarity • Range similarity • The values the properties can take are characterized by their ranges and hence range similarity is important • Range can either be a primitive data type or another ontological concept • Both the property ranges are primitive data types • Both the property ranges are Ontological concepts • Shallow Concept Match • Syntactic Similarity – Names of the concepts • Property Similarity – Similarity of names of the properties

  34. Semantic Web Service Discovery – Cardinality Similarity • Cardinality similarity (crdnSim) • Cardinality provides the information about how many range values a property can take at a time • Match value is less if the ST requirement is not satisfied completely

  35. Property Similarity – Example

  36. Semantic Web Service Discovery – Coverage Similarity • Coverage Similarity (cvrgSim) • Signifies the level of abstraction of the concept • If the immediate parent of the requirement concept is matched, the coverage similarity is reduced by 0.1; for a grandparent, by 0.2, and so on • A reduction by a multiple of 0.05 is employed if the candidate concept is a sub-concept • A sub-concept satisfies the properties of the requirement completely • Still needs to be distinguished from a complete match – hence a lesser penalty
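The level-based reductions above (0.1 per level when the candidate is a super-concept, 0.05 per level when it is a sub-concept) can be sketched as:

```python
def coverage_similarity(levels_up, levels_down):
    """Coverage penalty by abstraction level, per the slide.

    levels_up   -- how many levels *above* the ST concept the candidate
                   sits (super-concept), 0.1 reduction per level
    levels_down -- how many levels *below* (sub-concept), 0.05 per level
    An exact match (0, 0) scores 1.0; only one argument is expected to
    be non-zero for a given candidate.
    """
    return 1.0 - 0.1 * levels_up - 0.05 * levels_down
```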

  37. Semantic Web Service Discovery – Context Similarity • An attempt to understand more about a concept by considering the semantic similarity and semantic disjointness of the concepts in its vicinity • SameConceptSet • Set of concepts which describe the same real-world entities as the concept in question • DifferentConceptSet • Set of concepts which are semantically disjoint from the concept in question • For example, a Bond means a Fixed Income Fund and not Stocks; Bond is also a different concept from Investment Company • SameConceptSet(Bond) = {FixedIncomeFund, Bond} • DifferentConceptSet(Bond) = {Stocks, InvestmentCompany}

  38. Semantic Web Service Discovery – Context Similarity • SameConceptSet • If an ontology specifies them as same or equivalent concepts • e.g. the OWL language has constructs like sameClassAs or equivalentClass to describe that two concepts are similar to each other • Member concepts of the main concept, i.e. concepts which are used to define the main concept • e.g. OWL has collection classes, which are described in terms of other classes • The concept itself is also added to the SameConceptSet

  39. Semantic Web Service Discovery – Context Similarity • DifferentConceptSet • Concepts which are explicitly modeled as disjoint from the main concept • e.g. in OWL, concepts related by disjointWith or complementOf relationships • Concepts appearing as ranges of properties of the main concept, except when the range is the concept itself • e.g. in the stocks ontology, the Company concept has a property with the Stocks concept as range, and hence Company and Stocks do not represent the same concept • Concepts which have properties with the main concept as range • e.g. Stocks appears as the range of the investsIn property of the MutualFund concept, and hence MutualFund goes into the DifferentConceptSet of Stocks • Siblings of the main concept • They depict an entirely different specialization than the main concept • e.g. EquityFund and FixedIncomeFund are siblings and cannot replace each other in a request
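The two context sets can be compared once built. The thesis defines context similarity via SameConceptSet and DifferentConceptSet, but the exact formula is on a slide not reproduced in this transcript, so the overlap score below (rewarding shared "same" concepts, penalizing when one side's "same" concepts land in the other's "different" set) is purely illustrative; the sets for Bond come from the earlier slide, while the sets for Stocks are assumptions built the same way.

```python
def context_similarity(same_a, diff_a, same_b, diff_b):
    """Hypothetical set-overlap sketch of context similarity; agreement
    minus conflict, scaled by the union of the 'same' sets, so sibling
    concepts can score negative (as the testing slides show for -1)."""
    agree = len(same_a & same_b)
    conflict = len(same_a & diff_b) + len(same_b & diff_a)
    total = len(same_a | same_b)
    return (agree - conflict) / total if total else 0.0

same_bond = {"FixedIncomeFund", "Bond"}
diff_bond = {"Stocks", "InvestmentCompany"}
same_stocks = {"Stocks"}                 # assumed
diff_stocks = {"Bond", "MutualFund"}     # assumed, per slide 39's examples
```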

  40. Semantic Web Service Discovery – Context Similarity

  41. Web Service Discovery Algorithm – Comparison

  42. Comparison with Single Ontology Approach • Similarity Measures • In a single ontology environment, syntactic and property similarity can give enough information about a concept to give good matches • In a multi-ontology environment, concepts can be modeled with different levels of abstraction and hence considering only syntactic and property information does not provide enough information about the concept • Measures used in this approach like Context and Coverage similarity provide this extra information

  43. Comparison with Single Ontology Approach • Linguistic Issues • In a single ontology environment, matching properties and concepts based on names is enough since properties are inherited by the related concepts • In a multi-ontology environment, names of properties and concepts can be synonyms, hypernyms, hyponyms, or homonyms of each other, and hence matching them syntactically can return bad match scores • This approach uses a WordNet-based algorithm, a custom abbreviation dictionary, etc. to tackle this problem

  44. Comparison with Single Ontology Approach • Model-level issues • In a single ontology environment, the single ontology model does not pose any structure- or model-level issues • In a multi-ontology environment, ontologies can use different modeling techniques, which need to be reconciled before matching two concepts, e.g. XML Schema models collection concepts as complexTypes or simpleTypes whereas OWL models them as collection classes • A common representation format helps to bridge this gap

  45. Testing

  46. Web Service Discovery – Testing

  47. Testing – Concept Matching Candidate Concepts from Same Ontology • The simplest and best case is when the candidate concept is the same as the ST concept • All four dimensions give a similarity of 1 and the overall match score is also 1 • Candidate concept is a sub-concept of the ST concept • Even if the requirement is satisfied completely, it is not the exact concept that is required • For StockDetailQuote, the coverage similarity is reduced by 0.05 as it is the immediate child of StockQuote • As coverage similarity has a non-zero value and the concepts are from the same ontology, the context similarity is defaulted to 1 • This boosts the overall similarity even more to give a better match score

  48. Testing – Concept Matching (Continued) • Candidate concept is a super-concept of the requirement • Does not satisfy all the properties of the requirement • Here a penalty of 0.05 is applied for each unmatched property • This reduces the property similarity for StockQuickQuote to a negative value, which signifies that most of the properties of the requirement are not satisfied • A deduction of 0.1 is applied to the coverage match • Since the coverage similarity is non-zero and the concepts are from the same ontology, the context similarity is defaulted to 1 • The overall concept similarity is reduced, but the context similarity of 1 gives an advantage • Candidate concept is from the same ontology but not related • Treated in the same way as concepts from two different ontologies • FundQuote gives a context match of -1 as it is modeled as a sibling of StockQuote • The coverage similarity is 0 as StockQuote and FundQuote do not have a subsumption relationship • The property penalty for unmatched properties is applied • Overall match score below zero • Easier to discard this match

  49. Testing – Concept Matching (Continued) Candidate Concepts from Different Ontology • Same concept from a different ontology • The same StockQuote concept with slight variations • All the properties are matched • Same name, so the syntactic similarity is 1 • As all the properties of both concepts match each other with a value above 0.7, and the context and coverage similarities are also 1 • The overall match score is boosted • Candidate concept from a different ontology matches with a sub-concept • ExtendedQuote from xIgniteStocks matches better with StockDetailQuote of StockOnt • As StockDetailQuote is the immediate child of the ST concept, the coverage match has a value of 0.95 • Here, all the properties of the requirement are satisfied • A context similarity of 0.85 is obtained • Boosts the overall match score

  50. Testing – Concept Matching (Continued) • Concept from a different ontology matches better with a super-concept • QuickQuote matches StockQuickQuote, which is an immediate super-concept of the ST concept • Coverage similarity is reduced by 0.1 • QuickQuote is modeled as the sibling of StockQuote in its own ontology, which matches best with the ST concept; hence a value of -1 is assigned for context similarity • Penalty for unmatched properties • Results in a very low overall match score • Note that considering only syntactic and property similarity would have given a misleadingly good match score here
