
Data Intensive Query Processing for Semantic Web Data Using Hadoop and MapReduce



Presentation Transcript


  1. Data Intensive Query Processing for Semantic Web Data Using Hadoop and MapReduce Dr. Mohammad Farhan Husain (Amazon) Dr. Bhavani Thuraisingham Dr. Latifur Khan Department of Computer Science University of Texas at Dallas

  2. Outline • Semantic Web Technologies & Cloud Computing Frameworks • Goal & Motivation • Current Approaches • System Architecture & Storage Schema • SPARQL Query by MapReduce • Query Plan Generation • Experiment • Future Work

  3. Semantic Web Technologies • Data in machine understandable format • Infer new knowledge • Standards • Data representation – RDF • Triples • Example: • Ontology – OWL, DAML • Query language - SPARQL
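The "Example" bullet above lost its content in transcription; as a hedged illustration (the URIs and literal below are my own assumptions, not from the deck), an RDF triple is simply a (subject, predicate, object) statement:

```python
# Illustrative only: an RDF triple as a (subject, predicate, object) tuple.
# The URIs and the literal are assumed values, not taken from the slides.
triple = ("http://utdallas.edu/res1",        # subject: a resource URI
          "http://xmlns.com/foaf/0.1/name",  # predicate: the foaf:name property
          "John Smith")                      # object: here, a literal value

subject, predicate, obj = triple
print(subject, "has", predicate.rsplit("/", 1)[-1], "=", obj)
```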

  4. Cloud Computing Frameworks • Proprietary • Amazon S3 • Amazon EC2 • Force.com • Open source tool • Hadoop – Apache’s open source implementation of Google’s proprietary GFS file system • MapReduce – functional programming paradigm using key-value pairs

  5. Outline • Semantic Web Technologies & Cloud Computing Frameworks • Goal & Motivation • Current Approaches • System Architecture & Storage Schema • SPARQL Query by MapReduce • Query Plan Generation • Experiment • Future Work

  6. Goal • To build efficient storage using Hadoop for large amounts of data (e.g., a billion triples) • To build an efficient query mechanism • Publish as an open source project • http://code.google.com/p/hadooprdf/ • Integrate with Jena as a Jena Model

  7. Motivation • Current Semantic Web frameworks do not scale to large numbers of triples, e.g. • Jena In-Memory, Jena RDB, Jena SDB, Jena TDB • AllegroGraph • Virtuoso Universal Server • BigOWLIM • RDF-3X • Sesame • There is a lack of a distributed framework with persistent storage • Hadoop runs on low-end hardware, providing a distributed framework with high fault tolerance and reliability

  8. Outline • Semantic Web Technologies & Cloud Computing Frameworks • Goal & Motivation • Current Approaches • System Architecture & Storage Schema • SPARQL Query by MapReduce • Query Plan Generation • Experiment • Future Work

  9. Current Approaches • State-of-the-art approach • Store data in HDFS and process queries outside of Hadoop • Done in the BIOMANTA1 project (details of querying could not be found) • Our approach • Store RDF data in HDFS and query through MapReduce programming (1. http://biomanta.org/)

  10. Contributions • Scalable, fault-tolerant framework for large RDF graphs • Schema optimized for MapReduce • Query rewriting algorithm leveraging schema and ontology • Query plan generation algorithms • Heuristics-based greedy algorithm • Exhaustive search algorithm based on graph coloring • Query plan generation algorithm for queries with OPTIONAL blocks

  11. Outline • Semantic Web Technologies & Cloud Computing Frameworks • Goal & Motivation • Current Approaches • System Architecture & Storage Schema • SPARQL Query by MapReduce • Query Plan Generation • Experiment • Future Work

  12. System Architecture (diagram): the LUBM Data Generator produces RDF/XML, which the Preprocessor (N-Triples Converter, Predicate Based Splitter, Object Type Based Splitter) converts into preprocessed data stored in the Hadoop Distributed File System / Hadoop cluster. A query (1) enters the MapReduce Framework, where the Query Rewriter and Query Plan Generator produce jobs (2) for the Plan Executor to run on the cluster, which returns the answer (3).

  13. Storage Schema • Data in N-Triples • Using namespaces • Example: http://utdallas.edu/res1 becomes utd:resource1 • Predicate based Splits (PS) • Split data according to Predicates • Predicate Object based Splits (POS) • Split further according to the rdf:type of Objects

  14. Example
  Input triples:
  D0U0:GraduateStudent20 rdf:type lehigh:GraduateStudent
  lehigh:University0 rdf:type lehigh:University
  D0U0:GraduateStudent20 lehigh:memberOf lehigh:University0
  PS file rdf_type: D0U0:GraduateStudent20 lehigh:GraduateStudent; lehigh:University0 lehigh:University
  PS file lehigh_memberOf: D0U0:GraduateStudent20 lehigh:University0
  POS file rdf_type_GraduateStudent: D0U0:GraduateStudent20
  POS file rdf_type_University: lehigh:University0
  POS file lehigh_memberOf_University: D0U0:GraduateStudent20 lehigh:University0
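The PS/POS split on this slide can be sketched in a few lines of Python. This is a simplified model, assuming triples already parsed from N-Triples into tuples; file handling and the general object-type lookup for arbitrary predicates are omitted.

```python
from collections import defaultdict

# The three triples from the example slide, already parsed from N-Triples.
triples = [
    ("D0U0:GraduateStudent20", "rdf:type", "lehigh:GraduateStudent"),
    ("lehigh:University0", "rdf:type", "lehigh:University"),
    ("D0U0:GraduateStudent20", "lehigh:memberOf", "lehigh:University0"),
]

# Predicate Split (PS): one "file" per predicate, holding (subject, object).
ps = defaultdict(list)
for s, p, o in triples:
    ps[p.replace(":", "_")].append((s, o))

# Predicate-Object Split (POS): split each PS file further by the rdf:type
# of its objects (the rdf_type file is split by the object itself).
types = {s: o for s, p, o in triples if p == "rdf:type"}
pos = defaultdict(list)
for s, o in ps["rdf_type"]:
    pos["rdf_type_" + o.split(":")[1]].append(s)
for s, o in ps["lehigh_memberOf"]:
    pos["lehigh_memberOf_" + types[o].split(":")[1]].append((s, o))

print(sorted(pos))
```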

  15. Space Gain • Example: data size at various steps for LUBM1000

  16. Outline • Semantic Web Technologies & Cloud Computing Frameworks • Goal & Motivation • Current Approaches • System Architecture & Storage Schema • SPARQL Query by MapReduce • Query Plan Generation • Experiment • Future Work

  17. SPARQL Query • SPARQL – SPARQL Protocol And RDF Query Language • Example: SELECT ?x ?y WHERE { ?z foaf:name ?x . ?z foaf:age ?y }
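The example query's join on ?z can be sketched without any SPARQL engine. The data below is illustrative (the names and ages are my assumptions, not from the slides):

```python
# Two tiny "predicate files" of (subject, predicate, object) triples.
name_triples = [("ex:p1", "foaf:name", "Alice"), ("ex:p2", "foaf:name", "Bob")]
age_triples = [("ex:p1", "foaf:age", "30")]

names = {s: o for s, _, o in name_triples}   # bindings ?z -> ?x
ages = {s: o for s, _, o in age_triples}     # bindings ?z -> ?y

# Join the two triple patterns on the shared variable ?z.
results = [(names[z], ages[z]) for z in names if z in ages]
print(results)  # [('Alice', '30')]
```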

  18. Schema Based SPARQL Query Rewriting • Example query: SELECT ?p WHERE { ?x rdf:type lehigh:Department . ?p lehigh:worksFor ?x . ?x subOrganizationOf <http://University0.edu> } • Rewritten query: SELECT ?p WHERE { ?p lehigh:worksFor_Department ?x . ?x subOrganizationOf <http://University0.edu> }

  19. Schema Based SPARQL Query Rewriting • Based on rdfs:range defined in the ontology • Example: ub:subOrganizationOf rdfs:range ub:University • Rewritten query: SELECT ?p WHERE { ?p ub:worksFor_Department ?x . ?x ub:subOrganizationOf_University <http://University0.edu> }
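The range-based rewriting step can be sketched as follows: if the ontology declares an rdfs:range for a predicate, the triple pattern can be redirected to the narrower POS file (predicate_ObjectType). The range table and helper below are my simplification, not the paper's exact rewriter.

```python
# Assumed ontology fragment: predicate -> its declared rdfs:range.
ranges = {"ub:subOrganizationOf": "ub:University",
          "ub:worksFor": "ub:Department"}

def rewrite_predicate(pred):
    """Append the range's local name so the pattern reads a POS split file."""
    if pred in ranges:
        return pred + "_" + ranges[pred].split(":")[1]
    return pred  # no range known: the PS file must be read instead

print(rewrite_predicate("ub:subOrganizationOf"))  # ub:subOrganizationOf_University
```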

  20. Inside a Hadoop MapReduce Job
  INPUT
  subOrganizationOf_University: Department1 http://University0.edu; Department2 http://University1.edu
  worksFor_Department: Professor1 Department1; Professor2 Department2
  MAP (filtering on Object == http://University0.edu)
  SHUFFLE & SORT
  Department1: SO#http://University0.edu, WF#Professor1
  Department2: WF#Professor2
  REDUCE
  OUTPUT
  WF#Professor1
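The job on this slide can be simulated in plain Python to make the map, shuffle, and reduce phases explicit. This is a sketch of the slide's flow (including its SO#/WF# tagging convention), not the actual Hadoop code:

```python
from collections import defaultdict

# Records from the two POS files used by the query.
sub_org = [("Department1", "http://University0.edu"),
           ("Department2", "http://University1.edu")]
works_for = [("Professor1", "Department1"), ("Professor2", "Department2")]

# MAP: filter on the bound object, then emit (department, tagged value).
mapped = []
for dept, univ in sub_org:
    if univ == "http://University0.edu":      # filtering done in the mapper
        mapped.append((dept, "SO#" + univ))
for prof, dept in works_for:
    mapped.append((dept, "WF#" + prof))

# SHUFFLE & SORT: group all values by their key (the department).
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# REDUCE: emit WF# values only where an SO# value joined on the same key.
output = [v for vals in groups.values()
          if any(v.startswith("SO#") for v in vals)
          for v in vals if v.startswith("WF#")]
print(output)  # ['WF#Professor1']
```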

  21. Outline • Semantic Web Technologies & Cloud Computing Frameworks • Goal & Motivation • Current Approaches • System Architecture & Storage Schema • SPARQL Query by MapReduce • Query Plan Generation • Experiment • Future Work

  22. Query Plan Generation • Challenge • One Hadoop job may not be sufficient to answer a query • In a single Hadoop job, a single triple pattern cannot take part in joins on more than one variable simultaneously

  23. Example • Example query: SELECT ?X ?Y ?Z WHERE { ?X pred1 obj1 . subj2 ?Z obj2 . subj3 ?X ?Z . ?Y pred4 obj4 . ?Y pred5 ?X } • Simplified view (join variables per triple pattern): X, Z, XZ, Y, XY

  24. Join Graph & Hadoop Jobs (diagram): the join graph links triple patterns 1–5 by their shared variables (X, Y, Z), shown alongside two valid job groupings and one invalid job.

  25. Possible Query Plans • A. job1: (x, xz, xy) = yz, job2: (yz, y) = z, job3: (z, z) = done

  26. Possible Query Plans • B. job1: (y, xy) = x; (z, xz) = x, job2: (x, x, x) = done

  27. Query Plan Generation • Algorithm for query plan generation • Query plan is a sequence of Hadoop jobs which answers the query • Exploit the fact that in a single Hadoop job, a single triple pattern can take part in more than one join on a single variable simultaneously
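As a rough sketch of the greedy idea (my own simplification, not the paper's exact algorithm): repeatedly pick the join variable shared by the most remaining patterns, join on it in one job, and replace the joined patterns by their combined result, using the simplified variable-set view from slide 23.

```python
# Each triple pattern is reduced to its set of join variables (slide 23).
patterns = [{"X"}, {"Z"}, {"X", "Z"}, {"Y"}, {"X", "Y"}]

jobs = []
while len(patterns) > 1:
    # Greedy choice: the variable appearing in the most patterns.
    counts = {}
    for p in patterns:
        for v in p:
            counts[v] = counts.get(v, 0) + 1
    var = max(counts, key=counts.get)
    joined = [p for p in patterns if var in p]
    rest = [p for p in patterns if var not in p]
    # The join's result keeps the joined patterns' remaining variables.
    merged = set().union(*joined) - {var}
    patterns = rest + ([merged] if merged else [])
    jobs.append(var)

print(jobs)  # one join variable per Hadoop job
```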

  28. Experiment • Dataset and queries • Cluster description • Comparison with Jena In-Memory, SDB and BigOWLIM frameworks • Comparison with RDF-3X • Experiments with number of Reducers • Algorithm runtimes: Greedy vs. Exhaustive • Some query results

  29. Dataset And Queries • LUBM • Dataset generator • 14 benchmark queries • Generates data for imaginary universities • Used for query execution performance comparison by many researchers

  30. Our Clusters • 10 node cluster in SAIAL lab • 4 GB main memory • Intel Pentium IV 3.0 GHz processor • 640 GB hard drive • OpenCirrus HP labs test bed

  31. Comparison: LUBM Query 2

  32. Comparison: LUBM Query 9

  33. Comparison: LUBM Query 12

  34. Comparison with RDF-3X

  35. Experiment with Number of Reducers

  36. Greedy vs. Exhaustive Plan Generation

  37. Some Query Results (chart: runtime in seconds vs. millions of triples)

  38. Access Control: Motivation • It is important to keep the data safe from unwanted access. • Encryption can be used, but it carries little or no semantic value. • By issuing and manipulating different levels of access control, an agent can access only the data intended for them, or make permitted inferences.

  39. Access Control in Our Architecture • The access control module is linked to all components of the MapReduce Framework: the Query Rewriter, Query Plan Generator, and Plan Executor.

  40. Access Control Terminology • Access Tokens (AT): Denoted by integers; they allow agents to access security-relevant data. • Access Token Tuples (ATT): Have the form <AccessToken, Element, ElementType, ElementName>, where Element can be Subject, Object, or Predicate, and ElementType can be URI, DataType, Literal, Model (Subject), or BlankNode.

  41. Six Access Control Levels • Predicate Data Access: Defined for a particular predicate; an agent can access the predicate file. For example, an agent possessing ATT <1, Predicate, isPaid, _> can access the entire predicate file isPaid. • Predicate and Subject Data Access: More restrictive than the previous level. Combining a Subject ATT with a Predicate data access ATT having the same AT grants the agent access to a specific subject of a specific predicate. For example, having ATTs <1, Predicate, isPaid, _> and <1, Subject, URI, MichaelScott> permits an agent with AT 1 to access the subject with URI MichaelScott of predicate isPaid.
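These first two levels can be sketched as a membership check. The tuple layout follows slide 40, but the checking logic (and the second subject name) are my assumptions:

```python
def can_access(atts, triple):
    """True if some access token's ATTs permit reading this triple.
    atts: list of (access_token, element, element_type, element_name)."""
    subj, pred, obj = triple
    # Group ATTs by access token: all ATTs of one token apply together.
    by_token = {}
    for at, element, etype, name in atts:
        by_token.setdefault(at, []).append((element, etype, name))
    for tuples in by_token.values():
        preds = [n for e, t, n in tuples if e == "Predicate"]
        subjs = [n for e, t, n in tuples if e == "Subject"]
        if preds and pred not in preds:   # predicate constrained and mismatched
            continue
        if subjs and subj not in subjs:   # subject constrained and mismatched
            continue
        if preds or subjs:
            return True
    return False

atts = [(1, "Predicate", "URI", "isPaid"),
        (1, "Subject", "URI", "MichaelScott")]
print(can_access(atts, ("MichaelScott", "isPaid", "true")))  # True
# "JimHalpert" is an illustrative subject not covered by any ATT.
print(can_access(atts, ("JimHalpert", "isPaid", "true")))    # False
```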

  42. Access Control Levels (Cont.) • Predicate and Object: This access level permits a principal to extract the names of subjects satisfying a particular predicate and object. • Subject Access: One of the less restrictive access control levels. The subject can be a URI, DataType, or BlankNode. • Object Access: The object can be a URI, DataType, Literal, or BlankNode.

  43. Access Control Levels (Cont.) • Subject Model Level Access: Permits an agent to read all necessary predicate files to obtain all objects of a given subject. Objects that are URIs are then treated as subjects to extract their respective predicates and objects; this iterative process continues until all objects are blank nodes or literals. Agents may thus generate models on a given subject.

  44. Access Token Assignment • Each agent has an Access Token list (AT-list) containing zero or more ATs assigned to the agent, along with their issuing timestamps. • These timestamps are used to resolve conflicts (explained later). • The set of triples accessible by an agent is the union of the result sets of the ATs in the agent's AT-list.

  45. Conflict • A conflict arises when the following three conditions hold: • an agent possesses two ATs, 1 and 2, • the result set of AT 2 is a proper subset of that of AT 1, and • the timestamp of AT 1 is earlier than the timestamp of AT 2 • The later, more specific AT supersedes the earlier one, so AT 1 is discarded from the AT-list to resolve the conflict.
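The three conditions can be checked directly. In this sketch, result sets are assumed to be precomputed sets of triple identifiers and timestamps are plain integers:

```python
def resolve_conflicts(at_list, result_sets):
    """Drop every AT superseded by a later, strictly narrower AT.
    at_list: list of (access_token, timestamp).
    result_sets: access_token -> set of accessible triple ids."""
    kept = list(at_list)
    for at2, ts2 in at_list:
        for at1, ts1 in at_list:
            if (ts1 < ts2                                  # AT 1 issued earlier
                    and result_sets[at2] < result_sets[at1]  # proper subset
                    and (at1, ts1) in kept):
                kept.remove((at1, ts1))   # the later, narrower AT wins
    return kept

result_sets = {1: {"t1", "t2", "t3"}, 2: {"t1"}}
print(resolve_conflicts([(1, 10), (2, 20)], result_sets))  # [(2, 20)]
```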

  46. Conflict Type • Subset Conflict: Occurs when AT 2 (issued later) is a conjunction of ATTs that refine AT 1. For example, AT 1 is defined by the ATT <1, Subject, URI, Sam>, and AT 2 by the ATTs <2, Subject, URI, Sam> and <2, Predicate, HasAccounts, _>. If AT 2 is issued to the possessor of AT 1 at a later time, a conflict occurs and AT 1 is discarded from the agent's AT-list.

  47. Conflict Type • Subtype conflict: Subtype conflicts occur when the ATT’s in AT 2 involve data types that are subtypes of those in AT 1. The data types can be those of subjects, objects or both.

  48. Conflict Resolution Algorithm

  49. Policy Enforcement Stages • Embedded enforcement • Enforcement as post-processing
  Experiment

  50. Results • Scenario 1 (“takesCourse”): a list of sensitive courses cannot be viewed by a normal user for any student
