
Data Integration



Presentation Transcript


  1. Data Integration Helena Galhardas DEI IST (based on the slides of the course: CIS 550 – Database & Information Systems, Univ. Pennsylvania, Zachary Ives)

  2. Agenda • Overview of Data Integration • Some systems: • The LSD System • The TSIMMIS System • Information Manifold

  3. A Problem • Even with normalization and the same needs, different people will arrive at different schemas • In fact, most people also have different needs! • Often people build databases in isolation, then want to share their data • Different systems within an enterprise • Different information brokers on the Web • Scientific collaborators • Researchers who want to publish their data for others to use • This is the goal of data integration: tie together different sources, controlled by many people, under a common schema

  4. Motivating example (1) • FullServe: a company that provides internet access to homes and also sells products to support the home computing infrastructure (e.g., modems, wireless routers) • FullServe is a predominantly American company that decided to acquire EuroCard, a European company that is mainly a credit card provider but has recently started leveraging its customer base to enter the internet market

  5. Motivating Example (2) FullServe databases:
  • Employee Database: FullTimeEmp(ssn, empId, firstName, middleName, lastName), Hire(empId, hireDate, recruiter), TempEmployees(ssn, hireStart, hireEnd, name, hourlyRate)
  • Training Database: Courses(courseID, name, instructor), Enrollments(courseID, empID, date)
  • Services Database: Services(packName, textDescription), Customers(name, id, zipCode, streetAdr, phone), Contracts(custID, packName, startDate)
  • Sales Database: Products(prodName, prodId), Sales(prodName, customerName, address)
  • Resume Database: Interview(interviewDate, name, recruiter, hireDecision, hireDate), CV(name, resume)
  • HelpLine Database: Calls(date, agent, custId, text, action)

  6. Motivating Example (3) EuroCard databases:
  • Employee Database: Emp(ID, firstNameMiddleInitial, lastName), Hire(ID, hireDate, recruiter)
  • CreditCard Database: Customer(CustID, cardNum, expiration, currentBalance), CustDetail(CustID, name, address)
  • Resume Database: Interview(ID, date, location, recruiter), CV(name, resume)
  • HelpLine Database: Calls(date, agent, custId, description, followup)

  7. Motivating Example (4) Some queries employees or managers in FullServe may want to pose: • The Human Resources Department may want to query for all of its employees, whether in the US or in Europe • Requires access to two databases on the American side and one on the European side • There is a single customer support hot-line, where customers can call about any service or product they obtain from the company. When a representative is on the phone with a customer, it's important to see the entire set of services the customer is getting from FullServe (internet service, credit card, or products purchased). Furthermore, it is useful to know whether the customer is a big spender on their credit card. • Requires access to two databases on the US side and one on the European side.

  8. Another example: searching for a new job (1)

  9. Another example: searching for a new job (2) • Each form (site) asks for a slightly different set of attributes (e.g., keywords describing the job, location and job category, or employer and job type) • Ideally, we would like a single web site where we pose our queries and have that site integrate data from all relevant sites on the Web.

  10. Goal of data integration • Offer uniform access to a set of autonomous and heterogeneous data sources: • Querying disparate sources • Large number of sources • Heterogeneous data sources (different systems, different schemas, some structured, others unstructured) • Autonomous data sources: we may not have full access to the data, or a source may not be available all the time.

  11. Why is it hard? • Systems reasons: even with the same HW and all relational sources, the SQL supported is not always the same • Logical reasons: different schemas (e.g., FullServe and EuroCard temporary employees), different attributes (e.g., ID), different attribute names for the same information (e.g., text and action), different representations of data (e.g., first name, last name), collectively known as semantic heterogeneity • Social and administrative reasons

  12. Building a Data Integration System Create a middleware “mediator” or “data integration system” over the sources • Can be warehoused (a data warehouse) or virtual • Presents a uniform query interface and schema • Abstracts away multitude of sources; consults them for relevant data • Unifies different source data formats (and possibly schemas) • Sources are generally autonomous, not designed to be integrated • Sources may be local DBs or remote web sources/services • Sources may require certain input to return output (e.g., web forms): “binding patterns” describe these

  13. Logical components of a virtual integration system • Mediated schema: built for the data integration application and contains only the aspects of the domain relevant to the application. It will most likely contain a subset of the attributes seen in the sources • Source descriptions: specify the properties of the sources that the system needs to know in order to use their data. The main components are semantic mappings, which specify how attributes in the sources correspond to attributes in the mediated schema. Other information includes whether sources are complete • Wrappers: programs whose role is to send queries to a data source, receive answers, and possibly apply some transformation to the answers

  14. A data integration scenario

  15. Components of a data integration system

  16. Query reformulation (1) • Rewrite the user query, posed in terms of the relations in the mediated schema, into queries referring to the schemas of the data sources • The result is called a logical query plan • Ex: SELECT title, startTime FROM Movie, Plays WHERE Movie.title = Plays.movie AND location="New York" AND director="Woody Allen"

  17. Query reformulation (2) • Tuples for Movie can be obtained from source S1, but the attribute title needs to be reformulated to name • Tuples for Plays can be obtained from S2 or S3. Since S2 is complete for showings in NY, we choose it • Since source S3 requires the title of a movie as input and the title is not specified in the query, the query plan must first access S1 and then feed the movie titles returned from S1 as inputs to S3.
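Conceptually, the reformulation step consults source descriptions to rename mediated-schema attributes into each source's vocabulary. A minimal sketch of that lookup is below; the catalog contents (S1 exposing title as name, etc.) follow the running example, but the data structure and function are hypothetical, not part of any real system:

```python
# Hypothetical source catalog: for each mediated-schema relation, which
# source provides it and how its attribute names map to that source.
CATALOG = {
    "Movie": {"source": "S1",
              "attrs": {"title": "name", "director": "director"}},
    "Plays": {"source": "S2",
              "attrs": {"movie": "movie", "location": "location",
                        "startTime": "startTime"}},
}

def reformulate(relation, attrs):
    """Rewrite mediated-schema attribute names into a source's vocabulary."""
    entry = CATALOG[relation]
    renamed = [entry["attrs"][a] for a in attrs]
    return entry["source"], renamed

# The Movie part of the example query: title must become name at S1.
src, cols = reformulate("Movie", ["title", "director"])
```

A real reformulator also chooses among alternative sources (S2 vs. S3 for Plays) and respects binding patterns; this sketch shows only the renaming step.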

  18. Query optimization • Accepts a logical query plan as input and produces a physical query plan • Specifies the exact order in which sources are accessed, when results are combined, and which algorithms are used for performing operations on the data

  19. Query execution • Responsible for the execution of the physical query plan • Dispatches the queries to the individual sources through the wrappers and combines the results as specified by the query plan. • Also may ask the optimizer to reconsider its plan based on its monitoring of the plan’s progress (e.g., if source S3 is slow)

  20. Challenges of Mapping Schemas In a perfect world, it would be easy to match up items from one schema with another • Every table would have a similar table in the other schema • Every attribute would have an identical attribute in the other schema • Every value would clearly map to a value in the other schema Real world: as with human languages, things don’t map clearly! • May have different numbers of tables – different decompositions • Metadata in one relation may be data in another • Values may not exactly correspond • It may be unclear whether a value is the same

  21. Different Aspects to Mapping • Schema matching / ontology alignment: how do we find correspondences between attributes? • Entity matching / deduplication / record linkage / etc.: how do we know when two records refer to the same thing? • Mapping definition: how do we specify the constraints or transformations that let us reason about when to create an entry in one schema, given an entry in another schema?

  22. Why Schema Matching is Important (diagram) Whenever an application involves more than one schema, there is a need for schema matching: data integration within and across enterprises, data translation, data warehousing, e-commerce, the World-Wide Web, ontology matching between knowledge bases, and information agents serving home users

  23. Why Schema Matching is Difficult • No access to the exact semantics of concepts • Semantics not documented in sufficient detail • Schemas not adequately expressive to capture semantics • Must rely on clues in schema & data • Using names, structures, types, data values, etc. • Such clues can be unreliable • Synonyms: different names => same entity: • area & address => location • Homonyms: same names => different entities: • area => location or square-feet • In practice, matching is done manually by domain experts • Expensive and time consuming

  24. The LSD Project Reconciling Schemas of Disparate Data Sources: A Machine Learning Approach AnHai Doan, Pedro Domingos, Alon Halevy University of Washington SIGMOD 2001

  25. Charlie comes to town • Find houses with 2 bedrooms priced under 300K • Relevant sources: realestate.com, homeseekers.com, homes.com

  26. Data Integration (architecture diagram) • Query: find houses with 2 bedrooms priced under 300K • The query is posed against a mediated schema, which maps to source schemas 1, 2, and 3 • Each source schema is accessed through a wrapper over realestate.com, homeseekers.com, and homes.com

  27. The LSD (Learning Source Descriptions) Approach Suppose user wants to integrate 100 data sources 1. User • manually creates mappings for a few sources, say 3 • shows LSD these mappings 2. LSD learns from the mappings 3. LSD proposes mappings for remaining 97 sources

  28. Semantic Mappings between Schemas • Mediated & source schemas = XML DTDs (diagram) • Mediated schema: house, address, contact-info, num-baths, agent-name, agent-phone • Source schema: house, location, contact, full-baths, half-baths, name, phone • address to location is a 1-1 mapping; num-baths to full-baths/half-baths is a non-1-1 mapping

  29. Example (diagram) • Mediated schema: address, price, agent-phone, description • Schema of realestate.com: location, listed-price, phone, comments • Sample realestate.com data: location: Miami, FL / Boston, MA; listed-price: $250,000 / $110,000; phone: (305) 729 0831 / (617) 253 1429; comments: Fantastic house / Great location • Learned hypotheses: if "phone" occurs in the name => agent-phone; if "fantastic" & "great" occur frequently in data values => description • homes.com: price ($550,000, $320,000), contact-phone ((278) 345 7215, (617) 335 2315), extra-info (Beautiful yard, Great beach)

  30. LSD Contributions 1. Use of multi-strategy learning • well-suited to exploit multiple types of knowledge • highly modular & extensible 2. Extend learning to incorporate constraints • handle a wide range of domain & user-specified constraints 3. Develop XML learner • exploit hierarchical nature of XML

  31. Multi-Strategy Learning Use a set of base learners • Each is well suited to exploit certain types of information: • The Name learner looks at words in the attribute names • The Naïve Bayes learner looks at patterns in the data values • Etc. Match schema elements of a new source • Apply the base learners • Each returns a score • For different attributes, one learner is more useful than another • Combine their predictions using a meta-learner Meta-learner • Uses training sources to measure base learner accuracy • Weights each learner based on its accuracy
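The linear weighted combination the meta-learner computes can be sketched as follows; the learner names, weights, and scores here are made-up illustrations, not values from the paper:

```python
def combine(predictions, weights):
    """Linearly combine base-learner scores with per-learner weights.

    predictions: {learner: {label: score}}, weights: {learner: weight}.
    Returns (label, combined_score) pairs, highest score first.
    """
    combined = {}
    for learner, scores in predictions.items():
        w = weights[learner]
        for label, score in scores.items():
            combined[label] = combined.get(label, 0.0) + w * score
    return sorted(combined.items(), key=lambda kv: -kv[1])

# Illustrative scores: the Naive Bayes learner is weighted higher because
# it was more accurate on the training sources.
preds = {
    "name":  {"address": 0.5, "description": 0.5},
    "bayes": {"address": 0.8, "description": 0.2},
}
ranking = combine(preds, {"name": 0.3, "bayes": 0.7})
```

Here "address" wins (0.3 × 0.5 + 0.7 × 0.8 = 0.71) even though the name learner was undecided, which is the point of weighting learners by measured accuracy.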

  32. Base Learners • Input • schema information: name, proximity, structure, ... • data information: value, format, ... • Output • prediction weighted by confidence score • Examples • Name learner • agent-name => (name,0.7), (phone,0.3) • Naive Bayes learner • “Kent, WA” => (address,0.8), (name,0.2) • “Great location” => (description,0.9), (address,0.1)

  33. The two phases of LSD (diagram) • Training Phase and Matching Phase • Base learners: Name Learner, XML learner, Naive Bayes, Whirl learner • Meta-learner: uses stacking [Ting&Witten99, Wolpert92]; returns a linear weighted combination of the base learners' predictions • Other components: mediated schema, source schemas, data listings, training data for base learners, domain constraints, user feedback, constraint handler, mapping combination

  34. Training phase • Manually specify 1-1 mappings for several sources • Extract source data • Create training data for each base learner • Train the base learners • Train the meta-learner

  35. Training the Learners (diagram) • Mediated schema: address, price, agent-phone, description; realestate.com schema: location, listed-price, phone, comments • Sample realestate.com listings: <location> Miami, FL </>, <listed-price> $250,000 </>, <phone> (305) 729 0831 </>, <comments> Fantastic house </>; <location> Boston, MA </>, <listed-price> $110,000 </>, <phone> (617) 253 1429 </>, <comments> Great location </> • Name Learner training pairs: (location, address), (listed-price, price), (phone, agent-phone), (comments, description), ... • Naive Bayes Learner training pairs: ("Miami, FL", address), ("$250,000", price), ("(305) 729 0831", agent-phone), ("Fantastic house", description), ...

  36. The matching phase • Extract and collect data • Match each source-DTD tag • Apply the constraint handler

  37. Figure 5 - SIGMOD’01 paper

  38. Applying the Learners (diagram) • Mediated schema: address, price, agent-phone, description; homes.com schema: area, day-phone, extra-info • <area> values (Seattle, WA; Kent, WA; Austin, TX): Name Learner gives (address,0.8), (description,0.2); Naive Bayes gives (address,0.6), (description,0.4); the Meta-Learner combines them into (address,0.7), (description,0.3) • <day-phone> values ((278) 345 7215; (617) 335 2315; (512) 427 1115): Meta-Learner gives (agent-phone,0.9), (description,0.1) • <extra-info> values (Beautiful yard; Great beach; Close to Seattle): Meta-Learner gives (address,0.6), (description,0.4)

  39. Domain Constraints • Impose semantic regularities on sources • verified using schema or data • Examples • a = address & b = address => a = b • a = house-id => a is a key • a = agent-info & b = agent-name => b is nested in a • Can be specified up front • when creating the mediated schema • independent of any actual source schema

  40. The Constraint Handler • Predictions from the Meta-Learner: area: (address,0.7), (description,0.3); contact-phone: (agent-phone,0.9), (description,0.1); extra-info: (address,0.6), (description,0.4) • Domain constraint: a = address & b = address => a = b • Candidate mappings and scores: {area: address, contact-phone: agent-phone, extra-info: address} scores 0.7 × 0.9 × 0.6 = 0.378 but violates the constraint; {area: address, contact-phone: agent-phone, extra-info: description} scores 0.7 × 0.9 × 0.4 = 0.252; the all-description candidate scores 0.3 × 0.1 × 0.4 = 0.012 • Can specify arbitrary constraints • User feedback = domain constraint (e.g., ad-id = house-id) • Extended to handle domain heuristics (e.g., a = agent-phone & b = agent-name => a & b are usually close to each other)
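A minimal sketch of the scoring-and-pruning idea: enumerate full mappings, score each by the product of per-attribute confidences, and discard mappings that violate constraints. The prediction numbers follow the slide; generalizing the address constraint to "no two attributes map to the same target" is an assumption of this sketch:

```python
from itertools import product

# Meta-learner predictions for homes.com, as (target, confidence) pairs.
preds = {
    "area":          [("address", 0.7), ("description", 0.3)],
    "contact-phone": [("agent-phone", 0.9), ("description", 0.1)],
    "extra-info":    [("address", 0.6), ("description", 0.4)],
}

def one_to_one(assignment):
    """Constraint check: no two attributes may map to the same target."""
    targets = [t for t, _ in assignment.values()]
    return len(targets) == len(set(targets))

def best_mapping(preds):
    """Pick the highest-scoring full mapping that satisfies the constraint."""
    attrs = list(preds)
    best, best_score = None, -1.0
    for choice in product(*preds.values()):
        assignment = dict(zip(attrs, choice))
        if not one_to_one(assignment):
            continue  # violates a = X & b = X => a = b
        score = 1.0
        for _, conf in choice:
            score *= conf
        if score > best_score:
            best, best_score = assignment, score
    return best, best_score

mapping, score = best_mapping(preds)
```

As on the slide, the raw best candidate (both area and extra-info mapped to address, score 0.378) is pruned, so the handler settles on extra-info => description with score 0.252.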

  41. Exploiting Hierarchical Structure • Existing learners flatten out all structures • Developed XML learner • similar to the Naive Bayes learner • input instance = bag of tokens • differs in one crucial aspect • consider not only text tokens, but also structure tokens <contact> <name> Gail Murphy </name> <firm> MAX Realtors </firm> </contact> <description> Victorian house with a view. Name your price! To see it, contact Gail Murphy at MAX Realtors. </description>
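The bag-of-tokens idea with structure tokens can be sketched as below; this is a simplification of the XML learner (which runs a Naive Bayes variant over such tokens), not the actual implementation:

```python
import xml.etree.ElementTree as ET

def bag_of_tokens(xml_string):
    """Flatten an XML fragment into a bag of structure + text tokens."""
    root = ET.fromstring(xml_string)
    tokens = []
    for elem in root.iter():
        tokens.append("<%s>" % elem.tag)      # structure token
        if elem.text and elem.text.strip():
            tokens.extend(elem.text.split())  # text tokens
    return tokens

# The <contact> instance keeps its hierarchy as <name>/<firm> tokens;
# the same words in a flat <description> would lack those tokens.
toks = bag_of_tokens(
    "<contact><name>Gail Murphy</name><firm>MAX Realtors</firm></contact>")
```

The structure tokens are what let the learner distinguish "Gail Murphy" inside a contact element from the same words buried in free-text description values.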

  42. Related Work • Rule-based approaches • TRANSCM [Milo&Zohar98], ARTEMIS [Castano&Antonellis99], [Palopoli et al. 98], CUPID [Madhavan et al. 01] • utilize only schema information • Learner-based approaches • SEMINT [Li&Clifton94], ILA [Perkowitz&Etzioni95] • employ a single learner, limited applicability • Others • DELTA [Clifton et al. 97], CLIO [Miller et al. 00][Yan et al. 01] • Multi-strategy learning in other domains • series of workshops [91,93,96,98,00] • [Freitag98], Proverb [Keim et al. 99]

  43. Summary • LSD project • applies machine learning to schema matching • Main ideas & contributions • use of multi-strategy learning • extend learning to handle domain & user-specified constraints • develop XML learner • System design: A contribution to generic schema-matching • highly modular & extensible • handle multiple types of knowledge • continuously improve over time

  44. End LSD

  45. Mappings between Schemas LSD provides attribute correspondences, but not complete mappings Mappings generally are posed as views: define relations in one schema (typically either the mediated schema or the source schema), given data in the other schema • This allows us to “restructure” or “recompose + decompose” our data in a new way We can also define mappings between values in a view • We use an intermediate table defining correspondences – a “concordance table” • It can be filled in using some type of code, and corrected by hand

  46. A Few Mapping Examples • T1: Movie(Title, Year, Director, Editor, Star1, Star2) mapped to PieceOfArt(ID, Artist, Subject, Title, TypeOfArt): PieceOfArt(I, A, S, T, "Movie") :- Movie(T, Y, A, _, S1, S2), I = T || Y, S = S1 || S2 • T2: Movie(Title, Year, Director, Editor, Star1, Star2) mapped to MotionPicture(ID, Title, Year), Participant(ID, Name, Role): Movie(T, Y, D, E, S1, S2) :- MotionPicture(I, T, Y), Participant(I, D, "Dir"), Participant(I, E, "Editor"), Participant(I, S1, "Star1"), Participant(I, S2, "Star2") • Need a concordance table from CustIDs to PennIDs
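A concordance table is simply a two-column translation table between value vocabularies, joined in during mapping. A minimal sketch, with hypothetical CustID/PennID values:

```python
# Hypothetical concordance table pairing CustIDs with PennIDs.
# In practice it is filled in by code and corrected by hand.
concordance = [
    ("C-101", "P-9001"),
    ("C-102", "P-9002"),
]

cust_to_penn = dict(concordance)

def translate(cust_id):
    """Map a CustID to the corresponding PennID, or None if unknown."""
    return cust_to_penn.get(cust_id)

pid = translate("C-101")
```

At query time the table acts like an extra relation joined between the two schemas, so an unknown ID simply drops out of (or is flagged in) the result.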

  47. Two Important Approaches • TSIMMIS [Garcia-Molina+97] – Stanford • Focus: semistructured data (OEM), OQL-based language (Lorel) • Creates a mediated schema as a view over the sources • Spawned a UCSD project called MIX, which led to a company now owned by BEA Systems • Other important systems of this vein: Kleisli/K2 @ Penn • Information Manifold [Levy+96] – AT&T Research • Focus: local-as-view mappings, relational model • Sources defined as views over mediated schema • Led to peer-to-peer integration approaches (Piazza, etc.) • Focus: Web-based queriable sources

  48. TSIMMIS • One of the first systems to support semi-structured data, which predated XML by several years: “OEM” • An instance of a “global-as-view” mediation system • We define our global schema as views over the sources

  49. XML vs. Object Exchange Model
  XML:
  <book> <author>Bernstein</author> <author>Newcomer</author> <title>Principles of TP</title> </book>
  <book> <author>Chamberlin</author> <title>DB2 UDB</title> </book>
  OEM:
  O1: book { O2: author { Bernstein } O3: author { Newcomer } O4: title { Principles of TP } }
  O5: book { O6: author { Chamberlin } O7: title { DB2 UDB } }

  50. Queries in TSIMMIS Specified in an OQL-style language called Lorel • OQL was an object-oriented query language that looks like SQL • Lorel is, in many ways, a predecessor to XQuery Based on path expressions over OEM structures: select book where book.title = "DB2 UDB" and book.author = "Chamberlin" This is basically like XQuery, which we'll use in place of Lorel and the MSL template language. Previous query restated: for $b in AllData()/book where $b/title/text() = "DB2 UDB" and $b/author/text() = "Chamberlin" return $b
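Evaluating such a path-expression query amounts to navigating nested objects and filtering on their components. A rough sketch with the OEM book data modeled as Python structures (a simplification for intuition, not TSIMMIS code):

```python
# OEM-like objects as nested Python structures: each book has a set of
# labeled subobjects (authors, title), mirroring O1..O7 on the slide.
db = [
    {"type": "book",
     "author": ["Bernstein", "Newcomer"], "title": "Principles of TP"},
    {"type": "book",
     "author": ["Chamberlin"], "title": "DB2 UDB"},
]

def select_books(title, author):
    """Rough analogue of: select book where book.title = ... and
    book.author = ... (author matches any author subobject)."""
    return [b for b in db
            if b["title"] == title and author in b["author"]]

hits = select_books("DB2 UDB", "Chamberlin")
```

Note that `book.author` matches if any author subobject has the value, which is why set-valued membership (`in`) rather than equality models the path-expression semantics here.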
