Data warehouses and Online Analytical Processing



  1. Data warehouses and Online Analytical Processing

  2. Lecture Outline • What is a data warehouse? • A multi-dimensional data model • Data warehouse architecture • Data warehouse implementation • From data warehousing to data mining

  3. What is a Data Warehouse? • Defined in many different ways, but not rigorously. • A decision support database that is maintained separately from the organization’s operational database. • Supports information processing by providing a solid platform of consolidated, historical data for analysis.

  4. “A data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management’s decision-making process.”—W. H. Inmon, a leading architect in the DW development • Data warehousing: • The process of constructing and using data warehouses

  5. Subject-Oriented • Organized around major subjects, such as customer, product, sales. • Focusing on the modeling and analysis of data for decision makers, not on daily operations or transaction processing. • Provide a simple and concise view around particular subject issues by excluding data that are not useful in the decision support process.

  6. Integrated • Constructed by integrating multiple, heterogeneous data sources • relational databases, flat files, on-line transaction records • Data cleaning and data integration techniques are applied. • Ensure consistency in naming conventions, encoding structures, attribute measures, etc. among different data sources • E.g., Hotel price: currency, tax, breakfast covered, etc. • When data is moved to the warehouse, it is converted.

  7. Time Variant • The time horizon for the data warehouse is significantly longer than that of operational systems. • Operational database: current value data. • Data warehouse data: provide information from a historical perspective (e.g., past 5-10 years) • Every key structure in the data warehouse • Contains an element of time, explicitly or implicitly • The key of operational data may or may not contain “time element”.

  8. Non-Volatile • A physically separate store of data transformed from the operational environment. • Operational update of data does not occur in the data warehouse environment. • Does not require transaction processing, recovery, and concurrency control mechanisms • Requires only two operations in data accessing: • initial loading of data and access of data.

  9. Data Warehouse vs. Operational DBMS (OLAP vs. OLTP) • OLTP (on-line transaction processing) • Major task of traditional relational DBMS • Day-to-day operations: purchasing, inventory, banking, manufacturing, payroll, registration, accounting, etc. • OLAP (on-line analytical processing) • Major task of data warehouse system • Data analysis and decision making • Distinct features (OLTP vs. OLAP): • User and system orientation: customer vs. market • Data contents: current, detailed vs. historical, consolidated • Database design: ER + application vs. star + subject • View: current, local vs. evolutionary, integrated • Access patterns: update vs. read-only but complex queries

  10. OLTP vs. OLAP

  11. Why Create a Separate Data Warehouse? • High performance for both systems • DBMS— tuned for OLTP: access methods, indexing, concurrency control, recovery • Warehouse—tuned for OLAP: complex OLAP queries, multidimensional view, consolidation. • Different functions and different data: • missing data: Decision support requires historical data which operational DBs do not typically maintain • data consolidation: DS requires consolidation (aggregation, summarization) of data from heterogeneous sources • data quality: different sources typically use inconsistent data representations, codes and formats which have to be reconciled

  12. Lecture Outline • What is a data warehouse? • A multi-dimensional data model • Data warehouse architecture • Data warehouse implementation • From data warehousing to data mining

  13. From Tables and Spreadsheets to Data Cubes • A data warehouse is based on a multidimensional data model, which views data in the form of a data cube • In the data warehousing literature, an n-D base cube is called a base cuboid. The topmost 0-D cuboid, which holds the highest level of summarization, is called the apex cuboid. The lattice of cuboids forms a data cube.

  14. A data cube, such as sales, allows data to be modeled and viewed in multiple dimensions • Dimension tables, such as item (item_name, brand, type), or time(day, week, month, quarter, year) • Fact table contains measures (such as dollars_sold) and keys to each of the related dimension tables

  15. Cube: A Lattice of Cuboids
  • 0-D (apex) cuboid: all
  • 1-D cuboids: time; item; location; supplier
  • 2-D cuboids: (time, item); (time, location); (time, supplier); (item, location); (item, supplier); (location, supplier)
  • 3-D cuboids: (time, item, location); (time, item, supplier); (time, location, supplier); (item, location, supplier)
  • 4-D (base) cuboid: (time, item, location, supplier)
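The lattice of cuboids is simply the powerset of the dimension set. As a minimal sketch (the function name `cuboid_lattice` is my own, not from the lecture), the cuboids of an n-D cube can be enumerated like this:

```python
from itertools import combinations

def cuboid_lattice(dimensions):
    """Enumerate every cuboid (subset of dimensions) in the lattice,
    from the 0-D apex cuboid up to the n-D base cuboid."""
    lattice = []
    for k in range(len(dimensions) + 1):
        for combo in combinations(dimensions, k):
            lattice.append(combo)
    return lattice

dims = ["time", "item", "location", "supplier"]
lattice = cuboid_lattice(dims)
print(len(lattice))  # a 4-D cube has 2^4 = 16 cuboids
```

An n-D cube always yields 2^n cuboids, which is why precomputing the full lattice becomes expensive as dimensions grow.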

  16. Conceptual Modeling of Data Warehouses • Modeling data warehouses: dimensions & measures • Star schema: a fact table in the middle connected to a set of dimension tables • Snowflake schema: a refinement of the star schema where some dimensional hierarchy is normalized into a set of smaller dimension tables, forming a shape similar to a snowflake • Fact constellations: multiple fact tables share dimension tables; viewed as a collection of stars, it is therefore also called a galaxy schema or fact constellation

  17. Example of Star Schema
  • Sales fact table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales
  • time dimension table: time_key, day, day_of_the_week, month, quarter, year
  • item dimension table: item_key, item_name, brand, type, supplier_type
  • branch dimension table: branch_key, branch_name, branch_type
  • location dimension table: location_key, street, city, province_or_state, country

  18. Example of Snowflake Schema
  • Sales fact table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales
  • time dimension table: time_key, day, day_of_the_week, month, quarter, year
  • item dimension table: item_key, item_name, brand, type, supplier_key → supplier table: supplier_key, supplier_type
  • branch dimension table: branch_key, branch_name, branch_type
  • location dimension table: location_key, street, city_key → city table: city_key, city, province_or_state, country

  19. Note that the only difference between star and snowflake schemas is the normalization of one or more dimension tables, thus reducing redundancy. • Is that worth it? Since dimension tables are small compared to fact tables, the saving in space turns out to be negligible. • In addition, the need for extra joins introduced by the normalization reduces the computational efficiency. For this reason, the snowflake schema model is not as popular as the star schema model.

  20. Example of Fact Constellation
  • Sales fact table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales
  • Shipping fact table: time_key, item_key, shipper_key, from_location, to_location; measures: dollars_cost, units_shipped
  • Shared dimension tables: time (time_key, day, day_of_the_week, month, quarter, year); item (item_key, item_name, brand, type, supplier_type); location (location_key, street, city, province_or_state, country)
  • Other dimension tables: branch (branch_key, branch_name, branch_type); shipper (shipper_key, shipper_name, location_key, shipper_type)

  21. Note that those designs apply to data warehouses as well as to data marts. • A data warehouse spans an entire enterprise, while a data mart is restricted to a single department and is usually a subset of the data warehouse. • The star and snowflake schemas are more popular for data marts, since they model single subjects, with the former being more efficient for the reasons discussed above.

  22. A Data Mining Query Language, DMQL: Language Primitives • Language used to build and query DWs • Cube definition (fact table): define cube <cube_name> [<dimension_list>]: <measure_list> • Dimension definition (dimension table): define dimension <dimension_name> as (<attribute_or_subdimension_list>)

  23. Defining a Star Schema in DMQL
  define cube sales_star [time, item, branch, location]:
      dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
  define dimension time as (time_key, day, day_of_week, month, quarter, year)
  define dimension item as (item_key, item_name, brand, type, supplier_type)
  define dimension branch as (branch_key, branch_name, branch_type)
  define dimension location as (location_key, street, city, province_or_state, country)

  24. Defining a Snowflake Schema in DMQL
  define cube sales_snowflake [time, item, branch, location]:
      dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
  define dimension time as (time_key, day, day_of_week, month, quarter, year)
  define dimension item as (item_key, item_name, brand, type, supplier(supplier_key, supplier_type))
  define dimension branch as (branch_key, branch_name, branch_type)
  define dimension location as (location_key, street, city(city_key, province_or_state, country))

  25. Defining a Fact Constellation in DMQL
  define cube sales [time, item, branch, location]:
      dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars), units_sold = count(*)
  define dimension time as (time_key, day, day_of_week, month, quarter, year)
  define dimension item as (item_key, item_name, brand, type, supplier_type)
  define dimension branch as (branch_key, branch_name, branch_type)
  define dimension location as (location_key, street, city, province_or_state, country)
  define cube shipping [time, item, shipper, from_location, to_location]:
      dollar_cost = sum(cost_in_dollars), unit_shipped = count(*)
  define dimension time as time in cube sales
  define dimension item as item in cube sales
  define dimension shipper as (shipper_key, shipper_name, location as location in cube sales, shipper_type)
  define dimension from_location as location in cube sales
  define dimension to_location as location in cube sales

  26. Measures • A data cube measure is a numerical function that can be evaluated at each point in the data cube space. • The data cube space is the set of points defined by the dimensions in that space. For example, the point {Time=“2001”, Item=”milk”, Location=”Fargo”} is the point in the space defined by aggregating the data across the given dimension values. • Points can be joined to form multidimensional planes by removing dimensions such as {Time=“2001”, Item=”milk”}.

  27. Three categories exist for data warehouse measures. • The distributive category includes all aggregate functions that, when applied to different chunks of the data and then to the results of those chunks, yield the same result as when applied over the whole dataset. • For example, after dividing a dataset into n parts and applying the SUM() function on each, we get n sums. Applying SUM() again on the n sums results in the same value as if we had applied SUM() on the whole dataset in the first place. Other distributive functions include MIN(), MAX(), and COUNT().
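The distributivity property is easy to verify directly. A minimal sketch (the chunking and sample values are illustrative, not from the lecture):

```python
# A distributive measure gives the same result whether applied to the
# whole dataset or to the partial aggregates of its chunks.
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Split into chunks, aggregate each chunk, then aggregate the partials.
chunks = [data[0:4], data[4:8]]
partial_sums = [sum(c) for c in chunks]

assert sum(partial_sums) == sum(data)            # SUM is distributive
assert max(max(c) for c in chunks) == max(data)  # so is MAX
# COUNT is distributive too: the overall count is the SUM of the
# per-chunk counts (not the COUNT of the counts).
assert sum(len(c) for c in chunks) == len(data)
```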

  28. The second category of measures is the algebraic category, which includes functions that can be computed by an algebraic function with M arguments (M a bounded positive integer), each of which is obtained by a distributive function. • For example, AVG() is calculated as SUM()/COUNT(), both of which are distributive. • The third and last category is the holistic category, which includes aggregate functions that fit neither of the aforementioned categories. • Examples include MODE() and MEDIAN().
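To illustrate why AVG() is algebraic rather than distributive, the sketch below (illustrative values only) computes it from the two distributive partials SUM and COUNT, and shows that naively averaging the partial averages gives the wrong answer:

```python
# AVG cannot be combined from partial averages, but it can be computed
# from two distributive partials per chunk: (SUM, COUNT).
data = [3, 1, 4, 1, 5, 9, 2, 6]
chunks = [data[0:3], data[3:8]]          # deliberately unequal sizes

partials = [(sum(c), len(c)) for c in chunks]   # (SUM, COUNT) per chunk
total_sum = sum(s for s, _ in partials)
total_count = sum(n for _, n in partials)
avg = total_sum / total_count

assert avg == sum(data) / len(data)

# Averaging the partial averages fails when chunks differ in size:
naive = sum(s / n for s, n in partials) / len(partials)
assert naive != avg
```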

  29. Concept Hierarchies • Formally speaking, a concept hierarchy defines a sequence of mappings from a set of low-level concepts to higher-level concepts. • For example, the time dimension has the following two sequences: • first, a day maps to a month, a month maps to a quarter, and a quarter maps to a year; • second, a day maps to a week, and a week maps to a year. • This is a partial order because not all items are related (e.g., weeks and months are unrelated, because weeks can span months).

  30. An example of a total order is the location hierarchy: • a street maps to a city, a city maps to a state, and a state maps to a country. • Concept hierarchies are usually defined over categorical attributes; however, continuous attributes can be discretized, thus forming concept hierarchies. • For example, for a “price” dimension with values in [0, 1000], we can form the discretization [0, 100), [100, 500), [500, 1000), etc., where “[” is inclusive and “)” is exclusive.
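The price discretization above can be sketched in a few lines (the `price_bucket` helper is hypothetical, not from the lecture):

```python
import bisect

# Discretize a continuous "price" attribute into the half-open
# intervals [0, 100), [100, 500), [500, 1000).
boundaries = [0, 100, 500, 1000]
labels = ["[0,100)", "[100,500)", "[500,1000)"]

def price_bucket(price):
    """Map a price to its concept-hierarchy bucket:
    '[' is inclusive, ')' is exclusive."""
    if not boundaries[0] <= price < boundaries[-1]:
        raise ValueError("price out of range")
    return labels[bisect.bisect_right(boundaries, price) - 1]

print(price_bucket(99.99))  # [0,100)
print(price_bucket(100))    # [100,500)  -- boundary falls in the upper bucket
```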

  31. Store dimension hierarchy: Total → Region → District → Stores

  32. A Concept Hierarchy: Dimension (location)
  • all: all
  • region: Europe, ..., North_America
  • country: Germany, ..., Spain; Canada, ..., Mexico
  • city: Frankfurt, ...; Vancouver, ..., Toronto
  • office: L. Chan, ..., M. Wind

  33. Multidimensional Data • Sales volume as a function of product, month, and region • Dimensions: Product, Location, Time • Hierarchical summarization paths:
  • Product: Industry → Category → Product
  • Location: Region → Country → City → Office
  • Time: Year → Quarter → Month / Week → Day

  34. A Sample Data Cube • A 3-D sales cube with dimensions Date (1Qtr, 2Qtr, 3Qtr, 4Qtr, sum), Product (TV, PC, VCR, sum), and Country (U.S.A, Canada, Mexico, sum) • A cell such as (TV, sum over Date, U.S.A) gives the total annual sales of TVs in the U.S.A; the cell (All, All, All) holds the grand total.

  35. Cuboids Corresponding to the Cube
  • 0-D (apex) cuboid: all
  • 1-D cuboids: product; date; country
  • 2-D cuboids: (product, date); (product, country); (date, country)
  • 3-D (base) cuboid: (product, date, country)

  36. Typical OLAP Operations • Roll up (drill-up): summarize data • by climbing up a hierarchy or by dimension reduction • Drill down (roll down): reverse of roll-up • from higher-level summary to lower-level summary or detailed data, or introducing new dimensions • Slice and dice: • project and select • Pivot (rotate): • reorient the cube for visualization, e.g., 3-D to a series of 2-D planes • Other operations • drill across: involving (across) more than one fact table • drill through: through the bottom level of the cube to its back-end relational tables (using SQL)

  37. Roll-up • Performs aggregation on a data cube either by • climbing up a concept hierarchy: • e.g., a “sales” cube defined by {Time = “January 2002”, Location = “Fargo”} could be rolled up by requiring Time = “2002” (month to year) or Location = “North Dakota” (city to state), or both; • or by dimension reduction (i.e., removing one or more dimensions from the view): • e.g., viewing the cube with the location dimension removed, thus aggregating the sales across all locations.
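Roll-up by dimension reduction can be sketched over a simple dict-based cuboid (the `roll_up_remove` helper and the sample figures are illustrative, not from the lecture):

```python
from collections import defaultdict

# A cuboid modeled as a dict mapping (time, location) keys to a sales
# measure; removing the location dimension aggregates sales across
# all locations.
sales = {
    ("2002-01", "Fargo"):   120.0,
    ("2002-01", "Toronto"):  80.0,
    ("2002-02", "Fargo"):    95.0,
}

def roll_up_remove(cuboid, drop_index):
    """Remove the dimension at drop_index from every key,
    summing the measure over the collapsed dimension."""
    result = defaultdict(float)
    for key, measure in cuboid.items():
        reduced = key[:drop_index] + key[drop_index + 1:]
        result[reduced] += measure
    return dict(result)

by_time = roll_up_remove(sales, drop_index=1)
print(by_time)  # {('2002-01',): 200.0, ('2002-02',): 95.0}
```

Climbing a concept hierarchy works the same way, except the key component is mapped to its parent concept (e.g. month to year) instead of being dropped.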

  38. Drill-down • Just the opposite of the ROLL-UP operation. • Navigates from less detailed to more detailed data views by either stepping down a concept hierarchy or introducing additional dimensions. • e.g., a “sales” cube defined by {Time = “January 2002”, Location = “Fargo”} could be drilled down by requiring Time = “24 January 2002” or Location = “Cass County, Fargo”, or both. • To drill down by dimension addition, we could view the cube with the item dimension added.

  39. The SLICE operation performs a selection on one dimension of a given cube. For example, we can slice over Time = “2001”, which gives us the subcube with Time = “2001”. • Similarly, DICE performs selections, but on more than one dimension.
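Slice and dice can be sketched over the same dict-based cuboid representation (the `dice` helper and sample data are illustrative; a slice is just a dice with a single selection):

```python
# A cuboid modeled as a dict mapping (time, item, location) keys
# to a sales measure.
sales = {
    ("2001", "milk",  "Fargo"):   10.0,
    ("2001", "bread", "Fargo"):    7.0,
    ("2002", "milk",  "Toronto"):  4.0,
}

DIMS = ("time", "item", "location")  # dimension names, in key order

def dice(cuboid, **selections):
    """Keep only the cells matching every given dimension selection."""
    keep = [(DIMS.index(d), v) for d, v in selections.items()]
    return {k: m for k, m in cuboid.items()
            if all(k[i] == v for i, v in keep)}

# Slice: a selection on one dimension.
print(dice(sales, time="2001"))
# {('2001', 'milk', 'Fargo'): 10.0, ('2001', 'bread', 'Fargo'): 7.0}

# Dice: selections on more than one dimension.
print(dice(sales, time="2001", item="milk"))
# {('2001', 'milk', 'Fargo'): 10.0}
```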

  40. Lecture Outline • What is a data warehouse? • A multi-dimensional data model • Data warehouse architecture • Data warehouse implementation • From data warehousing to data mining

  41. Building a Data Warehouse • As mentioned earlier, the process of creating a warehouse goes through the steps of data cleaning, data integration, data transformation, data loading, and refreshing. • Before creating the data warehouse, a design phase usually occurs. • Designing warehouses follows software engineering design standards.

  42. Data Warehouse Design • Top-down, bottom-up approaches, or a combination of both • Top-down: starts with overall design and planning (mature) • Bottom-up: starts with experiments and prototypes (rapid) • The two most ubiquitous design models are the waterfall model and the spiral model. • The waterfall model • has a number of phases and proceeds to a new phase only after completing the current phase; • is rather static, does not give much flexibility for change, and • is only suitable for systems with well-known requirements, as in top-down designs. • The spiral model • is based on incremental development of the data warehouse: • a prototype of a part is built, evaluated, modified, and then integrated into the final system; • is much more flexible in allowing change and is thus better suited to designs without well-known requirements, as in bottom-up designs. • In practice, a combination of both, known as the combined approach, is the best solution.

  43. In order to design a data warehouse, we need to follow a certain number of steps: • understand the business process we are modeling, • select the fact table(s) by viewing the data at the atomic level, • select the dimensions over which we are trying to view the data in the fact table(s), and • finally, choose the measures of interest, such as “amount sold” or “dollars sold”.

  44. Multi-Tiered Architecture
  • Data sources: operational DBs and other sources, feeding the warehouse via Extract, Transform, Load, and Refresh, coordinated by a monitor & integrator with metadata
  • Data storage: the data warehouse and data marts
  • OLAP engine: OLAP server
  • Front-end tools: analysis, query, reports, data mining

  45. Three Data Warehouse Models • Enterprise warehouse • collects all of the information about subjects spanning the entire organization • Data mart • a subset of corporate-wide data that is of value to a specific group of users. Its scope is confined to specific, selected groups, such as a marketing data mart • Independent vs. dependent (sourced directly from the warehouse) data marts • Virtual warehouse • a set of views over operational databases • only some of the possible summary views may be materialized

  46. OLAP Server Architectures • OLAP servers present a logical view of the data, abstracting its physical storage in the data warehouse. • Relational OLAP (ROLAP) servers • sit on top of relational (or extended-relational) DBMSs, which are used to store and manage warehouse data • optimize query throughput and use OLAP middleware to support missing OLAP functionalities • usually the star and snowflake schemas are used here • the base fact table stores the data at the abstraction level indicated by the join keys in the schema for the given cubes

  47. Aggregated data can be stored either in additional fact tables, known as summary fact tables, or in the same base fact table. • Multidimensional OLAP (MOLAP) servers • sit on top of multidimensional databases to store and manage data • facts are usually stored in multidimensional arrays, with dimensions as indexes over the arrays, thus providing fast indexing • suffer from storage utilization problems due to array sparsity; matrix compression can be used to alleviate this problem, especially for sparse subcubes • In terms of scalability, ROLAP tends to outperform MOLAP. Example ROLAP servers include IBM DB2, Oracle, Informix Redbrick, and Sybase IQ.
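The sparsity problem can be made concrete with a rough sketch (the sizes and the coordinate-map representation are illustrative, not a description of any particular MOLAP server):

```python
# A dense multidimensional array reserves one slot per possible cell,
# which wastes space when most cells are empty; a coordinate -> value
# map stores only the cells that exist, one simple form of sparse
# compression.
n_time, n_item, n_loc = 365, 10_000, 500

# Dense storage: one slot per possible cell.
dense_cells = n_time * n_item * n_loc

# Sparse storage: only non-empty cells, keyed by their coordinates.
sparse = {
    (5, 42, 17): 120.0,   # (time idx, item idx, location idx) -> sales
    (5, 42, 18): 35.5,
    (200, 7, 3): 60.0,
}

print(dense_cells)   # 1825000000 potential cells (~1.8 billion)
print(len(sparse))   # only 3 cells actually stored
```

The trade-off is indexing speed: the dense array answers a cell lookup by direct offset arithmetic, while the sparse map pays a hash lookup per cell.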

  48. Figure A.6: Mapping between relational tables and multidimensional arrays

  49. Hybrid OLAP (HOLAP) servers • combine ROLAP and MOLAP. • Data may be stored in relational databases with aggregations in a MOLAP store. • Typically, a HOLAP server stores data in both a relational database and a multidimensional database and uses whichever one is best suited to the type of processing desired. • For data-heavy processing, the data is more efficiently stored in an RDB, while for speculative processing, the data is more effectively stored in an MDDB. • In general, HOLAP combines ROLAP’s scalability with MOLAP’s fast indexing. • An example of HOLAP is Microsoft SQL Server 7.0 OLAP Services.
