
  1. Temple University – CIS Dept. CIS616 – Principles of Data Management V. Megalooikonomou Introduction to Data Mining (based on notes by Jiawei Han and Micheline Kamber and on notes by Christos Faloutsos)

  2. General Overview - rel. model • Relational model - SQL • Functional Dependencies & Normalization • Physical Design; Indexing • Query processing/optimization • Transaction processing • Advanced topics • Distributed Databases • OO- and OR-DBMSs • Data Mining

  3. Agenda • Motivation: Why data mining? • What is data mining? • Data Mining: On what kind of data? • Data mining functionality • Are all the patterns interesting? • Classification of data mining systems • Major issues in data mining

  4. Motivation • Data rich but information poor! • Data explosion problem • Automated data collection tools and mature database technology lead to tremendous amounts of data stored in databases, data warehouses and other information repositories • Solution: Data Mining • Extraction of interesting knowledge (rules, regularities, patterns, constraints) from data in large databases

  5. Evolution of Database Technology • 1960s: • Data collection, database creation, IMS and network DBMS • 1970s: • Relational data model, relational DBMS implementation • 1980s: • RDBMS, advanced data models (extended-relational, OO, deductive, etc.) and application-oriented DBMS (spatial, scientific, engineering, etc.) • 1990s—2000s: • Data mining and data warehousing, multimedia databases, and Web databases

  6. What Is Data Mining? • Data mining (knowledge discovery in databases): • Extraction of interesting (non-trivial, implicit, previously unknown and potentially useful) information or patterns from data in large databases • Alternative names: • Knowledge discovery (mining) in databases (KDD), knowledge extraction, data/pattern analysis, data archeology, information harvesting, business intelligence, etc. • What is not data mining? • (Deductive) query processing. • Expert systems or small ML/statistical programs

  7. What Is Data Mining? • Now that we have gathered so much data, what do we do with it? • Extract interesting patterns (automatically) • Associations (e.g., butter + bread --> milk) • Sequences (e.g., temporal data related to the stock market) • Rules that partition the data (e.g., store location problem) • What patterns are “interesting”? Information content, confidence and support, unexpectedness, actionability (utility in decision making)

  8. Why Data Mining? — Potential Applications • Database analysis and decision support • Market analysis and management • target marketing, market basket analysis,… • Risk analysis and management • Forecasting, quality control, competitive analysis,… • Fraud detection and management • Other Applications • Text mining (newsgroup, email, documents) and Web analysis. • Spatial data mining • Intelligent query answering

  9. Market Analysis and Management • Data sources for analysis? (Credit card transactions, discount coupons, customer complaint calls, etc.) • Target marketing (Find clusters of “model” customers who share the same characteristics: interest, income level, spending habits, etc.) • Customer purchasing patterns over time (Conversion of a single to a joint bank account: marriage, etc.) • Cross-market analysis (Associations between product sales and prediction based on associations) • Customer profiling (which customers buy which products) • Customer requirements (best products for different customers) • Summary information (multidimensional summary reports)

  10. Risk Analysis and Management • Finance planning and asset evaluation • cash flow analysis and prediction • cross-sectional and time series analysis (financial-ratio, trend analysis, etc.) • Resource planning: • summarize and compare the resources and spending • Competition: • monitor competitors and market directions • group customers into classes and a class-based pricing procedure • set pricing strategy in a highly competitive market

  11. Fraud Detection and Management • Applications • health care, retail, credit card services, telecommunications, etc. • Approach • use historical data to build models of normal and fraudulent behavior and use data mining to help identify fraudulent instances • Examples • auto insurance: detect groups who stage accidents to collect insurance • money laundering: detect suspicious money transactions • medical insurance: detect professional patients and rings of doctors, inappropriate medical treatment • detecting telephone fraud: telephone call model (destination of the call, duration, time of day/week); analyze patterns that deviate from the expected norm

  12. Discovery of Medical/Biological Knowledge • Discovery of structure-function associations • Structure of proteins and their function • Human Brain Mapping (lesion-deficit, task-activation associations) • Cell structure (cytoskeleton) and functionality or pathology • Discovery of causal relationships • Symptoms and medical conditions • DNA sequence analysis • Bioinformatics (microarrays, etc.)

  13. Other Applications • Sports • IBM Advanced Scout analyzed NBA game statistics (shots blocked, assists, and fouls) to gain competitive advantage for the New York Knicks and Miami Heat • Astronomy • JPL and the Palomar Observatory discovered 22 quasars with the help of data mining • Internet Web Surf-Aid • IBM Surf-Aid applies data mining algorithms to Web access logs for market-related pages to discover customer preference and behavior, analyze the effectiveness of Web marketing, improve Web site organization, etc.

  14. Data Mining: A KDD Process • Data mining: the core of the knowledge discovery process • Typical flow (figure): Databases --> Data cleaning / Data integration --> Data Warehouse --> Selection of task-relevant data --> Data Mining --> Pattern evaluation --> Knowledge

  15. Steps of a KDD Process • Learning the application domain: • relevant prior knowledge and goals of application • Creating a target data set: data selection • Data cleaning and preprocessing: (may take 60% of effort!) • Data reduction and transformation: • Find useful features, dimensionality/variable reduction, invariant representation. • Choosing functions of data mining • summarization, classification, regression, association, clustering. • Choosing the mining algorithm(s) • Data mining: search for patterns of interest • Pattern evaluation and knowledge presentation • visualization, transformation, removing redundant patterns, etc. • Use of discovered knowledge

  16. Data Mining and Business Intelligence • Pyramid (figure), bottom to top, with increasing potential to support business decisions: • Data Sources: paper, files, information providers, database systems (DBA) • Data Warehouses / Data Marts: OLAP, MDA (DBA) • Data Exploration: statistical analysis, querying and reporting (Data Analyst) • Information Discovery: data mining (Data Analyst) • Data Presentation: visualization techniques (Business Analyst) • Making Decisions (End User)

  17. Architecture of a Typical Data Mining System • Layers (figure), top to bottom: • Graphical user interface • Pattern evaluation • Data mining engine • Knowledge base • Database or data warehouse server • Data cleaning, data integration and filtering • Databases and Data Warehouse

  18. Data Mining: On What Kind of Data? • Relational databases • Data warehouses • Transactional databases • Advanced DB and information repositories • Object-oriented (OO) and object-relational (OR) databases • Spatial databases (medical, satellite image DBs, GIS) • Temporal databases • Text databases • Multimedia databases (image, video, etc.) • Heterogeneous and legacy databases • WWW

  19. Data Mining Functionalities – Patterns that can be mined • Concept description: Characterization and discrimination • Generalize, summarize, and contrast data characteristics, e.g., dry vs. wet regions • Association (correlation and causality) • Multi-dimensional vs. single-dimensional association • age(X, “20..29”) ^ income(X, “20..29K”) --> buys(X, “PC”) [support = 2%, confidence = 60%] • contains(T, “computer”) --> contains(T, “software”) [1%, 75%] • Confidence(x --> y) = P(y|x): degree of certainty of the association • Support(x --> y) = P(x ^ y): % of transactions that satisfy the rule, i.e., contain both x and y
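To make the support and confidence definitions above concrete, here is a minimal Python sketch (not part of the original slides; the transactions and item names are made up for illustration):

    # Illustrative only: support and confidence of a rule X --> Y over toy transactions
    transactions = [
        {"bread", "butter", "milk"},
        {"bread", "butter"},
        {"milk", "beer"},
        {"bread", "butter", "milk", "beer"},
    ]

    def support(itemset, transactions):
        # fraction of transactions that contain every item in `itemset`
        return sum(itemset <= t for t in transactions) / len(transactions)

    def confidence(lhs, rhs, transactions):
        # P(rhs | lhs) = support(lhs and rhs together) / support(lhs)
        return support(lhs | rhs, transactions) / support(lhs, transactions)

    lhs, rhs = {"bread", "butter"}, {"milk"}
    print(support(lhs | rhs, transactions))      # 0.5 (2 of 4 transactions)
    print(confidence(lhs, rhs, transactions))    # 0.666... (2 of the 3 lhs transactions)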

  20. Data Mining Functionalities – Patterns that can be mined • Classification and Prediction • Finding models (e.g., if-then rules, decision trees, mathematical formulae, neural networks, classification rules) that describe and distinguish classes or concepts for future prediction, e.g., classify cars based on gas mileage • Prediction: Predict some unknown or missing numerical values • Cluster analysis • Class label is unknown: Group data to form new classes, e.g., cluster houses to find distribution patterns • Clustering principle: maximize intra-class similarity and minimize inter-class similarity
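As a quick illustration of the difference between classification and clustering (not part of the original slides; assumes scikit-learn is installed, and the feature values are toy data):

    # Classification: learn from labeled examples, then predict a label for a new tuple
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.cluster import KMeans

    X = [[23, 180], [45, 240], [31, 200], [52, 260]]   # e.g., [age, chol-level]
    y = ["-", "+", "-", "+"]                           # known class labels
    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict([[48, 250]]))                    # predicted label for a newcomer

    # Clustering: no labels at all; group the same points into 2 new classes
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)                                  # cluster assignment per point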

  21. Data Mining Functionalities – Patterns that can be mined • Outlier analysis • Outliers: data objects that do not comply with the general behavior of the data (can be detected using statistical tests that assume a prob. model) • Often considered as noise but useful in fraud detection, rare events analysis • Trend and evolution analysis • Study regularities of objects whose behavior changes over time • Trend and deviation: regression analysis • Sequential pattern mining, periodicity analysis • Similarity-based analysis

  22. When is a “Discovered” Pattern Interesting? • A data mining system/query may generate thousands of patterns; not all of them are interesting • Suggested approach: Human-centered, query-based, focused mining • Interestingness measures: A pattern is interesting if it is easily understood by humans, valid on new or test data with some degree of certainty, potentially useful, novel, or validates some hypothesis that a user seeks to confirm • Objective vs. subjective interestingness measures: • Objective: based on statistics and structures of patterns, e.g., support, confidence, etc. • Subjective: based on user’s belief in the data, e.g., unexpectedness, novelty, actionability, etc.

  23. Can We Find All and Only Interesting Patterns? • Find all the interesting patterns: Completeness • Can a data mining system find all the interesting patterns? • Association vs. classification vs. clustering • Search for only interesting patterns: Optimization • Can a data mining system find only the interesting patterns? • Approaches • First generate all the patterns and then filter out the uninteresting ones • Generate only the interesting patterns—mining query optimization

  24. Data Mining: Confluence of Multiple Disciplines • Data mining draws on database technology, statistics, machine learning, visualization, information science, and other disciplines (figure)

  25. Data Mining: Classification Schemes • General functionality • Descriptive data mining • Predictive data mining • Different views, different classifications • Kinds of databases to be mined • Kinds of knowledge to be discovered • Kinds of techniques utilized • Kinds of applications adapted

  26. A Multi-Dimensional View of Data Mining Classification • Databases to be mined • Relational, transactional, object-oriented, object-relational, active, spatial, time-series, text, multi-media, heterogeneous, legacy, WWW, etc. • Knowledge to be mined • Characterization, discrimination, association, classification, clustering, trend, deviation and outlier analysis, etc. • Multiple/integrated functions and mining at multiple levels • Techniques utilized • Database-oriented, data warehouse (OLAP), machine learning, statistics, visualization, neural network, etc. • Applications adapted • Retail, telecommunication, banking, fraud analysis, DNA mining, stock market analysis, Web mining, Weblog analysis, etc.

  27. Major Issues in Data Mining • Mining methodology and user interaction • Mining different kinds of knowledge in databases • Interactive mining of knowledge at multiple levels of abstraction • Incorporation of background knowledge to guide the discovery process • Data mining query languages and ad-hoc data mining • Expression and visualization of data mining results • Handling noise and incomplete data • Pattern evaluation: the interestingness problem • Performance and scalability • Efficiency and scalability of data mining algorithms • Parallel, distributed and incremental mining methods

  28. More details… Decision Trees • Decision trees • Problem • Approach • Classification through trees • Building phase - splitting policies • Pruning phase (to avoid over-fitting)

  29. Decision Trees • Problem: Classification – i.e., given a training set (N tuples, with M attributes, plus a label attribute) find rules to predict the label for newcomers. Pictorially:

  30. Decision trees • (figure: training table of N tuples with M attributes and a label column; the label of a new tuple is shown as “??”)

  31. Decision trees • Issues: • missing values • noise • ‘rare’ events

  32. Decision trees • types of attributes • numerical (= continuous), e.g., ‘salary’ • ordinal (= integer), e.g., ‘# of children’ • nominal (= categorical), e.g., ‘car-type’

  33. Decision trees • Pictorially, we have: (figure: scatter plot of ‘+’ and ‘-’ training points, with num. attr#1, e.g., ‘age’, on the x-axis and num. attr#2, e.g., chol-level, on the y-axis)

  34. Decision trees • and we want to label ‘?’ (figure: the same scatter plot with a new, unlabeled point marked ‘?’)

  35. Decision trees • so we build a decision tree: (figure: the same scatter plot, partitioned by split lines, e.g., at age = 50 and chol-level = 40)

  36. Decision trees • so we build a decision tree (figure): the root tests age < 50 (Y/N); one branch leads to a second test, chol. < 40, whose Y branch is labeled ‘+’ and whose N branch is labeled ‘-’; the other branch continues (...)
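Written as nested if-then rules, one possible reading of the tree in the figure is the sketch below (the exact branch layout is an assumption; the thresholds follow slide 35):

    # Hypothetical reading of the tree: which branch carries the chol. test is assumed
    def classify(age, chol):
        if age < 50:
            if chol < 40:
                return "+"
            return "-"
        return None   # the remaining branch is elided ("...") on the slide

    print(classify(age=45, chol=35))   # "+"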

  37. Decision trees • Typically, two steps: • tree building • tree pruning (for over-training/over-fitting)

  38. Tree building • How? (figure: the ‘+’/‘-’ scatter plot again, num. attr#1, e.g., ‘age’, vs. num. attr#2, e.g., chol-level)

  39. Tree building • How? • A: Partition, recursively - pseudocode:
      Partition(dataset S):
          if all points in S have the same label, then return
          evaluate splits along each attribute A
          pick the best split, to divide S into S1 and S2
          Partition(S1); Partition(S2)
  • Q1: how to introduce splits along attribute Ai • Q2: how to evaluate a split?
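A runnable Python version of this recursive partitioning is sketched below (not from the slides): it assumes binary splits on numeric attributes and uses the weighted entropy of the two halves, defined on the later slides, to pick the best split; all names and the toy data are illustrative.

    import math

    def entropy(labels):
        # H over the class distribution of `labels`
        n = len(labels)
        return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                    for c in set(labels))

    def best_split(S):
        # S: list of (features, label); try every threshold on every numeric attribute
        best = None
        for a in range(len(S[0][0])):
            for t in sorted({x[a] for x, _ in S}):
                S1 = [(x, y) for x, y in S if x[a] < t]
                S2 = [(x, y) for x, y in S if x[a] >= t]
                if not S1 or not S2:
                    continue
                # weighted entropy after the split (lower means more uniform halves)
                cost = (len(S1) * entropy([y for _, y in S1]) +
                        len(S2) * entropy([y for _, y in S2])) / len(S)
                if best is None or cost < best[0]:
                    best = (cost, a, t, S1, S2)
        return best

    def partition(S, depth=0):
        labels = [y for _, y in S]
        split = best_split(S)
        if len(set(labels)) == 1 or split is None:
            print("  " * depth + "leaf:", max(set(labels), key=labels.count))
            return
        _, a, t, S1, S2 = split
        print("  " * depth + f"split: attr#{a} < {t}")
        partition(S1, depth + 1)
        partition(S2, depth + 1)

    # toy data: ([age, chol-level], label)
    partition([([30, 20], "+"), ([35, 30], "+"), ([60, 50], "-"), ([65, 45], "-")])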

  40. Tree building • Q1: how to introduce splits along attribute Ai • A1: • for numerical attributes: • binary split, or • multiple split • for categorical attributes: • compute all subsets (expensive!), or • use a greedy approach
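For Q1 on categorical attributes, the contrast between “compute all subsets (expensive!)” and a greedy approach can be seen in a small sketch (illustrative only; the attribute values are made up):

    from itertools import combinations

    values = ["sports", "sedan", "truck", "minivan"]   # e.g., ‘car-type’

    # exhaustive: every proper, non-empty subset may go to the left branch
    # (2^k - 2 candidates, half of them mirror images) -- exponential in k
    exhaustive = [set(c) for r in range(1, len(values))
                  for c in combinations(values, r)]
    print(len(exhaustive))   # 14 for k = 4

    # cheap greedy alternative: consider only one-value-vs-rest splits
    greedy = [{v} for v in values]
    print(greedy)            # 4 candidates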

  41. Tree building • Q2: how to evaluate a split? • A: by how close to uniform each subset is – i.e., we need a measure of uniformity:

  42. Tree building • entropy: H(p+, p-) = -p+ log2(p+) - p- log2(p-) • p+: relative frequency of class ‘+’ in S • (figure: plot of H vs. p+: 0 at p+ = 0 and p+ = 1, maximal at p+ = 0.5) • Any other measure?

  43. Tree building • ‘gini’ index: 1 - p+^2 - p-^2 • entropy: H(p+, p-) • p+: relative frequency of class ‘+’ in S • (figure: plots of entropy and the gini index vs. p+; both are 0 at p+ = 0 and p+ = 1 and maximal at p+ = 0.5)
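The two measures can be compared numerically with a small sketch (not from the slides; these are the standard binary-class formulas):

    import math

    def entropy(p_plus):
        return -sum(p * math.log2(p) for p in (p_plus, 1 - p_plus) if p > 0)

    def gini(p_plus):
        return 1 - p_plus**2 - (1 - p_plus)**2

    for p in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"p+={p:.2f}  entropy={entropy(p):.3f}  gini={gini(p):.3f}")
    # both are 0 for a pure subset (p+ = 0 or 1) and largest at p+ = 0.5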

  44. Tree building Intuition: • entropy: #bits to encode the class label • gini: classification error, if we randomly guess ‘+’ with prob. p+

  45. Tree building • Thus, we choose the split that reduces entropy/classification-error the most. E.g.: (figure: the ‘+’/‘-’ scatter plot of num. attr#1, e.g., ‘age’, vs. num. attr#2, e.g., chol-level, with a candidate split drawn)

  46. Tree building • Before the split: we need (n+ + n-) * H(p+, p-) = (7+6) * H(7/13, 6/13) bits total to encode all the class labels • After the split we need: 0 bits for the first (pure) half and (2+6) * H(2/8, 6/8) bits for the second half
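Plugging in the numbers (a worked check, not part of the slides; H is the binary entropy from the earlier sketch):

    import math

    def H(p_plus):
        return -sum(p * math.log2(p) for p in (p_plus, 1 - p_plus) if p > 0)

    before = (7 + 6) * H(7 / 13)      # ~12.9 bits for all 13 labels
    after = 0 + (2 + 6) * H(2 / 8)    # pure half costs 0 bits; ~6.5 bits remain
    print(before, after, before - after)   # the split saves roughly 6.4 bits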

  47. Tree pruning • What for? (figure: the ‘+’/‘-’ scatter plot of num. attr#1, e.g., ‘age’, vs. num. attr#2, e.g., chol-level, again)

  48. Tree pruning • Q: How to do it? (figure: the same ‘+’/‘-’ scatter plot)

  49. Tree pruning • Q: How to do it? • A1: use a ‘training’ and a ‘testing’ set - prune a node if doing so improves classification on the ‘testing’ set. (Drawbacks?)

  50. Tree pruning • Q: How to do it? • A1: use a ‘training’ and a ‘testing’ set - prune a node if doing so improves classification on the ‘testing’ set. (Drawbacks?) • A2: or, rely on MDL (= Minimum Description Length) - in detail:
