
Machine Learning: Regression or Classification?


Presentation Transcript


  1. Machine Learning: Regression or Classification? J.-S. Roger Jang (張智星) jang@mirlab.org http://mirlab.org/jang MIR Lab, CSIE Dept. National Taiwan University

  2. GHW Dataset • GHW: Gender-height-weight • Source • A table of 10000 entries with 3 columns • Gender: Categorical • Height: Numerical • Weight: Numerical

  3. Multiple Ways of Prediction • Multiple ways of prediction based on the GHW dataset • Classification: Height, weight → gender • Regression: Gender, height → weight • Regression: Gender, weight → height • In general • Classification: the output is categorical • Regression: the output is numerical • Sometimes it's not so obvious!
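As a quick illustration of the two output types, here is a minimal Python/scikit-learn sketch of both prediction modes on a GHW-style table; the file name ghw.csv and the exact column names are assumptions, not part of the slides.

    import pandas as pd
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.linear_model import LinearRegression

    ghw = pd.read_csv("ghw.csv")  # assumed columns: Gender, Height, Weight

    # Classification: height, weight -> gender (categorical output)
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(ghw[["Height", "Weight"]], ghw["Gender"])

    # Regression: gender, height -> weight (numerical output)
    X = pd.get_dummies(ghw[["Gender", "Height"]], columns=["Gender"])
    reg = LinearRegression().fit(X, ghw["Weight"])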


  8. K-Nearest Neighbor Classifiers (KNNC) J.-S. Roger Jang (張智星) jang@mirlab.org http://mirlab.org/jang MIR Lab, CSIE Dept. National Taiwan University

  9. Concept of KNNC Quiz! • Concept: a sample takes after the company it keeps (近朱者赤、近墨者黑, "stay near vermilion and you turn red; stay near ink and you turn black") • Steps: • Find the k nearest neighbors of a given point. • Determine the class of the given point by voting among its k nearest neighbors. (Figure: class-A and class-B points plotted against Feature 1 and Feature 2; using its 3 nearest neighbors, the point with unknown class is classified as B via 3NNC.)
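A minimal NumPy sketch of the voting step just described, assuming the training features are stored row-wise in an array X_train with a label array y_train (names chosen here for illustration):

    import numpy as np
    from collections import Counter

    def knnc(x, X_train, y_train, k=3):
        """Classify x by majority vote among its k nearest training samples."""
        dist = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every sample
        nearest = np.argsort(dist)[:k]               # indices of the k nearest neighbors
        return Counter(y_train[nearest]).most_common(1)[0][0]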

  10. Flowchart for KNNC General flowchart of PR, with the KNNC counterpart of each step: • Feature extraction → from raw data to features • Model construction → clustering (optional) • Model evaluation → KNNC evaluation on the test dataset

  11. Decision Boundary for 1NNC Quiz! • Voronoi diagram: the 1NNC decision boundary is piecewise linear
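To visualize why the 1NNC boundary is piecewise linear, one can plot the Voronoi diagram of the training points (each point owns the region of feature space closer to it than to any other point); a small SciPy sketch, with random points standing in for real training data:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.spatial import Voronoi, voronoi_plot_2d

    pts = np.random.rand(20, 2)      # 20 random 2-D "training" points
    voronoi_plot_2d(Voronoi(pts))    # the 1NNC boundary follows a subset of these edges
    plt.show()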

  12. Characteristics of KNNC Quiz! • Strengths of KNNC • Intuitive • No computation needed for model construction • Weaknesses of KNNC • Massive computation required when the dataset is big • No straightforward way • To determine the value of K • To rescale the dataset along each dimension

  13. Preprocessing/Variants for KNNC Quiz! • Preprocessing: • Z-normalization to have zero mean and unit variance along each feature • Value of K obtained via trial and error (see the sketch below) • Variants: • Weighted votes • Nearest prototype classification • Edited nearest neighbor classification • k+k-nearest neighbor
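A sketch of the two preprocessing ideas above, using scikit-learn and the Iris dataset as a stand-in: z-normalize each feature, then pick K by cross-validated trial and error.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_iris(return_X_y=True)
    for k in (1, 3, 5, 7, 9):  # candidate values of K
        model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"K={k}: mean cross-validated accuracy {acc:.3f}")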

  14. Demos by Cleve Moler • Cleve's demos of Delaunay triangulation and Voronoi diagrams • books/dcpr/example/cleve/vshow.m

  15. Natural Examples of Voronoi Diagrams (1/2)

  16. Natural Examples of Voronoi Diagrams (2/2)

  17. 1NNC Decision Boundaries

  18. 1NNC Distance/Posterior as Surfaces and Contours

  19. Using Prototypes in KNNC • No. of prototypes for each class is 4.
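One common way to obtain such prototypes (a sketch, not necessarily the method used in the slide) is to run k-means within each class and keep the cluster centers as that class's prototypes:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)
    protos, proto_labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X[y == c])
        protos.append(km.cluster_centers_)   # 4 prototypes per class
        proto_labels.extend([c] * 4)
    protos, proto_labels = np.vstack(protos), np.array(proto_labels)

    def classify(x):
        """Assign x to the class of its nearest prototype."""
        return proto_labels[np.argmin(np.linalg.norm(protos - x, axis=1))]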

  20. Decision Boundaries of Different Classifiers (Figure: decision boundaries of the quadratic classifier, the 1NNC classifier, and the naive Bayes classifier.)

  21. Naive Bayes Classifiers (NBC) J.-S. Roger Jang (張智星) jang@mirlab.org http://mirlab.org/jang MIR Lab, CSIE Dept. National Taiwan University

  22. Assumptions & Characteristics • Assumptions: • Statistical independence between features • Statistical independence between samples • Each feature governed by a feature-wise parameterized PDF (usually a 1D Gaussian) • Characteristics • Simple and easy (hence the name "naive") • Highly successful in real-world applications despite the strong assumptions

  23. Training and Test Stages of NBC Quiz! • Training stage • Identify the class PDF as follows: • Identify each feature PDF by MLE for 1D Gaussians • The class PDF is the product of all the corresponding feature PDFs • Test stage • Assign a sample to the class with the highest score, taking the class prior into consideration (see the decision rule below)
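The formulas referred to above appear only as images in the original slides; a reconstruction under the stated assumptions (d independent features x_1, ..., x_d, each with a 1D Gaussian PDF per class, and class prior P(c)):

    \[
    p(\mathbf{x} \mid c) = \prod_{i=1}^{d} p(x_i \mid c),
    \qquad
    \hat{c} = \arg\max_{c} \; P(c) \prod_{i=1}^{d} p(x_i \mid c)
    \]

where the mean and variance of each p(x_i | c) are estimated by MLE from the class-c training samples.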

  24. NBC for Iris Dataset (1/3) Scatter plot of the Iris dataset (with only the last two dimensions) PDF of each feature for each class

  25. NBC for Iris Dataset (2/3) PDF for each class

  26. NBC for Iris Dataset (3/3) Decision boundaries (using the last 2 features)

  27. Strength and Weakness of NBC Quiz! • Strength • Fast computation during training and evaluation • More robust than QC • Weakness • Not able to deal with bi-modal data correctly • Class boundaries not as complex as those of QC

  28. Quadratic Classifiers (QC) J.-S. Roger Jang (張智星) jang@mirlab.org http://mirlab.org/jang MIR Lab, CSIE Dept. National Taiwan University

  29. Review: PDF Modeling • Goal: • Find a PDF (probability density function) that can best describe a given dataset • Steps: • Select a class of parameterized PDF • Identify the parameters via MLE (maximum likelihood estimate) based on a given set of sample data • Commonly used PDFs: • Multi-dimensional Gaussian PDF • Gaussian mixture models (GMM)

  30. PDF Modeling for Classification • Procedure for classification based on PDF • Training stage: PDF modeling of each class based on the training dataset • Test stage: For each entry in the test dataset, pick the class with the max. PDF • Commonly used classifiers: • Quadratic classifiers, with n-dim. Gaussian PDF • Gaussian-mixture-model classifier, with GMM PDF

  31. 1D Gaussian PDF Modeling 1D Gaussian PDF: MLE of μ (mean) and σ² (variance)
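The slide's formulas are image-only; the standard definitions they refer to are:

    \[
    p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),
    \qquad
    \hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} x_i,
    \qquad
    \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} (x_i - \hat{\mu})^2
    \]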

  32. D-dim. Gaussian PDF Modeling D-dim. Gaussian PDF: MLE of μ (mean) and Σ (covariance matrix)
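Again reconstructing the image-only formulas in standard notation:

    \[
    p(\mathbf{x}) = \frac{1}{(2\pi)^{D/2}\,|\boldsymbol{\Sigma}|^{1/2}}
    \exp\!\left(-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\right),
    \qquad
    \hat{\boldsymbol{\mu}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{x}_i,
    \qquad
    \hat{\boldsymbol{\Sigma}} = \frac{1}{n}\sum_{i=1}^{n}(\mathbf{x}_i-\hat{\boldsymbol{\mu}})(\mathbf{x}_i-\hat{\boldsymbol{\mu}})^{T}
    \]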

  33. 2D Gaussian PDF Contours (Figure: PDF surface and contours of a bivariate normal distribution.)

  34. Training and Test Stages of QC Quiz! • Training stage • Identify the Gaussian PDF of each class via MLE • Test stage • Assign a sample point to the class with the highest score, taking the class prior into consideration (see the decision rule below)
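The decision rule referred to above is shown only as an image in the slide; a reconstruction, with class prior P(c) and class-conditional Gaussian N(x; μ_c, Σ_c), dropping the constant term shared by all classes:

    \[
    \hat{c} = \arg\max_{c}\; P(c)\,\mathcal{N}(\mathbf{x};\boldsymbol{\mu}_c,\boldsymbol{\Sigma}_c)
            = \arg\max_{c}\left[\log P(c) - \tfrac{1}{2}\log|\boldsymbol{\Sigma}_c|
              - \tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_c)^{T}\boldsymbol{\Sigma}_c^{-1}(\mathbf{x}-\boldsymbol{\mu}_c)\right]
    \]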

  35. Characteristics of QC Quiz! • If each class is modeled by a Gaussian PDF, the decision boundary between any two classes is a quadratic function. • That is why it is called a quadratic classifier. • How to prove it? (A sketch follows.)
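One way to answer the quiz (a sketch, not taken from the slides): the boundary between classes i and j is where their scores are equal,

    \[
    \log P(i) + \log\mathcal{N}(\mathbf{x};\boldsymbol{\mu}_i,\boldsymbol{\Sigma}_i)
    = \log P(j) + \log\mathcal{N}(\mathbf{x};\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j),
    \]

and expanding the log-Gaussians leaves an equation of the form \( -\tfrac{1}{2}\mathbf{x}^{T}(\boldsymbol{\Sigma}_i^{-1}-\boldsymbol{\Sigma}_j^{-1})\mathbf{x} + \mathbf{b}^{T}\mathbf{x} + c = 0 \), which is quadratic in x (and degenerates to a linear boundary when the two covariance matrices are equal).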

  36. QC on Iris Dataset (1/3) • Scatter plot of Iris dataset (with only the last two dim.)

  37. QC on Iris Dataset (2/3) • PDF for each class:

  38. QC on Iris Dataset (3/3) • Decision boundaries among classes:

  39. Strength and Weakness of QC Quiz! • Strength • Easy computation when the dimension d is small • Efficient way to compute leave-one-out cross validation • Weakness • The covariance matrix (d by d) is big when the dimension d is moderately large • The inverse of the covariance matrix may not exist • Cannot handle bi-modal data correctly

  40. Modified Versions of QC Quiz! • How to modify QC so that it does not need to deal with a big covariance matrix? • Make the covariance matrix diagonal, which is equivalent to a naive Bayes classifier (proof sketched below) • Make the covariance matrix a constant times an identity matrix
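For the proof asked about above: if the covariance matrix is diagonal, the class-conditional Gaussian factors into per-feature 1D Gaussians, which is exactly the naive Bayes model with Gaussian feature PDFs:

    \[
    \boldsymbol{\Sigma} = \operatorname{diag}(\sigma_1^2,\dots,\sigma_d^2)
    \;\Longrightarrow\;
    \mathcal{N}(\mathbf{x};\boldsymbol{\mu},\boldsymbol{\Sigma})
    = \prod_{i=1}^{d}\frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\!\left(-\frac{(x_i-\mu_i)^2}{2\sigma_i^2}\right)
    \]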

  41. Performance Evaluation: Accuracy Estimates for Classification and Regression J.-S. Roger Jang (張智星) jang@mirlab.org http://mirlab.org/jang MIR Lab, CSIE Dept. National Taiwan University

  42. Introduction to Performance Evaluation • Performance evaluation: an objective procedure to derive the performance index (or figure of merit) of a given model • Typical performance indices • Accuracy • Recognition rate: ↗ (the larger the better) • Error rate: ↘ (the smaller the better) • RMSE (root mean squared error): ↘ (the smaller the better) • Computation load: ↘ (the smaller the better)
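For reference, the usual definition of RMSE over n test targets y_i and predictions ŷ_i:

    \[
    \text{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}}
    \]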

  43. Synonyms • Sets of synonyms to be used interchangeably (since we are focusing on classification): • Classifiers, models • Recognition rate, accuracy • Training, design-time • Test, run-time

  44. Performance Indices for Classifiers Quiz! • Performance indices of a classifier • Recognition rate • How to derive it with an objective method or procedure • Computation load • Design-time computation (training) • Run-time computation (test) • Our focus • Recognition rate and the procedures to derive it • The estimated accuracy depends on • Dataset partitioning • Model (type and complexity)

  45. Methods for Performance Evaluation • Methods to derive the recognition rates • Inside test (resubstitution accuracy) • One-side holdout test • Two-side holdout test • M-fold cross validation • Leave-one-out cross validation
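A minimal scikit-learn sketch of the last two methods on the Iris data; the choice of a Gaussian naive Bayes classifier here is only an example.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score, LeaveOneOut
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    rr_mfold = cross_val_score(GaussianNB(), X, y, cv=5).mean()              # 5-fold CV
    rr_loo = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut()).mean()    # leave-one-out
    print(f"5-fold RR: {rr_mfold:.3f}, leave-one-out RR: {rr_loo:.3f}")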

  46. Dataset Partitioning • Data partitioning: to make the best use of the dataset • Training set only • Training and test sets • Training, validation, and test sets • All subsets are disjoint! (Figure: the dataset is split into a training set for model construction and a test set for model evaluation; the training set may be further split into a training set for model construction and a validation set for model selection.)

  47. Why Is It So Complicated? • To avoid overfitting! • Overfitting in regression • Overfitting in classification

  48. Inside Test (1/2) • Dataset partitioning • Use the whole dataset for training & evaluation • Recognition rate (RR) • Inside-test or resubstitution recognition rate

  49. Inside Test (2/2) • Characteristics • Too optimistic, since the inside-test RR tends to be higher than the true RR • For instance, 1-NNC always has an inside-test RR of 100%! • Can be used as an upper bound of the true RR • Potential reasons for a low inside-test RR: • Bad features in the dataset • Bad method for model construction, such as • Bad results from neural network training • Bad results from k-means clustering

  50. One-side Holdout Test (1/2) • Dataset partitioning • Training set for model construction • Test set for performance evaluation • Recognition rate • Inside-test RR • Outside-test RR
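A minimal sketch of the one-side holdout test, again using Iris and a Gaussian naive Bayes classifier as stand-ins: the model is trained on the training set, and both the inside-test and outside-test recognition rates are reported.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = GaussianNB().fit(X_tr, y_tr)
    print("inside-test RR:", model.score(X_tr, y_tr))    # optimistic estimate
    print("outside-test RR:", model.score(X_te, y_te))   # estimate of the true RR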
