
Unsupervised Learning With Non-ignorable Missing Data

Ben Marlin

Sam Roweis

Rich Zemel

Machine Learning Group Talk

University of Toronto

Monday Oct 4, 2004


Outline

  • Introduction
  • Missing Data Theory and EM
  • Models for Non-Ignorable Missing Data
  • Synthetic Data Experiments
  • Real Data Experiments
  • Extensions and Future Work
  • Conclusions


Introduction The Problem of Missing Data

Missing data is a pervasive problem in machine learning and statistical data analysis.

Most large, complex data sets contain a certain amount of missing data.

A fundamental question in the analysis of missing data is: why is the data missing, and what do we have to do about it?

There are extreme examples of data sets in machine learning with upwards of 95% missing data (EachMovie).



Introduction A Theory of Missing Data

Little and Rubin laid out a theory of missing data several decades ago that provides answers to these questions.

They describe a classification of missing data in terms of the mechanism, or process, that causes the data to be missing, i.e. the generative model for the missing data.

They also derive the exact conditions outlining when missing data must be treated specially to obtain correct inferences based on likelihood.



Introduction Types of Missing Data: MCAR

If the missing data can be explained by a simple random process, like flipping a single biased coin, the missing data is missing completely at random (MCAR).

[Figure: a data matrix (data cases by attributes) in which entries are deleted uniformly at random]
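In code, an MCAR mask is literally one biased coin flip per entry. A minimal numpy sketch (the data values and the observation probability are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy complete data matrix: 8 data cases x 7 attributes with
# discrete values in {1, ..., 6} (illustrative only).
Y = rng.integers(1, 7, size=(8, 7))

# MCAR: every entry is observed with the same probability p,
# i.e. one flip of a single biased coin per entry.
p_observe = 0.7
R = rng.random(Y.shape) < p_observe           # response indicators (True = observed)

Y_obs = np.where(R, Y.astype(float), np.nan)  # NaN marks the missing entries
```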


Introduction Types of Missing Data: MAR

If the probability that a data entry is missing depends only on the data entries that are observed, then the data is missing at random (MAR).

[Figure: a data matrix in which the probability that an entry is deleted depends only on observed entries]
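One MAR mechanism can be sketched as follows: the first attribute is always observed, and whether the remaining attributes are observed depends only on that always-observed value. The response rates here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.integers(1, 7, size=(8, 7))   # toy complete data, values in {1, ..., 6}

# MAR: attribute 0 is always observed; the probability that the remaining
# attributes are observed depends only on the observed value of attribute 0.
response_rate = {v: 0.2 + 0.1 * v for v in range(1, 7)}   # hypothetical rates
R = np.ones(Y.shape, dtype=bool)
for n in range(Y.shape[0]):
    R[n, 1:] = rng.random(Y.shape[1] - 1) < response_rate[Y[n, 0]]

Y_obs = np.where(R, Y.astype(float), np.nan)
```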


Introduction Types of Missing Data: Non-Ignorable

If the probability that a data entry is missing depends on the value of that data entry, then the missing data is non-ignorable.

[Figure: a data matrix in which the probability that an entry is deleted depends on that entry's own value]


Introduction The Effect of Missing Data

If missing data is MCAR or MAR, then inference based on the observed data likelihood will not be biased.

If missing data is non-ignorable, then inference based on the observed data likelihood is provably biased.

[Example: estimating a mean]
Data: 5 6 4 5 6 3 3 4 8 5 6 4 4 7 6 5 4 5 2 6 (mean 4.90)
MCAR sample: 5 4 5 6 3 8 5 7 4 2 (mean 4.90)
NI sample: 5 3 3 4 5 4 4 6 4 5 2 (mean 4.10)

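The bias is easy to reproduce numerically. In this sketch (mechanism and numbers invented for illustration) an entry's chance of being observed decreases with its own value, so the non-ignorable sample mean is pulled down while the MCAR sample mean stays close to the truth:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.integers(2, 9, size=100_000)     # ratings in {2, ..., 8}, true mean near 5.0
true_mean = y.mean()

# MCAR: every entry kept with the same probability -> unbiased observed mean.
mcar_obs = y[rng.random(y.size) < 0.5]

# Non-ignorable: higher values are more likely to go missing (hypothetical
# mechanism) -> the observed mean is biased downward.
p_observe = 1.0 - y / 10.0               # observation probability depends on the value
ni_obs = y[rng.random(y.size) < p_observe]
```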


Introduction Unsupervised Learning and Missing Data

This simple mean estimation problem can be interpreted as fitting a normal distribution to the data, a simple unsupervised learning problem.

Just like the mean estimation example, any unsupervised learning algorithm that treats non-ignorable missing data as missing at random will learn biased estimates of model parameters.



Introduction Research Overview

The goals of this research project are:


1. Apply the theory developed by Little and Rubin to extend the standard unsupervised learning framework to correctly handle non-ignorable missing data.

2. Apply this extended framework to augment a variety of existing models, and show that tractable learning algorithms can be obtained.

3. Demonstrate that these augmented models outperform standard models on tasks where missing data is believed to be non-ignorable.


Introduction Research Overview

The current status of the project:


1. We have been able to augment mixture models to account for non-ignorable missing data.

2. We have derived efficient learning and exact inference algorithms for the augmented models.

3. We have obtained empirical results on synthetic data sets showing the augmented models learn accurately.

4. Preliminary results were recently submitted to AISTATS.


Missing Data Theory and EM Notation

Y: the complete data matrix.

Yobs: the observed elements of the data matrix.

Ymis: the missing elements of the data matrix.

R: the matrix of response indicators.

P(Y | θ): the data model.

P(R | Y, μ): the selection or observation model.


Missing Data Theory and EM The MAR Assumption

Under this notation the MAR assumption can be expressed as follows: P(R | Yobs, Ymis, μ) = P(R | Yobs, μ).

Basically this says the distribution over the response indicators is independent of the missing data.



Missing Data Theory and EM Observed and Full Likelihood Functions

The standard procedure for unsupervised learning is to maximize the observed data likelihood, Lobs(θ) = Σn log P(Yobs,n | θ). With non-ignorable missing data the correct procedure is to maximize the full data likelihood, Lfull(θ, μ) = Σn log P(Yobs,n, Rn | θ, μ), which includes the selection model.


Missing Data Theory and EM Expectation Maximization Algorithm

In an unsupervised learning setting with non-ignorable missing data, the correct learning procedure is to maximize the expected full log likelihood, E[log P(Yobs, Ymis, R | θ, μ)], where the expectation is over the missing data given the observed data and the current parameters.


Models for Non-Ignorable Missing Data Review: Standard Mixture Model

In the work that follows we assume a multinomial mixture model as the data model. It is a simple baseline model that is quite effective in many discrete domains.

[Graphical model: parameters θ and β; latent variable Zn and data variables Y1n ... YMn for each case n = 1:N]
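As a concrete baseline, here is a minimal EM loop for this standard multinomial mixture on fully observed data (dimensions and initialization are invented; this is a sketch of the baseline the augmented models extend, not the talk's code):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, V, K = 500, 6, 5, 3            # cases, attributes, values, components

Y = rng.integers(0, V, size=(N, M))  # toy fully observed data, values coded 0..V-1

theta = np.full(K, 1.0 / K)                    # theta[k]      = P(z = k)
beta = rng.dirichlet(np.ones(V), size=(K, M))  # beta[k, m, v] = P(y_m = v | z = k)

for _ in range(50):
    # E-step: responsibilities r[n, k] proportional to theta[k] * prod_m beta[k, m, y_nm]
    log_p = (np.log(theta)[None, :]
             + np.log(beta)[:, np.arange(M), Y].sum(axis=-1).T)
    r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)

    # M-step: re-estimate mixture weights and per-attribute multinomials
    theta = r.mean(axis=0)
    for k in range(K):
        for m in range(M):
            counts = np.bincount(Y[:, m], weights=r[:, k], minlength=V) + 1e-9
            beta[k, m] = counts / counts.sum()
```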


Models for Non-Ignorable Missing Data Mixture/Fully Connected Model

If we fully connect the response indicators to the data variables, we get the most general selection model, but it is not tractable.


[Graphical model: latent variable Zn (parameters θ, β), data variables Ymn, and response indicators Rmn (parameters μ), with the response indicators fully connected to the data variables, m = 1:M, n = 1:N]


Models for Non-Ignorable Missing Data Mixture/CPT-v Model

To derive tractable learning and inference algorithms we need to assert further independence relations.


[Graphical model: latent variable Zn, data variables Ymn, and response indicators Rmn, where each Rmn depends only on its data variable Ymn through the parameters μ]


Models for Non-Ignorable Missing Data Mixture/CPT-v Model

Exact inference and learning for the Mixture/CPT-v model is only slightly more complex than in a standard mixture model.

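To see why inference stays tractable, note that each missing entry contributes only a sum over its V possible values. A sketch of the per-case log likelihood under a mixture/CPT-v model (parameter shapes are assumptions based on the description above):

```python
import numpy as np

def cptv_case_loglik(y, r, theta, beta, mu):
    """Log-likelihood of one case (y_obs, r) under a mixture/CPT-v model.

    y     : (M,) value codes; entries where r is False are ignored
    r     : (M,) response indicators (True = observed)
    theta : (K,) mixture weights P(z = k)
    beta  : (K, M, V) data model P(y_m = v | z = k)
    mu    : (V,) CPT-v selection model P(r_m = 1 | y_m = v)
    """
    K, M, V = beta.shape
    log_comp = np.log(theta).copy()
    for k in range(K):
        for m in range(M):
            if r[m]:
                # observed entry: data probability times selection probability
                log_comp[k] += np.log(beta[k, m, y[m]] * mu[y[m]])
            else:
                # missing entry: marginalize the unobserved value -- this
                # V-term sum is what keeps exact inference cheap under CPT-v
                log_comp[k] += np.log(np.sum(beta[k, m] * (1.0 - mu)))
    top = log_comp.max()
    return top + np.log(np.exp(log_comp - top).sum())
```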


Models for Non-Ignorable Missing Data Mixture/LOGIT-v,mz Model

The LOGIT-v,mz model assumes a functional form for the missing data parameters. It is able to model a wider range of effects.


[Graphical model: latent variable Zn, data variables Ymn, and response indicators Rmn, where Rmn depends on both Ymn and Zn]


Models for Non-Ignorable Missing Data Mixture/LOGIT-v,mz Model

Exact inference is still possible, but learning requires gradient-based techniques for s and w.

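The talk does not spell out the functional form, but one plausible reading of "LOGIT-v,mz" with parameters s and w is a logistic observation probability that combines a value-based score with an item/latent score. This is purely a hypothetical sketch, not the talk's exact model:

```python
import numpy as np

rng = np.random.default_rng(4)
V, M, K = 5, 4, 3            # values, items, mixture components (illustrative)

s = rng.normal(size=V)       # value-based effect parameters (hypothetical)
w = rng.normal(size=(M, K))  # item / latent-variable effect parameters (hypothetical)

# Hypothetical parameterization: the logit of P(r_mn = 1 | y_mn = v, z_n = k)
# is a value score plus an item/latent score; s and w would be learned by
# gradient methods, as the talk notes.
p_obs = 1.0 / (1.0 + np.exp(-(s[:, None, None] + w[None, :, :])))
```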


Synthetic Data Experiments Experimental Procedure

  • Sample mixture model parameters from Dirichlet priors.

  • Sample 5000 complete data cases from the mixture model.

  • Apply each missing data effect and resample complete data to obtain observed data.

  • Train each model on observed data only.

  • Measure prediction error on complete data set.
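The first three steps above might look like the following sketch (sizes, priors, and the CPT-v selection probabilities are invented; only the 5000-case count comes from the slide):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, V, K = 5000, 6, 5, 3

# 1. Sample mixture model parameters from Dirichlet priors.
theta = rng.dirichlet(np.ones(K))
beta = rng.dirichlet(np.ones(V), size=(K, M))   # beta[k, m] = P(y_m | z = k)

# 2. Sample complete data cases from the mixture model.
z = rng.choice(K, size=N, p=theta)
Y = np.array([[rng.choice(V, p=beta[z[n], m]) for m in range(M)]
              for n in range(N)])

# 3. Apply a CPT-v missing data effect: the probability that an entry is
#    observed depends on its value, so the observed data over-represent
#    low values here.
mu = np.linspace(0.9, 0.2, V)        # hypothetical selection probabilities
R = rng.random(Y.shape) < mu[Y]      # response indicators

# Steps 4-5 (train each model on the observed data only, then measure
# prediction error on the complete data) would follow.
```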


Synthetic Data Experiments Experiment 1: CPT-v Missing Data


Synthetic Data Experiments Experiment 1: Results


Synthetic Data Experiments Experiment 2: LOGIT-v,mz Missing Data

[Figure panels: value-based effect; item/latent variable effect]


Synthetic Data Experiments Experiment 2: Results


Real Data Experiments Experimental Procedure

  • Train LOGIT-v,mz model on observed data.

  • Look at parameters and full likelihood values after training.


Real Data Experiments Data Sets

  • Jester Collaborative Filtering Data Set:

  • Base: 900K Ratings, 17K users, 100 jokes, 50.4% missing

  • Filtering: continuous –10 to +10 ratings mapped to a discrete 5-point scale.
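The slide does not give the exact mapping, but one natural discretization splits the –10 to +10 range into five equal-width bins; a hypothetical sketch:

```python
import numpy as np

def discretize(rating):
    """Map a continuous Jester rating in [-10, +10] to a 1..5 scale using
    five equal-width bins (hypothetical mapping; the talk's exact rule
    is not stated)."""
    bins = np.array([-6.0, -2.0, 2.0, 6.0])    # interior bin edges
    return int(np.digitize(rating, bins)) + 1

ratings = [-9.5, -3.2, 0.0, 4.8, 9.9]
discrete = [discretize(x) for x in ratings]    # -> [1, 2, 3, 4, 5]
```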



Real Data Experiments Results – Marginal Selection Probabilities


Real Data Experiments Results – Full Data Log Likelihood


Conclusions Summary and Future Work

We have shown positive preliminary results on synthetic data with both the CPT-v and the LOGIT-v,mz models, and we have shown that the LOGIT-v,mz model does something reasonable on real data.

To show convincing results on real data we need to look at new procedures for collecting data, and possibly new experimental procedures for validating models under this framework.

We have proposed a framework for dealing with non-ignorable missing data by augmenting existing models with a general selection model.



The End