## Distances Between Genes and Samples

Why Do We Care about Distance?

Genes that are "close" may be in the same pathway or have similar function.

Samples that are "close" may come from similar subpopulations, or have an evolutionary relationship.

Some ideas from the picture

1. Some genes are much more highly expressed in some tissues than in others.

2. For these genes, the same brain region is more similar across species than different brain regions are within a species.

Some questions based on the picture

Suppose that we were missing the expression for a gene in this picture for a particular sample. How might we "impute" a value?

Suppose we had a new sample and wanted to know its species and brain region. How could we infer these?

Suppose we have a gene of unknown function in the first cluster. What might one infer about the function?

Some common distance measures

'euclidean': Usual distance between the two vectors, sqrt(sum_i (x_i - y_i)^2) (2 norm).

'maximum': Maximum distance between two components of x and y (supremum norm)

'manhattan': Absolute distance between the two vectors (1 norm).

'canberra': sum(|x_i - y_i| / |x_i + y_i|). Terms with zero numerator and denominator are omitted from the sum and treated as if the values were missing.

'minkowski': The p norm, the pth root of the sum of the pth powers of the differences of the components.

'correlation': 1-r where r is the Pearson or Spearman correlation

'absolute correlation': 1-|r|
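
As a minimal sketch (the vectors x and y below are made-up values, purely for illustration), these distances can be computed in R with dist() and cor():

```r
# Two made-up expression vectors, for illustration only
x <- c(2.1, 0.5, 3.3, 1.8)
y <- c(1.9, 1.0, 2.7, 2.2)
m <- rbind(x, y)

dist(m, method = "euclidean")        # sqrt(sum((x - y)^2))
dist(m, method = "maximum")          # max |x_i - y_i|
dist(m, method = "manhattan")        # sum |x_i - y_i|
dist(m, method = "canberra")         # Canberra distance (formula above)
dist(m, method = "minkowski", p = 3) # p norm with p = 3

1 - cor(x, y)                            # correlation distance (Pearson)
1 - abs(cor(x, y, method = "spearman"))  # absolute correlation distance (Spearman)
```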

Preprocessing

Often we preprocess the data, either by gene (row) or by sample or treatment (column).

Centering

Y - c(Y), where c(Y) is the sample mean or sample median.

Standardizing (scaling, z-score)

(Y - c(Y)) / s(Y), where s(Y) is a measure of spread such as the SD or IQR.
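
A minimal sketch of both steps in R, applied gene-wise (by row) to a small made-up matrix Y:

```r
set.seed(1)
Y <- matrix(rnorm(20, mean = 5), nrow = 4)   # made-up 4-gene x 5-sample matrix

# Centering: subtract each gene's (row's) median
Y.centered <- sweep(Y, 1, apply(Y, 1, median), "-")

# Standardizing (z-score): subtract the row mean and divide by the row SD
Y.scaled <- t(scale(t(Y)))                   # scale() works on columns, hence the transposes
```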

Data Imputation

Some classification and clustering methods require expression values for every sample (or every gene in every sample)

Imputation is a technical term for clever methods to make up data.

2 reasonable methods:

K-nearest neighbors (KNN)

Singular Value Decomposition (SVD)

KNN

Based on the distance metric of your choice and all the non-missing values of your gene G, compute d(G, x) for all other genes x and select the K closest, x_1, ..., x_K.

The imputed value is the weighted average sum_i w_i x_i, where closer genes are more heavily weighted.

Naturally, we use the imputed value only to fill in the missing value - the other values for G are kept.
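
A minimal sketch of this in R, assuming an expression matrix expr with genes in rows, a missing entry at gene g and sample s, and 1/distance weights (one common choice among several):

```r
# Impute the single missing value expr[g, s] from the K nearest genes
knn.impute.one <- function(expr, g, s, K = 10) {
  obs <- !is.na(expr[g, ]) & seq_len(ncol(expr)) != s        # samples observed for gene g
  # Candidate genes: observed in sample s and in all of gene g's observed samples
  cand <- which(seq_len(nrow(expr)) != g &
                !is.na(expr[, s]) &
                rowSums(is.na(expr[, obs, drop = FALSE])) == 0)
  # Euclidean distance from gene g to each candidate, over the observed samples
  d <- apply(expr[cand, obs, drop = FALSE], 1,
             function(x) sqrt(sum((x - expr[g, obs])^2)))
  nn <- order(d)[seq_len(min(K, length(cand)))]              # indices of the K closest
  w  <- 1 / (d[nn] + 1e-6)                                   # closer genes get larger weight
  sum(w * expr[cand[nn], s]) / sum(w)                        # weighted average fills the gap
}
```

In practice, the Bioconductor impute package provides impute.knn() for this kind of gene-wise KNN imputation.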

SVD

Any matrix M can be decomposed as

M_{g,s} = U_{g,g} D_{g,s} V'_{s,s}, where D is zero except on the diagonal.

Then M v_i = d_i u_i and u_i' M = d_i v_i', where u_i and v_i are the ith columns of U and V and d_i is the ith diagonal element of D.

V captures the main patterns of variation in the rows of M

U captures the main directions of variation in the columns of M
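
A minimal sketch in R checking these facts on a small made-up matrix:

```r
set.seed(2)
M <- matrix(rnorm(6 * 4), nrow = 6)                  # made-up 6 x 4 matrix

s <- svd(M)                                          # s$u, s$d (diagonal of D), s$v
max(abs(M - s$u %*% diag(s$d) %*% t(s$v)))           # ~0: M = U D V'
max(abs(M %*% s$v[, 1] - s$d[1] * s$u[, 1]))         # ~0: M v_1 = d_1 u_1
max(abs(t(s$u[, 1]) %*% M - s$d[1] * t(s$v[, 1])))   # ~0: u_1' M = d_1 v_1'
```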

SVD

I made up a matrix as follows:

x <- 1:12

G_{15,12} has 4 rows which are sin(2*pi*x/12) + noise,

3 rows which are sin(2*pi*x/12 + 1.5) + noise,

and 7 rows which are pure noise.
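
A minimal sketch in R of how such a matrix could be generated (the noise level, sd = 0.3, and the seed are assumptions):

```r
set.seed(3)
x <- 1:12
g <- rbind(
  t(replicate(4, sin(2 * pi * x / 12)       + rnorm(12, sd = 0.3))),  # 4 signal rows
  t(replicate(3, sin(2 * pi * x / 12 + 1.5) + rnorm(12, sd = 0.3))),  # 3 shifted signal rows
  matrix(rnorm(7 * 12, sd = 0.3), nrow = 7)                           # pure-noise rows
)
```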

SVD

Any matrix M can be decomposed as

M_{g,s} = U_{g,g} D_{g,s} V'_{s,s}, where D is zero except on the diagonal.

Use the k columns of V corresponding to the largest values of D

Then M* = U_{g,k} D_{k,k} V'_{s,k} is considered to be a less noisy version of M.

M is the gene expression matrix with rows=genes and columns = samples.

We start by putting the row average in any missing value; then we iteratively estimate V and regress the observed values of each gene on the selected columns of V to fill in its missing values.

SVD

g.svd <- svd(g)   # a list with components u, d (the diagonal of D), and v

Using SVD

The diagonal elements of D are the singular values (often loosely called the eigenvalues in this context). The sum of their squares is a measure of the overall variability of the data.

We generally assume that treatment differences create more variability than the noise in the data.

So, the largest eigenvalues should express the variability due to the treatments.

Using SVD

Scree is the jumble of rock at the base of a mountain.

The scree plot (eigenvalues vs rank) is used to pick out the informative eigenvalues from the jumble (noise).
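
A minimal sketch of a scree plot in R, using the decomposition g.svd from the previous slide:

```r
# Proportion of overall variability carried by each component
prop.var <- g.svd$d^2 / sum(g.svd$d^2)

plot(prop.var, type = "b",
     xlab = "rank", ylab = "proportion of variability", main = "Scree plot")
```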

"Denoised Reconstruction"

M_{g,s} = U_{g,g} D_{g,s} V'_{s,s}

If there are k informative eigenvalues, we pick the corresponding k columns of U and V

M* = U_{g,k} D_{k,k} V'_{s,k} is a denoised reconstruction of M.

To use this to fill in missing data: start by filling in M with some estimate (e.g. the row or column mean).

Compute M* and replace missing values in M by corresponding values in M*

Iterate
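
A minimal sketch of this procedure in R (the rank k, the iteration cap, and the convergence tolerance are assumptions):

```r
svd.impute <- function(M, k = 2, max.iter = 50, tol = 1e-6) {
  miss <- is.na(M)
  # Start by filling each missing entry with its row mean
  M.fill <- M
  M.fill[miss] <- rowMeans(M, na.rm = TRUE)[row(M)[miss]]

  for (i in seq_len(max.iter)) {
    s <- svd(M.fill)
    # Rank-k reconstruction M* = U_{g,k} D_{k,k} V'_{s,k}
    M.star <- s$u[, 1:k, drop = FALSE] %*% diag(s$d[1:k], k, k) %*%
              t(s$v[, 1:k, drop = FALSE])
    change <- max(abs(M.star[miss] - M.fill[miss]))
    M.fill[miss] <- M.star[miss]   # replace only the missing entries, keep observed values
    if (change < tol) break        # stop when the imputed values have stabilized
  }
  M.fill
}
```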

The first 9 rows of the matrix and the reconstruction based on 2 eigenvalues.

Note that rows 8 and 9 are where the pure noise begins.

Some terminology for Data Imputation

• Missing completely at random:

Pr( R | X(obs), X(miss) ) = Pr( R )

where R indicates which entries are missing; missingness does not depend on the values in X.

• Missing at random:

Pr( R | X(obs), X(miss) ) = Pr( R | X(obs) )

Missingness depends on the values in X only through the ones we get to observe.

• Missing NOT at random:

Pr( R | X(obs), X(miss) )

depends also on the values we do not get to observe; this is the most complicated situation.

(In this case, the simple imputation methods above do not work.)
