
### “Probe Level” analysis

MSCL Analyst’s Toolbox, Part 2

Instructors:

Jennifer Barb, Zoila G. Rangel, Peter Munson

March 2007

Mathematical and Statistical Computing Laboratory

Division of Computational Biosciences

Statistical topics

- Quality Control Charts
- False Discovery Rate
- Principal Components Analysis explained
- PCA Heatmap
- Data normalization, transformation
- Affymetrix probesets and “Probe-level” analysis
- MAS5, RMA, S10 compared

Gene Expression Microarrays

- Started in mid-1990s, exponential growth in popularity
- High-throughput -- measures 10,000s of genes at once
- Very noisy -- systematic and random errors
- Chip manufacturing, printing artifacts
- RNA sample quality issues
- Sample preparation, amplification, labeling reaction problems
- Hybridization reaction variability
- Linearity of response, saturation, background

- Affymetrix has controlled chip quality well.
- REPLICATION IS STILL REQUIRED!
- Statistical methods are critical in analysis!
- Quality Control is Essential!

Quality Control Plots for Parameters RawQ, ScaleFactor

Experimental Designs for Gene Expression

- Cross-sectional clinical studies from 2 or more patient groups or tissues; identify markers, prognostic indicators.
- Animal model: samples compared between treatments, groups, or over time; identify genes involved in disease process.
- Intervention Trial: collect blood samples pre/post treatment or over time, identify (and rationalize) genes involved.
- Cell culture: Treat cells in culture, identify genes and patterns of response. Complex study designs possible.
- Genetic Knock-out: Perturb genotype, give treatment, investigate expression response, in animal or cells.

Gene Expression Analysis Strategies

- Clinical Studies:
- Exploratory analysis, Hierarchical Cluster, Heat maps
- Sample size often insufficient
- Two-sample tests, Discriminant Analysis, “machine learning” approaches to find prognostic factors

- Designed studies: Analysis plan should follow design
- T-tests, one-way ANOVA to select significantly changing genes
- Blocking to account for experimental batch
- Two-way ANOVA for complete two-factor experiments
- Regression (etc.) for time-course experiments

- Corrections for multiple-comparison (20,000 genes tested)
- False Discovery Rate

- Interpretation of gene lists (open-ended problem!)

[Histogram of p-values, cut at p<.05: discoveries to the left of the cutoff include false discoveries.]

P-values should be uniformly distributed
- Note excess of small p-values in 45,000 probe sets
- Indicates presence of significant, differentially expressed genes

False Discovery Rate calculation (simplified version)

FDR* = Expected Number of False Discoveries / Number Discovered = (Number of tests × p-value cutoff) / (Number Discovered at this p-value)

Example: 48 genes detected at p<.001 on a chip with 12,000 genes:

FDR = (12,000 × .001) / 48 = 12 / 48 = 25%

*Benjamini, Y., Hochberg, Y. (1995) JRSS-B, 57, 289-300.
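The simplified calculation above is easy to check by hand or in code (a minimal sketch; the function name is mine, not part of the toolbox):

```python
def simple_fdr(n_tests, p_cutoff, n_discovered):
    """Simplified FDR estimate: expected false discoveries at a p-value
    cutoff, divided by the number of genes actually discovered there."""
    expected_false = n_tests * p_cutoff
    return expected_false / n_discovered

# The slide's example: 48 genes detected at p < .001 on a 12,000-gene chip.
fdr = simple_fdr(12000, 0.001, 48)
print(fdr)  # 0.25, i.e. 25%
```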

False Discovery Rate calculation (full version)

The full Benjamini-Hochberg procedure steps through the ordered p-values; it guarantees that the expected proportion of false discoveries does not exceed the chosen level.

Gene Expression Data Matrix, X (transpose of "Final File" format)

[Diagram: Expression Matrix X, with n Samples as rows and 12,625 Genes as columns. Annotations for each Gene label the columns; information about each Sample labels the rows.]

Analyzing the Data Matrix

- "pre-condition" the Expression Data Matrix
- Select "significant" Genes (False Discovery Rate)
- Select relevant Samples (Outlier rejection, QC)
- Re-order, partition the Genes ("clustering")
- Re-order the Samples
- Visualize the matrix ("heat-map", PCA scatterplot), encode Gene and Sample annotations
- Visualize by Sample (rows of X, scatterplots, line plots)
- Visualize by Gene (cols of X)
- Visualize the Annotations (how?)
- Browse the display for new hypotheses!

Principal Component Analysis

Each Principal Component is an orthogonal, linear combination of the expression levels. For the ith gene chip:

PC(i,j) = sum over genes g of A(g,j) × X(i,g)

In matrix notation:

X = PC * EP

[Diagram: Expression Data Matrix X (n Experiments × 12,625 Genes) equals Principal Components Matrix PC (n Experiments × Components) times Patterns Matrix EP (Components × 12,625 Genes).]

Plot PC(i,1) vs PC(i,2) for each experiment.

Data Matrix (X) equals Principal Components (PC) times Expression Patterns (EP = A^T)
- EP row 1 contains the most important "expression pattern"
- PC col 1 defines how that pattern is manifest in each experiment
- Similarly for EP row 2, PC col 2, etc.
- Only a few patterns are needed to reconstruct data matrix X
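The decomposition X = PC × EP can be illustrated with a singular value decomposition (a NumPy sketch on toy data; the variable names mirror the slides but the code is mine, not the toolbox's):

```python
import numpy as np

# Toy expression matrix: n experiments (rows) x genes (columns).
rng = np.random.default_rng(0)
X = rng.normal(size=(12, 50))
X = X - X.mean(axis=0)        # center each gene before PCA

# SVD: X = U * diag(S) * Vt.  Take PC = U * S (component scores per
# experiment) and EP = Vt (one expression "pattern" per row).
U, S, Vt = np.linalg.svd(X, full_matrices=False)
PC = U * S                    # n experiments x components
EP = Vt                       # components x genes

# Only a few patterns are needed to approximate X:
k = 3
X_approx = PC[:, :k] @ EP[:k, :]

# With all components the reconstruction is exact:
print(np.allclose(PC @ EP, X))  # True
```

Plotting column 1 of PC against column 2, one point per experiment, gives the PCA scatterplot described above.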

GLOBAL DATABASE PCA BI-PLOT (PC2 vs PC3)


Visualize coefficients of the first few "Patterns"; re-order Experiments.

PCA HEATMAP: Data (X) equals Components (PC) times Expression Patterns (EP). U95A Database PCA Heatmap colored by Sample Type (12).

Conclusion: Sample Type and Project determine clusters.

Chip-to-chip normalization, Data transformation

- Signal intensity varies chip-to-chip for a variety of technical reasons.
- Scale adjustments can be made in variety of ways.
- Median adjustment (divide by col median) is commonly used
- Other quantiles (e.g. 75th percentile) may work better

- Log-transform
- spreads data more evenly
- makes variance more uniform

- “Lmed” is median normalized, log transform
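The "Lmed" transform above can be sketched in a few lines (a generic implementation, assuming a genes × chips intensity matrix; the small offset guarding against log of zero is my workaround, not part of the original definition):

```python
import numpy as np

def lmed(X, eps=1.0):
    """Median-normalize each chip (column), then log10-transform.

    X: genes x chips intensity matrix.  Each column is divided by its
    median, then log10 is applied; eps guards against log(0), since
    (as noted above) the log cannot treat zero or negative values.
    """
    med = np.median(X, axis=0)                 # per-chip median
    scaled = X / med                           # divide each column by its median
    return np.log10(np.maximum(scaled, 0.0) + eps)

# Two chips differing only by an overall scale factor become identical:
X = np.array([[10.0,   20.0],
              [100.0,  200.0],
              [1000.0, 2000.0]])
L = lmed(X)
print(np.allclose(L[:, 0], L[:, 1]))  # True
```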

Chip-to-chip normalization, Data transformation (2)

- Quantile normalization (“ranking” the data): every percentile becomes identical across chips
- Quantile normalization may remove technical artifacts (e.g. curvature)
- Variance should be homogeneous across measurement scale
- Variance may be “homogenized” with appropriate transform (e.g. logarithm, square-root, arcsinh)
- “S10” transform -- optimal variance stabilizing, quantile normalizing transform, calibrated to match Log10 over central part of measurement scale
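Quantile normalization ("ranking" the data) can be sketched as follows (a generic implementation that ignores ties, not the toolbox's own code):

```python
import numpy as np

def quantile_normalize(X):
    """Force every chip (column) to share the same distribution.

    Each column is ranked, a reference distribution is formed by
    averaging the sorted columns across chips, and each value is
    replaced by the reference value at its rank.  Ties are broken
    arbitrarily in this sketch.
    """
    ranks = X.argsort(axis=0).argsort(axis=0)          # per-column ranks
    mean_quantiles = np.sort(X, axis=0).mean(axis=1)   # reference distribution
    return mean_quantiles[ranks]

X = np.array([[5.0, 4.0],
              [2.0, 1.0],
              [3.0, 6.0]])
Q = quantile_normalize(X)
# After normalization both columns contain the same set of values:
print(np.sort(Q[:, 0]))  # [1.5 3.5 5.5]
print(np.sort(Q[:, 1]))  # [1.5 3.5 5.5]
```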

Comparison of two chips - Lmed(Signal)

Note deviation from line of identity

Comparison of two chips - 2× limits

- Note deviation from line of identity
- Note nonuniform variance

Median-normalized Log-transform "Lmed"

- Adequate in most cases, BUT…

- Some nonlinearity may remain, requiring further normalization
- Variance is not truly constant, expands at low intensities
- Cannot treat zero or negative values
- Logarithm may not be best transformation
- Median normalization may not always be adequate

Variance Stabilizing Transform (3)

Symmetric Adaptive Transform (S10):

- We start with quantile normalization to convenient distribution
- We further transform to make variance constant with mean
- We adapt the transform to an empirical variance model (requires an experiment with at least 5 to 10 chips)
- We scale transform to match log10 units midrange
- We require symmetry around origin
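The actual S10 transform is calibrated from replicate data, but the principle of a symmetric, variance-stabilizing transform that matches log10 at midrange can be illustrated with a generic arcsinh (this illustrates the idea only; it is not the S10 algorithm, and `c` is a hypothetical calibration constant):

```python
import numpy as np

def vst(x, c=100.0):
    """Generic variance-stabilizing transform (arcsinh).

    Dividing by log(10) makes the result grow like log10(x), up to a
    constant offset, for large intensities.  c is a hypothetical scale
    constant; the real S10 instead fits an empirical variance model
    from an experiment with 5-10 chips.
    """
    return np.arcsinh(x / c) / np.log(10)

x = np.array([-50.0, 0.0, 50.0, 1e6])
y = vst(x)
# Unlike the logarithm, the transform handles zero and negative values,
# and is symmetric around the origin:
print(np.isclose(y[0], -y[2]), y[1] == 0.0)  # True True
```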

Comparison of two chips - Lmed(Signal)

Model the nonlinear relationship

Red line is a plot of quantiles of chip 1 vs quantiles of chip 2.

Comparison of two chips - Quantile normalization

- Second chip is quantile-normalized to first chip
- Curvature is cured!
- Now, can we remove the variable spread?
- Nonuniform variance?

Comparison of two chips - Symmetric Adaptive Transform, base 10 "S10"

- Uses Quantile normalization
- Gives better fit to line of identity
- Adapts scale to give homogeneous variance
- Uniform scatter about line
- Calibrated to match Log10 in middle of scale
- *Munson, P.J. A consistency test for determining the significance of gene expression changes on replicate samples and two convenient variance-stabilizing transformations. in GeneLogic Workshop of Low Level Analysis of Affymetrix GeneChip Data. 2001. Bethesda, MD.

PCA on Lmed transformed data

- 12 Chips
- 3 Groups
- Two apparent outliers
- Groups not well separated
- 1st PC explains 15.3% of variation

PCA on S10 transformed data

- Outliers no longer obvious
- Groups well-separated
- 1st PC explains 30.8% of variation

Comparison of Signal, RMA, S10

Data Summarizing Algorithms

To go from 11 probe pairs to a single number:

- Affymetrix MAS 4.0 (Average difference)
- Affymetrix MAS 5.0 (Signal)
- dChip (Li and Wong, 2001)
- RMA (Irizarry, 2003)
- PLIER (Hubbell, 2004, Affymetrix)
- Transformations of above statistics (Log, Glog, S10, etc.)

Which Algorithm is Best? Latin Square Data Answers the Question

- Spike-in (or Latin Square) study on Affy U133A chip
- 13 concentrations plus “control” spiked into complex HeLa background
- 42 oligos, at 0 and 0.125 to 512 pM
- Concentration doubles at each step
- Three chips run for each concentration
- www.affymetrix.com, "Latin Square Data for Expression Algorithm Assessment"

[Plot: Mean Intensity for Probeset vs. Concentration Number]

Detect 2× changes for each Spike-in using a Volcano Plot: RED = spike-in genes, BLUE = background. Move the selector box to detect more Red and fewer Blue points.

ROC curve for Lmed of Signal

- TP = Red points inside detection box
- FP = Blue points inside detection box

[Plot: Number of True Positives vs. Number of False Positives, for Lmed(Signal)]
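Counting true and false positives inside the volcano-plot detection box, as in the ROC construction above, can be sketched like this (variable names and cutoffs are illustrative assumptions, not the toolbox's):

```python
import numpy as np

def roc_point(fold_change, pvalue, is_spikein, fc_cut=1.0, p_cut=0.05):
    """Count TP/FP inside a volcano-plot detection box.

    A probe set is 'detected' when |log2 fold change| >= fc_cut and its
    p-value <= p_cut.  TP are detected spike-in (red) points; FP are
    detected background (blue) points.  Sweeping the cutoffs traces the
    ROC curve.
    """
    detected = (np.abs(fold_change) >= fc_cut) & (pvalue <= p_cut)
    tp = int(np.sum(detected & is_spikein))
    fp = int(np.sum(detected & ~is_spikein))
    return tp, fp

fc = np.array([2.0, 0.1, 1.5, -1.2])        # log2 fold changes
p = np.array([0.01, 0.2, 0.03, 0.04])       # t-test p-values
spike = np.array([True, True, False, False])  # red vs blue points
print(roc_point(fc, p, spike))  # (1, 2)
```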

Comparison of Algorithms

- RMA
- gives overall best ROC curve
- requires probes on multiple chips be summarized together
- Implemented in Affy EC, R, Bioconductor or ArrayAssistLite

- Signal (MAS5)
- is convenient, available in Affy GCOS software
- summarizes each chip separately
- has expanded variance near baseline
- Lmed(MAS5) gives the worst ROC curve

- S10 transform
- cures variance problem for Signal,
- improves detection efficiency (ROC curve),
- is simple to compute!
