

MSCL Analyst’s Toolbox, Part 2


Jennifer Barb, Zoila G. Rangel, Peter Munson

March 2007

Mathematical and Statistical Computing Laboratory

Division of Computational Biosciences


Statistical topics

  • Quality Control Charts

  • False Discovery Rate

  • Principal Components Analysis explained

  • PCA Heatmap

  • Data normalization, transformation

  • Affymetrix probesets and “Probe-level” analysis

  • MAS5, RMA, S10 compared


Gene Expression Microarrays

  • Started in mid-1990s, exponential growth in popularity

  • High-throughput -- measures 10,000s of genes at once

  • Very noisy -- systematic and random errors

    • Chip manufacturing, printing artifacts

    • RNA sample quality issues

    • Sample preparation, amplification, labeling reaction problems

    • Hybridization reaction variability

    • Linearity of response, saturation, background

  • Affymetrix has controlled chip quality well.


  • Statistical methods are critical in analysis!

  • Quality Control is Essential!


Quality Control Plotsfor Parameters RawQ, ScaleFactor

Quality Control Plots for Parameters RawQ, ScaleFactor (continued)

[Chart annotations: “New Scanner Installed”; “Scanner burn-in”?]


Quality Control


Experimental Designs for Gene Expression

  • Cross-sectional clinical studies from 2 or more patient groups or tissues; identify markers, prognostic indicators.

  • Animal model: samples compared between treatments, groups, or over time; identify genes involved in disease process.

  • Intervention Trial: collect blood samples pre/post treatment or over time, identify (and rationalize) genes involved.

  • Cell culture: Treat cells in culture, identify genes and patterns of response. Complex study designs possible.

  • Genetic Knock-out: Perturb genotype, give treatment, investigate expression response, in animal or cells.


Gene Expression Analysis Strategies

  • Clinical Studies:

    • Exploratory analysis, Hierarchical Cluster, Heat maps

    • Sample size often insufficient

    • Two-sample tests, Discriminant Analysis, “machine learning” approaches to find prognostic factors

  • Designed studies: Analysis plan should follow design

    • T-tests, one-way ANOVA to select significantly changing genes

    • Blocking to account for experimental batch

    • Two-way ANOVA for complete two-factor experiments

    • Regression (etc.) for time-course experiments

  • Corrections for multiple-comparison (20,000 genes tested)

    • False Discovery Rate

  • Interpretation of gene lists (open-ended problem!)

P-values Should Be Uniformly Distributed

[Histogram of p-values, with cutoff line at p < .05]

  • Note excess of small p-values in 45,000 probe sets

  • Indicates presence of significant, differentially expressed genes
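The point above can be checked with a small simulation (a sketch using numpy/scipy, not part of the original slides): t-tests on pure-noise data give roughly uniform p-values, while truly shifted genes produce the excess of small p-values the slide describes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 5,000 "null" genes: two groups of 5 chips drawn from the same distribution.
null_data = rng.normal(size=(5000, 10))
p_null = stats.ttest_ind(null_data[:, :5], null_data[:, 5:], axis=1).pvalue

# Under the null, p-values are uniform: roughly 5% fall below 0.05.
print(round(float(np.mean(p_null < 0.05)), 2))

# Same data, but 10% of genes get a true shift: excess of small p-values.
signal_data = null_data.copy()
signal_data[:500, 5:] += 2.0
p_sig = stats.ttest_ind(signal_data[:, :5], signal_data[:, 5:], axis=1).pvalue
print(np.mean(p_sig < 0.05) > np.mean(p_null < 0.05))
```

The second comparison mimics the histogram on the slide: the spike of p-values near zero is what signals differentially expressed genes.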

False Discovery Rate Calculation (simplified version)

FDR* = (Expected number of false discoveries) / (Number discovered)
     = ((Number of tests) × (p-value cutoff)) / (Number discovered at this p-value)

Example: 48 genes detected at p < .001 on a chip with 12,000 genes:

FDR = (12,000 × .001) / 48 = 25%

*Benjamini, Y., Hochberg, Y. (1995) JRSS-B, 57, 289-300.
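The arithmetic above fits in a one-line function (a sketch; the function name is ours, not from the toolbox):

```python
def simple_fdr(n_tests: int, p_cutoff: float, n_discovered: int) -> float:
    """Simplified FDR: expected false discoveries at a p-value cutoff,
    divided by the number of genes actually discovered at that cutoff."""
    expected_false = n_tests * p_cutoff  # E[false positives] under the null
    return expected_false / n_discovered

# Slide example: 48 genes at p < .001 on a 12,000-gene chip.
print(simple_fdr(12_000, 0.001, 48))  # -> 0.25, i.e. 25%
```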

False Discovery Rate Calculation (full version)

Ranking the p-values and applying the step-up rule of Benjamini and Hochberg, we now have the guarantee that the expected FDR is no greater than the chosen level q.
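A minimal sketch of the Benjamini-Hochberg step-up rule from the cited 1995 paper (the function name and toy p-values are ours):

```python
def bh_threshold(pvalues, q=0.05):
    """Benjamini-Hochberg step-up: return the largest p-value cutoff
    such that the expected FDR is controlled at level q.
    Reject every hypothesis with p <= the returned cutoff."""
    m = len(pvalues)
    cutoff = 0.0
    # Find the largest k with p_(k) <= k * q / m over the sorted p-values.
    for k, p in enumerate(sorted(pvalues), start=1):
        if p <= k * q / m:
            cutoff = p
    return cutoff

# Toy example: three clearly significant tests among ten.
pvals = [0.001, 0.002, 0.003, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(bh_threshold(pvals, q=0.05))  # -> 0.003
```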

Gene Expression Data Matrix, X (transpose of “Final File” format)

[Diagram: Expression Matrix X, with annotations for each gene along one margin and information about each sample along the other]


Analyzing the Data Matrix

  • "pre-condition" the Expression Data Matrix

  • Select "significant" Genes (False Discovery Rate)

  • Select relevant Samples (Outlier rejection, QC)

  • Re-order, partition the Genes ("clustering")

  • Re-order the Samples

  • Visualize the matrix ("heat-map", PCA scatterplot), encode Gene and Sample annotations

  • Visualize by Sample (rows of X, scatterplots, line plots)

  • Visualize by Gene (cols of X)

  • Visualize the Annotations (how?)

  • Browse the display for new hypotheses!


Principal Component Analysis

Each Principal Component is an orthogonal, linear combination of the expression levels. For the ith gene chip:

PC(i,j) = sum over k of a(k,j) · X(i,k)

In matrix notation:

PC = X · A

(X: Expression Data Matrix; PC: Principal Components matrix; A: Patterns matrix)

Data Can Be Reconstructed from PCs!

A was chosen so that A·A^T is the identity matrix, so:

X = PC · A^T = PC · EP

Data Matrix (X) equals Principal Components (PC) times Expression Patterns (EP = A^T)

[Diagram: X = PC × EP; plot PC(i,1) vs. PC(i,2) for each experiment]

  • EP row 1 contains the most important “expression pattern”

  • PC col 1 defines how that pattern is manifest in each experiment

  • Similarly for EP row 2, PC col 2, etc.

  • Only a few patterns needed to reconstruct data matrix X
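The decomposition and reconstruction described above can be sketched with an SVD in numpy (illustrative only; this is not the toolbox's code, and the matrix sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 200))          # 12 chips x 200 genes
Xc = X - X.mean(axis=0)                 # center each gene (column)

# SVD supplies the orthogonal patterns: the rows of At are "EP".
U, s, At = np.linalg.svd(Xc, full_matrices=False)
PC = Xc @ At.T                          # principal components: PC = X * A
EP = At                                 # expression patterns:  EP = A^T

# Because A has orthonormal columns, the data are exactly reconstructed.
print(np.allclose(Xc, PC @ EP))         # True

# Fraction of variance explained by each PC (the "38%", "12%" labels).
var_explained = s**2 / np.sum(s**2)
```

Keeping only the first few rows of EP and columns of PC gives the low-rank reconstruction the slide refers to.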

Principal Components Analysis

[Scatterplot: PC 1 (38% of variance) vs. PC 2 (12%)]

Global Database (HG-U95A) PCA Bi-plot

Each spot is one chip.

Global Database PCA Bi-plot: PC2 vs. PC3

PCA Heatmap: Data (X) equals Components (PC) times Expression Patterns (EP)

Visualize the coefficients of the first few “patterns”; re-order the experiments.

U95A Database PCA Heatmap, Colored by Sample Type (12)

Sample Type and Project determine the clusters.


PCA Heatmap of Entire Database

469 chips, 468 components: 5,933,750 values!


Data Normalization and Transformation

Chip-to-chip Normalization, Data Transformation

  • Signal intensity varies chip-to-chip for a variety of technical reasons.

    • Scale adjustments can be made in variety of ways.

    • Median adjustment (divide by col median) is commonly used

    • Other quantiles (e.g. the 75th percentile) may work better

  • Log-transform

    • spreads data more evenly

    • makes variance more uniform

  • “Lmed” is the median-normalized log transform
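A minimal sketch of the “Lmed” idea as described above: divide each chip (column) by its median, then take log10 (the function name and toy values are ours):

```python
import numpy as np

def lmed(X):
    """Median-normalized log transform ("Lmed"), a sketch:
    divide each chip (column) by its median, then take log10.
    Note: like any log transform, it cannot handle zero or negative values."""
    med = np.median(X, axis=0)          # one median per chip
    return np.log10(X / med)

# Two chips whose overall intensity differs by a scale factor of 2:
chip1 = np.array([50.0, 100.0, 200.0, 400.0])
chip2 = 2.0 * chip1
X = np.column_stack([chip1, chip2])

L = lmed(X)
# After median normalization, the scale difference is gone.
print(np.allclose(L[:, 0], L[:, 1]))    # True
```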

Chip-to-chip Normalization, Data Transformation (2)

  • Quantile normalization (“ranking” the data): every percentile becomes identical across chips

  • Quantile normalization may remove technical artifacts (e.g. curvature)

  • Variance should be homogeneous across measurement scale

  • Variance may be “homogenized” with appropriate transform (e.g. logarithm, square-root, arcsinh)

  • “S10” transform -- optimal variance stabilizing, quantile normalizing transform, calibrated to match Log10 over central part of measurement scale
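Quantile normalization as described in the first bullet can be sketched in a few lines (an illustration of the idea, not the S10 implementation):

```python
import numpy as np

def quantile_normalize(X):
    """Quantile normalization, a sketch: force every chip (column)
    to share one distribution -- the mean of the sorted columns."""
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)   # rank within each chip
    target = np.sort(X, axis=0).mean(axis=1)            # reference distribution
    return target[ranks]

X = np.array([[5.0, 4.0],
              [2.0, 1.0],
              [3.0, 4.5],
              [4.0, 2.0]])
Q = quantile_normalize(X)
# Every percentile is now identical across chips:
print(np.allclose(np.sort(Q[:, 0]), np.sort(Q[:, 1])))  # True
```

Because only ranks are preserved within each chip, any monotone technical distortion (e.g. the curvature mentioned later) is removed.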


Data Transformation and Normalization

Log(x / median x) Transform (“Lmed”)

Comparison of Two Chips: MAS5 Signal

Comparison of Two Chips: Log10(Signal)

Comparison of Two Chips: Lmed(SG)

Note deviation from line of identity

2 comparison of two chips 2 x limits

2 Comparison of two chips - 2 x limits

  • Note deviation from line of identity

  • Note nonuniform variance

Median-normalized Log Transform (“Lmed”)

  • Adequate in most cases


  • Some nonlinearity may remain, requiring further normalization

  • Variance is not truly constant, expands at low intensities

  • Cannot treat zero or negative values

  • Logarithm may not be best transformation

  • Median normalization may not always be adequate


Variance Stabilizing Transform (3)

Symmetric Adaptive Transform (S10):

  • We start with quantile normalization to convenient distribution

  • We further transform to make variance constant with mean

  • We adapt the transform to an empirical variance model (requires an experiment with at least 5 to 10 chips)

  • We scale transform to match log10 units midrange

  • We require symmetry around origin

Comparison of Two Chips: Lmed(Signal)

Model the nonlinear relationship. The red line plots the quantiles of chip 1 vs. the quantiles of chip 2.

Comparison of Two Chips: Quantile Normalization

  • Second chip is quantile-normalized to first chip

  • Curvature is cured!

  • Now, can we remove the variable spread?

  • Nonuniform variance?

Comparison of Two Chips: Symmetric Adaptive Transform, Base 10 (“S10”)

  • Uses Quantile normalization

  • Gives better fit to line of identity

  • Adapts scale to give homogeneous variance

  • Uniform scatter about line

  • Calibrated to match Log10 in middle of scale

  • *Munson, P.J. (2001) A consistency test for determining the significance of gene expression changes on replicate samples and two convenient variance-stabilizing transformations. GeneLogic Workshop on Low Level Analysis of Affymetrix GeneChip Data, Bethesda, MD.


Symmetric Adaptive Transform (“S10”)

Symmetric Adaptive Transform (“S10”) (2)



PCA on Lmed transformed data

  • 12 Chips

  • 3 Groups

  • Two apparent outliers

  • Groups not well separated

  • 1st PC explains 15.3% of variation


PCA on S10 transformed data

  • Outliers no longer obvious

  • Groups well-separated

  • 1st PC explains 30.8% of variation

Fold Change due to Drug: Log10 Scale

[Scatterplot: Log Fold Change, Drug vs. Control, Repl. 1 (x) vs. LFC, Repl. 2 (y)]

Fold Change due to Drug: S10 Scale

[Scatterplot: S10 Fold Change (SFC), Drug vs. Control, Repl. 1 (x) vs. SFC, Repl. 2 (y)]


Variance Stabilizing Transforms (1)


Variance Stabilizing Transforms (2)


Log of “Signal”: Variance Model

[Plots: Lmed transform value vs. Signal value; Std Dev of Lmed vs. mean Lmed value]

S10(“Signal”): Variance Model

[Plots: S10 transform value vs. Lmed transform value; Std Dev of S10 vs. mean S10 value]

“Probe-Level” Analysis

Comparison of Signal, RMA, S10


Affymetrix Technology


Affymetrix uses multiple probes per gene


Data Summarizing Algorithms

To go from 11 probe pairs to a single number:

  • Affymetrix MAS 4.0 (Average difference)

  • Affymetrix MAS 5.0 (Signal)

  • dChip (Li and Wong, 2001)

  • RMA (Irizarry, 2003)

  • PLIER (Hubbell, 2004, Affymetrix)

  • Transformations of above statistics (Log, Glog, S10, etc.)

Which Algorithm Is Best? Latin Square Data Answers the Question

  • Spike-in (or Latin Square) study on Affy U133A chip

  • 13 concentrations plus “control” spiked into complex HeLa background

  • 42 oligos, at concentrations of 0 and 0.125–512 pM

  • Concentration doubles at each step

  • Three chips run for each concentration (“Latin Square Data for Expression Algorithm Assessment”)

[Plot: mean intensity for each probeset vs. concentration number]
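The doubling series described above can be generated directly (a sketch assuming the series runs from 0.125 pM up to 512 pM, doubling at each step, plus a 0 pM control):

```python
# Spike-in concentration ladder: 0 pM control plus 13 doubling steps.
concentrations = [0.0] + [0.125 * 2**k for k in range(13)]

print(len(concentrations) - 1)   # 13 non-zero concentrations
print(concentrations[-1])        # 512.0 (pM)
```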

Detect 2× Changes for Each Spike-in Using Volcano Plot

Move the selector box to detect more red, fewer blue points. RED: spike-in genes; BLUE: background.

ROC Curve for Lmed of Signal

TP=Red points inside detection box

FP=Blue points inside detection box

[ROC plot: number of true positives vs. number of false positives]

ROC for S10(Signal)

[ROC plot: number of true positives vs. number of false positives]
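The TP/FP counting behind these ROC curves can be sketched as follows (toy data; the scores and counts are ours, not the actual Latin Square results):

```python
import numpy as np

def roc_points(is_spike, score, thresholds):
    """Sketch of the ROC construction on the slides: at each detection
    threshold, count true positives (spike-in genes detected) and
    false positives (background genes detected)."""
    pts = []
    for t in thresholds:
        detected = score >= t
        tp = int(np.sum(detected & is_spike))
        fp = int(np.sum(detected & ~is_spike))
        pts.append((fp, tp))
    return pts

# Toy data: 4 spike-ins with high fold-change scores, 6 background genes.
is_spike = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
score = np.array([3.0, 2.5, 2.0, 1.5, 1.0, 0.8, 0.5, 0.4, 0.3, 0.2])
print(roc_points(is_spike, score, thresholds=[2.0, 1.0, 0.1]))
# -> [(0, 3), (1, 4), (6, 4)]
```

Sweeping the threshold traces the curve; an algorithm whose curve rises faster (more TPs per FP) detects spike-ins more efficiently.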


Lmed(Signal) Details


S10(Signal) Details


RMA Details


Comparison of Algorithms

  • RMA

    • gives overall best ROC curve

    • requires that probes on multiple chips be summarized together

    • Implemented in Affy EC, R, Bioconductor or ArrayAssistLite

  • Signal (MAS5)

    • is convenient, available in Affy GCOS software

    • summarizes each chip separately

    • has expanded variance near baseline

    • Lmed(MAS5) gives the worst ROC curve

  • S10 transform

    • cures variance problem for Signal,

    • improves detection efficiency (ROC curve),

    • is simple to compute!
