
Presentation Transcript


  1. Why EOFs? Joe Tribbia, NCAR. Random Matrices TOY, 5/9/2007

  2. Why EOFs? Outline • Background history of EOFs in meteorology • 1-dimensional example: Burgers' equation • EOFs as a random matrix • EOFs for taxonomy • EOFs for dimension reduction/basis • Summary

  3. Background in meteorology • 1956 report by E. N. Lorenz • Used EOFs to objectively classify low-frequency weather patterns • Application was to "long-range prediction," i.e., monthly weather outlooks • Through Don Gilman, John Kutzbach, and Bob Livezey, became the basis for monthly and seasonal outlooks

  4. E. N. Lorenz: EOFs and dynamical systems. Simplest chaotic system: the Lorenz attractor, a metaphor for the unpredictability of weather.

  5. E. N. Lorenz (continued). The covariance fits an ellipsoid to the attractor; the EOFs are the principal axes of the ellipsoid.
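The ellipsoid-fitting idea can be sketched numerically: integrate the Lorenz-63 system, estimate the sample covariance of points on the attractor, and diagonalize it. The principal axes are the eigenvectors of that covariance. This is a minimal sketch assuming the standard Lorenz parameters (sigma = 10, rho = 28, beta = 8/3) and a simple forward-Euler integration, not the slide's actual computation.

```python
import numpy as np

def lorenz_step(x, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (illustrative only)."""
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

x = np.array([1.0, 1.0, 1.0])
for _ in range(2000):                # spin up onto the attractor
    x = lorenz_step(x)

samples = []
for _ in range(20000):               # sample the attractor over time
    x = lorenz_step(x)
    samples.append(x.copy())
X = np.array(samples)

C = np.cov(X, rowvar=False)          # 3x3 sample covariance
evals, evecs = np.linalg.eigh(C)
# Columns of evecs are the principal axes of the covariance ellipsoid;
# evals give the variance along each axis.
```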

  6. 1-dimensional example: sample over time

  7. 1-dimensional example: Burgers' equation. The Fourier coefficients are given by orthogonal projection: multiply the equation by exp(-ikx) and integrate. The 2K+1 independent degrees of freedom define u at 2K+1 points on a circle. This defines a random vector u(n) with sample covariance U(n,m) = <u(n)u(m)>. Because of finite sampling, U is a random matrix.
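The construction on this slide can be sketched as follows: represent u at 2K+1 equally spaced points on the circle, obtain a Fourier coefficient by the stated projection (multiply by exp(-ikx) and integrate, here a discrete sum), and form the sample covariance U(n,m) = <u(n)u(m)> from an ensemble of snapshots. The snapshots below are a synthetic stand-in (random superpositions of the resolved modes), an assumption in place of the slide's actual Burgers integration.

```python
import numpy as np

K = 8
N = 2 * K + 1                          # 2K+1 points on the circle
x = 2 * np.pi * np.arange(N) / N
rng = np.random.default_rng(1)

# Synthetic stand-in for snapshots u(x, t): random superpositions of the
# resolved Fourier modes with a decaying spectrum (an assumption).
n_samples = 500
snapshots = np.zeros((n_samples, N))
for s in range(n_samples):
    u = np.zeros(N)
    for k in range(1, K + 1):
        amp = rng.standard_normal() / k
        phase = rng.uniform(0, 2 * np.pi)
        u += amp * np.cos(k * x + phase)
    snapshots[s] = u

def fourier_coeff(u, k):
    """Orthogonal projection onto mode k: multiply by exp(-ikx) and
    'integrate' (discrete sum over the circle)."""
    return np.sum(u * np.exp(-1j * k * x)) / N

# Sample covariance U(n, m) = <u(n) u(m)> over the ensemble; with finitely
# many samples this estimate is itself a random matrix.
U = snapshots.T @ snapshots / n_samples
```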

  8. 1-dimensional example (cont.). Diagonalizing U(n,m) determines its eigenvalues and eigenvectors. The sum of the eigenvalues is the trace of U, an invariant corresponding to the total variance. The diagonalization splits the variance into independent pieces, each eigenvalue being the variance of one independent piece. The eigenvectors are the spatial structures corresponding to each independent variance. The eigenvectors are orthogonal and can be used as a basis for u.
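The diagonalization step can be verified directly: the eigenvalues sum to the trace (total variance), the eigenvectors are orthonormal, and any realization of u expands exactly in the EOF basis. The data below are synthetic (a few dominant structures plus noise, an assumption chosen so that a handful of EOFs carry most of the variance).

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_samples = 17, 400
# Synthetic anisotropic data (an assumption) so a few EOFs dominate.
modes = rng.standard_normal((N, 3))
amps = rng.standard_normal((n_samples, 3)) * np.array([3.0, 2.0, 1.0])
data = amps @ modes.T + 0.1 * rng.standard_normal((n_samples, N))

U = data.T @ data / n_samples            # sample covariance U(n, m)
evals, eofs = np.linalg.eigh(U)          # diagonalize
evals, eofs = evals[::-1], eofs[:, ::-1] # sort by descending variance

# Trace of U = total variance, split into independent pieces by eigenvalue.
total_variance = np.trace(U)
# Eigenvectors are orthonormal, so they form a basis: any realization u
# expands as u = eofs @ coeffs, where coeffs are the PC amplitudes.
u = data[0]
coeffs = eofs.T @ u
reconstruction = eofs @ coeffs
```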

  9. EOF spectrum and wavenumber spectrum. The leading EOFs each represent 30% of the variance. [Figures: EOF1, EOF2]

  10. EOFs and PCs. [Figures: EOF1 with PC1, EOF2 with PC2]

  11. Looking for variance structure: taxonomy in climate. Arctic Oscillation: EOF #1 with 19% of the variance.

  12. Looking for structure: taxonomy

  13. Looking for dynamical structure: bump hunting

  14. Searching for statistical structure beyond Gaussian. Is there a reason for EOF dominance beyond linear dynamics? Comparison of scatter plots for the Lorenz attractor and climate data: the climate data is much more homogeneous, i.e., linear dynamics?

  15. Looking for predictable structure

  16. 1-dimensional example: Burgers' equation. The Fourier coefficients are given by orthogonal projection: multiply the equation by exp(-ikx) and integrate. The 2K+1 independent degrees of freedom define u at 2K+1 points on a circle. This defines a random vector u(n) with sample covariance U(n,m) = <u(n)u(m)>.

  17. 1-dimensional example (cont.). Diagonalizing U(n,m) determines its eigenvalues and eigenvectors. The sum of the eigenvalues is the trace of U, an invariant corresponding to the total variance. The diagonalization splits the variance into independent pieces, each eigenvalue being the variance of one independent piece. The eigenvectors are the spatial structures corresponding to each independent variance. The eigenvectors are orthogonal and can be used as a basis for u.

  18. Dimension reduction: EOF basis
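Dimension reduction with an EOF basis amounts to truncating the expansion: keep only the leading r EOFs, project the data onto them, and reconstruct. The retained variance is the sum of the leading eigenvalues over the trace. The steep synthetic spectrum below is an assumption chosen so truncation works well; real fields may decay more slowly.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_samples, r = 40, 600, 5
# Synthetic field with a steep variance spectrum (an assumption).
basis = np.linalg.qr(rng.standard_normal((N, N)))[0]
scales = 1.0 / (1.0 + np.arange(N)) ** 1.5
data = (rng.standard_normal((n_samples, N)) * scales) @ basis.T

C = data.T @ data / n_samples
evals, eofs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]
evals, eofs = evals[order], eofs[:, order]

E_r = eofs[:, :r]                    # truncated EOF basis (leading r modes)
pcs = data @ E_r                     # reduced coordinates (PC amplitudes)
recon = pcs @ E_r.T                  # reconstruction in physical space
retained = evals[:r].sum() / evals.sum()
err = np.linalg.norm(data - recon) / np.linalg.norm(data)
```

With a steep spectrum, a handful of EOFs retains most of the variance, which is what makes the EOF basis a "natural" choice for dimension reduction.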

  19. Sampling strategies for small samples in high-dimensional systems: dimension reduction. From the Liouville equation, importance sampling, entropy considerations.

  20. Bred vectors and singular vectors. Singular vector (upper), bred vector (lower), basic-state jet. Singular vectors are the fastest-growing structures into the future; bred vectors are the fastest-growing structures from the past. Both are EOFs of the linearly predicted error covariance.
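The singular-vector idea can be sketched with a linear propagator M mapping an initial perturbation to a final one: the fastest-growing initial structure is the leading right singular vector of M, and equivalently an eigenvector of the linearly predicted covariance M^T M. The 2x2 matrix below is an illustrative assumption, not a real jet model.

```python
import numpy as np

# Hypothetical tangent-linear propagator: perturbation at final time
# = M @ perturbation at initial time (illustrative values only).
M = np.array([[1.0, 2.0],
              [0.0, 0.5]])

Uo, s, Vt = np.linalg.svd(M)
v1 = Vt[0]                             # leading initial-time singular vector
growth = np.linalg.norm(M @ v1)        # grows by the leading singular value

# Equivalently, initial-time singular vectors are eigenvectors (EOFs) of
# the linearly predicted error covariance M.T @ M.
evals, evecs = np.linalg.eigh(M.T @ M)
```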

  21. Concluding remarks • EOFs can be motivated from a dynamical-systems perspective • EOFs are useful for elucidating structure (taxonomy, predictability, non-Gaussianity) • EOFs are useful for dimension reduction (natural basis, importance sampling) • Limits to utility: intrinsic Gaussianity and linearity assumptions, prior information needed
