
Advances in Random Matrix Theory: Let there be tools


Presentation Transcript


  1. Advances in Random Matrix Theory: Let there be tools
  Alan Edelman
  Brian Sutton, Plamen Koev, Ioana Dumitriu, Raj Rao and others
  MIT: Dept of Mathematics, Computer Science AI Laboratories
  World Congress, Bernoulli Society
  Barcelona, Spain
  Wednesday July 28, 2004

  2. Tools
  • So many applications …
  • Random matrix theory: catalyst for 21st-century special functions, analytical techniques, statistical techniques
  • In addition to mathematics and papers:
  • Need tools for the novice!
  • Need tools for the engineers!
  • Need tools for the specialists!

  3. Themes of this talk
  • Tools for general beta. What is beta? Think of it as a measure of (inverse) volatility in “classical” random matrices.
  • Tools for complicated derived random matrices
  • Tools for numerical computation and simulation

  4. Wigner’s Semi-Circle
  • The classical & most famous random eigenvalue theorem
  • Let S = random symmetric Gaussian
  • MATLAB: A=randn(n); S=(A+A')/2;
  • S is known as the Gaussian Orthogonal Ensemble
  • The normalized eigenvalue histogram is a semi-circle
  • Precise statements require n → ∞, etc.

  n=20; s=30000; d=.05;   % matrix size, samples, sample dist
  e=[];                   % gather up eigenvalues
  im=1;                   % imaginary(1) or real(0)
  for i=1:s,
    a=randn(n)+im*sqrt(-1)*randn(n);
    a=(a+a')/(2*sqrt(2*n*(im+1)));
    v=eig(a)'; e=[e v];
  end
  hold off; [m x]=hist(e,-1.5:d:1.5);
  bar(x,m*pi/(2*d*n*s));
  axis('square'); axis([-1.5 1.5 -1 2]);
  hold on; t=-1:.01:1; plot(t,sqrt(1-t.^2),'r');

  5. Symmetric matrix to tridiagonal form: same eigenvalue distribution as GOE, with O(n) storage!! and O(n²) compute.
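  A minimal MATLAB sketch of that reduction (the matrix size is an arbitrary choice): for a symmetric input, hess returns a tridiagonal matrix with the same spectrum.

  n = 500;
  A = randn(n); S = (A + A')/2;        % GOE sample
  T = hess(S);                         % symmetric input => tridiagonal output
  norm(sort(eig(S)) - sort(eig(T)))    % ~1e-12: the spectrum is preserved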

  6. General beta
  • beta = 1: reals, 2: complexes, 4: quaternions
  • The bidiagonal version corresponds to the Wishart matrices of statistics
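  A hedged sketch of sampling the general-beta tridiagonal (Hermite) ensemble along the lines of the Dumitriu–Edelman construction; chi2rnd requires the Statistics Toolbox, and the sizes are illustrative.

  beta = 4; n = 6;
  d  = sqrt(2)*randn(n,1);                % diagonal entries ~ N(0,2)
  od = sqrt(chi2rnd(beta*(n-1:-1:1)))';   % off-diagonal entries ~ chi_{beta*k}
  H  = (diag(d) + diag(od,1) + diag(od,-1))/sqrt(2);
  eig(H)                                  % general-beta analog of a GOE spectrum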

  7. Tools
  • Motivation: A condition number problem
  • Jack & Hypergeometric of Matrix Argument
  • MOPS: Ioana Dumitriu’s talk
  • The Polynomial Method
  • The tridiagonal numerical 10⁹ trick

  8. Tools
  • Motivation: A condition number problem
  • Jack & Hypergeometric of Matrix Argument
  • MOPS: Ioana Dumitriu’s talk
  • The Polynomial Method
  • The tridiagonal numerical 10⁹ trick

  9. Numerical Analysis: Condition Numbers
  • κ(A) = “condition number of A”
  • If A = UΣV' is the SVD, then κ(A) = σmax/σmin.
  • Alternatively, κ(A) = √( λmax(A'A) / λmin(A'A) )
  • One number that measures digits lost in finite precision and general matrix “badness”
  • Small = good
  • Large = bad
  • The condition number of a random matrix???
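  A quick MATLAB illustration that the definitions agree (a minimal sketch; the matrix size is arbitrary):

  A = randn(5);
  s = svd(A);
  kappa1 = s(1)/s(end)                            % sigma_max / sigma_min
  kappa2 = sqrt(max(eig(A'*A))/min(eig(A'*A)))    % via eigenvalues of A'A
  cond(A)                                         % MATLAB's built-in, same value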

  10. P(<n) 0.02 P(< 10n)0.44 P(<10n)0.80 Von Neumann & co. estimates (1947-1951) • “For a ‘random matrix’ of order n the expectation value has been shown to be about n” Goldstine, von Neumann • “… we choose two different values of , namely n and 10n” Bargmann, Montgomery, vN • “With a probability ~1 …  < 10n” Goldstine, von Neumann X 

  11. Random condition numbers, n → ∞: the distribution of κ/n. Experiment with n = 200.
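  A minimal sketch of that experiment (the sample count is an arbitrary choice):

  n = 200; s = 1000; k = zeros(1,s);
  for i = 1:s
      k(i) = cond(randn(n));
  end
  hist(k/n, 50)    % most mass at small values, with a long right tail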

  12. Finite n: n = 10, 25, 50, 100.

  13. Condition Number Distributions
  • Real n × n, n → ∞: P(κ/n > x) ≈ 2/x
  • Complex n × n, n → ∞: P(κ/n > x) ≈ 4/x²
  • Generalizations: β (1 = real, 2 = complex); finite matrices; rectangular m × n
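  A hedged numerical check of the two tail laws (matrix size and sample count are arbitrary):

  n = 100; s = 4000; x = 10;
  kr = zeros(1,s); kc = zeros(1,s);
  for i = 1:s
      kr(i) = cond(randn(n));                 % real Ginibre, beta = 1
      kc(i) = cond(randn(n) + 1i*randn(n));   % complex Ginibre, beta = 2
  end
  [mean(kr/n > x), 2/x]      % both ≈ 0.2
  [mean(kc/n > x), 4/x^2]    % both ≈ 0.04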

  14. Condition Number Distributions
  • Real n × n, n → ∞: P(κ/n > x) ≈ 2/x
  • Complex n × n, n → ∞: P(κ/n > x) ≈ 4/x²
  • Square, n → ∞: P(κ/n > x) ≈ (2β^(β-1)/Γ(β))/x^β (all betas!!)
  • General formula: P(κ > x) ~ Cµ/x^(β(n-m+1)), where µ is the (β(n-m+1)/2)-th moment of the largest eigenvalue of W_(m-1,n+1)(β) and C is a known geometrical constant.
  • The density of the largest eigenvalue of W is known in terms of 1F1((β/2)(n+1); (β/2)(n+m-1); −(x/2)I_(m-1)), from which µ is available.
  • The Tracy-Widom law probably applies for all beta for large m, n; Johnstone has shown it at least for beta = 1, 2.

  15. Tools
  • Motivation: A condition number problem
  • Jack & Hypergeometric of Matrix Argument
  • MOPS: Ioana Dumitriu’s talk
  • The Polynomial Method
  • The tridiagonal numerical 10⁹ trick

  16. Multivariate Orthogonal Polynomials & Hypergeometrics of Matrix Argument
  • Ioana Dumitriu’s talk
  • The important special functions of the 21st century
  • Begin with a weight w(x) on I
  • ∫ p_κ(x) p_λ(x) Δ(x)^β ∏_i w(x_i) dx_i = δ_κλ
  • Jack polynomials: orthogonal for w = 1 on the unit circle. Analogs of x^m.

  17. Multivariate Hypergeometric Functions

  18. Multivariate Hypergeometric Functions
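  For reference, the hypergeometric function of matrix argument is commonly written as a series of Jack polynomials C_κ with generalized Pochhammer symbols (a)_κ; this standard definition is supplied here as context:

  \[
  {}_pF_q^{(\beta)}(a_1,\dots,a_p;\,b_1,\dots,b_q;\,X)
    = \sum_{k=0}^{\infty} \sum_{\kappa \vdash k}
      \frac{(a_1)_\kappa^{(\beta)} \cdots (a_p)_\kappa^{(\beta)}}
           {(b_1)_\kappa^{(\beta)} \cdots (b_q)_\kappa^{(\beta)}}
      \,\frac{C_\kappa^{(\beta)}(X)}{k!}
  \]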

  19. Wishart (Laguerre) Matrices!
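  A minimal MATLAB sketch of the kind of matrix meant here, for beta = 1 (the dimensions are arbitrary):

  m = 4; n = 7;
  A = randn(m,n);
  W = A*A';      % m x m Wishart (Laguerre) matrix with n degrees of freedom
  eig(W)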

  20. Plamen’s clever idea

  21. Tools
  • Motivation: A condition number problem
  • Jack & Hypergeometric of Matrix Argument
  • MOPS: Ioana Dumitriu’s talk
  • The Polynomial Method
  • The tridiagonal numerical 10⁹ trick

  22. MOPS (Dumitriu et al.): symbolic

  23. Symbolic MOPS applications
  A=randn(n); S=(A+A')/2;
  trace(S^4)
  det(S^3)
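  MOPS returns such moments in closed form; as a hedged cross-check, a Monte Carlo estimate of one of them, E[trace(S⁴)] for the GOE (the sizes are arbitrary):

  n = 3; s = 1e5; t = 0;
  for i = 1:s
      A = randn(n); S = (A + A')/2;
      t = t + trace(S^4);
  end
  t/s     % compare against the closed form from MOPS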

  24. Symbolic MOPS applications
  β=3; hist(eig(S))

  25. Smallest eigenvalue statistics
  A=randn(m,n); hist(min(svd(A).^2))
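  A runnable version of that fragment (the dimensions and sample count are arbitrary choices):

  m = 4; n = 10; s = 5000; smin = zeros(1,s);
  for i = 1:s
      smin(i) = min(svd(randn(m,n)))^2;    % smallest eigenvalue of A*A'
  end
  hist(smin, 50)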

  26. Tools
  • Motivation: A condition number problem
  • Jack & Hypergeometric of Matrix Argument
  • MOPS: Ioana Dumitriu’s talk
  • The Polynomial Method -- Raj!
  • The tridiagonal numerical 10⁹ trick

  27. RM Tool – Raj! Courtesy of the Polynomial Method

  28. Tools
  • Motivation: A condition number problem
  • Jack & Hypergeometric of Matrix Argument
  • MOPS: Ioana Dumitriu’s talk
  • The Polynomial Method
  • The tridiagonal numerical 10⁹ trick

  29. Everyone’s Favorite Tridiagonal
  (1/n²) · tridiag(1, −2, 1) ≈ d²/dx²
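  A small numerical sketch of why this matrix is a favorite: its top eigenvalues approach those of d²/dx² on [0,1] with Dirichlet boundary conditions, namely −(kπ)² (the size is an arbitrary choice):

  n = 200; e = ones(n,1);
  L = n^2 * full(spdiags([e -2*e e], -1:1, n, n));   % (1/h^2)*tridiag(1,-2,1)
  ev = sort(eig(L), 'descend');
  [ev(1:3)'; -([1 2 3]*pi).^2]                       % top three vs -(k*pi)^2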

  30. Everyone’s Favorite Tridiagonal, plus noise
  (1/n²) · tridiag(1, −2, 1) + (1/(βn)^(1/2)) · (diagonal noise) ≈ d²/dx² + β^(−1/2) dW

  31. Stochastic Operator Limit
  d²/dx² − x + (2/√β) dW

  H_n^β ~ (1/√(2nβ)) ·
  [ N(0,2)       χ_((n-1)β)
    χ_((n-1)β)   N(0,2)       χ_((n-2)β)
                 ⋱            ⋱            ⋱
                 χ_(2β)       N(0,2)       χ_β
                              χ_β          N(0,2) ]

  H_n^β ≈ H_n^∞ + (2/√(nβ)) G_n

  32. Largest Eigenvalue Plots

  33. MATLAB
  beta=1; n=1e9;
  opts.disp=0; opts.issym=1;
  alpha=10; k=round(alpha*n^(1/3));               % cutoff parameter: k ~ 10*n^(1/3)
  d=sqrt(chi2rnd(beta*(n:-1:(n-k-1))))';          % top of the chi off-diagonal
  H=spdiags(d,1,k,k)+spdiags(randn(k,1),0,k,k);   % sparse k x k upper-left corner
  H=(H+H')/sqrt(4*n*beta);                        % symmetrize and rescale
  eigs(H,1,1,opts)                                % largest eigenvalue, shift-invert about 1

  34. Tricks to get O(n⁹) speedup
  • Sparse matrix storage (only O(n) storage is used)
  • Tridiagonal ensemble formulas (any beta is available, thanks to the tridiagonal ensemble)
  • The Lanczos algorithm for eigenvalue computation (this computes the extreme eigenvalue faster than typical general-purpose eigensolvers)
  • The shift-and-invert accelerator to Lanczos and Arnoldi (since we know the eigenvalues are near 1, we can accelerate the convergence to the largest eigenvalue)
  • The ARPACK software package, as made available seamlessly in MATLAB (the Arnoldi package contains state-of-the-art data structures and numerical choices)
  • The observation that if k = 10n^(1/3), then the largest eigenvalue is determined numerically by the top k × k segment of the n × n matrix (an interesting mathematical statement related to the decay of the Airy function)

  35. Random matrix tools!
