
Random Matrix Laws & Jacobi Operators


Presentation Transcript


  1. Random Matrix Laws&Jacobi Operators Alan Edelman MIT May 19, 2014 joint with Alex Dubbs and Praveen Venkataramana (acknowledging gratefully the help from Bernie Wang)

  2. Conference Blurb • Recent years have seen significant progress in the understanding of asymptotic spectral properties of random matrices and related systems. • One particularly interesting aspect is the multifaceted connection with properties of orthogonal polynomial systems, encoded in Jacobi matrices (and their analogs)

  3. At a Glance

  4. Jacobi Operators (Symmetric Tridiagonal Format) • Three-term recurrence coefficients for orthogonal polynomials, displayed as a Jacobi matrix • Classically derived through Gram-Schmidt…
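As a reminder of the standard setup (the slide's own image is not reproduced in this transcript): orthonormal polynomials p_0, p_1, … for a weight w satisfy a three-term recurrence whose coefficients fill a symmetric tridiagonal (Jacobi) matrix,

    x p_n(x) = b_{n+1} p_{n+1}(x) + a_n p_n(x) + b_n p_{n-1}(x),
    \qquad
    J = \begin{pmatrix} a_0 & b_1 &     &        \\
                        b_1 & a_1 & b_2 &        \\
                            & b_2 & a_2 & \ddots \\
                            &     & \ddots & \ddots \end{pmatrix}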

  5. Jacobi Matrix

  6. Gil Strang’s Favorite Matrix, encoded in Cupcakes

  7. Computing the Jacobi encoding from the moments [Golub, Welsch 1969] • Form the Hankel matrix H of moments • R = Cholesky(H)
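A minimal Python/NumPy sketch of this moment-to-Jacobi recipe (the slide contains no code; the function name and interface here are illustrative, and working directly from moments is famously ill-conditioned for large n):

    import numpy as np
    from scipy.linalg import cholesky

    def jacobi_from_moments(m):
        # m = [m_0, m_1, ..., m_{2n}]: moments of the weight.
        # Returns the diagonal (alpha) and off-diagonal (beta) of the
        # n x n Jacobi matrix via Hankel matrix -> Cholesky factor R,
        # following Golub & Welsch (1969).
        n = (len(m) - 1) // 2
        H = np.array([[m[i + j] for j in range(n + 1)]
                      for i in range(n + 1)], dtype=float)
        R = cholesky(H)                      # upper triangular, H = R.T @ R
        d, s = np.diag(R), np.diag(R, 1)
        alpha = np.empty(n)
        alpha[0] = s[0] / d[0]
        alpha[1:] = s[1:n] / d[1:n] - s[:n - 1] / d[:n - 1]
        beta = d[1:n] / d[:n - 1]            # off-diagonal Jacobi entries
        return alpha, beta

For example, the standard normal moments 1, 0, 1, 0, 3, 0, 15 give alpha = (0, 0, 0) and beta = (1, √2), the start of the Hermite recurrence of slide 9.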

  8. Computing the Jacobi encoding from the weight (Continuous Lanczos) • Inner product: • Computes Jacobi parameters and orthogonal polynomials • Discrete version very successful for eigenvalues of sparse symmetric matrices • May be computed with Chebfun
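A rough sketch of the same computation starting from the weight rather than the moments (a discretized Stieltjes procedure standing in for the Chebfun/continuous-Lanczos run on the slides; the finite interval [a, b], the quadrature resolution, and the function name are assumptions made for illustration):

    import numpy as np

    def stieltjes_from_weight(w, a, b, n, nquad=4000):
        # Replace the continuous inner product <f, g> = int f g w dx on
        # [a, b] by a fine Gauss-Legendre rule, then run the three-term
        # recurrence (Stieltjes / Lanczos) to get alpha_k and beta_k.
        x, wq = np.polynomial.legendre.leggauss(nquad)
        x = 0.5 * (b - a) * x + 0.5 * (b + a)        # nodes mapped to [a, b]
        wq = 0.5 * (b - a) * wq * w(x)               # quadrature weights * w
        alpha, beta = np.zeros(n), np.zeros(n)       # beta[0] is unused
        p_prev, p = np.zeros_like(x), np.ones_like(x)
        norm_prev, norm = 1.0, wq.sum()              # <p_0, p_0>
        for k in range(n):
            alpha[k] = (wq * x * p * p).sum() / norm
            if k > 0:
                beta[k] = norm / norm_prev
            p, p_prev = (x - alpha[k]) * p - beta[k] * p_prev, p
            norm_prev, norm = norm, (wq * p * p).sum()
        return alpha, np.sqrt(beta[1:])              # off-diagonal Jacobi entries

For the semicircle weight w(x) = sqrt(4 - x^2)/(2π) on [-2, 2] this returns alpha ≈ 0 and off-diagonal entries ≈ 1, the purely Toeplitz Jacobi matrix that reappears below.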

  9. Example: Normal Distribution Moments → Hermite Recurrence

  10. Example Chebfun Lanczos Run [Verbatim from Pedro Gonnet’s November 2011 run] Thanks to Bernie Wang

  11. Too Small

  12. RMT Big Laws: Toeplitz + Boundary [Anshelevich, 2010] (Free Meixner) [E, Dubbs, 2014] • That’s pretty special! • Corresponds to 2nd-order differences with a boundary
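Schematically (an illustration of the “Toeplitz plus boundary” shape; the particular boundary entries a_0, b_0 and bulk entries a, b depend on the law):

    J = \begin{pmatrix} a_0 & b_0 &   &   &        \\
                        b_0 & a   & b &   &        \\
                            & b   & a & b &        \\
                            &     & b & a & \ddots \\
                            &     &   & \ddots & \ddots \end{pmatrix}

i.e. a constant second-order difference operator except in the first row and column.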

  13. Anshelevich Theory [Anshelevich, 2010] • Describes all weight functions whose Jacobi encoding is Toeplitz off the first row and column • This is a terrific result, which directly lets us characterize the big laws • McKay is often thrown in with Wachter, but seems worth distinguishing as special • Known as “free Meixner,” but I prefer to emphasize the Toeplitz-plus-boundary aspect

  14. Semicircle Law
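For reference (a standard fact; the slide's figure is not in the transcript): the semicircle law has density sqrt(4 - x^2)/(2π) on [-2, 2], and its Jacobi matrix is purely Toeplitz, with zero diagonal and constant off-diagonal 1:

    J = \begin{pmatrix} 0 & 1 &   &        \\
                        1 & 0 & 1 &        \\
                          & 1 & 0 & \ddots \\
                          &   & \ddots & \ddots \end{pmatrix}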

  15. Marcenko-Pastur Law

  16. McKay Law

  17. Wachter Law

  18. What random matrices are these other three? [Anshelevich, 2010]

  19. Another interesting Random Matrix Law • The singular values (squared) of • Density: • Moments:

  20. Jacobi Matrix J =

  21. Jacobi Matrix J =

  22. Implication? • The four big laws are Toeplitz + size-1 border • The svd law seems to be heading towards Toeplitz • Enough laws “want” to be Toeplitz • Idea: A moment algorithm that “looks for” an eventually Toeplitz form

  23. Algorithm • Compute truncated Jacobi from a few initial moments • 1a. (or run a few steps of Lanczos on the density) • Compute g(x) = … • Approximate density = … • 5x5 example

  24. Algorithm • Compute truncated Jacobi from a few initial moments • 1a. (or run a few steps of Lanczos on the density) • Compute g(x) = … • Approximate density = … • k x k example, 5x5 example • “It’s like replacing .1666… with 1/6 and not .16” • “No need to move off real axis” • Replaces the infinitely many equal α’s and β’s
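A sketch of one way to carry out the last two steps numerically (the slide's exact g(x) is not in the transcript; this uses the standard continued-fraction expansion of the resolvent of a Jacobi operator whose tail is assumed Toeplitz, and the function name and interface are illustrative):

    import numpy as np

    def density_from_truncated_jacobi(x, alpha, beta, a, b):
        # alpha: first k diagonal entries, beta: first k-1 off-diagonal
        # entries of the Jacobi matrix; beyond level k the operator is
        # assumed Toeplitz with diagonal a and off-diagonal b.
        z = np.asarray(x, dtype=complex)
        # Toeplitz tail: g solves g = 1/(z - a - b^2 g); this branch has
        # nonpositive imaginary part on the support, so the density is >= 0.
        g = ((z - a) - np.sqrt((z - a) ** 2 - 4 * b ** 2)) / (2 * b ** 2)
        # Wind the continued fraction back through the boundary levels:
        # g_j = 1 / (z - alpha_j - beta_j^2 * g_{j+1}).
        for aj, bj in zip(alpha[::-1], np.r_[beta, b][::-1]):
            g = 1.0 / (z - aj - bj ** 2 * g)
        return -g.imag / np.pi     # no need to move off the real axis

With alpha = beta = [] and (a, b) = (0, 1) this returns the semicircle density; feeding it the first few coefficients from the jacobi_from_moments sketch above, with (a, b) read off from the last computed entries, is in the spirit of the k x k example on the slide.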

  25. Mathematica

  26. Fast convergence! [figure legend: Theory, g[2] approx]

  27. Even the normal distribution • (not particularly well approximated by Toeplitz) • It’s not a random matrix law!

  28. Moments

  29. Free Cumulants

  30. Wigner and Narayana [Narayana photo unavailable] • Marcenko-Pastur = Limiting Density for Laguerre • Moments are Narayana Polynomials! • Narayana probably would not have known [Wigner, 1957] (Narayana was 27)
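Spelling out the Narayana claim (in the normalization where the Marcenko-Pastur law has mean 1 and ratio parameter c; the slide's own formulas are not in the transcript, so this normalization is an assumption):

    m_k(c) = \sum_{j=1}^{k} N(k, j)\, c^{\,j-1},
    \qquad
    N(k, j) = \frac{1}{k} \binom{k}{j} \binom{k}{j-1}

so m_1 = 1, m_2 = 1 + c, m_3 = 1 + 3c + c^2, m_4 = 1 + 6c + 6c^2 + c^3.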

  31. At a Glance

  32. Multivariate Orthogonal Polynomials • In random matrix theory and elsewhere • The orthogonal polynomials associated with the weight of general beta distributions

  33. Classical Orthogonal Polynomials • Triangular sparsity structure of the monomial expansion: • Hermite: even/odd: • Generally Pn goes from 0 to n • Tridiagonal sparsity of the 3-term recurrence

  34. Classical Orthogonal Polynomials • Triangular sparsity structure of the monomial expansion: • Hermite: even/odd: • Generally Pn goes from 0 to n • Tridiagonal sparsity of the 3-term recurrence • Extensions to the multivariate case?? Before extending, a few slides about these multivariate polynomials and their applications.

  35. Hermite Polynomials become Multivariate Hermite Polynomials • Hermite polynomials: orthogonal with respect to […], indexed by degree k = 0, 1, 2, 3, … • Multivariate Hermite polynomials: symmetric scalar-valued polynomials, orthogonal with respect to […], indexed by partitions (multivariate degree): (), (1), (2), (1,1), (3), (2,1), (1,1,1), …

  36. Monomials become Jack Polynomials • Monomials: orthogonal on the unit circle • Jack polynomials: symmetric scalar-valued polynomials, orthogonal on copies of the unit circle with respect to the circular ensemble measure, indexed by partitions (multivariate degree): (), (1), (2), (1,1), (3), (2,1), (1,1,1), …

  37. Multivariate Hermite Polynomials (β=1) [Chikuse, 1992] • X … matrix • Polynomial evaluated at the eigenvalues of X

  38. (Selberg Integrals and) Combinatorics of multivariate polynomials: Graphs on Surfaces (Thanks to Mike LaCroix) • Hermite: Maps with one Vertex Coloring • Laguerre: Bipartite Maps with multiple Vertex Colorings • Jacobi: We know it’s there, but don’t have it quite yet.

  39. β=2 Special case • Balderrama, Graczyk and Urbina (original proof) • β=2 (only!): explicit formula for multivariate orthogonal polynomials in terms of univariate orthogonal polynomials • Generalizes the Schur polynomial construction in an important way • New proof reduces to orthogonality of the Schur polynomials

  40. Classical Orthogonal Polynomials • Triangular sparsity structure of the monomial expansion: • Hermite: even/odd: • Generally Pn goes from 0 to n • Tridiagonal sparsity of the 3-term recurrence • Extensions to the multivariate case?? Before extending, a few slides about these multivariate polynomials and their applications.

  41. What we know about the first question • Sometimes follows the Young diagram • Hermite always follows the Young diagram for all β • Laguerre always follows the Young diagram for all β • (Baker and Forrester 1998) [figure: Young diagram]

  42. What we know • Young diagram for Hermite and Laguerre for all β • Young diagram for all weight functions for β=2 (can be derived from Schur polynomials) • Numerical evidence suggests the answer does not follow the Young diagram for all weight functions for all β • Open questions remain

  43. The second question • What is the sparsity pattern of the analog of […] = […]?

  44. Answer: You, your parents, and your children in the Young diagram

  45. At a Glance

  46. Hermite Jacobi Matrix
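For reference (the slide's figure is not in the transcript; this is the standard matrix for the standard normal weight): the Hermite Jacobi matrix has zero diagonal and off-diagonal entries √1, √2, √3, …

    J = \begin{pmatrix} 0        & \sqrt{1} &          &          \\
                        \sqrt{1} & 0        & \sqrt{2} &          \\
                                 & \sqrt{2} & 0        & \sqrt{3} \\
                                 &          & \sqrt{3} & \ddots   \end{pmatrix}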

  47. The Jacobi matrix defines the moments of the normal. Similarly, there is a recipe for […] that does not require knowledge of the multivariate β=2 Hermite weight.

  48. Theorem: This is true for any weight function for which you have the Jacobi matrix • Proof: (Venkataramana, E 2014)

  49. Proof Idea • We can use the wonderful formula […] • to compute integrals of any symmetric polynomial against […] • without needing to know w(x) explicitly
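The univariate analog of this idea may help fix intuition (this is not the slide's multivariate formula; it is the familiar Golub-Welsch fact that the Jacobi matrix alone yields a Gauss rule, so polynomials can be integrated against the weight without ever evaluating it):

    import numpy as np

    def gauss_from_jacobi(alpha, beta, m0=1.0):
        # Eigenvalues of the n x n Jacobi matrix are the Gauss nodes;
        # m0 times the squared first components of its orthonormal
        # eigenvectors are the Gauss weights (Golub & Welsch, 1969).
        # The rule integrates polynomials of degree <= 2n - 1 exactly.
        J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        nodes, V = np.linalg.eigh(J)
        return nodes, m0 * V[0, :] ** 2

    # Example: the Hermite Jacobi matrix integrates x^4 against the
    # standard normal without touching the weight; the exact value is 3.
    alpha = np.zeros(5)
    beta = np.sqrt(np.arange(1.0, 5.0))
    x, w = gauss_from_jacobi(alpha, beta)
    print(w @ x ** 4)    # ~ 3.0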

  50. q-Hermite Jacobi Matrix • q → 1 recovers classical Hermite
