
Dynamical Systems for Extreme Eigenspace Computations


Presentation Transcript


  1. Dynamical Systems for Extreme Eigenspace Computations Maziar Nikpour UCL Belgium

  2. Co-workers • Iven M. Y. Mareels and Jonathan H. Manton, University of Melbourne, Australia. • Vadym Adamyan, Odessa State University, Ukraine. • Uwe Helmke, University of Würzburg, Germany.

  3. Problem • For Hermitian matrices (A, B), with B > 0, find the non-trivial solutions (λ, x) of Ax = λBx with the smallest or largest generalised eigenvalues λ. • n – size of the matrices (A, B). • k – number of desired generalised eigenvalue/eigenvector pairs.
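The equation here is the Hermitian definite pencil Ax = λBx. As a concrete reference point only (a dense, factorisation-based solver, not the approach of this talk), SciPy's scipy.linalg.eigh handles exactly this problem; the matrices below are randomly generated for illustration.

```python
import numpy as np
from scipy.linalg import eigh

n, k = 6, 2
rng = np.random.default_rng(0)

# Random Hermitian A and Hermitian positive-definite B.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = N @ N.conj().T + n * np.eye(n)

# All generalised eigenvalues are real because A is Hermitian and B > 0.
evals, evecs = eigh(A, B)                       # ascending order
lam_small, X_small = evals[:k], evecs[:, :k]    # k smallest pairs
lam_large, X_large = evals[-k:], evecs[:, -k:]  # k largest pairs
print(lam_small, lam_large)
```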

  4. Outline • Introduction • Motivation • Brief history of literature • Penalty function approach • Gradient flow • Convergence • Discrete-time Algorithms • Applications • Conclusions

  5. Motivation • Signal Processing • Telecommunications • Control • Many others…

  6. Brief History of Problem • Numerical Linear Algebra Literature • Methods for general A and B: • QZ algorithm, Moler and Stewart 1973 (what MATLAB does when you type ‘eig’). • Methods for large and sparse A, B: • Trace minimisation method, Sameh & Wisniewski, 1981. • Engineering Literature • Methods largely for computing the largest/smallest generalised eigenvalues adaptively: • Mathew and Reddy 1998 (inflation approach, a special case of the approach in this work). • Strobach, 2000 (tracking algorithms).

  7. Brief History of Problem • Dynamical systems literature • Brockett flow • Oja flow • The above approaches cannot be adapted to the generalised eigenvalue problem without manipulating A and/or B. • A recent paper by Manton et al. presents an approach that can…

  8. Penalty Function Approach • The minimisation of a penalty-based cost function f(A, B), defined for Z in C^{n×k}, can lead to algorithms for computing extreme generalised eigenvalues.
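Purely as an illustration of a penalty-based cost of this kind (the exact formula and penalty weight here are assumptions, not the talk's), one can take f(Z) = tr(Z^H A Z) + (mu/4) ||Z^H B Z - I||_F^2 over Z in C^{n×k}:

```python
import numpy as np

def cost(Z, A, B, mu=10.0):
    """Illustrative penalty cost: tr(Z^H A Z) + (mu/4) * ||Z^H B Z - I||_F^2."""
    C = Z.conj().T @ B @ Z - np.eye(Z.shape[1])
    return np.trace(Z.conj().T @ A @ Z).real + 0.25 * mu * np.linalg.norm(C, "fro") ** 2

def grad(Z, A, B, mu=10.0):
    """Euclidean gradient of the cost above, with C^{n x k} viewed as a real vector space."""
    C = Z.conj().T @ B @ Z - np.eye(Z.shape[1])
    return 2.0 * (A @ Z) + mu * (B @ Z @ C)
```

With mu large enough (or A shifted to be positive definite) this cost is bounded below, and its gradient flow dZ/dt = -grad(Z) is the kind of object analysed on the following slides.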

  9. Dynamical Systems for Numerical Computations • Gradient descent-like flows on a cost function. • Discretisation of the flows. • Efficient numerical algorithms.

  10. Examples • Power flow: • Oja subspace flow: • Brockett flow:
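The formulas for these flows appear on the slide; their standard forms in the literature (which may differ in detail from what was shown) are the power/Rayleigh-quotient flow dx/dt = (A - (x^H A x) I) x, the Oja subspace flow dX/dt = (I - X X^H) A X, and the Brockett double-bracket flow dH/dt = [H, [H, N]]. A minimal sketch, simply Euler-integrating the Oja subspace flow for a symmetric positive definite test matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, dt, steps = 8, 2, 0.01, 5000

M = rng.standard_normal((n, n))
A = M @ M.T / n                                   # symmetric positive definite test matrix
X = np.linalg.qr(rng.standard_normal((n, k)))[0]  # orthonormal starting point

# Euler steps of the Oja subspace flow  dX/dt = (I - X X^T) A X.
for _ in range(steps):
    X = X + dt * ((np.eye(n) - X @ X.T) @ (A @ X))

# The columns of X now approximately span the dominant k-dimensional eigenspace,
# so tr(X^T A X) approaches the sum of the k largest eigenvalues of A.
print(np.trace(X.T @ A @ X), np.sum(np.linalg.eigvalsh(A)[-k:]))
```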

  11. Contributions • Gradient flow on f(A, B) • Discretisation of the gradient flow: • Steepest descent • Conjugate gradient • Stochastic minor/principal component tracking algorithms • The case B = I with Z real has already been treated (see Manton et al. 2003). • Extending the domain to complex matrices complicates the analysis substantially… • Allowing B to be any positive definite matrix expands the range of applications…

  12. Gradient Flow • Main Result: For almost all initial conditions, solutions of the gradient flow converge to a single point in the stable invariant set of the flow.

  13. Gradient Flow • The stable invariant set is:

  14. Critical Points of f(A, B) • N.B. The Hessian of f(A, B) is degenerate at the critical points. • Proposition:

  15. Stability analysis of critical points • Linear stability analysis will not suffice. • Use the center manifold theorem at each critical point. • Proposition: • Why? The nullspace of the Hessian of the cost function equals the tangent space of the critical submanifold.

  16. Stability analysis of critical points • Reduction principle of dynamical systems. [Figure: decomposition into stable, unstable and center directions at a critical point.]

  17. Stability analysis of critical points • Main result follows… • Proposition: level sets are compact => the flow converges to one of the critical components. • Center manifold theorem + reduction principle => solutions converge to a single point on a critical component. • Convergence to the stable invariant set for an open dense set of initial conditions.

  18. Remarks • Conditions used in the proof => f(A, B) is a Morse-Bott function => solutions converge to a single point instead of a set (see Helmke & Moore, 1994). • Also, f(A, B) is a real analytic function (C^{n×k} considered as a real vector space) => convergence to a single point (Łojasiewicz, 1984).

  19. Further Remarks • Generalised eigenvectors are not unique, but convergence to particular generalised eigenvectors can be achieved by the following flow in reduced dimensions, where trunc{X} denotes X with the imaginary components of its diagonal set to 0. • The flow converges to an element of the critical component with real diagonal elements.
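The reduced-dimension flow itself is given on the slide as a formula; only the trunc{·} operation is spelled out here, as a small helper matching the definition above:

```python
import numpy as np

def trunc(X):
    """Return X with the imaginary parts of its diagonal entries set to zero."""
    Y = np.array(X, dtype=complex)
    idx = np.arange(min(Y.shape))
    Y[idx, idx] = Y[idx, idx].real
    return Y
```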

  20. Systems of Flows • Consider the system of cost functions:

  21. Systems of Flows • A system of partial gradient descent flows makes it possible to add or remove components without affecting the computation of the others. • Proposition: Z(t) converges to the smallest generalised eigenvalues for a generic initial condition.

  22. Discrete-time algorithms • Since the flow evolves on a Euclidean space, discretisation is not complicated: • Steepest descent: • Conjugate gradient:
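A minimal steepest-descent sketch, reusing the illustrative penalty cost assumed after slide 8 (the cost, the penalty weight and the backtracking step rule are assumptions, not the talk's exact recursion):

```python
import numpy as np
from scipy.linalg import eigh

def cost(Z, A, B, mu):
    C = Z.conj().T @ B @ Z - np.eye(Z.shape[1])
    return np.trace(Z.conj().T @ A @ Z).real + 0.25 * mu * np.linalg.norm(C, "fro") ** 2

def grad(Z, A, B, mu):
    C = Z.conj().T @ B @ Z - np.eye(Z.shape[1])
    return 2.0 * (A @ Z) + mu * (B @ Z @ C)

rng = np.random.default_rng(2)
n, k, mu = 20, 3, 50.0

M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2                 # Hermitian
N = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = N @ N.conj().T / n + np.eye(n)       # Hermitian positive definite

Z = np.linalg.qr(rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k)))[0]
for _ in range(5000):
    G = grad(Z, A, B, mu)
    f0, g2, t = cost(Z, A, B, mu), np.linalg.norm(G) ** 2, 1.0
    # Backtracking (Armijo) step; only matrix-times-(n x k) products are needed.
    while t > 1e-12 and cost(Z - t * G, A, B, mu) > f0 - 0.5 * t * g2:
        t *= 0.5
    Z = Z - t * G

# The Ritz values of (A, B) on span(Z) approximate the k smallest generalised eigenvalues.
print(eigh(Z.conj().T @ A @ Z, Z.conj().T @ B @ Z, eigvals_only=True))
print(eigh(A, B, eigvals_only=True)[:k])   # dense reference values
```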

  23. Discrete-time algorithms • Can solve the Hermitian definite GEVP without any factorisation or manipulation of A or B. • Only multiplications of the matrices by small (n × k) matrices are required. • Suitable for cases where A and B are large and sparse. • Conjugate gradient algorithm: superlinear convergence but no increase in the order of computational complexity. • Complexity O(n^2 k). • Exact line search can be performed.
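The exact line search claim can be made concrete under the illustrative quartic penalty cost assumed earlier: along any search direction the cost is a quartic polynomial in the real step size, so it can be recovered from five samples and minimised exactly through the roots of its derivative. A sketch under that assumption:

```python
import numpy as np

def exact_step(phi):
    """Exactly minimise a quartic one-dimensional function phi(t) over t >= 0.

    Five samples determine the quartic exactly, so the fitted polynomial is
    valid for all t, not only on the sampled interval.
    """
    ts = np.linspace(0.0, 1.0, 5)
    coeffs = np.polyfit(ts, [phi(t) for t in ts], 4)
    crit = np.roots(np.polyder(coeffs))
    cand = [0.0] + [r.real for r in crit if abs(r.imag) < 1e-9 and r.real > 0]
    return min(cand, key=lambda t: np.polyval(coeffs, t))
```

In the steepest-descent sketch above, the backtracking loop could then be replaced by t = exact_step(lambda s: cost(Z - s * G, A, B, mu)).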

  24. Discrete-time algorithms • Tracking algorithm: based on a signal-plus-noise model. • O(nk^2) complexity when the noise covariance Rnn = I.
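A heavily simplified sketch of what such a tracking recursion can look like: stochastic gradient steps on the illustrative penalty cost with B = I (the Rnn = I case), replacing the covariance matrix by the instantaneous rank-one sample x x^H. This is an illustration in the spirit of the slide, not the talk's algorithm; each update costs O(nk^2).

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, mu, eta = 16, 2, 10.0, 1e-3

# Signal-plus-noise model x = S s + w with unit-variance white noise (Rnn = I);
# the minor components of the covariance of x span the noise subspace.
S = rng.standard_normal((n, 4)) / np.sqrt(n)
Z = (rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))) / np.sqrt(2 * n)

def signal_content(Z):
    """Alignment of Z with the signal subspace (small = Z lies in the noise subspace)."""
    return np.linalg.norm(S.T @ Z) / np.linalg.norm(Z)

print("before adaptation:", signal_content(Z))
for _ in range(50000):
    s = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    x = (S @ s + w)[:, None]
    # Stochastic gradient step: x (x^H Z) costs O(nk); Z (Z^H Z - I) costs O(nk^2).
    G = 2.0 * (x @ (x.conj().T @ Z)) + mu * (Z @ (Z.conj().T @ Z - np.eye(k)))
    Z = Z - eta * G
print("after adaptation:", signal_content(Z))
```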

  25. Conclusion • Proposed a gradient flow for solving the GEVP and derived its convergence theory. • Modular system of flows. • Discretisation: CG and SD algorithms. • Application to minor component tracking.

  26. Questions
