Introduction to Model Order Reduction, II.2: The Projection Framework Methods. Luca Daniel, Massachusetts Institute of Technology, with contributions from Alessandra Nardi, Joel Phillips, and Jacob White.

Projection Framework: Non-invertible Change of Coordinates. Note: the reduced state has dimension q << N.


Projection Framework: first a non-invertible change of coordinates (projection of the state onto the reduced space), then equation testing (projection of the residual).

Approaches for picking V and U • Use eigenvectors of the system matrix (modal analysis) • Use frequency-domain data: compute k samples of the system response, then use the SVD to pick q < k important vectors • Use time-series data: compute k snapshots, then use the SVD to pick q < k important vectors. The data-driven variants go under several names: POD (Proper Orthogonal Decomposition), SVD (Singular Value Decomposition), KLD (Karhunen-Loève Decomposition), or PCA (Principal Component Analysis).

Approaches for picking V and U • Use eigenvectors of the system matrix • POD or SVD or KLD or PCA • Use Krylov subspace vectors (moment matching) • Use singular vectors of the system Gramians product (Truncated Balanced Realizations)

A canonical form for model order reduction. Assuming A is non-singular, we can cast the linear dynamical system into a canonical form for moment-matching model order reduction. Note: this step is not strictly necessary; it just keeps the notation simple for educational purposes.
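In generic symbols (a sketch; the slides' own notation may differ slightly), the step looks like this:

```latex
% Descriptor system, with A nonsingular:
E\dot{x} = Ax + bu, \qquad y = c^{T}x
% Premultiplying by A^{-1} gives the canonical form:
A^{-1}E\,\dot{x} = x + A^{-1}b\,u
% Transfer function and its moment expansion around s = 0:
H(s) = c^{T}(sE - A)^{-1}b
     = -c^{T}\bigl(I - sA^{-1}E\bigr)^{-1}A^{-1}b
     = -\sum_{k\ge 0} c^{T}\bigl(A^{-1}E\bigr)^{k}A^{-1}b\,s^{k}
```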

Intuitive view of the Krylov subspace choice for the change-of-basis projection matrix U. Taylor series expansion: change basis and use only the first few vectors of the Taylor series expansion; this is equivalent to matching the first derivatives (moments) around the expansion point.
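Concretely, assuming the canonical form with M = A^{-1}E and r = A^{-1}b (symbols illustrative), keeping the first q Taylor vectors means choosing U as a basis of the Krylov subspace:

```latex
\mathcal{K}_{q}(M, r) = \mathrm{span}\{\, r,\ Mr,\ M^{2}r,\ \dots,\ M^{q-1}r \,\},
\qquad M = A^{-1}E,\quad r = A^{-1}b
% Projecting onto K_q matches the first q moments
% m_k = -c^{T} M^{k} r of H(s) around s = 0.
```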

Moment matching around non-zero frequencies • Instead of expanding only around s = 0, we can expand around other points • For each expansion point the problem can then be put again into the canonical form
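One way to see the shift (a sketch in the same generic symbols): rewrite sE - A so that the expansion point s_0 plays the role that s = 0 played before:

```latex
H(s) = c^{T}(sE - A)^{-1}b
     = c^{T}\bigl((s - s_{0})E - (A - s_{0}E)\bigr)^{-1}b
% With \sigma = s - s_0 and A' = A - s_0 E nonsingular, this is the
% canonical form again, now with M' = A'^{-1}E and r' = A'^{-1}b.
```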

Comparing Padé approximations and the Krylov Subspace Projection Framework • Padé approximations: moment matching at a single DC point, and numerically very ill-conditioned! • Krylov Subspace Projection Framework: multipoint moment matching, AND numerically very stable!

Approaches for picking V and U • Use eigenvectors of the system matrix • POD or SVD or KLD or PCA • Use Krylov subspace vectors (moment matching) • general Krylov subspace methods • case 1: Arnoldi • case 2: PVL • case 3: multipoint moment matching • moment matching preserving passivity: PRIMA • Use singular vectors of the system Gramians product (Truncated Balanced Realizations)

Lemma: U^T U = Iq when U is orthonormal. Note that in general U U^T ≠ I_N, BUT in the moment-matching argument we only ever need to substitute U^T U = Iq.

Need for orthonormalization of U. The vectors {b, Eb, ..., E^(k-1)b} cannot be computed and used directly: they will quickly line up with the dominant eigenspace!

Need for orthonormalization of U (cont.) • In the change-of-basis matrix U that transforms to the new reduced state space, we can use ANY columns that span the reduced state space • In particular, we can ORTHONORMALIZE the Krylov subspace vectors

Orthonormalization of U: The Arnoldi Algorithm (with computational complexity)

Normalize first vector                        O(n)
For i = 1 to q
    Generate new Krylov subspace vector       sparse: O(n), dense: O(n^2)
    For j = 1 to i
        Orthogonalize new vector              O(q^2 n) over all iterations
    Normalize new vector                      O(n)

Generating vectors for the Krylov subspace • Most of the computational cost is spent in calculating each new Krylov vector • Set up and solve a linear system, e.g. using GCR • With a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n) • The total complexity for calculating the projection matrix Uq is then O(qn)

What about computing the reduced matrix? The orthonormalization of the i-th column of Uq, and hence of all columns of Uq, is exactly what the Arnoldi iteration performs. So we don't need to compute the reduced matrix explicitly: we already have it as the q x q Hessenberg matrix produced by Arnoldi.

Approaches for picking V and U • Use eigenvectors of the system matrix • POD or SVD or KLD or PCA • Use Krylov subspace vectors (moment matching) • general Krylov subspace methods • case 1: Arnoldi • case 2: PVL • case 3: multipoint moment matching • moment matching preserving passivity: PRIMA • Use singular vectors of the system Gramians product (Truncated Balanced Realizations)


Case #3: Intuitive view of the subspace choice for general expansion points • Instead of expanding only around s = 0, we can expand around other points • For each expansion point the problem can then be put again into the canonical form


Approaches for picking V and U • Use eigenvectors of the system matrix • POD or SVD or KLD or PCA • Use Krylov subspace vectors (moment matching) • general Krylov subspace methods • case 1: Arnoldi • case 2: PVL • case 3: multipoint moment matching • moment matching preserving passivity: PRIMA • Use singular vectors of the system Gramians product (Truncated Balanced Realizations)

Sufficient conditions for passivity • Sufficient condition for passivity: A is negative semidefinite • Note that these are NOT necessary conditions (a common misconception)

Example (heat problem): finite-difference system from the Poisson equation, with heat injected at one end. We already know that the finite-difference matrix is positive semidefinite; hence A, and likewise E = A^-1, is negative semidefinite.

Sufficient conditions for passivity • Sufficient condition for passivity: E is negative semidefinite • Note that these are NOT necessary conditions (a common misconception)

Congruence transformations preserve negative (or positive) semidefiniteness • Def.: a congruence transformation maps E to U^T E U, with the same matrix U on both sides • Note: case #1 in the projection framework, V = U, produces congruence transformations • Lemma: a congruence transformation preserves the negative (or positive) semidefiniteness of the matrix • Proof: just rename y = Ux; then x^T (U^T E U) x = y^T E y

Congruence transformation preserves negative definiteness of E (hence passivity and stability). If we use V = U, the reduced matrix U^T E U is a congruence transformation (q x n times n x n times n x q) • Then we lose half of the degrees of freedom, i.e. we match only q moments instead of 2q • But if the original matrix E is negative semidefinite, so is the reduced one; hence the reduced system is passive and stable

Sufficient conditions for passivity • Sufficient conditions for passivity: E is positive semidefinite and A is negative semidefinite • Note that these are NOT necessary conditions (a common misconception)

Example: state-space model from MNA of R, L, C circuits. Lemma: when using MNA with positive R, L, C element values, A is negative semidefinite. For immittance systems in MNA form: A is negative semidefinite and E is positive semidefinite.

PRIMA preserves passivity • The main difference between case #1 and PRIMA: • case #1 applies the projection framework to the canonical-form matrices • PRIMA applies the projection framework to the original matrices E and A • PRIMA preserves passivity because: • it uses Arnoldi with U = V, so the projection becomes a congruence transformation • E and -A produced by electromagnetic analysis are typically positive semidefinite • the input matrix must be equal to the output matrix

Conclusions • Reduction via eigenmodes: expensive and inefficient • Reduction via rational function fitting (point matching): inaccurate in between points, numerically ill-conditioned • Reduction via quasi-convex optimization: quite efficient and accurate • Reduction via moment matching, Padé approximations: better behavior, but covers a small frequency band and is numerically very ill-conditioned • Reduction via moment matching, Krylov Subspace Projection Framework: allows multipoint-expansion moment matching (wider frequency band), numerically very robust and computationally very efficient • PVL is more efficient for models used in the frequency domain • use PRIMA to preserve passivity if the model is for a time-domain simulator