## Introduction to Model Order Reduction II.2 The Projection Framework Methods


**Introduction to Model Order Reduction II.2 The Projection Framework Methods**
Luca Daniel, Massachusetts Institute of Technology
with contributions from: Alessandra Nardi, Joel Phillips, Jacob White

**Projection Framework: Non-invertible Change of Coordinates**
- Approximate the original state x (length N) by x ≈ U_q x̂, where x̂ is the reduced state
- Note: q << N

**Projection Framework**
- Original system
- Substitute x ≈ U_q x̂
- Note: now there are few variables (q << N) in the state, but still thousands of equations (N)

**Projection Framework (cont.)**
- Reduction of the number of equations: test by multiplying by V_q^T
- If V_q and U_q are chosen biorthogonal, V_q^T U_q = I_q

**Projection Framework (graphically)**
- The (q x N) test matrix V_q^T, the (N x N) system matrix, and the (N x q) basis U_q combine into a (q x q) reduced matrix

**Projection Framework**
- Equation testing (projection)
- Non-invertible change of coordinates (projection)

**Approaches for picking V and U**
- Use eigenvectors of the system matrix (modal analysis)
- Use frequency-domain data: compute samples of the response, then use the SVD to pick q < k important vectors
- Use time-series data: compute snapshots, then use the SVD to pick q < k important vectors
- Point matching (II.2.b)
- These data-driven choices are known as POD (Proper Orthogonal Decomposition), SVD (Singular Value Decomposition), KLD (Karhunen-Loève Decomposition), or PCA (Principal Component Analysis)

**Approaches for picking V and U**
- Use eigenvectors of the system matrix
- POD or SVD or KLD or PCA
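As a concrete illustration of the projection step described above, the following NumPy sketch (my own toy example with made-up matrices and sizes, not the lecture's code) reduces a stand-in system from N equations to q:

```python
import numpy as np

# Illustrative sketch only (made-up matrices): approximate the original
# state x in R^N by x ~= U x_hat with x_hat in R^q, q << N, then "test"
# the N equations with V^T to obtain a q x q reduced system.
rng = np.random.default_rng(0)
N, q = 1000, 4
E = rng.standard_normal((N, N))                    # stand-in N x N system matrix
b = rng.standard_normal(N)                         # stand-in input vector

U, _ = np.linalg.qr(rng.standard_normal((N, q)))   # non-invertible change of coordinates
V = U                                              # simplest biorthogonal choice: V^T U = I

E_r = V.T @ E @ U   # (q x N)(N x N)(N x q) -> q x q reduced matrix
b_r = V.T @ b       # reduced input, length q
```

Note that with an orthonormal U and the choice V = U, biorthogonality V^T U = I holds automatically.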
- Use Krylov subspace vectors (moment matching)
- Use singular vectors of the system Gramians product (truncated balanced realizations)

**A canonical form for model order reduction**
- Assuming A is non-singular, we can cast the dynamical linear system into a canonical form for moment-matching model order reduction
- Note: this step is not necessary; it just keeps the notation simple for educational purposes

**Intuitive view of Krylov subspace choice for change of base**
- Taylor-series expansion of the transfer function around the expansion point
- Change base and use only the first few vectors of the Taylor series expansion as the projection matrix U
- This is equivalent to matching the first derivatives (moments) around the expansion point

**Aside on Krylov Subspaces: Definition**
- The order-k Krylov subspace generated from matrix A and vector b is defined as K_k(A, b) = span{b, Ab, A^2 b, ..., A^(k-1) b}

**Moment matching around non-zero frequencies**
- Instead of expanding around only s = 0, we can expand around other points s_0
- For each expansion point the problem can then be put again in the canonical form

**Projection Framework: Moment Matching Theorem (E. Grimme '97)**
- If K_q(A^(-1), A^(-1)b) is contained in the column span of U_q and K_q(A^(-T), A^(-T)c) is contained in the column span of V_q, then a total of 2q moments of the transfer function will match

**Combine point and moment matching: multipoint moment matching**
- Multiple expansion points give a larger frequency band
- Moment (derivative) matching gives more accurate behavior in between expansion points

**Compare Padé Approximations and Krylov Subspace Projection Framework**
- Padé approximations: moment matching at a single DC point; numerically very ill-conditioned!
- Krylov subspace projection framework: multipoint moment matching AND numerically very stable!

**Approaches for picking V and U**
- Use eigenvectors of the system matrix
- POD or SVD or KLD or PCA
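To make the moment-matching idea concrete, here is a small hedged NumPy experiment (system and sizes invented for illustration): a one-sided projection onto the Krylov subspace K_q(A^(-1), A^(-1)b) reproduces the first q moments of the transfer function at s = 0.

```python
import numpy as np

# Illustrative experiment (made-up stable test system, not the lecture's code).
rng = np.random.default_rng(0)
n, q = 40, 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well-damped system matrix
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Krylov subspace K_q(A^{-1}, A^{-1}b): these vectors carry the first q
# Taylor coefficients (moments) of the transfer function around s = 0.
vecs = [np.linalg.solve(A, b)]
for _ in range(q - 1):
    vecs.append(np.linalg.solve(A, vecs[-1]))
U, _ = np.linalg.qr(np.column_stack(vecs))          # orthonormalize: U^T U = I

A_r, b_r, c_r = U.T @ A @ U, U.T @ b, U.T @ c       # one-sided (V = U) projection

def moments(A, b, c, k):
    """First k moments c^T A^{-(j+1)} b (up to sign) of c^T (sI - A)^{-1} b at s = 0."""
    out, v = [], np.linalg.solve(A, b)
    for _ in range(k):
        out.append(c @ v)
        v = np.linalg.solve(A, v)
    return np.array(out)

m_full = moments(A, b, c, q)       # moments of the order-n model
m_red = moments(A_r, b_r, c_r, q)  # moments of the order-q model: they agree
```

With a second subspace on the output side (V built from A^T and c), the Grimme theorem quoted above raises this to 2q matched moments.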
- Use Krylov subspace vectors (moment matching)
  - general Krylov subspace methods
  - case 1: Arnoldi
  - case 2: PVL
  - case 3: multipoint moment matching
  - moment matching preserving passivity: PRIMA
- Use singular vectors of the system Gramians product (truncated balanced realizations)

**Special simple case #1: expansion at s = 0, V = U, orthonormal U^T U = I**
- If V = U with U^T U = I and the columns of U span the order-q Krylov subspace K_q(A^(-1), A^(-1)b), then the first q moments (derivatives) of the reduced system match

**Algebraic proof of case #1: expansion at s = 0, V = U, orthonormal U^T U = I**
- Apply k times the lemma in the next slide

**Lemma**
- Note that in general U U^T is not the identity, BUT U^T U = I_q because U is orthonormal; the proof substitutes U^T U = I_q

**Need for orthonormalization of U**
- The vectors {b, Eb, ..., E^(k-1)b} cannot be computed directly: they quickly line up with the dominant eigenspace!

**Need for orthonormalization of U (cont.)**
- In the "change of base" matrix U transforming to the new reduced state space, we can use ANY columns that span the reduced state space
- In particular we can ORTHONORMALIZE the Krylov subspace vectors

**Orthonormalization of U: The Arnoldi Algorithm**
- Normalize the first vector: O(n)
- For i = 1 to q:
  - Generate a new Krylov subspace vector: O(n) sparse, O(n^2) dense
  - For j = 1 to i: orthogonalize the new vector against the previous ones: O(q^2 n) over the whole loop
  - Normalize the new vector: O(n)

**Generating vectors for the Krylov subspace**
- Most of the computation cost is spent in generating each new vector: set up and solve a linear system using GCR
- If we have a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n)
- The total complexity for calculating the projection matrix U_q is O(qn)

**What about computing the reduced matrix?**
- The orthonormalization coefficients of the i-th column of U_q, collected over all columns, already form the reduced matrix
- So we don't need to compute the reduced matrix: we have it already

**Approaches for picking V and U**
- Use eigenvectors of the system matrix
- POD or SVD or KLD or PCA
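A runnable sketch of the Arnoldi loop described above (my own minimal NumPy version, not the lecture's code): `matvec` stands for whatever produces the next Krylov direction, e.g. a preconditioned sparse solve, and the matrix H of orthogonalization coefficients is the reduced matrix U^T E U, which is why the reduced matrix comes for free.

```python
import numpy as np

def arnoldi(matvec, r, q):
    """Orthonormal basis U of K_q(E, r) via modified Gram-Schmidt.

    matvec(v) should return E @ v (for MOR, typically a solve with A).
    Also returns H = U^T E U, the reduced matrix, as a free by-product.
    """
    n = len(r)
    U = np.zeros((n, q))
    H = np.zeros((q, q))
    U[:, 0] = r / np.linalg.norm(r)        # normalize first vector: O(n)
    for i in range(q):
        w = matvec(U[:, i])                # generate new Krylov direction
        for j in range(i + 1):             # orthogonalize against u_0..u_i
            H[j, i] = U[:, j] @ w
            w = w - H[j, i] * U[:, j]
        if i + 1 < q:
            H[i + 1, i] = np.linalg.norm(w)
            U[:, i + 1] = w / H[i + 1, i]  # normalize new vector: O(n)
    return U, H

# Tiny made-up usage example with a dense stand-in operator
rng = np.random.default_rng(1)
n, q = 30, 4
E = rng.standard_normal((n, n))
b = rng.standard_normal(n)
U, H = arnoldi(lambda v: E @ v, b, q)
```

Orthonormalizing each vector as soon as it is generated is what avoids the "vectors line up with the dominant eigenspace" problem noted above.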
- Use Krylov subspace vectors (moment matching)
  - general Krylov subspace methods
  - case 1: Arnoldi
  - case 2: PVL
  - case 3: multipoint moment matching
  - moment matching preserving passivity: PRIMA
- Use singular vectors of the system Gramians product (truncated balanced realizations)

**Special case #2: expansion at s = 0, biorthogonal V^T U = I**
- If U and V are biorthogonal (V^T U = I) and span the input- and output-side Krylov subspaces, then the first 2q moments of the reduced system match

**Proof of special case #2: expansion at s = 0, biorthogonal V^T U = U^T V = I_q**
- Apply k times the lemma in the next slide

**Lemma**
- The proof repeatedly substitutes the biorthonormality condition V^T U = I_q

**PVL: Padé Via Lanczos [P. Feldmann, R. W. Freund, TCAD '95]**
- PVL is an implementation of the biorthogonal case #2: use the Lanczos process to biorthonormalize the columns of U and V
- This gives very good numerical stability

**Example: Simulation of the voltage gain of a filter with PVL (Padé Via Lanczos)**

**Case #3: Intuitive view of subspace choice for general expansion points**
- Instead of expanding around only s = 0, we can expand around other points
- For each expansion point the problem can then be put again in the canonical form

**Case #3: Intuitive view of Krylov subspace choice for general expansion points (cont.)**
- Hence, choosing the union of the Krylov subspaces built at each expansion point s_j (for example s_1 = 0, s_2, s_3) matches the first k_j moments of the transfer function around each expansion point s_j

**Sufficient conditions for passivity**
- Sufficient condition: A is negative semidefinite
- Note that these are NOT necessary conditions (a common misconception)

**Example: finite-difference system from the Poisson equation (heat problem)**
- We already know the finite-difference matrix is positive semidefinite; hence A, and E = A^(-1), are negative semidefinite

**Sufficient conditions for passivity**
- Sufficient condition: E is negative semidefinite
- Note that these are NOT necessary conditions (a common misconception)

**Congruence transformations preserve negative (or positive) semidefiniteness**
- Def.: a congruence transformation maps E to U^T E U, with the same matrix U on both sides
- Note: case #1 in the projection framework (V = U) produces congruence transformations
- Lemma: a congruence transformation preserves the negative (or positive) semidefiniteness of the matrix
- Proof: just rename y = Ux; then x^T (U^T E U) x = y^T E y

**Congruence transformation preserves negative semidefiniteness of E (hence passivity and stability)**
- The (q x N) matrix U^T, the (N x N) matrix E, and the (N x q) matrix U combine into a (q x q) reduced matrix
- If we use V = U we lose half of the degrees of freedom, i.e. we match only q moments instead of 2q
- But if the original matrix E is negative semidefinite, so is the reduced matrix, hence the system is passive and stable

**Sufficient conditions for passivity**
- Sufficient conditions: E is positive semidefinite and A is negative semidefinite
- Note that these are NOT necessary conditions (a common misconception)

**Example: state-space model from MNA of R, L, C circuits**
- Lemma: when using MNA, A is negative semidefinite if and only if a corresponding condition on the circuit element matrices holds
- For immittance systems in MNA form, A is negative semidefinite and E is positive semidefinite

**PRIMA (for preserving passivity) (Odabasioglu, Celik, Pileggi, TCAD '98)**
- A different implementation of case #1 (V = U, U^T U = I) within the Arnoldi Krylov projection framework
- Uses Arnoldi: numerically very stable

**PRIMA preserves passivity**
- The main difference between case #1 and PRIMA: case #1 applies the projection framework to the canonical-form matrices, while PRIMA applies the projection framework directly to the original system matrices
- PRIMA preserves passivity because:
  - it uses Arnoldi, so that U = V and the projection becomes a congruence transformation
  - E and -A produced by electromagnetic analysis are typically positive semidefinite
  - the input matrix must be equal to the output matrix

**Algebraic proof of moment matching for PRIMA: expansion at s = 0, V = U, orthonormal U^T U = I**
- Used lemma: if U is orthonormal (U^T U = I) and b is a vector in the column span of U, then U U^T b = b

**Conclusions**
- Reduction via eigenmodes: expensive and inefficient
- Reduction via rational function fitting (point matching): inaccurate in between points, numerically ill-conditioned
- Reduction via quasi-convex optimization: quite efficient and accurate
- Reduction via moment matching with Padé approximations: better behavior, but covers a small frequency band and is numerically very ill-conditioned
- Reduction via moment matching with the Krylov subspace projection framework:
  - allows multipoint-expansion moment matching (wider frequency band)
  - numerically very robust and computationally very efficient
  - use PVL: more efficient when the model is used in the frequency domain
  - use PRIMA: preserves passivity when the model is for a time-domain simulator

**Case study: passive reduced models from an electromagnetic field solver**
- (figure: a long coplanar transmission line over a dielectric layer, shorted at the far side)
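The congruence-transformation lemma from the passivity discussion can be checked numerically; this is my own toy illustration with random matrices and invented sizes, not the lecture's example:

```python
import numpy as np

# Toy check of the lemma: for any U, x^T (U^T E U) x = (U x)^T E (U x),
# so a congruence transformation U^T E U inherits the sign-definiteness of E.
rng = np.random.default_rng(3)
n, q = 25, 4
G = rng.standard_normal((n, n))
E = -(G @ G.T)                                     # negative semidefinite by construction
U, _ = np.linalg.qr(rng.standard_normal((n, q)))   # any full-rank n x q matrix would do

E_r = U.T @ E @ U                                  # case #1 projection (V = U): a congruence
eigs = np.linalg.eigvalsh(E_r)                     # all eigenvalues of E_r are <= 0
```

This is why PRIMA's one-sided (V = U) projection keeps the reduced model passive and stable even though it matches only q rather than 2q moments.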