Introduction to Model Order Reduction II.2 The Projection Framework Methods



### Introduction to Model Order Reduction II.2 The Projection Framework Methods

Luca Daniel

Massachusetts Institute of Technology

with contributions from:

Alessandra Nardi, Joel Phillips, Jacob White

Projection Framework: Non-invertible Change of Coordinates

The full state is approximated in a low-dimensional subspace:

x ≈ U_q x̂,   x in R^N (original state),   x̂ in R^q (reduced state)

Note: q << N

Projection Framework
• Original system:  E dx/dt = A x + b u,  y = c^T x
• Substitute x ≈ U_q x̂
• Note: now there are few variables (q << N) in the state, but still thousands of equations (N)
Projection Framework (cont.)

Reduction of the number of equations: test by multiplying by V_q^T

• If V_q and U_q are chosen biorthogonal (V_q^T U_q = I_q), the reduced system is square: q equations in q unknowns
Projection Framework

Equation testing (projection): multiply the equations by V_q^T.

Non-invertible change of coordinates (projection): x ≈ U_q x̂.
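Written out in one line (a sketch in the slides' notation, with V_q^T as the test matrix):

```latex
E\,\frac{dx}{dt} = A\,x + b\,u,\qquad x \approx U_q \hat{x}
\;\;\Longrightarrow\;\;
\underbrace{V_q^T E\,U_q}_{q\times q}\,\frac{d\hat{x}}{dt}
  = \underbrace{V_q^T A\,U_q}_{q\times q}\,\hat{x} + V_q^T b\,u,
\qquad \hat{y} = c^T U_q \hat{x}.
```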

Approaches for picking V and U
• Use eigenvectors of the system matrix (modal analysis)
• Use frequency-domain data
• Compute the system response x(s_1), ..., x(s_k) at k frequency points
• Use the SVD to pick q < k important vectors
• Use time-series data
• Compute the snapshots x(t_1), ..., x(t_k) at k time points
• Use the SVD to pick q < k important vectors

Point Matching

II.2.b POD (Proper Orthogonal Decomposition)

or SVD (Singular Value Decomposition)

or KLD (Karhunen-Loève Decomposition)

or PCA (Principal Component Analysis)

Approaches for picking V and U
• Use Eigenvectors of the system matrix
• POD or SVD or KLD or PCA.
• Use Krylov Subspace Vectors (Moment Matching)
• Use Singular Vectors of the System Gramians Product (Truncated Balanced Realizations)
A canonical form for model order reduction

Assuming A is non-singular, we can cast the dynamical linear system into a canonical form for moment-matching model order reduction.

Note: this step is not necessary; it just keeps the notation simple for educational purposes.
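For reference, the canonical form can be written out as follows (a reconstruction consistent with the slides' notation, assuming A is invertible):

```latex
E\,\dot{x} = A\,x + b\,u
\;\;\xrightarrow{\;\times A^{-1}\;}\;\;
A^{-1}E\,\dot{x} = x + A^{-1}b\,u,
\qquad
H(s) = c^T (sE - A)^{-1} b
     = -\sum_{k \ge 0} s^k \underbrace{\,c^T (A^{-1}E)^k A^{-1} b\,}_{\text{moment } m_k}.
```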

Intuitive view of Krylov subspace choice for the change-of-base projection matrix

Taylor series expansion of the transfer function around s = 0 produces the moments c^T (A^-1 E)^k A^-1 b.

• Change base and use only the first few vectors of the Taylor series expansion: equivalent to matching the first derivatives around the expansion point, with

U = [ A^-1 b,  (A^-1 E) A^-1 b,  (A^-1 E)^2 A^-1 b,  ... ]

Aside on Krylov Subspaces - Definition

The order-k Krylov subspace generated by matrix A and vector b is defined as

K_k(A, b) = span{ b, A b, A^2 b, ..., A^(k-1) b }

Moment matching around non-zero frequencies
• Instead of expanding around only s = 0, we can expand around other points s_j
• For each expansion point the problem can then be put again in the canonical form
Projection Framework: Moment-Matching Theorem (E. Grimme, 1997)

If  span{U_q} ⊇ K_q(A^-1 E, A^-1 b)  and  span{V_q} ⊇ K_q(A^-T E^T, A^-T c)

then a total of 2q moments of the transfer function will match.
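The moment-matching property can be checked numerically. Below is a minimal sketch, not from the slides: it builds an illustrative random system E x' = Ax + bu, a Krylov basis, and projects one-sidedly with V = U (case #1, so only the first q moments are guaranteed to match).

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 50, 4

# Hypothetical test system E x' = A x + b u, y = c^T x (illustrative values).
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
E = np.eye(n) + 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Krylov basis: span{A^-1 b, (A^-1 E) A^-1 b, ..., (A^-1 E)^(q-1) A^-1 b}.
F = np.linalg.solve(A, E)          # A^-1 E
v = np.linalg.solve(A, b)          # A^-1 b
cols = [v]
for _ in range(q - 1):
    cols.append(F @ cols[-1])
U, _ = np.linalg.qr(np.column_stack(cols))

# One-sided projection V = U (case #1): the first q moments match.
Ar, Er = U.T @ A @ U, U.T @ E @ U
br, cr = U.T @ b, U.T @ c

def moments(A, E, b, c, num):
    """m_k = c^T (A^-1 E)^k A^-1 b: the moments about s = 0 (up to sign)."""
    F, v = np.linalg.solve(A, E), np.linalg.solve(A, b)
    out = []
    for _ in range(num):
        out.append(c @ v)
        v = F @ v
    return np.array(out)

m_full = moments(A, E, b, c, q)
m_red = moments(Ar, Er, br, cr, q)
print(np.allclose(m_full, m_red))   # the first q moments agree
```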

• Multiple expansion points give a larger frequency band
• Moment (derivative) matching gives more accurate behavior in between expansion points
• Moment matching at a single DC point is numerically very ill-conditioned!!!
• Krylov subspace projection framework: multipoint moment matching AND numerically very stable!!!
Approaches for picking V and U
• Use Eigenvectors of the system matrix
• POD or SVD or KLD or PCA.
• Use Krylov Subspace Vectors (Moment Matching)
• general Krylov Subspace methods
• case 1: Arnoldi
• case 2: PVL
• case 3: multipoint moment matching
• moment matching preserving passivity: PRIMA
• Use Singular Vectors of the System Gramians Product (Truncated Balanced Realizations)

Special case #1: expansion at s = 0, V = U, orthonormal U^T U = I

If U and V = U are such that span{U} = K_q(A^-1 E, A^-1 b),

then the first q moments (derivatives) of the reduced system match.

Algebraic proof of case #1: expansion at s=0, V=U, orthonormal U^T U = I

Write each reduced moment with the substitutions Â = U^T A U, Ê = U^T E U, b̂ = U^T b, ĉ = U^T c, and apply k times the lemma in the next slide: every factor U U^T can be dropped, because it acts on a vector that already lies in the Krylov subspace spanned by U.

Lemma: if v lies in span{U}, then U U^T v = v.

Note: in general U U^T ≠ I_N, BUT U^T U = I_q since U is orthonormal.

Need for Orthonormalization of U

The vectors {b, Eb, ..., E^(k-1) b} cannot be computed directly: they will quickly line up with the dominant eigenspace!

Need for Orthonormalization of U (cont.)
• In the "change of base matrix" U transforming to the new reduced state space, we can use ANY columns that span the reduced state space
• In particular we can ORTHONORMALIZE the Krylov subspace vectors

Orthonormalization of U: The Arnoldi Algorithm (computational complexity on the right)

Normalize first vector: u_1 = b / ||b||                          O(n)
For i = 1 to q
    Generate new Krylov subspace vector: w = E u_i               sparse: O(n), dense: O(n^2)
    For j = 1 to i
        Orthogonalize new vector: w = w - (u_j^T w) u_j          O(q^2 n) over the whole loop
    Normalize new vector: u_(i+1) = w / ||w||                    O(n)
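The Arnoldi loop can be written out in a few lines. This is a generic numpy sketch, not the slides' code; `matvec` stands for whatever operator generates the Krylov vectors (e.g. v ↦ A⁻¹Ev), and the random test matrix is illustrative only.

```python
import numpy as np

def arnoldi(matvec, b, q):
    """Orthonormal basis U for K_q = span{b, Fb, ..., F^(q-1)b},
    where matvec(v) applies the operator F (assumes no breakdown).
    Also returns the (q+1) x q Hessenberg matrix H of orthogonalization
    coefficients, which satisfies U^T F U = H[:q, :]."""
    n = b.shape[0]
    U = np.zeros((n, q))
    H = np.zeros((q + 1, q))
    U[:, 0] = b / np.linalg.norm(b)       # normalize first vector: O(n)
    for i in range(q):
        w = matvec(U[:, i])               # new Krylov vector: O(n) sparse, O(n^2) dense
        for j in range(i + 1):            # orthogonalize against previous vectors
            H[j, i] = U[:, j] @ w
            w = w - H[j, i] * U[:, j]
        H[i + 1, i] = np.linalg.norm(w)   # normalize new vector: O(n)
        if i + 1 < q:
            U[:, i + 1] = w / H[i + 1, i]
    return U, H

# Sanity check on a random operator (illustrative only).
rng = np.random.default_rng(1)
F = rng.standard_normal((30, 30))
b = rng.standard_normal(30)
U, H = arnoldi(lambda v: F @ v, b, 5)
print(np.allclose(U.T @ U, np.eye(5)))      # columns are orthonormal
print(np.allclose(U.T @ F @ U, H[:5, :]))   # reduced matrix comes for free
```

The second check previews the next slides: the Hessenberg matrix built during orthogonalization IS the reduced system matrix, so no extra projection is needed.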

Generating vectors for the Krylov subspace
• Most of the computation cost is spent in calculating each new vector A^-1 E u_i: set up and solve a linear system, e.g. using GCR
• If we have a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n)
• The total complexity for calculating the projection matrix U_q is O(qn)

Orthonormalization of the i-th column of U_q, and hence of all the columns of U_q, is carried out by the Arnoldi iteration itself. The orthogonalization coefficients form an upper Hessenberg matrix H_q, so we don't need to compute the reduced matrix separately. We have it already:

U_q^T (A^-1 E) U_q = H_q

Approaches for picking V and U
• Use Eigenvectors of the system matrix
• POD or SVD or KLD or PCA.
• Use Krylov Subspace Vectors (Moment Matching)
• general Krylov Subspace methods
• case 1: Arnoldi
• case 2: PVL
• case 3: multipoint moment matching
• moment matching preserving passivity: PRIMA
• Use Singular Vectors of the System Gramians Product (Truncated Balanced Realizations)
Special case #2: expansion at s=0, biorthogonal V^T U = I

If U and V are such that span{U} = K_q(A^-1 E, A^-1 b), span{V} = K_q(A^-T E^T, A^-T c), and V^T U = I_q,

then the first 2q moments of the reduced system match.

Proof of special case #2: expansion at s=0, biorthogonal V^T U = U^T V = I_q (cont.)

Apply k times the lemma in the next slide. Substituting the reduced matrices and using biorthonormality (V^T U = U^T V = I_q), the projector factors can be dropped whenever they act on vectors that already lie in the corresponding Krylov subspaces: the right half of each moment collapses onto the subspace spanned by U, the left half onto the one spanned by V, so each of the first 2q reduced moments equals the full-order moment.

• PVL (Padé Via Lanczos) is an implementation of the biorthogonal case #2:

use the Lanczos process to biorthonormalize the columns of U and V; this gives very good numerical stability.

Approaches for picking V and U
• Use Eigenvectors of the system matrix
• POD or SVD or KLD or PCA.
• Use Krylov Subspace Vectors (Moment Matching)
• general Krylov Subspace methods
• case 1: Arnoldi
• case 2: PVL
• case 3: multipoint moment matching
• moment matching preserving passivity: PRIMA
• Use Singular Vectors of the System Gramians Product (Truncated Balanced Realizations)
Case #3: Intuitive view of subspace choice for general expansion points
• Instead of expanding around only s = 0, we can expand around other points s_j
• For each expansion point the problem can then be put again in the canonical form
Case #3: Intuitive view of Krylov subspace choice for general expansion points (cont.)

Hence choosing the union of the Krylov subspaces built at the expansion points s_1 = 0, s_2, s_3, ...

matches the first k_j moments of the transfer function around each expansion point s_j.

Generating vectors for the Krylov subspace
• Most of the computation cost is spent in calculating each new vector: set up and solve a linear system, e.g. using GCR
• If we have a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n)
• The total complexity for calculating the projection matrix U_q is O(qn)
Approaches for picking V and U
• Use Eigenvectors of the system matrix
• POD or SVD or KLD or PCA.
• Use Krylov Subspace Vectors (Moment Matching)
• general Krylov Subspace methods
• case 1: Arnoldi
• case 2: PVL
• case 3: multipoint moment matching
• moment matching preserving passivity: PRIMA
• Use Singular Vectors of the System Gramians Product (Truncated Balanced Realizations)
Sufficient conditions for passivity
• Sufficient conditions for passivity: A is negative semidefinite
• Note that these are NOT necessary conditions (a common misconception)

Example: Finite-Difference System from the Poisson Equation (heat problem)

[Figure: 1-D heat-conduction example, with "Heat In" applied at one end.]

We already know the Finite-Difference matrix is positive semidefinite; hence A, or E = A^-1, is negative semidefinite.

Sufficient conditions for passivity
• Sufficient conditions for passivity: E is negative semidefinite
• Note that these are NOT necessary conditions (a common misconception)
Congruence Transformations Preserve Negative (or Positive) Semidefiniteness
• Def. congruence transformation:  Â = U^T A U,  with the same matrix U (n x q) on both sides, A (n x n), Â (q x q)
• Note: case #1 in the projection framework (V = U) produces congruence transformations
• Lemma: a congruence transformation preserves the negative (or positive) semidefiniteness of the matrix
• Proof: just rename y = U x; then x^T (U^T A U) x = y^T A y <= 0 for every x
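The lemma is easy to sanity-check numerically. A minimal sketch with illustrative matrices (not from the slides): E is built negative semidefinite by construction, and its congruence image inherits the property for any projection matrix U.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 40, 6

# An illustrative negative semidefinite matrix: E = -M M^T gives x^T E x <= 0.
M = rng.standard_normal((n, n))
E = -(M @ M.T)

U = rng.standard_normal((n, q))   # any n x q matrix works in a congruence
E_hat = U.T @ E @ U               # congruence transformation: q x q

# x^T E_hat x = (U x)^T E (U x) <= 0, so E_hat is negative semidefinite too.
eigs = np.linalg.eigvalsh(E_hat)
print(bool(np.all(eigs <= 1e-9)))
```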

Congruence Transformation Preserves Negative Definiteness of E (hence passivity and stability)

If we use V = U:
• Then we lose half of the degrees of freedom, i.e. we match only q moments instead of 2q
• But if the original matrix E is negative semidefinite, so is the reduced one; hence the reduced system is passive and stable
Sufficient conditions for passivity
• Sufficient conditions for passivity: E is positive semidefinite, A is negative semidefinite
• Note that these are NOT necessary conditions (a common misconception)

[Figure: two-port circuit schematic with + and - terminal labels.]

Example: State-Space Model from MNA of R, L, C circuits

Lemma: when using MNA, A is negative semidefinite.

For immittance systems in MNA form:
• A is negative semidefinite
• E is positive semidefinite

PRIMA (for preserving passivity) (Odabasioglu, Celik, Pileggi, TCAD 1998)

A different implementation of case #1 (V = U, U^T U = I): the Arnoldi Krylov projection framework.

Use Arnoldi: numerically very stable.

PRIMA preserves passivity
• The main difference between case #1 and PRIMA:
• case #1 applies the projection framework to the canonical form (A^-1 E, A^-1 b)
• PRIMA applies the projection framework directly to the original matrices E and A
• PRIMA preserves passivity because:
• it uses Arnoldi, so that U = V and the projection becomes a congruence transformation
• E and -A produced by electromagnetic analysis are typically positive semidefinite
• the input matrix must be equal to the output matrix
Algebraic proof of moment matching for PRIMA: expansion at s=0, V=U, orthonormal U^T U = I

Lemma used: if U is orthonormal (U^T U = I) and b is a vector such that b lies in span{U}, then U U^T b = b.

Conclusions
• Reduction via eigenmodes
• expensive and inefficient
• Reduction via rational function fitting (point matching)
• inaccurate in between points, numerically ill-conditioned
• Reduction via Quasi-Convex Optimization
• quite efficient and accurate
• Reduction via moment matching: Padé approximations
• better behavior, but covers a small frequency band
• numerically very ill-conditioned
• Reduction via moment matching: Krylov Subspace Projection Framework
• allows multipoint-expansion moment matching (wider frequency band)
• numerically very robust and computationally very efficient
• PVL is more efficient for models in the frequency domain
• use PRIMA to preserve passivity if the model is for a time-domain simulator

[Figure: frequency response of a long coplanar T-line over a dielectric layer, shorted on the other side; magnitude from 10^0 down to 10^-4 versus frequency from 0 to 6 x 10^8 Hz; solid: with dielectrics, dashed: without dielectrics.]

Can guarantee passivity.

Techniques for including dielectrics
• Finite Element Method
• Green's functions for dielectric bodies
• Surface formulations using the Equivalence Theorem (substitute dielectrics with equivalent surface currents and use free-space Green's functions)
• Volume formulations using polarization currents
Frequency-independent kernel approximation
• Note: in this work we used a classical frequency-independent approximation for the integration kernel.
Reducing to algebraic form
• Surface and volume discretization, for both conductors and dielectrics, plus Galerkin testing gives the branch equations (one set for the conductors, one for the dielectrics).
• The resulting matrix is positive definite when using Galerkin testing.
• One factor is diagonal with positive coefficients; the other is block diagonal with positive semidefinite blocks, hence positive semidefinite.
• The mesh-analysis step is a congruence transformation, which preserves positive (semi)definiteness, so the final system matrices inherit it.

[Figure: frequency response with dielectrics; reduced model (order 16, solid) versus full system (order 700, circles); magnitude from 10^0 down to 10^-4 versus frequency from 0 to 6 x 10^8 Hz.]

Example 2: frequency response of the line with opposite strips

[Figure: frequency response with dielectrics; reduced model (order 16, solid) versus full system (order 700, circles); magnitude from 10^0 down to 10^-4 versus frequency from 0 to 6 x 10^8 Hz.]

Example 2: Current distributions

[Figure: current distributions along the line. Note: NOT TO SCALE; filament widths are reduced for visualization purposes.]

Frequency response for the reduced model of the MCM bus

[Figure: reduced model with dielectrics (order 12, solid) versus full system (order 600, circles), and response without dielectrics (dashed); magnitude from 10^0 down to 10^-4 versus frequency from 0 to 6 x 10^8 Hz.]

Conclusions: Electromagnetic Example
• Volume formulation with full-mesh analysis (both conductors and dielectrics) produces well-conditioned and positive semidefinite matrices
• Hence guaranteed-passive models are generated when using congruence transformations
Approaches for picking V and U
• Use Eigenvectors of the system matrix
• POD or SVD or KLD or PCA.
• Use Krylov Subspace Vectors (Moment Matching)
• Use Singular Vectors of the System Gramians Product (Truncated Balanced Realizations)

Observability Gramian

Energy of the output y(t) starting from state x with no input:  E_y = x^T W_o x

Observability Gramian:  W_o = integral from 0 to infinity of e^(A^T t) c c^T e^(A t) dt

Note: it is also the solution of the Lyapunov equation  A^T W_o + W_o A = -c c^T

Note: if x = x_i, the i-th (unit-norm) eigenvector of W_o, then E_y = λ_i.

Hence: eigenvectors of W_o corresponding to small eigenvalues do NOT produce much energy at the output (i.e. they are not very observable).

Idea: let's get rid of them!

Controllability Gramian

Minimum amount of input energy required to drive the system to a specific state x:  E_u = x^T W_c^-1 x

Controllability Gramian:  W_c = integral from 0 to infinity of e^(A t) b b^T e^(A^T t) dt

It is also the solution of the Lyapunov equation  A W_c + W_c A^T = -b b^T

Note: if x = x_i, the i-th (unit-norm) eigenvector of W_c, then E_u = 1/λ_i.

Hence: eigenvectors of W_c corresponding to small eigenvalues require a lot of input energy in order to be reached (i.e. they are not very controllable).

Idea: let's get rid of them!
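Both Gramians can be computed with an off-the-shelf Lyapunov solver. A sketch with an arbitrary stable test system (all names and values are illustrative, not from the slides):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n = 20

# Illustrative stable system x' = A x + b u, y = c^T x (names assumed).
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift for stability
b = rng.standard_normal((n, 1))
c = rng.standard_normal((1, n))

# Controllability Gramian:  A Wc + Wc A^T + b b^T = 0
Wc = solve_continuous_lyapunov(A, -b @ b.T)
# Observability Gramian:    A^T Wo + Wo A + c^T c = 0
Wo = solve_continuous_lyapunov(A.T, -c.T @ c)

# For a stable system both Gramians are symmetric positive semidefinite,
# so every eigenvalue should come out (numerically) nonnegative.
print(bool(np.all(np.linalg.eigvalsh((Wc + Wc.T) / 2) >= -1e-9)))
print(bool(np.all(np.linalg.eigvalsh((Wo + Wo.T) / 2) >= -1e-9)))
```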

Naïve Controllability/Observability MOR
• Suppose I could compute a basis for the strongly observable and/or strongly controllable spaces. Projection-based MOR can give a reduced model that deletes weakly observable and/or weakly controllable modes.
• Problem:
• What if the same mode is strongly controllable, but weakly observable?
• Are the eigenvalues of the respective Gramians even unique?

Changing coordinate system

• Consider an invertible change of coordinates.
• We know that the input/output relationship will be unchanged.
• But what about the Gramians and their eigenvalues?
• The Gramians and their eigenvalues change! Hence the relative degrees of observability and controllability are properties of the coordinate system.
• A bad choice of coordinates will lead to bad reduced models if we look at controllability and observability separately.
• What coordinate system should we use then?

Balancing

Fortunately the eigenvalues of the product W_c W_o (whose square roots are the Hankel singular values) do not change when changing coordinates:

under an invertible change of coordinates the product transforms by a similarity, so the eigenvectors change but not the eigenvalues.

And since W_c and W_o are symmetric, a change-of-coordinates matrix U can be found that diagonalizes both:

in balanced coordinates the Gramians are equal and diagonal.

Selection of vectors for the columns of the reduced-order projection matrix

In balanced coordinates it is easy to select the best vectors for the reduced model: we want the subspace of vectors that are at the same time most controllable and most observable.

Simply pick the eigenvectors corresponding to the largest entries on the diagonal (the Hankel singular values), in other words the ones corresponding to the largest eigenvalues of the product of the controllability and observability Gramians.
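The selection step can be sketched with the standard square-root implementation of balanced truncation (this particular algorithmic variant and all matrix values are assumptions for illustration; the slides do not specify an implementation):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(4)
n, q = 20, 4

# Illustrative stable SISO system (names assumed): x' = A x + b u, y = c^T x.
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)
b = rng.standard_normal((n, 1))
c = rng.standard_normal((1, n))

# Gramians from the two Lyapunov equations.
Wc = solve_continuous_lyapunov(A, -b @ b.T)       # A Wc + Wc A^T = -b b^T
Wo = solve_continuous_lyapunov(A.T, -c.T @ c)     # A^T Wo + Wo A = -c^T c

# Square-root method: the singular values of Lo^T Lc are the Hankel singular
# values, i.e. the square roots of the eigenvalues of Wc Wo.
Lc = cholesky((Wc + Wc.T) / 2 + 1e-10 * np.eye(n), lower=True)
Lo = cholesky((Wo + Wo.T) / 2 + 1e-10 * np.eye(n), lower=True)
Z, s, Yt = svd(Lo.T @ Lc)

# Keep the q largest Hankel singular values: biorthogonal projection W^T V = I.
Si = np.diag(1.0 / np.sqrt(s[:q]))
V = Lc @ Yt[:q].T @ Si        # right projection basis
W = Lo @ Z[:, :q] @ Si        # left (test) basis
Ar, br, cr = W.T @ A @ V, W.T @ b, c @ V

print(np.allclose(W.T @ V, np.eye(q), atol=1e-6))   # biorthonormal pair
print(bool(np.all(np.diff(s) <= 1e-12)))            # HSVs come out sorted
```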

Truncated Balanced Realization: Summary

• The good news:
• we even have bounds for the error
• can do even a bit better with optimal Hankel-norm reduction
• The bad news: it is expensive:
• need to compute the Gramians (solve Lyapunov equations)
• need to compute the eigenvalues of the product: O(N^3)
• The bottom line:
• If the size of your system allows O(N^3) computation, Truncated Balanced Realization is a much better choice than any other reduction method.
• But if you cannot afford O(N^3) computation (e.g. a dense matrix with N > 5000), then PRIMA, PVL, or Quasi-Convex Optimization are better choices.
Approaches for picking V and U
• Use Eigenvectors of the system matrix
• POD or SVD or KLD or PCA.
• Use Krylov Subspace Vectors (Moment Matching)
• Use Singular Vectors of the System Gramians Product
• Truncated Balanced Realizations (TBR)
• Guaranteed Passive TBR
TBR: Passivity Preserving?
• TBR does not generally preserve passivity
• not guaranteed PR-preserving
• not guaranteed BR-preserving
• A special case: "symmetrizable" models
• suppose the system is transformable to symmetric and internally PR
• TBR will generate PR models! (via congruence!)
• a stronger property than for PRIMA: TBR is coordinate-invariant
Positive-Real Lemma
• Lur'e equations:  A^T P + P A = -L L^T,  P B - C^T = -L W,  W^T W = D + D^T
• The system is positive-real if and only if a symmetric positive semidefinite solution P exists
• A dual set of equations can be written with the roles of B and C exchanged
PR-Preserving TBR
• Lur'e equations for the "Gramians": Lyapunov equations plus constraints
• Insight from the PR lemma: the Lur'e solutions can be used in a TBR procedure
• "Balance" the Lur'e equations, then truncate
• By a similar partitioning argument, the truncated (reduced) system will be PR/BR (passive) iff the original is!
Physical Interpretation
• Consider a Y-parameter model
• inputs: voltages; outputs: currents
• dissipated energy is the relevant quantity
• Lur'e equation for the PR-"controllability" Gramian
• singular values represent gains from dissipated energy to state
• minimum energy dissipation to reach a given state
• Lur'e equation for the PR-"observability" Gramian
• singular values represent gains from state to output
• energy dissipated, given an initial state
Computational Procedure
• Put the system into standard form
• if the descriptor matrix is singular, this requires an eigendecomposition
• Solve the PR/BR Lur'e equations
• solve a generalized eigenproblem of 2x size
• special treatment is needed in the singular case
• Balance and truncate as in standard TBR
Alternate Hybrid Procedure
• Perform standard TBR
• Use Positive-Real Lemma to check passivity of models generated
• If model is not acceptable, proceed to PR-TBR
• Why?
• Usually costs less
• May get better models
Example: RLC Model

[Figure: frequency response; the plain TBR model is not positive-real.]

Example: Integrated Spiral Inductor

[Figure: comparison of an order-60 PRIMA model with an order-5 PR-TBR model.]

Two Complementary Approaches

Moment-matching approaches:
• accurate over a narrow band
• match function values and derivatives
• cheap: O(qn)
• use as a FIRST-STAGE reduction

Truncated Balanced Realization and Hankel reduction:
• optimal (best accuracy for a given size q) with an a priori error bound
• expensive: O(n^3)
• use as a SECOND-STAGE reduction
Combined Krylov-TBR algorithm

Initial model: (A, B, C), size n

Krylov reduction (W_i, V_i):
A_i = W_i^T A V_i,  B_i = W_i^T B,  C_i = C V_i

Intermediate model: (A_i, B_i, C_i), size n_i

TBR reduction (W_t, V_t):
A_r = W_t^T A_i V_t,  B_r = W_t^T B_i,  C_r = C_i V_t

Reduced model: (A_r, B_r, C_r), size q

Conclusions
• Moment-matching projection methods (e.g. PVL, PRIMA, Arnoldi)
• are suitable for application to VERY large systems: O(qn)
• but do not generate optimal models
• PR/BR-TBR
• independent of system structure
• guarantees passive models
• but is computationally O(n^3), usable only on model sizes < 3000
• The combination of projection methods and the new TBR technique provides near-optimal compression and guaranteed-passive models, in reasonable time
• Quasi-Convex Optimization reduction is also a good alternative, especially when building models from measurements

Course Outline
• Numerical Simulation
• Quick intro to PDE solvers
• Quick intro to ODE solvers
• Model Order Reduction
• Linear systems
• common engineering practice
• optimal techniques in terms of model accuracy
• efficient techniques in terms of time and memory
• Non-linear systems
• Parameterized Model Order Reduction
• Linear systems
• Non-linear systems

(The slide maps these topics onto the week's schedule, Monday through Friday.)