
Introduction to Model Order Reduction II.2 The Projection Framework Methods

Luca Daniel

Massachusetts Institute of Technology

with contributions from:

Alessandra Nardi, Joel Phillips, Jacob White

Projection Framework: Non-invertible Change of Coordinates

The original state x (in R^N) is approximated in terms of the reduced state x̂ (in R^q) through the tall change-of-coordinates matrix Uq: x ≈ Uq x̂. Note: q << N.

Projection Framework
  • Original system
  • Substitute the change of coordinates x ≈ Uq x̂ into the original system
  • Note: now only a few variables (q << N) in the state, but still thousands of equations (N)
Projection Framework (cont.)

Reduction of the number of equations: test the equations by multiplying by Vq^T.

  • If Vq and Uq are chosen biorthonormal (Vq^T Uq = I), the result is a system of q equations in the q reduced unknowns
Projection Framework

Two steps: (1) a non-invertible change of coordinates, x ≈ Uq x̂, and (2) equation testing (projection) by multiplying with Vq^T. A small numerical sketch follows.
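To make these two steps concrete, here is a minimal numerical sketch. The helper name `project`, the descriptor form E dx/dt = A x + B u, y = C x, and the toy test matrices are our own assumptions for illustration, not taken from the slides.

```python
# Minimal sketch of the projection framework: given E dx/dt = A x + B u, y = C x
# and bases Uq, Vq with Vq^T Uq = I, the reduced model is
#   Er = Vq^T E Uq,  Ar = Vq^T A Uq,  Br = Vq^T B,  Cr = C Uq.
import numpy as np

def project(E, A, B, C, Uq, Vq):
    """Reduce an N-state descriptor system to q states (Petrov-Galerkin projection)."""
    return Vq.T @ E @ Uq, Vq.T @ A @ Uq, Vq.T @ B, C @ Uq

rng = np.random.default_rng(0)
N, q = 100, 5
E = np.eye(N)
A = -np.diag(rng.uniform(1.0, 10.0, N))            # toy stable system
B = rng.standard_normal((N, 1))
C = rng.standard_normal((1, N))
Uq, _ = np.linalg.qr(rng.standard_normal((N, q)))  # orthonormal basis for the reduced state
Vq = Uq                                            # Galerkin choice V = U (biorthonormal here)
Er, Ar, Br, Cr = project(E, A, B, C, Uq, Vq)
print(Ar.shape)                                    # (5, 5): q equations in q unknowns
```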

Approaches for picking V and U
  • Use Eigenvectors of the system matrix (modal analysis)
  • Use Frequency Domain Data (point matching)
    • Compute state snapshots x(s1), ..., x(sk)
    • Use the SVD to pick q < k important vectors (see the sketch after this list)
  • Use Time Series Data
    • Compute state snapshots x(t1), ..., x(tk)
    • Use the SVD to pick q < k important vectors

Section II.2.b: this data-driven approach goes by several names: POD (Proper Orthogonal Decomposition), SVD (Singular Value Decomposition), KLD (Karhunen-Loève Decomposition), or PCA (Principal Component Analysis).
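A small sketch of the SVD/POD choice, assuming a snapshot matrix whose columns are state vectors computed at k frequency or time points; the function name and toy data are illustrative only.

```python
# POD/SVD basis: keep the q left singular vectors that capture most of the snapshot energy.
import numpy as np

def pod_basis(X, q):
    """Return Uq (n x q) and the singular values of the snapshot matrix X (n x k)."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :q], s

rng = np.random.default_rng(1)
snapshots = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 50))  # rank-8 toy data
Uq, s = pod_basis(snapshots, q=5)
print(Uq.shape)        # (200, 5)
print(s[:10])          # singular values drop sharply after the 8th: few vectors suffice
```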

Approaches for picking V and U
  • Use Eigenvectors of the system matrix
  • POD or SVD or KLD or PCA
  • Use Krylov Subspace Vectors (Moment Matching)
  • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)
A canonical form for model order reduction

Assuming A is non-singular, we can cast the dynamical linear system into a canonical form for moment-matching model order reduction.

Note: this step is not necessary; it just keeps the notation simple for educational purposes.

Intuitive view of Krylov subspace choice for the change-of-basis projection matrix

Taylor series expansion: change basis and use only the first few vectors of the Taylor series expansion. This is equivalent to matching the first derivatives (moments) around the expansion point.

Aside on Krylov Subspaces - Definition

The order-k Krylov subspace generated from matrix A and vector b is defined as

  K_k(A, b) = span{b, Ab, A^2 b, ..., A^(k-1) b}
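As a tiny illustration of the definition (toy matrix and vector; the helper name is ours):

```python
# Build the raw Krylov vectors {b, Ab, ..., A^(k-1) b}.
import numpy as np

def krylov_vectors(A, b, k):
    K = np.empty((b.size, k))
    K[:, 0] = b
    for j in range(1, k):
        K[:, j] = A @ K[:, j - 1]      # next vector: one matrix-vector product
    return K

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 50)) / 10
b = rng.standard_normal(50)
K = krylov_vectors(A, b, k=6)
print(np.linalg.matrix_rank(K))        # 6 here, but the columns align more and more as k grows
```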

Moment matching around non-zero frequencies
  • Instead of expanding only around s = 0, we can expand around other points
  • For each expansion point, the problem can then be recast in the canonical form
Projection Framework: Moment Matching Theorem (E. Grimme, 1997)

If the columns of Uq span the (input) Krylov subspace built from the system matrix and the input vector, and the columns of Vq span the corresponding (output) Krylov subspace built from the transposed system matrix and the output vector, then a total of 2q moments of the transfer function will match.

Combine point and moment matching: multipoint moment matching

  • Multiple expansion points give a larger frequency band
  • Moment (derivative) matching gives more accurate behavior in between expansion points
Compare Padé Approximations and the Krylov Subspace Projection Framework

  • Padé approximations: moment matching at a single DC point; numerically very ill-conditioned!!!
  • Krylov Subspace Projection Framework: multipoint moment matching AND numerically very stable!!!
Approaches for picking V and U
  • Use Eigenvectors of the system matrix
  • POD or SVD or KLD or PCA
  • Use Krylov Subspace Vectors (Moment Matching)
    • general Krylov Subspace methods
    • case 1: Arnoldi
    • case 2: PVL
    • case 3: multipoint moment matching
    • moment matching preserving passivity: PRIMA
  • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)
Special simple case #1: expansion at s = 0, V = U, orthonormal U^T U = I

If U and V are such that V = U, U^T U = I, and the columns of U span the order-q Krylov subspace, then the first q moments (derivatives) of the reduced system match those of the original system.

Algebraic proof of case #1: expansion at s = 0, V = U, orthonormal U^T U = I

Apply k times the lemma in the next slide.

Lemma

Note: in general U U^T is NOT the identity. BUT, on substituting, U^T U = Iq because U is orthonormal.

Need for Orthonormalization of U

The vectors {b, Eb, ..., E^(k-1) b} cannot be used directly as a basis:

they will quickly line up with the dominant eigenspace!

Need for Orthonormalization of U (cont.)
  • In the "change of basis" matrix U that transforms to the new reduced state space, we can use ANY set of columns that spans the reduced state space
  • In particular, we can ORTHONORMALIZE the Krylov subspace vectors
Orthonormalization of U: The Arnoldi Algorithm

Pseudocode, with the computational complexity of each step on the right (a runnable Python version follows):

  Normalize the first vector                          O(n)
  For i = 1 to q
      Generate the new Krylov subspace vector         sparse: O(n), dense: O(n^2)
      For j = 1 to i
          Orthogonalize the new vector                O(q^2 n) over all iterations
      Normalize the new vector                        O(n)
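A minimal Python version of the loop above (modified Gram-Schmidt variant; a dense toy matrix, so generating each new vector costs the dense O(n^2) case):

```python
import numpy as np

def arnoldi(A, b, q):
    """Return U (n x q) with orthonormal columns spanning the order-q Krylov subspace of (A, b)."""
    n = b.size
    U = np.zeros((n, q))
    U[:, 0] = b / np.linalg.norm(b)          # normalize first vector: O(n)
    for i in range(1, q):
        w = A @ U[:, i - 1]                  # generate new Krylov vector: O(n) sparse, O(n^2) dense
        for j in range(i):                   # orthogonalize against previous vectors
            w -= (U[:, j] @ w) * U[:, j]
        U[:, i] = w / np.linalg.norm(w)      # normalize new vector: O(n)
    return U

rng = np.random.default_rng(3)
A = rng.standard_normal((200, 200)) / 20
b = rng.standard_normal(200)
U = arnoldi(A, b, q=8)
print(np.allclose(U.T @ U, np.eye(8)))       # True: the columns are orthonormal
```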

Generating vectors for the Krylov subspace
  • Most of the computation cost is spent in calculating the new Krylov vectors, i.e. in solving a linear system for each one
  • Set up and solve the linear system using GCR (a sketch with a stand-in solver follows)
  • With a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n)
  • The total complexity for calculating the projection matrix Uq is O(qn)
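A sketch of the linear-solve step behind each new Krylov vector. The slide mentions GCR; in this sketch SciPy's GMRES with an ILU preconditioner stands in for it (a substitution on our part, not the solver named in the deck), applied to a sparse toy matrix.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # sparse toy matrix
b = np.ones(n)

ilu = spla.spilu(A)                                   # incomplete-LU preconditioner
M = spla.LinearOperator((n, n), matvec=ilu.solve)
x, info = spla.gmres(A, b, M=M)                       # one "new Krylov vector" solve
print(info, np.linalg.norm(A @ x - b))                # info == 0 means it converged
```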
What about computing the reduced matrix?

The coefficients produced while orthonormalizing the i-th column of Uq (and likewise all columns of Uq) are exactly the entries of the reduced matrix. So we don't need to compute the reduced matrix separately: we already have it from the Arnoldi recurrence.

Approaches for picking V and U
  • Use Eigenvectors of the system matrix
  • POD or SVD or KLD or PCA
  • Use Krylov Subspace Vectors (Moment Matching)
    • general Krylov Subspace methods
    • case 1: Arnoldi
    • case 2: PVL
    • case 3: multipoint moment matching
    • moment matching preserving passivity: PRIMA
  • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)
Special case #2: expansion at s = 0, biorthonormal V^T U = I

If U and V are such that V^T U = I and their columns span the input and output Krylov subspaces respectively, then the first 2q moments of the reduced system match those of the original system.

Proof of special case #2: expansion at s = 0, biorthonormal V^T U = U^T V = Iq (cont.)

Apply k times the lemma in the next slide.

Lemma

Substitute V^T U = Iq (biorthonormality) wherever the product appears; repeating the substitution collapses the reduced moments to the original ones.

PVL: Padé Via Lanczos [P. Feldmann, R. W. Freund, TCAD 1995]
  • PVL is an implementation of the biorthonormal case #2:
  • Use the Lanczos process to biorthonormalize the columns of U and V: this gives very good numerical stability

Approaches for picking V and U
  • Use Eigenvectors of the system matrix
  • POD or SVD or KLD or PCA
  • Use Krylov Subspace Vectors (Moment Matching)
    • general Krylov Subspace methods
    • case 1: Arnoldi
    • case 2: PVL
    • case 3: multipoint moment matching
    • moment matching preserving passivity: PRIMA
  • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)
Case #3: Intuitive view of subspace choice for general expansion points
  • Instead of expanding only around s = 0, we can expand around other points
  • For each expansion point, the problem can then be recast in the canonical form
Case #3: Intuitive view of Krylov subspace choice for general expansion points (cont.)

Hence choosing the Krylov subspace as the union of the Krylov subspaces generated at each expansion point (e.g. s1 = 0, s2, s3) matches the first kj moments of the transfer function around each expansion point sj.

Generating vectors for the Krylov subspace
  • Most of the computation cost is spent in calculating the new Krylov vectors, i.e. in solving a linear system for each one
  • Set up and solve the linear system using GCR
  • With a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n)
  • The total complexity for calculating the projection matrix Uq is O(qn)
Approaches for picking V and U
  • Use Eigenvectors of the system matrix
  • POD or SVD or KLD or PCA
  • Use Krylov Subspace Vectors (Moment Matching)
    • general Krylov Subspace methods
    • case 1: Arnoldi
    • case 2: PVL
    • case 3: multipoint moment matching
    • moment matching preserving passivity: PRIMA
  • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)
Sufficient conditions for passivity
  • Sufficient condition for passivity: A is negative semidefinite (i.e. x^T A x <= 0 for all x)
  • Note that this is NOT a necessary condition (a common misconception)
Example: Finite Difference System from the Poisson Equation (heat problem)

[Figure: discretized heat problem with a "Heat In" source.]

We already know that the Finite Difference matrix is positive semidefinite. Hence A, and likewise E = A^(-1), is negative semidefinite.

Sufficient conditions for passivity
  • Sufficient condition for passivity: E is negative semidefinite
  • Note that this is NOT a necessary condition (a common misconception)
Congruence Transformations Preserve Negative (or Positive) Semidefiniteness
  • Def.: a congruence transformation maps E to U^T E U (the same matrix U on both sides)
  • Note: case #1 in the projection framework, V = U, produces congruence transformations
  • Lemma: a congruence transformation preserves the negative (or positive) semidefiniteness of the matrix (a quick numerical check follows)
  • Proof: just rename y = Ux, so that x^T (U^T E U) x = y^T E y
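A quick numerical check of the lemma, on toy matrices of our own making:

```python
# If E is negative semidefinite, then so is the congruence transform U^T E U.
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((60, 60))
E = -(M @ M.T)                             # negative semidefinite by construction
U = rng.standard_normal((60, 6))           # any n x q matrix
Er = U.T @ E @ U
print(np.max(np.linalg.eigvalsh(Er)))      # <= 0 (up to round-off): semidefiniteness preserved
```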
Congruence Transformation Preserves Negative Definiteness of E (hence passivity and stability)

If we use V = U, the reduction U^T (q x n) E (n x n) U (n x q) is a congruence transformation.

  • Then we lose half of the degrees of freedom, i.e. we match only q moments instead of 2q
  • But if the original matrix E is negative semidefinite, so is the reduced one, hence the reduced system is passive and stable
Sufficient conditions for passivity
  • Sufficient conditions for passivity: E is positive semidefinite and A is negative semidefinite
  • Note that these are NOT necessary conditions (a common misconception)
Example: State-Space Model from MNA of R, L, C circuits

[Figure: R, L, C interconnect with two ports (+/- terminal pairs).]

When using MNA for immittance systems in MNA form, A is negative semidefinite and E is positive semidefinite.

PRIMA (for preserving passivity) (Odabasioglu, Celik, Pileggi, TCAD 1998)

A different implementation of case #1: V = U, U^T U = I, within the Arnoldi Krylov projection framework.

Use Arnoldi: numerically very stable.
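A rough sketch of a PRIMA-style reduction in the spirit of the slides (single input, toy symmetric matrices, descriptor form E dx/dt = A x + B u). The function name and matrix conventions are our assumptions, not the published algorithm verbatim.

```python
import numpy as np

def prima_like_reduce(E, A, B, C, q):
    """Arnoldi on (A^-1 E, A^-1 b), then congruence with the orthonormal basis U."""
    n = B.shape[0]
    r = np.linalg.solve(A, B[:, 0])
    U = np.zeros((n, q))
    U[:, 0] = r / np.linalg.norm(r)
    for i in range(1, q):
        w = np.linalg.solve(A, E @ U[:, i - 1])
        for j in range(i):
            w -= (U[:, j] @ w) * U[:, j]
        U[:, i] = w / np.linalg.norm(w)
    return U.T @ E @ U, U.T @ A @ U, U.T @ B, C @ U   # congruence: definiteness preserved

rng = np.random.default_rng(5)
n = 80
M = rng.standard_normal((n, n))
A = -(M @ M.T + n * np.eye(n))               # symmetric negative definite (toy RC-like)
E = np.eye(n)                                # positive definite
B = rng.standard_normal((n, 1)); C = B.T     # input matrix equal to output matrix
Er, Ar, Br, Cr = prima_like_reduce(E, A, B, C, q=6)
print(np.max(np.linalg.eigvalsh(0.5 * (Ar + Ar.T))))   # negative: reduced A stays definite
```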

PRIMA preserves passivity
  • The main difference between case #1 and PRIMA:
    • case #1 applies the projection framework to the canonical-form system matrix
    • PRIMA applies the projection framework (as a congruence) directly to the original matrices E and A
  • PRIMA preserves passivity because
    • it uses Arnoldi, so that U = V and the projection becomes a congruence transformation
    • E and -A produced by electromagnetic analysis are typically positive semidefinite
    • the input matrix must be equal to the output matrix
Algebraic proof of moment matching for PRIMA: expansion at s = 0, V = U, orthonormal U^T U = I

Used lemma: if U is orthonormal (U^T U = I) and b is a vector lying in the column span of U, then U U^T b = b.

Conclusions
  • Reduction via eigenmodes
    • expensive and inefficient
  • Reduction via rational function fitting (point matching)
    • inaccurate in between points, numerically ill-conditioned
  • Reduction via Quasi-Convex Optimization
    • quite efficient and accurate
  • Reduction via moment matching: Padé approximations
    • better behavior, but covers a small frequency band
    • numerically very ill-conditioned
  • Reduction via moment matching: Krylov Subspace Projection Framework
    • allows multipoint-expansion moment matching (wider frequency band)
    • numerically very robust and computationally very efficient
    • use PVL (more efficient) if the model is for the frequency domain
    • use PRIMA to preserve passivity if the model is for a time-domain simulator
Case study: Passive Reduced Models from an Electromagnetic Field Solver

[Figure: long coplanar transmission line over a dielectric layer, shorted on the other side.]

Importance of including dielectrics: a simple transmission line example

[Figure: admittance [S], 10^-4 to 10^0 (log scale), vs. frequency [Hz], 0 to 6 x 10^8; solid line: with dielectrics; dashed line: without dielectrics.]

Techniques for including dielectrics
  • Finite Element Method
  • Green's Functions for dielectric bodies
  • Surface Formulations using the Equivalence Theorem
    • (substitute dielectrics with equivalent surface currents and use free-space Green's functions)
  • Volume Formulations using Polarization Currents (can guarantee passivity)
Frequency-independent kernel approximation
  • Note: in this work we used a classical frequency-independent approximation for the integration kernel.
Reducing to algebraic form
  • Surface and volume discretization (for both conductors and dielectrics), followed by Galerkin testing, gives the branch equations: one set for the conductors and one for the dielectrics.

The kernel matrix is positive definite when using Galerkin testing, and the remaining coefficient matrix is diagonal with positive coefficients. A congruence transformation preserves positive definiteness, so the assembled matrix is block diagonal with positive blocks, hence positive semidefinite, and so is its congruence transform.

The coefficient matrix is diagonal with positive coefficients, and a congruence transformation preserves positive definiteness. The assembled matrix is block diagonal and the blocks are all positive semidefinite, hence it is also positive semidefinite.

Example 1: frequency response of the coplanar transmission line

[Figure: admittance [S], 10^-4 to 10^0 (log scale), vs. frequency [Hz], 0 to 6 x 10^8; solid line: with dielectrics, reduced model (order 16); circles: with dielectrics, full system (order 700).]

Example 2: frequency response of the line with opposite strips

[Figure: admittance [S], 10^-4 to 10^0 (log scale), vs. frequency [Hz], 0 to 6 x 10^8; solid line: with dielectrics, reduced model (order 16); circles: with dielectrics, full system (order 700).]

Example 2: Current distributions

[Figure: current distributions. Note: NOT TO SCALE! Filament widths are reduced for visualization purposes.]

Frequency response for the reduced model of the MCM bus

[Figure: admittance [S], 10^-4 to 10^0 (log scale), vs. frequency [Hz], 0 to 6 x 10^8; solid line: with dielectrics, reduced model (order 12); circles: with dielectrics, full system (order 600); dashed line: without dielectrics.]

Conclusions: Electromagnetic Example
  • Volume formulation with full mesh analysis (of both conductors and dielectrics) produces
    • well-conditioned
    • and positive semidefinite matrices
  • Hence guaranteed-passive models are generated when using the congruence transformation
Approaches for picking V and U
  • Use Eigenvectors of the system matrix
  • POD or SVD or KLD or PCA
  • Use Krylov Subspace Vectors (Moment Matching)
  • Use Singular Vectors of the System Gramians Product (Truncated Balanced Realization)
Observability Gramian

Energy of the output y(t) starting from state x with no input:

  integral of y(t)^T y(t) dt = x^T Wo x

Observability Gramian:

  Wo = integral of e^(A^T t) C^T C e^(A t) dt

Note: it is also the solution of the Lyapunov equation A^T Wo + Wo A + C^T C = 0.

Note: if x = xi, the i-th (unit) eigenvector of Wo, then the output energy is the eigenvalue λi.

Hence eigenvectors of Wo corresponding to small eigenvalues do NOT produce much energy at the output (i.e. they are not very observable). Idea: let's get rid of them!

Controllability Gramian

Minimum amount of input energy required to drive the system to a specific state x:

  min of the integral of u(t)^T u(t) dt = x^T Wc^(-1) x   (the inverse of the Controllability Gramian appears)

Controllability Gramian:

  Wc = integral of e^(A t) B B^T e^(A^T t) dt

It is also the solution of the Lyapunov equation A Wc + Wc A^T + B B^T = 0.

Note: if x = xi, the i-th (unit) eigenvector of Wc, then the required input energy is 1/λi.

Hence eigenvectors of Wc corresponding to small eigenvalues require a lot of input energy in order to be reached (i.e. they are not very controllable). Idea: let's get rid of them!
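A small sketch of both Gramians computed through SciPy's Lyapunov solver (a toy stable system of our own; this is the dense O(n^3) computation discussed in the TBR summary below):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(6)
n = 30
M = rng.standard_normal((n, n))
A = -(M @ M.T + np.eye(n))                        # symmetric negative definite, hence stable
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Wc = solve_continuous_lyapunov(A, -B @ B.T)       # A Wc + Wc A^T + B B^T = 0
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)     # A^T Wo + Wo A + C^T C = 0

eigs = np.clip(np.linalg.eigvals(Wc @ Wo).real, 0.0, None)
print(np.sqrt(np.sort(eigs)[::-1])[:5])           # Hankel singular values, largest first
```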

Naïve Controllability/Observability MOR
  • Suppose I could compute a basis for the strongly observable and/or strongly controllable spaces. Projection-based MOR can give a reduced model that deletes weakly observable and/or weakly controllable modes.
  • Problem:
    • What if the same mode is strongly controllable, but weakly observable?
    • Are the eigenvalues of the respective Gramians even unique?
Changing coordinate system

  • Consider an invertible change of coordinates x = T z
  • We know that the input/output relationship will be unchanged.
  • But what about the Gramians and their eigenvalues?
  • The Gramians and their eigenvalues do change! Hence the relative degrees of observability and controllability are properties of the coordinate system.
  • A bad choice of coordinates will lead to bad reduced models if we look at controllability and observability separately.
  • What coordinate system should we use, then?
Balancing

Fortunately, the eigenvalues of the product Wc Wo (the squared Hankel singular values) do not change when changing coordinates: the diagonal matrix of eigenvalues of the product stays the same; only the eigenvectors change.

And since Wc and Wo are symmetric, a change-of-coordinates matrix U can be found that diagonalizes both: in balanced coordinates the Gramians are equal and diagonal.

Selection of vectors for the columns of the reduced-order projection matrix

In balanced coordinates it is easy to select the best vectors for the reduced model: we want the subspace of vectors that are at the same time most controllable and most observable. Simply pick the eigenvectors corresponding to the largest entries on the diagonal (the Hankel singular values), in other words the ones corresponding to the largest eigenvalues of the product of the controllability and observability Gramians.
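A compact sketch of the resulting reduction in the common square-root form of balanced truncation (toy data; the helper names are ours, and the Gramians are recomputed here so the snippet stands alone):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def psd_factor(W):
    """Return L with W ~= L L^T for a (numerically) positive semidefinite W."""
    lam, V = np.linalg.eigh(W)
    return V @ np.diag(np.sqrt(np.clip(lam, 0.0, None)))

def balanced_truncation(A, B, C, q):
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc, Lo = psd_factor(Wc), psd_factor(Wo)
    Uh, s, Vh = svd(Lo.T @ Lc)                 # s: Hankel singular values
    S = np.diag(s[:q] ** -0.5)
    T = Lc @ Vh[:q].T @ S                      # right projection
    W = Lo @ Uh[:, :q] @ S                     # left projection, W^T T = I_q
    return W.T @ A @ T, W.T @ B, C @ T, s

rng = np.random.default_rng(7)
n, q = 40, 5
M = rng.standard_normal((n, n))
A = -(M @ M.T / n + np.eye(n))                 # stable toy system
B = rng.standard_normal((n, 2)); C = rng.standard_normal((2, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, q)
print(hsv[:8])                                 # keep only the states with the largest values
```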

Truncated Balanced Realization Summary

  • The good news:
    • we even have bounds for the error
    • can do even a bit better with optimal Hankel-norm reduction
  • The bad news:
    • it is expensive:
      • need to compute the Gramians (solve Lyapunov equations)
      • need to compute the eigenvalues of the product: O(N^3)
  • The bottom line:
    • If the size of your system allows O(N^3) computation, Truncated Balanced Realization is a much better choice than any other reduction method.
    • But if you cannot afford O(N^3) computation (e.g. a dense matrix with N > 5000), then PRIMA, PVL, or Quasi-Convex Optimization are better choices.
Approaches for picking V and U
  • Use Eigenvectors of the system matrix
  • POD or SVD or KLD or PCA
  • Use Krylov Subspace Vectors (Moment Matching)
  • Use Singular Vectors of the System Gramians Product
    • Truncated Balanced Realization (TBR)
    • Guaranteed Passive TBR
TBR: Passivity Preserving?
  • TBR does not generally preserve passivity
    • Not guaranteed PR-preserving
    • Not guaranteed BR-preserving
  • A special case: "symmetrizable" models
    • Suppose the system is transformable to symmetric and internally PR
    • TBR will generate PR models! (via congruence!)
    • Stronger property than for PRIMA: TBR is coordinate-invariant
Positive-Real Lemma
  • Lur'e equations: a set of Lyapunov-like equations with additional constraints
  • The system is positive-real if and only if the Lur'e equations admit a positive semidefinite solution
  • A dual set of equations can be written for the dual (controllability-type) quantity
PR-Preserving TBR
  • Lur'e equations for the "Gramians": Lyapunov equations plus constraints
  • Insight: the solutions from the PR lemma can be used in a TBR procedure
    • "Balance" the Lur'e equation solutions, then truncate
  • By a similar partitioning argument, the truncated (reduced) system will be PR/BR (passive) iff the original is!
Physical Interpretation
  • Consider a Y-parameter model
    • Inputs: voltages. Outputs: currents.
    • Dissipated energy: the integral of v(t)^T i(t)
  • Lur'e equation for the PR-"Controllability" Gramian
    • Singular values represent gains from dissipated energy to state
    • Minimum energy dissipation needed to reach a given state
  • Lur'e equation for the PR-"Observability" Gramian
    • Singular values represent gains from state to output
    • Energy dissipated, given an initial state
Computational Procedure
  • Put the system into standard form
    • requires an eigendecomposition if the relevant matrix is singular
  • Solve the PR/BR Lur'e equations
    • solve a generalized eigenproblem of twice the size
    • special treatment is needed in the singular case
  • Balance and truncate as in standard TBR
Alternate Hybrid Procedure
  • Perform standard TBR
  • Use the Positive-Real Lemma to check the passivity of the generated models
  • If the model is not acceptable, proceed to PR-TBR
  • Why?
    • Usually costs less
    • May get better models
Example: RLC Model

[Figure: RLC model example; the plain TBR model is not positive real.]

Example: Integrated Spiral Inductor

[Figure: comparison of an order-60 PRIMA model and an order-5 PR-TBR model.]

Two Complementary Approaches

Moment Matching Approaches
  • Accurate over a narrow band.
  • Matching function values and derivatives.
  • Cheap: O(qn).
  • Use it as a FIRST STAGE REDUCTION.

Truncated Balanced Realization and Hankel Reduction
  • Optimal: best accuracy for a given size q, with an a priori error bound.
  • Expensive: O(n^3).
  • Use it as a SECOND STAGE REDUCTION.
Combined Krylov-TBR algorithm

Initial Model: (A, B, C), size n
  |  Krylov reduction (Wi, Vi):  Ai = Wi^T A Vi,  Bi = Wi^T B,  Ci = C Vi
  v
Intermediate Model: (Ai, Bi, Ci), size ni
  |  TBR reduction (Wt, Vt):  Ar = Wt^T Ai Vt,  Br = Wt^T Bi,  Cr = Ci Vt
  v
Reduced Model: (Ar, Br, Cr), size q
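A self-contained sketch of the two-stage flow (toy data; the stage function names are ours, a plain Arnoldi/Galerkin projection stands in for the Krylov reduction (Wi, Vi), and the TBR stage reuses the square-root form sketched earlier):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def krylov_stage(A, B, C, ni):
    """Stage 1: cheap Arnoldi/Galerkin projection, n states -> ni states."""
    n = B.shape[0]
    U = np.zeros((n, ni))
    U[:, 0] = B[:, 0] / np.linalg.norm(B[:, 0])
    for i in range(1, ni):
        w = A @ U[:, i - 1]
        for j in range(i):
            w -= (U[:, j] @ w) * U[:, j]
        U[:, i] = w / np.linalg.norm(w)
    return U.T @ A @ U, U.T @ B, C @ U

def tbr_stage(A, B, C, q):
    """Stage 2: balanced truncation of the small intermediate model (O(ni^3))."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    lc, Vc = np.linalg.eigh(Wc); lo, Vo = np.linalg.eigh(Wo)
    Lc = Vc @ np.diag(np.sqrt(np.clip(lc, 0.0, None)))
    Lo = Vo @ np.diag(np.sqrt(np.clip(lo, 0.0, None)))
    Uh, s, Vh = svd(Lo.T @ Lc)
    S = np.diag(s[:q] ** -0.5)
    T, W = Lc @ Vh[:q].T @ S, Lo @ Uh[:, :q] @ S
    return W.T @ A @ T, W.T @ B, C @ T

rng = np.random.default_rng(8)
n = 400
M = rng.standard_normal((n, n))
A = -(M @ M.T / n + np.eye(n))                   # stable toy "initial model"
B = rng.standard_normal((n, 1)); C = B.T
Ai, Bi, Ci = krylov_stage(A, B, C, ni=25)        # initial model -> intermediate model
Ar, Br, Cr = tbr_stage(Ai, Bi, Ci, q=6)          # intermediate model -> reduced model
print(Ai.shape, Ar.shape)                        # (25, 25) (6, 6)
```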

Conclusions
  • Moment Matching Projection Methods
    • e.g. PVL, PRIMA, Arnoldi
    • are suitable for application to VERY large systems: O(qn)
    • but do not generate optimal models
  • PR/BR-TBR
    • independent of system structure
    • guarantees passive models
    • but computationally O(n^3), usable only for model sizes < 3000
  • The combination of projection methods and the new TBR technique provides near-optimal compression and guaranteed-passive models in reasonable time
  • Quasi-Convex Optimization Reduction is also a good alternative, especially when building models from measurements
Course Outline

  • Numerical Simulation
    • Quick intro to PDE Solvers
    • Quick intro to ODE Solvers
  • Model Order Reduction
    • Linear systems
      • Common engineering practice
      • Optimal techniques in terms of model accuracy
      • Efficient techniques in terms of time and memory
    • Non-Linear Systems
  • Parameterized Model Order Reduction
    • Linear Systems
    • Non-Linear Systems

[Timeline labels on the slide: Monday, Yesterday, Today, Tomorrow, Friday.]