Lecture 22: Adjoint Methods
Part 1

Motivation

Motivating scenario

We want to predict tomorrow’s weather, u(t) …

We have an atmospheric model chugging away to predict temperature, pressure, etc.

This model depends on a forcing f(t), for example, sea surface temperature, which we know only imperfectly

[Figure: yesterday's model run, showing the prediction u(t) and the forcing f(t) from t_yesterday to t_today.]

But now we have new data for today:

[Figure: the same prediction u(t) with today's new data superimposed, together with the forcing f(t).]

Today's model run should include yesterday's data to help constrain the poorly known forcing.

[Figure: the old and new predictions u(t) compared against the new data, and the corresponding old and new forcings f(t), from t_yesterday to t_today.]

How do we adjust the forcing (which was imperfectly known, anyway) to better predict yesterday’s weather?

Part 2

The mathematics of continuous functions, inner products, linear operators, and their adjoints

Discrete vectors u_k and v_k correspond to continuous functions f(t) and g(t)

Discrete dot product (a scalar): c = Σ_k u_k v_k = u·v

Continuous inner product (a scalar): c = ∫ f(t) g(t) dt = (f, g)

Discrete approximation of the inner product (f, g) as a dot product:

(f, g) = ∫ f(t) g(t) dt ≈ Δt Σ_k u_k v_k = Δt u·v, with u_k = f(kΔt) and v_k = g(kΔt)
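A minimal numerical sketch of this approximation (the grid spacing and the two example functions are arbitrary choices made for illustration):

```python
import numpy as np

# Discretize [0, 1] with spacing Δt
dt = 1e-3
t = np.arange(0.0, 1.0, dt)

# Two example functions (assumed here purely for illustration)
u = np.sin(2 * np.pi * t)   # u_k = f(k Δt)
v = np.exp(-t)              # v_k = g(k Δt)

# (f, g) = ∫ f(t) g(t) dt  ≈  Δt Σ_k u_k v_k  =  Δt u·v
approx = dt * np.dot(u, v)
print(approx)   # ≈ 0.0981, the analytic value 2π(1 - 1/e) / (1 + 4π²)
```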

Discrete matrix: v_j = Σ_k M_jk u_k, or v = Mu

Continuous linear operator: f = Lg

What is a linear operator?

A linear differential operator involves derivatives and known functions, e.g.

Lg = [ p(t) d/dt q(t) d/dt ] g(t), with p(t) and q(t) known

and/or a linear integral operator, involving an integral and known functions, e.g.

Lg = ∫ p(t, t') g(t') dt', with p(t, t') known

If L1 g = f and L2 f = g, then L1 = L2^-1 and L2 = L1^-1: one linear operator is the inverse of the other.

Discrete approximations

Sample differential operator plus b.c.:

Lg = f with L = d/dt, plus b.c. g(0) = known

Mu = v, with M = Δt^-1 ×

[  1  0  0  0 …  0
  -1  1  0  0 …  0
   0 -1  1  0 …  0
   …
   …  0  0 -1  1 ]

Sample integral operator plus b.c.:

Lg = f with Lg = ∫_0^t g(t') dt', plus b.c. g(0) = known

Mu = v, with M = Δt ×

[ 1 0 0 0 … 0
  1 1 0 0 … 0
  1 1 1 0 … 0
  …
  … 1 1 1 1 ]
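A short numpy sketch of these two sample operators (grid size and spacing are arbitrary), confirming that the discrete difference and running-sum matrices are inverses of one another:

```python
import numpy as np

N, dt = 6, 0.1

# Difference operator: first row encodes the boundary condition,
# remaining rows are first differences, all scaled by 1/Δt
D = (np.eye(N) - np.diag(np.ones(N - 1), k=-1)) / dt

# Integration operator: running sum (lower-triangular ones) scaled by Δt
S = dt * np.tril(np.ones((N, N)))

# One operator is the inverse of the other: D @ S = S @ D = I
print(np.allclose(D @ S, np.eye(N)))   # True
print(np.allclose(S @ D, np.eye(N)))   # True
```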

Question concerning a dot product …

Given two matrices A and B, when is (Au)·v = u·(Bv)?

Answer: when B = A^T, since

(Au)·v = (Au)^T v = u^T A^T v = u^T (A^T v) = u·(A^T v)

Question concerning an inner product …

Given two linear operators L1 and L2, when is ( L1 f, g ) = ( f, L2 g )?

Answer: never mind, but let's give it a name:

(L1 f, g) = (f, L2 g) when L1 is the adjoint of L2

Let's denote the adjoint relationship L2 = L1* (the superscript * means "adjoint")

Transpose:
if A = B^T then B = A^T
(A^T)^T = A
(A^T)^-1 = (A^-1)^T
(A + B)^T = A^T + B^T
if A^T = A then A is symmetric

Adjoint:
if L1 = L2* then L2 = L1*
(L*)* = L
(L*)^-1 = (L^-1)*
(L1 + L2)* = L1* + L2*
if L* = L then L is self-adjoint

Calculating adjoints by integration by parts

Let L = d/dt, with b.c. functions zero at ±∞

(Lf, g) = ∫_{-∞}^{+∞} (df/dt) g dt
        = f g |_{-∞}^{+∞} - ∫_{-∞}^{+∞} f (dg/dt) dt
        = - ∫_{-∞}^{+∞} f (dg/dt) dt
        = (f, L*g)

So L* = -d/dt, with b.c. functions zero at ±∞
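A numerical sketch of this result, assuming two example functions that vanish at ±∞ and approximating the derivative and the inner products by finite differences and Riemann sums:

```python
import numpy as np

# Grid wide enough that both functions are effectively zero at the ends
t = np.linspace(-10.0, 10.0, 4001)
dt = t[1] - t[0]

# Two example functions that vanish at ±∞ (assumed for illustration)
f = np.exp(-(t - 1.0) ** 2)
g = np.exp(-(t + 0.5) ** 2) * np.sin(t)

Lf = np.gradient(f, dt)        # L f  = df/dt
Lstar_g = -np.gradient(g, dt)  # L* g = -dg/dt

# (Lf, g) and (f, L*g) agree (up to discretization error) when L* = -d/dt
print(dt * np.sum(Lf * g), dt * np.sum(f * Lstar_g))
```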

Three simple adjoints

L            L*
c(x)         c(x)
d/dt         -d/dt         (b.c.: function 0 at ±∞)
d^2/dt^2     d^2/dt^2      (b.c.: function and its first derivative 0 at ±∞)

Part 3

Functional derivatives

How to represent the idea that a perturbation in forcing, f(t), causes a perturbation in response, u(t)

Here's the differential equation:

L u(t) = f(t)     (f is the forcing)

The data d_i depend linearly on u(t) through an inner product:

d_i = (h_i, u)

Differential equation Lu = f

A perturbation in f(t) causes a perturbation in u(t):

f0(t) → f0(t) + δf(t)
u0(t) → u0(t) + δu(t)

Suppose δf(t) is localized at time t0: δf(t) = ε δ(t - t0)

Then δu(t) is a function of ε and t0: δu(t, ε, t0)

The functional (or Fréchet) derivative is:

δu(t)/δf(t0) = lim_{ε→0} [ u(t, ε, t0) - u(t, ε=0, t0) ] / ε

An impulsive perturbation in forcing, ε δ(t - t0), causes a perturbation in response, δu(t, ε, t0).

Then the general perturbation δf in forcing causes the response

δu = ∫ (δu/δf) δf dt0 = ( δu/δf, δf )

[Figure: an impulsive perturbation in forcing, ε δ(t - t0), and the response δu(t) it causes; this defines the functional derivative δu/δf. A more complicated perturbation δf causes the response δu = (δu/δf, δf).]

Then the general perturbation δf in forcing causes the response

δu = ∫ (δu/δf) δf dt0 = ( δu/δf, δf )

In a discrete world:

[ δu_1 ]        [ δu(t_1)/δf(t_1)  δu(t_1)/δf(t_2)  δu(t_1)/δf(t_3) … ]  [ δf_1 ]
[ δu_2 ]        [ δu(t_2)/δf(t_1)  δu(t_2)/δf(t_2)  δu(t_2)/δf(t_3) … ]  [ δf_2 ]
[ δu_3 ]  = Δt  [ δu(t_3)/δf(t_1)  δu(t_3)/δf(t_2)  δu(t_3)/δf(t_3) … ]  [ δf_3 ]
[   …  ]        [        …                 …                 …        ]  [   …  ]
[ δu_N ]        [ δu(t_N)/δf(t_1)  δu(t_N)/δf(t_2)  δu(t_N)/δf(t_3) … ]  [ δf_N ]

Might solve with least-squares …
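A sketch of this discrete picture in numpy, assuming a simple example operator (a discretization of L = d/dt + c) so the system can be solved directly. The matrix of functional derivatives δu(t_i)/δf(t_j) is built column by column from the limit definition, and the superposition formula δu = Δt (δu/δf) δf then reproduces the directly computed response:

```python
import numpy as np

N, dt = 200, 0.05
t = np.arange(N) * dt

# Assumed example operator: discretization of L = d/dt + c with u(0) = 0
c = 0.5
L = (np.eye(N) - np.diag(np.ones(N - 1), k=-1)) / dt + c * np.eye(N)

f0 = np.sin(t)                     # reference forcing
u0 = np.linalg.solve(L, f0)        # reference solution of L u = f

# Fréchet derivative by finite differences: perturb the forcing with an
# impulse ε δ(t - t0) at each t0 (discretely, ε/Δt in a single sample)
eps = 1e-6
dudf = np.zeros((N, N))
for k in range(N):
    df = np.zeros(N)
    df[k] = eps / dt               # discrete delta function of area ε
    dudf[:, k] = (np.linalg.solve(L, f0 + df) - u0) / eps

# Response to a general perturbation δf: δu = ∫ (δu/δf) δf dt0 ≈ Δt (δu/δf) δf
df_general = 0.1 * np.exp(-((t - 5.0) ** 2))
du_superposed = dt * dudf @ df_general
du_direct = np.linalg.solve(L, f0 + df_general) - u0

print(np.allclose(du_superposed, du_direct))  # True (the system is linear)
```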

Part 4

Calculating the data kernel

The functional derivative of data with respect to forcing

The goal: to find the data kernel g_i(t), which relates a perturbation in the data, δd_i, to a perturbation in the forcing, δf(t), through an inner product:

δd_i = ( g_i(t), δf(t) )

Note that since the data kernel satisfies δd_i = ( g_i(t), δf(t) ), it is a functional derivative: g_i(t) = δd_i / δf(t)

Step 1: assume that a function u(t) solves a linear differential equation with forcing f(t):

L u(t) = f(t)

Step 2: assume the differential equation has Green function F(t, t'), so the solution can be written

u(t) = ∫ F(t, t') f(t') dt' = ( F(t, t'), f(t') ) ≡ L^-1 f(t)

Note that L^-1 is the inverse of L, since f = Lu and u = L^-1 f.

Step 3: assume that the data d_i are related to the solution u(t) through an inner product:

d_i = ( h_i(t), u(t) )

Step 4: do some substitutions and manipulations:

d_i = ( h_i(t), u(t) )
    = ( h_i(t), L^-1 f(t) )
    = ( (L^-1)* h_i(t), f(t) )
    = ( (L*)^-1 h_i(t), f(t) )

Step 4, continued: since the problem is linear, this rule applies to perturbations of the functions as well as to the functions themselves:

d_i = ( (L*)^-1 h_i(t), f(t) )

so

δd_i = ( (L*)^-1 h_i(t), δf(t) )

Step 5: by comparing the definition of the data kernel, δd_i = ( g_i(t), δf(t) ), to the result δd_i = ( (L*)^-1 h_i(t), δf(t) ), recognize that the data kernel is

g_i(t) = (L*)^-1 h_i(t)

Step 6: since the data kernel satisfies g_i(t) = (L*)^-1 h_i(t), it must satisfy the differential equation

L* g_i(t) = h_i(t)

This is the desired result: a way of calculating the data kernel g_i(t) by solving the differential equation

L* g_i(t) = h_i(t)
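A discrete sketch of this result. With the inner product approximated as (a, b) ≈ Δt a·b, the discrete analogue of the adjoint L* is simply the transpose of the operator matrix, so the data kernel comes from solving a transposed linear system. The operator, grid, and data functional below are assumptions chosen for illustration:

```python
import numpy as np

N, dt, c = 100, 0.1, 0.3

# Assumed example operator: discretization of L = d/dt + c with u(0) given
Lmat = (np.eye(N) - np.diag(np.ones(N - 1), k=-1)) / dt + c * np.eye(N)

# Example data functional: h_i is a discrete delta at sample i, so that
# d_i = (h_i, u) ≈ Δt h_i·u = u(t_i)
i = 40
h_i = np.zeros(N)
h_i[i] = 1.0 / dt

# Data kernel: solve the discrete adjoint equation  Lmat.T @ g_i = h_i
g_i = np.linalg.solve(Lmat.T, h_i)

# Check the defining property: (g_i, δf) = (h_i, δu) for any perturbation δf
rng = np.random.default_rng(1)
df = rng.standard_normal(N)
du = np.linalg.solve(Lmat, df)
print(np.allclose(dt * (g_i @ df), dt * (h_i @ du)))  # True
```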

Part 5

An example

Note: in this example I use very simple differential equations that can be solved analytically.

In reality, you would be using much more complicated differential equations that must be solved numerically.

Example: Newtonian cooling equation

du/dt + cu = f(t)

L = d/dt + c

u(t) is temperature

f(t) is heating

c is a constant

Green's function

du/dt + cu = δ(t - t')

F(t, t') = H(t - t') exp{ -c(t - t') }, where H is the unit step function
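A numerical check (the grid, the constant c, and the heating are assumed for illustration) that convolving this Green's function with a forcing reproduces the solution of du/dt + cu = f(t):

```python
import numpy as np

c, dt, N = 0.5, 0.01, 1000
t = np.arange(N) * dt

f = np.sin(t)                                  # an assumed example heating

# Solution by convolution with the Green's function F(t, t') = H(t-t') e^{-c(t-t')}
T, Tp = np.meshgrid(t, t, indexing="ij")       # T = t, Tp = t'
F = np.where(T >= Tp, np.exp(-c * (T - Tp)), 0.0)
u_green = dt * F @ f

# Reference solution by simple forward stepping of du/dt + c u = f, u(0) = 0
u_ref = np.zeros(N)
for k in range(N - 1):
    u_ref[k + 1] = u_ref[k] + dt * (f[k] - c * u_ref[k])

print(np.max(np.abs(u_green - u_ref)))         # small (discretization error, order Δt)
```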

Adjoint differential equation

L = d/dt + c

The adjoint of d/dt is –d/dt

and the adjoint of c is c

So L* = -d/dt + c

And so du/dt + cu = f(t)

has corresponding adjoint equation

-dg_i/dt + c g_i = h_i

Green's function of the adjoint differential equation

-dg_i/dt + c g_i = δ(t - t')

has solution

G(t, t') = { 1 - H(t - t') } exp{ c(t - t') }

Interpretation

Suppose h_i = δ(t), so that the datum d_i is just u(t=0), the temperature at time 0. Then G(t, t'=0) is the data kernel g_i(t).

Now suppose that we make an impulsive perturbation of the heating at time t0: δf(t) = δ(t - t0)

Then δd_i = δu(t=0)
         = ( g_i(t), δf(t) )
         = ( G(t, t'=0), δ(t - t0) )
         = G(t0, t'=0)

Interpretation, continued

So for an impulsive perturbation of the heating at time t0,

δu(t=0) = G(t0, t'=0)

We would expect:

no effect on temperature if the heat is applied after time t=0
a large effect if it is applied just prior to t=0
a minimal effect if it is applied way before t=0

[Figure: G(t0, t'=0) versus t0, annotated with regions of small, large, and no effect.]

Example

[Figure: the reference heating f0 and temperature u0, the perturbed heating f, the observed temperature u_obs, and the perturbation δu_obs, plotted against time t.]

Forming data from u(t). Here I use an example of the data being averages of neighboring u's:

d_1 = u(t_1), so h_1 = [1, 0, 0, 0, 0, 0, … 0]^T

d_j = ½ { u(t_{j-1}) + u(t_j) } for j > 1, so h_j = ½ [0, 0, 0, … 1, 1, … 0, 0, 0]^T

[Figure: the reference data d0, the observed data d_obs, and the data perturbation δd_obs, plotted against time t.]

The problem

Reconstruct δf from δd

Setup for least squares

δd_i = ( g_i(t), δf(t) )

[ δd_1 ]   [ g_1 ]  [ δf_1 ]
[ δd_2 ]   [ g_2 ]  [ δf_2 ]
[ δd_3 ] = [ g_3 ]  [ δf_3 ]
[   …  ]   [  …  ]  [   …  ]
[ δd_N ]   [ g_N ]  [ δf_N ]

where each g_i is a row of the matrix; time varies along the columns …
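A small end-to-end numpy sketch of this example, with an assumed grid, cooling constant, and "true" forcing perturbation. The data kernels are obtained by solving the discrete adjoint (transposed) system, and the forcing perturbation is then recovered from the data perturbation by least squares:

```python
import numpy as np

N, dt, c = 100, 0.1, 0.5
t = np.arange(N) * dt

# Discrete Newtonian cooling operator, L = d/dt + c (u(0) = 0 built in)
Lmat = (np.eye(N) - np.diag(np.ones(N - 1), k=-1)) / dt + c * np.eye(N)

# Data functionals: d_1 = u(t_1); d_j = (u(t_{j-1}) + u(t_j)) / 2 for j > 1
H = 0.5 * (np.eye(N) + np.diag(np.ones(N - 1), k=-1))
H[0, 0] = 1.0

# Data kernels g_i: each row solves the discrete adjoint equation L^T g_i = h_i
G = np.linalg.solve(Lmat.T, H.T).T

# A "true" perturbation of the heating, and the data perturbation it produces
df_true = 0.3 * np.exp(-((t - 4.0) ** 2))
dd = G @ df_true

# Reconstruct the forcing perturbation from the data by least squares
df_pre, *_ = np.linalg.lstsq(G, dd, rcond=None)

print(np.max(np.abs(df_pre - df_true)))   # tiny: noise-free, well-posed case
```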

Results

[Figure: the true perturbation δf_true, the predicted perturbation δf_pre, and the error between them, plotted against time t.]

What about perturbations in the parameters of a differential equation?

Suppose L has a parameter a(t). Changing the parameter from a0(t) to a0(t) + δa(t) changes the solution of Lu = f from u0(t) to u0(t) + δu(t).

Approximation that makes a perturbation in a parameter act like a forcing

L u = f with L = a(t) d/dt

Suppose a(t) = a0(t) + δa(t). Then L = L0 + L1 = a0(t) d/dt + δa(t) d/dt

Write u(t) = u0(t) + δu(t), where u0(t) solves L0 u0 = f

Lu = f ⇒ (L0 + L1)(u0 + δu) = f
      ⇒ L0 u0 + L0 δu + L1 u0 + L1 δu = f
      ⇒ L0 δu = -L1 u0 - L1 δu
      ⇒ L0 δu ≈ -L1 u0

so the term -L1 u0 acts as a forcing for the perturbation δu

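A numerical check of this approximation (the discretization and the functions a0, δa, and f are assumed for illustration): the perturbation obtained by treating -L1 u0 as a forcing for L0 agrees with the exact perturbation to first order in δa:

```python
import numpy as np

N, dt = 200, 0.05
t = np.arange(N) * dt

D = (np.eye(N) - np.diag(np.ones(N - 1), k=-1)) / dt   # discrete d/dt with u(0) = 0

a0 = 1.0 + 0.2 * np.cos(t)          # reference parameter a0(t)
da = 0.02 * np.sin(3 * t)           # small parameter perturbation δa(t)
f = np.exp(-0.1 * t)                # an assumed forcing

L0 = np.diag(a0) @ D                # L0 = a0(t) d/dt
L1 = np.diag(da) @ D                # L1 = δa(t) d/dt

u0 = np.linalg.solve(L0, f)                         # solves L0 u0 = f
du_true = np.linalg.solve(L0 + L1, f) - u0          # exact perturbation
du_approx = np.linalg.solve(L0, -L1 @ u0)           # perturbation treated as a forcing

print(np.max(np.abs(du_true - du_approx)))          # small: second order in δa
```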