
Lecture 22 Adjoint Methods



Presentation Transcript


  1. Lecture 22: Adjoint Methods

  2. Part 1: Motivation

  3. Motivating scenario. We want to predict tomorrow's weather, u(t). We have an atmospheric model chugging away to predict temperature, pressure, etc. This model depends on a forcing f(t), for example, sea surface temperature, which we know only imperfectly.

  4. Yesterday's model run. But now we have new data for today. [Figure: the prediction u(t) and the forcing f(t) plotted against t, with t_yesterday and t_today marked; the second panel shows the new data arriving between t_yesterday and t_today.]

  5. Today's model run should include yesterday's data to help constrain the poorly known forcing. [Figure: the old and new predictions u(t), the new data, and the old and new forcings f(t), plotted against t with t_yesterday and t_today marked.] How do we adjust the forcing (which was imperfectly known, anyway) to better predict yesterday's weather?

  6. Part 2: The mathematics of continuous functions, inner products, linear operators and their adjoints

  7. Discrete: vectors $u_k$ and $v_k$. Continuous: functions $f(t)$ and $g(t)$.

  8. Functions $f(t)$ and $g(t)$: discrete approximation as vectors $u_k = f(k\Delta t)$ and $v_k = g(k\Delta t)$.

  9. Discrete dot product: $c = \sum_k u_k v_k = \mathbf{u} \cdot \mathbf{v}$, a scalar. Continuous inner product: $c = \int f(t)\, g(t)\, dt = (f, g)$.

  10. Discrete approximation of the inner product $(f,g)$ as a dot product: $(f,g) = \int f(t)\, g(t)\, dt \approx \Delta t \sum_k u_k v_k = \Delta t\, \mathbf{u} \cdot \mathbf{v}$, with $u_k = f(k\Delta t)$ and $v_k = g(k\Delta t)$.
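
A minimal numpy sketch of this approximation (the functions, the interval, and $\Delta t$ are my own illustrative choices, not from the slides):

```python
import numpy as np

# sample two continuous functions on a grid t_k = k * dt
dt = 0.001
t = np.arange(0.0, 1.0, dt)
u = np.sin(2 * np.pi * t)      # u_k = f(k dt)
v = np.exp(-t)                 # v_k = g(k dt)

# discrete approximation of the inner product (f, g) = integral of f g dt
inner = dt * np.dot(u, v)      # dt * sum_k u_k v_k

# compare with the exact integral of sin(2 pi t) exp(-t) over [0, 1]
exact = 2 * np.pi * (1 - np.exp(-1)) / (1 + 4 * np.pi**2)
print(inner, exact)            # the two values agree to several decimal places
```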

  11. Discrete matrix: $v_j = \sum_k M_{jk} u_k$, or $\mathbf{v} = M\mathbf{u}$. Continuous linear operator: $f = Lg$.

  12. What is a linear operator? A linear differential operator, involving derivatives and known functions: $Lg = \left[\, p(t)\, \tfrac{d}{dt}\, q(t)\, \tfrac{d}{dt} \,\right] g(t)$, where $p(t)$ and $q(t)$ are known.

  13. and/or a linear integral operator, involving an integral and known functions: $Lg = \int p(t,t')\, g(t')\, dt'$, where $p(t,t')$ is known.

  14. If $L_1 g = f$ and $L_2 f = g$, then $L_1 = L_2^{-1}$ and $L_2 = L_1^{-1}$: one linear operator is the inverse of the other.

  15. Discrete approximations. Sample differential operator plus b.c.: $Lg = f$ with $L = d/dt$ plus b.c. $g(0)$ known; discretely $M\mathbf{u} = \mathbf{v}$ with
$$M = \Delta t^{-1}\begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ -1 & 1 & 0 & \cdots & 0 \\ 0 & -1 & 1 & \cdots & 0 \\ & & \ddots & \ddots & \\ 0 & \cdots & 0 & -1 & 1 \end{pmatrix}$$
Sample integral operator plus b.c.: $Lg = f$ with $Lg = \int_0^t g(t')\, dt'$ plus b.c. $g(0)$ known; discretely $M\mathbf{u} = \mathbf{v}$ with
$$M = \Delta t\begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & 1 & 0 & \cdots & 0 \\ 1 & 1 & 1 & \cdots & 0 \\ & & & \ddots & \\ 1 & 1 & 1 & \cdots & 1 \end{pmatrix}$$
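
A short numpy sketch of these two discrete approximations ($N$ and $\Delta t$ are illustrative); it also confirms the statement of slide 14 that the two operators are inverses of each other:

```python
import numpy as np

N, dt = 5, 0.1

# differential operator d/dt with b.c. g(0) known:
# M = dt^(-1) * (ones on the diagonal, minus ones just below it)
D = (np.eye(N) - np.eye(N, k=-1)) / dt

# integral operator int_0^t g(t') dt' with the same b.c.:
# M = dt * (lower-triangular matrix of ones)
S = dt * np.tril(np.ones((N, N)))

# one linear operator is the inverse of the other (slide 14)
print(np.allclose(D @ S, np.eye(N)))   # True
```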

  16. Question concerning a dot product: given two matrices $A$ and $B$, when is $(A\mathbf{u}) \cdot \mathbf{v} = \mathbf{u} \cdot (B\mathbf{v})$? Answer: when $B = A^T$, since $(A\mathbf{u}) \cdot \mathbf{v} = (A\mathbf{u})^T \mathbf{v} = \mathbf{u}^T A^T \mathbf{v} = \mathbf{u} \cdot (A^T \mathbf{v})$.
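
A quick numerical check of this identity, with a random matrix and random vectors of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
u = rng.normal(size=4)
v = rng.normal(size=4)

# (A u) . v equals u . (A^T v), i.e. B = A^T plays the role of B above
print(np.allclose((A @ u) @ v, u @ (A.T @ v)))   # True
```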

  17. Question concerning an inner product: given two linear operators $L_1$ and $L_2$, when is $(L_1 f, g) = (f, L_2 g)$? Answer: never mind, but let's give it a name: $(L_1 f, g) = (f, L_2 g)$ when $L_1$ is the adjoint of $L_2$. Let's denote the adjoint relationship $L_2 = L_1^*$, where $^*$ means "adjoint".

  18. Transpose: if $A = B^T$ then $B = A^T$; $(A^T)^T = A$; $(A^T)^{-1} = (A^{-1})^T$; $(A + B)^T = A^T + B^T$; if $A^T = A$ then $A$ is symmetric. Adjoint: if $L_1 = L_2^*$ then $L_2 = L_1^*$; $L^{**} = L$; $(L^*)^{-1} = (L^{-1})^*$; $(L_1 + L_2)^* = L_1^* + L_2^*$; if $L^* = L$ then $L$ is self-adjoint.

  19. Calculating adjoints by integration by parts. Let $L = d/dt$ with b.c. zero at $\pm\infty$. Then $(Lf, g) = \int_{-\infty}^{+\infty} \tfrac{df}{dt}\, g\, dt = f g \big|_{-\infty}^{+\infty} - \int_{-\infty}^{+\infty} f\, \tfrac{dg}{dt}\, dt = -\int_{-\infty}^{+\infty} f\, \tfrac{dg}{dt}\, dt = (f, L^* g)$. So $L^* = -d/dt$, with b.c. zero at $\pm\infty$.
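
The same result can be checked numerically. Below is a sketch using two test functions of my own choosing that decay to zero at $\pm\infty$, with their derivatives written analytically:

```python
import numpy as np

# grid wide enough that both test functions vanish at the ends
dt = 0.001
t = np.arange(-10.0, 10.0, dt)

f = np.exp(-t**2)                        # decays to zero at +/- infinity
g = t * np.exp(-t**2)                    # likewise, so the boundary term f g vanishes

df = -2.0 * t * np.exp(-t**2)            # df/dt
dg = (1.0 - 2.0 * t**2) * np.exp(-t**2)  # dg/dt

lhs = dt * np.dot(df, g)                 # (L f, g)   with  L  =  d/dt
rhs = dt * np.dot(f, -dg)                # (f, L* g)  with  L* = -d/dt
print(lhs, rhs)                          # the two inner products agree
```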

  20. Three simple adjoints:
L = c(x): L* = c(x).
L = d/dt (b.c.: function 0 at ±∞): L* = -d/dt (b.c.: function 0 at ±∞).
L = d²/dt² (b.c.: function and its first derivative 0 at ±∞): L* = d²/dt² (same b.c.).

  21. Part 3: Functional derivatives. How to represent the idea that a perturbation in the forcing, $f(t)$, causes a perturbation in the response, $u(t)$.

  22. Here's the differential equation: $L\, u(t) = f(t)$, where $f(t)$ is the forcing. The data $d_i$ depend linearly on $u(t)$ through an inner product: $d_i = (h_i, u)$.

  23. Differential equation $Lu = f$. A perturbation in $f(t)$ causes a perturbation in $u(t)$: $f_0(t) \to f_0(t) + \delta f(t)$ and $u_0(t) \to u_0(t) + \delta u(t)$. Suppose $\delta f(t)$ is localized at time $t_0$: $\delta f(t) = \varepsilon\, \delta(t - t_0)$. Then $\delta u(t)$ is a function of $\varepsilon$ and $t_0$: $\delta u(t, \varepsilon, t_0)$. The functional (or Fréchet) derivative is: $\delta u(t)/\delta f(t_0) = \lim_{\varepsilon \to 0} \left[ u(t, \varepsilon, t_0) - u(t, \varepsilon{=}0, t_0) \right] / \varepsilon$.

  24. An impulsive perturbation in the forcing, $\varepsilon\, \delta(t - t_0)$, causes a perturbation in the response, $\delta u(t, \varepsilon, t_0)$. Then a general perturbation $\delta f$ in the forcing causes the response $\delta u = \int (\delta u / \delta f)\, \delta f\, dt_0 = (\, \delta u / \delta f,\ \delta f \,)$.

  25. [Figure, top row: an impulsive perturbation in the forcing, $\varepsilon\, \delta(t - t_0)$, causes a response $\delta u$; this defines $\delta u / \delta f$. Bottom row: a more complicated perturbation in the forcing, $\delta f$, causes the response $\delta u = (\, \delta u / \delta f,\ \delta f \,)$.]

  26. Then the general perturbation $\delta f$ in the forcing causes the response $\delta u = \int (\delta u / \delta f)\, \delta f\, dt_0 = (\, \delta u / \delta f,\ \delta f \,)$. In a discrete world:
$$\begin{pmatrix} \delta u_1 \\ \delta u_2 \\ \delta u_3 \\ \vdots \\ \delta u_N \end{pmatrix} = \Delta t \begin{pmatrix} \delta u(t_1)/\delta f(t_1) & \delta u(t_1)/\delta f(t_2) & \delta u(t_1)/\delta f(t_3) & \cdots \\ \delta u(t_2)/\delta f(t_1) & \delta u(t_2)/\delta f(t_2) & \delta u(t_2)/\delta f(t_3) & \cdots \\ \delta u(t_3)/\delta f(t_1) & \delta u(t_3)/\delta f(t_2) & \delta u(t_3)/\delta f(t_3) & \cdots \\ \vdots & & & \\ \delta u(t_N)/\delta f(t_1) & \delta u(t_N)/\delta f(t_2) & \delta u(t_N)/\delta f(t_3) & \cdots \end{pmatrix} \begin{pmatrix} \delta f_1 \\ \delta f_2 \\ \delta f_3 \\ \vdots \\ \delta f_N \end{pmatrix}$$
Might solve with least squares ...
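
In this discrete form, recovering $\delta f$ from $\delta u$ is an ordinary matrix problem. Here is a numpy sketch; the matrix of functional derivatives is a made-up causal example (an exponentially decaying response of my own choosing), not one derived from a particular model:

```python
import numpy as np

N, dt = 50, 0.1
t = np.arange(N) * dt

# hypothetical functional-derivative matrix D_ij = delta u(t_i) / delta f(t_j);
# lower triangular because a causal response cannot precede its forcing
ti, tj = np.meshgrid(t, t, indexing="ij")
D = np.where(ti >= tj, np.exp(-(ti - tj)), 0.0)

df_true = np.sin(t)                    # an assumed perturbation in the forcing
du = dt * D @ df_true                  # the resulting perturbation in the response

# solve du = dt * D * df for df by least squares
df_est, *_ = np.linalg.lstsq(dt * D, du, rcond=None)
print(np.allclose(df_est, df_true))    # True (well posed in this toy example)
```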

  27. Part 4: Calculating the data kernel, the functional derivative of the data with respect to the forcing.

  28. The goal: to find the data kernel $g_i(t)$, which relates a perturbation in the data, $\delta d_i$, to a perturbation in the forcing, $\delta f(t)$, through an inner product: $\delta d_i = (\, g_i(t),\ \delta f(t) \,)$.

  29. Note that since the data kernel satisfies $\delta d_i = (\, g_i(t),\ \delta f(t) \,)$, it is a functional derivative: $g_i(t) = \delta d_i / \delta f(t)$.

  30. Step 1: assume that a function $u(t)$ solves a linear differential equation with forcing $f(t)$: $L\, u(t) = f(t)$.

  31. Step 2: assume the differential equation has Green's function $F(t,t')$, so the solution can be written $u(t) = \int F(t,t')\, f(t')\, dt' = (\, F(t,t'),\ f(t') \,) \equiv L^{-1} f(t)$. Note that $L^{-1}$ is the inverse of $L$, since $f = Lu$ and $u = L^{-1} f$.

  32. Step 3: assume that the data, $d_i$, are related to the solution $u(t)$ through an inner product: $d_i = (\, h_i(t),\ u(t) \,)$.

  33. Step 4: do some substitutions and manipulations: $d_i = (\, h_i(t),\ u(t) \,) = (\, h_i(t),\ L^{-1} f(t) \,) = (\, (L^{-1})^* h_i(t),\ f(t) \,) = (\, (L^*)^{-1} h_i(t),\ f(t) \,)$.

  34. Step 4, continued: since the problem is linear, this rule applies to perturbations of the functions as well as to the functions themselves: $d_i = (\, (L^*)^{-1} h_i(t),\ f(t) \,)$, so $\delta d_i = (\, (L^*)^{-1} h_i(t),\ \delta f(t) \,)$.

  35. Step 5: by comparing the definition of the data kernel, $\delta d_i = (\, g_i(t),\ \delta f(t) \,)$, to the result $\delta d_i = (\, (L^*)^{-1} h_i(t),\ \delta f(t) \,)$, recognize that the data kernel is $g_i(t) = (L^*)^{-1} h_i(t)$.

  36. Step 6: since the data kernel satisfies $g_i(t) = (L^*)^{-1} h_i(t)$, it must satisfy the differential equation $L^* g_i(t) = h_i(t)$.

  37. This is the desired result: a way of calculating the data kernel $g_i(t)$ by solving the differential equation $L^* g_i(t) = h_i(t)$.
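
In a discretized setting the adjoint of a real matrix is simply its transpose, so this step reduces to a single linear solve. A minimal sketch, assuming an illustrative discrete operator (a forward difference plus a constant, of the same kind as the example in Part 5) and an $h_i$ that samples $u(t)$ at one time:

```python
import numpy as np

N, dt, c = 100, 0.1, 0.5

# discrete stand-in for L: d/dt (forward difference with b.c.) plus a constant c
L = (np.eye(N) - np.eye(N, k=-1)) / dt + c * np.eye(N)

# h_i that picks out a single sample of u(t)   (illustrative choice)
h = np.zeros(N)
h[60] = 1.0

# the data kernel solves L* g_i = h_i; for a real matrix the adjoint is the transpose
g = np.linalg.solve(L.T, h)

# consistency check: h . u equals g . f for any forcing f
# (the dt factor in the discrete inner product is the same on both sides and cancels)
f = np.random.default_rng(1).normal(size=N)
u = np.linalg.solve(L, f)
print(np.allclose(np.dot(h, u), np.dot(g, f)))   # True
```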

  38. Part 5: An example. Note: in this example I use very simple differential equations that can be solved analytically. In reality, you would be using much more complicated differential equations that must be solved numerically.

  39. Example: the Newtonian cooling equation, $du/dt + cu = f(t)$, so $L = d/dt + c$. Here $u(t)$ is temperature, $f(t)$ is heating, and $c$ is a constant.

  40. Green's function: $du/dt + cu = \delta(t - t')$ has Green's function $F(t,t') = H(t - t')\, \exp\{-c\,(t - t')\}$, where $H$ is the unit step function.
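
This formula can be checked against a brute-force discrete solve; in the sketch below the grid, $\Delta t$, and $c$ are illustrative choices, and the delta function is approximated by a spike of height $1/\Delta t$:

```python
import numpy as np

N, dt, c = 400, 0.01, 2.0
t = np.arange(N) * dt

# discrete L = d/dt + c (forward difference plus zero initial condition)
L = (np.eye(N) - np.eye(N, k=-1)) / dt + c * np.eye(N)

# forcing f = delta(t - t') approximated by a spike of height 1/dt at t' = t[100]
tp = t[100]
f = np.zeros(N)
f[100] = 1.0 / dt
u = np.linalg.solve(L, f)

# analytic Green's function F(t, t') = H(t - t') exp{-c (t - t')}
F = np.where(t >= tp, np.exp(-c * (t - tp)), 0.0)

print(np.max(np.abs(u - F)))   # order-dt discretization error; shrinks as dt is refined
```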

  41. Adjoint differential equation: with $L = d/dt + c$, the adjoint of $d/dt$ is $-d/dt$ and the adjoint of $c$ is $c$, so $L^* = -d/dt + c$. Thus $du/dt + cu = f(t)$ has the corresponding adjoint equation $-dg_i/dt + c\, g_i = h_i$.

  42. Green's function of the adjoint differential equation: $-dg_i/dt + c\, g_i = \delta(t - t')$ has solution $G(t,t') = \{1 - H(t - t')\}\, \exp\{c\,(t - t')\}$.
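
The same brute-force check works for the adjoint equation, using the transpose of the discrete operator (parameters again illustrative; the value of $G$ exactly at $t = t'$ is a matter of convention at the jump):

```python
import numpy as np

N, dt, c = 400, 0.01, 2.0
t = np.arange(N) * dt

# discrete L = d/dt + c; its discrete adjoint is the transpose, a backward difference
L = (np.eye(N) - np.eye(N, k=-1)) / dt + c * np.eye(N)

# solve -dg/dt + c g = delta(t - t') with the spike placed at t' = t[300]
tp = t[300]
rhs = np.zeros(N)
rhs[300] = 1.0 / dt
g = np.linalg.solve(L.T, rhs)

# analytic adjoint Green's function G(t, t') = {1 - H(t - t')} exp{ c (t - t') }
G = np.where(t <= tp, np.exp(c * (t - tp)), 0.0)

print(np.max(np.abs(g - G)))   # order-dt discretization error; shrinks as dt is refined
```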

  43. Interpretation: suppose $h_i = \delta(t)$, so that the datum $d_i$ is just $u(t{=}0)$, the temperature at time 0. Then $G(t, t'{=}0)$ is the data kernel $g_i(t)$. Now suppose that we make an impulsive perturbation of the heating at time $t_0$: $\delta f(t) = \delta(t - t_0)$. Then $\delta d_i = \delta u(t{=}0) = (\, g_i(t),\ \delta f(t) \,) = (\, G(t, t'{=}0),\ \delta(t - t_0) \,) = G(t_0, t'{=}0)$.

  44. Interpretation, continued: so for an impulsive perturbation of the heating at time $t_0$, $\delta u(t{=}0) = G(t_0, t'{=}0)$. We would expect: no effect on the temperature if the heat is applied after time $t=0$; a large effect if it is applied just prior to $t=0$; a minimal effect if it is applied long before $t=0$. [Figure: $G(t_0, t'{=}0)$ versus $t_0$, annotated with the small-effect, large-effect, and no-effect regions.]

  45. Example. [Figure: the reference forcing $f_0$ and solution $u_0$, the perturbed forcing $f$, the observed $u_{obs}$, and the perturbation $\delta u_{obs}$, plotted against $t$.]

  46. Forming data from $u(t)$: here I use an example of the data being averages of neighboring $u$'s. $d_1 = u(t_1)$, so $h_1 = [1, 0, 0, 0, \ldots, 0]^T$; $d_j = \tfrac{1}{2}\{ u(t_{j-1}) + u(t_j) \}$ for $j > 1$, so $h_j = \tfrac{1}{2}\,[0, \ldots, 0, 1, 1, 0, \ldots, 0]^T$.
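
A small numpy sketch of these data-forming vectors, stacked as the rows of a matrix (the grid size and the stand-in values of $u$ are illustrative):

```python
import numpy as np

N = 8                          # number of time samples (illustrative)

# rows of H are the vectors h_j above:
# d_1 = u(t_1), and d_j = (u(t_{j-1}) + u(t_j)) / 2 for j > 1
H = np.zeros((N, N))
H[0, 0] = 1.0
for j in range(1, N):
    H[j, j - 1] = 0.5
    H[j, j] = 0.5

u = np.arange(1.0, N + 1.0)    # stand-in solution u(t_j) = j
d = H @ u
print(d)                       # [1.  1.5  2.5  3.5  4.5  5.5  6.5  7.5]
```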

  47. [Figure: the reference data $d_0$, the observed data $d_{obs}$, and the perturbation $\delta d_{obs}$, plotted against $t$.]

  48. The problem: reconstruct $\delta f$ from $\delta d$.

  49. Setup for least squares: $\delta d_i = (\, g_i(t),\ \delta f(t) \,)$, or in discrete form
$$\begin{pmatrix} \delta d_1 \\ \delta d_2 \\ \delta d_3 \\ \vdots \\ \delta d_N \end{pmatrix} = \begin{pmatrix} \mathbf{g}_1^T \\ \mathbf{g}_2^T \\ \mathbf{g}_3^T \\ \vdots \\ \mathbf{g}_N^T \end{pmatrix} \begin{pmatrix} \delta f_1 \\ \delta f_2 \\ \delta f_3 \\ \vdots \\ \delta f_N \end{pmatrix}$$
where row $i$ of the matrix is the sampled data kernel $g_i(t)$, so time varies along the columns.
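
Putting the pieces together, here is an end-to-end numpy sketch of this least-squares setup for the Newtonian cooling example. The grid, the constant $c$, the averaging data rule of slide 46, and the shape of the "true" $\delta f$ are all my own illustrative choices, and the $\Delta t$ from the discrete inner product is written out explicitly:

```python
import numpy as np

N, dt, c = 200, 0.05, 0.5
t = np.arange(N) * dt

# discrete L = d/dt + c, and the data-forming rows h_i (averages, as in slide 46)
L = (np.eye(N) - np.eye(N, k=-1)) / dt + c * np.eye(N)
H = np.zeros((N, N))
H[0, 0] = 1.0
for j in range(1, N):
    H[j, j - 1 : j + 1] = 0.5

# each data kernel g_i solves L* g_i = h_i; stack the kernels as rows of a matrix
G = np.linalg.solve(L.T, H.T).T

# a synthetic "true" perturbation in forcing and the data it would produce
df_true = np.exp(-((t - 5.0) ** 2))
dd = dt * G @ df_true                    # dd_i = (g_i, df) ~ dt * sum_k g_ik df_k

# reconstruct df from dd by least squares
df_pre, *_ = np.linalg.lstsq(dt * G, dd, rcond=None)
print(np.max(np.abs(df_pre - df_true)))  # near zero: the reconstruction succeeds
```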

  50. Results. [Figure: the true perturbation $\delta f_{true}$, the predicted $\delta f_{pre}$, and the error, plotted against $t$.]
