
Dimension Reduction in “Heat Bath” Models. Raz Kupferman, The Hebrew University.








  1. Dimension Reduction in “Heat Bath” Models. Raz Kupferman, The Hebrew University.

  2. Part I: Convergence Results. Andrew Stuart, John Terry, Paul Tupper, R.K. Ergodicity results: Paul Tupper’s talk.

  3. Set-up: Kac-Zwanzig models. A (large) mechanical system consisting of a “distinguished” particle interacting through springs with a collection of “heat bath” particles. The heat bath particles have random initial data (Gibbsian distribution). Goal: derive “reduced” dynamics for the distinguished particle.

  4. Motivation. • Represents a class of problems where dimension reduction is sought; rigorous analysis is possible. • Convenient test problem for recent dimension-reduction approaches/techniques: • optimal prediction (Kast) • stochastic modeling • hidden Markov models (Huisinga-Stuart-Schuette) • coarse-grained time stepping (Warren-Stuart, Hald-K.) • time-series model identification (Stuart-Wiberg)

  5. The governing equations. The Hamiltonian and the equations of motion, with: (Pn,Qn): coordinates of the distinguished particle; (pj,qj): coordinates of the j-th heat bath particle; mj: mass of the j-th particle; kj: stiffness of the j-th spring; V(Q): external potential.
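The Hamiltonian itself appeared as an image in the original deck; a sketch of the standard Kac-Zwanzig form (the unit mass of the distinguished particle and the coupling convention are assumptions) is:

```latex
H\bigl(Q,P;\{q_j,p_j\}\bigr)
  = \frac{P^2}{2} + V(Q)
  + \sum_{j=1}^{n}\left[\frac{p_j^2}{2m_j}
  + \frac{k_j}{2}\,\bigl(q_j - Q\bigr)^2\right],
\qquad
\begin{cases}
\ddot{Q} = -V'(Q) + \displaystyle\sum_{j=1}^{n} k_j\,(q_j - Q),\\[4pt]
m_j\,\ddot{q}_j = -k_j\,(q_j - Q).
\end{cases}
```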

  6. Initial data. Heat bath particles have random initial data drawn from the Gibbs distribution with temperature T; the initial data are independent Gaussian: i.i.d. N(0,1).
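The distributions were shown as formulas on the slide; under the Gibbs measure proportional to e^{-H/T} for the bath variables, one standard form (a sketch; the centering of q_j at Q(0) is an assumption consistent with spring coupling) is:

```latex
q_j(0) = Q(0) + \sqrt{\tfrac{T}{k_j}}\;\xi_j,
\qquad
p_j(0) = \sqrt{m_j T}\;\eta_j,
\qquad
\xi_j,\eta_j \overset{\text{i.i.d.}}{\sim} N(0,1).
```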

  7. Generalized Langevin Equation. Solve the (p,q) equations and substitute back into the (P,Q) equation (Ford-Kac ’65, Zwanzig ’73). The result involves a memory kernel, a random forcing, and a “fluctuation-dissipation” relation between them.
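The resulting equation was displayed as an image; a sketch of the usual generalized Langevin form (the precise normalization is an assumption) is:

```latex
\ddot{Q}_n(t) = -V'\bigl(Q_n(t)\bigr)
  - \int_0^t K_n(t-s)\,\dot{Q}_n(s)\,ds + F_n(t),
\qquad
\underbrace{\mathbb{E}\,F_n(t)F_n(s) = T\,K_n(t-s)}_{\text{fluctuation-dissipation}}.
```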

  8. Choice of parameters. Heat baths are characterized by broad and dense spectra. Take a random set of frequencies and an ansatz for the spring constants. Assumption: f²(ω) is bounded and decays faster than 1/ω.

  9. Generalized Langevin equation (continued). Memory kernel: a Monte Carlo approximation of the Fourier cosine transform of f²(ω). Random forcing: a Monte Carlo approximation of a stochastic integral.
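Under one common normalization (an assumption; the original formulas were images), the kernel and forcing read:

```latex
K_n(t) = \frac{1}{n}\sum_{j=1}^{n} f^2(\omega_j)\cos(\omega_j t)
  \;\longrightarrow\;
K(t) = \int_0^{\infty} f^2(\omega)\cos(\omega t)\,d\omega,
\qquad
F_n(t) = \sqrt{\frac{T}{n}}\sum_{j=1}^{n} f(\omega_j)
  \bigl[\xi_j\cos\omega_j t + \eta_j\sin\omega_j t\bigr],
```

and one checks directly that E Fn(t)Fn(s) = T Kn(t-s), i.e. fluctuation-dissipation holds for each n.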

  10. Lemma: for almost every choice of frequencies, Kn(t) converges pointwise to K(t), the Fourier cosine transform of f²(ω); moreover Kn → K in L², with respect both to the randomness and to L²[0,T]. Theorem: almost surely, the sequence of random functions Fn(t) converges weakly in C[0,T] to a stationary Gaussian process F(t) with mean zero and auto-covariance K(t) (Fn ⇒ F). Proof: CLT + tightness. Can be extended to “long term” behavior: convergence of empirical finite-dimensional distributions (Paul Tupper’s talk).

  11. Example. If we choose f² so that K(t) decays exponentially, then, by the above theorem, Fn(t) converges weakly to the Ornstein-Uhlenbeck (OU) process U(t), defined by an Ito SDE.
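One consistent choice (a sketch; the 2/π normalization matches the cosine-transform convention assumed above) is a Lorentzian spectrum, which yields an exponential kernel and the OU limit:

```latex
f^2(\omega) = \frac{2}{\pi}\,\frac{\alpha}{\alpha^2+\omega^2}
\;\Longrightarrow\;
K(t) = e^{-\alpha t},
\qquad
dU = -\alpha U\,dt + \sqrt{2\alpha T}\,dW,
\quad
\mathbb{E}\,U(t)U(s) = T\,e^{-\alpha|t-s|}.
```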

  12. Convergence of Qn(t). Theorem: almost surely, Qn(t) converges weakly in C²[0,T] to the solution Q(t) of the limiting stochastic integro-differential equation. Proof: the mapping (Kn,Fn) ↦ Qn is continuous.

  13. Back to the example. With the exponential-kernel choice, Qn(t) converges to the solution of the limiting stochastic integro-differential equation, which is equivalent to a (memoryless!!) SDE.
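A sketch of this equivalence (the auxiliary variable Z is introduced here for illustration): with K(t) = e^{-αt} the limiting equation is

```latex
\ddot{Q} = -V'(Q) - \int_0^t e^{-\alpha(t-s)}\,\dot{Q}(s)\,ds + U(t),
```

and setting Z(t) = U(t) - ∫₀ᵗ e^{-α(t-s)} Q̇(s) ds turns it into the Markovian system of three SDEs:

```latex
dQ = P\,dt,
\qquad
dP = \bigl(-V'(Q) + Z\bigr)\,dt,
\qquad
dZ = -(P + \alpha Z)\,dt + \sqrt{2\alpha T}\,dW.
```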

  14. Numerical validation. Empirical distribution of Qn(t) for n=5000 and various choices of V(Q), compared with the invariant measure of the limiting SDE: single well, double well, triple well (extremely long correlation time).
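As a rough numerical sketch of this kind of validation (not the authors’ code: the three-variable Markovian system with exponential kernel and the quadratic well V(Q) = Q²/2 are assumptions), one can integrate the reduced SDE by Euler-Maruyama and compare the empirical variance of Q with its Boltzmann value T:

```python
import math
import random

def simulate(alpha=1.0, T=1.0, dt=0.01, n_steps=200_000, burn_in=50_000, seed=0):
    """Euler-Maruyama for the (assumed) three-variable Markovian reduction
        dQ = P dt
        dP = (-V'(Q) + Z) dt        with quadratic well: V'(Q) = Q
        dZ = -(P + alpha*Z) dt + sqrt(2*alpha*T) dW
    Returns the empirical mean and variance of Q after a burn-in period."""
    rng = random.Random(seed)
    Q = P = Z = 0.0
    sigma = math.sqrt(2.0 * alpha * T * dt)   # noise increment scale per step
    samples = []
    for step in range(n_steps):
        dQ = P * dt
        dP = (-Q + Z) * dt
        dZ = -(P + alpha * Z) * dt + sigma * rng.gauss(0.0, 1.0)
        Q += dQ
        P += dP
        Z += dZ
        if step >= burn_in:
            samples.append(Q)
    mean = sum(samples) / len(samples)
    var = sum((q - mean) ** 2 for q in samples) / len(samples)
    return mean, var

if __name__ == "__main__":
    mean, var = simulate()
    # For V(Q) = Q^2/2 the invariant marginal of Q is N(0, T), so var should be near T = 1.
    print(f"mean(Q) = {mean:.3f}, var(Q) = {var:.3f}")
```

The check works because the three-variable system has invariant measure proportional to exp(-(P²/2 + V(Q) + Z²/2)/T), so the Q-marginal is the Boltzmann distribution exp(-V(Q)/T).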

  15. The “unresolved” components of the solution are modeled by an auxiliary, memoryless, stochastic variable. • Bottom line: instead of solving a large, stiff system in 2(n+1) variables, solve a Markovian system of 3 SDEs! • Similar results can be obtained for nonlinear interactions (Stuart-K. ’03).

  16. Part II: Fractional Diffusion.

  17. Fractional (or anomalous) diffusion is found in a variety of systems and models (e.g., Brownian particles in polymeric fluids, continuous-time random walks). In all known cases, fractional diffusion reflects the divergence of relaxation times: extreme non-Markovian behaviour. Question: can we construct heat bath models that generate anomalous diffusion?

  18. Reminder. Memory kernel, parameters, and random forcing as before. If we take f²(ω) with power-law behavior at small ω, then the memory kernel exhibits power-law decay.
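A sketch of the mechanism (the exponent name γ is introduced here for illustration): if the spectrum behaves like a power near zero frequency, the cosine transform produces an algebraically decaying kernel,

```latex
f^2(\omega) \sim \omega^{\gamma-1}
\quad (\omega \to 0,\; 0<\gamma<1)
\;\Longrightarrow\;
K(t) = \int_0^\infty f^2(\omega)\cos(\omega t)\,d\omega
     \;\sim\; \Gamma(\gamma)\cos\!\left(\tfrac{\pi\gamma}{2}\right) t^{-\gamma}
\quad (t \to \infty).
```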

  19. The limiting GLE. Theorem: almost surely, Qn(t) converges weakly in C¹[0,T] to the solution Q(t) of the limiting stochastic integro-differential equation. F(t) is a Gaussian process with covariance K(t): the derivative of fractional Brownian motion (1/f-noise), interpreted in the distributional sense.

  20. Solving the limiting GLE. For a free particle, V’(Q)=0, and for a particle in a quadratic potential well, V’(Q)=Q, the SIDE can be solved using the Laplace transform. Free particle: Gaussian profile, with variance given by a sub-diffusive (Mittag-Leffler) function of time; var(Q) grows sublinearly in t. Quadratic potential: sub-exponential approach to the Boltzmann distribution.

  21. Numerical results. Variance of an ensemble of 3000 systems, V(Q)=0 (compared with the exact solution of the GLE).

  22. Quadratic well: evolving distribution of 10,000 systems (dashed line: Boltzmann distribution)

  23. What about dimensional reduction? Even a system with power-law memory can be well approximated by a Markovian system with a few (fewer than 10) auxiliary variables. How? Consider the following Markovian SDE: u(t): vector of size m; A: m×m constant matrix; C: m×m constant matrix; G: constant m-vector.
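A sketch of the extended-variable form (the sign conventions are assumptions; the equations on the slide were images):

```latex
dQ = P\,dt,
\qquad
dP = \bigl(-V'(Q) + G^{\mathsf T}u\bigr)\,dt,
\qquad
du = \bigl(-A u - G P\bigr)\,dt + C\,dW.
```

Eliminating u reproduces a GLE whose memory kernel is K(t) = Gᵀ e^{-At} G, with Laplace transform K̂(s) = Gᵀ (sI + A)^{-1} G: a rational function of degree (m-1) over m, as used on the following slides.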

  24. Solve for u(t) and substitute into the Q(t) equation. Goal: find G, A, C so that fluctuation-dissipation is satisfied and the kernel approximates power-law decay.

  25. It is easier to approximate in the Laplace domain (and the Laplace transform of a power is a power). The right-hand side is a rational function of degree (m-1) over m: a Padé approximation of the Laplace transform of the memory kernel (classical methods in linear systems theory). Even nicer if the kernel has a continued-fraction representation.

  26. Laplace transform of the memory kernel (solid line) compared with continued-fraction approximations with 2, 4, 8, 16 modes (dashed lines).

  27. Variance of the GLE (solid line) compared with Markovian approximations with 2, 4, 8 modes. Fractional diffusion scaling is observed over long times.

  28. Comments: Approximation by a Markovian system is not only a computational tool; it is also an analytical approach to studying the statistics of the solution (e.g., calculating stopping times). It is a controlled approximation (unlike the use of a “fractional Fokker-Planck equation”). Bottom line: even a system with long-range memory can be reduced (with high accuracy) to a Markovian system of fewer than 10 variables. (It is “intermediate asymptotics”, but that is what we care about in real life.)
