
Estimating ETAS



  1. Estimating ETAS 1. Straightforward aspects. 2. Main obstacles. 3. A trick to simplify things enormously. 4. Simulations and examples.

  2. 1. Point process estimation (or inversion) is easy.
a) Point processes can generally be very well estimated by maximum likelihood estimation (MLE). MLEs are generally consistent, asymptotically normal, and efficient. Fisher (1925), Cramér (1946), Ogata (1978), Le Cam (1990).
b) Maximizing a function is fast and easy: 400 years of work on this since Newton (1669). Every computer package has a strong quasi-Newton minimization routine, such as optim() in R. For instance, suppose the function you want to minimize is f(a,b) = (a - 4.19)^4 + (b - 7.23)^4:
f = function(p) (p[1]-4.19)^4 + (p[2]-7.23)^4
p = c(1,1)  ## initial guess at (a,b)
est = optim(p,f)
c) The likelihood function is fairly simple. In practice it is a little easier to compute log(likelihood) and maximize that (equivalently, minimize its negative, since optim() minimizes by default).
log(likelihood) = ∑ log{λ(ti,xi,yi)} - ∫∫∫ λ(t,x,y) dt dx dy.
ETAS (Ogata 1998): λ(t,x,y) = µ ρ(x,y) + ∑ g(t-tj, x-xj, y-yj; Mj), where
g(t,x,y; M) = K (t+c)^(-p) e^(a(M-M0)) (x^2 + y^2 + d)^(-q), or
g(t,x,y; M) = K (t+c)^(-p) {(x^2 + y^2) e^(-a(M-M0)) + d}^(-q).
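As a rough cross-check on the optim() example above, here is a stdlib-only Python sketch of the same minimization, using Newton's method per coordinate rather than a quasi-Newton routine (a hypothetical stand-in for illustration; since f is separable, each coordinate can be minimized independently):

```python
def newton_quartic_min(m, x0=1.0, iters=80):
    """Minimize (x - m)**4 by Newton's method: step = f'(x)/f''(x) = (x - m)/3."""
    x = x0
    for _ in range(iters):
        e = x - m
        if abs(e) < 1e-12:
            break
        x -= e / 3.0  # Newton step: 4e^3 / (12e^2)
    return x

# f(a, b) = (a - 4.19)^4 + (b - 7.23)^4 is separable, so handle each coordinate alone.
a_hat = newton_quartic_min(4.19)
b_hat = newton_quartic_min(7.23)
```

The error shrinks by a factor of 2/3 per iteration (the minimum is degenerate, so Newton converges linearly here rather than quadratically), which is still ample for this toy problem.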

  3. 1. Point process estimation (or inversion) is easy.
ETAS (Ogata 1998): λ(t,x,y) = µ ρ(x,y) + ∑ g(t-tj, x-xj, y-yj; Mj), where
g(t,x,y; M) = K (t+c)^(-p) e^(a(M-M0)) (x^2 + y^2 + d)^(-q), or
g(t,x,y; M) = K (t+c)^(-p) {(x^2 + y^2) e^(-a(M-M0)) + d}^(-q).
To make the temporal and spatial parts of g densities, I suggest writing g as
g(t,x,y; M) = {K (p-1) c^(p-1) (q-1) d^(q-1) / π} (t+c)^(-p) e^(a(M-M0)) (x^2 + y^2 + d)^(-q), or
g(t,x,y; M) = {K (p-1) c^(p-1) (q-1) d^(q-1) / π} (t+c)^(-p) {(x^2 + y^2) e^(-a(M-M0)) + d}^(-q).
Two reasons for this:
1) It is easy to see how to replace g by another density.
2) For each earthquake (tj, xj, yj, Mj), ∫∫∫ g(t-tj, x-xj, y-yj; Mj) dt dx dy = K e^(a(Mj-M0)).
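As a numerical sanity check on this normalization, the temporal factor (p-1) c^(p-1) (t+c)^(-p) and the spatial factor (q-1) d^(q-1)/π (x^2+y^2+d)^(-q) should each integrate to 1. A stdlib-only Python sketch, with arbitrary illustrative parameter values (p=2, c=0.5, q=2, d=1 are assumptions, not fitted values):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n subintervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# Temporal density (p-1) c^(p-1) (t+c)^(-p) on [0, infinity), truncated at 1e4.
p, c = 2.0, 0.5
t_mass = trapezoid(lambda t: (p - 1) * c**(p - 1) * (t + c)**(-p), 0.0, 1e4, 400_000)

# Spatial density in polar coordinates: 2*pi*r * (q-1) d^(q-1)/pi * (r^2 + d)^(-q).
q, d = 2.0, 1.0
r_mass = trapezoid(lambda r: 2 * (q - 1) * d**(q - 1) * r * (r*r + d)**(-q), 0.0, 1e4, 400_000)
# Both masses are ~1, so the full kernel integrates to K * exp(a*(M - M0)).
```

The truncation at 1e4 leaves a small tail, so the masses come out within about 10^-3 of 1 rather than exactly 1.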

  4. 2. Point process estimation is hard! Main obstacles:
a) Small problems with optimization routines: extreme flatness, local maxima, choosing a starting value.
b) The integral term in the log-likelihood,
log(likelihood) = ∑ log{λ(ti,xi,yi)} - ∫∫∫ λ(t,x,y) dt dx dy.
The sum is easy, but the integral term is extremely difficult to compute; it is by far the hardest part of estimation (Ogata 1998, Harte 2010).
Ogata (1998): divide the space around each point into quadrants and integrate over them.
Harte (2010): PtProcess uses optim() or nlm(), and the user must calculate the integral somehow.
Numerical approximation is slow and is a poor approximation for some values of p, q, etc.:
1,000 computations x 1,000 events x 1,000 function calls = 1 billion computations.
Problem b) contributes to a reluctance to repeat estimation and check on a).
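The back-of-the-envelope cost arithmetic above can be written out directly (the three factors are the slide's illustrative figures, not measurements):

```python
# Brute-force evaluation of the integral term: a quadrature over space-time
# for every event, repeated at every likelihood evaluation the optimizer makes.
grid_points_per_event = 1_000   # quadrature nodes around one earthquake
n_events = 1_000                # catalog size
likelihood_calls = 1_000        # function evaluations made by the optimizer
total_operations = grid_points_per_event * n_events * likelihood_calls
# total_operations is the "1 billion computations" quoted on the slide.
```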

  5. 3. A suggested trick. Recall that
log(likelihood) = ∑ log{λ(ti,xi,yi)} - ∫∫∫ λ(t,x,y) dt dx dy,
where λ(t,x,y) = µ ρ(x,y) + ∑ g(t-tj, x-xj, y-yj; Mj).
After normalizing g, ∫∫∫ g(t-tj, x-xj, y-yj; Mj) dt dx dy = K e^(a(Mj-M0)), if the integral were over infinite time and space.
Technically, in evaluating ∫∫∫ λ(t,x,y) dt dx dy, the integral is just over time [0,T] and some spatial region S. I suggest ignoring this and approximating
∫∫∫ g(t-tj, x-xj, y-yj; Mj) dt dx dy ≈ K e^(a(Mj-M0)).
Thus ∫∫∫ λ(t,x,y) dt dx dy = µT + ∫∫∫ ∑ g(t-tj, x-xj, y-yj; Mj) dt dx dy
= µT + ∑ ∫∫∫ g(t-tj, x-xj, y-yj; Mj) dt dx dy
≈ µT + K ∑ e^(a(Mj-M0)),
which doesn't depend on c, p, q, or d, and is extremely easy to compute.
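A minimal Python sketch of the resulting approximate log-likelihood (the function names, the cutoff M0, the toy catalog, and the uniform background density rho are illustrative assumptions, not part of the talk):

```python
import math

M0 = 3.0  # magnitude cutoff (assumed for this sketch)

def g(dt, dx, dy, M, K, c, p, a, d, q):
    """Normalized ETAS triggering kernel (first form on the slide)."""
    if dt <= 0:
        return 0.0
    norm = K * (p - 1) * c**(p - 1) * (q - 1) * d**(q - 1) / math.pi
    return norm * (dt + c)**(-p) * math.exp(a * (M - M0)) * (dx*dx + dy*dy + d)**(-q)

def etas_loglik(events, mu, K, c, p, a, d, q, T, rho):
    """log L = sum_i log lambda(ti,xi,yi) - [mu*T + K * sum_j exp(a*(Mj - M0))].
    events: time-sorted list of (t, x, y, M); rho: background spatial density on S."""
    s = 0.0
    for i, (ti, xi, yi, Mi) in enumerate(events):
        lam = mu * rho(xi, yi)
        for (tj, xj, yj, Mj) in events[:i]:  # only earlier events trigger
            lam += g(ti - tj, xi - xj, yi - yj, Mj, K, c, p, a, d, q)
        s += math.log(lam)
    integral = mu * T + K * sum(math.exp(a * (Mj - M0)) for (_, _, _, Mj) in events)
    return s - integral

events = [(1.0, 0.0, 0.0, 3.5), (2.0, 0.1, 0.0, 4.0)]  # toy catalog
rho = lambda x, y: 0.25  # uniform density on a 2x2 spatial region (assumption)
ll = etas_loglik(events, mu=0.5, K=0.2, c=0.01, p=1.2, a=1.0, d=1.0, q=1.5, T=10.0, rho=rho)
```

Because the approximated integral term depends only on µ, K, and a, the parameters c, p, d, and q enter only through the ∑ log λ term, which is what makes repeated estimation cheap.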

  6. 4. Simulation and example. 100 ETAS processes, estimated my way.

  7. 4. Simulation and example. % error in µ, from 100 ETAS simulations.

  8. 4. Simulation and example. CA earthquakes from 14 months after Hector Mine, M ≥ 3, from SCSN / SCEDC, as discussed in Ogata, Y., Jones, L. M. and Toda, S. (2003).

  9. 4. Simulation and example. Hector Mine data. Convergence of ETAS parameters is evident after ~ 200 days. Note that this repeated estimation over different time windows is easy if one uses the integral trick. This seems to lead to more stability in the estimates, since a little local maximum is less likely to appear repeatedly in all these estimations.

  10. 4. Simulation and example. Hector Mine data. ETAS parameter estimates as ratios of final estimates.

  11. Suggestions.
1. Write the triggering function g as a density in time and space.
2. Approximate the integral term as K ∑ e^(a(Mj-M0)).
3. Given data from time 0 to time T, estimate ETAS using data until time T/100, then 2T/100, etc., and assess convergence.
Conclusions. It's fast, easy, and works great. Huge reduction in run time. Enormous reduction in programming.
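Suggestion 3 can be sketched as an expanding-window loop. For illustration only, here is the simplest possible version: estimating a constant background rate as N[0,Tk]/Tk over windows T/100, 2T/100, ..., T on a seeded synthetic Poisson catalog (the catalog and rate are assumptions; in practice each window would get a full ETAS fit using the integral trick):

```python
import random

random.seed(1)
mu_true, T = 1.0, 1000.0

# Simulate a homogeneous Poisson process of rate mu_true on [0, T]
# via exponential inter-event gaps.
times, t = [], 0.0
while True:
    t += random.expovariate(mu_true)
    if t > T:
        break
    times.append(t)

# Estimate on expanding windows T/100, 2T/100, ..., T and watch for convergence.
estimates = []
for k in range(1, 101):
    Tk = k * T / 100.0
    n_k = sum(1 for s in times if s <= Tk)
    estimates.append(n_k / Tk)
```

Plotting the estimates against Tk gives the kind of convergence diagnostic shown for the Hector Mine data: early windows are noisy, and the sequence stabilizes as the window grows.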
