
Permeability Estimation Using A Hierarchical Markov Tree (HMT) Model


Presentation Transcript


  1. Permeability Estimation Using A Hierarchical Markov Tree (HMT) Model
     Jingbo Wang, Corporate Strategic Research, ExxonMobil Company. Phone: 908-730-2057, Email: jingbo.wang@exxonmobil.com
     Nicholas Zabaras, Sibley School of Mechanical and Aerospace Engineering, Cornell University. Phone: 607-255-9104, Email: zabaras@cornell.edu

  2. Outline
     • Overview of the problem: problem definition and challenges
     • Fundamentals of Bayesian statistics
     • Markov Random Field (MRF)
     • Markov chain Monte Carlo (MCMC) simulation
     • Hierarchical Bayesian model for permeability estimation
     • Examples
     • Conclusions

  3. Permeability estimation
     [Figures: heterogeneity of a porous medium; schematic of a 9-spot problem]
     • Permeability of the porous medium is a necessary input for simulation of reservoir and groundwater systems
     • Approaches:
       --- local: core measurement + correlation modeling
       --- global: inverse modeling
     • The inverse modeling approach: estimate permeability using flow data (pressure, concentration, ...)

  4. Challenges of the inverse problem
     • Ill-posedness
       --- existence?
       --- uniqueness?
       --- continuous dependence of solutions on measurements (stability)? identifiability?
     • Implicit objective function (functional)
     • Non-linearity
     • Complex direct simulation and high computational cost
     • Uncertainties
     • Heterogeneity of permeability
     • Multiscale nature of permeability

  5. Methods for inverse problems
     Solution procedure: regularize the objective, then optimize.
     • Regularization
       --- function specification (discretization)
       --- Tikhonov regularization
       --- future information
       --- iterative regularization (conjugate gradient, EM method)
     • Objective criteria
       --- minimum discrepancy principle
       --- minimum least-squares error
       --- minimum total absolute error
       --- minimum maximum absolute error
     • Deterministic gradient methods (sensitivity and/or adjoint problems need to be solved)
       --- Newton's methods
       --- steepest descent
       --- conjugate gradient
     • Non-deterministic heuristic methods
       --- greedy search
       --- simulated annealing
       --- genetic and evolutionary algorithms
     • Statistical estimators
       --- maximum likelihood
       --- maximum a posteriori
       --- maximum entropy
       --- minimum mean square error
     • Bayesian approach (our approach)
       --- prior distribution as regularization
       --- importance/rejection sampling
       --- MCMC

  6. Fundamentals of Bayesian statistics
     • Bayes' formula:
       $$P(\theta \mid Y) = \frac{P(Y \mid \theta)\, P(\theta)}{P(Y)}, \qquad P(Y \mid \theta)\, P(\theta) = P(Y, \theta)$$
     • Bayesian statistics: prior + evidence => posterior probability
     • Bayesian estimation:
       $$\underbrace{p(\theta \mid Y)}_{\text{posterior}} \propto \underbrace{p(Y \mid \theta)}_{\text{likelihood}}\ \underbrace{p(\theta)}_{\text{prior}}$$
     • A hierarchical formulation:
       $$p(\theta, \lambda \mid Y) \propto p(Y \mid \theta, \lambda)\, p(\theta \mid \lambda)\, p(\lambda)$$
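To make the proportionality concrete, here is a minimal numerical sketch (not from the slides) that evaluates an unnormalized posterior on a grid for a scalar parameter; the Gaussian prior, the data values, and the identity forward model F(θ) = θ are all hypothetical choices:

```python
import numpy as np

# Minimal illustration of Bayes' formula on a grid: posterior ∝ likelihood × prior.
theta = np.linspace(-5.0, 5.0, 1001)            # grid over the scalar parameter
dtheta = theta[1] - theta[0]
log_prior = -0.5 * theta**2                     # N(0, 1) prior, unnormalized
Y, sigma = np.array([1.2, 0.8, 1.1]), 0.5       # hypothetical data, known noise std
log_lik = -0.5 * ((Y[:, None] - theta)**2).sum(axis=0) / sigma**2
log_post = log_prior + log_lik
posterior = np.exp(log_post - log_post.max())   # stabilize before exponentiating
posterior /= posterior.sum() * dtheta           # normalize to integrate to 1
print("posterior mean:", (theta * posterior).sum() * dtheta)
```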

  7. Fundamentals of Bayesian statistics (cont...)
     The prior
     • unconditional belief about the unknown, before the related observations
     • role of a prior distribution
       --- incorporate prior information
       --- regularize the likelihood
     • may be "improper"
     • techniques of prior modeling
       --- conjugate prior
       --- reference prior
       --- spatial statistics models
     The likelihood
     • conditional probability of the data (Y) given the parameter (θ)
     • for the noise model $Y = F(\theta) + \omega$, $\omega \sim$ i.i.d. $N(0, \sigma^2)$:
       $$p(Y \mid \theta) \propto \exp\Big\{ -\frac{1}{2\sigma^2}\, (Y - F(\theta))^T (Y - F(\theta)) \Big\}$$
     References:
     P.M. Lee, Bayesian Statistics: An Introduction, 1st edition, Oxford University Press, 1989.
     P. Congdon, Bayesian Statistical Modelling, John Wiley & Sons, New York, 2001.
     C.P. Robert, The Bayesian Choice: From Decision-Theoretic Foundations to Computational Implementation, 2nd edition, Springer-Verlag, New York, 2001.
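The Gaussian likelihood above translates directly into code; a minimal sketch, where the linear forward model F standing in for the flow simulator is a hypothetical placeholder:

```python
import numpy as np

def log_likelihood(theta, Y, F, sigma):
    """Gaussian log-likelihood for Y = F(theta) + omega, omega ~ N(0, sigma^2 I)."""
    residual = Y - F(theta)
    return -0.5 * residual @ residual / sigma**2

# Hypothetical linear forward model standing in for the flow simulator.
F = lambda theta: np.array([theta.sum(), theta[0] - theta[1]])
print(log_likelihood(np.array([1.0, 0.5]), np.array([1.4, 0.6]), F, sigma=0.1))
```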

  8. Markov Random Field
     Markov process:
     $$p(\theta_{k+1} \mid \theta_0, \theta_1, \ldots, \theta_k) = p(\theta_{k+1} \mid \theta_k)$$
     Pair-wise Markov Random Field (MRF): each site θ_i interacts only with its neighbors θ_j, j ~ i:
     $$p(\theta) \propto \exp\Big\{ -\sum_{j \sim i} W_{ij}\, \Phi(\theta_i - \theta_j) \Big\}$$
     [Figure: a site θ_i and its neighbors]
     References:
     J. Møller (editor), Spatial Statistics and Computational Methods, Springer-Verlag, New York, 2003.
     J. Besag and P.J. Green, Spatial statistics and Bayesian computation, Journal of the Royal Statistical Society, Series B, 55:25-37, 1993.
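A minimal sketch of evaluating this pairwise MRF log-prior on a 2-D grid, assuming a quadratic potential Φ(d) = d² and uniform weights W_ij = 1 on the 4-nearest-neighbor graph (both choices are assumptions, not taken from the slide):

```python
import numpy as np

def mrf_log_prior(theta):
    """Unnormalized log p(theta) = -sum_{j~i} W_ij * Phi(theta_i - theta_j)
    on a 2-D grid with 4-nearest neighbors, W_ij = 1 and Phi(d) = d**2."""
    horiz = theta[:, 1:] - theta[:, :-1]   # differences to right-hand neighbors
    vert = theta[1:, :] - theta[:-1, :]    # differences to lower neighbors
    return -(np.sum(horiz**2) + np.sum(vert**2))

theta = np.random.randn(8, 8)              # e.g. log-permeability on an 8x8 grid
print(mrf_log_prior(theta))                # smoother fields score higher
```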

  9. Markov chain Monte Carlo (MCMC) simulation
     Monte Carlo principle:
     1. draw an i.i.d. set of samples {x^(i)}, i = 1:N, from a target density p(x)
     2. approximate the target density with the empirical point-mass function
        $$p_N(x) = \frac{1}{N} \sum_{i=1}^{N} \delta_{x^{(i)}}(x)$$
     3. approximate the integral (expectation) I(f) with the tractable sum
        $$I_N(f) = \frac{1}{N} \sum_{i=1}^{N} f(x^{(i)}) \xrightarrow{\;N \to \infty\;} I(f) = \int_X f(x)\, p(x)\, dx$$
     References:
     J. Besag, P. Green, D. Higdon and K. Mengersen, Bayesian computation and stochastic systems, Statistical Science, vol. 10, pp. 3-41, 1995.
     P. Bremaud, Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues, Springer-Verlag, New York, 1999.
     C. Andrieu, N. de Freitas, A. Doucet and M.I. Jordan, An introduction to MCMC for machine learning, Machine Learning, vol. 50, pp. 5-43, 2003.
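A toy demonstration of the Monte Carlo principle, with p = N(0, 1) and f(x) = x² chosen so the exact answer I(f) = 1 is known:

```python
import numpy as np

# Monte Carlo estimate of I(f) = E_p[f(x)], with p = N(0, 1) and f(x) = x**2.
# I_N(f) = (1/N) * sum_i f(x_i) converges to the true value 1 as N grows.
rng = np.random.default_rng(0)
for N in (10**2, 10**4, 10**6):
    x = rng.standard_normal(N)     # i.i.d. samples from the target density p
    print(N, np.mean(x**2))
```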

  10. Markov chain Monte Carlo (cont...)
      Markov chain: for a random variable x ∈ X = {x_1, x_2, ..., x_s}, the stochastic process {x^i} is called a Markov chain if
      $$p(x^i \mid x^{i-1}, \ldots, x^1) = p(x^i \mid x^{i-1})$$
      MCMC sampler: an irreducible and aperiodic Markov chain that has the target distribution as its invariant distribution.
      Detailed balance:
      $$p(x^i)\, q(x^{i-1} \mid x^i) = p(x^{i-1})\, q(x^i \mid x^{i-1})$$

  11. Metropolis-Hastings (MH) algorithm
      Initialize x^0
      For i = 0 : N-1
        sample u ~ U(0, 1)
        sample x* ~ q(x* | x^i)
        if u < A(x^i, x*) = min{1, p(x*) q(x^i | x*) / (p(x^i) q(x* | x^i))}
          x^{i+1} = x*
        else
          x^{i+1} = x^i
      Some properties of MH:
      (a) The normalizing constant of the target is not required.
      (b) It is easy to simulate independent chains in parallel.
      (c) The choice of proposal distribution is crucial.
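A runnable sketch of the MH loop above, specialized to a symmetric Gaussian random-walk proposal so the q-ratio cancels; the 2-D Gaussian target is a toy stand-in for a real posterior:

```python
import numpy as np

def metropolis_hastings(log_p, x0, n_steps, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings. log_p is the unnormalized log-target;
    a symmetric proposal makes the q-ratio cancel in the acceptance rule."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        x_star = x + step * rng.standard_normal(x.size)   # x* ~ q(x* | x^i)
        # accept with probability min{1, p(x*)/p(x^i)}
        if np.log(rng.uniform()) < log_p(x_star) - log_p(x):
            x = x_star
        chain[i] = x
    return chain

# Toy target: standard 2-D Gaussian (unnormalized log-density).
chain = metropolis_hastings(lambda x: -0.5 * x @ x, x0=[3.0, -3.0], n_steps=5000)
print(chain[1000:].mean(axis=0))   # ≈ [0, 0] after burn-in
```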

  12. Extensions of the MH sampler
      (a) Independent sampler: q(x* | x^i) = q(x*).
      (b) Metropolis algorithm: symmetric proposal, q(x* | x^i) = q(x^i | x*).
      (c) Cycles of kernels: sweep through blocks of x, each with its own proposal
          Initialize x^0
          For i = 0 : N-1
            - sample the block x^{i+1}_{b1} according to proposal distribution q_1(x^{i+1}_{b1} | x^{i+1}_{-b1}, x^i_{b1}) and target distribution p(x^{i+1}_{b1} | x^{i+1}_{-b1})
            - sample the block x^{i+1}_{b2} likewise, and so on through block x^{i+1}_{bs}
      (d) Gibbs sampler: cycle through the components, drawing each from its full conditional
          Initialize x^0
          For i = 0 : N-1
            For j = 1 : m
              sample x^{i+1}_j ~ p(x_j | x^{i+1}_{1:j-1}, x^i_{j+1:m})
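A minimal Gibbs-sampler sketch for a bivariate Gaussian target with correlation ρ, chosen because both full conditionals are available in closed form (a toy example, not the paper's application):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_steps, rng=None):
    """Gibbs sampling for (x1, x2) ~ N(0, [[1, rho], [rho, 1]]).
    Each full conditional is x_j | x_k ~ N(rho * x_k, 1 - rho**2)."""
    rng = rng or np.random.default_rng()
    x1 = x2 = 0.0
    chain = np.empty((n_steps, 2))
    sd = np.sqrt(1.0 - rho**2)
    for i in range(n_steps):
        x1 = rng.normal(rho * x2, sd)   # sample x1 | x2 from its full conditional
        x2 = rng.normal(rho * x1, sd)   # sample x2 | x1 from its full conditional
        chain[i] = (x1, x2)
    return chain

chain = gibbs_bivariate_normal(rho=0.8, n_steps=5000)
print(np.corrcoef(chain[1000:].T)[0, 1])   # ≈ 0.8
```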

  13. A Bayesian computational framework for inverse problems
      $$p(\theta, \sigma \mid Y) \propto p(Y \mid \theta, \sigma)\, p(\theta)\, p(\sigma)$$
      Likelihood computation:
      • computational mathematics
      • reduced-order modeling (POD)
      • parallel computation
      Prior distribution modeling:
      • conjugate priors
      • physical constraints
      • spatial statistical models
      Posterior exploration (Markov chain Monte Carlo):
      • Metropolis-Hastings sampler (symmetric or independent)
      • hybrid & cyclic MCMC
      • sequential MCMC

  14. Permeability estimation
      Estimate permeability using dynamic well data (pressure, concentration, ...). The forward model is miscible flow through porous media:
      $$u = -\frac{K(x)}{\mu(c)} \nabla p, \qquad \nabla \cdot u = q \quad \text{in } \Omega \times (0, T),$$
      $$\phi \frac{\partial c}{\partial t} + \nabla \cdot (cu) - \nabla \cdot (D \nabla c) = \hat{c}\, q \quad \text{in } \Omega \times (0, T),$$
      with viscosity and dispersion models
      $$\mu(c) = \mu(0)\, [1 - c + M^{1/4} c]^{-4},$$
      $$D = \phi \{ \alpha_m I + \|u\| [\alpha_l E(u) + \alpha_t E^{\perp}(u)] \}, \qquad E(u) = \frac{u \otimes u}{\|u\|^2}, \quad E^{\perp}(u) = I - E(u),$$
      boundary conditions $u \cdot n = 0$ and $D \nabla c \cdot n = 0$ on $\partial\Omega \times (0, T)$, and initial condition $c(x, 0) = c_0(x)$ in $\Omega$.

  15. Solution to the forward problem
      Stabilized mixed finite element formulation for Darcy flow:
      $$\Big(\frac{\mu}{K} u, w\Big) - (p, \nabla \cdot w) + (\nabla \cdot u, \phi) + \frac{1}{2}\Big(\frac{\mu}{K} w + \nabla \phi,\ \frac{K}{\mu}\Big(\frac{\mu}{K} u + \nabla p\Big)\Big) = (q, \phi)$$
      SUPG-stabilized formulation for the transport equation:
      $$\int_\Omega \phi \frac{\partial c}{\partial t} w\, d\Omega + \int_\Omega (u \cdot \nabla c)\, w\, d\Omega + \int_\Omega q c w\, d\Omega + \int_\Omega D \nabla c \cdot \nabla w\, d\Omega$$
      $$+ \sum_{e=1}^{n_{el}} \int_{\Omega_e} \tau_e\, u \cdot \nabla w \Big( \phi \frac{\partial c}{\partial t} + u \cdot \nabla c + qc \Big) d\Omega = \int_\Omega \hat{c} q w\, d\Omega + \sum_{e=1}^{n_{el}} \int_{\Omega_e} \tau_e\, u \cdot \nabla w\, \hat{c} q\, d\Omega$$
      with the stabilization parameter
      $$\tau_e = \frac{h_e}{2 \|u\|} \min\Big( \frac{Pe_e}{3}, 1.0 \Big), \qquad Pe_e = \frac{\|u\|^3 h_e}{2\, u^T D u}$$
      References:
      A. Masud and T.J.R. Hughes, A stabilized mixed finite element method for Darcy flow, Computer Methods in Applied Mechanics and Engineering 191 (2002) 4341-4370.
      R.G. Sanabria Castro, S.M.C. Malta, A.F.D. Loula and L. Landau, Numerical analysis of space-time finite element formulations for miscible displacements, Computational Geosciences 5 (2001) 301-330.

  16. Hierarchical Markov Model for multiscale parameter estimation
      A multi-layer representation of a heterogeneous parameter:
      • root layer, s = 0
      • Markov chain, s = 1
      • Markov field, s = 2
      • ...
      [Figure: tree of layers from the coarse root to the fine-scale field]

  17. Hierarchical Markov Model for multiscale parameter estimation (cont.)
      • The coarse-scale (r) permeability distribution:
        $$p(\Theta^r \mid Y) \propto \exp\Big\{ -\frac{1}{2\sigma^2}\, [F(\Theta^r) - Y]^T [F(\Theta^r) - Y] \Big\}$$
      • The fine-scale (s) permeability distribution:
        $$p(\Theta^s, \Theta^r \mid Y) \propto p(Y \mid \Theta^s, \Theta^r)\, p(\Theta^s, \Theta^r) \propto p(Y \mid \Theta^s)\, p(\Theta^s \mid \Theta^r)\, p(\Theta^r)$$
        $$p(\theta^s_i \mid \Theta^r, \Theta^s_{-i}) \propto p(\theta^s_i \mid \theta^r_{p_i}, \theta^s_{\sim i})$$
      • Markov assumption of multiscale models:
        $$p(\Theta^s \mid \Theta^{s-1}, \Theta^{s-2}, \ldots, \Theta^0) = p(\Theta^s \mid \Theta^{s-1})$$
        $$p(\Theta^{s_1}, \Theta^{s_2}, \ldots, \Theta^{s_S} \mid Y) \propto p(Y \mid \Theta^{s_S})\, p(\Theta^{s_S} \mid \Theta^{s_{S-1}}) \cdots p(\Theta^{s_2} \mid \Theta^{s_1})\, p(\Theta^{s_1})$$
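A sketch of how such a factorized multiscale log-posterior could be evaluated, assuming (as an illustration only, not the paper's exact model) nearest-neighbor upsampling between layers and Gaussian parent-child transitions:

```python
import numpy as np

def upsample(theta_coarse):
    """Nearest-neighbor refinement: each coarse cell spawns a 2x2 block of children."""
    return np.kron(theta_coarse, np.ones((2, 2)))

def hmt_log_posterior(layers, Y, F, sigma, tau):
    """log p(Theta^{s_1},...,Theta^{s_S} | Y) up to a constant: a data term on the
    finest layer plus Gaussian parent-child transition terms between layers."""
    resid = Y - F(layers[-1])                   # likelihood uses the finest scale only
    logp = -0.5 * resid @ resid / sigma**2
    for coarse, fine in zip(layers[:-1], layers[1:]):
        diff = fine - upsample(coarse)          # child layer deviates from its parent
        logp += -0.5 * np.sum(diff**2) / tau**2
    return logp

# Hypothetical three-layer field (4x4 -> 8x8 -> 16x16) and observation operator.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((4, 4))]
for _ in range(2):
    layers.append(upsample(layers[-1])
                  + 0.1 * rng.standard_normal((layers[-1].shape[0] * 2,) * 2))
F = lambda theta: theta.ravel()[:24]            # stand-in for the flow simulator
Y = F(layers[-1]) + 0.05 * rng.standard_normal(24)
print(hmt_log_posterior(layers, Y, F, sigma=0.05, tau=0.1))
```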

  18. A hybrid MCMC algorithm
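The slide shows the algorithm only as a figure. As a hedged illustration of the general idea of a hybrid sampler, cycling a local component-wise kernel with an occasional global move, the following sketch should not be read as the authors' exact scheme:

```python
import numpy as np

def hybrid_mcmc(log_post, x0, n_sweeps, local_step=0.1, global_step=0.5, rng=None):
    """Hybrid MCMC sketch: each sweep performs single-site MH updates
    (a Gibbs-style scan), then one global random-walk MH move."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_sweeps):
        for j in range(x.size):                     # local, component-wise kernel
            prop = x.copy()
            prop[j] += local_step * rng.standard_normal()
            if np.log(rng.uniform()) < log_post(prop) - log_post(x):
                x = prop
        prop = x + global_step * rng.standard_normal(x.size)   # global kernel
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
    return x
```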

  19. Example I: a smooth permeability field
      True permeability: K(x, y) = exp(0.5(x - 4.0) + 0.5(y - 4.0))
      [Figures: true permeability; MAP estimates on 32x32, 16x16 and 8x8 grids, each with data at 24 locations and with data at 8 locations]
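For reference, the true field is easy to tabulate (a minimal sketch; the [0, 8] x [0, 8] domain is an assumption inferred from the centering at (4, 4)):

```python
import numpy as np

# Tabulate the true permeability K(x, y) = exp(0.5(x - 4) + 0.5(y - 4))
# on a 32x32 grid over [0, 8] x [0, 8] (domain size assumed).
x = np.linspace(0.0, 8.0, 32)
X, Y = np.meshgrid(x, x)
K = np.exp(0.5 * (X - 4.0) + 0.5 * (Y - 4.0))
print(K.min(), K.max())   # ranges from e^{-4} to e^{4}
```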

  20. Example II: a permeability field with random discontinuities
      Sub-case I: correlation function ρ(r) = e^{-r²}
      [Figures: the true permeability field; well distribution (pressure data); the coarse-scale estimate (4x4)]

  21. A permeability field with random discontinuities (cont.)
      [Figures: 3 realizations from the fine-scale distribution; sample mean of the fine-scale distribution]

  22. A permeability field with random discontinuities (cont.)
      Sub-case II: correlation function ρ(r) = e^{-|r|}
      [Figures: the true permeability field; 2 realizations from the fine-scale distribution]

  23. Conclusions
      • The MRF prior is suitable for estimating smooth permeability fields
      • The hybrid MCMC algorithm is efficient in exploring the high-dimensional posterior state space
      • The HMT model provides the flexibility to model multiscale permeability
      • Permeability fields sampled from the posterior distribution provide a reliable basis for scenario analysis
