
Numerical simulation of solute transport in heterogeneous porous media



  1. Numerical simulation of solute transport in heterogeneous porous media. A. Beaudoin, J.-R. de Dreuzy, J. Erhel. Workshop on High Performance Computing at LAMSIN, ENIT-LAMSIN, Tunisia, November 27 - December 1, 2006.

  2. Physical model. 2D heterogeneous permeability field; stochastic model Y = ln(K) with a prescribed correlation function.
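Fields of this kind (Y = ln(K) Gaussian with a prescribed correlation function) are often generated spectrally, which is also why slide 16 mentions FFTW for the parallel generation. A minimal serial sketch with NumPy follows; the spectral filter shape and all parameter names are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def lognormal_permeability(nx, ny, corr_len, sigma2, seed=0):
    """Generate K = exp(Y), Y Gaussian with variance sigma2 and a
    smooth correlation of length corr_len, by FFT filtering of white
    noise (illustrative spectrum, not the talk's exact covariance)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx)[:, None]
    ky = np.fft.fftfreq(ny)[None, :]
    # power-spectrum-like filter decaying beyond 1/corr_len
    psd = (1.0 + (2 * np.pi * corr_len) ** 2 * (kx**2 + ky**2)) ** -1.5
    noise = rng.standard_normal((nx, ny))
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real
    field *= np.sqrt(sigma2) / field.std()  # rescale to variance sigma2
    return np.exp(field)                    # K = exp(Y)

K = lognormal_permeability(64, 64, corr_len=8.0, sigma2=1.0)
```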

  3. Flow model (steady-state case). Darcy equation: v = -K grad(h). Mass conservation: div(v) = 0. Boundary conditions: fixed head on two opposite sides, null flux on the other two.

  4. Transport model. Advection-dispersion equation: dC/dt + div(v C - d grad C) = f. Boundary conditions: C = 0 on one fixed-head side and on both null-flux sides; dC/dn = 0 on the other fixed-head side. Initial condition: injection.
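A one-dimensional explicit sketch of the advection-dispersion update may help fix ideas; it uses upwind advection, central dispersion, and periodic boundaries for brevity, so it is only an illustration of the equation on slide 4, not the talk's 2D scheme or boundary conditions:

```python
import numpy as np

def ade_step(c, v, d, dx, dt):
    """One explicit step of dC/dt + div(v C - d grad C) = 0 in 1D:
    upwind for advection (assumes v > 0), central for dispersion,
    periodic boundaries via np.roll."""
    adv = v * (c - np.roll(c, 1)) / dx
    diff = d * (np.roll(c, 1) - 2.0 * c + np.roll(c, -1)) / dx**2
    return c + dt * (diff - adv)

# point injection, then advect and disperse
c = np.zeros(100)
c[50] = 1.0
for _ in range(100):
    c = ade_step(c, v=1.0, d=0.1, dx=1.0, dt=0.1)
```

With periodic boundaries both fluxes telescope, so total mass is conserved exactly; the chosen dt satisfies both the CFL and diffusive stability limits.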

  5. Numerical flow simulations. Finite volume method on a regular mesh; N = Nx × Ny cells. Large sparse structured matrix A of order N with 5 entries per row. Linear system Ax = b.
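The 5-entries-per-row structure of A comes from the 5-point stencil on the regular mesh. A SciPy sketch of such an assembly is below; it uses unit face transmissivities (uniform K) and no-flow boundaries, so it illustrates the matrix structure rather than the talk's exact discretization:

```python
import numpy as np
import scipy.sparse as sp

def five_point_matrix(nx, ny):
    """Assemble an N x N (N = nx*ny) 5-point finite-volume matrix on a
    regular grid, cells numbered row by row. Uniform unit
    transmissivities and no-flow boundaries, for illustration only."""
    N = nx * ny
    main = np.zeros(N)
    east = np.zeros(N)   # coupling to cell p+1 (zero at right edge)
    north = np.zeros(N)  # coupling to cell p+nx (zero at top edge)
    for j in range(ny):
        for i in range(nx):
            p = j * nx + i
            if i + 1 < nx:
                east[p] = -1.0; main[p] += 1.0; main[p + 1] += 1.0
            if j + 1 < ny:
                north[p] = -1.0; main[p] += 1.0; main[p + nx] += 1.0
    return sp.diags(
        [main, east[:-1], east[:-1], north[:-nx], north[:-nx]],
        [0, 1, -1, nx, -nx], format="csr")
```

Each row couples a cell to itself and its (at most four) neighbours, giving the large sparse structured matrix of slide 5.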

  6. Numerical transport simulation. Particle tracker: injection of many independent particles; bilinear interpolation of the velocity v.
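The tracker's two ingredients, bilinear interpolation of v and independent particle advection, can be sketched as follows (unit grid spacing, explicit Euler steps, advection only; the dispersive random-walk part and the talk's actual scheme are omitted):

```python
import numpy as np

def bilinear(v, x, y):
    """Bilinearly interpolate a node-based field v[j, i] at point
    (x, y), grid spacing 1 (illustrative stand-in for the tracker's
    velocity interpolation)."""
    i, j = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * v[j, i] + fx * (1 - fy) * v[j, i + 1]
            + (1 - fx) * fy * v[j + 1, i] + fx * fy * v[j + 1, i + 1])

def track(vx, vy, x, y, dt, nsteps):
    """Advect one particle with explicit Euler steps; each particle is
    independent, which is what makes the tracker easy to parallelize."""
    for _ in range(nsteps):
        x += dt * bilinear(vx, x, y)
        y += dt * bilinear(vy, x, y)
    return x, y
```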

  7. Examples of simulations with σ = 2, for Pe = 10 and Pe = ∞.

  8. Sparse direct solver. Memory size and CPU time with PSPASES. Theory: NZ(L) = O(N log N); time = O(N^1.5). Variance = 1, number of processors = 2.

  9. Multigrid sparse solver. CPU time with HYPRE/AMG. Variance = 1, number of processors = 4, residual = 10^-8. Linear complexity of BoomerAMG.

  10. Transport with particle tracker. CPU time. Variance = 1, number of processors = 4. Linear complexity of the particle tracker.

  11. Sparse linear solvers: impact of permeability variance. Matrix orders N = 10^6 and N = 16 × 10^6. PSPASES and BoomerAMG are independent of the variance; BoomerAMG is faster than PSPASES with 4 processors.

  12. Particle tracker: impact of permeability variance and correlation length. Number of particles injected = 1000, Péclet number = , number of processors P = 64, matrix order N = 134.22 × 10^6. Transport CPU time increases with the variance and is only slightly sensitive to the correlation length.

  13. Particle tracker: impact of Péclet number and correlation length. Number of particles injected = 2000, variance = 9.0, number of processors P = 64, matrix order N = 134.22 × 10^6. Transport CPU time increases for small Péclet numbers and is only slightly sensitive to the correlation length.

  14. Parallel architecture. Distributed memory: 2 nodes of 32 bi-processors (AMD Opteron, 2 GHz, with 2 GB of RAM).

  15. Parallel algorithms and data distribution. Domain decomposition into slices; ghost cells at the boundaries.
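The slice decomposition with ghost cells can be mimicked serially; on the cluster each copy below would be an MPI message between the processors owning neighbouring slices (a sketch of the idea, not the talk's code):

```python
import numpy as np

def exchange_ghosts(slices):
    """Fill each slice's ghost columns (column 0 and column -1) with
    the neighbouring slice's adjacent interior columns, as a halo
    exchange would between processors owning vertical slices."""
    for left, right in zip(slices[:-1], slices[1:]):
        left[:, -1] = right[:, 1]   # left slice's right ghost column
        right[:, 0] = left[:, -2]   # right slice's left ghost column

# two 3-row slices whose interiors hold their owner's "rank"
a = np.zeros((3, 5)); a[:, 1:-1] = 1.0
b = np.zeros((3, 5)); b[:, 1:-1] = 2.0
exchange_ghosts([a, b])
```

After the exchange, each slice can apply the 5-point stencil to all its interior cells without further communication.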

  16. Parallel algorithms and data distribution. Parallel matrix generation using FFTW; parallel sparse solver; parallel particle tracker.

  17. Direct and multigrid solvers: parallel CPU time. Matrix orders N = 10^6 and N = 4 × 10^6; variance = 9.

  18. Direct and multigrid solvers: speed-up. Matrix orders N = 10^6 and N = 4 × 10^6.

  19. Particle tracker: parallel CPU time.

  20. Flow and transport computations: summary.
  • PSPASES is efficient for small matrices.
  • HYPRE-AMG and PSPASES are not sensitive to the variance.
  • HYPRE-AMG is efficient for large matrices.
  • HYPRE-AMG and PSPASES are scalable.
  • The particle tracker is sensitive to the Péclet number.
  • The particle tracker is efficient.
  • Transport requires less CPU time than flow for large matrices.
