
Global convergence for iterative aggregation-disaggregation method


Presentation Transcript


  1. Global convergence for iterative aggregation-disaggregation method
  Ivana Pultarova, Czech Technical University in Prague, Czech Republic
  WORKSHOP ERCIM 2004

  2. We consider an N × N column stochastic irreducible matrix B that is not cyclic. The Problem is to find the stationary probability vector x_p, ||x_p|| = 1, B x_p = x_p. We explore the iterative aggregation-disaggregation (IAD) method. Notation: • || . || denotes the 1-norm. • Spectral decomposition of B: B = P + Z, P^2 = P, ZP = PZ = 0, r(Z) < 1 (spectral radius). • Number of aggregation groups n, n < N. • Restriction matrix R of type n × N; its elements are 0 or 1 and all column sums are 1. • Prolongation N × n matrix S(x) for any positive vector x: (S(x))_ij := x_i iff (R)_ji = 1, then divide all elements in each column by the column sum. • Projection N × N matrix P(x) = S(x) R.
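The matrices R, S(x), and P(x) defined above can be sketched in NumPy as follows (a minimal illustration; the group assignment is a hypothetical example, not from the slides):

```python
import numpy as np

def restriction(groups, N):
    """Restriction matrix R (n x N): R[j, i] = 1 iff state i is in group j."""
    n = max(groups) + 1
    R = np.zeros((n, N))
    for i, g in enumerate(groups):
        R[g, i] = 1.0
    return R

def prolongation(x, R):
    """Prolongation S(x) (N x n): copy x_i into the pattern of R^T,
    then normalize each column so it sums to 1."""
    S = R.T * x[:, None]
    return S / S.sum(axis=0)

def projection(x, R):
    """Projection P(x) = S(x) R (N x N)."""
    return prolongation(x, R) @ R
```

Since R S(x) = I on the coarse level, P(x) is idempotent, and P(x) x = x for the positive vector x used to build it.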

  3. Iterative aggregation-disaggregation algorithm: step 1. Take a first approximation x_0 in R^N, x_0 > 0, and set k = 0. step 2. Solve R B^s S(x_k) z_{k+1} = z_{k+1}, z_{k+1} in R^n, ||z_{k+1}|| = 1, for an appropriate integer s (solution on the coarse level). step 3. Disaggregate: x_{k+1,1} = S(x_k) z_{k+1}. step 4. Compute x_{k+1} = T^t x_{k+1,1} for an appropriate integer t (smoothing on the fine level); T can be block Jacobi, block Gauss-Seidel, T = B, … step 5. Test whether ||x_{k+1} - x_k|| is less than a prescribed tolerance. If not, increase k and go to step 2. If yes, take x_{k+1} as the solution of the Problem.
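The five steps above can be sketched as follows, assuming T = B as the smoother and an exact eigensolve on the coarse level (the function name and the test matrix are illustrative, not from the slides):

```python
import numpy as np

def iad(B, R, x0, s=1, t=1, tol=1e-10, maxit=500):
    """IAD iteration for a column stochastic B.
    R is the n x N restriction matrix, x0 a positive first approximation."""
    Bs = np.linalg.matrix_power(B, s)
    x = x0 / x0.sum()                      # step 1
    for _ in range(maxit):
        S = R.T * x[:, None]
        S = S / S.sum(axis=0)              # prolongation S(x_k)
        A = R @ Bs @ S                     # aggregated n x n matrix
        w, V = np.linalg.eig(A)            # step 2: coarse-level solve
        z = np.real(V[:, np.argmax(np.real(w))])
        z = np.abs(z) / np.abs(z).sum()    # ||z|| = 1 in the 1-norm
        x_new = S @ z                      # step 3: disaggregate
        for _ in range(t):                 # step 4: smoothing with T = B
            x_new = B @ x_new
        if np.abs(x_new - x).sum() < tol:  # step 5: 1-norm stopping test
            return x_new
        x = x_new
    return x
```

For an irreducible, aperiodic B the iterates converge locally; the global behaviour is exactly what the later slides address.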

  4. Proposition. The computed approximations x_k, k = 1, 2, …, satisfy x_{k+1} - x_p = J(x_k)(x_k - x_p), where J(x) = T^t (I - P(x) Z^s)^{-1} (I - P(x)). If t > s and T = B, then J(x) = B^{t-s} K(x), where K(x) = B^s (I - P(x) + P(x) K(x)).

  5. Example. Let T = B, s = t = 1, and let B be the 3 × 3 matrix displayed on the slide (not reproduced in this transcript), for which r(Z) = 0. For x_0 = [1/12, 10/12, 1/12]^T we get r(J(x_0)) = 2.1429, while for x_0 = [10/12, 1/12, 1/12]^T we get r(J(x_0)) = 0.0732.

  6. Global convergence. When B^s ≥ η P with η > 0 and T = B^s, then for the global core matrix V corresponding to B^s, J(x) = V^t (I - P(x) V)^{-1} (I - P(x)) = V^{t-1} K(x) and ||K(x)|| ≤ ||V|| ||I - P(x)|| + ||V|| ||P(x)|| ||K(x)||, thus ||K(x)|| < 2(1 - η)/η. So a sufficient condition for global convergence of IAD is 1 > η > 2/3, i.e. B^s > (2/3) P. In the opposite case, the value of t in J(x) = B^{t-1} K(x) = V^{t-1} K(x) can easily be estimated to ensure ||J(x)|| < 1: t ≥ log(η/2) / log(1 - η).
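The bound for t on this slide can be evaluated directly; a small sketch (the η values in the note below are assumed examples):

```python
import math

def smoothing_steps(eta):
    """Smallest integer t with t >= log(eta/2) / log(1 - eta),
    which guarantees ||J(x)|| < 1 for 0 < eta < 1."""
    return math.ceil(math.log(eta / 2.0) / math.log(1.0 - eta))
```

For η > 2/3 a single smoothing step already suffices, matching the sufficient condition B^s > (2/3) P, while for small η the required t grows quickly (for instance η = 0.1 gives t = 29).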

  7. We propose a method for achieving B^s ≥ η P, η > 0. Let I - B = M - W be a regular splitting, M^{-1} ≥ 0, W ≥ 0. Then the solution of the Problem is identical with the solution of (M - W) x = 0. Denoting M x = y and setting y := y/||y||, we have (I - W M^{-1}) y = 0, where W M^{-1} is a column stochastic matrix. Thus the solution of the Problem is transformed to the solution of W M^{-1} y = y, ||y|| = 1, for any regular splitting M, W of the matrix I - B.
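This transformation can be checked numerically. A sketch with an assumed 3 × 3 column stochastic B and a simple diagonal regular splitting (not yet the partitioning of the next slide):

```python
import numpy as np

B = np.array([[0.5, 0.2, 0.3],      # assumed column stochastic example
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
I = np.eye(3)
M = I - np.diag(np.diag(B)) / 2     # M^{-1} >= 0 (diagonal, positive)
W = M - (I - B)                     # W = B - diag(B)/2 >= 0
C = W @ np.linalg.inv(M)            # the transformed matrix W M^{-1}
# C is column stochastic: 1^T W = 1^T M because 1^T B = 1^T
print(np.allclose(C.sum(axis=0), 1.0))
```

If x solves B x = x, then y = M x (normalized) solves W M^{-1} y = y, and the stationary vector of B is recovered as x = M^{-1} y.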

  8. The choice of M, W: algorithm of a good partitioning. • step 1. For an appropriate threshold τ, 0 < τ < 1, use Tarjan's parametrized algorithm to find the irreducible diagonal blocks B_i, i = 1, …, n, of the (properly) permuted matrix B (we now suppose B := permuted B). • step 2. Compose the block diagonal matrix B_Tar from the blocks B_i, i = 1, …, n, and set M = I - B_Tar/2 and W = M - (I - B).
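Step 2 of this partitioning can be sketched as follows, assuming the diagonal blocks have already been identified (here two hypothetical groups {0, 1} and {2}; Tarjan's parametrized algorithm itself is not shown):

```python
import numpy as np

B = np.array([[0.5, 0.2, 0.3],      # assumed example, already permuted
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
groups = [[0, 1], [2]]              # blocks found by Tarjan's algorithm
B_tar = np.zeros_like(B)
for g in groups:
    ix = np.ix_(g, g)
    B_tar[ix] = B[ix]               # keep only the within-block entries
M = np.eye(3) - B_tar / 2
W = M - (np.eye(3) - B)             # W = B - B_tar/2 >= 0
C = W @ np.linalg.inv(M)            # W M^{-1}, column stochastic
# (W M^{-1})^s becomes positive for small s (s <= n + 1, here n = 2)
print((np.linalg.matrix_power(C, 3) > 0).all())
```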

  9. Properties of W M^{-1} obtained by the algorithm of a good partitioning: • W M^{-1} is irreducible. • The diagonal blocks of W M^{-1} are positive. • (W M^{-1})^s is positive for "low" s, s ≤ n + 1, where n is the number of aggregation groups. • The second largest eigenvalue of the aggregated n × n matrix is approximately the same as that of W M^{-1}.

  10. Conclusion. • To achieve global convergence of the IAD method, we consider the original Problem in the form W M^{-1} y = y, where I - B = M - W and M, W is a (weak) regular splitting of I - B constructed by "the algorithm of a good partitioning". • When n is the number of aggregation groups, (W M^{-1})^{n+1} is positive (> η). The matrix W M^{-1} can be stored in factored form. • The number of smoothing steps is given by t ≥ n + log(η/2)/log(1 - η). • The computational complexity equals that of IAD with block Jacobi smoothing steps, but global convergence is ensured here.

  11. Example 1. The matrix B is composed of n × n blocks of size m. We set ε = 0.01, δ = 0.01. Then B := B + C (10% of the entries of C are 0.1) and B is normalized.

  12. Example 1 (plots). a) IAD for B and W M^{-1}. b) Power method for B and W M^{-1}. c) Convergence rate for IAD and the power method.

  13. References.
  I. Marek and P. Mayer, Convergence analysis of an aggregation/disaggregation iterative method for computation of stationary probability vectors, Numerical Linear Algebra with Applications, 5, pp. 253-274, 1998.
  I. Marek and P. Mayer, Convergence theory of some classes of iterative aggregation-disaggregation methods for computing stationary probability vectors of stochastic matrices, Linear Algebra and Its Applications, 363, pp. 177-200, 2003.
  G. W. Stewart, Introduction to the Numerical Solution of Markov Chains, 1994.
  A. Berman and R. J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1979.
  G. H. Golub and C. F. Van Loan, Matrix Computations, 1996.
  etc.
