


  1. Wireless Embedded & Networking System Laboratory. Chapter 12.5-12.6: Solving Linear Systems. Parallel Programming in C with MPI and OpenMP by M.J. Quinn. 2nd December, 2013. Ashish Rauniyar, Stu Id 20136126, IT Convergence Engineering, Kumoh National Institute of Technology

  2. Sparse Systems • Gaussian elimination not well-suited for sparse systems. • Coefficient matrix gradually fills with nonzero elements • Result: • Increases storage requirements • Increases total operation count

  3. Iterative Methods • Iterative method: an algorithm that generates a sequence of approximations to the solution's value. • Require less storage than direct methods. • Because they perform no computations on zero elements, they can save a great deal of computation on sparse systems.

  4. Jacobi Method • The values of the elements of vector x at iteration k+1 depend upon the values of vector x at iteration k. • Gauss-Seidel method: use the latest available value of each xi.
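The update formulas themselves were on the slide image and did not survive the transcript; the standard ones for a system Ax = b with nonzero diagonal entries a_ii are, in LaTeX notation:

Jacobi:        x_i^{(k+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j \ne i} a_{ij} x_j^{(k)} \Big)

Gauss-Seidel:  x_i^{(k+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \Big)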

  5. Jacobi Method Algorithm
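The algorithm figure is missing from the transcript. Below is a minimal serial sketch in C, assuming a dense row-major matrix and a simple max-norm stopping test; the tolerance, names, and loop structure are illustrative, not Quinn's code:

/* A minimal serial sketch of the Jacobi iteration for Ax = b.
   Assumes a dense row-major matrix a of size n x n. */
#include <math.h>
#include <stdlib.h>

void jacobi(int n, const double *a, const double *b,
            double *x, double tol, int max_iter)
{
    double *xnew = malloc(n * sizeof(double));
    for (int iter = 0; iter < max_iter; iter++) {
        double diff = 0.0;
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = 0; j < n; j++)
                if (j != i)
                    sum += a[i*n + j] * x[j];   /* uses old x only */
            xnew[i] = (b[i] - sum) / a[i*n + i];
            diff = fmax(diff, fabs(xnew[i] - x[i]));
        }
        for (int i = 0; i < n; i++)
            x[i] = xnew[i];                      /* commit the sweep */
        if (diff < tol) break;                   /* update is small: stop */
    }
    free(xnew);
}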

  6. Jacobi Method Example
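The example slide is an image and its content is not in the transcript. As a stand-in (my numbers, not the book's), consider the diagonally dominant system

4x_1 + x_2 = 5
x_1 + 3x_2 = 4,

whose exact solution is x = (1, 1). Starting from x^{(0)} = (0, 0), the Jacobi updates give x^{(1)} = (1.25, 1.333), x^{(2)} ≈ (0.917, 0.917), x^{(3)} ≈ (1.021, 1.028): the iterates oscillate around and converge toward (1, 1).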

  7. Jacobi Method Example Contd

  8. Jacobi Method Iterations

  9. Gauss-Seidel Method
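The Gauss-Seidel slides are images and their content is not in the transcript. The method differs from the Jacobi sketch above only in that each updated value is used as soon as it is computed; a sketch of the modified inner sweep, with the same illustrative names as before:

/* Gauss-Seidel sweep: overwrite x[i] in place, so rows later in the
   same sweep already see the new values (no xnew array is needed). */
for (int i = 0; i < n; i++) {
    double sum = 0.0;
    for (int j = 0; j < n; j++)
        if (j != i)
            sum += a[i*n + j] * x[j];  /* mixes new (j < i) and old (j > i) values */
    x[i] = (b[i] - sum) / a[i*n + i];
}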

  10. Gauss-Seidel Method Contd

  11. Rate of Convergence • Even when the Jacobi and Gauss-Seidel methods converge on a solution, the rate of convergence is often too slow to make them practical. • We will move on to an iterative method with much faster convergence.

  12. Conjugate Gradient Method • A is positive definite if for every nonzero vector x and its transpose xT, the product xTAx > 0. • If A is symmetric and positive definite, then the function q(x) = ½xTAx − xTb has a unique minimizer that is the solution to Ax = b. • Conjugate gradient is an iterative method that solves Ax = b by minimizing q(x). • If rounding error is ignored, the conjugate gradient method is guaranteed to converge on a solution in n or fewer iterations.

  13. Conjugate Gradient Method Algorithm • Each iteration of the conjugate gradient method has the form x(t) = x(t−1) + s(t) d(t). • The new value of x is a function of the old value of vector x, a scalar step size s(t), and a direction vector d(t). • Before the first iteration, the values of x(0), d(0), and g(0) must be set. • Every iteration t computes x(t) in four steps.

  14. Conjugate Gradient Method Algorithm Steps
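The four steps were on the slide image and are missing here. The sketch below follows the standard residual-based CG recurrences; Quinn states the steps in terms of the gradient g(t) = Ax(t−1) − b, which is the negative of the residual r used below, and the variable names and stopping test are mine:

/* A minimal serial sketch of conjugate gradient for symmetric
   positive definite A (dense, row-major). */
#include <stdlib.h>

static void matvec(int n, const double *a, const double *v, double *out)
{
    for (int i = 0; i < n; i++) {
        out[i] = 0.0;
        for (int j = 0; j < n; j++)
            out[i] += a[i*n + j] * v[j];
    }
}

static double dot(int n, const double *u, const double *v)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += u[i] * v[i];
    return s;
}

void conjugate_gradient(int n, const double *a, const double *b, double *x)
{
    double *r = malloc(n * sizeof(double));  /* residual b - Ax  */
    double *d = malloc(n * sizeof(double));  /* search direction */
    double *q = malloc(n * sizeof(double));  /* holds A*d        */

    matvec(n, a, x, q);
    for (int i = 0; i < n; i++) { r[i] = b[i] - q[i]; d[i] = r[i]; }
    double rr = dot(n, r, r);

    for (int t = 0; t < n && rr > 1e-20; t++) {
        matvec(n, a, d, q);                 /* step 1: q = A d(t)          */
        double s = rr / dot(n, d, q);       /* step 2: step size s(t)      */
        for (int i = 0; i < n; i++) {
            x[i] += s * d[i];               /* step 3: x(t) = x(t-1)+s d   */
            r[i] -= s * q[i];               /*         update the residual */
        }
        double rr_new = dot(n, r, r);
        double beta = rr_new / rr;          /* step 4: new direction d(t+1)*/
        for (int i = 0; i < n; i++)
            d[i] = r[i] + beta * d[i];
        rr = rr_new;
    }
    free(r); free(d); free(q);
}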

  15. Conjugate Gradient Method Example

  16. Conjugate Gradient Method Example Contd • The result, x(2), is a better approximation to the system's solution than x(1) and x(0).

  17. Conjugate Gradient Convergence • Finds the value of the n-dimensional solution in at most n iterations (in exact arithmetic).

  18. Parallel Implementation of Conjugate Gradient Method • If we choose a row-wise block-striped decomposition of A and replicate all vectors, the multiplication of A and a vector may be performed without any communication, but an all-gather communication is needed to replicate the result vector. • If we choose a block decomposition of the vectors, an all-gather communication is needed before the matrix-vector multiplication takes place, but no communication is needed to replicate the blocks of the result vector. • The overall time complexity of both is O(n²w/p + n log p).
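A sketch of the first option in C with MPI, assuming a dense local block of rows and caller-prefilled counts/displs arrays (counts[i] = BLOCK_SIZE(i,p,n), displs[i] = BLOCK_LOW(i,p,n)); the BLOCK_* macros follow the style of Quinn's utility header but are defined here, and all names are illustrative:

#include <mpi.h>

#define BLOCK_LOW(id,p,n)  ((id)*(n)/(p))
#define BLOCK_HIGH(id,p,n) (BLOCK_LOW((id)+1,p,n)-1)
#define BLOCK_SIZE(id,p,n) (BLOCK_HIGH(id,p,n)-BLOCK_LOW(id,p,n)+1)

/* Each process multiplies its block of rows by the replicated vector
   v with no communication, writing into its slice of result; a single
   all-gather then replicates the full result vector on every process. */
void matvec_replicated(int n, int id, int p, const double *a_local,
                       const double *v, double *result,
                       const int *counts, const int *displs)
{
    int my_rows = BLOCK_SIZE(id, p, n);
    double *mine = result + BLOCK_LOW(id, p, n);

    for (int i = 0; i < my_rows; i++) {
        mine[i] = 0.0;
        for (int j = 0; j < n; j++)
            mine[i] += a_local[i*n + j] * v[j];
    }
    /* In-place all-gather: each process contributes its own slice. */
    MPI_Allgatherv(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                   result, (int *)counts, (int *)displs,
                   MPI_DOUBLE, MPI_COMM_WORLD);
}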

  19. Conjugate Gradient Computations • Matrix-vector multiplication. • Inner product (dot product). • Matrix-vector multiplication has the higher time complexity. • Must modify the previously developed algorithm to account for sparse matrices. • Whether the parallel algorithm is faster depends on the size of the problem, the number of available processors, the speed of the processors, and the speed of the communication network.
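The inner products in each CG iteration parallelize as a local partial sum followed by an all-reduce; a minimal sketch under the same assumed block distribution of the vectors:

#include <mpi.h>

/* Each process reduces over its own block of the vectors, then an
   all-reduce sums the partial results so that every process holds
   the global dot product (needed for the step size and beta). */
double parallel_dot(int local_n, const double *u, const double *v)
{
    double local = 0.0, global;
    for (int i = 0; i < local_n; i++)
        local += u[i] * v[i];
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return global;
}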

  20. THANK YOU
