Wireless Embedded & Networking System Laboratory
Chapter 12.5–12.6: Solving Linear Systems
Parallel Programming in C with MPI and OpenMP, by M. J. Quinn
2 December 2013
Ashish Rauniyar, Student ID 20136126
IT Convergence Engineering, Kumoh National Institute of Technology
Sparse Systems
• Gaussian elimination is not well suited to sparse systems.
• The coefficient matrix gradually fills in with nonzero elements.
• Result: increased storage requirements and an increased total operation count.
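To see why storage matters, sparse solvers keep only the nonzero elements, for example in compressed sparse row (CSR) form. A minimal sketch (the struct and function names here are illustrative, not from the text):

```c
#include <stddef.h>

/* Compressed Sparse Row (CSR) storage: only nonzeros are kept.
   row_ptr[i] .. row_ptr[i+1]-1 index the nonzeros of row i. */
typedef struct {
    size_t n;              /* matrix dimension                      */
    size_t nnz;            /* number of stored nonzeros             */
    const size_t *row_ptr; /* length n+1                            */
    const size_t *col;     /* length nnz: column of each value      */
    const double *val;     /* length nnz: the nonzero values        */
} csr_matrix;

/* y = A*x for a CSR matrix: O(nnz) work, no operations on zeros. */
void csr_matvec(const csr_matrix *A, const double *x, double *y)
{
    for (size_t i = 0; i < A->n; i++) {
        double s = 0.0;
        for (size_t k = A->row_ptr[i]; k < A->row_ptr[i + 1]; k++)
            s += A->val[k] * x[A->col[k]];
        y[i] = s;
    }
}
```

A dense n-by-n matrix needs n² values; CSR needs only nnz values plus index arrays, which is why iterative methods that only multiply by A (never factor it) preserve sparsity, while elimination destroys it.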
Iterative Methods
• An iterative method is an algorithm that generates a series of successively better approximations to the solution.
• Iterative methods require less storage than direct methods.
• Because they avoid computations on zero elements, they can also save a great deal of arithmetic.
Jacobi Method
• In the Jacobi method, the values of the elements of vector x at iteration k+1 depend only on the values of vector x at iteration k: x_i^(k+1) = (b_i − Σ_{j≠i} a_ij x_j^(k)) / a_ii.
• The Gauss-Seidel method instead uses the latest available value of each x_i.
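The Jacobi update can be sketched in plain serial C (function names are illustrative). Note that every component of the new iterate depends only on the previous iterate, which is what makes Jacobi straightforward to parallelize:

```c
#include <math.h>
#include <stdlib.h>
#include <string.h>

/* One Jacobi sweep on the n-by-n system a*x = b (a stored row-major):
   x_new[i] = (b[i] - sum_{j != i} a[i][j]*x[j]) / a[i][i].
   Returns the largest componentwise change, useful as a stopping test.
   (Gauss-Seidel would update x[i] in place, using the newest values.) */
double jacobi_sweep(size_t n, const double *a, const double *b,
                    const double *x, double *x_new)
{
    double max_diff = 0.0;
    for (size_t i = 0; i < n; i++) {
        double s = b[i];
        for (size_t j = 0; j < n; j++)
            if (j != i)
                s -= a[i * n + j] * x[j];
        x_new[i] = s / a[i * n + i];
        double d = fabs(x_new[i] - x[i]);
        if (d > max_diff)
            max_diff = d;
    }
    return max_diff;
}

/* Repeat sweeps until the change falls below tol (or max_iter sweeps).
   Returns the number of iterations performed. */
size_t jacobi_solve(size_t n, const double *a, const double *b,
                    double *x, double tol, size_t max_iter)
{
    double *x_new = malloc(n * sizeof *x_new);
    size_t t;
    for (t = 0; t < max_iter; t++) {
        double diff = jacobi_sweep(n, a, b, x, x_new);
        memcpy(x, x_new, n * sizeof *x);
        if (diff < tol) { t++; break; }
    }
    free(x_new);
    return t;
}
```

Jacobi is only guaranteed to converge for certain matrices (e.g. strictly diagonally dominant ones), which foreshadows the convergence discussion below.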
Rate of Convergence
• Even when the Jacobi and Gauss-Seidel methods converge on a solution, the rate of convergence is often too slow to make them practical.
• We therefore move on to an iterative method with much faster convergence.
Conjugate Gradient Method
• A is positive definite if for every nonzero vector x and its transpose xᵀ, the product xᵀAx > 0.
• If A is symmetric and positive definite, then the function q(x) = ½ xᵀAx − xᵀb has a unique minimizer, and that minimizer is the solution to Ax = b.
• The conjugate gradient method is an iterative method that solves Ax = b by minimizing q(x).
• If rounding error is ignored, the conjugate gradient method is guaranteed to converge on the solution in n or fewer iterations.
Conjugate Gradient Method Algorithm
• Each iteration of the conjugate gradient method has the form x(t) = x(t−1) + s(t) d(t).
• The new value of x is a function of the old value of x, a scalar step size s(t), and a direction vector d(t).
• Before the first iteration, the values of x(0), d(0), and g(0) must be set.
• Every iteration t calculates x(t) in four steps.
Conjugate Gradient Method Example (contd.)
The result, x(2), is a better approximation to the system's solution than x(1) or x(0).
Conjugate Gradient Convergence
• Finds the value of the n-dimensional solution in at most n iterations (in exact arithmetic).
Parallel Implementation of the Conjugate Gradient Method
• If we choose a row-wise block-striped decomposition of A and replicate all vectors, the multiplication of A by a vector may be performed without any communication, but an all-gather communication is needed to replicate the result vector.
• If we choose a block decomposition of the vectors, an all-gather communication is needed before the matrix-vector multiplication takes place, but no communication is needed afterward to replicate the blocks of the result vector.
• The overall time complexity of both approaches is O(n²w/p + n log p).
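The row-wise block-striped matrix-vector product can be sketched in plain C using Quinn-style block decomposition macros. This is a single-address-space simulation: in a real MPI program each call with a different `id` would run on a different process holding only its block of rows, and an all-gather (e.g. `MPI_Allgatherv`) would replicate the result vector afterward.

```c
#include <stddef.h>

/* Quinn-style block decomposition macros: the range of rows owned by
   process id out of p processes, for an n-row matrix. */
#define BLOCK_LOW(id, p, n)  ((id) * (n) / (p))
#define BLOCK_HIGH(id, p, n) (BLOCK_LOW((id) + 1, (p), (n)) - 1)

/* Compute the rows of y = A*x owned by process `id`.  The vector x is
   replicated on every process, so no communication is needed during
   the product itself; only the scattered pieces of y must be gathered. */
void blockstriped_matvec(int id, int p, size_t n,
                         const double *a, const double *x, double *y)
{
    for (size_t i = (size_t)BLOCK_LOW(id, p, n);
         i <= (size_t)BLOCK_HIGH(id, p, n); i++) {
        double s = 0.0;
        for (size_t j = 0; j < n; j++)
            s += a[i * n + j] * x[j];
        y[i] = s;
    }
}
```

Each process does about n²/p of the multiply-add work, and the all-gather of the n-element result contributes the n log p communication term in the complexity above.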
Conjugate Gradient Computations
• The two key computations are matrix-vector multiplication and the inner product (dot product).
• Matrix-vector multiplication has the higher time complexity.
• The previously developed algorithm must be modified to account for sparse matrices.
• Whether the parallel algorithm is faster depends on the size of the problem, the number of available processors, the speed of the processors, and the speed of the communication network.