
Stencil Pattern


Presentation Transcript


  1. Stencil Pattern ITCS 4/5145 Parallel Computing, UNC-Charlotte, B. Wilkinson, StencilPattern.ppt, Oct 14, 2014

  2. Stencil Pattern A stencil describes a 2- or 3-dimensional layout of processes, with each process able to communicate only with its neighbors. Appears in simulating many real-life situations. Examples: solving partial differential equations using discretized methods, which may be for:
  • Modeling engineering structures
  • Weather forecasting (see intro to course, slides1a-1)
  • Particle dynamics simulations
  • Modeling chemical and biological structures

  3. Stencil pattern On each iteration, each node communicates with its neighbors to get their stored computed values. [Figure: mesh of compute nodes with two-way connections between neighboring nodes, plus a source/sink node.]

  4. (Iterative synchronous) stencil pattern Often globally synchronous and iterative:
  • Processes compute and communicate only with their neighbors, exchanging results
  • Check the termination condition
  • Repeat, or stop

  5. Application example of stencil pattern Solving Laplace’s Equation (already seen in an assignment). Solve for f over the two-dimensional x-y space. For a computer solution, finite difference methods are appropriate. The two-dimensional solution space is “discretized” into a large number of solution points.
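For reference, Laplace’s equation in two dimensions, and the five-point finite-difference approximation commonly used to discretize it (with Δ the spacing between solution points), are:

\[
\nabla^2 f \;=\; \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} \;=\; 0
\]

\[
f(x,y) \;\approx\; \tfrac{1}{4}\bigl( f(x-\Delta,y) + f(x+\Delta,y) + f(x,y-\Delta) + f(x,y+\Delta) \bigr)
\]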

  6. Question: Do you recognize this?

  7. Heat Distribution Problem (Steady State Heat Equation) Finding the static distribution of heat in a space, here a 2-dimensional space but could be 3-dimensional. An area has known temperatures along each of its borders (boundary conditions). Find the temperature distribution within. Each point is taken to be the average of the four neighboring points.

  8. Natural ordering For convenience, edges are represented by points, but these have fixed values and are used in computing the internal values. [Figure: mesh points labeled in natural (row-by-row) ordering.]

  9. Relationship with a General System of Linear Equations Using natural ordering, the ith point is computed from the ith equation, which is a linear equation with five unknowns (except for equations involving boundary points). In general form, the ith equation becomes:
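The equations themselves are not reproduced in this transcript. For the five-point discretization of Laplace’s equation with natural ordering, assuming N points per row of the mesh, they take the standard form

\[
x_i \;=\; \tfrac{1}{4}\bigl( x_{i-1} + x_{i+1} + x_{i-N} + x_{i+N} \bigr)
\]

and, rearranged into the general form of a linear equation,

\[
-\,x_{i-N} \;-\; x_{i-1} \;+\; 4x_i \;-\; x_{i+1} \;-\; x_{i+N} \;=\; 0 .
\]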

  10. Question: Will a Jacobi iterative method converge?
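The slide leaves this open, but a standard argument runs as follows: in the system above, every equation has a diagonal coefficient of 4 and at most four off-diagonal coefficients of -1, so

\[
|a_{ii}| \;=\; 4 \;\ge\; \sum_{j \neq i} |a_{ij}| ,
\]

with strict inequality for equations whose points are adjacent to the boundary (there, known boundary values replace some of the unknowns). The coefficient matrix is therefore irreducibly diagonally dominant, which is sufficient for the Jacobi iteration to converge.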

  11. Sequential Code Using a fixed number of iterations:
  for (iteration = 0; iteration < limit; iteration++) {
     for (i = 1; i < n; i++)
        for (j = 1; j < n; j++)
           g[i][j] = 0.25*(h[i-1][j]+h[i+1][j]+h[i][j-1]+h[i][j+1]);
     for (i = 1; i < n; i++)       /* update points */
        for (j = 1; j < n; j++)
           h[i][j] = g[i][j];
  }
  using the original numbering system (n x n array). Earlier we saw this can be improved by using a single 3-dimensional array.
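A minimal self-contained version of the same computation is sketched below, assuming a small fixed mesh size, a fixed iteration limit, and arbitrary example boundary temperatures (none of these choices come from the original slides; the arrays h and g play the same roles as above):

#include <stdio.h>

#define N     8       /* mesh is N x N; rows/columns 0 and N-1 hold boundary values */
#define LIMIT 1000    /* fixed number of iterations, as on the slide */

int main(void) {
    static double h[N][N], g[N][N];
    int i, j, iteration;

    /* boundary conditions: top edge held at 100, other edges at 0 (example values) */
    for (i = 0; i < N; i++) {
        h[0][i] = 100.0;
        h[N-1][i] = h[i][0] = h[i][N-1] = 0.0;
    }

    for (iteration = 0; iteration < LIMIT; iteration++) {
        for (i = 1; i < N-1; i++)        /* new value = average of the four neighbors */
            for (j = 1; j < N-1; j++)
                g[i][j] = 0.25*(h[i-1][j]+h[i+1][j]+h[i][j-1]+h[i][j+1]);
        for (i = 1; i < N-1; i++)        /* update points */
            for (j = 1; j < N-1; j++)
                h[i][j] = g[i][j];
    }

    for (i = 0; i < N; i++) {            /* print the final temperature distribution */
        for (j = 0; j < N; j++)
            printf("%6.1f ", h[i][j]);
        printf("\n");
    }
    return 0;
}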

  12. Algorithmic ways to improve the performance of computational stencil applications

  13. Partially Synchronous Computations -- Computations in which individual processes operate without needing to synchronize with other processes on every iteration. This is an important idea because synchronizing processes significantly slows the computation and is a major cause of reduced performance in parallel programs.

  14. Heat Distribution Problem Re-visited Making Partially Synchronous Uses previous iteration results h[][] for the next iteration, g[][]:
  forall (i = 1; i < n; i++)
     forall (j = 1; j < n; j++) {
        g[i][j] = 0.25*(h[i-1][j]+h[i+1][j]+h[i][j-1]+h[i][j+1]);
     }
  Synchronization point at the end of each iteration. The waiting can be reduced by not forcing synchronization at each iteration, i.e. by allowing processes to move on to the next iteration before all data points have been computed; the method then uses data not only from the last iteration but possibly from earlier iterations, and becomes an “asynchronous iterative method.”

  15. Asynchronous Iterative Method Convergence Conditions Mathematical conditions for convergence may be more strict. Each process may not be allowed to use any previous iteration values if the method is to converge.
  Chaotic Relaxation A form of asynchronous iterative method introduced by Chazan and Miranker (1969) in which the conditions are stated as: “there must be a fixed positive integer s such that, in carrying out the evaluation of the ith iterate, a process cannot make use of any value of the components of the jth iterate if j < i - s” (Baudet, 1978).

  16. Gauss-Seidel Relaxation Uses some newly computed values to compute other values in that iteration.

  17. Gauss-Seidel Iteration Formula The iteration formula (shown on the slide) uses a superscript to indicate the iteration. With natural ordering of the unknowns, it reduces to a form in which, at the kth iteration, two of the four values (those before the ith element) are taken from the kth iteration and two values (those after the ith element) are taken from the (k-1)th iteration. In this form it does not readily parallelize.
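The formulas themselves are not reproduced in this transcript. The standard Gauss-Seidel iteration for a linear system Ax = b, and its reduced form for the five-point Laplace discretization with natural ordering (N points per row of the mesh), are

\[
x_i^{(k)} \;=\; \frac{1}{a_{ii}}\Bigl( b_i \;-\; \sum_{j<i} a_{ij}\,x_j^{(k)} \;-\; \sum_{j>i} a_{ij}\,x_j^{(k-1)} \Bigr)
\]

\[
x_i^{(k)} \;=\; \tfrac{1}{4}\bigl( x_{i-N}^{(k)} + x_{i-1}^{(k)} + x_{i+1}^{(k-1)} + x_{i+N}^{(k-1)} \bigr)
\]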

  18. Red-Black Ordering First, black points computed. Next, red points computed. Black points computed simultaneously, and red points computed simultaneously.

  19. Red-Black Parallel Code
  // compute red points
  forall (i = 1; i < n; i++)
     forall (j = 1; j < n; j++)
        if ((i + j) % 2 == 0)
           f[i][j] = 0.25*(f[i-1][j] + f[i][j-1] + f[i+1][j] + f[i][j+1]);
  // now compute black points
  forall (i = 1; i < n; i++)
     forall (j = 1; j < n; j++)
        if ((i + j) % 2 != 0)
           f[i][j] = 0.25*(f[i-1][j] + f[i][j-1] + f[i+1][j] + f[i][j+1]);
  // repeat
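The forall construct above is pseudocode. A sketch of the same two half-sweeps using OpenMP parallel loops follows; the function name, the n x n array layout with fixed boundary rows/columns, and the use of OpenMP are assumptions for illustration, not part of the original slides:

#include <omp.h>

/* One red-black iteration over an n x n mesh f, whose first and last rows
   and columns hold fixed boundary values.  Within each half-sweep the
   updated points do not depend on one another, so the loop iterations can
   run in parallel; the implicit barrier at the end of each parallel loop
   provides the synchronization between the red and black phases. */
void red_black_iteration(int n, double f[n][n]) {
    int i, j;

    /* red points: i + j even */
    #pragma omp parallel for private(j)
    for (i = 1; i < n-1; i++)
        for (j = 1; j < n-1; j++)
            if ((i + j) % 2 == 0)
                f[i][j] = 0.25*(f[i-1][j] + f[i][j-1] + f[i+1][j] + f[i][j+1]);

    /* black points: i + j odd, using the freshly computed red values */
    #pragma omp parallel for private(j)
    for (i = 1; i < n-1; i++)
        for (j = 1; j < n-1; j++)
            if ((i + j) % 2 != 0)
                f[i][j] = 0.25*(f[i-1][j] + f[i][j-1] + f[i+1][j] + f[i][j+1]);
}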

  20. Multigrid Method First, a coarse grid of points is used. With these points, the iteration process will start to converge quickly. At some stage, the number of points is increased to include the points of the coarse grid plus extra points between them. Initial values of the extra points are found by interpolation. Computation continues with this finer grid. The grid can be made finer and finer as the computation proceeds, or the computation can alternate between fine and coarse grids. Coarser grids take distant effects into account more quickly and provide a good starting point for the next finer grid.
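As a concrete illustration of the “found by interpolation” step, the sketch below refines an nc x nc coarse grid to an nf x nf fine grid with nf = 2*nc - 1, so every coarse point coincides with a fine point and the new points in between are filled by averaging their neighbors. The function name, grid sizes, and interpolation scheme are illustrative assumptions, not taken from the slides:

/* Copy coarse-grid values onto the fine grid and interpolate the new points. */
void prolongate(int nc, double coarse[nc][nc], double fine[2*nc-1][2*nc-1]) {
    int nf = 2*nc - 1;
    int i, j;

    /* coarse points coincide with every second fine point */
    for (i = 0; i < nc; i++)
        for (j = 0; j < nc; j++)
            fine[2*i][2*j] = coarse[i][j];

    /* new points along coarse-grid rows: average of left and right neighbors */
    for (i = 0; i < nf; i += 2)
        for (j = 1; j < nf; j += 2)
            fine[i][j] = 0.5*(fine[i][j-1] + fine[i][j+1]);

    /* new points on the remaining (odd) rows: average of the points above and below */
    for (i = 1; i < nf; i += 2)
        for (j = 0; j < nf; j++)
            fine[i][j] = 0.5*(fine[i-1][j] + fine[i+1][j]);
}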

  21. Multigrid processor allocation

  22. Questions
