
Parallel Solution of 2-D Heat Equation Using Laplace Finite Difference


Presentation Transcript


  1. Parallel Solution of 2-D Heat Equation Using Laplace Finite Difference Presented by Valerie Spencer. Mentored by Jim Kohl, Oak Ridge National Laboratory. RAM Internship, June 3 – August 16, 2002. This research was performed under the Research Alliance for Minorities Program administered through the Computer Science and Mathematics Division, Oak Ridge National Laboratory. This Program is sponsored by the Mathematical, Information, and Computational Sciences Division; Office of Advanced Scientific Computing Research; U.S. Department of Energy. Oak Ridge National Laboratory is managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725. This work has been authored by a contractor of the U.S. Government under contract DE-AC05-00OR22725. Accordingly, the U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes.

  2. Table of Contents • Purpose • Background Work • Problem Statement • Linux • Parallel Virtual Machine • Two-Part PVM System • Using PVM • Two Models of PVM Codes • Parallel Program Used • Jacobi Iteration Method • Parallel Cluster Used • Results • Conclusion

  3. Purpose The research involves solving a multi-dimensional conduction heat transfer problem on a Linux cluster using Parallel Virtual Machine (PVM). The temperature distribution in an X × Y wall can be determined by solving the Laplace equation. The larger the wall gets, the more computing power is needed.

  4. Background Work • Lab experiment at Alabama A&M University - determine the temperature distribution inside a 1.0 m x 1.0 m medium using the finite difference method • Heat equation (2-D Laplace equation in array notation): (∂²T/∂x²) + (∂²T/∂y²) = 0 at each node (I,J) • Finite difference form: T(I,J) = (TE + TW + TN + TS) / 4, where • TE = T(I+1,J) • TW = T(I-1,J) • TN = T(I,J+1) • TS = T(I,J-1)
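As a concrete illustration of this update rule, here is a minimal serial Jacobi sweep for the Laplace equation in Fortran 77 (the deck's own language); the grid size N, the boundary temperature, and the tolerance TOL are illustrative assumptions, not values from the original experiment:

      PROGRAM LAPLAC
C     Jacobi sweeps for the 2-D Laplace equation on an N x N grid.
C     N, TOL, and the boundary temperature are assumed values.
      INTEGER N
      PARAMETER (N = 50)
      DOUBLE PRECISION T(N,N), TNEW(N,N), DIFF, TOL
      PARAMETER (TOL = 1.0D-6)
      INTEGER I, J, ITER
C     Interior starts at 0; the top edge is held at 100 degrees.
      DO 10 J = 1, N
         DO 10 I = 1, N
            T(I,J) = 0.0D0
   10 CONTINUE
      DO 20 I = 1, N
         T(I,N) = 100.0D0
   20 CONTINUE
C     Sweep until the largest single-sweep change is below TOL.
      DO 50 ITER = 1, 100000
         DIFF = 0.0D0
         DO 30 J = 2, N-1
            DO 30 I = 2, N-1
               TNEW(I,J) = (T(I+1,J) + T(I-1,J)
     &                    + T(I,J+1) + T(I,J-1)) / 4.0D0
               DIFF = MAX(DIFF, ABS(TNEW(I,J) - T(I,J)))
   30    CONTINUE
         DO 40 J = 2, N-1
            DO 40 I = 2, N-1
               T(I,J) = TNEW(I,J)
   40    CONTINUE
         IF (DIFF .LT. TOL) GO TO 60
   50 CONTINUE
   60 PRINT *, 'converged after', ITER, 'sweeps'
      END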

  5. Background Cont’d • Fortran program built with Microsoft Developer Studio software • Visual image of the temperature distribution produced with Tecplot software

  6. Problem Statement • Decrease experiment cycle time for solving the heat equation over large problems • Solution: parallelize the heat equation simulation program and run it on multiple computers in a cluster

  7. Linux • a sophisticated multitasking virtual memory operating system • directly controls the hardware • provides: • true multitasking • virtual memory • shared libraries • demand loading • shared copy-on-write executables • TCP/IP networking • file systems

  8. Parallel Virtual Machine (PVM) • software that permits a heterogeneous collection of Unix and/or NT computers connected by a network to be used as a single large parallel computer

  9. Two-Part PVM System • the daemon is a special-purpose process that runs on behalf of the system to handle all incoming and outgoing messages; it is represented by “pvmd3” or “pvmd”, and any user with a valid login id can install and execute it on a machine • a library of routines that allows the computers to interact “in parallel” (resource and task management such as “addhost” and “spawn”, pack and send messages…)
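A minimal sketch of the library side, assuming a pvmd daemon is already running; the host name 'torc1' is a hypothetical placeholder, not one taken from this work:

      PROGRAM ENROLL
C     Enroll with the local pvmd daemon via the libfpvm3 routines,
C     report the task id, and add one host to the virtual machine.
C     The host name 'torc1' is a placeholder for illustration.
      INCLUDE 'fpvm3.h'
      INTEGER MYTID, INFO
      CALL PVMFMYTID( MYTID )
      PRINT *, 'enrolled in PVM with task id', MYTID
      CALL PVMFADDHOST( 'torc1', INFO )
      IF (INFO .LT. 0) PRINT *, 'pvmfaddhost failed:', INFO
      CALL PVMFEXIT( INFO )
      END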

  10. Using PVM The problem must be able to be broken down into several tasks • functional parallelism - breaking the application into different tasks that perform different functions • data parallelism - several similar tasks, each solving over a different part of the data (sketched below)
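For the data-parallel case, a small sketch of one common way to split a grid into contiguous column blocks, one per task; N, NPROC, and the remainder-handling choice are assumptions for illustration, not the decomposition used in this work:

      PROGRAM DECOMP
C     Data-parallel decomposition: each of NPROC tasks owns a
C     contiguous block of columns of an N-column grid.
C     N and NPROC are assumed values for illustration.
      INTEGER N, NPROC, K, JCOLS, JFIRST, JLAST
      PARAMETER (N = 100, NPROC = 4)
      JCOLS = N / NPROC
      DO 10 K = 0, NPROC-1
         JFIRST = K * JCOLS + 1
         JLAST  = JFIRST + JCOLS - 1
C        The last task also takes any leftover columns.
         IF (K .EQ. NPROC-1) JLAST = N
         PRINT *, 'task', K, 'owns columns', JFIRST, 'to', JLAST
   10 CONTINUE
      END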

  11. Two Models of PVM Codes • master/worker model - the master task creates all other tasks that are designed to work on the problem, then coordinates the input of initial data to each task, and collects the output of results from each task • hostless model - the initial task spawns off copies of itself as tasks and then starts working on its portion of the problem while the created tasks immediately begin working on their portion
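A minimal master-side sketch of the master/worker model using PVM's Fortran interface; the 'worker' executable name, the worker count, and the message tags 1 and 2 are arbitrary assumptions rather than details from this project:

      PROGRAM MASTER
C     Master/worker sketch: spawn NPROC copies of a hypothetical
C     'worker' executable, send each its task number (tag 1), and
C     collect one result from each (tag 2).  Tags are arbitrary.
      INCLUDE 'fpvm3.h'
      INTEGER NPROC
      PARAMETER (NPROC = 4)
      INTEGER MYTID, NUMT, TIDS(0:NPROC-1), INFO, BUFID, K
      DOUBLE PRECISION RESULT
      CALL PVMFMYTID( MYTID )
      CALL PVMFSPAWN( 'worker', PVMDEFAULT, '*', NPROC, TIDS, NUMT )
      IF (NUMT .LT. NPROC) PRINT *, 'only spawned', NUMT, 'workers'
      DO 10 K = 0, NUMT-1
         CALL PVMFINITSEND( PVMDEFAULT, INFO )
         CALL PVMFPACK( INTEGER4, K, 1, 1, INFO )
         CALL PVMFSEND( TIDS(K), 1, INFO )
   10 CONTINUE
      DO 20 K = 0, NUMT-1
         CALL PVMFRECV( -1, 2, BUFID )
         CALL PVMFUNPACK( REAL8, RESULT, 1, 1, INFO )
         PRINT *, 'collected result', RESULT
   20 CONTINUE
      CALL PVMFEXIT( INFO )
      END

Each worker would mirror this with pvmfrecv/pvmfunpack on tag 1 and pvmfsend on tag 2 back to the master.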

  12. Jacobi Iteration Method • Based on solving for every variable locally with respect to the other variables • One iteration of the method corresponds to solving for every variable once • Simple parallel data structure • Each process exchanges its boundary columns with neighboring processes
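One way this neighbor exchange might look in PVM's Fortran interface, as a hedged sketch rather than the program actually used here; the subroutine name, the tags 10 and 11, and the assumption that each task stores its column block with one ghost column on either side are all illustrative:

      SUBROUTINE XCHNGE( T, N, JFIRST, JLAST, LEFT, RIGHT )
C     One halo exchange per Jacobi sweep: send this task's first
C     and last owned columns to its left/right neighbor tasks and
C     receive their boundary columns into the ghost columns at
C     JFIRST-1 and JLAST+1.  LEFT/RIGHT are neighbor task ids, or
C     -1 at the ends of the grid.  Tags 10/11 are arbitrary.
      INCLUDE 'fpvm3.h'
      INTEGER N, JFIRST, JLAST, LEFT, RIGHT, INFO, BUFID
      DOUBLE PRECISION T(N, *)
      IF (LEFT .GE. 0) THEN
         CALL PVMFINITSEND( PVMDEFAULT, INFO )
         CALL PVMFPACK( REAL8, T(1,JFIRST), N, 1, INFO )
         CALL PVMFSEND( LEFT, 10, INFO )
      END IF
      IF (RIGHT .GE. 0) THEN
         CALL PVMFINITSEND( PVMDEFAULT, INFO )
         CALL PVMFPACK( REAL8, T(1,JLAST), N, 1, INFO )
         CALL PVMFSEND( RIGHT, 11, INFO )
      END IF
      IF (RIGHT .GE. 0) THEN
         CALL PVMFRECV( RIGHT, 10, BUFID )
         CALL PVMFUNPACK( REAL8, T(1,JLAST+1), N, 1, INFO )
      END IF
      IF (LEFT .GE. 0) THEN
         CALL PVMFRECV( LEFT, 11, BUFID )
         CALL PVMFUNPACK( REAL8, T(1,JFIRST-1), N, 1, INFO )
      END IF
      END

Because PVM sends are buffered by the daemon, every task can post both sends before blocking in pvmfrecv, so the symmetric exchange does not deadlock.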

  13. TORC (Tennessee Oak Ridge Cluster) • Consists of 18 processors • 1 Dell PowerEdge 1300 • Dual PIII 450 MHz, 512 MB RAM • 4 Dell PowerEdge 6350 • Quad PII Xeon 450 MHz, 2 GB RAM (node4 – 1 GB RAM) • PGI v3.2 components • pgf77 – Fortran 77 compiler • pgf90 – Fortran 90 compiler • pghpf – High Performance Fortran compiler

  14. Results

  15. Results

  16. Conclusion • The larger the problem, the more computational power is needed • Clustered personal computers provide adequate computing power • Parallelization is not needed for small problems • PVM on a Linux system is an excellent tool for solving large engineering problems

  17. About Myself • Alabama A&M University, Senior • Mechanical Engineering, G.P.A. – 3.8 • B.S. - May 2003 • U.S. Navy – NUPOC Program

  18. Acknowledgements I appreciate Dr. Z. T. Deng’s decision to select me to be a part of the Summer 2002 RAM Program. I thank Dr. Jim Kohl for guiding my research, dedicating time to discuss my findings, and facilitating resources from the Computer Science and Mathematics Division. I would also like to acknowledge Debbie McCoy for organizing the RAM Program that allowed me to spend the summer gaining significant experience at ORNL. Other acknowledgements: Stephen Scott and Cheryl Hamby.
