
PARALLEL APPLICATIONS


Presentation Transcript


  1. PARALLEL APPLICATIONS EE 524/CS 561 Kishore Dhaveji 01/09/2000

  2. Parallelism • First step towards parallelism - finding suitable parallel algorithms. • Parallelism - the ability of many independent threads of control to make progress simultaneously. • Available parallelism - the maximum number of independent threads that can work simultaneously.

  3. Amdahl’s law • If only one percent of a problem fails to parallelize, then no matter how much parallelism is available for the rest, the problem can never be solved more than one hundred times faster than in the sequential case.
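A minimal sketch of the bound, assuming the standard Amdahl formula speedup = 1 / (s + (1 - s)/N) for serial fraction s on N processors (the function and numbers are illustrative, not from the slides):

    # Amdahl's law: speedup for serial fraction s on N processors.
    def amdahl_speedup(s, n):
        return 1.0 / (s + (1.0 - s) / n)

    # With 1% serial work the speedup saturates near 1/s = 100,
    # no matter how many processors are added.
    for n in (10, 100, 1000, 1000000):
        print(n, round(amdahl_speedup(0.01, n), 1))  # 9.2, 50.3, 91.0, 100.0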

  4. Reasons for Parallelizing Code • Problem size • Large problems may require more memory than a single machine can affordably provide. • Real time requirements • Turnaround requirements on the order of minutes, hours or days, e.g. weather forecasting, financial modeling. • If parallelizing is being done for performance, the effect of other users on the system must be considered.

  5. Categories of Parallel Algorithms • Regular and Irregular • Synchronous and Asynchronous • Coarse and Fine Grained • Bandwidth Greedy and Frugal • Latency Tolerant and Intolerant • Distributed and Shared Address Spaces • Beowulf Systems & Choices of Parallelism

  6. Regular and Irregular • Regular algorithms • Use data structures that fit naturally into rectangular arrays • Easier to partition into separate parallel processes • Irregular algorithms • Use complicated data structures, e.g., trees, graphs, lists, hash-tables, indirect addressing • Require careful consideration for parallelization • Exploit features like sparseness, hierarchies, unpredictable updates
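A minimal sketch of why regular structure partitions easily (illustrative code, not from the slides): a rectangular array splits into near-equal contiguous row blocks, one per process.

    # Split num_rows rows of a rectangular array into contiguous blocks,
    # one per process; regular structure makes the partition trivial.
    def row_blocks(num_rows, num_procs):
        base, extra = divmod(num_rows, num_procs)
        blocks, start = [], 0
        for p in range(num_procs):
            size = base + (1 if p < extra else 0)
            blocks.append((start, start + size))
            start += size
        return blocks

    print(row_blocks(10, 4))  # [(0, 3), (3, 6), (6, 8), (8, 10)]

An irregular structure such as a graph or tree has no such natural split, which is why those algorithms need careful partitioning.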

  7. Synchronous and Asynchronous • Parallel parts of synchronous algorithms must run in “lockstep”. • In asynchronous algorithms, different parts of the algorithm can “get ahead”. • Asynchronous algorithms often require less bookkeeping and are hence much easier to parallelize.
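A minimal sketch of the distinction (illustrative; a threading.Barrier enforces lockstep, while omitting it lets fast threads get ahead):

    import threading

    N = 4
    barrier = threading.Barrier(N)

    def synchronous_worker(tid, steps=3):
        for step in range(steps):
            # ... do this step's share of the work ...
            barrier.wait()  # lockstep: no thread starts step+1 until all finish step
            print(f"thread {tid} completed step {step}")

    threads = [threading.Thread(target=synchronous_worker, args=(t,)) for t in range(N)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

An asynchronous version simply drops the barrier.wait(), so a fast thread may be several steps ahead of a slow one.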

  8. Coarse and Fine Grained • Grain size refers to the amount of work performed by each process in a parallel computation. • Large grain size calls for less interprocessor communication relative to computation. • The ratio of communication to computation is proportional to the ratio of surface area to volume of each subdomain - this ratio ought to be small for best results.
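A worked instance of the surface-to-volume rule, assuming a cubic subdomain of side n in a 3-D grid computation (communication scales with the 6n^2 boundary faces, computation with the n^3 interior cells):

    # Communication ~ surface (6 n^2), computation ~ volume (n^3),
    # so comm/comp ~ 6/n: coarser grains communicate relatively less.
    for n in (4, 16, 64):
        surface, volume = 6 * n**2, n**3
        print(n, surface / volume)  # 1.5, 0.375, 0.09375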

  9. Bandwidth Greedy and Frugal • t_comm = t_latency + (message length / bandwidth) • CPU speed >> memory speed >> network speed (2400 MBps >> 128-182 MBps >> 15 MBps) • Memory bandwidth and network speed are the limiting factors in overall system performance. • Algorithms differ in how much bandwidth they demand - greedy algorithms need a lot, frugal ones little. • Increasing grain size leads to bandwidth frugal and better performing algorithms.
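A worked instance of the communication-time model (the latency and bandwidth values below are assumptions chosen for illustration, roughly the Fast Ethernet class of network above):

    # t_comm = t_latency + message_length / bandwidth
    latency = 100e-6     # seconds (assumed)
    bandwidth = 10e6     # bytes per second (assumed)

    for nbytes in (100, 1000, 1000000):
        print(nbytes, latency + nbytes / bandwidth)
    # 100 B -> 0.00011 s (latency dominated)
    # 1 KB  -> 0.00020 s (crossover)
    # 1 MB  -> 0.10010 s (bandwidth dominated)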

  10. Latency Tolerant and Intolerant • Latency - the time until the beginning of the delivery of a message. • Half-performance message size: n_1/2 = latency x bandwidth - the number of bytes at which latency and transfer time contribute equally. • Longer messages are bandwidth dominated; shorter messages are latency dominated. • High latencies are the most conspicuous shortcoming of Beowulf systems, so successful algorithms are usually latency tolerant.
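Continuing with the assumed numbers from the sketch above, the crossover size n_1/2 falls out directly:

    # n_1/2 = latency * bandwidth: messages shorter than this are
    # latency dominated, longer ones bandwidth dominated.
    latency = 100e-6     # seconds (assumed)
    bandwidth = 10e6     # bytes per second (assumed)
    print(latency * bandwidth)  # 1000.0 bytes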

  11. Distributed and Shared Address Spaces • Shared address space • All processors access a common shared address space. • Simplifies the design of parallel algorithms. • But invites race conditions and non-determinacy. • Distributed address spaces • Processes communicate through message passing procedures. • Designing languages and compilers for this model has been difficult.
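A minimal message-passing sketch in the distributed-address-space style (illustrative; Python's multiprocessing queues stand in for a real message-passing library such as MPI or PVM):

    from multiprocessing import Process, Queue

    def worker(inbox, outbox):
        # No shared variables: all communication is an explicit message.
        x = inbox.get()
        outbox.put(x * x)

    if __name__ == "__main__":
        inbox, outbox = Queue(), Queue()
        p = Process(target=worker, args=(inbox, outbox))
        p.start()
        inbox.put(7)         # send
        print(outbox.get())  # receive -> 49
        p.join()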

  12. Choice of Parallelism • Requires large grain size algorithms • Latency tolerant and bandwidth frugal algorithms • Regular and asynchronous algorithms • Tuning of the Beowulf system

  13. Process Level Parallelism • Beowulf systems are well suited for process level parallelism, i.e. running multiple independent processes • The requirement for process level parallelism is the existence of sequential code that must be run many times • If the process pool is large, processes can self-schedule and load balance across the system
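A minimal self-scheduling sketch (illustrative names; a multiprocessing.Pool hands the next case to whichever worker frees up first, which is what gives the automatic load balance):

    from multiprocessing import Pool

    def run_one(case):
        # Stand-in for one run of the sequential code on one input.
        return sum(range(case))

    if __name__ == "__main__":
        cases = range(1, 101)  # a large pool of independent runs
        with Pool(processes=4) as pool:
            # imap_unordered: idle workers grab the next case (self-scheduling)
            for result in pool.imap_unordered(run_one, cases):
                pass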

  14. Utilities for Parallel Computing • A dispatcher program takes an arbitrary list of commands and dispatches them to a set of processors • No more than one command is run on any host at any one time • The order in which commands are dispatched to hosts and complete is arbitrary
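A minimal dispatcher sketch (the host names and commands are invented for illustration; ssh stands in for the rsh of the original era):

    import subprocess, time

    hosts = ["node1", "node2", "node3"]            # assumed host names
    commands = [f"./job {i}" for i in range(10)]   # assumed command list

    running = {}  # host -> Popen object for the command currently on it
    while commands or running:
        # Give each idle host a command: at most one per host at a time.
        for host in hosts:
            if host not in running and commands:
                running[host] = subprocess.Popen(["ssh", host, commands.pop(0)])
        # Free hosts whose command has finished; completion order is arbitrary.
        for host, proc in list(running.items()):
            if proc.poll() is not None:
                del running[host]
        time.sleep(0.5)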

  15. Overheads - rsh and File I/O • Startup costs of establishing connections, authenticating the user and executing remote commands • Additional overhead of I/O • The executable is transferred on every invocation if it is stored on an NFS-mounted file system • NFS is limited by the performance of the server and its network connections

  16. Overheads - rsh and File I/O • Start-up costs can be reduced by pre-staging data and executables on local disks and storing the results of computation locally • Disciplined placement of files in directories avoids unnecessary network connections • NFS is a sequential system without broadcast; this can be mitigated by caching data that is unchanging, e.g. input files and executables, on local disk (swap space and /scratch)
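A minimal pre-staging sketch (the paths are invented for illustration): copy the executable from the NFS mount to local /scratch once, then invoke the local copy on every run.

    import os, shutil, subprocess

    nfs_exe = "/home/shared/simulate"   # assumed NFS-mounted executable
    local_exe = "/scratch/simulate"     # assumed local scratch copy

    # Copy once; later invocations read the local disk instead of NFS.
    if not os.path.exists(local_exe):
        shutil.copy2(nfs_exe, local_exe)

    for i in range(100):
        subprocess.run([local_exe, str(i)], check=True)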

  17. Summary • Process level parallelism is the easiest and most cost-effective approach on Beowulf systems • Optimizations should reduce network traffic and improve I/O performance • Process level parallelism load balances automatically • Multiple users can be accommodated with variable priorities and scheduling policies • A GUI can display the status of the system
