CSE 8383 - Advanced Computer Architecture

Presentation Transcript


  1. CSE 8383 - Advanced Computer Architecture Week-13 April 15, 2004 engr.smu.edu/~rewini/8383

  2. Contents • Warm up • Big Picture • Clusters • Scheduling • Mobile IP • Reliability issues in Mobile IP

  3. Group Work Let’s test your network background

  4. [Figure: two hosts, each drawn with the full OSI stack (Application, Presentation, Session, Transport, Network, Data Link, Physical), attached to LANs and connected through the Internet; the diagram is labeled "What?" as the group question.]

  5. Explain: [Figure: the OSI layers mapped onto the TCP/IP protocol suite: Telnet, ftp, and Mail at the Application/Presentation/Session layers; Transmission Control Protocol (TCP) at the Transport layer; Internet Protocol (IP) at the Network layer; Token Ring and Ethernet at the Data Link/Physical layers.]

  6. Big Picture

  7. Leopold's View of the Field: [Figure: a layered view, from high hiding of details at the top to low at the bottom: numerous application programs; programming systems such as Skeletons, OpenMP, Pthreads, Java Threads, PVM, and MPI; the models Threads, Message Passing, Shared Memory, and Distributed Shared Memory; and concrete architectures including SMP, CC-NUMA, Cluster, Myrinet, and ATM.]

  8. Parallel and Distributed Architecture (Leopold, 2001): [Figure: a spectrum of architectures from SIMD, SMP, and CC-NUMA to DMPC, Cluster, and Grid, spanning SIMD to MIMD and shared memory to distributed memory; along this spectrum the degree of coupling goes from tight to loose, communication speed from fast to slow, and the supported grain sizes from fine to coarse.]

  9. Clusters (Commodity Off The Shelf)

  10. What is a Cluster? • A collection of interconnected stand-alone computers working together as a single integrated computing resource • Cluster nodes may exist in a single cabinet or be physically separated and connected via a LAN

  11. Clusters: [Figure: a set of nodes (each with P, C, M, I/O, and its own OS) joined by an interconnection network, with middleware and the programming environment layered above them.]

  12. Clusters offer these features • High Performance • Expandability and Scalability • High Throughput • High Availability

  13. High Performance Clusters • Tuned to derive maximum performance • Share processing load • Needs software customization • E.g. Beowulf

  14. Cluster Components • Homogeneous Clusters • All nodes have the same configuration • Heterogeneous Clusters • Nodes with different configurations, e.g. different OSs

  15. Cluster Architecture and OS • Cluster of PCs • Cluster of Workstations • Cluster of SMPs (Symmetric Multiprocessors) • Linux clusters (Beowulf) • Solaris clusters (NOW) • NT clusters

  16. Cluster Size • Group clusters – 2-99 nodes • Departmental clusters – 99-999 nodes • Organizational clusters – many 100’s • Global clusters – 1000’s + • Internet wide

  17. Typical Cluster Environment: [Figure: a layered stack, from top to bottom: user application, middleware, PVM/MPI, OS/hardware.]

  18. PVM & MPI • PVM (Parallel Virtual Machine) • Allows heterogeneous collection of computers linked by a network to be used as a single large parallel computer • MPI (Message Passing Interface) • Library specification for message passing • Free and vendor supplied implementations available

  19. PVM Introduction • http://www.netlib.org/pvm3/ • http://www.epm.ornl.gov/pvm/ • Started as a research project in 1989 • Developed at Oak Ridge National Lab & University of Tennessee • It makes it possible to develop applications on a set of heterogeneous computers, connected by a network, that appears logically to the user as a single parallel computer

  20. PVM Environment • Virtual machine • Dynamic set of heterogeneous computer systems connected via a network and managed as a single parallel computer • Computer nodes → hosts • Hosts are uniprocessors or multiprocessors running PVM software

  21. PVM Software • Two Components: • Library of PVM routines • Daemon • Should reside on all hosts in the virtual machine • Before running an application, the user must start up PVM and configure a virtual machine

  22. PVM Application • A number of sequential programs, each of which will correspond to one or more processes in a parallel program • These programs are compiled individually for each host in the virtual machine • Object files are placed in locations accessible from other hosts

  23. PVM Application (Cont.) • One of these sequential programs, which is called the initiation task, has to be started manually on one of the hosts • Tasks on other hosts are started automatically by the initiation task • Tasks comprising a PVM application can be identical (SPMD) [common in most applications] or can be different (pipeline: input processing, output)
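
  A rough C sketch of this startup pattern with the PVM 3 library (the executable name "spmd_task" and the copy count are illustrative assumptions): the manually started copy sees that it has no parent, becomes the initiation task, and spawns the rest.

      #include <stdio.h>
      #include <pvm3.h>

      #define NCOPIES 4                      /* illustrative, not from the slides */

      int main(void) {
          int mytid  = pvm_mytid();          /* enroll in PVM, get own task id    */
          int parent = pvm_parent();         /* PvmNoParent if started manually   */

          if (parent == PvmNoParent) {
              /* Initiation task: spawn the remaining copies of this program.     */
              int tids[NCOPIES];
              int started = pvm_spawn("spmd_task", NULL, PvmTaskDefault,
                                      "", NCOPIES, tids);
              printf("initiation task %d started %d copies\n", mytid, started);
          } else {
              /* Spawned copy: do this task's share of the work.                  */
              printf("task %d was spawned by %d\n", mytid, parent);
          }

          pvm_exit();                        /* leave the virtual machine         */
          return 0;
      }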

  24. Application Structure • Star graph • The middle node is called the supervisor or master • Supervisor-workers or Master-slaves • Tree • The root is the top supervisor • Hierarchy

  25. Task Creation • A task in PVM can be started manually or can be spawned from another task • The function pvm_spawn() is used for dynamic task creation. • The task that calls the function pvm_spawn() is referred to as the parent • The newly created tasks are called children.

  26. To Create a child, you must specify: • The machine on which the child will be started • A path to the executable file on the specified machine • The number of copies of the child to be created • An array of arguments to the child tasks
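
  The four items above map directly onto the arguments of pvm_spawn(); a minimal sketch in C follows, in which the host name "node1", the executable name "worker", and the argument list are placeholder assumptions.

      #include <pvm3.h>

      int main(void) {
          int tids[3];                                 /* receives the children's TIDs */
          char *child_args[] = { "input.dat", NULL };  /* hypothetical argument vector */

          /* Start 3 copies of the executable "worker" on host "node1".
           * PvmTaskHost tells PVM to interpret the fourth argument as a host name;
           * the executable is located through PVM's configured search path.       */
          int started = pvm_spawn("worker", child_args, PvmTaskHost,
                                  "node1", 3, tids);

          pvm_exit();
          return (started == 3) ? 0 : 1;               /* 'started' = children created */
      }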

  27. Task ID • All PVM tasks are identified by an integer task identifier • When a task is created, it is assigned a unique identifier (TID) • Task identifiers can be used to identify senders and receivers during communication. They can also be used to assign functions to different tasks based on their TIDs

  28. Task ID Retrieval • Task's own TID → pvm_mytid() mytid = pvm_mytid(); • Child's TID → pvm_spawn() pvm_spawn(…,…,…,…,…, &tid); • Parent's TID → pvm_parent() my_parent_tid = pvm_parent(); • Daemon's TID → pvm_tidtohost() daemon_tid = pvm_tidtohost(id);

  29. Scheduling

  30. Scheduling: [Figure: two routes to program tasks. In the implicit approach, a sequential program goes through a dependence analyzer that extracts grains of sequential code (the ideal parallelism). In the explicit approach, a parallel program goes through a partitioner that produces the program tasks. The program tasks and a description of the parallel/distributed system are then fed to the scheduler, which produces the processors/time schedule.]

  31. Scheduling • Introduction • Model • Program tasks • Machine • Schedule • Execution and communication time • Problem Complexity

  32. Introduction to Scheduling • This problem has been described in a number of different ways in different fields • Classical problem of job sequencing in production management has influenced most of the solutions • Set of resources and set of consumers

  33. Scheduling System: [Figure: the scheduler matches consumers to resources according to a scheduling policy.]

  34. Program Tasks (T, <, D, A) • T → set of tasks • < → partial order on T • D → communication data • A → amounts of computation
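
  One possible in-memory encoding of this model, as a rough C sketch (the type and field names are assumptions, not from the lecture): each task stores its computation amount from A, and a nonzero entry data[i][j] records both the precedence i < j and the data volume communicated along that edge.

      #define MAX_TASKS 64                     /* arbitrary bound for the sketch   */

      /* A task graph (T, <, D, A). */
      typedef struct {
          int    num_tasks;                    /* |T|                              */
          double amount[MAX_TASKS];            /* A: computation amount per task   */
          double data[MAX_TASKS][MAX_TASKS];   /* D: data sent along edge i -> j;  */
                                               /* a nonzero entry also encodes i < j */
      } TaskGraph;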

  35. Task Graph: [Figure: an example task graph with nine tasks A through I; each node is labeled with its computation amount (e.g. A = 10, I = 30) and each edge with the amount of data communicated between the two tasks it connects.]

  36. Machine • m heterogeneous processors • Connected via an arbitrary interconnection network (network graph) • Associated with each processor Pi is its speed Si • Associated with each edge (i,j) is the transfer rate Rij

  37. Examples of Network Graphs: [Figure: small example topologies on numbered processors: ring, linear array, fully connected, and mesh.]

  38. Task Schedule • Gantt chart • Mapping (f) of tasks to a processing element and a starting time • Formally: f: T → {1, 2, 3, …, m} × [0, ∞) • f(v) = (i, t) → task v is scheduled to be processed by processor i starting at time t
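
  A schedule f can be stored as one (processor, start time) pair per task; a minimal C sketch under the same assumed bound as the task-graph sketch above:

      #define MAX_TASKS 64                 /* same arbitrary bound as before     */

      /* f(v) = (i, t): task v runs on processor proc[v] from time start[v]. */
      typedef struct {
          int    proc[MAX_TASKS];          /* i in {1, 2, ..., m}                */
          double start[MAX_TASKS];         /* t in [0, infinity)                 */
      } Schedule;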

  39. Gantt Chart

  40. Execution and Communication Times • If task ti is executed on pj, the execution time is Ai/Sj • The communication delay between ti and tj, when they are executed on adjacent processing elements pk and pl, is Dij/Rkl
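
  Both formulas translate directly into code; a small sketch in C, with parameter names that simply mirror the symbols above:

      /* Execution time of task ti on processor pj: Ai / Sj. */
      double exec_time(double Ai, double Sj) {
          return Ai / Sj;
      }

      /* Communication delay for Dij units of data between tasks ti and tj
       * running on adjacent processors pk and pl with transfer rate Rkl. */
      double comm_delay(double Dij, double Rkl) {
          return Dij / Rkl;
      }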

  41. Complexity • Computationally intractable in general • Small number of polynomial optimal algorithms in restricted cases • A large number of heuristics in more general cases • Quality of the schedule vs. Quality of the scheduler

  42. Mobile Computing

  43. What is Driving Mobile Computing • Advances in Wireless Communication Technology • Advances in Portable Computing Technology • Reliance on Network Computing • Mobile Workforce

  44. Mobile Computing • Using small portable computers, hand-helds, and other small wearable devices • To run applications and access information resources via wireless connections • By mobile and nomadic users

  45. Mobility versus Nomadicity • Mobile Node The node is able to change its point of attachment from one subnet to another while maintaining all existing communication. • Nomadic Node The node must terminate all existing communication before changing its point of attachment.

  46. Main Components • Mobile Hosts • Backbone Network • Wired • Wireless Multi-hop • Hybrid

  47. Wired Backbone: [Figure: mobile hosts communicate through base stations attached to a fixed communication network, which also connects the fixed hosts.]

  48. Wireless Multi-hop Backbone: [Figure: mobile hosts relay traffic for one another over multi-hop wireless links, forming the backbone without fixed infrastructure.]

  49. Hybrid Backbone: [Figure: a combination of the two previous configurations: some mobile hosts reach the fixed communication network and fixed hosts through base stations over the wired backbone, while others are connected through a wireless multi-hop backbone of mobile hosts.]

  50. IETF Mobile IP • System Components • Mobile host • Home address • Home agent • Foreign agent (NOT in V6) • Care of address
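
  As a rough illustration of how these components relate (a C sketch with assumed field names, not an excerpt from any Mobile IP implementation): the home agent keeps, for each mobile host, a binding from its permanent home address to the care-of address currently registered from the foreign network, and tunnels packets arriving at the home address accordingly.

      #include <stdint.h>
      #include <time.h>

      /* One mobility binding as a home agent might record it (illustrative only). */
      typedef struct {
          uint32_t home_address;       /* permanent address on the home subnet      */
          uint32_t care_of_address;    /* address currently registered for the host */
          time_t   expires;            /* end of the registration lifetime          */
      } mobility_binding;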
