
Parallel Job Deployment and Monitoring in a Hierarchy of Mobile Agents



Presentation Transcript


  1. Parallel Job Deployment and Monitoring in a Hierarchy of Mobile Agents. Munehiro Fukuda, Computing & Software Systems, University of Washington, Bothell. Funded by the NSF Middleware Initiative. CSS Speaker Series

  2. Outline • Introduction • Execution Model • System Design • Performance Evaluation • Related Work • Conclusions

  3. 1. Introduction • Problems in Grid Computing • Background of Mobile Agents • Objective • Project Overview

  4. Quiet Laboratories • UW1-320 and UW1-302 at 3pm on a weekday • No more computing resources needed?

  5. Demands for Computing Resources in Teaching • I'm in the 320 lab testing my program, and for some reason, whenever I attempt to use 15 hosts, it asks me for passwords for hosts 31 - 18 and then freezes and does nothing. • I noticed that uw1-320-20 is being bogged down by zombie processes that someone left going on it. • I went around looking on some of the other computers, and it's all over the place: user A has almost 40 processes running on uw1-320-30 since April 23rd, user B has about 10 on host 29, and there's a ton more on almost every host. • I got tired of manually running a bunch of ssh commands to run rmi on many different machines. • I have narrowed the problem down to three machines: 16, 20, and 30. First of all, uw1-320-20 is dead. It drops all incoming ssh connections. The other two, uw1-320-16 and uw1-320-30, both have a mysterious problem that I don't know how to solve.

  6. Demands for Computing Resources in Research • http://setiathome.berkeley.edu/ • http://boinc.bakerlab.org/rosetta/rah_about.php • These are an effective way to collect numerous computing resources from all over the world. • But here is a question: • Why don't they use idle machines on their campuses first?

  7. Grid-Computing Brokers • Desktops • Buyers: a desktop user • Sellers: hardware components • Brokers: Windows, Linux • Clusters • Buyers: multiple users (e.g., CSS434 students) • Sellers: cluster computing nodes • Brokers: PBS, LSF • Grid computing • Buyers: Seti@home, Rosetta@home, etc. • Brokers: Globus, Condor, Legion (Avaki), NetSolve, Ninf, Entropia, etc. • Okay, no need to implement any more?

  8. Problems in Grid Computing • Targeting large business models • A central entry point • A lot of installation work http://www.globus.org/toolkit/docs/4.0/ • Little tolerance for system faults • Too gigantic

  9. Our Target • Targeting a group of computer users • No central entry point • No central managers • No programming model restrictions • Easy installation work • Easy participation but necessity of fault tolerance

  10. Background of Mobile Agents • An execution model previously highlighted as a prospective infrastructure of distributed systems. • Static job deployment and result collection: no more than an alternative approach to centralized grid middleware implementation. • Our goal: let mobile agents do unique tasks in grid computing. (Figure: a central manager deploys jobs from a user over the Internet to cycle servers via FTP, HTTP, and RPC.)

  11. Objective • Focus on a group of independent computers: turned on and off independently, not controlled by a scheduler such as PBS or LSF, and not managed by a central server. • Let mobile agents do unique tasks in grid computing: • Runtime job migration: moving a program from a faulty/busy site to an active/idle site, seeking fault tolerance and better load balancing. • Negotiation: negotiating with other agents about computing resources, seeking better load balancing. • Inherent parallelism: deploying and monitoring jobs in parallel, for decentralized job management.

  12. Project Overview • Funded by: NSF Middleware Initiative • Sponsored by: University of Washington • In collaboration with: Ehime University • In a team of: UWB undergraduates

  13. 2. Execution Model • System Overview • Execution Layer • Programming Environment

  14. System Overview (Figure: each user's process runs inside a user program wrapper equipped with snapshot methods and GridTCP. A commander agent per user submits the job; sentinel agents execute the wrapped processes and exchange TCP communication; resource agents handle allocation, and bookkeeper agents keep the snapshots, which are also saved to an FTP server. Results flow back to users A and B.)

  15. Execution Layer (from top to bottom) • Java user applications • mpiJava API (mpiJava-S over Java sockets, mpiJava-A over GridTcp) • Java socket / GridTcp • User program wrapper • Commander, resource, sentinel, and bookkeeper agents • UWAgents mobile agent execution platform • Operating systems

  16. MPI Java Programming

  public class MyApplication {
    public GridIpEntry ipEntry[];   // used by the GridTcp socket library
    public int funcId;              // used by the user program wrapper
    public GridTcp tcp;             // the GridTcp error-recoverable socket
    public int nprocess;            // #processors
    public int myRank;              // processor id (or MPI rank)

    public int func_0( String args[] ) {    // constructor
      MPJ.Init( args, ipEntry, tcp );       // invoke mpiJava-A
      .....;                                // more statements to be inserted
      return 1;                             // calls func_1( )
    }
    public int func_1( ) {                  // called from func_0
      if ( MPJ.COMM_WORLD.Rank( ) == 0 )
        MPJ.COMM_WORLD.Send( ... );
      else
        MPJ.COMM_WORLD.Recv( ... );
      .....;                                // more statements to be inserted
      return 2;                             // calls func_2( )
    }
    public int func_2( ) {                  // called from func_1, the last function
      .....;                                // more statements to be inserted
      MPJ.finalize( );                      // stops mpiJava-A
      return -2;                            // application terminated
    }
  }

  17. 3. System Design • Mobile Agents • Job Coordination • Distribution • Resource allocation and monitoring • Resumption and migration • Programming Support • Language preprocessing • Communication check-pointing • Inter-Cluster Job Deployment (Current Research Topic) • Over-gateway agent migration • Over-gateway communication • Job distribution

  18. UWAgents – Concept of Agent Domain. UWInject submits a new agent from the shell. (Figure: agent domains, each tagged with its creation time, IP, and user name, e.g., time=3:30pm 8/25/05, ip=medusa.uwb.edu, name=fukuda. A user job injected with -m 4 forms a tree of agents, id 0 at the root with children ids 1-3 and descendants ids 4-12; each agent executes on a UWPlace daemon.)
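The agent ids in this and the following slides suggest a simple numbering scheme: with fan-out m (the -m option), agent i owns children i*m through i*m+m-1, so parent and child ids can be computed rather than stored. The sketch below illustrates this inferred scheme; the class and method names are mine, not part of the UWAgents API.

```java
public class AgentDomainSketch {
    // Inferred from the slides: with fan-out m, agent i spawns children
    // i*m .. i*m+m-1 (the root, id 0, uses ids 1 .. m-1, since 0*m is itself).
    static int parentOf(int id, int m) {
        return id / m;                      // integer division walks up the tree
    }
    static int[] childrenOf(int id, int m) {
        int first = (id == 0) ? 1 : id * m;
        int last  = (id == 0) ? m - 1 : id * m + m - 1;
        int[] kids = new int[last - first + 1];
        for (int k = 0; k < kids.length; k++) kids[k] = first + k;
        return kids;
    }
    public static void main(String[] args) {
        // Matches the job-distribution slide: sentinel id 2 with -m 4
        // spawns ids 8-11, and id 8's parent is id 2.
        System.out.println(java.util.Arrays.toString(childrenOf(2, 4))); // [8, 9, 10, 11]
        System.out.println(parentOf(8, 4));                              // 2
    }
}
```

Computing ids this way means any agent can locate its relatives after a migration without consulting a central directory, which fits the deck's no-central-manager goal.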

  19. Job Distribution. (Figure: upon job submission, the commander (id 0) issues an XML query to eXist and spawns a resource agent (id 1), a sentinel (id 2, rank 0), and a bookkeeper (id 3, rank 0). The sentinel spawns sensors (ids 4-5) and child sentinels ids 8-11 (ranks 1-4), which in turn spawn ids 32-34 (ranks 5-7); the bookkeeper mirrors this tree with ids 12-15 (ranks 1-4) and 48-50 (ranks 5-7). Snapshots flow from sentinels to bookkeepers. id: agent id; rank: MPI rank.)

  20. Resource Allocation and Monitoring. Upon job submission, the commander (id 0) passes an XML query (CPU architecture, OS, memory, disk, total nodes, and a multiplier) to the resource agent (id 1), which consults our own XML database (eXist) and returns a list of total nodes x multiplier available nodes. Sensor agents (ids 4-5 and 16-23) collect performance data on nodes 0-5 with ttcp. Case 1: total nodes = 2, multiplier = 1.5 spawns sentinels id 2 (rank 0) and id 8 (rank 1) plus a bookkeeper. Case 2: total nodes = 2, multiplier = 3 allocates the surplus nodes for future use.
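The total-nodes x multiplier idea can be sketched as follows. This is a hypothetical stand-in for the resource agent's eXist query: allocate() and the node names are mine, shown only to illustrate how the multiplier over-provisions the node list.

```java
import java.util.Arrays;
import java.util.List;

public class ResourceAllocationSketch {
    // Hypothetical sketch: after filtering idle nodes by CPU, OS, memory,
    // and disk, the resource agent would return totalNodes * multiplier
    // entries; the surplus covers bookkeepers and future spares.
    static List<String> allocate(List<String> idleNodes, int totalNodes, double multiplier) {
        int wanted = (int) Math.ceil(totalNodes * multiplier);
        return idleNodes.subList(0, Math.min(wanted, idleNodes.size()));
    }
    public static void main(String[] args) {
        List<String> idle = Arrays.asList("uw1-320-00", "uw1-320-01",
                                          "uw1-320-02", "uw1-320-03");
        // Case 1 from the slide: total nodes = 2, multiplier = 1.5 -> 3 nodes
        System.out.println(allocate(idle, 2, 1.5).size());
        // Case 2: total nodes = 2, multiplier = 3 -> capped at the 4 idle nodes
        System.out.println(allocate(idle, 2, 3.0).size());
    }
}
```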

  21. Job Resumption by a Parent Sentinel. (0) Each child sentinel sends a new snapshot periodically. (1) The parent (sentinel id 2, rank 0) detects a ping error at a child (sentinel id 11, rank 4). (2) It searches for the latest snapshot and (3) retrieves it from the bookkeeper (id 15, rank 4). (4) It sends a new agent to the child's place, and (5) the new sentinel (id 11, rank 4) restarts the user program, restoring its MPI connections.

  22. Job Resumption by a Child Sentinel. A child also watches its ancestors, with ping timeouts scaled per agent (e.g., no pings for 2 * 5 = 10 sec, 8 * 5 = 40 sec, or 12 * 5 = 60 sec). (1)-(5) When the sentinel (id 2, rank 0) detects a missing commander (id 0), it searches for the latest snapshot, retrieves it, and sends a new agent. (6)-(10) The same procedure replaces a missing bookkeeper. (11)-(12) A failed resource agent (id 1) keeps no snapshot and is simply restarted from its beginning. (13) The resumed agent then detects any remaining ping errors and follows the same child resumption procedure as in p9.

  23. User Program Wrapper. The preprocessor turns straight-line source code with explicit check_point( ) calls into a chain of functions that the wrapper drives.

  Source code:
    statement_1; statement_2; statement_3; check_point( );
    statement_4; statement_5; statement_6; check_point( );
    statement_7; statement_8; statement_9; check_point( );

  Preprocessed:
    func_0( ) { statement_1; statement_2; statement_3; return 1; }
    func_1( ) { statement_4; statement_5; statement_6; return 2; }
    func_2( ) { statement_7; statement_8; statement_9; return -2; }

  Wrapper driver:
    int fid = 0;
    while ( fid != -2 ) {
      switch ( fid ) {
        case 0: fid = func_0( ); break;
        case 1: fid = func_1( ); break;
        case 2: fid = func_2( ); break;
      }
      check_point( );
    }
    check_point( ) {
      // save this object, including fid, into a file
    }
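A runnable miniature of this function-chaining scheme follows; the statement bodies are placeholders of my own, and checkPoint() only marks where the real wrapper would serialize the object together with the current function id.

```java
public class WrapperSketch {
    // Each func_N runs up to the next checkpoint and returns the id of the
    // next function to execute; -2 means the application has terminated.
    static StringBuilder trace = new StringBuilder();
    static int func_0() { trace.append("0"); return 1; }
    static int func_1() { trace.append("1"); return 2; }
    static int func_2() { trace.append("2"); return -2; }

    static void checkPoint(int fid) {
        // The real wrapper would serialize this object, including fid,
        // into a snapshot file here, so a migrated agent can resume at fid.
    }

    public static void main(String[] args) {
        int fid = 0;
        while (fid != -2) {
            switch (fid) {
                case 0: fid = func_0(); break;
                case 1: fid = func_1(); break;
                case 2: fid = func_2(); break;
                default: fid = -2;      // unknown id: terminate defensively
            }
            checkPoint(fid);            // snapshot after every function
        }
        System.out.println(trace);      // functions ran in order: 012
    }
}
```

Because the next function id is part of the snapshot, a restarted process re-enters the loop at the right case instead of replaying from statement_1.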

  24. Preprocessor and Drawbacks. The preprocessor splits the source at each check_point( ) call into func_0, func_1, func_2, and so on. When a check_point( ) sits inside an if-clause within a loop, the statements before and after it must be duplicated across the generated functions (one function resumes before the check_point( ) in the if-clause, another after it). Drawbacks: • No recursion. • Useless source line numbers indicated upon errors. • Still a need for explicit snapshot points.

  25. GridTcp – Check-Pointed Connection. Each user program wrapper surrounds its TCP connection with incoming, outgoing, and backup queues, and keeps a rank-to-IP table (e.g., rank 1 at n1.uwb.edu, rank 2 at n2.uwb.edu, updated to n3.uwb.edu after a migration). • Outgoing packets are also saved in a backup queue. • All packets are serialized into a backup file at every check point. • Upon a migration: packets are de-serialized from the backup file, backup packets are restored into the outgoing queue, and the IP table is updated.
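The backup-queue behavior can be illustrated with a small sketch. The queue types and method names below are my own, not GridTcp's actual API; real packet delivery, serialization to a file, and IP-table updates are elided.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class GridTcpSketch {
    // Every outgoing packet is also kept in a backup list; a checkpoint
    // copies the backup, and after a migration the saved packets are
    // restored into the outgoing queue for retransmission.
    Deque<byte[]> outgoing = new ArrayDeque<>();
    List<byte[]> backup = new ArrayList<>();

    void send(byte[] packet) {
        outgoing.add(packet);            // handed to the real socket eventually
        backup.add(packet);              // retained for error recovery
    }
    List<byte[]> checkpoint() {          // would be serialized to the snapshot file
        return new ArrayList<>(backup);
    }
    void restoreAfterMigration(List<byte[]> saved) {
        outgoing.clear();                // discard in-flight state at the old site
        outgoing.addAll(saved);          // refill the outgoing queue to resend
    }

    public static void main(String[] args) {
        GridTcpSketch tcp = new GridTcpSketch();
        tcp.send("hello".getBytes());
        tcp.send("world".getBytes());
        List<byte[]> snapshot = tcp.checkpoint();   // taken at a check point
        tcp.restoreAfterMigration(snapshot);        // new site replays the queue
        System.out.println(tcp.outgoing.size());    // both packets survive: 2
    }
}
```

Keeping unacknowledged packets in the snapshot is what lets an MPI connection survive a sentinel's migration transparently to the user program.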

  26. Inter-Cluster Job Deployment (Current Research Topic) • Over-gateway agent deployment • Over-gateway TCP communication • Over-gateway agent tree creation: how? (Figure: the commander (id 0) and sentinel id 2 run on the Internet side at medusa.uwb.edu, while sentinels ids 8 and 9 must reach a private domain behind uw1-320-00/01.uwb.edu, at 10.0.0.3, 10.0.0.4, and 10.0.0.7.)

  27. UWAgents – Over-Gateway Migration • Parent and children keep track of a route to each other's current position. • A daemon maintains where a gateway is. (Figure: agent id 1 is created with spawnChild( ), crosses the gateway medusa.uwb.edu with repeated hop( ) calls down to private nodes mnode0, mnode1, and mnode4 behind uw1-320-00/01.uwb.edu, and communicates with its parent, id 0, through talk( ).)

  28. GridTcp – Over-Gateway Connection. Each user program wrapper keeps a routing table mapping an MPI rank to its destination and the gateway to traverse, e.g.: rank 0 -> uw1-320-00 via medusa; rank 1 -> mnode0 via medusa; rank 2 -> medusa directly. (Figure: the commander (id 0) and sentinel id 9 (rank 2) sit on the Internet side at medusa.uwb.edu; sentinel id 2 (rank 0) and sentinel id 8 (rank 1) run in the private domain on uw1-320-00/01.uwb.edu and mnode0, mnode1, and mnode4.)

  29. Over-Gateway Agent Tree Creation: Possible Solutions. (Figure: the agent tree is partitioned across clusters: partition 1 holds the resource agent (id 1); partition 2 holds sentinel id 2 (rank 0) and bookkeeper id 3 (rank 0) in cluster 0, sentinels ids 8-11 (ranks 1-4) in cluster 2, and sentinels ids 32-35 and 46-47 (ranks 5-8 and 19-20) in cluster 1.)

  30. Over-Gateway Agent Tree Creation: Final Solution. (Figure: sentinels that sit on cluster gateways take negative ranks (e.g., id 8 has rank -8, ids 33-35 have ranks -33 to -35) and delegate user ranks to their descendants; desktop computers take ranks X, X+1, ..., X+4, while cluster-internal sentinels (ids 128-132 in cluster 0 and ids 512, 528-531 across clusters 1-3) take user ranks 0-10.)

  31. 4. Performance Evaluation • Evaluation environment: • An 8-node Myrinet-2000 cluster: 2.8GHz Pentium 4 Xeon w/ 512MB • A 24-node Gigabit-Ethernet cluster: 3.4GHz Pentium 4 Xeon w/ 512MB • Computation granularity • Java Grande MPJ benchmark • Process resumption overhead • File transfer

  32. Computational Granularity 1: Master-slave computation. (Figure: a master communicating with five slaves.)

  33. Computational Granularity 2: Heartbeat communication. (Figure: communication among five processes.)

  34. Computational Granularity 3: All-to-all broadcast. (Figure: communication among five processes.)

  35. Performance Evaluation - Series: master-slave computation.

  36. Performance Evaluation - RayTracer: all-reduce communication, but little data to send.

  37. Performance Evaluation - MolDyn: all-to-all broadcast.

  38. Overhead of Job Resumption

  39. File Transfer: AgentTeamwork vs NFS. (Figure: pipelined transfer in AgentTeamwork: the commander (id 0) passes files down the sentinel tree, from sentinel id 2 (rank 0) to ids 8-11 (ranks 1-4) and onward to ids 32-35 and 46-47 (ranks 5-8 and 19-20).)

  40. 5. Related Work From the viewpoints of: • System Architecture • Fault Tolerance • Job Deployment and Monitoring

  41. System Architecture • Difference from Catalina/J-SEAL2 • They are not fully implemented. • They are based on a master-slave model.

  42. Fault Tolerance

  43. Job Deployment and Monitoring

  44. 6. Conclusions • Project Summary • Next Two Years

  45. Project summary • Applications • Computation granularity: 40,000 doubles x 10,000 floating-point operations • Message transfer: Any types except all-to-all communication • Entire application size: 3+ times larger than computation granularity • Current status • UWAgent: completed • Agent behavioral design: basic job deployment/resumption implemented • User program wrapper: completed including security features • GridTcp/mpiJava: in testing • Preprocessor: almost completed

  46. Next Two Years • Application support • Fault tolerance in file transfer • GUI improvement • Agent algorithms • Over-gateway application deployment • Dynamic resource allocation and monitoring • Priority-based agent migration • Performance evaluation • Dissemination

  47. Can AgentTeamwork Become Their Competitor? (Figure: AgentTeamwork positioned against Nimrod.)

  48. Questions?

  49. MPJ.Send and Recv Performance

  50. Mobile Agents
