
Presentation Transcript


  1. Globus Grid Middleware Toolkit Otto Sievert CSE 225 8 June, 2000

  2. Globe and other European Grid Activities Otto Sievert CSE 225 8 June, 2000

  3. EGRID • European Grid Community • Collaborative community, not a standards group • Commercial and Academic Interests • www.egrid.org

  4. European Tour • Netherlands • Germany • Poland • Italy • Sweden

  5. Grid Theme 1 • Be very (very) careful when choosing a project name.

  6. Amsterdam: Globe • Vrije Universiteit • Maarten van Steen • Andrew Tanenbaum • “Middleware to facilitate large-scale distributed applications” • Web focus • object-based coherency

  7. Globe Uniqueness • Too much data, too few resources (bandwidth, etc.) • Caching helps • Data coherency is integral to the cache policy • Drop the constraint of a single replication/distribution policy for all objects • example: web pages
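The per-object policy idea on this slide is easier to see in code. Below is a minimal sketch, not Globe's actual API: each object carries its own replication/coherency policy, so a heavily cached home page and a single-copy document can behave differently.

```python
# Hypothetical sketch (not Globe's real interfaces): every object picks its own
# replication/coherency policy instead of the system imposing one global policy.

class ReplicationPolicy:
    """Decides where copies live and how they stay coherent."""
    def should_replicate_at(self, site, obj_stats):
        raise NotImplementedError
    def on_write(self, replicas):
        raise NotImplementedError

class MasterSlaveCaching(ReplicationPolicy):
    """Cache copies near heavy readers; writes invalidate the cached copies."""
    def should_replicate_at(self, site, obj_stats):
        return obj_stats["reads_from"].get(site, 0) > 10
    def on_write(self, replicas):
        for r in replicas:
            r.invalidate()          # keep caches coherent after an update

class NoReplication(ReplicationPolicy):
    """One authoritative copy; every access is remote."""
    def should_replicate_at(self, site, obj_stats):
        return False
    def on_write(self, replicas):
        pass

stats = {"reads_from": {"amsterdam": 42, "san_diego": 3}}
print(MasterSlaveCaching().should_replicate_at("amsterdam", stats))   # True
print(NoReplication().should_replicate_at("amsterdam", stats))        # False

# The point of the slide: different objects (e.g. web pages) can pick
# different policies.
policies = {"home.html": MasterSlaveCaching(), "scores.html": NoReplication()}
```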

  8. Globe Object • Physically Distributed • Replicated • Distribution Policy

  9. Globe Local Object • 4 subobjects (minimum) • Modularity
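To illustrate the modularity claim, here is a hedged sketch of a local object assembled from four subobjects. The subobject roles (semantics, communication, replication, control) follow the published Globe design; the interfaces and method names below are invented for illustration.

```python
# Sketch of the "local object = composition of subobjects" idea.

class SemanticsSubobject:
    """The actual application logic, e.g. methods on a web document."""
    def invoke(self, method, *args):
        return f"{method}{args} executed locally"

class CommunicationSubobject:
    """Moves messages between this replica and remote replicas."""
    def send(self, replica_address, message): ...
    def receive(self): ...

class ReplicationSubobject:
    """Implements the object's coherency policy around each call."""
    def __init__(self, comm):
        self.comm = comm
    def pre_invoke(self, method):
        pass    # e.g. acquire a write token or pull a fresh copy
    def post_invoke(self, method):
        pass    # e.g. propagate the update to other replicas

class ControlSubobject:
    """Glue: routes client calls through replication to semantics."""
    def __init__(self, semantics, replication):
        self.semantics, self.replication = semantics, replication
    def call(self, method, *args):
        self.replication.pre_invoke(method)
        result = self.semantics.invoke(method, *args)
        self.replication.post_invoke(method)
        return result

local = ControlSubobject(SemanticsSubobject(),
                         ReplicationSubobject(CommunicationSubobject()))
print(local.call("get_page", "/index.html"))
```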

  10. Globe Binding 1. Name server 2. Object handle 3. Location service 4. Contact points 5. Choose point 6. Repository 7. Protocol 8. Bind!
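As a rough illustration of the eight steps, the sketch below strings them together in order; the NameServer, LocationService, and implementation-repository interfaces are hypothetical stand-ins, not Globe's real classes.

```python
# Minimal sketch of the binding sequence on the slide.
def bind(object_name, name_server, location_service, repository):
    handle = name_server.resolve(object_name)            # 1-2: name -> object handle
    contact_points = location_service.lookup(handle)     # 3-4: handle -> contact points
    contact = min(contact_points,                        # 5: choose a contact point
                  key=lambda c: c.estimated_latency)
    proxy_class = repository.fetch(contact.protocol_id)  # 6-7: load the right protocol
    return proxy_class.connect(contact.address)          # 8: bind!
```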

  11. Legion Binding • Two-stage name resolution • Binding agent • No implementation repository

  12. Autopilot Binding 1. Sensor registers with the sole manager 2. AP client requests sensors from the manager 3. Manager returns available sensors 4. Client and sensor communicate directly
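A small sketch of that manager-mediated flow, with illustrative names only: sensors register with the single manager, a client queries it, and the client then talks to the matching sensors directly.

```python
class SensorManager:
    def __init__(self):
        self.sensors = {}                        # sensor name -> contact address
    def register(self, name, address):           # step 1: sensor registers
        self.sensors[name] = address
    def find(self, pattern):                     # steps 2-3: client asks, manager answers
        return {n: a for n, a in self.sensors.items() if pattern in n}

manager = SensorManager()
manager.register("cpu_load@host1", "host1:7001")
manager.register("bandwidth@host1-host2", "host1:7002")

for name, address in manager.find("cpu_load").items():
    print(f"connect directly to {name} at {address}")    # step 4: direct channel
```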

  13. Globe Claim: the Remote Object Model Lacks Replication • Globe: objects can be replicated yet still maintain one logical state, which allows complex coherency policies • Legion: in theory, supports replication; replicated state allows some, but not all, coherency policies; in practice, replication is not allowed

  14. Globe Architecture • Why all these servers? • separate naming from locating • allow extensible binding protocols (?)

  15. Grid Theme 2: Communication • Centralized (NWS, Globus MDS*): simple management, single point of failure • Distributed (NetSolve, Fran’s Sensor Net): complex management, scalable • Mixed (Legion, Globe, Autopilot)

  16. Globe Implementation Example • Set of HTML/image/Java files • One semantics subobject • Browsers aren’t that extensible, so a gateway sits between the browser and the object: browser → gateway → http://globe.foo.edu:8989/dir/file.html
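The gateway idea can be sketched as an ordinary HTTP server that binds to the Globe object on the browser's behalf; globe_bind() below is a placeholder for the binding sequence of slide 10, and the port comes from the slide's example URL.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def globe_bind(path):
    """Placeholder: resolve the path to a Globe object proxy (see slide 10)."""
    class FakeProxy:
        def read(self):
            return f"<html>contents of Globe object for {path}</html>".encode()
    return FakeProxy()

class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        proxy = globe_bind(self.path)      # bind to the distributed object
        body = proxy.read()                # invoke it through the local proxy
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)             # return plain HTML to the browser

if __name__ == "__main__":
    # Port 8989 as in the slide's example URL http://globe.foo.edu:8989/dir/file.html
    HTTPServer(("", 8989), GatewayHandler).serve_forever()
```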

  17. Globe Example (cont’d) • Replication Policies • Object-based • “Permanent store” • “Object-initiated store” • Client-based • “Client-initiated store” • How is this any better than what we have now?

  18. Globe Live Demo ...

  19. Globe Location Service • Scalability questions • Tree hierarchy • again Legion-like in its worst-case behavior • Typical solutions assume a mobile client • Globe’s solution assumes mobile software
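A hedged sketch of a tree-structured location service: lookups that miss in the local subtree escalate toward the root, which is exactly the worst-case behavior the slide flags. The node interface is illustrative, not Globe's actual location-service API.

```python
class LocationNode:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.records = {}                # object handle -> contact point

    def register(self, handle, contact):
        node = self
        while node is not None:          # leave a record on the path up to the root
            node.records[handle] = contact
            node = node.parent

    def lookup(self, handle):
        node = self
        while node is not None:
            if handle in node.records:
                return node.records[handle]
            node = node.parent           # miss: escalate; worst case ends at the root
        return None

root = LocationNode("world")
europe = LocationNode("europe", parent=root)
nl = LocationNode("nl", parent=europe)
nl.register("object-42", "vu.nl:9000")
print(europe.lookup("object-42"))        # resolved without querying the root
```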

  20. Does This Work? • Single experiment - 5 wk. web server trace • compare • no caching • various complex replication/coherency policies • automatic adaptive policy • Results • (essentially) any replication scheme wins big • individual object adaptivity didn’t perform well

  21. Globe: Conclusion • Explicit coherency is clearly a Good Thing • Security? • Representative implementation?

  22. Germany: Cactus • Albert Einstein Institute, Potsdam • Thomas Radke • Ed Seidel • Distributed Astrophysics • Software Engineering • NCSA “hot code”

  23. Cactus • Separate CS from disciplinary science • Flesh = CS architecture • Thorns = science modules • 2-stage compilation • encapsulation • modularity • reuse

  24. Cactus Compilation • Two stage • Permanently bind thorns [Perl] • Compile binary [C++/F77] • Efficient • Don’t carry unneeded thorn info
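A rough sketch of the two-stage idea, not Cactus's real build machinery (which uses Perl and CCL files): stage 1 reads a thorn list and generates glue code that registers only the chosen thorns, stage 2 compiles just that set, so the binary carries no unneeded thorn info. File names and the generated C are illustrative.

```python
import subprocess

def stage1_bind(thornlist_path, glue_path="thorn_bindings.c"):
    """Stage 1: permanently bind the listed thorns by generating glue code."""
    with open(thornlist_path) as f:
        thorns = [line.strip() for line in f
                  if line.strip() and not line.startswith("#")]
    with open(glue_path, "w") as glue:
        glue.write("/* generated: register only the thorns we asked for */\n")
        for t in thorns:
            glue.write(f"extern void {t}_Startup(void);\n")
        glue.write("void RegisterThorns(void) {\n")
        for t in thorns:
            glue.write(f"    {t}_Startup();\n")
        glue.write("}\n")
    return thorns

def stage2_compile(thorns, glue_path="thorn_bindings.c"):
    """Stage 2: compile the flesh plus only the bound thorns into one binary."""
    sources = ["flesh.c", glue_path] + [f"thorns/{t}.c" for t in thorns]
    subprocess.run(["cc", "-O2", "-o", "cactus_app"] + sources, check=True)

if __name__ == "__main__":
    with open("ThornList", "w") as f:
        f.write("WaveToy\nIOBasic\n")
    thorns = stage1_bind("ThornList")       # stage 1
    # stage2_compile(thorns)                # stage 2 would invoke the C compiler
    print(open("thorn_bindings.c").read())
```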

  25. Grid Theme 3: Applications • Numerical, or Non • Computation vs. Specialization • Performance Measures: • Execution Time • Scale • Efficiency • Distribution

  26. Grid Theme 4: Transparency • Ease of Use vs. High Performance • As system becomes opaque, EoU increases • As system becomes opaque, Perf decreases • Where is the balance?

  27. Germany: Gridware • 1999 San Jose-based merger of two companies: Genias GmbH and Chord (U.S.) • CoDINE • Compute farm load balancing system • Recently adopted by Sun™ • PaTENT • WinNT MPI

  28. Grid Theme 5: Commoditization • Reuse is strong in the Grid • Resources (Beowulf) • Middleware (Globus, PaTENT) • Applications (Cactus) • Industry is influential • Largest grid apps in use today are commercial • Grid-ernet is profitable

  29. To This Point ...

  30. Germany: Unicore • UNIform Computer Resources (German supercomputer access) • Goal: provide uniform access to high-performance computers; it is painful to learn each site’s OS details, data storage conventions, and administration policies • 3-phase project: I. self-contained jobs, II. remote data access, III. concurrent remote execution

  31. Unicore (cont’d) • How is this done? • Web (Java) user interface • X.509 authentication • Network Job Supervisor • interprets Abstract Job Objects • manages jobs and data • interfaces with local batch systems (like LoadLeveler and CoDINE) • vs. Globus?
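To make the Network Job Supervisor's role concrete, here is an illustrative sketch of translating one site-independent job description into two local batch dialects; the AbstractJob fields and the LoadLeveler/CODINE keywords are simplified assumptions, not Unicore's actual AJO format.

```python
from dataclasses import dataclass

@dataclass
class AbstractJob:
    name: str
    executable: str
    cpus: int
    wall_minutes: int

def to_loadleveler(job: AbstractJob) -> str:
    """Render the abstract job as a LoadLeveler-style command file (simplified)."""
    return "\n".join([
        f"# @ job_name = {job.name}",
        f"# @ wall_clock_limit = 00:{job.wall_minutes:02d}:00",
        f"# @ total_tasks = {job.cpus}",
        "# @ queue",
        job.executable,
    ])

def to_codine(job: AbstractJob) -> str:
    """Render the same job as a CODINE/qsub-style submission (simplified)."""
    return (f"qsub -N {job.name} -pe mpi {job.cpus} "
            f"-l h_rt={job.wall_minutes * 60} {job.executable}")

job = AbstractJob("cfd_run", "./solver", cpus=16, wall_minutes=30)
print(to_loadleveler(job))   # the NJS picks whichever translation matches the site
print(to_codine(job))
```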

  32. Poland: POL-34 • National Grid • Much like the setting Unicore targets: a collection of widely distributed parallel computers • Tree-connected ATM network

  33. POL-34 • Yellow = 2 Mb/s • Red = 34 Mb/s • Cyan = 155 Mb/s • Single administrative domain via Virtual Users (skirting the grid issue) • Use Load Sharing Facility (LSF)

  34. Italy: SARA • University of Lecce, Italy: Giovanni Aloisio (with Roy Williams of Caltech) • Synthetic Aperture Radar Atlas • Distributed, data-intensive app • Alan Su and the UCSD AppLeS group are involved

  35. SARA Architecture • The goal: easy, fast, efficient retrieval and processing of SAR image data • Issues • data is distributed, stored in tracks • complex hierarchical system • Prototypical grid app

  36. Sweden: Computational Steering • Parallelldatorcentrum, Royal Institute of Technology: Per Oster • Using Globus and the Visualization Toolkit (VTK) to steer a single CFD code • Little data available • Eclipsed by Autopilot

  37. Grid Theme 6: Heterogeneity • Some attempt to hide it • Globus, CORBA, Java/Jini • Some take advantage of it • Netsolve, Ninf • Some characterize and manage it • AppLeS (SARA), NWS

  38. To This Point ...

  39. Conclusion • Explored Globe, Cactus, and other EuroGrid favorites in the context of • Communication architectures • Grid application characteristics • Grid transparency • Commodity computing influence • Grid heterogeneity

  40. Network Weather Service (NWS) • Rich Wolski, U. Tenn. • Monitors and predicts Grid resources • network latency, bandwidth • CPU load, available memory • Central NWS data server • nws.npaci.edu/NWS
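The monitor-and-predict loop can be sketched as keeping a history of measurements and forecasting the next value with a few simple predictors, reporting whichever one has accumulated the least error; this mirrors the NWS approach in spirit, but the predictors and API below are illustrative, not the NWS interface.

```python
class Forecaster:
    def __init__(self, window=10):
        self.history = []
        self.window = window
        self.errors = {"last_value": 0.0, "mean": 0.0}   # cumulative error per predictor

    def add_measurement(self, value):
        # Score each predictor against the new measurement before storing it.
        if self.history:
            for name, prediction in self._predictions().items():
                self.errors[name] += abs(prediction - value)
        self.history.append(value)

    def _predictions(self):
        recent = self.history[-self.window:]
        return {"last_value": recent[-1], "mean": sum(recent) / len(recent)}

    def forecast(self):
        best = min(self.errors, key=self.errors.get)     # lowest cumulative error wins
        return self._predictions()[best]

f = Forecaster()
for bandwidth in [92.0, 88.5, 90.1, 35.0, 89.7]:         # e.g. Mb/s samples
    f.add_measurement(bandwidth)
print(f.forecast())
```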
