
WP4 report



  1. WP4 report Plans for testbed 2 Olof.Barring@cern.ch [Including slides prepared by Lex Holt.]

  2. Summary • Reminder on how it all fits together • What’s in R1.2 (deployed and not-deployed but integrated) • Piled up software from R1.3, R1.4 • Timeline for R2 developments and beyond • Conclusions

  3. How it all fits together (job management) [flow diagram linking the WP4 subsystems to the other WPs] • The grid user submits a job to the Resource Broker (WP1), which makes an optimized selection of site using the Grid Info Services (WP3) • Fabric gridification (WP4) authorizes the user, maps grid → local credentials, and publishes resource and accounting information • Resource Management (WP4) selects an optimal batch queue, submits, and returns job status and output • Monitoring (WP4) covers Farm A (LSF) and Farm B (PBS), which also serve local users • Data Mgmt (WP2) and Grid Data Storage (WP5) (mass storage, disk pools) handle the data side
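A minimal sketch of this flow, with all names and data below as illustrative stand-ins for the real WP1 Resource Broker, WP3 information services and WP4 subsystems:

    # Schematic, self-contained sketch of the job-management flow above.
    SITES = {"Farm A": "LSF", "Farm B": "PBS"}        # published via WP3 (illustrative)
    GRIDMAP = {"/O=Grid/CN=Alice": "alice01"}         # grid -> local credential mapping

    def broker_select_site(job):
        # WP1: optimized selection of site, using WP3 information services
        return min(SITES, key=lambda s: job["load"].get(s, 0.0))

    def gridify(grid_dn):
        # WP4 gridification: authorize, then map grid -> local credentials
        if grid_dn not in GRIDMAP:
            raise PermissionError(grid_dn + " not authorized")
        return GRIDMAP[grid_dn]

    def run(job, grid_dn):
        site = broker_select_site(job)
        local_user = gridify(grid_dn)
        # WP4 resource management: select queue, submit, return status/output
        print("submitting for %s to %s (%s)" % (local_user, site, SITES[site]))
        return {"site": site, "status": "Done"}

    print(run({"load": {"Farm A": 0.3, "Farm B": 0.7}}, "/O=Grid/CN=Alice"))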

  4. How it all fits together (system mgmt) [diagram: the WP4 subsystems Monitoring & Fault Tolerance, Resource Management, Configuration Management, Automation and Installation & Node Mgmt exchange information and invocations around Farm A (LSF) and Farm B (PBS)] • Node malfunction detected → remove node from queue, wait for running jobs(?), trigger repair • Repair (e.g. restart, reboot, reconfigure, …) and update configuration templates • Node OK detected → put node back in queue
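The repair sequence reads naturally as a small event loop; the sketch below only illustrates that sequence (the component and method names are invented, not WP4 APIs):

    # Illustrative state machine for the fault-handling sequence above.
    class Batch:
        def disable(self, node): print(node + ": removed from queue")
        def drain(self, node):   print(node + ": waiting for running jobs(?)")
        def enable(self, node):  print(node + ": put back in queue")

    def trigger_repair(node):
        # repair escalates: restart daemons, reconfigure, reboot, ...
        print(node + ": updating configuration templates, then repairing")

    def handle_event(event, batch, node):
        if event == "malfunction-detected":   # raised by monitoring/fault tolerance
            batch.disable(node)
            batch.drain(node)
            trigger_repair(node)
        elif event == "node-ok-detected":
            batch.enable(node)

    handle_event("malfunction-detected", Batch(), "node042")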

  5. How it all fits together (node autonomy) [diagram] • Central (distributed) services: the Measurement Repository with its correlation engines, and the Configuration Data Base • On each node: the Monitoring buffer sends a buffer copy to the central repository and feeds local Automation; node mgmt components read the node profile from a local cfg cache • Local recovery where possible (e.g. restarting daemons)
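A sketch of the node-autonomy idea, with invented metric names: every sample stays in a local buffer (so the node keeps working if the central repository is unreachable), a copy goes to the central measurement repository, and simple faults are repaired locally:

    # Local monitoring buffer plus local automation (illustrative).
    buffer = []

    def forward_copy(metric, value):
        print("central repository <- %s=%s" % (metric, value))

    def local_automation(metric, value):
        # local recovery where possible, e.g. restarting daemons
        if metric == "daemon.httpd.alive" and value == 0:
            print("restarting httpd locally, no central round trip needed")

    def sample(metric, value):
        buffer.append((metric, value))     # survives repository outages
        forward_copy(metric, value)        # "buffer copy" to the repository
        local_automation(metric, value)

    sample("daemon.httpd.alive", 0)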

  6. What’s in R1.2 (and deployed) • Gridification: • Library implementation of LCAS
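LCAS itself is a C library; the sketch below only illustrates the plug-in idea (the plug-in names, data and Python rendering are all invented): authorization is granted only if every configured plug-in accepts the request.

    # Invented illustration of LCAS-style plug-in authorization.
    def allowed_users(request):
        return request["user_dn"] in {"/O=Grid/CN=Alice", "/O=Grid/CN=Bob"}

    def banned_users(request):
        return request["user_dn"] != "/O=Grid/CN=Mallory"

    PLUGINS = [allowed_users, banned_users]

    def lcas_authorize(request):
        return all(plugin(request) for plugin in PLUGINS)

    print(lcas_authorize({"user_dn": "/O=Grid/CN=Alice"}))   # True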

  7. What’s in R1.2 but not used/deployed • Resource management • Information provider for Condor (not fully tested because you need a complete testbed including a Condor cluster) • Monitoring • Agent + first prototype repository server + basic linuxproc sensors • No LCFG object → not deployed • Installation mgmt • LCFG light exists in R1.2. Please give us feedback on any problems you have with it.
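As an idea of what a basic linuxproc sensor does, here is a minimal sketch that reads one metric from /proc and hands it to the agent (the output record format is invented):

    # Minimal linuxproc-style sensor sketch (Linux only).
    import time

    def sample_loadavg():
        # /proc/loadavg looks like: "0.42 0.37 0.31 1/123 4567"
        with open("/proc/loadavg") as f:
            load1 = float(f.read().split()[0])
        return {"metric": "LoadAvg1", "value": load1, "timestamp": int(time.time())}

    print(sample_loadavg())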

  8. Piled up software from R1.3, R1.4 • Everything mentioned here is ready, unit tested and documented (and RPMs are built by autobuild) • Gridification • LCAS with dynamic plug-ins (already in R1.2.1???) • Resource mgmt • Complete prototype of enterprise-level batch system management, with a proxy for PBS. Includes an LCFG object. • Monitoring • New agent. Production quality. Already used on CERN production clusters, sampling some 110 metrics/node. Has also been tested on Solaris. • LCFG object • Installation mgmt • Next generation LCFG: LCFGng for RH6.2 (RH7.2 almost ready)
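The "dynamic plug-ins" in the new LCAS are loaded at run time (dlopen() in the real C code). The Python analogue below only shows the pattern; the module and function names are invented:

    # Run-time plug-in loading, analogous to dlopen()-based LCAS plug-ins.
    import importlib

    def load_plugins(names):
        # each module is assumed to expose confirm_authorization(request)
        return [importlib.import_module(n).confirm_authorization for n in names]

    # e.g.: plugins = load_plugins(["lcas_userallow", "lcas_userban"])
    #       authorized = all(p(request) for p in plugins)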

  9. New LCFG [Lex Holt] • EDG release 1.3: more recent LCFG version (LCFGng) • Many improvements: • Supports Red Hat 7.2 as well as 6.2 • Install/boot: full DHCP support, PXE support, can mix init.d scripts & LCFG components • Single LCFG server can configure machines in multiple domains • Spanning maps: profile generator (mkxprof) can gather individual machine data (e.g., MAC addresses) and publish it to a component (e.g., DHCP server) • Component method semantics clarified; native Perl components possible; EDG-style monitoring support
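A sketch of the spanning-map idea under invented data: the profile generator collects one resource (here MAC addresses) from every machine profile and publishes the combined map to a single component (here the DHCP server):

    # Invented illustration of an LCFGng spanning map.
    PROFILES = {
        "node01": {"mac": "00:30:48:11:22:33", "ip": "192.168.1.11"},
        "node02": {"mac": "00:30:48:44:55:66", "ip": "192.168.1.12"},
    }

    def spanning_map(profiles, key):
        return {host: prof[key] for host, prof in profiles.items()}

    # published into the DHCP component's resources by the profile generator
    print(spanning_map(PROFILES, "mac"))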

  10. LCFG Migration [Lex Holt] • Clients require reinstallation • There will be guidelines for migrating servers without reinstallation; some manual tweaking will be necessary, e.g.: • Locations (pathnames) have changed • Resources have changed or moved as a consequence of component changes • Component writers/maintainers need to absorb a few technical changes

  11. Timeline for R2 developments • Configuration management: complete central part of framework • High Level Definition Language: 30/9/2002 • PAN compiler: 30/9/2002 • Configuration Database (CDB): 31/10/2002 • Installation mgmt • LCFGng for RH7.2: 30/9/2002 • Monitoring: complete final framework • TCP transport: 30/9/2002 • Repository server: 30/9/2002 • Repository API WSDL: 30/9/2002 • Oracle DB support: 31/10/2002 • Alarm display: 30/11/2002 • Open Source DB (MySQL or PostgreSQL): mid-December 2002

  12. Timeline for R2 developments • Resource mgmt • GLUE info providers: 15/9/2002 • Maintenance support API (e.g. enable/disable a node in the queue): 30/9/2002 • Provide accounting information to WP1 accounting group: 30/9/2002 • Support Maui as scheduler • Fault tolerance framework • Various components already delivered • Complete framework by end of November
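For PBS, the maintenance support API could come down to wrapping the pbsnodes command (-o marks a node offline, -c clears the mark); the Python wrapper below is only a sketch, not the planned interface:

    # Sketch of enable/disable for a PBS node via pbsnodes.
    import subprocess

    def set_node_state(node, enabled):
        flag = "-c" if enabled else "-o"      # clear / set the offline mark
        subprocess.run(["pbsnodes", flag, node], check=True)

    # set_node_state("node042", enabled=False)   # drain for maintenance
    # set_node_state("node042", enabled=True)    # return to production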

  13. Beyond release 2 • Conclusion from the WP4 workshop, June 2002: LCFG is not the future for EDG (see the WP4 quarterly report for 2Q02) because: • LCFG puts inherent constraints on the configuration schema (per-component config) • LCFG is a project of its own and our objectives do not always coincide • We have learned a lot from the LCFG architecture and we continue to collaborate with the LCFG team • EDG future: first release by end-March 2003 • Proposal for a common schema for all fabric configuration information to be stored in the configuration database, implemented using the HLDL • New configuration client and node management replacing the LCFG client (the server side will already have been delivered in October) • New software package management (replacing updaterpms) split into two modules: an OS-independent part and an OS-dependent part (the packager)
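A sketch of the proposed package-management split, with all names invented: the OS-independent part computes the difference between the desired and installed package lists, and an OS-dependent packager applies it (rpm here; another packager could drive a different format):

    # Invented illustration of the OS-independent / OS-dependent split.
    import subprocess

    def compute_delta(desired, installed):
        # OS-independent: what to add and what to remove
        return sorted(set(desired) - set(installed)), \
               sorted(set(installed) - set(desired))

    def rpm_packager(to_install, to_erase):
        # OS-dependent part (rpm takes package files for -U, names for -e)
        if to_install:
            subprocess.run(["rpm", "-Uvh"] + to_install, check=True)
        if to_erase:
            subprocess.run(["rpm", "-e"] + to_erase, check=True)

    install, erase = compute_delta(["openssh-3.1"], ["openssh-2.9"])
    print(install, erase)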

  14. WP4 plans (sketch/snapshot) [Lex Holt] • Caveat: installation & configuration tasks only • Release 2 to allow (but not require) use of the new high-level description language (HLDL) • Release 3: LCFG architecture roughly retained, but • HLDL replaces LCFG source file syntax • HLDL files accessed via new configuration database (akin to API wrapper round CVS repository) • XML profile much as before • Redesigned (probably Perl) components interpret profile through more substantial API/libraries (registration, dependency analysis, …) • Single Configure() call to component does everything • Generalized updaterpms may handle non-RPM formats
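A sketch of the redesigned-component idea, assuming an invented base class (the slide suggests the real components will probably be Perl): each component reads its own slice of the XML profile and exposes a single Configure() call that does everything:

    # Invented illustration of a component with a single Configure() call.
    class Component:
        def __init__(self, name, profile):
            self.name = name
            self.resources = profile.get(name, {})   # this component's resources

        def Configure(self):
            raise NotImplementedError

    class NtpComponent(Component):
        def Configure(self):
            servers = self.resources.get("servers", [])
            print("writing ntp.conf with %s; restarting ntpd" % servers)

    NtpComponent("ntp", {"ntp": {"servers": ["ntp1.example.org"]}}).Configure()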

  15. Summary • Substantial amount of s/w piled up from R1.3, R1.4 to be deployed now • R2 also includes two large components: • LCFGng – migration is non-trivial, but we already perform much of the non-trivial part ourselves, so TB integration should be smooth • Complete monitoring framework • Beyond R2: LCFG is not the future for EDG WP4. First version of the new configuration and node management system in March 2003
