
Distributed Computing Resources

This article discusses the distributed computing resources used by BaBar, including Tier-A centres like SLAC, RAL, IN2P3, Padova, GridKa, and their usage for data analysis, skimming, and prompt reconstruction. It also highlights the success of the BaBar collaboration in breaking the centralized computing model and the future prospects of incorporating grid tools and grid authorizations.



Presentation Transcript


  1. Distributed Computing Resources • Tier A Centres • SLAC, RAL, IN2P3, Padova, GridKa • Grid BaBar Distributed Computing Resources

  2. RAL • Analysis and skimming • Typically 40 users – mostly non-UK • 39 TB now, 21 TB imminent • 368 CPUs (1.0–2.4 GHz) • 2 farms – running RH 7.2 and 7.3 • Validation of Red Hat 7.3 by BaBar is a top priority

  3. CPUs just arrived

  4. RAL Tier 1/A usage • http://www.gridpp.ac.uk/tier1a

  5. UK Tier B/C • ~9 university sites, each with ~80 CPUs and a few TB of disk • Used for SP (and a bit of analysis) • Not open to all BaBarians – yet • Expect new regional Grid centres (ScotGrid…) to have strong BaBar use

  6. IN2P3 • Objectivity centre • 453 × 2 CPUs available (a good fraction in use) • 30 TB available, 16 TB on order (split between Objectivity and Xrootd according to need) • 133 TB available through HPSS (with new 200 GB cartridges)

  7. Padova • PromptReco, SP, and analysis farm • 194 CPUs (various types) for reprocessing, 51 for SP, 53 for analysis (plus servers) • 38 TB now, 19 TB arriving

  8. Karlsruhe – GridKa • SP now (analysis later) • 16-node start (300+ nodes at the centre)

  9. The Tier A Success Story • BaBar has broken the ‘centralised computing’ model – which was based on (a) prejudice and (b) experience • This was essential, as one site (SLAC) could not support all the activities of the whole collaboration • Success thanks to generous BaBar rebates, imaginative funding agencies, proactive user support at sites, and the adaptability of the collaboration

  10. Grid comes next • Tools to manage distributed resources • A Grid-tools requirement was explicit in the GridKa funding, and is linked to funding in other places too • And it makes sense

  11. Grid Authorisation • Have a VO (Virtual Organisation) that works • Upgrading to the VSC (Virtual Smart Card) method as it becomes available • Pool accounts • Mutual recognition of certificates – a set of Certificate Authorities recognised by all BaBarGrid sites
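In Globus-based grids of this era, certificate-to-account mapping and pool accounts were commonly expressed in a grid-mapfile; as an illustrative sketch only (the DNs below are invented), a leading dot on the account name marks a pool of local accounts rather than a single user:

```
# Sketch of a grid-mapfile (hypothetical subject DNs).
# Each certificate subject maps either to a named local account or,
# via the leading dot, to the next free account in the "babar" pool
# (e.g. babar001, babar002, ...).
"/C=UK/O=eScience/OU=RAL/CN=Alice Example"   .babar
"/O=Grid-FR/O=CNRS/OU=IN2P3/CN=Bob Exemple"  .babar
```

Mutual recognition then amounts to every BaBarGrid site accepting certificates issued by the agreed set of Certificate Authorities.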

  12. Data Location • The existing skimData tool is gaining Grid features • Extended skimData tables let users find out whether files exist at other sites
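The talk does not show skimData's actual schema or interface; purely to illustrate the idea of a site-aware existence lookup (all names and data below are hypothetical), such a query might look like:

```python
# Hypothetical sketch of a site-aware data-location lookup, in the
# spirit of the extended skimData tables (names are invented).

# Toy "extended table": skim name -> sites holding a copy.
SKIM_LOCATIONS = {
    "BReco-Run3": {"SLAC", "RAL", "IN2P3"},
    "Tau11-Run3": {"SLAC", "Padova"},
}

def sites_with_skim(skim):
    """Return the set of sites known to hold the given skim."""
    return SKIM_LOCATIONS.get(skim, set())

def skim_available_at(skim, site):
    """True if the skim is recorded as present at the site."""
    return site in sites_with_skim(skim)
```

A user could then ask whether a skim must be fetched from SLAC or can be read from a closer Tier A site.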

  13. Data movement • BdbServer++ – a Grid version of BdbServer – for accessing data across the network/Grid (see Tim Adye’s talk)

  14. Job submission • BaBar is a member of EDG (European Data Grid: the LHC experiments plus some others) • EDG will hand over to LCG (LHC Computing Grid) • Use EDG and LCG technology and benefit from LHC computing manpower – but without being tied to the whole package • Experience with the Resource Broker at IC and in Spain, and with the Replica Catalog at Manchester • Job submission between sites is becoming routine – on the verge of becoming useful
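EDG jobs were described in JDL (Job Description Language) and handed to the Resource Broker with `edg-job-submit`; as a hedged sketch (the script and file names are invented, and the exact attributes a site required varied), a minimal job description looked roughly like:

```
# myjob.jdl -- minimal illustrative EDG job description
Executable      = "/bin/sh";
Arguments       = "run_analysis.sh";
InputSandbox    = {"run_analysis.sh"};
StdOutput       = "stdout.log";
StdError        = "stderr.log";
OutputSandbox   = {"stdout.log", "stderr.log"};
VirtualOrganisation = "babar";
```

The Resource Broker matches the job against the available Compute Elements, so the same description can run at any BaBarGrid site that accepts the VO.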

  15. Present status • Grid middleware at many sites • Storage Elements and Compute Elements set up • Incompatible versions of EDG releases, Globus, etc. (Red Hat version mismatches; EDG still on Red Hat 6.2) • LCG forking from EDG • LCG-1 deployment early July (in parallel with EDG-2) • Suggestion: go with LCG rather than EDG • Reliable service expected January 2004

  16. Conclusions • Tier A expansion will continue • Lots of BaBarGrid activity • The Grid can be assimilated within the BaBar computing model • We will see more and more Grid tools in use
