
M.C. Vetterli; SFU/TRIUMF






Presentation Transcript


  1. The following is a collection of slides from a few recent talks on computing for ATLAS in Canada, plus a few new ones. I might refer to all of them, or not, depending on time and the scope Les wants covered.

  2. The Canadian Model
  • Establish a large computing centre at TRIUMF that will be on the LCG and will participate in the common tasks associated with Tier-1 and Tier-2 centres.
  • Canadian groups will use existing CFI facilities (or what they will become) to do physics analysis. They will access data and the LCG through TRIUMF.
  • The jobs are smaller at this level and can be more easily integrated into shared facilities. We can also be independent of LCG middleware.
  In this model, the TRIUMF centre acts as the hub of the Canadian computing network, and as an LCG node.

  3. The ATLAS-Canada Computing Model (diagram)
  • CERN and the ATLAS Grid (USA, Germany, France, UK, Italy, …) provide ESD, access to RAW & ESD, and MC data; in return they receive ESD’, calibration, and access to the Canadian Grid.
  • The TRIUMF Gateway links the ATLAS Grid to the Canadian Grid over CA*Net4, providing access to the ATLAS Grid, AOD, DPD, and technical expertise.
  • The Canadian Grid sites (UVic, SFU, UofA, UofT, Carleton, UdeM; CFI funded) contribute CPU/storage and experts: algorithms, calibration, MC production.

  4. What Will We Need at TRIUMF?
  • Total computing power needed: 1.8 MSI2k (≈ 250 dual 10 GHz nodes, or 5000 × 1 GHz CPUs)
  • Total storage required: 340 TB of disk and 1.2 PB of tape
  • We assume that the network will be 10 GbitE for both the LAN and WAN
  • These numbers have been supported by an expert advisory committee
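The CPU-count equivalents on this slide follow from simple scaling. A minimal sketch, assuming (as the 5000 × 1 GHz figure implies) that SI2k scales linearly with clock speed at roughly 360 SI2k per GHz per CPU; the function name and constants are illustrative, not from the slides:

```python
# Convert the 1.8 MSI2k requirement into node counts, assuming a
# linear SI2k-vs-clock scaling of 360 SI2k per GHz per CPU
# (implied by the slide's "5000 x 1 GHz CPUs" equivalence).

TOTAL_MSI2K = 1.8
SI2K_PER_GHZ = 360  # assumed: 1.8e6 SI2k / (5000 CPUs * 1 GHz)

def nodes_needed(clock_ghz: float, cpus_per_node: int = 1) -> int:
    """Nodes required to reach the total requirement (ceiling)."""
    per_node = int(SI2K_PER_GHZ * clock_ghz * cpus_per_node)
    total_si2k = int(TOTAL_MSI2K * 1e6)
    return -(-total_si2k // per_node)  # ceiling division

print(nodes_needed(1.0))        # 5000 single 1 GHz CPUs
print(nodes_needed(10.0, 2))    # 250 dual 10 GHz nodes
```

Both slide figures come out of the same scaling assumption, which is only a rough planning rule of thumb.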

  5. Acquisition Profile (chart; details not preserved in the transcript)

  6. The TRIUMF Centre - II
  • 8 new people to run the centre are included in the budget: 4 for system support, 4 for software/user support.
  • Also one dedicated technician.
  • Personnel in the university centres will be mostly for system support.
  • More software support will be available from ATLAS postdocs.

  7. Status of Funding
  • The TRIUMF centre will be funded through the next TRIUMF 5-year plan, which starts Apr. 1, 2005. A decision on this is expected around the end of this year.
  • University centres are funded through the Canada Foundation for Innovation and the provincial governments. These centres exist as shared facilities and should continue to be funded.
  • Ask CFI for funds for a second large centre? This is driven by new requirements for T1 centres; we have just started discussing it.

  8. The TRIUMF Prototype Centre
  • Hardware: 5 dual 2.8 GHz Xeon nodes; 6 white boxes (2 CE, LCfGng, UI, LCG-GIIS, spare); 1 SE (770 GB usable disk space)
  • Functionality: LCG core node (CE #1); gateway to Grid-Canada & WestGrid (CE #2); Canadian regional centre (coordinates & pre-certifies Canadian LCG centres; primary contact with LCG)
  • Middleware: Grid inter-operability, i.e. integrating non-LCG sites; there is a lot of interest in this (UK, US)
  Rod Walker (SFU research associate) has been invaluable!

  9. The Other Canadian Sites
  • Victoria: Grid-Canada Production Grid (PG-1); Grid inter-operability (Dan Vanderster et al.)
  • SFU/WestGrid: non-LCG test site (incorporated into LCG through TRIUMF)
  • Alberta: Grid-Canada Production Grid (PG-1); LCG node; coordination of DC2 for Canada (Bryan Caron)
  • Toronto: LCG node; ATLAS software mirror
  • Montreal: LCG node
  • Carleton: LCG node

  10. Canadian DC2 Computing Resources
  • 400 × 2.8 GHz CPUs (note: 1 kSI2k ≈ one 2.8 GHz Xeon)
  • 23 TB of disk
  • 50 TB of tape

  11. Federated Grids for ATLAS DC2 (diagram)
  • Grid-Canada PG-1 and WestGrid are linked to the LCG through the LCG/Grid-Can and LCG/WestGrid gateways at SFU/TRIUMF, in addition to the LCG resources in Canada.

  12. Linking HEPGrid to LCG (diagram)
  Publishing resources (class ads via MDS):
  1) Each GC resource publishes a class ad to the GC collector.
  2) The GC CE aggregates this info and publishes it to TRIUMF as a single resource.
  3) The same is done for WG (UBC/TRIUMF).
  4) TRIUMF aggregates GC & WG and publishes this to LCG as one resource.
  5) TRIUMF also publishes its own resources separately.
  6) The process is repeated on GC if necessary.
  Routing a job (LCG BDII/RB/scheduler → TRIUMF negotiator/scheduler → Grid-Can negotiator/scheduler):
  1) The LCG RB decides where to send the job (GC/WG or the TRIUMF farm).
  2) The job goes to the TRIUMF farm (cpu & storage), or:
  3) The CondorG job manager at TRIUMF builds a submission script for the TRIUMF Grid.
  4) The TRIUMF negotiator matches the job (class ad) to GC or WG and decides where to send it.
  5) The job is submitted to the proper resource.
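The two ideas on this slide (aggregating many resources into a single published class ad, and a negotiator matching a job ad against site ads) can be sketched in miniature. This is an illustrative toy, not actual Condor-G or LCG middleware; all names, keys, and numbers are invented for the example:

```python
# Toy model of the slide's class-ad flow: class ads are plain dicts,
# aggregation sums free CPUs, and a minimal "negotiator" picks the
# first site that satisfies the job's requirement.

def aggregate(name, ads):
    """Publish a set of resource ads upstream as one class ad."""
    return {"Name": name, "FreeCpus": sum(a["FreeCpus"] for a in ads)}

def match(job_ad, site_ads):
    """Minimal negotiator: first site with enough free CPUs, else None."""
    for ad in site_ads:
        if ad["FreeCpus"] >= job_ad["RequestCpus"]:
            return ad["Name"]
    return None

# 1) Each GC resource publishes a class ad to the GC collector;
# 2) the GC CE aggregates them into a single resource...
gc_ad = aggregate("Grid-Canada", [{"FreeCpus": 4}, {"FreeCpus": 0}])
# 3) ...and the same is done for WestGrid.
wg_ad = aggregate("WestGrid", [{"FreeCpus": 64}])
# 4) TRIUMF aggregates GC & WG and publishes one resource to LCG.
triumf_ad = aggregate("TRIUMF-gateway", [gc_ad, wg_ad])

# The TRIUMF negotiator matches an incoming job to GC or WG.
job = {"RequestCpus": 16}
print(match(job, [gc_ad, wg_ad]))  # WestGrid (GC has only 4 free CPUs)
```

The real system publishes far richer class ads and uses Condor matchmaking expressions rather than a first-fit loop, but the hierarchy of aggregate-then-match is the same shape as in the diagram.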
