
User Board

  1. User Board Glenn Patrick GridPP20, 11 March 2008

  2. Tier 1: Non-Grid Access
  • Classical PBS/qsub access to Tier 1 restricted on 21 Feb. Access to the UI also restricted.
  • List of exclusions agreed through the UB (identifiers per experiment; tallied in the sketch below):
    ATLAS 4, BaBar 9, CALICE 2, CMS 7, Dteam 1, LHCb 4,
    MINOS 24 (reduces to 8 three months after a working Castor instance).
    TOTAL 51.
  • Includes some AFS accounts.
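
  A quick arithmetic check of the exclusion list: a minimal Python sketch (the
  experiment/count pairs come from the slide; the variable names are mine):

      # Exclusion counts agreed through the UB (identifiers per experiment).
      exclusions = {"ATLAS": 4, "BaBar": 9, "CALICE": 2, "CMS": 7,
                    "Dteam": 1, "LHCb": 4, "MINOS": 24}

      print(sum(exclusions.values()))  # 51, matching the slide's TOTAL

      # Three months after a working Castor instance, MINOS drops to 8.
      exclusions["MINOS"] = 8
      print(sum(exclusions.values()))  # 35 identifiers then remain excluded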

  3. Tier 1 Squeeze – 2008/Q1
  CPU:
  • March CPU capacity = 1439 KSI2K; March CPU requests = 2640 KSI2K.
  • CPU over-allocated for 2008/Q1; pain spread by the fairshare system (illustrated in the sketch below).
  [Chart: CPU headroom, with ATLAS and ALICE labelled]
  Disk:
  • March disk capacity = 922TB; March disk requests = 920.8TB; March headroom = 1.2TB.
  • ATLAS/CMS/LHCb got 70% of their requests.
  • BaBar reduced from 100TB to ~41TB.
  • All other experiments frozen until March.
  • “Special measures” taken through the quarter. Living dangerously!
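
  The slide does not spell out the fairshare mechanism, so the following is an
  illustration only: a Python sketch of proportional scaling when requests exceed
  capacity. The capacity and total-request figures are from the slide; the
  per-experiment split is hypothetical.

      # March 2008 figures from the slide: 1439 KSI2K capacity, 2640 KSI2K requested.
      capacity_ksi2k = 1439.0
      requests_ksi2k = {"ATLAS": 900, "CMS": 700, "LHCb": 500,  # hypothetical split,
                        "BaBar": 400, "ALICE": 140}             # summing to 2640

      total = sum(requests_ksi2k.values())
      print(f"over-allocation factor = {total / capacity_ksi2k:.2f}")  # ~1.83

      # Proportional "fairshare": every experiment gets the same fraction of its
      # request, so the pain is spread rather than concentrated on one VO.
      for vo, req in requests_ksi2k.items():
          share = req * capacity_ksi2k / total
          print(f"{vo:6s} requested {req:5.0f} KSI2K, gets {share:5.0f} KSI2K")

  In practice batch schedulers enforce fairshares over time through job
  priorities rather than a one-shot scaling, which is one reason the “Reality…”
  chart on slide 6 diverges from the allocations.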

  4. Tier 1 Disk Squeeze
  [Chart: disk allocations – ATLAS 291TB, CMS 242TB, LHCb 116TB, BaBar 49TB, ALICE 5.9TB]

  5. Tier 1 Disk Use

  6. Tier 1 CPU Fairshares
  [Charts: “Allocated…” vs “Reality…” CPU shares for ALICE, ATLAS, BaBar, CMS, LHCb]

  7. LHC approaches! (Friday 7 March 2008)
  • CMS Plenary, 25 Feb: machine cold by 1 June? Protons could be injected by mid-June.

  8. HEALTH WARNING! Plus others…

  9. Weighing things up…
  • Need to get to robust running of the LHC experiments (plus others).
  • Assume CCRC08 is covered in other talks, but still some way to go on this.
  [Chart: Tier 1 jobs yesterday – CMS, BaBar, ATLAS]

  10. Tier 1 - 2008 Ramp Up
  [Chart: current allocated capacity as a fraction of the Dec 2008 request – CPU 67%, Tape 54%, Disk 39%]
  • Latest procurement should satisfy all experiment 2008 requests if they don’t change (a worked example of the gap follows below).
  • Need to worry about 2009 now.
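
  The slide gives only the allocated/requested fractions. As a sketch of the
  remaining procurement gap, with request totals that are purely hypothetical:

      # Fractions from the slide: current allocated capacity / 2008 request.
      fractions = {"CPU": 0.67, "Tape": 0.54, "Disk": 0.39}

      # Hypothetical 2008 request totals (KSI2K, TB, TB), for illustration only.
      requests = {"CPU": 4000.0, "Tape": 2000.0, "Disk": 1500.0}

      for resource, frac in fractions.items():
          req = requests[resource]
          print(f"{resource}: allocated {req * frac:.0f} of {req:.0f}; "
                f"procurement still to deliver {req * (1 - frac):.0f}")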

  11. dCache – Castor2 Migration
  Timeline:
  • 20 June: at the UB meeting it was agreed that 6 months’ notice be given for dCache termination.
  • 26 November: proposal to terminate dCache by end of May.
  Castor data going to be tight…
  • LHCb – all disk data migrated and 60% of tape data.
  • ATLAS – disk and tape migration ongoing (?). On 20 Feb, 12TB trimmed from the CMS allocation to help the ATLAS migration (270TB allocation + 20TB).
  • ALICE – updated request received 25 Jan; allocated one server on 6 February. Needs the xrootd plug-in.
  • MINOS – agreed 3-month period from the date of a working Castor instance.

  12. User Support Posts - GridPP3
  Janusz Martyniak (Imperial) = 50% FTE. Ex-portal post.
  • Technical assistance with Grid-related software, interfacing experiments to middleware, development of tools, etc.
  • First priority is to help smaller non-LHC experiments get established on the Grid.
  • LHC projects, generic PP tools (e.g. Ganga) and KE with non-HEP VOs are also possible work areas.
  • Experiments bid through the UB chair for support. MICE (Ganga and LFC) already approved, plus some SuperNEMO work (LFC). A minimal Ganga sketch follows below.
  Stephen Burke (RAL) = 50% FTE. Documentation post.
  • Focussed on immediate and short-term issues: for example, helping answer technical enquiries (outside the ticket system), trouble-shooting user/VO problems, locating suitable documentation, etc.
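
  For context, Ganga (the tool named above for MICE) is driven from Python. A
  minimal sketch of its job interface as commonly documented, typed inside the
  ganga shell; the job name, executable and arguments here are placeholders,
  not taken from the talk:

      # Inside the ganga shell: define a trivial job, send it to the Grid
      # via the LCG backend, then inspect it.
      j = Job(name="mice-test")                     # hypothetical job name
      j.application = Executable(exe="/bin/echo",   # placeholder executable
                                 args=["hello, Grid"])
      j.backend = LCG()                             # LCG/EGEE Grid backend
      j.submit()

      jobs             # list all jobs known to this Ganga session
      print(j.status)  # e.g. 'submitted', later 'completed'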

  13. The End (and The Start): GridPP2 → GridPP3
