
LondonGrid Status Duncan Rand


Presentation Transcript


  1. LondonGrid Status
  Duncan Rand

  2. LondonGrid
  • Five universities with seven GOC sites
    • Brunel University
    • Imperial HEP and Imperial LeSC
    • Queen Mary
    • Royal Holloway
    • UCL Central and UCL-HEP
  • Thirteen compute clusters, eight storage elements

  3. Administrative News
  • Brunel – new sys-admin starts at the beginning of October
  • RHUL – sys-admin post vacant since Jan 08; recruitment starting soon
  • QMUL – sys-admin post vacant; about to be advertised
  • Imperial – sys-admin on maternity leave since the start of August
    • Barry MacEvoy replaces her part time
  • LondonGrid – new Technical Coordinator recruited
    • 0.25 FTE Imperial sys-admin
    • starts mid-September

  4. Site news: Brunel
  • Site has been running OK over the last 6 months
    • predominantly as a CMS site but also ATLAS MC
    • however, has recently suffered air conditioning problems
  • DPM too small and somewhat unreliable
  • Recent purchase of 60 TB disk and 1 MSI2k CPU – in the process of installing
  • Still only a 400 Mbit/s WAN link (rough arithmetic on what this implies below)
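Purely as a back-of-envelope illustration (the 400 Mbit/s and 60 TB figures come from the slide above; the assumption of a continuously saturated link is ours), the WAN link caps transfers at roughly 50 MB/s, so refilling the new disk from outside the site would take on the order of two weeks:

```python
# Back-of-envelope estimate: time to fill Brunel's new 60 TB of disk over a
# 400 Mbit/s WAN link, assuming the link could be saturated continuously
# (an optimistic upper bound that ignores protocol overhead and contention).

link_mbit_s = 400                 # WAN link capacity quoted on the slide
link_mb_s = link_mbit_s / 8       # about 50 MB/s theoretical maximum
disk_tb = 60                      # newly purchased disk capacity
disk_mb = disk_tb * 1_000_000     # TB -> MB, decimal units

days = (disk_mb / link_mb_s) / 86_400
print(f"peak ~{link_mb_s:.0f} MB/s; filling {disk_tb} TB would take ~{days:.0f} days flat out")
# -> peak ~50 MB/s; filling 60 TB would take ~14 days flat out
```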

  5. Imperial
  • HEP
    • Workhorse ce00 running well
    • Mainly CMS analysis and MC
    • ATLAS MC jobs changed to stage out to the RHUL SE
    • IBM cluster (gw39) retired
    • Recent purchase of 100 TB disk and ~2.4 MSI2k CPU
    • Added 120 new WNs to ce00 this week
      • Will the air conditioning cope?
  • ICT/LeSC
    • Essentially runs ATLAS and CMS MC plus biomed

  6. Royal Holloway
  • Manpower low – awaiting new system administrator
  • Nevertheless commissioned new cluster in April 2008
  • Generally runs well
  • Have had networking issues
    • WAN issues seem to have been related to the external firewall
  • Set up as a CMS Tier-3

  7. Queen Mary
  • No full-time admin
  • Makeover completed
    • Now running SGE
  • CE gets overloaded; need to upgrade it
  • DPM on Lustre
    • works OK – 50 MB/s WAN access; LAN not yet stress tested
    • will replace DPM head node with a modern machine (10 Gbit/s)
  • StoRM test SE – works OK but difficult to add ATLAS space tokens
  • Running mainly ATLAS and biomed jobs
    • need to get CMS MC running ASAP

  8. University College
  • UCL-Central finally passing SAM tests as of yesterday
    • Need to complete acceptance tests, install ATLAS software and get MC running
    • SE working
      • requires update of space tokens
  • UCL-HEP purchasing new equipment – should bring it online soon

  9. CPU contribution by site
  • RHUL an obvious arrival!
  • QMUL/LeSC – increase

  10. CPU usage by VO
  • CMS and ATLAS are the big users over the last 6 months
  • Also biomed
  • LHCb still low
  • Concentrating on CMS & ATLAS…

  11. CPU usage by Tier-2

  12. CMS
  • Imperial (T2-London-IC) and Brunel (T2-London-Brunel)
    • Running OK
    • SEs quite full
  • London plans to use non-CMS sites as CMS Tier-3s
    • Make use of the T3-UK-London-RHUL cluster when ATLAS is not using it
    • ~75 TB of CMS data already on site
    • Up to 100 MB/s download from multiple CMS Tier-1s
    • Running many analysis jobs (mostly Imperial users)
    • Just joined MC production
    • A real success in terms of supporting multiple VOs

  13. CMS download to T3_UK_London_RHUL
  • ppMuX_pt10 dataset: 2.4 TB transferred in an afternoon (rough average-rate estimate below)
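For context, a small illustrative calculation of the average rate that "2.4 TB in an afternoon" implies; the length of the transfer window is not stated on the slide, so a range of plausible values is assumed here:

```python
# Illustrative average transfer rate for the ppMuX_pt10 download to RHUL.
# The slide quotes 2.4 TB "in an afternoon"; the window lengths below are
# assumptions used only to turn that figure into an approximate rate.

volume_mb = 2.4 * 1_000_000                  # 2.4 TB in MB, decimal units
for window_hours in (4, 5, 6, 7):            # plausible afternoon lengths, all assumed
    rate_mb_s = volume_mb / (window_hours * 3600)
    print(f"{window_hours} h window -> ~{rate_mb_s:.0f} MB/s average")
# 4 h -> ~167 MB/s ... 7 h -> ~95 MB/s, i.e. of the same order as the
# "up to 100 MB/s" multi-Tier-1 figure quoted on the previous slide
```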

  14. CMS analysis jobs, 2008-08-23 to 2008-08-26

  15. CMS Site View Monitoring Page
  • Excellent page summarising the status of sites from the CMS point of view
  • The much sought-after single source of information
  • http://dashb-cms-sv.cern.ch/dashboard/request.py/siteviewhome

  16. Other VOs
  • vo.londongrid.ac.uk for local users
  • Set up a local LFC catalogue for (illustrative registration sketch below):
    • supernemo.vo.eu-egee.org
    • mice
    • vo.londongrid.ac.uk
  • UKQCD VO supported at RHUL
    • large-memory jobs and possibly MPI
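Purely as an illustration of what the local LFC catalogue means for a user, a minimal sketch of registering a file for vo.londongrid.ac.uk with the gLite-era lcg-cr tool; the LFC host name and both file paths are invented placeholders, and a valid VOMS proxy plus a configured default SE for the VO are assumed:

```python
# Minimal sketch (hypothetical host and paths) of copying a local file to the
# VO's default SE and registering it in an LFC catalogue for
# vo.londongrid.ac.uk, via the gLite command-line tools.
import os
import subprocess

os.environ["LFC_HOST"] = "lfc.example.ac.uk"   # placeholder LFC host, not from the slides

cmd = [
    "lcg-cr",
    "--vo", "vo.londongrid.ac.uk",
    "-l", "lfn:/grid/vo.londongrid.ac.uk/demo/test.dat",  # logical file name in the LFC
    "file:/tmp/test.dat",                                 # local source file (placeholder)
]
# No -d option: lcg-cr falls back to the default SE configured for the VO.
# On success it prints the GUID of the newly registered file.
print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```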

  17. Conclusion
  • Still suffering from an acute shortage of manpower
    • New Technical Coordinator starting very soon, and three admin posts should be filled in the autumn – hopefully!
  • Nevertheless, the Royal Holloway and QMUL sites have been brought online and are now contributing as ATLAS sites
    • RHUL also acting successfully as a CMS Tier-3 – MC and analysis
  • Imperial and Brunel continue to serve CMS and also run ATLAS MC
  • Looking forward to an increased contribution from UCL

  18. Thanks to all of the LondonGrid team: Mona Aggarwal, David Colling, Austin Chamberlain, Clare Gryce, Simon George, Kostas Georgiou, Barry Green, Paul Kyberd, William Hay, Alex Martin, Giuseppe Mazza, Henry Nebrensky, Gianfranco Sciacca, Keith Sephton, Ben Waugh, Jeremy Yates
