
Funding Sources for PP Computing in the UK


Presentation Transcript


  1. Funding Sources for PP Computing in the UK
Until recently (the last 2 years), UK funding for particle physics computing had two components:
• Direct funding to the individual University Groups.
• Central funding to the IT Group of the CCLRC at the Rutherford Appleton Laboratory (RAL).
Outcomes:
• Installed equipment at the Universities: small scale, but well tailored.
• A large facility at RAL, but one that needs the experiments to motivate changes.
Enter the new concept from the UK Government: get the individual experiments and/or University Groups to bid for (big) money.
LHCb Software Week at CERN, Andrew Halley (CERN)

  2. New external sources of computing funding
Joint Research Equipment Initiative (JREI)
• The aim of JREI is to contribute to the physical research infrastructure and to enable high-quality research to be undertaken, particularly in areas of basic and strategic priority for science and technology, such as those identified by Foresight.
• £99M in 1999, the 4th round.
Joint Infrastructure Fund (JIF)
• £700M over three years.
• The money will enable universities to finance essential building, refurbishment and equipment projects to ensure that they remain at the forefront of international scientific research.
LHCb Software Week at CERN, Andrew Halley (CERN)

  3. Personal summary: PPARC computing JREI bids
The following represents a summary of the information available from various sources, including PPARC.
LHCb Software Week at CERN, Andrew Halley (CERN)

  4. Personal summary: PPARC computing JIF bids
The following represents a summary of the information available from various sources, excluding PPARC.
LHCb Software Week at CERN, Andrew Halley (CERN)

  5. Particle Physics Bids (1)
• BaBar JREI 98 - awarded £800K for disk and servers at 10 UK sites: 12.5 TB RAID (~10 TB usable); Sun won the tender, installation soon.
• LHCb JREI 98 - awarded MAP, the Monte Carlo Array Processor: 300 Linux PCs in a custom-built chassis.
• CDF JIF 98 - submitted December 98, postponed until the next round: T-Quarc at FNAL, 10 TB disk and 4 SMP workstations; at RAL, 5 TB disk, 5 TB tape, an SMP and a line to FNAL; at 4 universities, a single-CPU machine and 1.7 TB of disk.
LHCb Software Week at CERN, Andrew Halley (CERN)

  6. Particle Physics Bids (2)
• BaBar JIF 99 - submitted April 99: a line to SLAC, computers for analysis, a big SMP at RAL, smaller SMPs at each site, and Linux farm(s) for simulation.
• LHCb JREI 99 - submitted May 99: 40 PCs with 1 TB of disk each, to store the data generated by MAP and analyse it.
LHCb Software Week at CERN, Andrew Halley (CERN)

  7. Timescales and deadlines for bid procedures
JIF:
• The 1999 bids had to be submitted by Spring 1999, with decisions expected around November 1999.
JREI:
• The recent round had to be submitted by May 1999, with decisions not expected before January 2000.
• The next round's submission date is 11th October 1999, with the "decision point" expected to be March 2000.
LHCb Software Week at CERN, Andrew Halley (CERN)

  8. Current Computing Resources in the UK
Considering the central facilities currently available:
• A large central datastore, combining a large disk pool with backing store and tape robots.
• Central CPU farms and servers, currently comprising:
  • the CSF facility, based on Hewlett-Packard processors,
  • a Windows NT facility based on Pentium processors,
  • an upgraded Linux-CSF facility based on Pentium processors.
In addition, the home Universities have considerable power in workstation clusters and dedicated farms, often "harvested" by software "robots" which serve out tasks remotely (a sketch of this idea follows below).
LHCb Software Week at CERN, Andrew Halley (CERN)
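The slides do not describe these harvesting robots in any detail, so the following is only a minimal sketch of the idea, assuming idle workstations are reachable over ssh; the hostnames, load threshold and job command are hypothetical placeholders, not details from the talk.

```python
#!/usr/bin/env python3
"""Minimal sketch of a software "robot" that harvests idle workstations.

Hostnames, the load threshold and the job command are hypothetical."""
import subprocess

WORKSTATIONS = ["ws01.example.ac.uk", "ws02.example.ac.uk"]  # hypothetical hosts
LOAD_THRESHOLD = 0.5           # treat a node as idle below this 1-minute load
JOB = "nice ./run_simulation"  # hypothetical task served out to idle nodes


def one_minute_load(host: str) -> float:
    """Read the 1-minute load average of a remote host over ssh."""
    out = subprocess.run(
        ["ssh", host, "cat", "/proc/loadavg"],
        capture_output=True, text=True, check=True,
    ).stdout
    return float(out.split()[0])


def harvest() -> None:
    """Serve the job out to every workstation that currently looks idle."""
    for host in WORKSTATIONS:
        try:
            if one_minute_load(host) < LOAD_THRESHOLD:
                # Fire and forget: run the task remotely in the background.
                subprocess.Popen(["ssh", host, JOB])
                print(f"dispatched job to idle node {host}")
        except (subprocess.CalledProcessError, ValueError):
            print(f"could not query {host}, skipping")


if __name__ == "__main__":
    harvest()
```

A real harvester would also track running jobs and collect their output, but the polling-and-dispatch loop above is the core of the idea.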

  9. Current usage statistics of the RAL datastore
Typically ~10-15 TB is accessible from the datastore, but only ~5 TB has been in active use at any given time recently.
LHCb Software Week at CERN, Andrew Halley (CERN)

  10. Usage of the HP CSF facility at RAL
As an example snapshot of the use of the service: from April '99 to September '99 the average use is ~80%.
LHCb Software Week at CERN, Andrew Halley (CERN)

  11. Linux CSF farm and its usage at RAL
The Linux farm now consists of:
• forty Pentium II 450 MHz CPUs with 256 Kb memory,
• 10 Gb of fast local disk,
• 100 Mb/s fast ethernet.
[Usage plot, with the farm's maximum capacity indicated.]
The farm is currently well used by the active experiments, and has excellent potential for upgrades.
LHCb Software Week at CERN, Andrew Halley (CERN)

  12. Windows NT Farm and usage at RAL
Ten dual-processor machines with 450 MHz CPUs have been added to the farm. The upgrade increases the capacity of the farm by a factor of ~5.
• The service is used heavily by both ALEPH and LHCb for MC production.
• It will be used as part of the LHCb plans to generate large numbers (10^6) of inclusive bbar events in the near future.
• Automatic job submission software has been set up for LHCb (a sketch of such a submission loop follows below).
• System software replication has been set up, so it is now very easy to extend the system as appropriate.
LHCb Software Week at CERN, Andrew Halley (CERN)
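The slide does not say what the LHCb submission software looks like, so this is only a minimal sketch of an automatic submission loop, assuming the 10^6-event production is split into fixed-size jobs handed to some farm submission command; the command name, chunk size and generator script are hypothetical.

```python
#!/usr/bin/env python3
"""Minimal sketch of an automatic MC job-submission loop.

Assumptions (not from the talk): the farm exposes a `submit` command,
and the generator script `generate_bbar.py` takes a seed, an event count
and an output file name.
"""
import subprocess

TOTAL_EVENTS = 1_000_000   # target: ~10^6 inclusive bbar events
EVENTS_PER_JOB = 10_000    # hypothetical chunk size per farm job
SUBMIT_CMD = "submit"      # hypothetical farm submission command


def submit_production() -> None:
    n_jobs = TOTAL_EVENTS // EVENTS_PER_JOB
    for job_id in range(n_jobs):
        # Each job gets a distinct seed so the samples are statistically independent.
        subprocess.run(
            [
                SUBMIT_CMD, "python", "generate_bbar.py",
                "--seed", str(1000 + job_id),
                "--events", str(EVENTS_PER_JOB),
                "--output", f"bbar_{job_id:04d}.dat",
            ],
            check=True,
        )
        print(f"submitted job {job_id + 1}/{n_jobs}")


if __name__ == "__main__":
    submit_production()
```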

  13. New computing resources outside of RAL
On the basis of the new funding arrangements in the UK, the University of Liverpool was given funds to build MAP, a large MC processor based on cut-down Linux nodes:
• 300 processors,
• 400 MHz PII,
• 128 Mbytes memory,
• 3 Gbytes disk,
• D-Link 100BaseT ethernet + hubs,
• commercial units, BUT custom boxes for packing and cooling.
The nodes are rack-mounted and run a stripped-down version of RedHat Linux 5.2. The system is tailored for production using a 1 TB locally mounted disk, but needs a corresponding solution for analysing the data locally.
LHCb Software Week at CERN, Andrew Halley (CERN)

  14. Computing resources outside of RAL: MAP
[Diagram: the idea, and the system as built: a master node connected to the MAP slaves through 100BaseT hubs, with a connection to the external ethernet.]
The system is scalable: it can be increased by adding more slaves and/or network hubs, and it benefits from the bulk purchase of uniform hardware. A minimal sketch of the master/slave idea follows below.
LHCb Software Week at CERN, Andrew Halley (CERN)
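The talk shows only the topology, not the software, so the following is a minimal sketch of a master serving work units (here just Monte Carlo seeds) to slave nodes over TCP; the port number, message format and notion of a work unit are all hypothetical.

```python
#!/usr/bin/env python3
"""Minimal sketch of the MAP master/slave idea.

The master listens on a TCP port and hands each connecting slave the next
work unit (here a Monte Carlo seed). The port, message format and work
unit are hypothetical, not details from the talk.
"""
import socketserver

SEEDS = iter(range(1, 301))  # one work unit per request; MAP has 300 nodes


class MasterHandler(socketserver.StreamRequestHandler):
    def handle(self) -> None:
        # A slave connects whenever it is idle and asks for work.
        request = self.rfile.readline().strip()
        if request == b"GET_WORK":
            try:
                seed = next(SEEDS)
                self.wfile.write(f"SEED {seed}\n".encode())
            except StopIteration:
                self.wfile.write(b"DONE\n")  # no work left to hand out


if __name__ == "__main__":
    # Scaling up means adding slaves (and hubs); the master is unchanged.
    with socketserver.TCPServer(("0.0.0.0", 9000), MasterHandler) as server:
        server.serve_forever()
```

A slave would simply connect, read its seed, run the generator, write its output to the local disk and reconnect when done.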

  15. Future plans for cpu upgrades etc.
The intention is to develop the Linux farm @ RAL:
• Order 30 new dual-processor 600 MHz nodes to be added to the existing cluster.
• Add more hardware around April/May next financial year to keep up with demand.
There are also plans to augment MAP at Liverpool with subsystems at additional LHCb UK sites, as well as:
• developing COMPASS, a model for LHC analyses,
• using a fast Linux server to check large disk pool read/write speeds of 50/20 Mb/s, with over 1 TB of data space attached (a sketch of such a throughput check follows below).
LHCb Software Week at CERN, Andrew Halley (CERN)
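The slide quotes the 50/20 figures but not how they were measured (nor whether they are megabits or megabytes per second), so this is only a minimal sketch of such a throughput check: it writes a large file to the disk pool, reads it back and reports the rates in MB/s; the test path and file size are hypothetical.

```python
#!/usr/bin/env python3
"""Minimal sketch of a disk-pool read/write throughput check.

The test path and file size are hypothetical; in practice the file should
be much larger than the machine's RAM so the read is not served from cache.
"""
import os
import time

TEST_FILE = "/diskpool/throughput_test.dat"  # hypothetical disk-pool path
CHUNK = 1024 * 1024                          # 1 MB blocks
N_CHUNKS = 1024                              # 1 GB test file


def measure() -> None:
    block = os.urandom(CHUNK)

    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(N_CHUNKS):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())                 # make sure the data really hit the disk
    write_rate = N_CHUNKS / (time.time() - start)

    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(CHUNK):
            pass
    read_rate = N_CHUNKS / (time.time() - start)

    os.remove(TEST_FILE)
    print(f"write: {write_rate:.1f} MB/s, read: {read_rate:.1f} MB/s")


if __name__ == "__main__":
    measure()
```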

  16. Future "plans" for LHC computing in the UK
Given the new funding arrangements in the UK, and the challenges facing us with the LHC computing needs:
• Submission of an LHC-wide UK JIF bid for capital funding for the years through to the LHC start-up.
• The UK plans to operate a Tier-1 Regional Centre based @ RAL, with several Tier-2 Centres (such as MAP/COMPASS) at the Universities.
[Diagram: the tiered hierarchy of CERN, Tier-1 Regional Centres, Tier-2 Regional Centres, Service Centres and Institutes.]
LHCb Software Week at CERN, Andrew Halley (CERN)

  17. Ramping up the UK resources for the LHC
The resources needed depend, somewhat, on the computing models adopted by the experiments, but are currently:
• An additional tape robot to be purchased in 2003, allowing the datastore to be extended to 320 TB.
• Network bandwidth to CERN assumed to be 50 Mb/s in 2003, with similar performance achieved to the Tier-2 centres, increasing thereafter to 500 Mb/s (an illustrative transfer-time estimate follows below).
LHCb Software Week at CERN, Andrew Halley (CERN)
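To put those link speeds against the datastore size, here is purely illustrative arithmetic, ignoring protocol overheads and contention and assuming decimal units; using the full 320 TB datastore as the volume to move is just an extreme case, not a scenario from the talk.

```python
#!/usr/bin/env python3
"""Illustrative arithmetic only: how long the quoted link speeds would take
to move a given data volume, ignoring overheads and contention."""


def transfer_days(terabytes: float, megabits_per_s: float) -> float:
    bits = terabytes * 1e12 * 8              # TB -> bits (decimal units)
    seconds = bits / (megabits_per_s * 1e6)  # bits / (bits per second)
    return seconds / 86400.0


for link in (50, 500):                       # the 2003 and later figures from the slide
    days = transfer_days(320, link)          # the full 320 TB datastore, as an extreme case
    print(f"320 TB over a {link} Mb/s link: ~{days:.0f} days")
# Prints roughly 593 days at 50 Mb/s and 59 days at 500 Mb/s.
```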

  18. Tentative conclusions and summary
Clearly, the field is evolving quickly. The status can be broken down into:
Short term:
• upgraded Linux (NT?) farms, roughly doubling in capacity every year or so,
• increases in datastore size.
Medium term:
• new massive simulation facilities like MAP coming online,
• analysis engines being developed to cope with the generated data rates.
Long term:
• development of Tier-1 and Tier-2 data centres with two orders of magnitude increases in stored data & cpu power,
• 2-3 orders of magnitude improvements in network-access bandwidth.
LHCb Software Week at CERN, Andrew Halley (CERN)
