WLCG Vision
Ian Bird, CERN, 17th July 2013

WLCG today

  • Successfully supported LHC Run 1
  • Many lessons have been learned – already several significant changes to the computing models
  • Experiments pushing to higher and higher data rates
    • Already in 2012 the data volume was twice that of a “nominal” LHC year as originally planned
  • Funding for future computing is a problem
    • Flat budgets are the (optimistic) working assumption
Requirements vs resources

[Chart: requirements growing by ~363 kHS06/yr (CPU) and ~34 PB/yr (storage); 2008-2012 was essentially a linear increase, with ~flat budgets]

  • Simple extrapolation of what is optimistically affordable will barely accommodate minimal likely requirements (see the sketch after this list)
    • Significant increases in need anticipated from LS2
  • 2015 will already be a problem if we start from the 2014 base
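
As a back-of-the-envelope illustration of the squeeze (not from the slides): the sketch below takes the ~+363 kHS06/yr requirement growth from the chart, a hypothetical 1500 kHS06 installed base in 2014, and an assumed ~20%/yr price-performance gain for hardware bought on a flat budget.

```python
# Back-of-the-envelope sketch: linearly growing CPU requirements vs what
# a flat budget can buy. The +363 kHS06/yr figure comes from the chart
# above; the 2014 base capacity and the 20%/yr price-performance gain
# are assumed, illustrative numbers only.

REQ_GROWTH = 363          # required CPU growth per year, kHS06 (chart)
PRICE_PERF_GAIN = 0.20    # assumed yearly capacity gain at constant spend
BASE = 1500.0             # hypothetical installed capacity in 2014, kHS06

required = affordable = BASE
for year in range(2014, 2020):
    gap = affordable - required
    print(f"{year}: need {required:6.0f}  afford {affordable:6.0f}  gap {gap:+5.0f} kHS06")
    required += REQ_GROWTH               # linear growth in requirements
    affordable *= 1 + PRICE_PERF_GAIN    # flat budget, cheaper hardware
```

With these illustrative numbers the affordable capacity falls behind requirements already in 2015-2016, matching the bullet above, before compounding hardware gains slowly catch up; any step increase in requirements (e.g. after LS2) reopens the gap.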
WLCG strategy
  • HEP now risks compromising physics through a lack of computing resources
    • This has not been true for ~20 years

  • Live within ~fixed budgets
  • Gain x10 - x?? in event processing throughput
    • Find additional resources (additional budgets)
    • Make (much!) better use of existing resources
      • Optimisation of cost of [CPU, storage, network, power]
      • Invest (limited) available effort where it is important
        • Software, data management
  • Collaborate with other science communities
    • Share expertise, experience
From the 2013 update to the European Strategy for Particle Physics

i. The success of particle physics experiments, such as those required for the high-luminosity LHC, relies on innovative instrumentation, state-of-the-art infrastructures and large-scale data-intensive computing. Detector R&D programmes should be supported strongly at CERN, national institutes, laboratories and universities. Infrastructure and engineering capabilities for the R&D programme and construction of large detectors, as well as infrastructures for data analysis, data preservation and distributed data-intensive computing should be maintained and further developed.

High Performance Computing

g. Theory is a strong driver of particle physics and provides essential input to experiments, witness the major role played by theory in the recent discovery of the Higgs boson, from the foundations of the Standard Model to detailed calculations guiding the experimental searches. Europe should support a diverse, vibrant theoretical physics programme, ranging from abstract to applied topics, in close collaboration with experiments and extending to neighbouring fields such as astroparticle physics and cosmology. Such support should extend also to high-performance computing and software development.

WLCG Strategies

  • Update of the computing models – to show that every effort is being made to make best use of resources
    • Ready for LHCC in September
    • Covers the period of LHC Run 2
    • Will have estimate of the 3 year resource needs
  • Concurrency forum
    • Encourage work on all aspects of software
    • Critical in optimising resource use
    • Critical in preparing to use modern architectures, HPC, etc.
  • Reduce operational effort so that WLCG Tiers can be self-supporting (no need for external funds for operations)
  • Position ourselves so that the experiments can use pledged and opportunistic resources with ~zero configuration
    • (Grid) clusters, clouds, HPC, …
Implications:

  • Must simplify the grid model (middleware) to as thin a layer as possible
    • Make use of any resources made available
    • Make service management very lightweight
    • Centralise key services at a few large centres where possible
  • Rely on the networks as a resource
    • Push to ensure that all Tier 2s are connected at realistic bandwidths (see the rough numbers below)
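
For a rough feel of what “realistic bandwidths” means in practice, a small illustrative calculation (the 100 TB dataset size and the link speeds are assumed example values, not figures from the slides):

```python
# Illustrative only: time to replicate a dataset to a Tier 2 at various
# link speeds. Dataset size and link speeds are assumed example values.

DATASET_TB = 100
for gbps in (1, 10, 100):
    seconds = DATASET_TB * 8e12 / (gbps * 1e9)   # TB -> bits, / bits per second
    print(f"{gbps:3d} Gb/s link: {seconds / 86400:5.2f} days")
```

At 1 Gb/s a bulk replication of this size occupies the link for over a week; at 10 Gb/s it fits within a day, which is why Tier 2 connectivity matters if the network is to be treated as a resource.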
External funding?

  • WLCG benefitted greatly from funding from EC, US (DoE/NSF), and other national initiatives
  • This funding has largely stopped now
  • Prospects for future funding exist – but the boundary conditions will be very different:
    • Must demonstrate how we benefit other sciences and society at large
    • Must engage with Industry (e.g. via PPP)
    • HEP-only proposals unlikely to succeed
    • It is also essential that any future proposal has the full engagement of CERN (IT+PH), the experiments, and other partners
HEP value?

  • Building and operation of the world’s largest globally federated, distributed infrastructure
  • Management of multi-petabyte data sets and facilities
  • Record of collaborating with other scientific domains (EGEE, OSG), and industry (openlab, Helix Nebula, …)
  • And more…
  • Other sciences now need to address some of the same problems as HEP: we must collaborate
    • This is one reason why we must avoid HEP-specific solutions as much as possible: we don’t have a good record of building broadly useful tools
Speculation – future e-infrastructure?

  • Will need to cover many aspects:
    • Facilities:
      • Networking and federated identity services
      • An academic cloud resource (most sciences, especially smaller groups, need this)
      • Experienced and sustainable data archive centres
    • Software tools:
      • Tools to allow a science community to federate their own resources (e.g. HEP)
      • Tools for data management, workflows, application support
      • Tools to aid integration of daily activities with the e-infrastructure (collaborative tools, “dropboxes”, etc….)
      • Software tools to build citizen-cyberscience infrastructures
    • Also need investment in software:
      • Today we benefit from commodity hardware;
      • Today our software is not that efficient on the current hardware
      • Future CPU and storage commodities may be very different
      • We need to adapt our software (potentially continually)
Outlook

  • HEP can no longer expect funding without demonstrating its relevance
  • Need to broker collaborations with other relevant sciences
    • Data-intensive or relevant to society
    • Must be mutual benefit, but we can also learn from other communities (e.g. HPC)
  • Funding, expertise, and effort will be harder to attract and retain in the next few years
    • Focus on key issues and what we need for the long term
  • However, HEP has significant expertise and experience that we can build on
Evolution – key points

  • Need to demonstrate that we are doing as much as possible to make best use of resources available
  • Software
    • Parallelism, new architectures, etc. – significant challenges and a lack of expertise
    • Requires some investment
  • Commonality
    • Between experiments
    • With other sciences
  • Simplicity
    • Reduce complexity where possible:
      • Grid services
      • Deployments (e.g. a central service is simpler)
  • Focus HEP efforts
    • Where we must – e.g. data management tools
    • Cannot afford to (nor should we) do everything ourselves
  • Collaborate
    • To bring in expertise and to share ours
Data Management/Data Preservation

  • Data Management: LHC has demonstrated very large scale data management. Must build on this success:
    • Improve our own efficiency – improved models of data placement, caching, and access (a toy caching sketch follows this list)
    • Work with wider scientific community to build community solutions or move towards open standards if possible
  • Should drive/explore collaborations with other data intensive sciences and industry to consolidate data management tools in the long term
    • Build a broader support community, helps our sustainability
  • Data preservation and open access to data is now very important and highly publicly visible
    • Policy and technical investments are needed now
  • CERN could provide the leadership to build a set of policies and technical strategies to address this for HEP, building on the work of DPHEP and collaborating with other big data sciences
  • It is already urgent for LHC experiments to ensure the ability for long term use and re-use of the data
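
As a toy illustration of the caching idea flagged in the first bullet above (illustrative only; this is not a WLCG component, and the dataset names and sizes are invented): a least-recently-used cache that keeps popular datasets on fast storage and evicts cold ones.

```python
from collections import OrderedDict

# Minimal LRU dataset cache: a toy stand-in for popularity-driven data
# placement, where frequently accessed datasets stay on fast storage.
class DatasetCache:
    def __init__(self, capacity_tb: float):
        self.capacity_tb = capacity_tb
        self.used_tb = 0.0
        self._entries = OrderedDict()   # dataset name -> size in TB

    def access(self, name: str, size_tb: float) -> bool:
        """Record an access; return True on a cache hit."""
        if name in self._entries:
            self._entries.move_to_end(name)   # mark as recently used
            return True
        # Miss: evict least-recently-used datasets until the new one fits.
        while self.used_tb + size_tb > self.capacity_tb and self._entries:
            _, evicted_size = self._entries.popitem(last=False)
            self.used_tb -= evicted_size
        self._entries[name] = size_tb
        self.used_tb += size_tb
        return False

cache = DatasetCache(capacity_tb=50)
print(cache.access("AOD-2012-B", 20))   # False: miss, staged in
print(cache.access("AOD-2012-B", 20))   # True: hit
```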
e-Infrastructures

  • Grid computing has been very successful in enabling global collaboration and data analysis for LHC
  • We should generalise this (with technical evolutions already under way) for a more general HEP-wide infrastructure
    • Several requests from other experiments to use the WLCG
    • Must understand how this would fit with regional infrastructures, national and international efforts etc., and how to integrate with other science communities
  • We must plan for the next generation of e-infrastructure (e.g. on 5-10 yr timescale), making use of new technologies, e.g.:
    • Terabit networks – could enable true remote access
    • How does “cloud” computing fit? – need to understand costs of commercial clouds vs HEP (or science) clouds: cloud federations would be needed for our collaborations
Investment in software

There is a growing consensus that HEP code needs to be re-engineered to introduce parallelism at all levels (a toy illustration follows below)

We need to act now or run the risk that HEP software will not run at all in the future

→ afternoon session
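
As a toy illustration of the coarsest level of such parallelism, event-level concurrency (the event loop and the “reconstruction” step here are invented stand-ins; production HEP frameworks are C++ and far more involved):

```python
from concurrent.futures import ProcessPoolExecutor

# Toy illustration of event-level parallelism: process independent
# "events" in separate worker processes. This shows only the coarsest
# level of parallelism mentioned above.

def reconstruct(event: dict) -> float:
    # Stand-in for an expensive per-event reconstruction step.
    return sum(hit ** 0.5 for hit in event["hits"])

def main() -> None:
    events = [{"hits": list(range(1, 1000))} for _ in range(64)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(reconstruct, events, chunksize=8))
    print(f"processed {len(results)} events")

if __name__ == "__main__":   # required for process pools on some platforms
    main()
```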

Long term – LHC experiments

  • Recognise the importance of re-thinking computing models for the post-LS2 timeframe
    • Could easily imagine order(s) of magnitude more data
    • Current models simply will not continue to scale
  • Recognition that experiments’ solutions today are not so different
    • Must devote effort where it is most effective:
      • CPU-heavy algorithms, simulation, data management, etc
    • Frameworks and data serving mechanisms could be common
    • Willingness to bring these ideas together and start to plan what needs to be done
Long Term strategy

  • HEP computing needs a forum where these strategic issues can be coordinated since they impact the entire community:
    • Build on leadership in large scale data management & distributed computing – make our experience relevant to other sciences – generate long term collaborations and retain expertise
    • Scope and implementation of long term e-infrastructures for HEP – relationship with other sciences and funding agencies
    • Data preservation & reuse, open and public access to HEP data
    • Significant investment in software to address rapidly evolving computer architectures is necessary
    • HEP must carefully choose where to invest our (small) development effort – high added value in-house components, while making use of open source or commercial components where possible
    • HEP collaboration on these and other key topics with other sciences and industry
  • CERN plans a workshop to discuss these topics and to kick-start such an activity
How does HEP computing adapt?

  • HEP-wide project on future computing?
    • Coordinate all the projects in HEP that address future evolution of computing and software
    • Governance by the HEP community itself
    • Launch new projects in areas where there are identified holes
    • Act as the focal point for collaboration with other sciences and industry
  • We have experience in this level of collaboration
  • We have a bigger problem now than we have had in the past
  • HEP computing needs a change of direction now
Summary

  • WLCG/LHC experiments recognise the need to plan for the future
  • HEP has significant expertise/experience now on core technologies and has an opportunity to engage with other sciences and industry
    • Essential if we are to maintain funding and attract people
  • Plan community-wide discussion/engagement on key themes