
Tier 3 Analysis Paul Sheldon, Vanderbilt






Presentation Transcript


  1. Tier 3 Analysis, Paul Sheldon, Vanderbilt • CMS currently has a “data-tethered” analysis model: a copy of the data must be local to the computing used. • For Tier 1 and for most Tier 2 work this is necessary due to limited resources (including network resources, as was discussed yesterday). • There are no excess resources: these experiment-owned resources are only sufficient if they are used efficiently. • The work is “highly structured” (production reconstruction and Monte Carlo), so planning and staging is not hard. LHCOPN Transatlantic Networking Workshop

  2. Breaking the Tether for Tier 3s? • Tier 3s are different: limited and spotty resources. • No experiment funding, and no commitments. • Many small groups work on rapidly changing data products and only need access for short periods. • The work is much more chaotic: hard to predict and plan. • Providing access to distributed/remote storage (breaking the tether) could be a big help. • It allows use of opportunistic resources, allows sharing or pooling of resources, and requires less planning and coordination.

  3. Working Storage and Data Logistics • Many groups of 6–12 researchers work on short-lived, individual data products. Working storage is worth a try: • Serve data from one remote server that serves several Tier 3s • Or from a distributed cloud of disk (M. Swany calls this “Cloud-D”) • Or, if data products are small or short-lived, make copies on the cloud near everyone who wants a copy • Network and storage resources may not be as scarce relative to demand, but they are also not as reliable. • We need tools to move and share data in a fault-tolerant, automated, policy-driven way (perhaps with access controls). • REDDnet calls this data logistics: the management of time-related positioning and movement of data.
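The "fault tolerant, automated, policy-driven" data movement described above can be sketched in a few lines. This is a hypothetical illustration, not REDDnet's actual implementation: the `Depot` class and `replicate` policy (place at least `min_copies` copies, skipping unreachable depots) are invented for this example.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of policy-driven replication in the spirit of
# "data logistics": keep each data product on at least `min_copies`
# depots, tolerating individual depot failures. Not REDDnet code.

@dataclass
class Depot:
    name: str
    up: bool = True
    store: dict = field(default_factory=dict)

    def put(self, key, blob):
        if not self.up:
            raise ConnectionError(f"depot {self.name} unreachable")
        self.store[key] = blob

def replicate(key, blob, depots, min_copies=2):
    """Write `blob` to depots until the placement policy is satisfied.

    Depots that fail are skipped rather than aborting the transfer,
    so one flaky depot does not block the whole placement.
    """
    placed = []
    for depot in depots:
        if len(placed) >= min_copies:
            break
        try:
            depot.put(key, blob)
            placed.append(depot.name)
        except ConnectionError:
            continue  # fault tolerance: move on to the next depot
    if len(placed) < min_copies:
        raise RuntimeError(f"only {len(placed)} copies placed for {key}")
    return placed

depots = [Depot("vanderbilt"), Depot("caltech", up=False), Depot("umich")]
print(replicate("ntuple-v3.root", b"...", depots))  # → ['vanderbilt', 'umich']
```

A real tool would add time-related policy (expire short-lived products, pre-stage near consumers) and access controls on each depot, as the slide suggests.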

  4. Phoebus is a network optimization “inlay” • Based on the eXtensible Session Protocol (XSP) • Protocol tuning and translation • Transparent, dynamic network resource allocation • Phoebus can serve as an on-ramp to ION • Recent work with the Linux implementation performs well over 10GE • (I borrowed this slide from Martin Swany.)
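A session-layer "inlay" like Phoebus splits an end-to-end connection at intermediate gateways so each segment can be tuned and translated independently. The toy relay below illustrates only that splitting idea; real Phoebus gateways speak XSP, and the ports and tuning hook here are invented for the sketch.

```python
import socket
import threading
import time

# Toy single-connection relay: a stand-in for a session gateway that
# splices a client leg to an upstream leg. NOT the Phoebus/XSP protocol,
# just an illustration of splitting a session at a gateway.

def pump(src, dst):
    # Forward bytes on one leg until the source side closes.
    try:
        while True:
            chunk = src.recv(65536)
            if not chunk:
                break
            dst.sendall(chunk)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)  # propagate EOF to the other leg
        except OSError:
            pass

def relay(listen_port, dest_host, dest_port, tune=None):
    """Accept one client and splice it to the destination.

    `tune` is a hypothetical hook: because the session is split, each
    leg could get its own socket options (buffers, TCP tuning).
    """
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((dest_host, dest_port))
    if tune:
        for leg in (client, upstream):
            tune(leg)
    back = threading.Thread(target=pump, args=(upstream, client))
    back.start()
    pump(client, upstream)
    back.join()
    client.close(); upstream.close(); srv.close()

def echo_once(port):
    # Minimal destination server: read everything, echo it back.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port)); srv.listen(1)
    conn, _ = srv.accept()
    data = b""
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            break
        data += chunk
    conn.sendall(data)
    conn.close(); srv.close()

# Demo: client -> relay (9901) -> echo server (9902) and back.
threading.Thread(target=echo_once, args=(9902,), daemon=True).start()
threading.Thread(target=relay, args=(9901, "127.0.0.1", 9902), daemon=True).start()
time.sleep(0.2)  # crude: let both listeners come up

c = socket.create_connection(("127.0.0.1", 9901))
c.sendall(b"hello phoebus")
c.shutdown(socket.SHUT_WR)
reply = b""
while True:
    chunk = c.recv(65536)
    if not chunk:
        break
    reply += chunk
c.close()
print(reply)  # → b'hello phoebus'
```

The point of the split is visible in the `tune` hook: the wide-area leg and the campus leg no longer have to share one end-to-end TCP configuration, which is what lets an inlay act as an on-ramp to a provisioned service like ION.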

  5. REDDnet transfers from CERN to Vanderbilt • [Chart: transfer performance for Phoebus Read, Phoebus Write, Direct Read, and Direct Write]
