
Mass Storage System Forum



  1. Mass Storage System Forum HEPiX Vancouver, 24/10/2003 Don Petravick (FNAL) Olof Bärring (CERN)

  2. technical areas of concern? • interoperability: data movement • user application interfaces • wrap on top of local MSS APIs • identify necessary extensions beyond existing standard interfaces, e.g. pre-staging, access hints • monitoring • security obstacles
  roadmap • inter-site storage resource management • scheduling of large transfers with respect to local load • cooperate on best use of WAN resources • seamless extension to the local resources managed by the local MSS • collaborate with WAN experimentalists on management interfaces for routing and circuit switching • what else...?

  3. data movement • we have GridFTP and bbftp for data movement • do we need more? • is file movement sufficient, or are there good arguments for working on WAN file access? • do we need to be informed about layer-5 (and above) protocols?
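A minimal sketch of the file-movement option on this slide, assuming the Globus Toolkit's globus-url-copy client is installed on the host; the site names, paths, and stream count are invented for illustration, not taken from the talk:

```c
/* Sketch: driving a third-party GridFTP transfer from C by shelling out
 * to globus-url-copy. "-p 4" asks for parallel TCP streams; URLs are
 * illustrative placeholders. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *cmd =
        "globus-url-copy -p 4 "
        "gsiftp://source.site.org/mss/run1234.dat "
        "gsiftp://dest.site.org/mss/run1234.dat";

    int rc = system(cmd);          /* blocks until the transfer finishes */
    if (rc != 0) {
        fprintf(stderr, "transfer failed (rc=%d)\n", rc);
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
```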

  4. user application interfaces • with user applications being transparently scheduled to run at any site, a common layer for file access is needed • two approaches • it’s the headache of higher-level components closer to the applications, e.g. ROOT, POOL • it’s our headache • we should at least agree on a common-denominator API that can be wrapped on top of our existing APIs (e.g. RFIO, dCap) • commit to implement and support that API for our MSS • GFAL has been developed for LCG
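One way to picture the "wrap on top of existing APIs" approach is a POSIX-like entry point dispatched on the URL scheme. This is a sketch only: grid_open and the dispatch table are invented names, and only the local POSIX backend is implemented here; RFIO's rfio_open or dCap's dc_open, which keep POSIX-like signatures, would slot in where the comments indicate.

```c
/* Sketch of a common-denominator file API dispatched on URL scheme. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

typedef struct {
    const char *scheme;                      /* e.g. "rfio:", "dcap:" */
    int (*open_fn)(const char *path, int flags);
} backend_t;

static int posix_open(const char *path, int flags) { return open(path, flags); }

static const backend_t backends[] = {
    { "file:", posix_open },
    /* { "rfio:", ... }, { "dcap:", ... }: hooks for the local MSS APIs */
};

/* grid_open: the single entry point applications would code against */
static int grid_open(const char *url, int flags)
{
    for (size_t i = 0; i < sizeof backends / sizeof *backends; i++) {
        size_t n = strlen(backends[i].scheme);
        if (strncmp(url, backends[i].scheme, n) == 0)
            return backends[i].open_fn(url + n, flags);
    }
    return -1;                               /* unknown scheme */
}

int main(void)
{
    int fd = grid_open("file:/etc/hostname", O_RDONLY);
    if (fd >= 0) {
        char buf[64];
        ssize_t r = read(fd, buf, sizeof buf);
        if (r > 0) fwrite(buf, 1, (size_t)r, stdout);
        close(fd);
    }
    return 0;
}
```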

  5. extensions beyond POSIX • pre-staging of files before submitting a CPU job is another popular concept • however, resource reservation is a mixed blessing • massive use implies locking up resources and leads to data starvation for running jobs • competing reservations may lead to over-allocation/deadlocks • there are cleverer ways to pre-stage, but no standard interface (à la POSIX) to express them
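A sketch of what a "cleverer" pre-stage interface could look like: an asynchronous hint rather than a hard reservation, so nothing is locked up and the deadlock scenario above cannot arise. All names here (stage_hint, stage_poll, stage_token_t) are invented for illustration; no such standard interface exists, which is the slide's point.

```c
/* Hypothetical pre-stage interface sketched as an asynchronous hint:
 * the MSS is free to reorder or drop the request, so no resources are
 * reserved on the client's behalf. */
#include <stdio.h>

typedef int stage_token_t;

typedef enum { STAGE_QUEUED, STAGE_ONLINE, STAGE_DROPPED } stage_state_t;

/* Ask the MSS to bring a file online "soon"; returns immediately. */
static stage_token_t stage_hint(const char *path)
{
    printf("hint: please stage %s\n", path);
    return 42;                       /* stand-in for a server-side token */
}

/* Non-blocking status check; a job would poll this before opening. */
static stage_state_t stage_poll(stage_token_t tok)
{
    (void)tok;
    return STAGE_ONLINE;             /* stub: pretend the copy is on disk */
}

int main(void)
{
    stage_token_t t = stage_hint("/mss/run1234.dat");
    if (stage_poll(t) == STAGE_ONLINE)
        puts("file staged; safe to submit the CPU job");
    return 0;
}
```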

  6. inter-site storage resource mgmt • SRM implements some level of negotiation • it allows the client to provide a precise resource estimate • the server can basically just answer yes or no • the client often needs to negotiate with at least two SRMs, and maybe also request WAN bandwidth • is SRM too generic? • do we need extensions for tighter service negotiation between large MSS installations?
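The two-SRM problem on this slide can be made concrete: with only yes/no answers, the client must hold a reservation at one end while it asks the other, and back off cleanly on refusal. srm_reserve and srm_release below are hypothetical stand-ins, not the real SRM client API, and the 500 GB figure is made up.

```c
/* Sketch of client-driven negotiation with two storage resource
 * managers; both calls are stubs that illustrate the control flow. */
#include <stdbool.h>
#include <stdio.h>

static bool srm_reserve(const char *site, long long bytes)
{
    printf("reserve %lld bytes at %s -> yes\n", bytes, site);
    return true;                     /* stub: a real server may say no */
}

static void srm_release(const char *site)
{
    printf("release at %s\n", site);
}

int main(void)
{
    long long bytes = 500LL * 1000 * 1000 * 1000;   /* a 500 GB dataset */

    if (!srm_reserve("source-srm", bytes))
        return 1;
    if (!srm_reserve("dest-srm", bytes)) {
        srm_release("source-srm");   /* back off: avoid dangling space */
        return 1;
    }
    puts("both ends reserved; start the WAN transfer");
    return 0;
}
```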

  7. WAN resources • can we leverage some of the work going on to optimize WAN transfers? • do we want to? • is MSS just a matter of managing storage, or is it about managing data flows? • seamless extension of local resources • a “WAN mover” • problem: no means for optimal sharing; management APIs for WAN routing are lacking

  8. Large Systems and Wide Area Networks • Large storage systems at several labs are developing “grid side” interfaces. • Advanced Networking is being explored generally and under the banner of LHC in particular. • Large storage systems have immense bandwidth and can present a realistic mix of traffic to research networks. • Network researchers are interested in end-to-end performance, which includes interaction with “disk.”

  9. Some Salient Network Research: Layers 1-3 • Lambda Networking – light-path circuits for substantially all of the path • Requesting circuits and dealing with unavailability and faults • Network details of getting onto such circuits
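The circuit-request-and-fallback flow can be sketched as below; request_lightpath is an invented name (no standard light-path management API existed, which is part of the problem raised on slide 7), and the endpoints and bandwidth are illustrative.

```c
/* Hypothetical circuit-request flow for a light-path network: try to
 * get a dedicated lambda, fall back to the routed IP path if none is
 * available or the setup faults. */
#include <stdbool.h>
#include <stdio.h>

static bool request_lightpath(const char *src, const char *dst, int gbps)
{
    printf("requesting %d Gb/s circuit %s -> %s\n", gbps, src, dst);
    return false;                    /* stub: pretend none is available */
}

int main(void)
{
    if (request_lightpath("fnal.gov", "cern.ch", 10))
        puts("circuit up: schedule the bulk transfer on the light path");
    else
        puts("no circuit: fall back to the shared routed network");
    return 0;
}
```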

  10. Layer 4 • Example: TCP stacks with good properties in the WAN • AIMD: “additive increase, multiplicative decrease” • Issues of fairness when conventional TCP is present
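A tiny worked example of the AIMD sawtooth the slide refers to: the window grows by one segment per round trip and is halved on loss. The loss pattern (every 16th RTT) and all numbers are made up purely to show the shape.

```c
/* Illustration of AIMD congestion control. */
#include <stdio.h>

int main(void)
{
    double cwnd = 1.0;                   /* congestion window, in segments */
    for (int rtt = 1; rtt <= 64; rtt++) {
        if (rtt % 16 == 0)
            cwnd /= 2.0;                 /* multiplicative decrease on loss */
        else
            cwnd += 1.0;                 /* additive increase per RTT */
        printf("rtt %2d  cwnd %6.1f\n", rtt, cwnd);
    }
    /* On long fat WAN paths this slow linear recovery after each halving
     * is what motivates the modified stacks mentioned above. */
    return 0;
}
```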

  11. Layer 5+ • Layer 5 example: the current generation of parallel FTPs • the next generation of FTP (or more general protocols) • Layer 5+: the ability to take high-level applications and steer them onto a research WAN (possibly by making small changes to the storage system)
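The core idea behind a parallel FTP is splitting the file into one contiguous byte range per stream. The sketch below shows only that split logic; it is generic, not any particular tool's algorithm, and the file size and stream count are illustrative.

```c
/* Sketch: partition a file into contiguous extents, one per parallel
 * transfer stream; each stream would then fetch its own byte range. */
#include <stdio.h>

int main(void)
{
    long long size = 2LL * 1024 * 1024 * 1024;   /* 2 GiB file */
    int streams = 4;

    for (int i = 0; i < streams; i++) {
        long long lo = size * i / streams;
        long long hi = size * (i + 1) / streams - 1;
        printf("stream %d: bytes %lld-%lld\n", i, lo, hi);
    }
    return 0;
}
```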
