
FAX Dress Rehearsal Status Report



Presentation Transcript


  1. FAX Dress Rehearsal Status Report Ilija Vukotic on behalf of the atlas-adc-federated-xrootd working group Computation and Enrico Fermi Institutes University of Chicago Software & Computing Workshop March 13, 2013

  2. All the slides were made by Rob Gardner; I have only updated them. He cannot be here due to the OSG All Hands meeting.

  3. Data federation goals • Create a common ATLAS namespace across all storage sites, accessible from anywhere • Provide easy-to-use, homogeneous access to data • Identified initial use cases • Failover from stage-in problems with local storage • Gain access to more CPUs using WAN direct read access • Allow brokering to Tier 2s with partial datasets • Use opportunistic resources without local ATLAS storage • Use as a caching mechanism at sites to reduce local data management tasks • Eliminate cataloging, consistency checking, and deletion services • A WAN data access group was formed in ATLAS to determine use cases & requirements on the infrastructure
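As a concrete illustration of the common-namespace goal, the sketch below opens a federated file through a redirector with PyROOT instead of staging it locally first; the redirector host, global path, and tree name are placeholders, not actual FAX endpoints.

```python
# Minimal sketch, assuming a placeholder redirector host and global path:
# read a federated file directly over the WAN with PyROOT.
import ROOT

url = "root://some-fax-redirector.example.org//atlas/dq2/user/testfile.root"

f = ROOT.TFile.Open(url)        # xrootd direct read, no local stage-in
if f and not f.IsZombie():
    tree = f.Get("physics")     # the tree name is an assumption
    if tree:
        print("entries:", tree.GetEntries())
    f.Close()
```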

  4. Implications for Production & Analysis • Behind the scenes in the Panda + Pilot systems: • Recover from failures of stage-in to local disk • This is in production at a few sites • Development coming to allow advanced brokering which includes network performance • Would mean jobs no longer require the dataset to be complete at a site • Allows “diskless” compute sites • Ability to use non-WLCG resources • “Off-grid” analysis clusters • Opportunistic resources • Cloud resources

  5. FDR testing elements Starting the week of January 21, we’ve been following a bottom-up approach which builds stability in the lower layers first. In order of increasing complexity: basic functionality (continuous), network cost matrix (continuous), HammerCloud & WAN-FDR jobs (programmatic), at-large users.

  6. Site Metrics • “Connectivity” – copy and read test matrices • HC runs with modest job numbers • Stage-in & direct read • Local, nearby, far-away • Load tests • For well-functioning sites only • Graduated tests: 50, 100, 200 jobs vs. various numbers of files • Will notify the site and/or list when these are launched • Results • Simple job efficiency • Wallclock time, # files, CPU %, event rate

  7. Probes, integrated with AGIS • At the start of the FDR: 22 sites • Currently: 32 sites • Redirection network touches six clouds (DE, FR, IT, RU, UK, US) plus CERN • Redirectors ready for ES and Asia regions • Direct xrdcp copy of test files • Copy using regional redirector

  8. Basic redirection functionality • Direct access from clients to sites • Redirection to non-local data (“upstream”) • Redirection from central redirectors to the site (“downstream”) • Uses a host at CERN which runs a set of probes against the sites
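A basic-functionality probe of this kind might be sketched as below, with placeholder hosts and a placeholder test-file path: copy the test file directly from the site door, then the same file through a regional redirector, and report the return codes and timings.

```python
# Sketch of a redirection probe (hosts and path are placeholders): copy a
# test file directly from the site, then through a regional redirector.
import subprocess
import time

TEST_FILE = "//atlas/dq2/user/fax-testfile.root"          # placeholder path
ENDPOINTS = {
    "direct":     "root://site-door.example.org",          # site xrootd door
    "redirected": "root://regional-redirector.example.org",
}

for label, host in ENDPOINTS.items():
    t0 = time.time()
    rc = subprocess.call(["xrdcp", "-f", host + TEST_FILE, "/tmp/probe.root"])
    print("%-10s rc=%d  %.1f s" % (label, rc, time.time() - t0))
```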

  9. Redirectors - regional and global Service monitor

  10. Connectivity matrix • Survey revealed complex security dependencies on the various voms and xrootd clients found at sites

  11. Cost matrix measurements • Cost of access (pairwise network links, storage load, etc.)

  12. Comparing local to wide-area performance [plot: ping time (ms) vs. read time (s), local and remote sites] • Each site can check its connectivity and IO performance for copy and direct read
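The per-site check could look roughly like this (URL and tree name are placeholders): time a full xrdcp copy and a direct ROOT read of the same test file, the two quantities compared on the slide.

```python
# Sketch, assuming a placeholder URL: compare copy time with direct-read time.
import subprocess
import time
import ROOT

URL = "root://site-door.example.org//atlas/dq2/user/fax-testfile.root"

t0 = time.time()
subprocess.call(["xrdcp", "-f", URL, "/tmp/fax-testfile.root"])
print("copy:        %.1f s" % (time.time() - t0))

t0 = time.time()
f = ROOT.TFile.Open(URL)                 # direct read over the WAN
if f and not f.IsZombie():
    tree = f.Get("physics")              # tree name is an assumption
    n = int(tree.GetEntries()) if tree else 0
    for i in range(min(1000, n)):        # read only the first 1000 events
        tree.GetEntry(i)
    f.Close()
print("direct read: %.1f s" % (time.time() - t0))
```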

  13. Programmatic HammerCloud tests • Defined a set of HammerCloud tests that probe the infrastructure and collect measures of various data access patterns • Set up by Johannes and Federica using a Higgs → WW and a SUSY D3PD analysis • 17.2.2 (ROOT 5.30) HWW analysis code which analyzes NTUP_SMWZ • 17.6.0 (ROOT 5.34) HWW analysis code which analyzes NTUP_SMWZ • 17.5.0 (ROOT 5.32) SUSY analysis code which analyzes NTUP_SUSYSKIM (p1328, p1329)

  14. HammerCloud testing • Pre-placed, site-unique SUSY and Higgs datasets at all sites • Realistic, typical analysis templates for the SUSY D3PD maker and Higgs analysis • New pilot equipped for stage-in or direct access with XRootD • Choose ANALY queue and redirector • Submission runs (in both modes): • Phase 1: Local performance • Phase 2: Nearby performance (e.g. within a cloud) • Phase 3: Far-away performance

  15. Test datasets Each of these datasets gets copied to a version with a site-specific name, so as to automatically test redirection access and to provide a benchmark comparison

  16. Test dataset distribution Both sets of test datasets have been distributed to most sites, with a small amount of cleanup left. These datasets will be used to gather reference benchmarks for the various access configurations

  17. Queue configurations • This turns out to be the hardest part • Providing federated XRootD access exposes the full extent of the heterogeneity of sites, in terms of schedconfig queue parameters • Each site’s “copysetup” parameters seem to differ, and specific parameter settings need to be tried in the HammerCloud job submission scripts using the overwriteQueuedata option • Amazingly, in spite of this a good fraction of sites are FAX-functional

  18. First phase of HC tests: local access • HC run • http://hammercloud.cern.ch/hc/app/atlas/test/20018041/ • HWW code with regular SMWZ input, FAX directIO, production-version pilots • This is for access to local data, but via direct-access xrootd • Results: • 26 sites in the test • 16 sites with job successes • 3 sites where no job started/finished during the test (CERN, ROMA1, OU_OCHEP_SWT2) • 1 site does not have the input data (GLASGOW) • 1 site blacklisted (FZU) • 1 site used xrdcp instead of directIO (BNL) • 4 sites with 100% failures (ECDF, IHEP, JINR, LANCS) • 4 sites with both job successes and failures (FRASCATI, NAPOLI, LRZ, RAL) • LRZ again experienced xrootd crashes • SLAC jobs finally succeed (Johannes, 3 weeks ago)

  19. HC efficiencies for selected sites

  20. First phase of HC tests: local access • HC run • http://hammercloud.cern.ch/hc/app/atlas/test/20018258/ • HWW code with regular SMWZ input, FAX directIO, production-version pilots • This is for access to local data, but with xrdcp to scratch • Results: • 28 sites in the test • 17 sites with job successes • 12 sites with actual xrdcp job successes • 7 sites used directIO: AGLT2, LRZ, MPPMU, MWT2, SLAC, SWT2_CPB, WUPPERTAL • 3 sites with all job failures: IHEP, JINR, SWT2_CPB • 3 sites with no jobs started during the test: ECDF, CAM, CERN • 1 site with a black-listed ANALY queue: OU_OCHEP_SWT2 • 2 sites with no input data: LANCS, GRID-LAL • (Johannes, 2 weeks ago)

  21. Systematic FDR load tests in progress Adapted WAN framework for specific FDR load tests Choose analysis queue & FAX server sites, #jobs, #files Choose access type: copy files or direct ROOT access (10% events, 30 MB client cache) Record timings in Oracle @ CERN
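To make the knobs concrete, here is an illustrative parameter set for one load-test submission; the queue and site names, and the field names themselves, are assumptions rather than the framework's actual schema.

```python
# Illustrative load-test configuration (all names and fields are assumptions;
# the real framework records its timings in Oracle at CERN).
load_test = {
    "analysis_queue":  "ANALY_MWT2",        # where the jobs run (assumed)
    "fax_servers":     ["AGLT2", "SLAC"],   # sites whose storage is read
    "n_jobs":          10,
    "files_per_job":   10,                  # ~50 GB of SMWZ files per job
    "access":          "direct",            # "copy" (xrdcp) or direct ROOT read
    "client_cache_mb": 30,                  # client cache for direct reads
    "event_fraction":  0.10,                # read 10% of the events
}
```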

  22. Systematic FDR load tests in progress • Individual job lists + links back to Panda logs (drill-down view)

  23. Systematic FDR load tests in progress • US cloud results • 10 jobs × 10 SMWZ files, ~50 GB, CPU limited • Factors affecting the spreads: pairwise network latency, throughput, storage “busyness”

  24. Systematic FDR load tests in progress US cloud results

  25. Systematic FDR load tests in progress EU cloud results

  26. Systematic FDR load tests in progress EU cloud results

  27. Controlled site “load” testing Two sites in the IT cloud being read by jobs running at CERN

  28. Federated traffic seen in the WLCG dashboard

  29. Federation traffic Modest levels now; will grow when in production • Oxford and ECDF switched to xrootd for local traffic • Prague users reading from EOS • Co-located Tier 3 client → Tier 2 server

  30. Studies from Shuwei Ye at BNL Comparing wall and CPU times for access from a Tier 3 to datasets at BNL, NET2 and RAL (only BNL results shown) • Concludes that a nearby redirector reduces the time to process (validates the ATLAS redirection model) • Usual performance hit for “long reach” datasets over slow networks (to RAL) • More systematic studies to come.

  31. ATLAS throughputs (from the US) FAX traffic is a tiny fraction of the total ATLAS throughput (for now)

  32. By destination (FTS + FAX)

  33. FAX by source cloud

  34. FAX by destination cloud

  35. Daily FAX transfers (the UDP collector was down for part of the period)

  36. Conclusions • The FDR has been a good exercise in exposing a number of site & system integration issues • Site-specific client differences → limited proxy check not always working • Non-uniform copysetup parameters in schedconfig across sites • Lack of fault checking in the rungen script for read failures • Tweaks necessary to brokering to allow sending jobs to sites missing datasets • In spite of this, much progress: • New functionality in the pilot to handle global paths without using dq2-client & forcing python 2.6 compatibility at all sites • First phase of programmatic HC stress testing nearing completion (local site access) • Some FAX accesses from Tier 3s • Test datasets in place • Next steps • Programmatic HC stress tests for regional data access (Phase 2) • Address the remaining integration issues above & continue to validate sites • Recruit early-adopting users and gather their feedback • Outsource monitoring services where possible to WLCG, including central UDP collectors, availability probes, etc. • Global and Rucio namespace mapping, development of a new N2N module • Set a timeframe for an ATLAS requirement of federating xrootd services at sites

  37. Thanks • A hearty thanks goes out to all the members of the atlas-adc-federated-xrootd group, especially site admins and providers of redirection & monitoring infrastructure • Special thanks to Johannes and Federica for preparing HC FAX analysis stress test templates and detailed reporting on test results • Simone & Hiro for test dataset distribution & Simone for getting involved in HC testing • Paul, John, Jose for pilot and wrapper changes • Rob for testing and pushing us all • Wei for doggedly tracking down xrootd security issues & other site problems & Andy for getting ATLAS’ required features into xrootd releases

  38. EXTRA SLIDES

  39. Data federated (1): Top 100 sites used by ATLAS (bold = FAX accessible) • * Includes tape, which we do not federate

  40. Data federated (2): Top 100 sites used by ATLAS (bold = FAX accessible)

  41. Data federated (3): Top 100 sites used by ATLAS (bold = FAX accessible)

  42. Full SMWZ DATA+MC coverage (>96% of the 694 total datasets) • Average number of replicas: ~2.5

  43. SkimSlimService • The FAX killer app • Free physicists from dealing with big data • Free IT professionals from dealing with physicists; let them deal with what they do best: big data • Efficiently use available resources (over-the-pledge, OSG, ANALY queues, EC2)

  44. How does it work? Use FAX to access all the data without the overhead of staging. Use optimally situated replicas (possible optimization: production D3PDs pre-placed at just a few sites, maybe even just one). Physicists request a skim/slim through a web service. A few variables could be added in flight. Produced datasets are registered in the name of the requester and delivered to the requested site. As all of the data is available in FAX, one can skim not only production D3PDs but any flat ntuple, or run multi-pass SkimSlims.
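Purely as an illustration, a skim/slim request recorded by the web service might look like the dictionary below; every field name, dataset name, and cut is a hypothetical placeholder, not the service's actual schema.

```python
# Hypothetical shape of one SkimSlimService request (all values are placeholders).
request = {
    "user":        "some.physicist",
    "input_ds":    "data12_8TeV.periodB.physics_Muons.NTUP_SMWZ/",   # placeholder
    "tree":        "physics",
    "branches":    ["el_pt", "el_eta", "mu_pt", "mu_eta", "met_et"],
    "selection":   "met_et > 25000.",            # skim cut, placeholder
    "new_vars":    {"el_pt_gev": "el_pt/1000."}, # a variable added in flight
    "destination": "MWT2_UC_LOCALGROUPDISK",     # placeholder delivery site
}
```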

  45. SSS - QOS A timely result is paramount! Several levels of service depending on the size of the input and output data and on importance. Example: • <1 TB (input+output) – 2-hour service – this one is essential, as only in this case will people skim/slim to only the variables they need without thinking “what if I forget something I’ll need?” • 1-10 TB – 6 hours • 10-100 TB – 24 hours • Extra-fast delivery: at EC2, but it comes with a price tag

  46. SkimSlimService • Web site at CERN: gets requests and shows their status • Handmade server [1]: receives web queries; collects info on datasets, files, trees, branches • Oracle DB at CERN: stores requests, splits them into tasks, serves as the backend for the web site • Executor at UC3 [1]: gets tasks from the DB; creates and submits condor SkimSlim jobs [2]; makes and registers the resulting DS [3] • [1] We have no dedicated resources for this; I used UC3, but any queue that has cvmfs will suffice. • [2] A modified version of filter-and-merge.py is used. • [3] Currently registered under my name, as I don’t have a production role.
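The executor's role can be pictured as a simple polling loop; the helper functions below are placeholders standing in for the Oracle queries, the condor submission of the modified filter-and-merge.py jobs, and the dataset registration described on the slide.

```python
# Sketch of the executor loop at UC3 (all helpers are placeholders).
import time

def fetch_pending_tasks():
    """Placeholder: query the Oracle backend for tasks not yet submitted."""
    return []

def submit_condor_job(task):
    """Placeholder: write a submit file and call condor_submit for one task."""

def fetch_finished_tasks():
    """Placeholder: query Oracle for tasks whose condor jobs have completed."""
    return []

def register_output_dataset(task):
    """Placeholder: register the resulting dataset in the requester's name."""

while True:
    for task in fetch_pending_tasks():
        submit_condor_job(task)           # runs the skim/slim via condor
    for task in fetch_finished_tasks():
        register_output_dataset(task)     # deliver/register the output DS
    time.sleep(60)                        # poll the DB once a minute
```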

  47. http://ivukotic.web.cern.ch/ivukotic/SSS/index.asp
