
3D Workshop @ SARA, Amsterdam

This document discusses the replication of the ATLAS databases to the Tier-1 sites and the associated resource requirements. It also covers results from recent replication tests, next steps, and a short update on LHCb plans.


Presentation Transcript


  1. 3D Workshop @ SARA, Amsterdam: ATLAS and LHCb experiment plans
     Florbela Tique Aires Viegas, Gancho Dimitrov, Marco Clemencic

  2. ATLAS Online Database, T0 and T1 Architecture (full production in autumn 2007)
     • The PVSS2COOL and TAGs applications feed the ATLAS Online RAC (6-node ATONR; COOL, MDT_BARREL)
     • Data flows from ATONR to the ATLAS Offline RAC (6-node ATLR)
     • Redo log transport services feed the real-time downstream capture database ATLDSC
     • ATLDSC propagates to the T1 sites: PIC, BNL, NORDUGRID, CNAF, SARA, IN2P3, TRIUMF, RAL, GridKa, ASGC

  3. ATLAS resource requirements to the T1s
     • ATLAS will probably have a maximum of 200 days of data taking per year
     • 50k active seconds per day (58% efficiency for each active day), i.e. ~10^7 events/day
     • 200 Hz during active data taking
     Estimates (a back-of-the-envelope check follows below):
     • Year 2008, 40% of a nominal year: TAGs 1.42 TB, COOL 190 GB, total 1.61 TB
     • Year 2009, 60% of a nominal year: TAGs 3.65 TB, COOL 490 GB, total 4.14 TB
     • Each additional nominal year: TAGs 6.09 TB, COOL 818 GB, total 6.9 TB
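The yearly figures follow from the stated running parameters; here is a minimal back-of-the-envelope sketch. The per-event TAG size used below (~3 kB) is an assumption inferred from the quoted ~6 TB/year total, not a number stated on the slide.

```python
# Back-of-the-envelope check of the nominal-year volumes quoted above.
DAYS_PER_YEAR = 200                # data-taking days per nominal year
ACTIVE_SECONDS_PER_DAY = 50_000    # ~58% of 86,400 s
EVENT_RATE_HZ = 200                # rate during active data taking

events_per_day = ACTIVE_SECONDS_PER_DAY * EVENT_RATE_HZ   # ~1e7 events/day
events_per_year = events_per_day * DAYS_PER_YEAR          # ~2e9 events/year

# Assumed per-event TAG size (~3 kB), chosen so the total matches ~6 TB/year.
TAG_BYTES_PER_EVENT = 3.0e3

tag_tb_per_year = events_per_year * TAG_BYTES_PER_EVENT / 1e12
print(f"events/day : {events_per_day:.2e}")
print(f"events/year: {events_per_year:.2e}")
print(f"TAG volume : {tag_tb_per_year:.1f} TB per nominal year")
```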

  4. ATLAS resource requirements to the T1s
     For each nominal year: TAGs 6.09 TB, COOL 818 GB, total 6.9 TB
     These figures are for "real" data only: all indexing and materialized-view structures for COOL, and a single collection for TAGs.
     T1s should add to these numbers the Oracle overhead (an illustrative sizing sketch follows below):
     • Auxiliary tablespaces (SYSTEM, SYSAUX, UNDO)
     • Space for the backup and recovery policy agreed with CERN IT for consistency:
       • Archive log management
       • Flashback recovery area
       • Backup on disk
     • Space for mirroring in the ASM storage system
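To make the overhead items concrete, here is a minimal sizing sketch. The mirroring, backup and recovery-area factors are purely illustrative assumptions, not the policy agreed with CERN IT or numbers from the slide.

```python
# Illustrative sizing sketch for the Oracle overhead listed above.
# All overhead factors are assumptions for illustration only.
RAW_TB_PER_NOMINAL_YEAR = 6.9   # TAGs + COOL "real" data, from the slide

ASM_MIRROR_FACTOR = 2.0         # ASM normal redundancy keeps two copies
ON_DISK_BACKUP_FACTOR = 1.0     # assume one full backup copy kept on disk
RECOVERY_AREA_FACTOR = 0.3      # assumed archive-log + flashback headroom
AUX_TABLESPACES_TB = 0.1        # assumed SYSTEM/SYSAUX/UNDO allowance

total_tb = (RAW_TB_PER_NOMINAL_YEAR * ASM_MIRROR_FACTOR
            + RAW_TB_PER_NOMINAL_YEAR * (ON_DISK_BACKUP_FACTOR + RECOVERY_AREA_FACTOR)
            + AUX_TABLESPACES_TB)
print(f"raw disk to provision per nominal year: ~{total_tb:.1f} TB")
```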

  5. Results from the latest replication tests
     We have been running two replication flows:
     • one for the COOL 'mini' conditions data challenge: ATLAS validation cluster INTR => RAL, GridKa, IN2P3, SARA, CNAF
     • a second one for the Streams throughput test via downstream capture: INTR => ATLDSC => RAL, GridKa, CNAF, IN2P3 (later substituted by ASGC)
     Outcome: the COOL 'mini' conditions replication ran successfully for about 3 weeks. During the first 2 weeks a job inserted approx. 18 MB of data every 60 min (~432 MB/day); in the 3rd week every 30 min (~864 MB/day), mainly large strings or CLOBs (not many LCRs were generated). The daily volumes are simply the job size times the number of insert cycles per day (see the check below).
     Both replication flows had a hard time recovering from the accumulated latency when one of the T1s had problems: spilling of LCRs to disk takes place (snapshots on the next slides).
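A short check of the quoted daily volumes, restating the slide's own numbers:

```python
# Daily volumes in the COOL 'mini' challenge, from the insert cadence.
MB_PER_INSERT_JOB = 18
print(f"every 60 min: {MB_PER_INSERT_JOB * 24} MB/day")   # ~432 MB/day (weeks 1-2)
print(f"every 30 min: {MB_PER_INSERT_JOB * 48} MB/day")   # ~864 MB/day (week 3)
```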

  6. Test environment for Richard/Stefan's mini Conditions Data Challenge
     • Jobs insert about 18 MB every 30 min into the ATLAS_COOL_REPL_T1S schema on the INTR ATLAS Validation RAC
     • The COOL_REPL_T1S data is replicated to GridKa, IN2P3, SARA, RAL and CNAF
     • Athena jobs read COOL data from the replicas

  7. ATLAS downstream capture test environment (status on 15-16.03.2007)
     • Stefan Stonjek's COOL client application and a TAGs test application feed the ATLAS Validation RAC (2-node INTR)
     • Redo log transport services feed the real-time downstream capture database ATLDSC
     • ATLDSC propagates to the T1 sites: PIC, BNL, NORDUGRID, CNAF, SARA, IN2P3, TRIUMF, RAL, GridKa, ASGC (phase 2 sites to be added)

  8. One of the COOL replica tests
     • 10 COOL clients insert into 200 COOL tables, at a rate of ~5.05 MB per minute
     • Duration of the test: 8 hours; spilling of LCRs to disk started after the first 3 hours (rough volume arithmetic below)
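Rough volume arithmetic for this test, derived only from the quoted rate and duration; the actual spill point depends on Streams buffer memory and apply throughput, which are not given here.

```python
# Rough volumes for the COOL replica test above.
RATE_MB_PER_MIN = 5.05
TEST_HOURS = 8
HOURS_BEFORE_SPILL = 3

total_gb = RATE_MB_PER_MIN * TEST_HOURS * 60 / 1024
before_spill_mb = RATE_MB_PER_MIN * HOURS_BEFORE_SPILL * 60
print(f"total inserted over {TEST_HOURS} h: ~{total_gb:.1f} GB")
print(f"inserted before spilling began: ~{before_spill_mb:.0f} MB")
```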

  9. Pic 1: Propagation to RAL. Pic 2: Propagation to IN2P3.

  10. Known problems
     • A trial to drop the propagation to a non-performing site causes severe cache row locking. The whole Streams setup on the source DB has to be dropped and re-created. Known bug; waiting for a fix from Oracle.
     • Adding very simple rules to filter out a certain set of tables at the CAPTURE causes performance degradation. Most of the time the CAPTURE is in state 'evaluating rule'.
     • The above has to be checked in case we move the rules from the CAPTURE to the PROPAGATIONs.
     • GRANT statements on any object from the replicated schema are rejected by the CAPTURE when the rules mentioned above are in place.

  11. Next steps
     • Test some of the Tier-1s with 1000 Athena jobs requesting the same COOL data within almost the same short time window (simulation of massive second-pass reconstruction). For that we need the help of the T1 institutes to provide resources for the duration of the tests.
     • Hypothesis tested: can the T1 clusters handle the predicted load? If not, how many resources do these jobs consume, and how much expansion is needed for the future?
     • Resources needed for this test: O(100) client machines with the reconstruction software and sufficient memory and CPU.
     • Challenge: to get all the clients to connect in the same time window (a minimal driver sketch follows after this list).
     • European T1s will be tested in turn, to provide conclusive measurements.
     • Can the T1s spare some client machines?
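As a concrete illustration of the "same short time window" challenge, here is a minimal Python sketch of a synchronized client driver. The COOL read is a placeholder, and the real test would run Athena jobs on O(100) machines rather than threads in one process.

```python
# Minimal sketch of a synchronized load driver for the "many clients in a
# short window" test. run_cool_read() is a placeholder, not the real job.
import threading
import time

N_CLIENTS = 100
start_barrier = threading.Barrier(N_CLIENTS)

def run_cool_read(client_id: int) -> None:
    """Placeholder for the conditions-data query a real job would issue."""
    time.sleep(0.1)

def client(client_id: int) -> None:
    start_barrier.wait()                 # all clients released at the same moment
    t0 = time.time()
    run_cool_read(client_id)
    print(f"client {client_id:3d} finished in {time.time() - t0:.2f} s")

threads = [threading.Thread(target=client, args=(i,)) for i in range(N_CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```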

  12. Next steps
     • Backup and recovery tests at Tier-1s: when and how?
     • Recovery tests between CERN and the T1s, to test the backup and replication strategies established by Gordon and Eva.
     • When do we schedule them?
     • What does the coordinator of the tests need from ATLAS, in terms of time and configuration preparation?

  13. Next steps
     • Preparation for COOL replication in production mode in the second half of April (when ATLAS software release 13.0.1 will be available)
     • This will imply migration of schemas, and some preparation of the production database to be ready for replication
     • Management of the system will be IT's responsibility
     • ATLAS is still uneasy about the capability to respond to problems and about the robustness of the Oracle Streams software for production

  14. Next steps
     • Organize the COOL accounts as shown in the picture on the next slide and test the MV refresh jobs
     • This database architecture was devised by the ATLAS DBAs and Richard Hawkings to accommodate efficient querying of the COOL database without influencing the Apply process
     • Hopefully we can test it before the move to production, if the present Oracle Streams problems are solved in a timely fashion

  15. COOL schema organization
     • The PVSS2COOL application, fed from the ATLAS PVSS Oracle archive on the ATLAS Online RAC, writes into the ATLAS_COOL_DCS3D schema, whose COOL tables carry primary keys only; this schema lives on the ATLAS Offline RAC and is replicated to the Tier-1s.
     • On both the Offline RAC and the Tier-1s, the ATLAS_COOL_DCS schema holds materialized views with a full set of indexes, refreshed on demand from ATLAS_COOL_DCS3D (a refresh sketch follows below).
     • The COOL online sub-detector accounts (ATLAS_COOLONL_xxx) and offline accounts (ATLAS_COOLOFL_xxx) keep all their indexes.
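Where the diagram says "Refresh MVs on demand", the materialized views in ATLAS_COOL_DCS would be refreshed from the replicated PK-only tables. The following is a minimal sketch assuming a cx_Oracle client; the connection details and MV name are hypothetical placeholders, not the real ATLAS configuration.

```python
# Sketch of an on-demand materialized-view refresh, as in the
# "Refresh MVs on demand" step above. Connection and MV names are placeholders.
import cx_Oracle

conn = cx_Oracle.connect("atlas_cool_dcs", "secret", "offline-rac-service")  # hypothetical

def refresh_mv(mv_name: str) -> None:
    """Complete refresh of one materialized view via the DBMS_MVIEW package."""
    cur = conn.cursor()
    try:
        # 'C' requests a complete refresh; a fast refresh ('F') needs MV logs.
        cur.callproc("DBMS_MVIEW.REFRESH", [mv_name, "C"])
    finally:
        cur.close()

refresh_mv("ATLAS_COOL_DCS.SOME_FOLDER_MV")   # hypothetical MV name
conn.close()
```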

  16. ATLAS concerns and open issues
     • Preparedness of the Oracle Streams mechanism for production
     • Administration of the system becomes very difficult if sites need to be re-instantiated because of a large growth rate; the problem is ten-fold worse with the TAGs application
     • Problems with transportable tablespaces on large partitioned tables raised concern about the backup solution for TAGs replication; tests are underway to pin down the problem
     • Responsibilities for management of the Streams environment in production mode: protocols of interaction with the IT Division and with the T1 DBAs for efficient problem response and maintenance tasks

  17. LHCb plans (update from Marco Clemencic)
     • By the end of March (in 2 weeks' time), test the new version of the LHCb software against all the Tier-1s
     • Beginning of April: the system is set up for production, to be used for the alignment challenge
     • LFC services at all Tier-1s to come up before summer
     (the next slides are a reminder of the LHCb deployment model and resource requirements presented at the previous 3D workshop)
