
CAF Meeting June 2019 - FR-T1/T2 Operation and Cloud Management OTPs

This report provides information on the recent CAF meeting, allocated OTPs for FR-T1/T2 operation and cloud management, and upcoming meetings and conferences in the ATLAS community.


Presentation Transcript


  1. Introduction
Frédéric Derue, LPNHE Paris
Calcul ATLAS France (CAF) meeting, CC-IN2P3 Lyon, 17th June 2019

  2. Agenda and calendar
● Today: CAF meeting on 17th June [indico]; the next CAF meeting should be in (early) September
● Recent meetings: LCG-FR Sites meeting, 22-24th May at LAPP [indico]
● Meetings to come: ATLAS S&C, New York, 24-28 June [indico]; LCG-FR CoDir, 5th July [indico]

  3. ATLAS resource usage since the last CAF
● Information taken from [link] and [link]
● Different activities running; >300k grid+cloud job slots in use
● Usage dominated by MC Full Simulation

  4. Network
● LHCONE Japan-CERN: the WLCG Overview Board (13th June) decided, to save money, to cancel the CERN - SURFsara link option in 2022 [link]
→ this link is used as an LHCOPN backup for Korea, Taiwan and Russia, and as an LHCONE link for Japan, Korea, Taiwan and Russia

  5. OTPs for T1/T2 operation and Cloud Operation/Management
Message for ICB members: collect OTPs for T1/T2 operation and Cloud Operation/Management for the 1st semester of 2019
→ in practice collected by S. Jézéquel, to be provided by 1st July
⇒ as in previous reports + as in the recent email to CAF members

1) OTP for FR-T1 & FR-T2s — FTE per laboratory (among which FTE of ATLAS members):
- LAL: 0.30
- IRFU: 1.15 (among which 0.15 J-P. Meyer, ATLAS)
- LPNHE: 1.10 (among which 0.10 F. Derue, ATLAS) → + V. Mendoza 0.6 and A. Bailly-Reire 0.4 as ATLAS
- LPSC: 0.85 (among which 0.10 S. Crépé-Renaudin, ATLAS)
- CPPM: 0.60 (among which 0.40 E. Knoops, ATLAS)
- LPC: 0.75
- LAPP: 1.25 (among which 0.10 S. Jézéquel and 0.10 F. Chollet, ATLAS)
- CC: 3.30 (among which 2.80 for FA-IN2P3 and 0.60 for FA-CEA)
Requested values are identical to Allocated ones.
→ but all these numbers are for the entire year, not for one semester! (see the conversion sketch below)
Class 4:
→ should/could we explicitly add all CAF members and/or T2 site representatives, at 0.1 FTE each? For example, the following are missing:
- LAL: L. Poggioli (0.1 FTE)
- CPPM: E. Le Guirriec (0.05 FTE), A. Dupperrin (0.05 FTE)
- CC: E. Vamvakopoulos (xx FTE)
OTP link
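Since the table quotes full-year numbers while the ICB request covers the first semester only, a minimal bookkeeping sketch may help; it assumes, purely as an illustration and not as a stated CAF rule, that the annual FTE splits evenly between the two semesters. Labels and values are copied from the table above.

```python
# Illustrative only: annual FTE values copied from the table above;
# the even 50/50 split per semester is an assumption, not a CAF rule.
ANNUAL_FTE = {
    "LAL": 0.30, "IRFU": 1.15, "LPNHE": 1.10, "LPSC": 0.85,
    "CPPM": 0.60, "LPC": 0.75, "LAPP": 1.25, "CC": 3.30,
}

def semester_fte(annual: float) -> float:
    """Assumed conversion: annual FTE spread evenly over two semesters."""
    return annual / 2.0

for lab, fte in ANNUAL_FTE.items():
    print(f"{lab:6s} annual {fte:.2f} FTE -> 1st semester {semester_fte(fte):.2f} FTE")

total = sum(ANNUAL_FTE.values())
print(f"TOTAL  annual {total:.2f} FTE -> 1st semester {semester_fte(total):.2f} FTE")
```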

  6. OTPs for T1/T2 operation and Cloud Operation/Management
Message for ICB members: collect OTPs for T1/T2 operation and Cloud Operation/Management for the 1st semester of 2019
→ in practice collected by S. Jézéquel, to be provided by 1st July

2) OTP for FR-Cloud operation

OTP 1st semester 2019, Cloud support:
- Allocated: 75%
  FR Funding Agencies: S. Crépé (10%), F. Derue (15%), E. Le Guirriec (15%), L. Poggioli (30%)
  non-FR: C. Visan (5%)
- Requested: 100%

OTP 1st semester 2019, Cloud management:
- Allocated: 70%
  FR Funding Agencies: J-P. Meyer (10%), C. Biscarat (5%), F. Derue (15%), L. Poggioli (15%)
  non-FR: G. Stoicea (5%), M. Ciubancan (5%), T. Mashimo (5%), X. Wu (5%)
- Requested: 70%

Class 3:
→ should correspond to individual tasks
→ cloud management: don't we mix cloud management and site management (i.e. Class 4)? (a cross-check of the quoted totals follows below)
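As a quick sanity check on the quoted totals (not an official accounting), the individual allocations above can be summed and compared with the stated 75% and 70% figures; names and percentages are copied verbatim from the slide.

```python
# Cross-check only: individual allocations copied from the slide above,
# compared against the stated totals (75% support, 70% management).
CLOUD_SUPPORT = {
    "S. Crépé": 10, "F. Derue": 15, "E. Le Guirriec": 15,
    "L. Poggioli": 30, "C. Visan (non-FR)": 5,
}
CLOUD_MANAGEMENT = {
    "J-P. Meyer": 10, "C. Biscarat": 5, "F. Derue": 15, "L. Poggioli": 15,
    "G. Stoicea (non-FR)": 5, "M. Ciubancan (non-FR)": 5,
    "T. Mashimo (non-FR)": 5, "X. Wu (non-FR)": 5,
}

for label, alloc, stated in (("cloud support", CLOUD_SUPPORT, 75),
                             ("cloud management", CLOUD_MANAGEMENT, 70)):
    total = sum(alloc.values())
    flag = "OK" if total == stated else "MISMATCH"
    print(f"{label}: individuals sum to {total}% vs stated {stated}% -> {flag}")
```

Run as-is, the cloud-management percentages listed above sum to 65% rather than the stated 70% allocation, which may be worth double-checking in the OTP entry.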

  7. Abstracts, conferences etc.
FR-ALPES CHEP 2019 abstract proposal
- Title: Implementation and performances of a DPM federated storage and integration within the ATLAS environment
- Type: Oral (poster if oral is not possible)
- Authors: Claire Bourdarios, Frédérique Chollet, Sabine Crépé-Renaudin, Christine Gondrand, Muriel Gougerot, Stéphane Jézéquel, Philippe Séraphin
- Proposed speaker/presenter: Sabine Crépé-Renaudin or Stéphane Jézéquel
- Proposed track: Track 4 - Data Organisation, Management and Access
- Proposed as a plenary talk: No
- Abstract (plain text): The increase of storage usage at the HL-LHC horizon will induce scalability challenges for the data management tools and for storage operation by site administrators. The evaluation of possible solutions for storage and their access within the DOMA, DOMA-FR (IN2P3 project contribution to DOMA) and ESCAPE initiatives is a major activity to select the most optimal ones from the experiment and site points of view. The LAPP and LPSC teams have put their expertise and computing infrastructures together to build the FR-ALPES federation and set up a DPM federated storage. Based on their experience of Tier2 WLCG site management, their involvement in the ATLAS Grid infrastructure and thanks to the flexibility of ATLAS and Rucio tools, the integration of this federation into the ATLAS Grid infrastructure has been straightforward. In addition, the integrated DPM caching mechanism, including volatile pools, is also implemented. This infrastructure is foreseen to be a testbed for a DPM component within a DataLake. This presentation will describe the testbed (infrastructures separated by a few ms in Round Trip Time) and its integration into the ATLAS computing framework. The impact on the sites and on ATLAS operations of both the testbed implementation and its use will also be shown, as well as the measured performance in data access speed and reliability.
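For readers unfamiliar with how such a federated endpoint becomes visible to ATLAS tooling, below is a minimal sketch using the Rucio Python client. The RSE name, hostname, and path prefix are hypothetical placeholders, not the actual FR-ALPES configuration, and the exact protocol parameters depend on the Rucio and DPM versions deployed.

```python
# Minimal sketch: registering a (hypothetical) DPM federated endpoint as a
# Rucio Storage Element (RSE). All names and parameters below are
# placeholders, NOT the real FR-ALPES setup.
from rucio.client import Client

client = Client()

RSE = "FR-ALPES_DATADISK"  # hypothetical RSE name

# Create the RSE entry in the Rucio catalogue.
client.add_rse(RSE)

# Declare one access protocol on the federation head node (parameters assumed).
client.add_protocol(RSE, {
    "scheme": "davs",
    "hostname": "dpm-head.example.fr",           # placeholder endpoint
    "port": 443,
    "prefix": "/dpm/example.fr/home/atlas/",     # placeholder namespace
    "impl": "rucio.rse.protocols.gfal.Default",  # gfal2-based transfers
    "domains": {
        "lan": {"read": 1, "write": 1, "delete": 1},
        "wan": {"read": 1, "write": 1, "delete": 1, "third_party_copy": 1},
    },
})

# Optional attributes let experiment tooling select the RSE like other Tier-2 disks.
client.add_rse_attribute(RSE, "tier", "2")
```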

  8. Abstracts, conferences etc.
Conference: ICASC, 12-14.09.2019, Sinaia, Romania, http://icasc2019.ifin.ro
- Title: Status and prospects of ATLAS (FR Cloud) computing
- Presenter: Frédéric Derue (LPNHE Paris)
- Keywords: ATLAS; Distributed computing; Computing models; HL-LHC
- Abstract: The ATLAS experiment successfully commissioned a software and computing infrastructure to support the physics program during LHC Run 2. The next phases of the accelerator upgrade will present new challenges in the offline area. In particular, at the High-Luminosity LHC the data-taking conditions will be very demanding in terms of computing resources: between 5 and 10 kHz of event rate from the HLT to be reconstructed (and possibly further reprocessed), with an average pile-up of up to 200 events per collision, and an equivalent number of simulated samples to be produced. The same parameters for the current run are lower by up to an order of magnitude. While processing and storage resources would need to scale accordingly, the funding situation allows one at best to consider a flat budget over the next few years for offline computing needs. This presentation gives the status of the current ATLAS usage of computing and storage resources and the expected challenge of the HL-LHC phase, and presents ideas about the possible evolution of the ATLAS computing model, the distributed computing tools, and the offline software to cope with such a challenge. The particular case of the ATLAS FR-Cloud, which includes the WLCG sites in Romania, will be discussed.
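To give a feel for the scale behind these numbers, here is a back-of-the-envelope estimate. The ~7×10^6 seconds of live data taking per year is an assumed planning figure (not from the abstract), and the ~1 kHz Run 2 average HLT output is an assumption consistent with the abstract's "lower by up to an order of magnitude".

```python
# Back-of-the-envelope scaling behind the abstract's claim. Assumptions:
# ~7e6 s of LHC live time per year (typical planning figure, not from the
# abstract) and ~1 kHz average HLT output rate in Run 2.
LIVE_SECONDS_PER_YEAR = 7.0e6

def events_per_year(hlt_rate_hz: float) -> float:
    """Events recorded per year at a given average HLT output rate."""
    return hlt_rate_hz * LIVE_SECONDS_PER_YEAR

run2    = events_per_year(1_000)   # assumed Run 2 average
hl_low  = events_per_year(5_000)   # lower HL-LHC estimate from the abstract
hl_high = events_per_year(10_000)  # upper HL-LHC estimate from the abstract

print(f"Run 2 : ~{run2:.1e} events/year")
print(f"HL-LHC: ~{hl_low:.1e} to ~{hl_high:.1e} events/year "
      f"(x{hl_low/run2:.0f} to x{hl_high/run2:.0f} vs Run 2)")
```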
