
Evolution of the Atlas data and computing model for a Tier-2 in the EGI infrastructure

Álvaro Fernández Casaní

on behalf of the IFIC Atlas computing group

II PCI2010 Workshop

Valencia, 10th-12th January 2012


Outline

  • Introduction: IFIC and the Spanish ATLAS Tier-2

  • Evolution of the Atlas computing model

    • Flattening the model to a Mesh

    • Tier 2 duties

    • Data distribution: Tier2s policy

      • Data shares ( site classification)

      • Availability and Connectivity

      • Networking

    • Dynamic data distribution and caching

    • Remote data access

  • A real user example


ATLAS Centers

http://dashb-earth.cern.ch/dashboard/dashb-earth-atlas.kmz


Introduction


Spain's Contribution to ATLAS


Spanish ATLAS Tier-2


IFIC Computing Infrastructure Resources

ATLAS

  • CSIC resources (not experiment resources):

    • 25% IFIC users

    • 25% CSIC

    • 25% European Grid

    • 25% Iberian Grid

  • To migrate scientific applications to the Grid

EGI


Last Year Summary

  • 3,579,606 Jobs

  • 6,038,754 CPU consumption hours

  • 13,776,655 kSI2K (normalised CPU time; see the cross-check sketch below)

  • Supporting 22 Virtual Organizations (VOs)
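
As a rough cross-check (a back-of-the-envelope calculation added here for illustration, assuming the normalised figure is expressed in kSI2K-hours, which the slide does not spell out), dividing the normalised number by the raw CPU hours gives the average normalisation factor of the farm:

```python
# Back-of-the-envelope check, assuming the normalised figure is in kSI2K-hours
# (an assumption; the unit is not spelled out on the slide).
raw_cpu_hours = 6_038_754            # CPU consumption hours for the year
normalised_ksi2k_hours = 13_776_655  # normalised CPU time for the year

avg_factor = normalised_ksi2k_hours / raw_cpu_hours
print(f"average normalisation factor ~ {avg_factor:.2f} kSI2K per core")
# prints roughly 2.28
```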


2012 Forecast for IFIC

  • For 2012:

    • We already fulfil the CPU requirements

    • Increasing Disk:

      • 230 TB => 4 SuperMicro servers x 57.6 TB (2 TB disks)


Evolution of the ATLAS Computing Model


Previous ATLAS Cloud Model

  • Hierarchical model based on the MONARC network topology

  • Clouds: a Tier-1 with its geographically related Tier-2s

  • Possible communications:

    • T0-T1

    • T1-T1

    • Intra-cloud T1-T2

  • Forbidden communications:

    • Inter-cloud T1-T2

    • Inter-cloud T2-T2

Simone Campana, Software & Computing Workshop (April ’11)

https://indico.cern.ch/conferenceDisplay.py?confId=119169


Shortcomings of Cloud Boundaries

DATA FLOWS TOO STRICT

OPERATIONAL PROBLEMS

More info: Simone Campana, Software & Computing Workshop (April ’11)

https://indico.cern.ch/conferenceDisplay.py?confId=119169

  • Consolidation of user analysis outputs is problematic

    • Analysis runs over many clouds

    • Consolidation needs to “hop” the data through the T1s

  • Monte Carlo production must confine one task to one cloud

    • To facilitate the output aggregation at the T1

  • Replication of datasets (PD2P) is less flexible

    • Need to replicate from the T1 to T2s of the same cloud

    • Or “hop” through a T1

  • T2s cannot really be used as storage for “primary” data

    • Issues in creating secondary copies at other T1s

  • Limited Tier-2 usability due to T1 downtimes

    • Dependency on the LFC data catalog


Solving Issues

LFC migration for the Spanish cloud is planned this week!

  • Make Tier-2 activities more independent

  • Reduce service dependencies:

    • Move Catalog (LFC) to CERN:

      • One backup in the US, another in the UK

      • Remove from tier-1

      • Analysis jobs will be able to run during T1 downtime

      • Production jobs can keep running during T1 downtime

  • Benefit from technology improvements:

    • Today's network does not really resemble the MONARC model

      • Many T2s are connected very well with many T1s

      • Many T2s are not that well connected with their T1

    • So it makes sense to break cloud boundaries.


Flattening the Model

The network is a key component to optimize the use of storage and CPU

  • Break cloud boundaries:

    • Let DDM freely transfer from every site to every site

    • Inter-cloud direct transfers

    • Multi-cloud production

  • Not quite there

    • Some links simply have limited bandwidth

    • In those cases, several hops will still be needed

  • Defining T2Ds is an attempt to break cloud boundaries “for the cases where it makes sense”


Categorization of Sites: T2D

  • T2Ds are the Tier-2 sites that can directly transfer files of any size (especially large files), not only from/to the T1 of the cloud to which they are associated, but also from/to the T1s of the other clouds.

  • T2Ds are candidates for

    • multi-cloud production sites

    • primary replica repository sites (T2PRR)


ATLAS Tier-2 Share Revision

  • In 2010 many Tier-2s filled up, prompting a revision of the T2 data distribution shares

  • The data distribution to Tier-2 (and Tier-3g) sites should take into account:

    • the network connectivity of the sites (thus, T2Ds should have more data)

    • the availability for analysis

  • Introduced at the Software & Computing Workshop, 20 July 2011

    • It is up to ADC Ops to define the distribution policy among T2s, giving preference to:

      • Reliable sites for analysis (fraction of time with analysis queue online)

      • Well connected sites (T2Ds) to transfer datasets quickly

    • Started from summer 2011

      • Based on T2D list defined on 1st July 2011

    • Simplified into 4 groups, each treated uniformly (see the sketch after this list):

      • Alpha (60% share, 17 sites): T2Ds with > 90% reliability

      • Bravo (30% share, 21 sites): non-T2Ds with > 90% reliability

      • Charlie (10% share, 12 sites): any T2 with 80% < reliability < 90%

      • Delta (0% share, 13 sites): any T2 with reliability < 80%

    • A single list is used both for pre-placement and dynamic data placement
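
A minimal sketch (hypothetical Python, not the actual ADC Ops implementation) of how a site falls into one of the four groups, given its T2D status and its analysis-queue reliability:

```python
def classify_site(is_t2d: bool, reliability: float) -> str:
    """Assign a Tier-2 to its data-share group (hypothetical illustration).

    `reliability` is the fraction of time the analysis queue was online (0-1).
    """
    if reliability < 0.80:
        return "Delta"    # 0% share
    if reliability < 0.90:
        return "Charlie"  # 10% share
    return "Alpha" if is_t2d else "Bravo"   # 60% / 30% share

# IFIC, a T2D with > 90% reliability, lands in the Alpha group:
print(classify_site(is_t2d=True, reliability=0.95))    # Alpha
print(classify_site(is_t2d=False, reliability=0.85))   # Charlie
```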


ATLAS Tier-2


Spanish Cloud Data Shares Last Month

http://atladcops.cern.ch:8000/drmon/crmon_TiersInfo.html

IFIC

IFIC is an alpha T2D site -> also a candidate for more datasets via PD2P (see later)


Availability

  • HammerCloud for ATLAS:

    • Distributed Analysis testing system

    • Prevents jobs from going to problematic sites

    • Can exclude sites whose test jobs fail

  • EGI has its own, different availability tools:

    • Based on the ops VO

    • Can have conflicting results:

      • Last month IFIC was an ATLAS alpha site (> 90% availability)

      • But it had just 64% EGI ops availability due to a configuration issue

LAST MONTH

http://dashb-atlas-ssb.cern.ch/dashboard/request.py/siteviewhistorywithstatistics?columnid=564&view=shifter%20view


Connectivity

Transfer monitoring

Inter-cloud direct transfers


Services

  • User Interfaces:

    • UI00 (LVS-DR alias):

      • UI04: SL5, 64-bit

      • UI05: SL4, 64-bit

      • UI06: SL5, 64-bit

  • Computing Elements (CREAM CE) and Worker Nodes:

    • WN (CE02): SL5 64-bit, gLite 3.2, with MPI and shared home on Lustre

    • WN (CE03): SL5 64-bit, gLite 3.2 (Puppet)

    • WN (CE05): SL5 64-bit, gLite 3.2 (Puppet)


Job Distribution at Tier-2

PIC TIER-1 – 2012

More analysis jobs at IFIC Tier-2

  • Tier 2 usage in data processing

    • More job slots used at Tier2s than at Tier1

      • Large part of MC production and analysis done at Tier2s

  • More ‘digi+reco’ jobs and “group production” at Tier-2s

    • less weight at Tier-1s

  • Production shares to be implemented to limit “group production” jobs at Tier-1, and run at Tier-2s

  • Analysis share reduced at Tier-1s


DATADISK in Tier-2

February ‘12

[Plot: T1_datadisk vs T2_datadisk: total capacity (usable/margin) and space used according to SRM and dq2, split into primary and secondary replicas; scale ~20 PB]

http://bourricot.cern.ch/dq2/accounting/atlas_stats/0/

  • Tier 2 usage disk space

    • T2_datadisk ≈ T1_datadisk in volume

    • T2_datadisk ≈ input for data processing (secondary replicas)


For Spanish Sites

February ‘12

http://bourricot.cern.ch/dq2/accounting/t2_spacetoken_view/SPAINSITES/datadisk/30/


IFIC Storage Resources

  • Based on Sun hardware:

    • X4500 + X4540

    • Lustre v1.8

    • New disk servers:

      • SuperMicro, SAS disks, 2 TB per disk, 10 GbE connectivity

    • SRMv2 (StoRM)

    • 3 x GridFTP servers


Storage Resources

Pledges


Storage Resources (cont.)


IFIC Lustre Filesystems and Pools

  • Separate T2/T3 ATLAS pools, isolated from the other VOs

  • Better management and performance

  • 4 filesystems with various pools:

    • /lustre/ific.uv.es: read-only on WNs and UIs, read-write on GridFTP + SRM servers

    • /lustre/ific.uv.es/sw: software, read-write on WNs and UIs (ATLAS uses CVMFS)

    • /lustre/ific.uv.es/grid/atlas/t3: space for T3 users, read-write on WNs and UIs

    • [email protected]:/homefs mounted on /rhome (type lustre): shared home for users and MPI applications, read-write on WNs and UIs

CVMFS FOR ATLAS

  • With the Lustre 1.8.1 release, we have added pool capabilities to the installation

  • This allows us to partition the hardware inside a given filesystem:

    • Better data management

    • Assign specific OSTs to an application or group of users

    • Can separate heterogeneous disks in the future


CERN CVMFS

http://atlas-install.roma1.infn.it/atlas_install/usage_plots.php

CVMFS: a caching, HTTP-based, read-only filesystem optimised for delivering experiment software to (virtual) machines.


CVMFS at IFIC (+ Squid)

  • Installed in all our WNs and UIs since September 2011

    • Easy installation (only 2 configuration files)

    • 20 GB/repository

    • No dedicated partition is needed

  • Using the same Squid as Frontier (sq5.ific.uv.es)

    • The Squid server points to the public replicas (CERN, BNL, RAL)

  • Performance so far has been very good

  • We monitor Squid via Cacti (SNMP)

  • Reduced job setup time

  • Better performance for analysis jobs

  • Reduced load on Lustre MDS

Sept ’2011


StoRM SRM Server

Accesses the storage like a local file system, so it can create and control all the data available on disk through an SRM interface.

Coordinates data transfers; the actual data streams are transferred by a GridFTP server on another physical machine.

Enforces the authorization policies defined by the site and the VO.

We developed an authorization plugin that respects the local file system, with the corresponding user mappings and ACLs.


IFIC Network

Recent upgrade of the GridFTP servers to satisfy the requirements for alpha sites

  • Cisco 4500 – core centre infrastructure

  • Cisco 6500 – scientific computing infrastructure

  • Data servers:

  • Sun with 1 Gb connection

  • Channel bonding tests were made, aggregating 2 channels

  • SuperMicro with 10 GbE

  • WNs and GridFTP servers with 1 Gb

  • 10 GbE

reach 1 Gbit/s per data server

Data network based on Gigabit Ethernet

10 GbE uplink to the backbone network




PanDA Dynamic Data Placement (PD2P)

https://twiki.cern.ch/twiki/bin/viewauth/Atlas/PandaDynamicDataPlacement

  • The resource utilization policy evolved during the first year of data taking:

    • “jobs go to data”  ->  “data and jobs move to the available CPU resources”

  • A dynamic data placement approach has also been employed

  • Tier-1 algorithm:

    • Primary copy at the Tier-1, based on planned data placement

    • Secondary copy when data becomes popular, with location based on pledges

  • Tier-2 algorithm (illustrated in the sketch below):

    • Jobs submitted to PanDA trigger PD2P

    • Popular datasets are replicated to the Tier-2 with the highest weight

  • More details:

    https://twiki.cern.ch/twiki/bin/viewauth/Atlas/PandaDynamicDataPlacement
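
An illustrative sketch of the Tier-2 brokering step just described (hypothetical Python; the site names, the eligibility cut and the weight formula are assumptions made for illustration, and the real PanDA broker uses more inputs):

```python
def choose_pd2p_destination(sites):
    """Pick the Tier-2 that receives an extra replica of a popular dataset.

    Hypothetical sketch: only well connected, reliable sites (alpha-like T2Ds)
    are considered, and the one with the highest weight wins.  The real PanDA
    broker uses more inputs (shares, queued jobs, existing replicas, ...).
    """
    eligible = [s for s in sites if s["t2d"] and s["reliability"] > 0.90]
    if not eligible:
        return None
    # Illustrative weight: favour reliable sites with free DATADISK space.
    best = max(eligible, key=lambda s: s["free_tb"] * s["reliability"])
    return best["name"]

# Toy example (numbers are made up, not real site figures):
sites = [
    {"name": "IFIC",  "t2d": True,  "reliability": 0.95, "free_tb": 120},
    {"name": "IFAE",  "t2d": True,  "reliability": 0.93, "free_tb": 80},
    {"name": "OTHER", "t2d": False, "reliability": 0.85, "free_tb": 200},
]
print(choose_pd2p_destination(sites))   # -> IFIC
```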


A Further Step: Caching

  • PD2P makes data movement more dynamic and user-driven, but dataset replication may still present problems for users:

    • Latency before the data can finally be accessed

    • Complexity of the tools

  • It still works on the idea of dataset replication

  • Explore the possibility of accessing files, or parts of files, dynamically without explicit replication (see the sketch below)

    • Xrootd allows federating resources and redirecting the client to the data source

    • Latencies are shortened by caching in the Xrootd server

    • To work correctly, there has to be efficient event data I/O with minimal transactions between application and storage
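
A minimal sketch of what such remote access looks like from the user side, assuming a PyROOT environment; the redirector host, file path and tree name below are placeholders, not real IFIC endpoints:

```python
# Read a file remotely through an Xrootd redirector instead of replicating
# the whole dataset first.  Host, path and tree name are placeholders.
import ROOT  # PyROOT, assumed available in the ATLAS software environment

f = ROOT.TFile.Open("root://redirector.example.org//atlas/user/NTUP.example.root")
if f and not f.IsZombie():
    tree = f.Get("physics")            # read only the branches/events needed
    print("Entries:", tree.GetEntries())
    f.Close()
```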


PD2P in ES-Cloud Last Month

  • ES-Cloud T2Ds = IFIC and IFAE

ES 166

IFIC 49

http://panda.cern.ch/server/pandamon/query?mode=pd2p&action=plots

Es-Cloud Tier 2 sites getting more datasets

A bit more than Tier-1 PIC


User Example


Example: Grid & Physics Analysis

  • Distributed computing and data management tools based on Grid technologies have been used by IFIC physicists to obtain their results

  • As an example, the Boosted Top candidate presented by M. Villaplana


Distributed Analysis in ATLAS

  • ATLAS has a dedicated system for Production and Distributed Analysis (PanDA):

    • Including all ATLAS requirements

    • Highly automated

    • Low manpower

    • Unifies the different Grid environments (EGI-gLite, OSG and EGI-ARC)

    • Monitoring web pages

  • Reference: http://panda.cern.ch


Distributed Analysis in ATLAS (cont.)

  • For ATLAS users, GRID tools have been developed:

    • For Data management

      • Don Quijote 2 (DQ2)

        • Data info: name, files, sites, number,…

        • Download and register files on the Grid, ...

      • ATLAS Metadata Interface (AMI)

        • Data info: events number, availability

        • For simulation: generation parameter, …

      • Data Transfer Request (DaTri)

        • Users can request a set of data (datasets) to create replicas at other sites (subject to restrictions)

    • For Grid jobs

      • PanDa Client

        • Tools from the PanDA team for sending jobs in an easy way for the user

      • Ganga (Gaudi/Athena and Grid alliance)

        • A job management tool for local running, batch systems and the Grid


Tier-2 and Tier-3 Examples from Spain

  • At IFIC the Tier3 resources are being split into two parts:

    • Resources coupled to IFIC Tier2

      • Grid environment

      • Used by IFIC ATLAS users

      • When resources are idle, they are used by the ATLAS community

    • A computer farm to perform interactive analysis (PROOF)

      • Outside the Grid framework

  • Reference:

    • ATL-SOFT-PROC-2011-018


Daily User Activity in Distributed Analysis

  • An example of distributed analysis searching for heavy exotic particles

    • Input files

    • Work flow:


Daily User Activity in Distributed Analysis (cont.)

  • In just two weeks, the 6 users on this analysis sent:

  • 35728 jobs,

  • 64 sites,

  • 1032 jobs ran in T2-ES (2.89%),

  • Input: 815 datasets

  • Output: 1270 datasets

  • 1) A Python script is created where the requirements are defined (see the Ganga sketch below):

    • Application address,

    • Input, Output

    • A replica request to IFIC

    • Splitting

  • 2) The script is executed with Ganga/PanDA

    • The Grid job is submitted

  • 3) When the job finishes successfully, the output files are copied to the IFIC Tier-3

    • Easy access for the user
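
A minimal sketch of the kind of Ganga script described in step 1, to be run inside a ganga session (hypothetical: the job-options file, dataset names, output location and splitting values are placeholders, and the exact GangaAtlas class and attribute names depend on the installed version):

```python
# Hypothetical GangaAtlas job script (run inside a `ganga` session, where the
# Job/Athena/DQ2* classes are already available).  All names are placeholders.
j = Job()

# 1) Application: the user's Athena analysis code
j.application = Athena()
j.application.option_file = 'MyAnalysis_topOptions.py'       # placeholder
j.application.prepare()

# 2) Input and output datasets (DQ2)
j.inputdata = DQ2Dataset()
j.inputdata.dataset = 'data11_7TeV.periodX.physics_Muons.NTUP.example/'  # placeholder
j.outputdata = DQ2OutputDataset()
j.outputdata.datasetname = 'user.someone.boostedtop.v1'      # placeholder
# Ask for the output to end up at the home site (replica request, cf. DaTri)
j.outputdata.location = 'IFIC-LCG2_LOCALGROUPDISK'           # placeholder token

# 3) Splitting and backend: run as many PanDA subjobs on the Grid
j.splitter = DQ2JobSplitter()
j.splitter.numsubjobs = 200

j.backend = Panda()
j.submit()
```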


User Experience

  • PD2P is transparent to users, but eventually they learn that it is in action:

    • “a job was finally sent to a destination that did not originally have the requested dataset”

    • “I checked later and saw that PD2P had copied my original dataset”

    • “I only realized because I had previously used dq2 to check where the dataset was”

  • Another issue is that user datasets are not replicated:

    • “We see a failure because it is not replicating our homemade D3PDs”


Summary


Backup Slides


A Tier-2 in ATLAS

Main activities


Spanish Tier-2 Numbers (more info: Pepe's slides)

  • The Spanish ATLAS Tier-2 (T2-ES) is a federation of 3 Spanish institutions (see Jose's talk):

    • IFAE-Barcelona (25%)

    • UAM- Madrid (25%)

    • IFIC-Valencia (50%, coordinator)

  • The T2-ES represents 5% of the ATLAS Tier-2 resources (out of the 60-70 T2s):

  • References:

    • J. Phys. Conf. Ser. 219 072046


Summary (cont.)


Summary (cont.)




LHCONE for ATLAS


Network

T2 network


For Spanish Sites

TIER-2 DATADISK - SPAIN SITES (IFAE, IFIC, UAM)

http://bourricot.cern.ch/dq2/media/T2s/T2s-DATADISK-SPAINSITES.html


Availability

http://hammercloud.cern.ch/atlas/autoexclusion/?cloud=20&site=

  • ES-Tier2 availability for ATLAS


Availability (cont.)

  • HammerCloud:

    • Distributed Analysis testing system

    • For avoiding jobs go to problematic sites

    • Can exclude sites whose test jobs fail

    • Reference:

      • https://twiki.cern.ch/twiki/bin/view/IT/HammerCloud

  • ATLAS grid tools are improving day to day

    • For instance: automatic jobs for merging output files

    • https://twiki.cern.ch/twiki/bin/viewauth/ATLAS/AnalysisJobOutputMerging

  • ATLAS users can contact the Distributed Analysis Support Team (DAST, [email protected]) about:

    • Problems with their jobs

    • Useful for developers

      • Improve the tools and services


Distributed Analysis in ATLAS (references)

References: http://twiki.cern.ch/twiki/bin/viewauth/Atlas/AtlasComputing


Analysis Efficiency in September: ATLAS Tier-0 + Tier-1s (ANALY* queues)


Analysis Efficiency in September: ATLAS Tier-2s (ANALY* queues)

