
Xrootd @ CC-IN2P3

Jean-Yves Nief, CC-IN2P3

HEPiX, SLAC

October 11th – 13th, 2005


Overview.

  • CC-IN2P3: Tier A for BaBar since 2001.

  • Xrootd deployed primarily for BaBar (2003).

  • Smooth transition from the Objectivity architecture:

    • The 2 systems are running on the same servers.

  • Hybrid storage (disks + tapes):

    • Tapes: master copy of the files.

    • Disks: temporary cache.

  • Interfaced with the Mass Storage System (HPSS) using RFIO in Lyon.
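As an illustration of how such an interface can work, here is a minimal sketch of a staging hook that copies one file from HPSS into the local disk cache over RFIO with the rfcp command. The host name, paths and the helper program itself are assumptions for illustration, not the actual CC-IN2P3 implementation.

```cpp
// Hypothetical staging helper: pull one file from HPSS into the local disk
// cache using the RFIO copy command (rfcp).  Illustration of the dynamic
// staging idea only -- not the script actually used at CC-IN2P3.
#include <cstdio>
#include <cstdlib>
#include <string>

int main(int argc, char *argv[])
{
   if (argc != 2) {
      std::fprintf(stderr, "usage: %s <logical-file-name>\n", argv[0]);
      return 1;
   }
   const std::string lfn = argv[1];                   // e.g. "babar/T1.root"

   // Placeholder locations: HPSS namespace and local Xrootd cache directory.
   const std::string src = "hpss.example.org:/hpss/store/" + lfn;
   const std::string dst = "/xrootd/cache/" + lfn;

   // rfcp copies the file over RFIO; a non-zero exit code means the stage failed.
   const std::string cmd = "rfcp " + src + " " + dst;
   return std::system(cmd.c_str()) == 0 ? 0 : 1;
}
```

The idea is that such a copy is triggered automatically whenever a requested file is not in the disk cache, which is the dynamic staging shown on the next slide.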



Lyon architecture.

[Architecture diagram] Data flow for a client requesting a file (T1.root):

  • (1) + (2) load balancing: the client asks the 2 master servers (Xrootd / Objy) for the file ("T1.root ?") and is redirected to a slave data server (Xrootd / Objy).

  • (4) + (5) dynamic staging: if the file is not on disk, it is staged from HPSS.

  • (6) random access: the client reads the file directly from the data server.

  • Disk layer: 20 data servers, 70 TB of disk cache; data volumes: 140 TB of ROOT files, 180 TB of Objectivity files.
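From the client's point of view this is all transparent: the application only names the master (redirector) and the logical path, and Xrootd takes care of load balancing, redirection and staging from HPSS. Below is a minimal ROOT macro sketch of such an access; the host name, file path and tree name are placeholders, not the actual CC-IN2P3 endpoints.

```cpp
// Minimal client-side sketch (hypothetical host, path and tree name).
// TFile::Open() with a root:// URL contacts the Xrootd master, which
// redirects the client to a data server; if the file is only on tape it
// is staged from HPSS first, then read with random access over the network.
#include "TFile.h"
#include "TTree.h"
#include "TError.h"

void read_remote()
{
   // Logical path only -- the physical location is resolved by Xrootd.
   TFile *f = TFile::Open("root://redirector.example.org//store/babar/T1.root");
   if (!f || f->IsZombie()) {
      Error("read_remote", "could not open the remote file");
      return;
   }

   TTree *t = nullptr;
   f->GetObject("T1", t);        // "T1" is a placeholder tree name
   if (t) t->Print();

   f->Close();
   delete f;
}
```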


Xrootd for other experiments.

  • Master copy of the data in HPSS for most of the experiments.

  • Transparent access to these data.

  • Automatic management of the cache resource.

  • Used on a daily basis within the ROOT framework (up to 1.5 TB of disk cache used) by:

    • D0 (HEP).

    • AMS (astroparticle).

    • INDRA (nuclear physics).



Assessment …

  • Very happy with Xrootd!

  • It fits our needs really well:

    • Random access between the client and data server.

    • Sequential access between MSS and servers.

  • Lots of freedom in the configuration of the service.

  • Administration of servers very easy (fault tolerance).

  • No maintenance needed even under heavy usage (more than 600 clients in parallel).

  • Scalability: very good prospects.



… and outlook.

  • Going to deploy it for Alice and also CMS (A. Trunov):

    • Xrootd / SRM interface.

  • Usage outside the ROOT framework:

    • I/O for some projects (e.g. astrophysics) can be very stressful compared to regular HEP applications.

    • Needs transparent handling of the MSS.

    • Using Xrootd POSIX client APIs for reading and writing.
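A minimal sketch of that last point is shown below: plain POSIX calls on a root:// URL. It assumes the Xrootd POSIX preload library is active (e.g. via LD_PRELOAD) so that open()/read()/write() are intercepted; the URL is a placeholder.

```cpp
// POSIX-style read of a remote file served by Xrootd (hypothetical URL).
// Assumes the Xrootd POSIX preload library intercepts the standard calls,
// so no ROOT and no Xrootd-specific API appear in the application code.
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main()
{
   const char *url = "root://redirector.example.org//store/astro/events.dat";

   int fd = open(url, O_RDONLY);            // intercepted -> remote open
   if (fd < 0) {
      std::perror("open");
      return 1;
   }

   char buf[4096];
   ssize_t n;
   while ((n = read(fd, buf, sizeof(buf))) > 0) {
      // process n bytes of data here ...
   }

   close(fd);
   return 0;
}
```

Writing would work the same way, with open(url, O_WRONLY | O_CREAT, mode) followed by write().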



Xrootd vs dCache.

[Figure: I/O profile of an Orca client (read offset vs. time).]

  • Doing comparison tests between the 2 protocols:

    • I/Os taken from a CMS application (Orca).

    • Pure I/O (random access).

    • Stress tests with up to 100 clients accessing 100 files (a sketch of such a test is given below).

  • Sorry! Preliminary results cannot be revealed…

  • To be continued…

STRONGLY ENCOURAGING PEOPLE TO DO SOME TESTING!
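For illustration, here is a sketch of what such a random-access stress test could look like on the client side. It reuses the POSIX-style access shown earlier (Xrootd POSIX preload library assumed); file names, offsets and read sizes are placeholders, not the parameters of the Orca-based test.

```cpp
// Illustrative random-access stress test (not the actual Orca-based test).
// Each "client" opens one file and issues reads at random offsets, mimicking
// the offset-vs-time I/O profile shown above.  Assumes root:// paths are
// usable through plain POSIX calls (Xrootd POSIX preload library).
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <random>
#include <string>
#include <thread>
#include <vector>

static void stress_one_file(int id)
{
   const std::string url = "root://redirector.example.org//store/test/file"
                           + std::to_string(id) + ".root";   // placeholder name

   int fd = open(url.c_str(), O_RDONLY);
   if (fd < 0) { std::perror("open"); return; }

   const off_t size = lseek(fd, 0, SEEK_END);      // file size -> offset range
   if (size > 0) {
      std::mt19937_64 rng(id);                     // per-client random offsets
      std::uniform_int_distribution<long long> pick(0, static_cast<long long>(size) - 1);
      char buf[64 * 1024];
      for (int i = 0; i < 1000; ++i)
         pread(fd, buf, sizeof(buf), static_cast<off_t>(pick(rng)));  // random read
   }
   close(fd);
}

int main()
{
   const int nclients = 100;                       // "100 clients accessing 100 files"
   std::vector<std::thread> clients;
   for (int i = 0; i < nclients; ++i)
      clients.emplace_back(stress_one_file, i);
   for (auto &c : clients) c.join();
   return 0;
}
```

In practice the clients would more likely be separate processes on different machines; threads are used here only to keep the sketch self-contained.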


Issues for the LHC era.

  • Prospects for CC-IN2P3:

    • 4 Pbytes of disk space foreseen in 2008.

  • Hundreds of disk servers needed !

  • Thousands of clients.

  • Issues:

    • The choice of protocol is not cost-neutral (€, $, £, CHF).

    • Need to be able to cluster hundreds of servers.

  • The second point (clustering hundreds of servers) is a key issue and has to be addressed!

  • Xrootd is able to answer it.


