LCG Fabric status



  1. LCG Fabric status
  http://lcg-computing-fabric.web.cern.ch/LCG-Computing-Fabric/
  Bernd Panzer-Steindel, CERN/IT

  2. Major achievements during the last 3 months (1)
  • In December 2002 the ALICE-IT Computing Data Challenge reached a sustained dataflow of 280-290 MB/s (for 7 days) from an emulated DAQ system into the HSM system CASTOR (disk and tape), with peak values of 350 MB/s. The goal for the MDC4 in 2002 was 200 MB/s (see the quick volume estimate below).
  • In January ATLAS successfully used 230 testbed nodes for an Online Computing Data Challenge (postponed from October 2002) to test event building and run-control issues.
  • In January and February, Lxplus and Lxbatch were moved to RH 7.3 at the >75% level (very high support load, problems in reaching all users).
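  For scale, a back-of-the-envelope estimate of the total volume written during the sustained run, assuming the decimal convention 1 MB = 10^6 bytes that is usual for such throughput figures (an assumption, not a number from the slides):

  $$ 280\,\mathrm{MB/s} \times 7\ \mathrm{days} \times 86\,400\,\mathrm{s/day} \approx 1.7\times10^{14}\,\mathrm{bytes} \approx 170\,\mathrm{TB} $$

  In other words, the week-long run pushed on the order of 170 TB through CASTOR, comfortably above the 200 MB/s MDC4 target rate.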

  3. Major achievements during the last 3 months (2)
  • In February the migration of the COMPASS data from Objectivity to Oracle was finished (270 TB processed).
  • LCG/EDG consolidation: hardware resource allocation and planning improved.
  • HEP-wide availability of ORACLE licenses.
  (More details and a summary of the major IT activities over the last 6 months can be found in the last IT FOCUS presentation: http://mgt-focus.web.cern.ch/mgt-focus/private/vonruden.ppt)

  4. Important decisions during the last 3 months
  • No more investment in tape infrastructure between 2003 and 2005; the necessary Computing Data Challenges need to take equipment from the production systems.
  • Fixed budget lines for extra resources (cpu, disks) in 2003-2005. COCOTIME takes care of the non-LHC experiments, the PEB of the LHC ones → back to 'mainframe'-type resource sharing within a fixed envelope (http://mgt-focus.web.cern.ch/mgt-focus/private/morsch.pdf).
  • Agreed budget profile for the prototype 2003-2005.
  • Agreed budget profile for the physics part (lxbatch, lxplus) → ~1.2 million per year.

  5. Manpower changes (I)
  • In January/February a reorganization took place in IT: a better mapping of the group structure onto the LCG project, the merge of three groups (ADC, DS, FIO) into two (ADC, FIO), and a clear separation of the Grid Deployment activities into one newly created group (GD).
  • GD, Grid Deployment. Group Leader: Ian Bird
  → testing, certification and integration of GRID middleware, LCG-1 preparation, management of the CERN GRID infrastructure, first-level support
  • FIO, Fabric Infrastructure and Operations. Group Leader: Tony Cass
  → Lxbatch, Lxplus, system administration, computer centre infrastructure, fabric automation, CASTOR service
  • ADC, Architecture and Data Challenges. Group Leader: Bernd Panzer-Steindel
  → Linux expertise, AFS service, CASTOR development, DC organization, openlab
  More details: http://it-mgt-dmm.web.cern.ch/it-mgt-dmm/
  http://it-mgt-dmm.web.cern.ch/it-mgt-dmm/public/Documents/DivisionalMeetings/Pdf/2003-02-28.pdf

  6. Manpower changes (II)
  • Olof Barring will replace Jean-Philippe Baud as the project leader of the CASTOR project. There is now an open post for an additional CASTOR developer.
  • Two LCG persons have left the Fabric area; one will be replaced in April.

  7. Milestones (I)
  • 1.2.1.7 L2M Production Pilot I starts 1/15/03
  → hardware was made available, but not heavily used due to the late definition of the LCG-1 software (milestone okay)
  • 1.2.2.3 L3M Deployment of large monitoring prototype 1/6/03
  → disk server performance monitoring and cpu server exception monitoring have been in place since last year; since end of February both metrics run on all systems. An ORACLE database is used as repository. Very good consolidation between WP4 (DataGrid) and PVSS (commercial); the IT reorganization streamlined the activities, with installation+configuration and monitoring now in one group (FIO) (milestone okay, small delay)
  • 1.2.2.5.6 L3M Basic infrastructure for the Prototype in production 3/10/03
  → 40 nodes done since January, 150 expected in mid March (milestone okay)

  8. Milestones (II)
  • 1.2.5.7 L3M CPU and disk capacity upgrade of Lxbatch 2/24/03
  → delayed until end of March (delays in the acquisition process) (milestone late)
  • 1.2.5.8 L3M Node capacity upgrade of the Prototype 2/24/03
  → delayed until end of April (delays in the acquisition process) (milestone late)
  • 1.2.1.12.5 L3M Lxbatch job scheduler pilot 2/3/03
  → postponed to April due to the late definition of the LCG-1 software; intensive collaboration between the GD and FIO groups has started (GRID → Fabric) (milestone late)

  9. Personnel in the Fabrics area

  Fabric area                          LCG(Q402)  LCG(Q103)  EDG   IT
  System Management and Operation      2.5        2.0        -     14.3
  Development (management automation)  4.5        2.0        3.0   4.3
  Data Storage Management              2.0        2.0        -     10.1
  Fabric+Grid Security                 1.0        2.0        -     1.0
  Grid-Fabric Interface                -          1.0        1.0   0.8

  The focus of the IT personnel is on service. After the IT reorganization this table is currently under review and will change.

  10. Outlook for the next 3 months (I)
  • Move of a major part of the equipment into the vault in Building 513, especially the move of the STK Tape Silos.
  • Delivery of the 28 new 9940B tape drives and removal of the old 9940A tape drives.
  • 1 GByte/s IT Computing Data Challenge in April. The overlap period of old and new tape drives offers an opportunity to test a 'CDR' system at a large scale → 50 cpu servers + 50 disk servers + 50 tape drives coupled through mock data challenge programs and CASTOR → 1 GByte/s into CASTOR onto tape (a rough per-drive figure is sketched below).
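  A quick sanity check of the 1 GByte/s target, assuming the load is spread evenly across the 50 tape drives (the ~30 MB/s native rate of the STK 9940B used for comparison is a commonly quoted figure, not a number from the slides):

  $$ \frac{1000\,\mathrm{MB/s}}{50\ \mathrm{tape\ drives}} = 20\,\mathrm{MB/s\ per\ drive} $$

  which leaves headroom below the drives' nominal native rate, so the aggregate target looks achievable if CASTOR can keep all drives streaming.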

  11. Outlook for the next 3 months (II)
  • Delivery and installation of this year's new resources (cpu and disk servers) in April.
  • New cost calculations for the CERN T0/T1 centre by the beginning of April.
  • Integration of Lxbatch into the LCG-1 environment.
  • Multiple smaller IT and ALICE-IT Computing Data Challenges.
