Computing development projects: GRIDS
M. Turala
The Henryk Niewodniczanski Institute of Nuclear Physics PAN and the Academic Computing Center Cyfronet AGH, Kraków

Warszawa, 25 February 2005


Outline
- computing requirements of the future HEP experiments
- HEP worldwide computing models and related grid projects
- Polish computing projects: PIONIER and GRIDS
- Polish participation in the LHC Computing Grid (LCG) project

Warszawa, 25 February 2005


LHC data rate and filtering

Data preselection in real time:
- many different physics processes
- several levels of filtering
- high efficiency for events of interest
- total reduction factor of about 10⁷

40 MHz (1000 TB/sec equivalent)
Level 1 – Special Hardware
75 kHz (75 GB/sec), fully digitised
Level 2 – Embedded Processors/Farm
5 kHz (5 GB/sec)
Level 3 – Farm of commodity CPUs
100 Hz (100 MB/sec)
Data Recording & Offline Analysis
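As a quick cross-check of the chain above, here is a minimal back-of-the-envelope sketch in Python; the rates are copied from the slide, and relating the quoted 10⁷ reduction to ~10⁹ interactions/s versus ~100 Hz recorded uses the nominal rate given on the next slide:

```python
# Back-of-the-envelope check of the trigger/filter chain quoted above.
# Each entry: (selection stage, event rate in Hz, data rate in bytes/s).
levels = [
    ("Collisions (40 MHz)",          40e6, 1000e12),  # ~1000 TB/sec equivalent
    ("Level 1 (special hardware)",   75e3,   75e9),   # 75 kHz, 75 GB/sec
    ("Level 2 (processor farm)",      5e3,    5e9),   # 5 kHz, 5 GB/sec
    ("Level 3 (commodity CPU farm)",  1e2,  100e6),   # 100 Hz, 100 MB/sec
]

for stage, rate_hz, bytes_per_s in levels:
    print(f"{stage:32s} {rate_hz:12.0f} Hz   {bytes_per_s/1e9:12.3f} GB/s")

# The "total reduction factor of about 10^7" compares the ~10^9
# interactions/s (nominal rate, next slide) with the ~100 Hz recorded.
print("event reduction factor: %.0e" % (1e9 / 1e2))
```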

Warszawa, 25 February 2005


Data rate for LHC p-p events

Typical parameters:
- Nominal rate: 10⁹ events/s (luminosity 10³⁴ cm⁻²s⁻¹, collision rate 40 MHz)
- Registration rate: ~100 events/s (270 events/s)
- Event size: ~1 MByte/event (2 MByte/event)
- Running time: ~10⁷ s/year
- Raw data volume: ~2 PetaByte/year/experiment
- Monte Carlo: ~1 PetaByte/year/experiment

The rate and volume of HEP data doubles every 12 months!

Already today the BaBar, Belle, CDF and D0 experiments produce 1 TB/day.
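The yearly raw-data volume follows directly from these parameters; a minimal sketch of the arithmetic (using the ~100 events/s and ~1 MByte/event figures; the larger quoted values give the ~2 PB/year):

```python
# Yearly raw-data volume per experiment, from the parameters above.
rate_hz      = 100    # registered events per second (~100, up to 270)
event_bytes  = 1e6    # ~1 MByte per event (up to 2 MByte)
seconds_year = 1e7    # effective running time per year

raw_pb_per_year = rate_hz * event_bytes * seconds_year / 1e15
print(f"raw data: ~{raw_pb_per_year:.0f} PB/year/experiment")  # ~1 PB here;
# with 2 MByte/event (or ~270 events/s) this reaches the quoted ~2 PB/year.
```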

Warszawa, 25 February 2005


Data analysis scheme (one experiment)

[Diagram: the data flow for one experiment, from the Detector through the Event Filter (selection & reconstruction) to Raw Data (~1 PB/year), then Event Reconstruction into Event Summary Data and Processed Data, alongside Event Simulation, Batch Physics Analysis and Interactive Data Analysis of analysis objects by thousands of scientists. The boxes are annotated with yearly volumes of ~500 TB and ~200 TB, data rates of 0.1 to 1 GB/sec, 1-100 GB/sec, ~100 MB/sec and ~200 MB/sec, and CPU capacities of 35K, 250K and 350K SI95.]

from M. Delfino

Warszawa, 25 February 2005


Multi-tier model of data analysis

Warszawa, 25 February 2005


LHC computing model (Cloud)

[Diagram: the LHC computing model as a cloud, with CERN ("The LHC Computing Centre") at the centre, Tier 1 centres in the USA (Brookhaven, FermiLab), the UK, France, Italy, Germany and the Netherlands, Tier 2 centres, and laboratories (Lab a, Lab b, Lab c, Lab m), universities (Uni a, Uni b, Uni n, Uni x, Uni y), physics departments and desktops at the edges.]
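For illustration only, a minimal sketch of how such a tiered topology could be represented in code; the centre names follow the diagram, but the structure is a simplification and not the actual LCG site configuration:

```python
# Illustrative representation of the multi-tier (cloud) computing model:
# Tier 0 at CERN fans out to national/regional Tier 1 centres, which in
# turn serve Tier 2 centres, university groups and desktops.
from dataclasses import dataclass, field

@dataclass
class Centre:
    name: str
    tier: int
    children: list = field(default_factory=list)

    def add(self, child: "Centre") -> "Centre":
        self.children.append(child)
        return child

cern = Centre("CERN (The LHC Computing Centre)", tier=0)
for name in ["Brookhaven (USA)", "FermiLab (USA)", "UK", "France",
             "Italy", "Germany", "NL"]:
    tier1 = cern.add(Centre(name, tier=1))
    # e.g. Lab a, Uni n, with physics departments and desktops behind them
    tier1.add(Centre("Lab / Uni Tier 2 (physics departments, desktops)", tier=2))

def show(centre: Centre, indent: int = 0) -> None:
    print("  " * indent + f"Tier {centre.tier}: {centre.name}")
    for child in centre.children:
        show(child, indent + 1)

show(cern)
```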


ICFA Network Task Force (1998): required network bandwidth (Mbps)

100–1000 X Bandwidth Increase Foreseen for 1998-2005

See the ICFA-NTF Requirements Report:

http://l3www.cern.ch/~newman/icfareq98.html


LHC computing – specifications for Tier0 and Tier1

CERN (Tier 0):

                     ALICE   ATLAS    CMS   LHCb
  CPU (kSI95)          824     690    820    225
  Disk Pool (TB)       535     410   1143    330
  Aut. Tape (TB)      3200    8959   1540    912
  Shelf Tape (TB)        -       -   2632    310
  Tape I/O (MB/s)     1200     800    800    400
  Cost 2005-7 (MCHF)  18.1    23.7   23.1    7.0

Tier 1:

                     ALICE   ATLAS    CMS   LHCb
  CPU (kSI95)          234     209    417    140
  Disk Pool (TB)       273     360    943    150
  Aut. Tape (TB)       400    1839    590    262
  Shelf Tape (TB)        -       -    683     55
  Tape I/O (MB/s)     1200     800    800    400
  # Tier 1               4       6      5      5
  Cost av (MCHF)       7.1     8.5   13.6    4.0
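To get a feel for the combined scale behind these per-experiment requests, a minimal sketch that sums a few Tier 0 (CERN) columns of the table; the totals themselves are not quoted on the slide:

```python
# Sum the per-experiment Tier 0 (CERN) requests from the table above.
tier0 = {
    #            CPU (kSI95)  Disk (TB)  Cost 2005-7 (MCHF)
    "ALICE": dict(cpu=824,  disk=535,  cost=18.1),
    "ATLAS": dict(cpu=690,  disk=410,  cost=23.7),
    "CMS":   dict(cpu=820,  disk=1143, cost=23.1),
    "LHCb":  dict(cpu=225,  disk=330,  cost=7.0),
}

totals = {k: sum(v[k] for v in tier0.values()) for k in ("cpu", "disk", "cost")}
print(f"Tier 0 total: {totals['cpu']} kSI95, "
      f"{totals['disk']} TB disk, {totals['cost']:.1f} MCHF (2005-7)")
```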

Warszawa, 25 February 2005


Development of Grid projects

Warszawa, 25 February 2005


EU FP5 Grid Projects 2000-2004

(EU Funding: 58 M€)

from M. Lemke at CGW04

  • Infrastructure

    • DataTag

  • Computing

    • EuroGrid, DataGrid, Damien

  • Tools and Middleware

    • GridLab, GRIP

  • Applications

    • EGSO, CrossGrid, BioGrid, FlowGrid, Moses, COG, GEMSS, Grace, Mammogrid, OpenMolGrid, Selene

  • P2P / ASP / Webservices

    • P2People, ASP-BP, GRIA, MMAPS, GRASP, GRIP, WEBSI

  • Clustering

    • GridStart



Strong Polish Participation in FP5 Grid Research Projects

2 Polish-led Projects (out of 12)

  • CrossGrid

    • CYFRONET Cracow

    • ICM Warsaw

    • PSNC Poznan

    • INP Cracow

    • INS Warsaw

  • GridLab

    • PSNC Poznan

      Significant share of funding to Poland versus EU25

  • FP5 IST Grid Research Funding: 9.96 %

  • FP5 wider IST Grid Project Funding: 5 %

  • GDP: 3.8 %

  • Population: 8.8 %

from M. Lemke at CGW04

CROSSGRID partners


CrossGrid testbeds

16 sites in 10 countries, about 200 processors and 4 TB disk storage

Middleware: from EDG 1.2 to LCG-2.3.0


  • Testbeds for:
    • development
    • production
    • testing
    • tutorials
    • external users


Last week CrossGrid successfully concluded its final review.

Warszawa, 25 February 2005


CrossGrid applications

  • Medical: blood flow simulation, supporting vascular surgeons in the treatment of arteriosclerosis
  • Flood prediction: flood prediction and simulation based on weather forecasts and geographical data
  • Meteo / Pollution: large-scale weather forecasting combined with air pollution modeling (for various pollutants)
  • Physics: distributed data mining in high energy physics, supporting the LHC collider experiments at CERN

Warszawa, 25 February 2005


Grid for real time data filtering


Studies on a possible use of remote computing farms for event filtering; in 2004, beam test data were shipped to Cracow and back to CERN in real time.
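A rough feasibility estimate for this kind of remote filtering might look like the sketch below; the link bandwidth, round-trip time, event size and event rate are illustrative assumptions, not figures from the 2004 beam test:

```python
# Rough feasibility check for real-time event filtering on a remote farm.
# All numbers below are illustrative assumptions, not measured 2004 values.
link_bandwidth_bps = 1e9      # assumed 1 Gb/s CERN <-> Cracow path
round_trip_s       = 0.05     # assumed ~50 ms round-trip time
event_size_bytes   = 1e6      # ~1 MB per event (as on the earlier slides)
event_rate_hz      = 100      # events/s sent out for remote filtering

# Throughput: raw events are shipped out, only accept/reject decisions
# (negligible in size) come back, so the outbound link is the constraint.
required_bps = event_rate_hz * event_size_bytes * 8
print(f"required bandwidth: {required_bps/1e9:.2f} Gb/s "
      f"(assumed link: {link_bandwidth_bps/1e9:.1f} Gb/s)")
print("keeps up with the event rate:", required_bps < link_bandwidth_bps)

# Latency: each event's decision arrives after transfer time plus one
# round trip, which only adds a fixed delay to the filtering pipeline.
turnaround_s = round_trip_s + event_size_bytes * 8 / link_bandwidth_bps
print(f"per-event turnaround: ~{turnaround_s*1e3:.0f} ms")
```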

Warszawa, 25 February 2005


LHC Computing Grid project-LCG

  • Objectives
    - design, prototyping and implementation of the computing environment for the LHC experiments (MC, reconstruction and data analysis):
      - infrastructure
      - middleware
      - operations (VO)
  • Schedule
    - phase 1 (2002-2005; ~50 MCHF): R&D and prototyping (up to 30% of the final size)
    - phase 2 (2006-2008): preparation of a Technical Design Report, Memoranda of Understanding, deployment (2007)
  • Coordination
    - Grid Deployment Board: representatives of the world HEP community, supervising LCG grid deployment and testing

Warszawa, 25 February 2005



From F. Gagliardi at CGW04

Computing Resources – Dec. 2004

Three Polish institutions involved

- ACC Cyfronet Cracow

- ICM Warsaw

- PSNC Poznan

Polish investment in the local infrastructure

EGEE supporting the operations


Polish Participation in LCG project

  • Polish Tier2

  • INP/ ACC Cyfronet Cracow

    • resources (plans for 2004)

      • 128 processors (50%),

      • storage: disk ~ 10TB, tape (UniTree) ~10 TB (?)

    • manpower

      • engineers/ physicists ~ 1 FTE + 2 FTE (EGEE)

    • ATLAS data challenges – qualified in 2002

  • INS/ ICM Warsaw

    • resources (plans for 2004)

      • 128 processors (50%),

      • storage: disk ~ 10TB, tape ~ 10 TB

    • manpower

      • engineers/ physicists ~ 1 FTE + 2 FTE (EGEE)

  • Connected to LCG-1 world-wide testbed in September 2003

Warszawa, 25 February 2005


Polish networking - PIONIER

5200 km fibres installed, connecting 21 MAN centres

[Map: the PIONIER fibre network, with international links towards GEANT, Stockholm and Prague.]

  • Good connectivity of HEP centres to MANs:
    - IFJ PAN to MAN Cracow: 100 Mb/s -> 1 Gb/s
    - INS to MAN Warsaw: 155 Mb/s

Multi-lambda connections planned
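To put these access-link speeds in perspective, here is a minimal sketch of how long moving a 1 TB dataset (roughly one day of BaBar/Belle/CDF/D0 output, as quoted earlier) would take over each link, assuming the full nominal bandwidth and no protocol overhead:

```python
# Time to transfer a 1 TB dataset over the access links quoted above,
# assuming the full nominal bandwidth is available (no protocol overhead).
dataset_bytes = 1e12   # 1 TB, roughly one day of BaBar/Belle/CDF/D0 output

links_mbps = {
    "IFJ PAN - MAN Cracow (old)": 100,
    "IFJ PAN - MAN Cracow (new)": 1000,
    "INS - MAN Warsaw":           155,
}

for name, mbps in links_mbps.items():
    seconds = dataset_bytes * 8 / (mbps * 1e6)
    print(f"{name:30s} {seconds/3600:5.1f} hours per TB")
```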

from the report of PSNC to ICFA ICIC, Feb. 2004 (M. Przybylski)

Warszawa, 25 February 2005


PC Linux cluster at ACC Cyfronet
CrossGrid – LCG-1

- 4 nodes (1U): 2x PIII 1 GHz, 512 MB RAM, 40 GB HDD, 2x Fast Ethernet 100 Mb/s
- 23 nodes (1U): 2x Xeon 2.4 GHz, 1 GB RAM, 40 GB HDD, Ethernet 100 Mb/s + 1 Gb/s
- HP ProCurve switch: 40 ports 100 Mb/s, 1 port 1 Gb/s (uplink)
- Monitoring: 1U KVM unit (keyboard, touch pad, LCD)

Last year 40 nodes with IA-64 processors were added; for 2005, investments in 140 Linux 32-bit processors and 20-40 TB of disk storage are planned.

Warszawa, 25 February 2005


ACC Cyfronet in LCG-1

Sept. 2003: Sites taking part in the initial LCG service (red dots)

Small Test clusters at 14 institutions;

Grid middleware package (mainly parts of EDG and VDT) → a global Grid testbed

[Map highlights: Kraków, Poland and Karlsruhe, Germany.]

from K-P. Mickel at CGW03

This is the very first really running global computing and data Grid, covering participants on three continents.

Warszawa, 25 February 2005


Linux Cluster at INS/ ICM

CrossGrid – EGEE - LCG

PRESENT STATE

  • cluster at the Warsaw University (Physics Department)
  • Worker Nodes: 10 CPUs (Athlon 1.7 GHz)
  • Storage Element: ~0.5 TB
  • Network: 155 Mb/s
  • LCG 2.3.0, registered in the LCG Test Zone

NEAR FUTURE (to be ready in June 2005)

  • cluster at the Warsaw University (ICM)
  • Worker Nodes: 100-180 CPUs (64-bit)
  • Storage Element: ~9 TB
  • Network: 1 Gb/s (PIONIER)

from K. Nawrocki

Warszawa, 25 February 2005


PC Linux Cluster at ACC Cyfronet
CrossGrid – EGEE – LCG-1

[Chart: usage statistics for 2004 of the LCG cluster at ACC Cyfronet.]

Warszawa, 25 February 2005


ATLAS DC Status

~1350 kSI2k·months, ~120,000 jobs, ~10 million events fully simulated (Geant4), ~27 TB of data

  • DC2 Phase I started at the beginning of July and is finishing now
  • 3 Grids were used:
    • LCG (~70 sites, up to 7600 CPUs) – 41% of the production
    • NorduGrid (22 sites, ~3280 CPUs (800), ~14 TB) – 30%
    • Grid3 (28 sites, ~2000 CPUs) – 29%

from L. Robertson at C-RRB, Oct. 2004

All 3 Grids have proven to be usable for real production.



Polish LHC Tier2 - future

„In response to the LCG MoU draft document, and using data from the PASTA report, plans for the Polish Tier2 infrastructure have been prepared – they are summarized in the Table.

It is planned that in the next few years the LCG resources will grow incrementally, mainly due to local investments. A step is expected around 2007, when the matter of LHC computing funding should be finally resolved.”

from the report to LCG GDB, 2004

Warszawa, 25 February 2005


Thank you for your attention

Warszawa, 25 February 2005

