
SC2002 Bandwidth Challenge and Data Challenge Application


Presentation Transcript


  1. SC2002 Bandwidth Challenge and Data Challenge Application. KEK Computing Research Center, Y. Morita. The 3rd PRAGMA Workshop.

  2. The Standard Model
     Fermions (matter), in three generations:
     • Quarks: up, charm, top; down, strange, bottom
     • Leptons: electron, muon, tau; e neutrino, mu neutrino, tau neutrino
     Gauge particles (force carriers):
     • Strong force: gluon
     • Electromagnetic force: photon
     • Weak force: W boson, Z boson
     Particle related to the Higgs field (not yet discovered): Higgs

  3. The Large Hadron Collider at CERN: detector for the LHCb experiment, detector for the ALICE experiment

  4. ATLAS Detector
     • ~1850 physicists from 33 countries
     • Dimensions: ~20 x 20 x 40 m
     • Weight: ~7000 tons
     • Readout channels: ~1.5 x 10^8

  5. Theory vs Experiment
     • To discover "new physics" in the experiments, the signal must be separated from the background processes ("well-known physics")
     • The event simulator plays a crucial role in this comparison
     (Diagram: on the theory side, Event Generator -> particles -> Event Simulator -> Simulated Raw Data; on the experiment side, Raw Data; both sides go through Reconstruction and Analysis and the results are compared)

  6. Physics Analysis Challenges: "Finding a needle in a haystack"
     • 10^9 collisions/second; the online filter keeps ~100 events/second on storage -> ~10^9 events/year
     • 1 MByte/event -> several PetaBytes/year
     • Event reconstruction: ~300 SPECint95*sec/event -> ~200K SPECint95 needed for reconstruction
     • High-throughput, data-intensive computing (a quick arithmetic check of these figures follows below)
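
A quick cross-check of the numbers on slide 6, done as a small C++ program. The ~10^7 seconds of effective data taking per year is an assumed typical value; everything else is taken from the slide, so treat the output as a back-of-envelope estimate only.

```cpp
// Back-of-envelope check of the slide's data-volume and CPU estimates.
// The "seconds of data taking per year" (~10^7 s) is an assumed value.
#include <cstdio>

int main() {
    const double events_per_sec       = 100.0;  // after the online filter
    const double seconds_per_year     = 1.0e7;  // assumed effective run time
    const double mbytes_per_event     = 1.0;    // raw event size ~1 MB
    const double specint_sec_per_event = 300.0; // reconstruction cost per event

    const double events_per_year = events_per_sec * seconds_per_year;        // ~10^9
    const double pbytes_per_year = events_per_year * mbytes_per_event / 1e9; // MB -> PB
    const double specint_needed  = events_per_sec * specint_sec_per_event;   // real-time minimum

    std::printf("events/year        : %.1e\n", events_per_year);
    std::printf("PB/year (raw only) : %.1f\n", pbytes_per_year);
    // The slide quotes ~200K SPECint95, presumably including reprocessing
    // passes and headroom on top of this real-time minimum.
    std::printf("SPECint95 (real time minimum): %.0fK\n", specint_needed / 1e3);
    return 0;
}
```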

  7. HEP data reconstruction / analysis: the event data model
     • Event RAW (~1 PB/year, 1 MB/event): calorimeter-1/2 digits, tracker-1/2 digits, magnet-1 digits
     • Event REC, reconstructed data (~1 PB): calorimeter-1/2 energy, tracker-1/2 position, magnet-1 field, produced through the calorimeter, tracker and magnet D/A converters
     • Event ESD, event summary data (~300 TB/year, 100 KB/event): cluster-1..3 and track-1..5 from the cluster and tracker reconstruction
     • Event AOD, analysis object data (~10 TB/year, 10 KB/event): electron1, electron2, photon1, jet-1, jet-2, Et miss from the electron, jet and Et-miss id algorithms
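
To make the RAW/ESD/AOD hierarchy concrete, here is a minimal sketch of the tiers as data structures. The class and field names are illustrative stand-ins, not the actual ATLAS/Athena event model classes.

```cpp
// Illustrative only (not the real ATLAS classes): one struct per data tier,
// showing the RAW -> ESD -> AOD reduction and the rough per-event sizes.
#include <cstdint>
#include <vector>

struct RawEvent {                          // ~1 MB/event, ~1 PB/year
    std::vector<uint16_t> calo_digits;     // raw calorimeter ADC counts
    std::vector<uint16_t> tracker_digits;  // raw tracker hits
};

struct ESDEvent {                          // ~100 KB/event, event summary data
    struct Cluster { double e, eta, phi; };
    struct Track   { double pt, eta, phi, d0, z0; };
    std::vector<Cluster> clusters;         // output of cluster reconstruction
    std::vector<Track>   tracks;           // output of track reconstruction
};

struct AODEvent {                          // ~10 KB/event, analysis object data
    struct Particle { double pt, eta, phi; int id; };
    std::vector<Particle> electrons, photons, jets;
    double missing_et;                     // from the Et-miss id algorithm
};

// Each step keeps only the higher-level, smaller representation:
//   ESDEvent reconstruct(const RawEvent&);  // roughly x10 size reduction
//   AODEvent summarize(const ESDEvent&);    // another roughly x10 reduction

int main() {
    AODEvent ev{};
    ev.missing_et = 42.0;                  // placeholder value
    return 0;
}
```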

  8. Multi-Tier Regional Center Model for LHC (24 March 2000, WW A/C Panel, P. Capiluppi)
     • 1 TIPS = 25,000 SpecInt95; one PC (1999) = ~15 SpecInt95
     • Online System (~PBytes/sec): one bunch crossing per 25 ns, 100 triggers per second, event size ~1 MByte; ~100 MBytes/sec to the Offline Farm (~20 TIPS) and to Tier 0
     • Tier 0: CERN Computer Center (>20 TIPS, HPSS)
     • Tier 1: US, France, Germany, Italy Regional Centers (~4 TIPS each, HPSS), connected to Tier 0 at ~622 Mbits/sec or by air freight (see the estimate below); ~2.4 Gbits/sec between centers
     • Tier 2: Tier 2 Centers (~1 TIPS each), ~622 Mbits/sec
     • Tier 3: Institute servers (~0.25 TIPS) with a physics data cache, 100-1000 Mbits/sec; physicists work on analysis "channels", each institute has ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server
     • Tier 4: Workstations
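
The "~622 Mbits/sec or Air Freight" label is worth a quick estimate. Assuming a fully dedicated OC-12 link (an assumption made purely for illustration), moving one year of raw data electronically takes months, which is why shipping media remained an option.

```cpp
// Rough illustration: time to move one year of raw data (~1 PB) from
// Tier 0 to a Tier 1 center over a dedicated 622 Mbit/s link.
#include <cstdio>

int main() {
    const double petabyte_bits = 1.0e15 * 8.0;  // ~1 PB expressed in bits
    const double link_bps      = 622.0e6;       // OC-12, assumed fully dedicated
    const double seconds       = petabyte_bits / link_bps;
    std::printf("transfer time: %.0f days\n", seconds / 86400.0);  // ~150 days
    return 0;
}
```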

  9. LCG: LHC Computing Grid Project
     • Deployment of the computing and software model for the 4 LHC experiments
     • WG1: Choice of Security Middleware and Tools
     • WG2: VO management and resources
     • WG3: Registration, Authentication, Authorization and Security
     • WG4: Security Operational Procedures
     • LCG-1 estimates: ~1000 users; user registration peak rate ~25 users/day in 2003 Q2

  10. ATLAS Data Challenges
     • 2002, Data Challenge 1: a "~0.1%" test; Regional Center test + "High Level Trigger studies"
       Apr-Aug, Phase 1: event full simulation (Fortran)
       Oct-Jan, Phase 2: event pile-up (Fortran)
       3 x 10^6 events, ~25 TB
     • 2003-04, Data Challenge 2: a "~10%" test; full-chain test of the C++ software with Grid, validation of the LCG computing model
     • A staged validation process for the computing and software models with increasing magnitude

  11. ATLAS Software
     • Technical Design Proposal done with a Fortran program; now in transition to a new-generation C++ program
     • Event Generator: Fortran programs wrapped with C++
     • Event Simulator: FADS/Goofy framework with Geant4
     • Event Reconstruction/Analysis: Athena/Gaudi framework; FADS/Goofy also works as a module of Athena/Gaudi
     • Writing the full detector simulation, reconstruction and analysis modules is an ongoing world-wide software integration effort
     • Validation of this integration is one of the major goals of the ATLAS Data Challenge

  12. About Gfarm (Grid Data Farm)
     • Middleware project between AIST, KEK and Titech
     • Parallel file system taking advantage of the parallel nature of event-oriented data and statistical analysis
     • "Owner computes" rule: a job runs on the node where the data resides (see the sketch below)
     • Job history and file segment locations are managed by a metadatabase
     • File fragments are copied for backup and load balancing
     • The user sees the file fragments through a single-image logical file URL
     • http://datafarm.apgrid.org/
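
A minimal sketch of the "owner computes" rule, using a made-up fragment-to-node map in place of the real Gfarm metadatabase API; all names here are illustrative.

```cpp
// Illustrative sketch of "owner computes" scheduling: each file fragment
// of a logical Gfarm-style file is processed on the node that stores it.
// The metadata map below stands in for the real Gfarm metadatabase.
#include <cstdio>
#include <map>
#include <string>

// fragment index of a logical file -> node that holds that fragment
using FragmentMap = std::map<int, std::string>;

void schedule(const std::string& logical_url, const FragmentMap& meta) {
    for (const auto& [fragment, node] : meta) {
        // A real system would submit a job to 'node'; here we just log the decision.
        std::printf("run job on %-8s for %s fragment %d\n",
                    node.c_str(), logical_url.c_str(), fragment);
    }
}

int main() {
    // Hypothetical logical file split into 4 fragments over 2 nodes.
    FragmentMap meta = {{0, "node00"}, {1, "node00"}, {2, "node01"}, {3, "node01"}};
    schedule("gfarm:/atlas/sim/run001", meta);
    return 0;
}
```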

  13. CPU vs Storage in high-I/O jobs (a rough scaling model follows below)
     Central storage model: user jobs run on computing nodes and reach a storage element through a high-speed switch
     • Simple management of the system and files
     • The network and switches become the bottleneck in high-I/O, multi-user applications
     • Does not scale to a system of more than a few hundred nodes
     Local storage model: each computing node is also a storage node, and jobs use independent local I/O
     • Independent local I/O on each node
     • Scales to more than a thousand nodes
     • System and file management become complex
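
A rough scaling model of the two designs; the per-node demand, switch limit and local-disk rate below are illustrative assumptions, not measurements from the slide.

```cpp
// Rough scaling model: aggregate I/O of N nodes through a fixed central
// switch/storage limit versus the sum of local disks on each node.
// All bandwidth figures are illustrative assumptions.
#include <cstdio>

int main() {
    const double per_node_io_MBps = 20.0;    // assumed per-node read rate needed
    const double switch_MBps      = 1000.0;  // assumed central storage/switch limit
    const double local_disk_MBps  = 30.0;    // assumed local disk rate per node

    const int sizes[] = {10, 100, 1000};
    for (int nodes : sizes) {
        double demand  = nodes * per_node_io_MBps;
        double central = demand < switch_MBps ? demand : switch_MBps;  // capped by the switch
        double local   = nodes * (per_node_io_MBps < local_disk_MBps
                                      ? per_node_io_MBps : local_disk_MBps);
        std::printf("%4d nodes: central %.0f MB/s, local %.0f MB/s\n",
                    nodes, central, local);
    }
    return 0;
}
```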

  14. FADS/Goofy architecture for SC2002
     • FADS/Goofy: Framework for ATLAS Detector Simulation / Geant4-based Object-oriented Folly
     • Plug-in converter architecture around Geant4: geometry, material, particles, tracking, events, I/O (see the sketch below)
     • Components in the diagram: ATLAS detector and user analysis modules, Event Generator input via HepMC, geometry description via xerces-c and MySQL, output converters to ROOT and Objectivity/DB on Gfarm, producing hits files and histogram files
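
For readers unfamiliar with the pattern, here is a minimal sketch of a plug-in converter architecture. These are not the actual FADS/Goofy or Athena/Gaudi converter classes, just an illustration of the idea.

```cpp
// Minimal sketch of a plug-in converter architecture (illustrative only;
// not the actual FADS/Goofy or Gaudi converter classes).
#include <cstdio>
#include <map>
#include <memory>
#include <string>

struct Event { int id; };                       // stand-in for simulated hits

class Converter {                               // common interface for all back ends
public:
    virtual ~Converter() = default;
    virtual void write(const Event& ev) = 0;
};

class RootConverter : public Converter {        // e.g. ROOT files on a Gfarm file system
public:
    void write(const Event& ev) override { std::printf("ROOT <- event %d\n", ev.id); }
};

class ObjyConverter : public Converter {        // e.g. Objectivity/DB output
public:
    void write(const Event& ev) override { std::printf("Objy <- event %d\n", ev.id); }
};

int main() {
    // The framework selects the converter plug-in by name at run time.
    std::map<std::string, std::unique_ptr<Converter>> plugins;
    plugins["root"] = std::make_unique<RootConverter>();
    plugins["objy"] = std::make_unique<ObjyConverter>();

    Event ev{42};
    plugins["root"]->write(ev);                 // chosen output back end
    return 0;
}
```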

  15. Presto-III PC Cluster @ Titech
     • Number of nodes: 256
     • CPU: AMD Athlon x 2 (Thunderbird, AXIA core), 1.33 GHz (FSB = 133 MHz)
     • Motherboard: ASUS A7V13 (VIA KT133A chipset)
     • Memory: 768 MB
     • HDD: 40 GB
     • OS: Debian/Lucie 2.14.7
     • g++: 2.95.4
     • Network card 1: DEC 21140AF
     • Network card 2: Myricom Myrinet2000
     • 47th in the TOP500 list (2nd among PC clusters)

  16. FADS/Goofy with Gfarm
     • Framework for Monte Carlo detector simulation using the Geant4 toolkit in C++
     • Parallel event processing with the ATLAS detector full simulation
     • Parallel object I/O capability with ROOT and Objectivity/DB on the Gfarm file system
     • Parallel network transfer and replication over gigabit WAN
     • World-wide distributed data mining and histogramming for petabyte-scale data
     • Robustness test: generated 10^6 events with the Titech Presto-III cluster in 2 days (throughput estimate below)
     • Replicated the simulated events over WAN to AIST, SC2002, Indiana, SDSC
     • Gfarm data replication is used as the bandwidth challenge -> see Tatebe-san's talk
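
Putting the robustness-test figure together with the Presto-III CPU count gives a rough throughput estimate; assuming every CPU was busy is an illustration, since the slide does not say so.

```cpp
// Back-of-envelope throughput of the robustness test: 10^6 simulated
// events in 2 days. The CPU count comes from the Presto-III slide;
// assuming all CPUs were busy is an illustration, not a measurement.
#include <cstdio>

int main() {
    const double events      = 1.0e6;
    const double seconds     = 2.0 * 24 * 3600;     // 2 days
    const double cpus        = 256 * 2;             // 256 dual-Athlon nodes
    const double cluster_eps = events / seconds;    // events per second, cluster-wide
    const double sec_per_ev  = cpus / cluster_eps;  // per-CPU event time if all CPUs used

    std::printf("cluster throughput : %.1f events/s\n", cluster_eps);  // ~5.8
    std::printf("per-CPU event time : %.0f s (if all %d CPUs busy)\n",
                sec_per_ev, (int)cpus);                                // ~90 s
    return 0;
}
```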

  17. Cluster and network setting for the SC2002 Bandwidth Challenge (9/3)

