
Brazilian HEP Grid initiatives: ‘São Paulo Regional Analysis Center’


Presentation Transcript


  1. Brazilian HEP Grid initiatives: ‘São Paulo Regional Analysis Center’
     2nd EELA Workshop, 24-25 June 2006, Island of Itacuruçá - Brazil
     Rogério L. Iope, SPRACE Systems Engineer

  2. SPRACE Project
     Computing Center for Data Analysis and Processing
       • High performance computing cluster → 90 dual Xeon servers (> 240 CPUs)
       • 1.2 TFlops of integrated computing power and over 200 GB of RAM
       • 12.8 TB on RAID + 7.1 TB on local disks → ~20 TB of total storage capacity
       • Gigabit fiber direct connection to the international WHREN-LILA link
       • Gigabit fiber connection to the HEP clusters at Rio de Janeiro
     Remote Computing Resource for the Fermilab DZero Collaboration
       • Data replication and access center of the SAM storage system
       • Operational execution site of SAMGrid, the DZero processing grid
       • Analysis-enabled computer cluster
     Distributed Brazilian Tier-2 Center for the CERN CMS Collaboration
       • Joint effort of SPRACE at São Paulo and the UERJ HEP group at Rio de Janeiro
       • Associated with the US-CMS Tier-1 Center at Fermilab
       • Aggregates computing resources of institutes in São Paulo and Rio de Janeiro
     CHEPREO
       • Provides leading-edge international connectivity through the WHREN-LILA link
       • Helps bridge the digital divide between North and South America
       • Enables international collaboration on Science Education programs
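A quick back-of-the-envelope check of the cluster figures above, as a sketch only: the per-node averages are illustrative values derived solely from the totals quoted on the slide, not measured specifications.

    # Rough averages implied by the slide's cluster totals (illustrative only;
    # the exact node mix is not given, so per-node values are simple averages).
    servers = 90                   # dual Xeon servers
    cpus = 240                     # "> 240 CPUs" quoted on the slide
    tflops = 1.2                   # integrated computing power
    ram_gb = 200                   # total RAM quoted
    raid_tb, local_tb = 12.8, 7.1  # RAID storage + local disks

    print(f"Total storage: {raid_tb + local_tb:.1f} TB (~20 TB as quoted)")
    print(f"Average per CPU: {tflops * 1000 / cpus:.1f} GFlops")
    print(f"Average RAM per server: {ram_gb / servers:.1f} GB")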

  3. SPRACE – Related Projects
     Network Infrastructure
       • TIDIA KyaTera Project: São Paulo State funded project for the development and deployment of a state-wide optical testbed
       • UltraLight: NSF funded project for building an ultrascale information system
     Grid Computing Initiatives
       • Open Science Grid member site
       • DOSAR - Distributed Organization for Scientific Analysis and Research
         • OSG Virtual Organization
         • Aggregates research institutions from the Southern US, Mexico, India, and Brazil
       • GridUNESP: São Paulo State University Grid Project
         • Seven participating campuses with 13 research groups
         • 12 scientific projects in computationally demanding e-Science
     Education and Outreach
       • TIDIA e-Learning Project: São Paulo State funded project for the development and deployment of e-learning tools
       • The Particle Adventure Brazilian Portuguese mirror site

  4. SPRACE Cluster & Researchers (photos: Phase 1, Phase 2)
     • Sérgio Novaes - Physicist, Researcher (PI)
     • Eduardo Gregores - Physicist, Assistant Professor
     • Sérgio Lietti - Physicist, Associate Scientist
     • Pedro Mercadante - Physicist, Associate Scientist
     • Rogério Iope - Physicist, Computer Engineering Graduate Student

  5. SPRACE site - detailed configuration

  6. SPRACE site - network facilities
     • 8-pair high-quality Lucent SM fiber cable between the USP Computer Center and SPRACE
       • 1 pair: GIGA project (to HEPGrid-Brazil)
       • 1 pair: international link (to the ANSP network)
       • 6 pairs: KyaTera project
     • Cisco 3750G-24TS-E switch/router
       • Donated by Caltech (+ ZX / LX SFPs)
       • Default gateway of the main servers
       • Routes network traffic between 4 different networks (see the sketch below):
         • 200.136.80.0/24 - netblock for the international link
         • 143.108.254.240/30 - network between ANSP and the SPRACE lab
         • 143.107.128.0/26 - USP network (Physics Dept.)
         • 10.24.46.0/24 - GIGA project
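As an illustration of how these four netblocks partition the traffic routed by the 3750G, here is a minimal Python sketch using the standard ipaddress module; the sample host addresses are hypothetical, not real SPRACE machines.

    # Illustrative sketch only: maps a host IP to one of the four netblocks
    # routed by the Cisco 3750G. The sample addresses below are hypothetical.
    import ipaddress

    networks = {
        "International link":   ipaddress.ip_network("200.136.80.0/24"),
        "ANSP <-> SPRACE lab":  ipaddress.ip_network("143.108.254.240/30"),
        "USP network (Physics)": ipaddress.ip_network("143.107.128.0/26"),
        "GIGA project":         ipaddress.ip_network("10.24.46.0/24"),
    }

    def classify(host: str) -> str:
        """Return the name of the netblock containing `host`, if any."""
        addr = ipaddress.ip_address(host)
        for name, net in networks.items():
            if addr in net:
                return name
        return "not in any SPRACE-routed netblock"

    for sample in ("200.136.80.10", "143.108.254.242", "10.24.46.7"):
        print(sample, "->", classify(sample))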

  7. Detailed internal network configuration

  8. Detailed external network connectivity

  9. RNP testbed – the GIGA Project (diagram showing CBPF, UERJ, UFRJ, and SPRACE)

  10. GIGA Project - detailed diagram (UFRJ, UERJ, USP, CBPF)

  11. HEPGrid Brazil International Connectivity (network diagram; labels: WHREN-LILA, Cisco ONS 15454 in Miami, SDH/SONET, N x GbE, SPRACE 2 x 1 Gbps, Cotia, Barueri, ANSP switch, Cisco ONS 15454, 2.5 Gbps, RNP switch, RedCLARA router, redundant dark fibers, USP, RNP router, UERJ, UFRJ, CBPF, RNP 10 Gbps)

  12. HEPGrid Brazil International Connectivity

  13. SPRACE Production - DZero Data Processing
     Standalone Cluster
       • From March/04 to July/05
       • Monte Carlo production: ~4.5 million events produced
       • More than 1.4 TB stored on tape at Fermilab
     SAMGrid enabled
       • Started operating in July/05
       • Data reprocessed at SPRACE: 4,253 raw data files, 9,206,931 events, 3.12 TB of data (see the sketch below)
       • Monte Carlo production: more than 3 million events produced since July/05
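A short calculation makes the reprocessing numbers concrete. This is a sketch assuming decimal units (1 TB = 10^12 bytes), since the slide does not state the convention used.

    # Rough averages implied by the reprocessing figures on the slide
    # (assumes 1 TB = 10**12 bytes; the slide does not state the convention).
    files = 4_253
    events = 9_206_931
    data_bytes = 3.12e12   # 3.12 TB

    print(f"~{data_bytes / files / 1e6:.0f} MB per raw data file")   # ~734 MB
    print(f"~{data_bytes / events / 1e3:.0f} kB per event")          # ~339 kB
    print(f"~{events / files:.0f} events per file")                  # ~2165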

  14. Monte Carlo Production for DZero
     • The SPRACE cluster was placed into production on March 23rd, 2004
     • Since then it has produced more than 7.5 million Monte Carlo events for the DØ Collaboration at the Tevatron
     • More than 1.6 TB of data has been transferred to the Fermilab repository

  15. Reprocessing of DZero Data
     • SPRACE was the only site in the Southern Hemisphere to participate in the reprocessing of DØ data
     • During 2005, SPRACE reprocessed 9.2 million events, together with WestGrid in Canada, CCIN2P3 in France, UTA in the USA, Prague in the Czech Republic, GridKa in Germany, and GridPP and PPARC in the UK

  16. SPRACE Ganglia during Reprocessing

  17. From DZero to CMS
     • Set up a Tier-2 of the Worldwide LHC Computing Grid (WLCG), connected to the Fermilab Tier-1, for:
       • Monte Carlo event generation
       • Data processing for physics analysis, requiring very fast data access
       • Data processing for calibration, alignment, and detector studies
     • Tier-2 requirements (see the sketch below):
       • CPU: 900 kSI2k
       • Disk: 200 TB (for 40 researchers doing physics analysis)
       • WAN: at least 1 Gbps; most sites with 10 Gbps
       • Data import: 5 TB/day from the Tier-1
       • Data export: 1 TB/day
     • Participate in the Computing, Software and Analysis 2006 integrated test (CSA06): September-November 2006
       • Tier-1: aiming to include all 7 centers, at least 5
         • 600 CPUs at Fermilab
         • 350 CPUs at CNAF
         • 150 CPUs at most other centers
       • Tier-2: aim for 20, at least 15 sites participating
         • Many sites now online and validated by Computing Integration / Service Challenge 4
         • > 20 CPU boxes / 5 TB per site; in reality much more is available
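To relate the data movement targets to the WAN requirement, the sketch below converts the daily volumes into average sustained rates. Decimal units are assumed, and real transfer patterns are bursty, so peak demand is higher than these averages.

    # Sustained rates implied by the Tier-2 data movement targets
    # (decimal units: 1 TB = 10**12 bytes; real transfers are bursty).
    SECONDS_PER_DAY = 86_400

    def tb_per_day_to_mbps(tb: float) -> float:
        return tb * 1e12 * 8 / SECONDS_PER_DAY / 1e6

    import_mbps = tb_per_day_to_mbps(5.0)   # 5 TB/day from the Tier-1
    export_mbps = tb_per_day_to_mbps(1.0)   # 1 TB/day out

    print(f"Import: ~{import_mbps:.0f} Mbps sustained")   # ~463 Mbps
    print(f"Export: ~{export_mbps:.0f} Mbps sustained")   # ~93 Mbps
    print(f"Share of a 1 Gbps WAN: ~{(import_mbps + export_mbps) / 1000:.0%}")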

  18. Demo events - SC2004 Bandwidth Challenge
     • “High Speed TeraByte Transfers for Physics”
     • BWC goal: to transfer as much data as possible using real applications over a 2-hour window

  19. SC2004 Bandwidth Challenge results
     • Traffic exchange between São Paulo and Pittsburgh
     • 2.93 (1.95 + 0.98) Gbps sustained for nearly one hour
     • Record for data transmission between the Southern and Northern Hemispheres

  20. iGrid2005 event results
     • SPRACE link tested during the iGrid2005 Workshop (São Paulo - San Diego)
     • Stable connection, almost saturated, for ~2 h
     • WHREN-LILA STM-4 link stressed to its limit (622 Mbps)

  21. SC2005 Bandwidth Challenge “Distributed TeraByte Particle Physics Data Sample Analysis”

  22. SC2005 Bandwidth Challenge results
     • Sustained data transfer of ~900 Mbps for over 1 h (São Paulo - Seattle); see the sketch below for the implied data volumes
     • WHREN-LILA STM-4 link stressed to its new limit (1.2 Gbps), with aggregated traffic coming from UERJ
     • SC|04 record: 101.13 Gbps
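For scale, the sketch below estimates the data volume implied by each quoted sustained rate, assuming roughly one hour at the stated throughput; the durations are approximations taken from the slides.

    # Approximate data volumes implied by the sustained rates quoted on the
    # SC2004/SC2005 bandwidth challenge slides (assuming ~1 hour at each rate).
    def tb_transferred(gbps: float, hours: float = 1.0) -> float:
        return gbps * 1e9 / 8 * hours * 3600 / 1e12

    print(f"SC2004, 2.93 Gbps for ~1 h: ~{tb_transferred(2.93):.1f} TB")  # ~1.3 TB
    print(f"SC2005, 0.90 Gbps for ~1 h: ~{tb_transferred(0.90):.1f} TB")  # ~0.4 TB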

  23. SPRACE and the UltraLight Project http://ultralight.caltech.edu/

  24. SPRACE and the KyaTera Project
     • The KyaTera project
       • FAPESP project for the study of advanced Internet technologies
       • A large distributed network infrastructure (testbed) over a dark fiber mesh spread across several cities of the State of São Paulo
       • Dark fibers reach the research labs directly for experimental tests
       • Platform for developing and deploying new high-performance e-Science applications
     • SPRACE proposal to the KyaTera project
       • Research in partnership with the UltraLight project for provisioning end-to-end survivable optical connections (lightpaths) in the KyaTera testbed
       • Research partners:
         • OPTINET / UNICAMP (optical networking experts)
         • LACAD / USP (HPC & distributed computing experts)

  25. SPRACE and the KyaTera Project (photo: Intel donation)

  26. GridUNESP - Computing Power Integration
     • GridUNESP project goals:
       • Deploy high-performance processing centers in seven different cities of the State of São Paulo
       • Integrate those centers using Grid Computing middleware architectures
       • Unify UNESP computing resources, allowing an effective integration with international initiatives (OSG / EGEE)
     • Finep just approved US$ 2 M to implement the project
     • Research projects:
       • Structural Engineering
       • Genomics
       • High Temperature Superconductivity
       • Molecular Biology
       • High Energy Physics
       • Geological Modeling
       • Protein Folding
