
Rio GRID Initiatives : T2-HEPGRID







  1. Rio GRID Initiatives: T2-HEPGRID — Andre Sznajder, UERJ (Brazil), http://www.hepgrid.uerj.br

  2. HEP Computing in Rio • 1982 – Beginning of experimental HEP activities • 1983 – ACP1 (parallelization software: CPS) • 1985 – ACP2 (hardware: board design; software: OS) • 1995 – CHEP 95 in Rio (first international VC in Brazil) • 1997 – Client/Server project with IBM (GRID precursor) • Client/Server upgraded to GRID

  3. HEPGRID History • 2001 – Presentation of a new project to FINEP. Approved for 500 machines, but a 3x rise in the dollar meant only 100 machines could be bought. • 2002 – The first part of the financial support was delivered. Bureaucracy delayed the project by about 2 years; we needed to create the infrastructure. Link provided by REDE RIO at 2 Mbps. • 2003 – Started to buy the machines and build the group. The CALTECH group has been very helpful – many thanks! • 2004 – HEPGRID and Digital Divide Workshop at UERJ (see http://www.lishep.uerj.br/). December 20th: inauguration of the UERJ HEPGRID cluster

  4. HEPGRID Cluster at UERJ • 1 frontend: dual-CPU Pentium Xeon 2.7 GHz, 4 GB RAM, 2x 36 GB SCSI HD, 2x 1 Gbit NIC • 82 nodes: dual-CPU Pentium Xeon 2.7 GHz, 1 GB RAM, 40 GB HD, 1 Gbit NIC • 3 TB RAID disks (2 NAS file servers) • Five 24-port Gbit switches for internal connectivity • Operating system: Rocks 3.3.0 (based on RedHat EL 3.0) • Development cluster for testing: 1 frontend, 3 nodes • GRID software: OSG 0.2.1
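The slide's per-node figures can be turned into a back-of-envelope aggregate capacity for the worker pool. This is a minimal sketch using only the numbers quoted above; nothing else about the cluster is assumed.

```python
# Aggregate capacity of the 82-node worker pool described on the slide.
nodes = 82          # worker nodes
cpus_per_node = 2   # dual Pentium Xeon 2.7 GHz
ram_per_node_gb = 1
disk_per_node_gb = 40

total_cpus = nodes * cpus_per_node            # worker CPUs
total_ram_gb = nodes * ram_per_node_gb        # aggregate RAM (GB)
total_local_disk_gb = nodes * disk_per_node_gb  # local scratch disk (GB)

print(total_cpus, total_ram_gb, total_local_disk_gb)
```

So the batch pool offers 164 CPUs with roughly 3.2 TB of local scratch disk, on top of the 3 TB of RAID storage served by the NAS file servers.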

  5. HEPGRID Cluster [photo: RAID disks, nodes, NAS, frontend, UPS units]

  6. Cluster Network Topology

  7. Cluster Monitoring (GANGLIA)

  8. Cluster Monitoring (GANGLIA)

  9. Cluster Monitoring (MONALISA)
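The Ganglia screenshots on these slides are built from XML that each gmond daemon serves over TCP (port 8649 by default), so any script can read the same cluster state. The sketch below parses a hand-made sample in Ganglia's XML format; the host name and metric values are illustrative, not data captured from the HEPGRID cluster.

```python
# Parse a Ganglia-style XML report: clusters contain hosts, hosts
# contain metrics.  In a live setup this XML would be read from a
# gmond socket instead of a literal string.
import xml.etree.ElementTree as ET

sample = """<GANGLIA_XML VERSION="3.0">
 <CLUSTER NAME="HEPGRID" LOCALTIME="0">
  <HOST NAME="node01" REPORTED="0">
   <METRIC NAME="load_one" VAL="0.42" TYPE="float" UNITS=""/>
   <METRIC NAME="mem_free" VAL="524288" TYPE="uint32" UNITS="KB"/>
  </HOST>
 </CLUSTER>
</GANGLIA_XML>"""

root = ET.fromstring(sample)
for host in root.iter("HOST"):
    for metric in host.iter("METRIC"):
        print(host.get("NAME"), metric.get("NAME"), metric.get("VAL"))
```

The web frontend shown in the screenshots aggregates exactly this per-host data into the familiar load and memory graphs.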

  10. External Network Connectivity [map: CLARA link, 100 Mbps]

  11. External Network Connectivity

  12. Rio 2005 CMS GRID Meeting • Main outcomes: • Decision to connect HEPGRID to the Fermilab T1 • Grid3/OSG installation & certification • Start of CMS Monte Carlo production • RNP decided to provide cluster connectivity (currently 100 Mbps, to be upgraded to a dedicated 1 Gbps link after September 2005)
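The value of the planned 100 Mbps → 1 Gbps upgrade is easy to see from transfer-time arithmetic. The sketch below uses an illustrative 1 TB dataset and an assumed 80% usable fraction of the nominal link rate; neither figure comes from the slides.

```python
# Rough transfer-time estimate for moving a dataset over a WAN link.
def transfer_hours(size_tb, link_mbps, efficiency=0.8):
    """Hours to move size_tb terabytes over a link_mbps link,
    assuming a fixed fraction of the nominal rate is usable."""
    bits = size_tb * 1e12 * 8            # dataset size in bits
    rate = link_mbps * 1e6 * efficiency  # usable bits per second
    return bits / rate / 3600

print(round(transfer_hours(1.0, 100), 1))   # hours at 100 Mbps
print(round(transfer_hours(1.0, 1000), 1))  # hours at 1 Gbps
```

Under these assumptions a 1 TB Monte Carlo sample takes roughly a day at 100 Mbps but under three hours at 1 Gbps, which is what makes regular production transfers to the Fermilab T1 practical.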

  13. Rio CMS GRID Meeting

  14. National Research and Education Network (RNP) • Maintained by the Brazilian government • Provides national (inter-state) and international connectivity for more than 200 universities and research centers • Collaboration: links to other similar networks internationally (Internet2, GÉANT, APAN, RedCLARA) • Supports the development of advanced networking and its applications

  15. RNP Network Topology

  16. SC2004: Network Traffic Between UERJ & CERN [plots: UERJ to CERN, CERN to UERJ] Thanks to RNP and the Caltech group!

  17. HEPGRID Team

  18. Perspectives • Install SAMGRID and begin P17 reprocessing & MC production for D0 • Install dCache for CMS digi/reconstruction • Storage upgrade to 200 TB • Cluster upgrade (rack-mounted?)
