
Network infrastructure at FR-CCIN2P3



Presentation Transcript


  1. Network infrastructure at FR-CCIN2P3. Guillaume Cessieux, CCIN2P3 network team (Guillaume.Cessieux@cc.in2p3.fr), on behalf of the CCIN2P3 network team. LHCOPN meeting, Vancouver, 2009-09-01

  2. FR-CCIN2P3: since 1986; now 74 persons, ~5300 cores, 10 PB of disk, 30 PB of tape; computing room ~730 m², 1.7 MW

  3. RENATER-4 → RENATER-5: dark fibre galore. ~7500 km of dark fibre. [Backbone map: dark fibres, 2.5 G leased lines and 1 G (GE) leased lines, linking sites including Kehl, Le Mans, Angers, Tours, Cadarache and CERN (Geneva).]

  4. PoP RENATER-5 Lyon
  • (D)WDM based
  • Previously: Alcatel 1.6k series, Cisco 6500 & 12400
  • Upgraded to: Ciena CN4200, Cisco 7600 & CRS-1
  • Hosted by CCIN2P3
  • Direct foot into RENATER’s backbone: no last-mile or MAN issues

  5. Ending two 10G LHCOPN links: GRIDKA-IN2P3-LHCOPN-001 and CERN-IN2P3-LHCOPN-001. [Layer 3 view, also showing CERN-GRIDKA-LHCOPN-001, a candidate for L1 redundancy, and a 100 km segment.]

  6. WAN connectivity related to T0/T1s. [Diagram: LAN edge and backbone; LHCOPN at 10G towards Geneva and Karlsruhe (T0/T1), with dedicated data servers for LCG and dedicated MDM appliances; generic IP via RENATER, GÉANT2 and the Internet (beware: not for LHC) towards Tier-2s, e.g. 2x1G for FR Tier-2s and Chicago.]

  7. LAN: just fully upgraded! [Before/after diagrams of the LAN: computing, SATA storage, and FC + tape storage blocks.]

  8. Now a “top of rack” design
  • Really eases mass handling of devices
  • Enables buying pre-wired racks directly
  • Just plug power and fibre: 2 connections!

  9. Current LAN for data analysis. [Diagram: 40G backbone; 2 distribution switches on one side and 3 on the other, each linked to the backbone with 4x10G. Storage: 34 access switches with trunked 2x10G uplinks, 48x1G per switch, 24 servers per switch at 2x1G per server; 816 SATA data servers in 34 racks; FC data (27 servers) and tape (10 servers) at 10G per server. Computing: 1 access switch per rack (36 access switches), each with a 1x10G uplink and 48x1G ports; 36 computing racks, 34 to 42 servers per rack, 1G per server.]
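As a quick check of the figures on the slide above, the short Python sketch below works out the access-layer oversubscription ratios they imply. It is illustrative arithmetic only; the numbers come from the slide, the calculation is the only addition.

    # Access-layer oversubscription implied by the slide's figures (illustrative).

    # Storage access: 24 servers per switch, 2x1G per server, trunked 2x10G uplink.
    storage_downlink = 24 * 2   # Gbit/s of server-facing capacity per switch
    storage_uplink = 2 * 10     # Gbit/s of trunked uplink
    print(f"storage access oversubscription: {storage_downlink / storage_uplink:.1f}:1")  # 2.4:1

    # Consistency check: 816 SATA servers / 24 per switch = 34 access switches, as stated.
    print("storage access switches:", 816 // 24)

    # Computing access: one switch per rack, 34 to 42 servers at 1G, a single 10G uplink.
    for servers_per_rack in (34, 42):
        ratio = servers_per_rack * 1 / 10
        print(f"computing rack with {servers_per_rack} servers: {ratio:.1f}:1 oversubscription")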

  10. Main network devices and configurations used
  • Backbone & edge (x5): Cisco 6513 with 24x10G (12 blocking) + 96x1G + 336x1G blocking (1G per 8 ports), or 48x10G (24 blocking) + 96x1G; Cisco 6509 with 64x10G (32 blocking)
  • Distribution (x5): Cisco 4900, 16x10G
  • Access (x70): Cisco 4948, 48x1G + 2x10G
  • > 13 km of copper cable & > 3 km of 10G fibre
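To make the device table above more concrete, here is a small Python tally of the ports it implies and of what the blocking figures amount to. It is a back-of-the-envelope sketch, not an inventory, and reading “1G/8 ports” as an 8:1 oversubscription is an interpretation rather than something spelled out on the slide.

    # Port tally implied by the device table above (back-of-the-envelope).
    access_switches = 70    # Cisco 4948: 48x1G + 2x10G each
    dist_switches = 5       # Cisco 4900: 16x10G each

    print("access 1G ports:", access_switches * 48)        # 3360
    print("access 10G uplinks:", access_switches * 2)      # 140
    print("distribution 10G ports:", dist_switches * 16)   # 80

    # "336x1G blocking (1G/8 ports)": eight 1G ports share 1G towards the
    # switch fabric, i.e. roughly an 8:1 oversubscription on those cards.
    print("usable capacity of the 336 blocking 1G ports:", 336 / 8, "Gbit/s")  # 42.0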

  11. Tremendous flows. LHCOPN links not so used yet (CERN-IN2P3-LHCOPN-001, GRIDKA-IN2P3-LHCOPN-001), but still regular peaks at 30G on the LAN backbone

  12. Other details: LAN
  • Big devices preferred to a meshed bunch of small ones
  • We avoid too much device diversity: eases management & spares
  • No spanning tree, trunking is enough
  • Redundancy only at service level, when required
  • Routing only in the backbone (EIGRP)
  • 1 VLAN per rack (see the addressing sketch below)
  • No internal firewalling: ACLs on border routers are sufficient, applied only on incoming traffic and per interface, which preserves CPU
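To illustrate the “one VLAN per rack, routing only in the backbone” layout, here is a hypothetical addressing sketch in Python. The VLAN numbering and the 10.158.0.0/16 supernet are invented for the example; the slide gives no real addressing plan.

    # Hypothetical per-rack VLAN/subnet plan (invented values, for illustration only).
    import ipaddress

    supernet = ipaddress.ip_network("10.158.0.0/16")          # assumed supernet
    rack_subnets = supernet.subnets(new_prefix=26)            # one /26 per rack: 62 hosts

    plan = {}
    for rack_id, subnet in zip(range(1, 37), rack_subnets):   # e.g. 36 computing racks
        plan[f"rack-{rack_id:02d}"] = {
            "vlan": 100 + rack_id,                   # arbitrary VLAN numbering
            "subnet": str(subnet),
            "gateway": str(next(subnet.hosts())),    # SVI routed in the backbone (EIGRP)
        }

    print(plan["rack-01"])
    # {'vlan': 101, 'subnet': '10.158.0.0/26', 'gateway': '10.158.0.1'}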

  13. Monitoring
  • Home-made flavour of netflow: EXTRA (External Traffic Analyzer), http://lpsc.in2p3.fr/extra/, but some scalability issues around 10G...
  • Cricket & Cacti + home-made ping & TCP tests and rendering (a minimal probe is sketched below)
  • Several publicly shared: http://netstat.in2p3.fr/
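In the spirit of the home-made ping & TCP tests mentioned above, here is a minimal probe sketch in Python. It is not the actual EXTRA or netstat.in2p3.fr code; the target host and port are placeholders.

    # Minimal reachability probe: ICMP ping plus a timed TCP connect (illustrative).
    import socket
    import subprocess
    import time
    from typing import Optional

    def icmp_ping(host: str, count: int = 3) -> bool:
        """True if the host answers ICMP echo (relies on the system ping command)."""
        result = subprocess.run(["ping", "-c", str(count), "-W", "2", host],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    def tcp_check(host: str, port: int, timeout: float = 3.0) -> Optional[float]:
        """TCP connect time in seconds, or None on failure."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return None

    if __name__ == "__main__":
        target = "example.org"                     # placeholder target host
        print("icmp:", icmp_ping(target))
        print("tcp/443:", tcp_check(target, 443))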

  14. Ongoing (1/3): WAN (RENATER)
  • Upcoming transparent L1 redundancy, Ciena based
  • 40G & 100G testbed
  • The short path FR-CCIN2P3 – CH-CERN is a good candidate

  15. Ongoing (2/3): LAN
  • Improving servers’ connectivity: 1G → 2x1G → 10G per server, starting with the most demanding storage servers
  • 100G LAN backbone: investigating Nexus-based solutions (7018: 576x10G, worst case ~144 at wire speed)
  • Flat to star design → Nx40G / Nx100G (rough arithmetic below)
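A rough look at what this upgrade path implies, using only the figures quoted on the slide; the assumption that today’s 2x10G storage trunks stay in place is mine, and it is exactly what the Nx40G / Nx100G uplinks would change.

    # Illustrative arithmetic on the upgrade path (figures from the slide).

    # Nexus 7018 as quoted: 576x10G ports, worst case ~144 at wire speed.
    print("7018 worst-case oversubscription:", 576 / 144, ": 1")   # 4.0 : 1

    # A storage access switch today: 24 servers x 2x1G behind a 2x10G trunk.
    today = (24 * 2) / (2 * 10)
    # If the same 24 servers moved to 10G each behind the same 2x10G trunk,
    # oversubscription would jump to 12:1, hence the push to Nx40G / Nx100G.
    at_10g = (24 * 10) / (2 * 10)
    print(f"access oversubscription: {today:.1f}:1 today, {at_10g:.0f}:1 at 10G per server")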

  16. Ongoing (3/3): a new computer room!
  • 850 m² on two floors: 1 for cooling, UPS, etc., 1 for computing devices
  • Target 3 MW (starting at 1 MW)
  • Expected beginning of 2011
  [Drawing: existing building plus the new two-floor building 2.]

  17. Conclusion
  • WAN: excellent LHCOPN connectivity provided by RENATER; demand from T2s may be the next working area
  • LAN: linking abilities recently tripled; next step will be the core backbone upgrade
