
Network & Services Overview June 2012 Jeff Ambern jambern@grnoc.iu.edu


Presentation Transcript


  1. Network & Services Overview, June 2012, Jeff Ambern, jambern@grnoc.iu.edu

  2. Agenda • Indiana GigaPOP Overview • Services • Commodity Internet Usage Trends • Monon100 (100Gbps to Chicago)

  3. Indiana GigaPOP • Established in Indianapolis in 1998 • A partnership between Indiana University and Purdue University • An advanced high-speed, high-availability, feature-rich network • Drives down costs and increases connection speeds for Indiana's top research colleges and universities • Serves as an aggregation point for Indiana's universities to access regional, national, and international R&E networks and the Commodity Internet

  4. GigaPOP Participants • Indiana University • Purdue University • I-Light • Notre Dame • RLHEC • SLIAC • ENA • C-SPAN Archives • NCAA

  5. GigaPOP Providers & Peers • Providers (Commodity): Cogent • TimeWarner • Iquest (Local only) • Smithville (Local only) • Wintek (Local only) • Internet2 TRCPS • WiscNet RPS • Akamai (Cache) • Google (Cache) • Peers (R&E): Internet2 • National LambdaRail • MREN • CIC • ESnet

  6. Network Elements • Juniper MX960 (Core Nodes) • Fully redundant chassis hardware (cooling, power supplies, Routing Engines, SCBs) • HP 5400 (Layer2 Backbone Switches) • Brocade MLXe (Layer2 Backbone Switch) • HP 3500 (Management Switches for OOB) • Mix of Cisco routers (Terminal Servers, VPN, Management)

  7. Current Topology

  8. GigaPOP Capabilities • Layer3 • R&E Access (National, Regional and Local) • Commodity Internet Access • L3VPN/MPLS • Multicast • Layer2 • L2VPN/MPLS • VLANs • Layer1/2 • 100G Ethernet (Fiber) • 10G Ethernet (Fiber) • 1G Ethernet (Fiber, Copper)

  9. Access to Research Networks R&E (Base Service) • National/International/Regional Networks (Internet2 (100G), NLR, CIC, MREN, ESnet) • Local R&E (GigaPOP Participants, State Universities) • R&E Service includes IPv4 plus: • IPv6 (we can provide IPv6 address space to members; see the subnetting sketch below) • Multicast • Jumbo Frames (9000 bytes)
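To make the IPv6 bullet concrete, here is a minimal Python sketch of how per-member allocations could be carved out of a larger block. This is illustrative only: the prefix shown is RFC 3849 documentation space, not the GigaPOP's actual allocation, and the member names are just examples.

```python
# Illustrative only: carve per-member /48s out of a larger IPv6 block,
# as the "we can provide IPv6 address space to members" bullet implies.
# 2001:db8::/32 is documentation space, not a real GigaPOP allocation.
import ipaddress

block = ipaddress.ip_network("2001:db8::/32")
member_nets = block.subnets(new_prefix=48)  # one /48 per member site

for member, net in zip(["IU", "Purdue", "Notre Dame"], member_nets):
    print(f"{member}: {net}")
# IU: 2001:db8::/48
# Purdue: 2001:db8:1::/48
# Notre Dame: 2001:db8:2::/48
```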

  10. Commodity Internet (Optional to Participants) • Full Transit • Complete Internet Routing Table • ~410K Routes • Settlement-Free Internet • Partial Internet Routing Table • ~240K Routes (see the routing sketch below for why a partial table still gives full reachability) • Commercial entities agree to peering arrangements with I2 (TRCPS) or other GigaPOPs to save on upstream bandwidth charges • Internet Caching Servers • Similar to a settlement-free peering arrangement but involving only one entity (e.g., Google, Akamai, Netflix (future peering)) • Server farm and cached data are local to the customer • Does not consume Internet transit bandwidth for cached data (except during incremental updates)
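A minimal sketch, not actual GigaPOP router configuration, of why a partial routing table still delivers full reachability: settlement-free peer routes win on longest-prefix match when present, and everything else falls through to a default route toward a paid full-transit provider. All prefixes and next-hop names below are hypothetical.

```python
# Hypothetical longest-prefix-match table: a few peer-learned prefixes
# plus a default route toward a full-transit provider.
import ipaddress

ROUTES = {
    "10.20.0.0/16": "peer-internet2-trcps",  # settlement-free peer
    "10.30.0.0/15": "peer-wiscnet",          # settlement-free peer
    "0.0.0.0/0":    "transit-cogent",        # paid transit catches the rest
}

def next_hop(dst):
    """Longest-prefix match over ROUTES."""
    addr = ipaddress.ip_address(dst)
    matches = [ipaddress.ip_network(p) for p in ROUTES
               if addr in ipaddress.ip_network(p)]
    best = max(matches, key=lambda n: n.prefixlen)
    return ROUTES[str(best)]

print(next_hop("10.20.5.9"))  # -> peer-internet2-trcps (no transit charge)
print(next_hop("8.8.8.8"))    # -> transit-cogent (paid per-Mbps transit)
```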

  11. Caching Services • Why is caching beneficial to Participants? • Offloading commercial traffic to local servers reduces the per-Mbps charges we would otherwise pay to our transit providers • Reduces the amount of Commodity traffic that would congest our upstream links and force us into costly upgrades (I2 TRCPS, Cogent and TWTC) • Reducing our costs allows us to pass savings on to our Participants (see the savings sketch below)
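A back-of-the-envelope sketch of that savings argument: cached traffic is traffic that is never billed at the transit rate. The offload and rate figures below are rough placeholders drawn from the surrounding slides, not actual contract numbers.

```python
# Illustrative only: traffic served from a local cache is traffic we never
# pay a per-Gbps transit rate for.
def monthly_savings(offloaded_gbps, cost_per_gbps):
    """Avoided transit spend from serving traffic out of a local cache."""
    return offloaded_gbps * cost_per_gbps

# e.g. ~2.3 Gbps of combined Akamai + Google 95th-percentile cache traffic
# (next slide) at the later $1,750/Gbps rate from the bandwidth-trends slide:
print(f"${monthly_savings(2.3, 1750):,.0f}/month avoided")  # -> $4,025/month
```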

  12. Caching Services Akamai server farm added late 2009 • 1 Gbps at the 95th percentile • 500 Mbps daily average that we would otherwise pay our upstream providers for, or that would consume bandwidth on our settlement-free links Google Caching Services added 12/2011 • 1.3 Gbps at the 95th percentile • 730 Mbps average Netflix Caching Services / Peering, late 2012 • Currently working with Netflix on traffic analysis • Depending on usage, we will either peer at an IXP or install a cache node in Indianapolis (a sketch of 95th-percentile billing follows)
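Since the figures above quote 95th-percentile rates, here is a minimal sketch of how that billing measure is typically computed from 5-minute utilization samples: sort a month of samples, discard the top 5%, and bill on the highest remaining sample. The sample data is invented, loosely shaped like the Akamai numbers above.

```python
# Minimal sketch of 95th-percentile ("burstable") billing.
def percentile_95(samples_mbps):
    """Highest sample after discarding the top 5% of a sorted month."""
    ordered = sorted(samples_mbps)
    cutoff = int(len(ordered) * 0.95) - 1  # index after dropping top 5%
    return ordered[max(cutoff, 0)]

# Invented month of 5-minute samples (8,640 in 30 days): a ~500 Mbps
# baseline with evening peaks near 1 Gbps.
samples = [500.0] * 7000 + [980.0] * 1640
print(percentile_95(samples))  # -> 980.0 Mbps billed, vs a ~590 Mbps average
```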

  13. Commodity Usage - Gbps • Cogent (10G) & TimeWarner (10G) • Full Internet Transit • Fully Redundant Upstreams • Full IPv4 & IPv6 Transit • Internet2 TRCPS Commodity Peering Service (10G) • Reduces cost through aggregation and settlement-free peering • WiscNet Regional Peering Service (10G), via our CIC membership • Akamai Caching Service (10G) • Google Caching Service (10G)

  14. Bandwidth Trends Commodity Internet Usage Trends (Aggregate Upstream) • 40-70% increase in usage per year, 2009-2012 • Sept 2009: 2.5G avg at $10,000/Gbps ($25K/month) • Sept 2010: 6.5G avg at $6,000/Gbps • Sept 2011: 9.5G avg at $5,000/Gbps • July 2012: 13.0G avg at new pricing of $1,750/Gbps ($22.27K/month) • Able to keep member cost down due to decreases in provider pricing and settlement-free peers (the arithmetic is checked in the sketch below)
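A quick arithmetic check of the trend above, under the assumed model that monthly cost is roughly average Gbps times the quoted per-Gbps price. Note the slide's July 2012 figure ($22.27K) sits slightly below 13.0 x $1,750 = $22.75K, so the actual billing basis presumably differs a little from the straight average (95th-percentile billing, for instance).

```python
# Back-of-the-envelope check of the trend above (assumed model:
# monthly cost = average Gbps x quoted per-Gbps price).
TREND = [
    ("Sept 2009",  2.5, 10_000),
    ("Sept 2010",  6.5,  6_000),
    ("Sept 2011",  9.5,  5_000),
    ("July 2012", 13.0,  1_750),
]
for month, avg_gbps, price in TREND:
    print(f"{month}: {avg_gbps:4.1f} Gbps x ${price:,}/Gbps "
          f"= ${avg_gbps * price:,.0f}/month")
# Sept 2009:  2.5 Gbps x $10,000/Gbps = $25,000/month
# July 2012: 13.0 Gbps x $1,750/Gbps = $22,750/month
```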

  15. Commodity Bandwidth 2009-2012 (24-hour Averages)

  16. Monon100

  17. Monon100 • Monon100 is named after the Monon Rail line that connected Indiana's higher education institutions to the rest of the world through Chicago • The Monon Rail served six colleges and universities along its line (from Chicago to Louisville): • Purdue University in West Lafayette, Indiana • Wabash College in Crawfordsville, Indiana • DePauw University in Greencastle, Indiana • Indiana University in Bloomington, Indiana • Butler University in Indianapolis, Indiana • St. Joseph's College in Rensselaer, Indiana • Links the Indiana GigaPOP to Internet2, NLR, CIC, MREN and other networks • 10 times faster than our previous network path to Chicago • A resource available to all Indiana GigaPOP members

  18. GigaPOP - Internet2 Update • Connected at 100G to Internet2 in Chicago • Upgrading optical nodes between Indianapolis and Chicago to support 100G • Temporary Internet2 100G wave until the optical upgrades are completed

  19. Optical Transition • Phase 1 • Rebuild the IPGrid Optical System as the Monon100 Optical System • Phase 2 • Transition services from the temporary I2 DWDM to the Monon100 Optical System • Indiana • Future growth will include 100GE to IUB via I-Light • CIC • Future growth will include 100GE to CIC in Chicago

  20. Monon100 – Core Upgrades • Installed new Brocade MLXe 16-slot switch in Chicago • Core Juniper routers have been upgraded to Junos 12.1R1.9 • Power supplies upgraded to 4100W • Enhanced MX SCB (3 per router) • Added High-Capacity Fan Trays • 100G cards online (3 at ICTC and 1 at LL) • MPC Type 3 (Modular Port Concentrator) • CFP-100G-LR4 optics

  21. Monon100/Indiana GigaPOP

  22. I-Light Access to Chicago

  23. Questions?
