
What’s MAX Production Been Up to? Presentation to MAX membership





Presentation Transcript


  1. What’s MAX Production Been Up to? Presentation to MAX membership, Spring 08 Member Meeting. Dan Magorian, Director of Engineering & Operations

  2. [Topology diagram] MAX Production topology, May 08. Upstream networks: R&E nets via Internet2 and National LambdaRail; ISP service via Cogent, Qwest, and TransitRail. Fiber/dwdm systems: 1. original Zhone dwdm over Qwest fiber (Baltimore pops: 6 St Paul, 660 RW); 2. Fujitsu dwdm over State of Md fiber (Prod Ring 4, research fiber spur); 3. 10G on HHMI dwdm over Abovenet fiber; 4. 10G lambda on Univ Sys Md MRV dwdm. Pops: Level3 pop, McLean VA (MCLN T640); UMD pop, College Park MD (CLPK T640 and 6500, NGIX, 10G backbone); Equinix pop, Ashburn VA (ASHB); GWU & Qwest DC pops (DCGW, DCNE); ISI/E Arlington VA pop (ARLG). Rings 1–3 interconnect the DC-area pops, with AWave North and South lambdas between CLPK and MCLN.

  3. What has the Production Side of MAX been doing since the Fall member meeting? • Wrap-up of the “Big Move”, Phase 1: • Moved the CLPK T640 to our new pop in UMD bldg 224 room 0302, customer by customer, with minimal downtime for each. Happy to give tours! • BALT cutover of the 6 St Paul pop cleaned up, with the USM 10G fixed (thanks, Norwin!), 40-channel filters for DWDM fanout in, and the first 10G lambda from the JHU Astro cluster to MCLN and on to Chicago delivered. Involved a lot of work with JHU’s fiber to get a usable path across town. • Still working with UMD, NASA, and DREN to get the last customer colo gear in 224/0312 moved into new cabinets so the old racks can be turned over to UMD/NTS.

  4. What has the Production Side of MAX been doing since the Fall member meeting? (cont’d) • Phase 2 of the Big Move: • Fujitsu Flashwave 7500s spec’d and procured; make-ready done at GWU for the 23” rack needed. • Now staged in the lab for testing; fiber-jumper issues to UMD slowed it down a week or two. • Also ran into a snag with a Fujitsu software bug: sonet timing for the Flexponder modules (newer versions of the Muxponder modules used in Phase 1) was not being propagated from the head timing source. Requires us to arrange a timing source at each DC ring location (thanks, NWMD and GWU). • Expect to have it deployed to replace the 2000-vintage Luxn/Zhone dwdm system by end of May.

  5. BERnet and the work on JHU’s fiber • The Baltimore region has a really good “horse-trading” club called BERnet, started by Richard Rose of USM. • Nothing fancy; it’s just a working group of network providers and universities in the region who have common interests in sharing and trading resources. • But this forum made possible the use of the 40-channel “client-side dwdm” filters, which are working out well as a unique experiment in innovative low-cost metro dwdm. • To use these, MAX needed to help JHU establish a working 10G path for an Astro cluster lambda back to Chicago. The City of Baltimore fiber they were using was in terrible shape. • So together with their engineers, we loaned optical tools and over several weeks tested and figured out a working 10G path.

  6. VRFs and more VRFs • MAX has run RFC 2547 (now RFC 4364, which just doesn’t have the same ring) MPLS virtual routing tables (VRFs) for years. • Originally we adopted them to offer ISP service without blackholing traffic from participants not subscribed to it; this offers choice instead of “one size fits all” routing. • Especially good for policy routing, where you “can’t get back (to advertise downstream) routes bgp hasn’t selected”. • When we needed to add NLR’s TransitRail, we created separate and “blended” VRFs that gave participants choice. • Dave gave a talk on it at Joint Techs Hawaii: http://www.internet2.edu/presentations/jt2008jan/20080120-diller-gpgb.ppt • Even campuses are using it now to get rid of thousands of vlans, eg Minnesota’s and Indiana’s flagship campuses.
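As a rough illustration of the per-participant choice described above, an RFC 4364 VRF on a Juniper router is defined as a routing instance with its own interfaces and BGP sessions. This is a minimal hypothetical sketch, not MAX’s actual configuration; the instance name, AS numbers, interface, and addresses are all invented for the example:

```
/* Hypothetical VRF carrying only commodity (ISP) routes, offered to
   participants who subscribe to that service.  All values illustrative. */
routing-instances {
    ISP-ONLY {
        instance-type vrf;
        interface ge-0/0/1.100;          /* participant-facing subinterface */
        route-distinguisher 65000:100;   /* example ASN:id */
        vrf-target target:65000:100;     /* import/export route-target */
        protocols {
            bgp {
                group participants {
                    type external;
                    peer-as 65001;       /* participant's AS, illustrative */
                    neighbor 192.0.2.2;  /* participant's peering address */
                }
            }
        }
    }
}
```

A “blended” offering of the kind mentioned for TransitRail would simply be another instance whose route-targets import both the R&E and commodity tables, so each participant’s session lands in the table matching the services they subscribe to.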

  7. Other Accomplishments • MCLN pop: • Finished the 6500 installation with 10G back to CLPK for distributed L2 service (more later). • Moved the AtlanticWave lambdas to the MCLN 6500. • Cut over from the HHMI-provided 1G to Equinix Ashburn to a new protected 10G lambda (thanks, Phil and John!) • Brought up VA (almost), the first customer from Equinix. • Provisioned a NOAA 10G lambda from CO via MCLN back to CLPK and on to NOAA Silver Spring. Involved with further planning and support for the NOAA research net. • Installing BALT dwdm filters on the fiber path from SAILOR to 111 Market Street (Qwest pop and others). The “passive pop” strategy makes it inexpensive to add backhaul locations.

  8. Other Ongoing Activities • Looking for fiber: • To ring out Baltimore to provide a diverse path, and also to Equinix Ashburn for growth beyond the single 10G lambda. • Partnering with Litecast in BALT for customer fiber, similar to our long-time carrier partnership with Fibergate. • TransitRail: • Brought up 1G peering and passed it to USM as anchor tenant; still in test mode. Once we have more experience with reliability and cost, we will offer the service to more folks. Talking with MATP about sharing a TR 10G if it works out. • Evaluating new Juniper EX switches for DC ring fanout. • Still working with MIT to get the BALT 300 W Lex pop online. • Working with the Quilt to establish inexpensive lambda pricing. • Offering IPv6 training courses next week, and IPv6 on the iperfers.

  9. Proposed Layer 2 Service Offering • Have talked about this FrameNet-like service at the last meetings. • Physically, 6500s in MCLN and CLPK are provisioned with a 10G lambda between them. • Once the DC ring dwdm is in, will select and procure Juniper EX or Force10 S25P switches for fanout from the 10G lambda. • After thinking about pricing, it turns out we have an L2 service offering defined long ago: NGIX. Now all net-net peerings, although it used to have an enterprise participant (NLM). • The idea would be that for the same NGIX $30k/yr/port, participants could pass vlans to each other locally, or out AWave to FL/GA/NY, or across to Europe or S America, or to the western US. • Will be adding dynamic vlans using DRAGON technology so vlans on the shared Fujitsu 10G can be brought up/down on demand, and cross-connected to Internet2’s DCN service.
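To make the vlan pass-through idea concrete, here is a hypothetical sketch of what a participant handoff could look like on one of the Juniper EX fanout switches under consideration. The vlan name, tag, and port names are invented for illustration and are not the actual service definition:

```
/* Hypothetical EX-series config: accept a participant's traffic on an
   access port and trunk its vlan over the shared 10G toward MCLN/CLPK.
   All names and tags illustrative. */
vlans {
    PARTICIPANT-A {
        vlan-id 2001;                        /* example tag */
    }
}
interfaces {
    ge-0/0/10 {                              /* participant-facing port */
        unit 0 {
            family ethernet-switching {
                port-mode access;
                vlan members PARTICIPANT-A;
            }
        }
    }
    xe-0/1/0 {                               /* 10G lambda toward the backbone */
        unit 0 {
            family ethernet-switching {
                port-mode trunk;
                vlan members [ PARTICIPANT-A ];
            }
        }
    }
}
```

Under the dynamic-vlan plan mentioned above, the DRAGON control plane rather than a static config would add and remove the vlan memberships on demand.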

  10. Participant Redundant Peering Initiative • [Diagram: a campus dual-homed over the Fujitsu dwdm to both the MCLN router and the CLPK router; USM, NLM, and JHU shown as participants.] • We have been promoting this since 2004. But now we want to really emphasize that with the new dwdm infrastructure we can easily double-peer your campus to both routers for high-nines availability. 8 folks so far.

  11. Closing thought: Want to get folks thinking about how your campuses can take part in dynamic and non-traditional networking • Internet2’s dynamic circuit net (DCN) and I2/NLR’s lambda-service activities are underway in the community. • MAX is positioned to fan these out and help drill them down into your campuses, as in JHU’s recent example. • Want to get people thinking about the implications of dynamic allocation and of dedicated rather than shared resources: • Circuit-like services for high-bandwidth, low-latency projects • Not a replacement for “regular” ip routing, but an addition to it • Possible campus strategies for fanout. Need to plan for how you will deliver these, just as BERnet is doing in facilitating researcher use of this as it comes about. • People may say, “We don’t have any of those applications on our campus yet.” But you may suddenly have researchers with check in hand. • Talk to us about what you’re thinking about and want to do!

  12. Thanks! magorian@maxgigapop.net
