
MAX Research Activities Status and Analysis





Presentation Transcript


1. MAX Research Activities: Status and Analysis
Jerry Sobieski, Director, Research Initiatives
April 26, 2007

2. Current Projects
• ATDNet-V2
  • Bill Babson (Project Lead)
  • Contract being re-engineered to reduce fiber costs
• DRAGON
  • Jerry Sobieski (PI)
  • Fiona Leung (Systems/Software Development Engineer)
  • GRA (open)
  • Moving to complete program deliverables over the next 6+ months
  • DRAGON software is seeing substantial interest: I2 DCS, UvA AAA
• HOPI Testbed Support Center
  • Chris Tracy (Project Lead, Optical/Network Engineering)
  • Jarda Flidr (System/Software Development Engineer)
  • (Open FTE)
  • Adapting the DRAGON software to interoperate with Ciena CoreDirectors; end-user GUI; XML interface
• LTS Application Specific Topologies
  • (Open FTE) Post Doc
  • GRA
  • Developing architectural approaches to building provably survivable inter-institutional networks, and a real-time video services distribution architecture

3. ATDNet V2
• Sponsor: Naval Research Lab
• Participants: NRL, LTS, DISA, MIT-LL
• Major Activities:
  • Renegotiating fiber pricing with Qwest
    • Migrating to "rings" from "segment" pricing
    • Moving to a 20-year IRU with only annual maintenance
  • Re-engineering BoSSNet
    • Replacing old MONET gear with Ciena CoreStream
    • Including a 10 Gbps wave for DRAGON/E-VLBI (MIT Haystack)
    • 40 Gbps experiments
    • Expected to be operational by end of May
  • Optical peering with DRAGON
    • Tunable transponders interconnecting DRAGON and GIG-EF (ATDnet) are in place and under test; we believe this is the first such deployment of tunable transponders in the R&E community
    • GMPLS control plane being set up
• Kudos: Bill Babson (MAX)

4. DRAGON
• Sponsor: National Science Foundation
  • Experimental Infrastructure Networks (EIN)
• Major Activities:
  • Complete deliverables
    • Complete L2SC, LSC, PSC multilayer deployment
    • Continue AST development
    • Integration of advanced path computation codes
  • Transition the "research" project into a "software development" project (the feature roadmap includes non-research issues that make the software more practical)
  • To do: DRAGON Users Group (DUG)
    • Coordinate experiments, activities, and objectives for the testbed
    • Formulate ongoing support strategies to maintain the facility into the GENI years as an asset for MAX members to use in GENI proposals

5. The DRAGON Optical Layer
[Network diagram: Qwest and Level3 fiber rings connecting MAX, UMBC, LTS, GSFC, NRL, NGC, and ACCESS via the CLPK, CLPK-RE, DCGW, DCNE, DCNE-RE, ARLG, ARLG-RE, MCLN, and MCLN-RE nodes; a new tunable lambda toward BoSSNet; Movaz RayROADM (MEMS wavelength switches) and Movaz RayExpress (wave add/drop mux/demux)]

6. The DRAGON L2SC Layer
[Network diagram: Raptor and Force10 Ethernet switches at CLPK, DCGW, DCNE, ARLG, and MCLN connecting UMBC, MAX, LTS, Venter, NOAA (new), GSFC, MIT/Haystack, NIH/NLM (new), NGC, GMU, ISIE, ACCESS, and HOPI/DCS, with CaveWave links to AMES and UIC]

7. HOPI Testbed Support Center
• Sponsor: Internet2
  • New expanded agreement now in place
• Major Activities:
  • Support the control plane deployment activities on HOPI and the new Internet2 DCS network
  • Porting the DRAGON GMPLS control plane to interoperate with Ciena CoreDirectors
  • Developing the AST XML interface and GUI tool
  • Involved in DICE control plane activities, GLIF activities, etc.
  • GMPLS workshops (more later)
  • Looking at ports of the DRAGON software to other hardware platforms (Nortel, Cisco)

8. HOPI Status
• HOPI is being reconstituted on its own native waves as the new I2 core comes online (solid red); NYC, WDC, and CHI are currently on their own 10GE waves
• The rest of the HOPI network is connected via MPLS tunnels over Abilene (dotted lines) and will be migrated to native waves over the next six months
[Map: national topology with international connections at LON, AMS, and TOK. Slide credit: Rick Summerhill]

9. Dynamic Circuit Services
• Internet2 evaluated the DRAGON GMPLS control plane software as part of the HOPI project
• The DRAGON software is now the control plane for the "Dynamic Circuit Service" (DCS) offering being rolled out over the new Internet2 core
• This is novel in that it integrates Layer 2 Ethernet capabilities with the TDM SONET core, enabling very high quality global links
• This extends the contiguous reach of the DRAGON GMPLS control plane nationally in a way that allows almost any other regional network in the US to begin trials of this technology
• MAX staff, in conjunction with personnel from ISI-East, completed an important demonstration of this capability earlier this week at the I2 Member Meeting
  • Dynamically allocated bandwidth-guaranteed connections across five domains between Washington DC and Ann Arbor MI
  • Pre-demo testing showed we can establish dozens of such simultaneous connection requests in less than five minutes
• Kudos for tremendous effort above and beyond:
  • Tom Lehman (USC-ISI East)
  • Xi Yang (USC-ISI East)
  • Chris Tracy (MAX)
  • Jarda Flidr (MAX)
  • Fiona Leung (MAX)
• Since the DRAGON software is the product of NSF funding and is open source, we hope to see it deployed in other network environments at the campus, regional, national, and international levels
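The all-or-nothing character of multi-domain circuit setup, as in the five-domain demo described above, can be sketched with a toy model. This is not the DRAGON implementation; the domain names and the `reserve` callback are invented for illustration.

```python
# Toy model of stitching a circuit across administrative domains:
# every domain along the path must reserve its segment, or the whole
# request fails. Not the DRAGON code; names here are invented.
def setup_circuit(path, reserve):
    """Reserve a segment in each domain; fail the circuit if any domain refuses."""
    committed = []
    for domain in path:
        if not reserve(domain):
            # A real control plane would tear down the already-committed
            # segments here (rollback) before reporting failure.
            return None
        committed.append(domain)
    return committed

# Five cooperating domains, in the spirit of the DC to Ann Arbor demo.
path = ["domain-1", "domain-2", "domain-3", "domain-4", "domain-5"]
print(setup_circuit(path, reserve=lambda d: True))
```

The point of the sketch is that end-to-end guarantees require every transit domain to participate, which is why extending the contiguous control plane reach matters.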

10. Application Specific Topologies
• Sponsor: LTS (via UMIACS contract)
• Synopsis:
  • Application Specific Topologies (ASTs) consist of formal XML descriptions of distributed applications
  • Develop the ability to dynamically establish customized network topologies that support survivable network architectures, content distribution networks, virtual organizations, etc.
  • This project will build on basic functionality developed in DRAGON, extending the protocols and middleware to support real-time topology reconfiguration, hierarchical specifications, and "grid" integration
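As a purely illustrative sketch of what a formal XML topology description might look like, one could build a small document programmatically. The element and attribute names below are invented for this example; they are not the project's actual AST schema.

```python
# Hypothetical AST-style topology description, assembled with the
# standard library. Element/attribute names are illustrative only.
import xml.etree.ElementTree as ET

ast = ET.Element("topology", id="evlbi-demo")
for name in ("haystack", "correlator"):
    ET.SubElement(ast, "node", id=name)
link = ET.SubElement(ast, "link", src="haystack", dst="correlator")
link.set("bandwidth", "10G")

# Middleware would parse a description like this and signal the
# corresponding light paths via the control plane.
print(ET.tostring(ast, encoding="unicode"))
```

A description of this form is machine-readable, so the network can be reconfigured in real time by rewriting and resubmitting the document rather than by manual provisioning.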

11. Areas of FY08 Interest and Exploration
• Hybrid networks
  • DRAGON, HOPI, DCS, etc.
  • Early adopter facility for regional fan-out of experimental dynamic services
• Resilient architectures
  • Understanding how to map theory to practice in the R&E environment to construct provably survivable networks supporting business and science processes
  • Handle extensive and rolling outages, continuity of operations, etc.
  • Integrated approach considering large-radius events (multi-enterprise) with zero RTO
  • Driven largely as part of the LTS project and using DRAGON technologies
• Video/visualization and distributed data storage services
  • HD (video and visualization) source, capture, distribution, transcoding
  • How can MAX support a regional distributed video services capability?
  • Understanding the current technologies (and what is missing)
  • Understanding how this can be employed in the R&E community in novel ways
  • Driven largely by the LTS and DRAGON projects
• Ultra high-speed photonic switching
  • Tbps switching and transmission, network architectures, GENI, etc.
  • Driven largely by anticipated regional requirements for long-term grid computing activities and by the DoE Blue Ribbon panel on the future network research agenda

12. So what have we wrought?
• Our focus over the last four years has been on developing a viable model for the engineering, dynamic establishment, and operation of "light path" services, be they waves, VLANs, fiber cross-connects, etc.
• A preliminary, ad hoc, and cursory set of lessons learned from these efforts follows
• Areas where we think we have something to say:
  • Utility of hybrid networks
  • Global models for automated provisioning of circuit services
  • Best common practices for [GMPLS] control plane architecture and implementation
  • Practicality of switching architectures at different layers

13. Growing data universe
• Emerging e-science applications are creating extremely large sensor data sets, computationally intensive analysis workflows covering those sets, and large intermediate data storage (capacity and performance) requirements
  • "It takes 16 hours to store the results of an 8 hour computation on the new Cray at ORNL" (Nagi Rao)
• Observation: e-science requirements are diverging dramatically from the requirements of "normal" network users and applications
  • Mostly in terms of the relationships between computational, sensor, and storage facilities at the high end (few users move petabyte data sets, and fewer still move them to or from their desktop machines)
  • "Normal" services may accelerate if and when video content grows and as HD content becomes both expected and more common
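The ORNL quote implies a simple bottleneck ratio worth making explicit: if storing the output takes twice as long as producing it, the storage path sustains only half the data-generation rate.

```python
# Back-of-the-envelope reading of the ORNL anecdote: an 8-hour
# computation whose results take 16 hours to write out implies the
# storage path runs at half the generation rate.
compute_hours = 8
store_hours = 16
storage_fraction = compute_hours / store_hours
print(f"storage bandwidth = {storage_fraction:.0%} of generation rate")  # 50%
```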

14. The Very Large Array (VLA)
• The VLA (New Mexico) has 27 antennae
  • At 120 Gbps each, that is roughly 3.24 Terabits/sec in aggregate
• While strict real-time requirements are rare, near real-time applications (delays of less than about one minute) are much more common
• These applications have a growing requirement to move large data sets rapidly and predictably
• There is a definite segment of the R&E community that has a genuine and demonstrable need for dedicated network resources in support of its work
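The aggregate figure on the slide is just the product of the per-antenna rate and the antenna count:

```python
# Aggregate VLA sensor data rate from the figures cited above.
antennae = 27
gbps_per_antenna = 120
total_gbps = antennae * gbps_per_antenna
total_tbps = total_gbps / 1000
print(f"{total_gbps} Gbps = {total_tbps:.2f} Tbps")  # 3240 Gbps = 3.24 Tbps
```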

15. Hybrid Network Services
• Circuits *are* useful, in many ways
  • They underlie and complement the IP network
  • Circuit services are becoming part of the common services portfolio available to the R&E user worldwide
• Circuits are difficult to set up manually
  • Automated technologies are necessary to manage and allocate the resources efficiently and to set up such connections quickly and accurately
• We see a growing consensus that dedicated and reservable network resources are necessary for the continued growth of large distributed grid applications
  • These [high end] science applications need to have as much control over the network resources as they have over their other, non-network resources
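The notion of a "reservable" network resource can be sketched with a minimal admission-control model: a link with fixed capacity and time-windowed bandwidth reservations. This toy scheduler is invented for illustration and is not taken from DRAGON or the DCS software.

```python
# Minimal sketch of a reservable link: requests are admitted only if
# the committed bandwidth stays under capacity for the whole window.
# Invented for illustration; not a real scheduler.
class Link:
    def __init__(self, capacity_gbps):
        self.capacity = capacity_gbps
        self.reservations = []  # (start_hour, end_hour, gbps)

    def _peak_usage(self, start, end):
        # Worst-case committed bandwidth anywhere in [start, end).
        points = {start} | {s for s, _, _ in self.reservations if start <= s < end}
        return max(sum(g for s, e, g in self.reservations if s <= t < e)
                   for t in points)

    def reserve(self, start, end, gbps):
        if self._peak_usage(start, end) + gbps > self.capacity:
            return False  # would oversubscribe the link
        self.reservations.append((start, end, gbps))
        return True

link = Link(capacity_gbps=10)
print(link.reserve(0, 4, 6))   # True: fits under 10G
print(link.reserve(2, 6, 6))   # False: would exceed 10G during hours 2-4
print(link.reserve(4, 8, 6))   # True: the first reservation has ended by hour 4
```

Even this toy version shows why automation matters: admission decisions depend on every overlapping reservation, which is impractical to track by hand across many links and domains.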

16. Other observations
• Ethernet seems to be the most commonly desired termination framing, presumably because most end systems and users understand it and it is inexpensive
• In the core, switched Ethernet provides very little performance advantage over Layer 3 forwarding capabilities
  • Both use variable-size packets that can be launched asynchronously, making QoS provisioning difficult at both layers
• Ethernet within {SONET/SDH/OTN, lambda} provides more bounded and predictable performance than native VLANs or aggregated transport
• Port-to-port Ethernet "light paths" provide good performance and are easy to provision in the campus/metro space
• SONET/SDH is still the champ on long-haul (particularly global) connections
  • Its synchronous nature provides very reliable and well-bounded jitter and loss characteristics
  • New features in SONET (e.g. VCAT/LCAS/GFP) make it a very powerful transport technology
• Significant and growing interest in InfiniBand as a handoff framing
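One concrete reason VCAT is powerful is that a client rate such as Gigabit Ethernet can be matched with a right-sized group of smaller SONET payloads instead of a whole concatenated container. The payload capacities below are the standard SONET figures; the sizing function itself is just the arithmetic.

```python
import math

# Standard SONET payload capacities (Mbps), from the SONET frame
# structure rather than from the slide.
STS1_PAYLOAD_MBPS = 49.536    # STS-1 SPE minus path overhead
STS3C_PAYLOAD_MBPS = 149.76   # STS-3c SPE minus path overhead

def vcat_members(client_mbps, member_mbps):
    """Smallest virtual concatenation group that carries the client rate."""
    return math.ceil(client_mbps / member_mbps)

gbe = 1000.0  # Gigabit Ethernet client rate
print(f"GbE fits in STS-1-{vcat_members(gbe, STS1_PAYLOAD_MBPS)}v")    # 21 members
print(f"GbE fits in STS-3c-{vcat_members(gbe, STS3C_PAYLOAD_MBPS)}v")  # 7 members
```

STS-3c-7v is in fact the common GFP mapping for GbE; LCAS then lets the group be resized without tearing the circuit down.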

17. Use models emerging for hybrid services
• Use of circuit services is seeing the most interest from projects wishing to link collaborating sites (not end systems per se)
  • For instance: a researcher with growing data sets wants to link those data sets to other data repositories, colleagues, or computational centers
  • Host-to-host circuits are often unnecessary if a local switch has a dedicated path to the other end
• Almost all such hybrid environments will still use IP for generic Internet access as well as for packet transport over the light paths

18. Conventional routed IP services / Project Specific Networks
[Diagram: a national/international core with DCS linking RON East, RON Central, and RON South and their attached campuses; User Cluster A, User Cluster B, a Private Network, and a User Data Repository]

19. Grid Applications
[Diagram: a web portal drives a workflow (Step 1, Step 2, Step 3) spanning an input storage repository, a compute cluster, a federated cluster, and an output storage repository]

20. Application Specific Collaboratories: the E-VLBI poster child example
[Diagram: Mark 5 analysis stations connect through VLSRs across the global R&E hybrid infrastructure to a Mark 5 correlator/compute cluster]

21. E-VLBI Application Specific Network (a more sophisticated approach: "Tivo" mode)
[Diagram: sensors feed globally distributed, unified storage clusters; Mark 5 stations and VLSRs connect across the global R&E hybrid infrastructure to a Mark 5 correlator/compute cluster and analysis station]

22. A global E-VLBI Application Specific Network (Zen mode)

23. Ongoing Related Activities
• Internet2 NewNet Technical Advisory Committee
  • Sobieski: "non-Layer 3" services
  • Magorian: commodity peering services
• ESnet panel (Sobieski)
  • Charge: "weigh & review the organization, performance, expansion, and effectiveness of the current operations of ESnet. .. Consider the proposed evolution of ESnet, its appropriateness and comprehensiveness in addressing the data communication needs .. that will enable scientists nationwide to extend the frontiers of science. .. make suggestions and recommendations on the appropriateness and comprehensiveness of the networking research .. with a view towards meeting the long-term networking needs .."
  • Report due towards the end of the summer

24. Dynamic Circuit Services Workshop
• Purpose:
  • Disseminate technical expertise in the design and direct deployment of GMPLS-based dynamic circuit networks
  • Present current state-of-the-art control plane architecture concepts, issues, and ongoing efforts
  • Provide practical and hands-on experience building a functional DCS network
• Intended audience:
  • Network engineering personnel
  • Those responsible for defining regional and/or campus network services and architecture
  • Engineering teams responsible for implementation and support
• Two-day workshop
  • Brief overview of the GMPLS technologies
  • Two intense days of configuring, testing, and debugging increasingly sophisticated hybrid network environments

25. Build this in two days:
[Diagram: the workshop target network, showing the intra-domain control plane, the inter-domain control plane, and the data plane]

26. Schedule
• First workshop: New York in [early] March
  • NYU, Brookhaven, and NYSERnet attendees; very successful!
• Follow-on workshops:
  • NYSERnet: New York City, Mar 14-15 (complete)
  • DRAGON internal mini-workshop, April 11-12 (complete)
  • MAX: Washington DC, May 2-3
  • NASA Ames: Mountain View, May 30-31
  • [Great Plains? Kansas City] Jun 27-28
  • NCREN: Research Triangle Park, Aug 1-2
  • [Univ of Wisc? Madison] Sep ?
  • 4th quarter TBD; SC07 activities will deplete personnel
  • Resume with Joint Techs, Jan 2008
• Since the instructors and equipment are based at MAX, we can hold additional workshops at MAX fairly easily if necessary or desired
• Contact: Jerry Sobieski at MAX
• FFI: http://events.internet2.edu/2007/DCS/

27. Dynamic light path technology is still maturing…
• We are still in the early adopter stage of these technologies (!)
  • GMPLS protocols continue to evolve
  • Hardware capabilities are evolving very rapidly to support them
  • The R&E community's understanding of and experience with DCS will continue to grow
• But it is usable; we want to see these capabilities used for real work as much as possible
  • We want to create a community of users that will push the core capabilities, operational management, reliability, robustness, usability, and applicability of these technologies
• Contact MAX staff if your staff or faculty would like to participate in these activities

28. Thanks!
• Comments, input, and thoughts gladly encouraged and accepted
• Jerry Sobieski
  • 301-346-1849 (mobile)
  • 301-314-6662 (office)
  • jerrys(at)maxgigapop.net
