
ESCC Meeting July 21-23, 2004 Network Research Program Update Thomas D. Ndousse Program Manager



Presentation Transcript


  1. Mathematical, Informational, and Computational Sciences (MICS Division). ESCC Meeting, July 21-23, 2004. Network Research Program Update. Thomas D. Ndousse, Program Manager

  2. Program Goals • What's new • New SciDAC and MICS Network Research Projects • Ultra-Science Network Testbed – base funding • ESnet MPLS Testbed – base funding • Application Pilot Projects (Fusion Energy, Biology, Physics) • GMPLS Control Plane • GridFTP Lite (Generalized File Transfer Protocol) • Transport protocols for switched dedicated links • Cyber security: IDS and group security • Data grid wide area network monitoring for LHC • Gap: network-enabled storage systems • Leadership Class National Supercomputer • Budget reduction in the FY-04 & FY-05 budgets • SC Network PI meeting in late September, 2004

  3. Revised Program Focus • Previous Focus • R&D on fundamental networking issues • Single investigators and small groups of investigators • Limited emphasis on technology transfer and integration • Limited emphasis on network, middleware, and applications integration • New Focus • Applied research, engineering, and testing • Experimental networking using UltraNet and the MPLS testbed • Integrated application, middleware, and network prototypes • Leadership-class supercomputing • Impact on network research • Impact on research testbeds • Impact on inter-agency network coordination activities

  4. Network Research Program Elements • Program Elements • R&D, E: Research, Development, and Engineering • ANRT: Advanced Network Research Testbeds • ECPI: Early Career Principal Investigators • SBIR: Small Business Innovation Research

  5. FY-03/04 Network Research Program Budget

  6. Implementation of Office of Science Networking Recommendations – I (Very High-Speed Data Transfers) • Data, data, data, data everywhere! • Many science areas such as high energy physics, computational biology, climate modeling, astrophysics, etc., predict a need for multi-Gbit/sec data transfer capabilities in the next 2 years • Program Activities • Scalable TCP protocol enhancements for shared networks (see the sketch below) • Scalable UDP for shared networks and dedicated circuits • Alternative TCP/UDP transport protocols • Bandwidth on-demand technologies • GridFTP Lite • Ultra high-speed network components
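
As a rough illustration of the transport tuning these activities target, the sketch below sizes TCP socket buffers to the bandwidth-delay product of a long, fast path so a single stream can keep the pipe full. The link rate and round-trip time are illustrative assumptions, not values from the program; the production work referenced on this slide used dedicated protocol implementations rather than this minimal example.

```python
# Minimal sketch: sizing TCP socket buffers to the bandwidth-delay product
# (BDP) of a high-speed, high-latency path. All values are illustrative.
import socket

LINK_RATE_BPS = 10 * 10**9   # assumed 10 Gbps path
RTT_SECONDS = 0.070          # assumed 70 ms round-trip time

# BDP in bytes: the amount of data that must be "in flight" to fill the pipe.
bdp_bytes = int(LINK_RATE_BPS / 8 * RTT_SECONDS)   # ~87.5 MB for these values

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request send/receive buffers on the order of the BDP; the operating system
# may clamp these to its configured maximums (e.g. net.core.wmem_max on Linux).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)

print(f"Requested {bdp_bytes / 2**20:.1f} MiB socket buffers")
```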

  7. Implementation of Office of Science Networking Recommendations – II (Diverse SC Network Requirements) • Problem • Many science areas such as high energy physics, computational biology, climate modeling, astrophysics, etc., predict a need for multi-Gbit/sec data transfer capabilities in the next 2 years • Program Vision • A layered architecture: high-end science applications over high-performance middleware, over transport (TCP, TCP variants, UDP variants, others), over a control and signaling plane and a logical network layer spanning packet-switched, circuit-switched, and hybrid-switched links on top of the optical layer

  8. Implementation of Office of Science Networking Recommendations – III • Advanced Research Network (Ultra-Science Network, 20 Gbps) • Experimental optical inter-networking • On-demand bandwidth/DWDM circuits • Ultra high-speed protocol development/testing • GMPLS • High-Impact Science Network (ESnet QoS/MPLS Testbed, 5 sites) • Connects a few high-impact science sites • Ultra high-speed IP network technologies • Reliable and secure services • QoS/MPLS for on-demand bandwidth • Production Network (ESnet) • Connects all DOE sites • 7x24 and highly reliable • Advanced Internet capability • Predominantly best-effort

  9. Impact of MPLS and Ultra-Science Network Testbeds • Category B Sites – sites w/o local fiber arrangements (T3 to OC-12) • BNL --- Tier 1 - ATLAS • JLab • GA • Princeton • MIT • Category A Sites – sites w/ local fiber arrangements • FNAL OC-12/OC-192 --- Tier 1 - CMS • ANL OC-12/OC-192 • ORNL OC-12/OC-192 --- Leadership Computing • PNNL OC-12/OC-192 --- EMSL Computing • NERSC OC-48/OC-192 --- Flagship Computing • LBL OC-48/OC-192 • SLAC OC-12/OC-192 --- BABAR Data Source • Use UltraNet to link sites with local fiber connectivity • Develop dynamic provisioning technologies to manage DWDM circuits • Develop and test advanced transport protocols for high-speed data transfers over DWDM links • Use MPLS to establish LSPs linking sites with high-impact applications • Use MPLS to provide guaranteed end-to-end QoS for high-impact applications • Link LSPs with dynamically established GMPLS circuits

  10. Advanced Research Network Testbeds (QoS + MPLS) • Goal • Develop advanced network technologies that provide guaranteed, on-demand, end-to-end bandwidth to selected high-impact science applications • Technical Activities • Deployment of site QoS technologies at selected DOE sites (see the sketch below) • Integration of QoS with local grid infrastructure • Deployment of MPLS in the ESnet core network • Integration of MPLS with GMPLS • Integration of on-demand bandwidth technologies with applications • Target Science Applications • High Energy Physics (CMS & ATLAS) – high-speed data transfer • Fusion Energy – remote control of scientific instruments • Nuclear Physics – remote collaborative visualization
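
One building block for site QoS deployments of this kind is having applications (or site edge devices) mark their traffic so routers can classify it into a guaranteed-bandwidth class. The sketch below marks a socket's traffic with a DiffServ code point; the choice of code point and the idea of marking at the host rather than at the site border are illustrative assumptions, not the program's prescribed design.

```python
# Minimal sketch: marking application traffic with a DiffServ code point
# (DSCP) so that site QoS policy can classify it. The DSCP value is illustrative.
import socket

EF_DSCP = 46             # "Expedited Forwarding" code point (RFC 3246)
tos_byte = EF_DSCP << 2  # DSCP occupies the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# IP_TOS sets the (former) Type-of-Service byte on outgoing packets; routers
# configured for DiffServ can then map EF-marked traffic to a priority queue.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)
```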

  11. Initial MPLS Deployment in ESnet • Site technology: QoS • Core technologies: MPLS and GMPLS • [Diagram: QoS/MPLS paths across the ESnet core linking CERN, PNNL, StarLight, NERSC, BNL, FNAL, SLAC, Caltech, JLab, ORNL, and GA, with QoS deployed at the sites, MPLS in the core, and GMPLS toward the research testbed]

  12. Ultra-Science Network Testbed: Topology • Backbone upgrade: 20 Gbps • Major Nodes • StarLight/FNAL • SOX/ORNL • Seattle/PNNL • Sunnyvale/SLAC • Sunnyvale/Caltech • [Topology diagram: DOE national labs and DOE university partners (CERN, PNNL, StarLight, BNL, NERSC, LBL, SLAC, JLab, FNAL, Caltech, SOX, ORNL) interconnected by 10 Gbps ESnet links and 10 Gbps UltraNet links, with one link under discussion]

  13. Ultra-Science Network Testbed: Activities • Dynamic Provisioning • Development of data circuit-switched technologies • IP control plane based on GMPLS • Integration of QoS, MPLS, and GMPLS • Inter-domain control plane signaling • Bandwidth on-demand technologies • Ultra High-Speed Data Transfer Protocols • High-speed transport protocols for dedicated channels (see the sketch below) • High-speed data transfer protocols for dedicated channels • Layer data multicasting • Ultra High-Speed Cyber Security • Ultra high-speed IDS • Ultra high-speed firewalls and alternatives • Control plane security
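
On a dedicated, circuit-switched channel there is no competing traffic, so transport work in this area typically replaces TCP-style congestion control with simple rate pacing matched to the circuit capacity. The sketch below is a toy rate-paced UDP sender under that assumption; the endpoint, circuit rate, and packet size are hypothetical, and it omits the loss recovery and receiver feedback that real protocols for dedicated circuits add.

```python
# Toy sketch: rate-paced UDP sending over a dedicated circuit. Pacing stands in
# for congestion control because the circuit carries no competing traffic.
# The endpoint, rate, and packet size are illustrative assumptions.
import socket
import time

DEST = ("receiver.example.org", 9000)          # hypothetical endpoint
RATE_BPS = 1 * 10**9                           # assumed 1 Gbps circuit
PACKET_BYTES = 8192
PACKET_INTERVAL = PACKET_BYTES * 8 / RATE_BPS  # seconds between packets

def send_paced(data: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    next_send = time.monotonic()
    for offset in range(0, len(data), PACKET_BYTES):
        sock.sendto(data[offset:offset + PACKET_BYTES], DEST)
        next_send += PACKET_INTERVAL
        delay = next_send - time.monotonic()
        if delay > 0:            # sleep only when ahead of the pacing schedule
            time.sleep(delay)

if __name__ == "__main__":
    send_paced(bytes(50 * 2**20))  # send 50 MiB of zeroes as a demo payload
```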

  14. UltraNet Funded Projects and Laboratory Initiatives • UltraNet/GMPLS Institutions • FNAL: Fiber to StarLight/UltraNet • ORNL: Fiber to Atlanta and StarLight/UltraNet • SLAC: Fiber to Sunnyvale/UltraNet (under discussion) • PNNL: Fiber connection to Seattle/UltraNet • Caltech: DWDM link to Sunnyvale/UltraNet • UltraNet QoS/MPLS • Fusion Energy: GA, NERSC, Princeton • ATLAS Project: BNL, CERN, U. Michigan • CMS Project: FNAL, CERN, UCSD • Funded Projects: Application Development • FNAL: Explore very high-speed transfer of LHC data on UltraNet • PNNL: Remote visualization of computational biology on UltraNet • ORNL: Astrophysics real-time data visualization on UltraNet & CHEETAH • GA: Wide area network QoS using MPLS • BNL: Exploring QoS/MPLS for LHC data transfers

  15. Collaborations • Inter-Agency Collaboration • CHEETAH (NSF): Dynamic provisioning – control plane interoperability; application: astrophysics (TSI) • DRAGON (NSF): Dynamic provisioning – control plane interoperability; all-optical network technology • OMNINet (NSF): Dynamic provisioning – control plane interoperability; all-optical network technology • UltraNet (DOE): Dynamic provisioning – control plane interoperability; hybrid circuit/packet-switched network • HOPI (Internet2) • Collaboration Issues • Control plane architecture and interoperability • Optical service definitions and taxonomy • Inter-domain circuit exchange services • GMPLS and MPLS (ESnet & Internet2) integration • Testing of circuit-based transport protocols • Integration of network-intensive applications • Integration with Grid applications

  16. UltraNet Operations and Management • Management Team • UltraNet Engineering • ESnet Engineering Rep • ESCC Rep • Engineering Team • UltraNet Engineering • ESnet Engineering representatives • Application Developers representatives • Research Team – Awards Pending • Network Research PIs • Application Prototyping PIs • Other Research Networks • Management Responsibilities * • Prioritize experiments on UltraNet • Schedule testing • Develop technology transfer strategies * Needs to be integrated into the Office of Science networking governance model articulated in the roadmap workshop

  17. Network Technologies for Leadership-Class Supercomputing • Leadership-class supercomputer being built at ORNL • A national resource • Access from universities, national labs, and industry is a major challenge • Impact of the leadership-class supercomputer on Office of Science networking • Network technologies for the leadership-class supercomputer • Inter-agency networking coordination issues

  18. Network Technologies for Leadership-Class Supercomputing • Leadership-class supercomputer being built at ORNL • A national resource • Access from universities, national labs, and industry is a major challenge • Impact of the leadership-class supercomputer on Office of Science networking • Network technologies for the leadership-class supercomputer • Inter-agency networking coordination issues

  19. Computing and Communications: the "impedance" mismatch between computation and communication • [Chart, 1960-2010: supercomputer peak performance from the Cray 1 (133 MF) and Cray Y-MP (400 MF) through the Intel Paragon (150 GF), ASCI Blue Mountain (3 TF), ASCI White (12 TF), and the Earth Simulator (37 TF), plotted against backbone link speeds from 10/100 Mbps Ethernet, T1, and T3 through 0.15, 0.6, 2.5, 10, and 40 Gbps SONET to a projected 100 Gbps backbone, with achievable end-to-end application performance lagging well behind both] • Rule of thumb: the bandwidth must be adequate to transfer a Petabyte/day, roughly 200 Gbps, which is NOT on the evolutionary path of the backbone, much less of application throughput (a quick arithmetic check follows)
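
The petabyte-per-day rule of thumb can be checked with a quick back-of-the-envelope calculation, shown below. The result is roughly 93 Gbps of sustained throughput; the factor of about two up to the 200 Gbps figure quoted on the slide presumably allows for protocol overhead and the gap between peak link rate and achievable application throughput (an interpretation, not stated in the original).

```python
# Back-of-the-envelope check of the "Petabyte/day" bandwidth rule of thumb.
petabyte_bits = 1e15 * 8        # one petabyte expressed in bits
seconds_per_day = 24 * 3600

sustained_gbps = petabyte_bits / seconds_per_day / 1e9
print(f"1 PB/day requires ~{sustained_gbps:.0f} Gbps sustained")  # ~93 Gbps
```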

  20. Q&A
