
OMNI-View Lightpath map



Presentation Transcript


  1. Failover from Rout-D to Rout-A (SURFnet Amsterdam, Internet2 NY, CANARIE Toronto, StarLight Chicago)

  2. OMNI-View Lightpath map

  3. DARPA DANCE Demo (May 31st, 2002)
     • A notional view of an EvaQ8 end-to-end network
     • Automatic optical path setup on a disaster trigger
     • Sample measurements
     [Diagram: crisis-center and safe-end sites (OG-1, OG-2, OG-3), each with an EvaQ8 switch, an L2-L7 switch, and an Ethernet switch, interconnected through a MEMS photonic switch and a 10 GE Ethernet switch; a disaster event/environment sensor in the disaster area sends control messages over a 100 Mbps link to the ASTN control plane.]

  4. Sample measurements
     • Measurements taken with clocks synchronized using NTP
     • Layer 1 link setup and IP QoS reconfiguration took around 1.2 seconds
     • The VLANs/Spanning Tree took an additional 12 seconds to converge
     • Further work with larger networks is needed
     Timeline components from the EvaQ8 start (in ms, not to scale):
     • Disaster trigger and processing: < 1 ms
     • Signal and response: 1.4 ms
     • Inter-process communication: 3 ms
     • Photonic MEMS control: 12 ms
     • Ethernet switch QoS control: 1150 ms
     • VLAN/Spanning Tree convergence: 12 seconds

  5. OMNInet
     • 8x8x8λ scalable photonic switch
     • Trunk side: 10G DWDM
     • OFA on all trunks
     • ASTN control plane
     [Diagram: OMNInet testbed topology with photonic nodes at Sheridan, W Taylor, Lake Shore, and S. Federal, each combining a 10 GE photonic node, Optera 5200 10 Gb/s TSPRs, a 10/100/GE PP 8600 switch, and Optera 5200 OFAs; fiber spans of 5.3-24 km; connections to StarLight (interconnect with other research networks), grid clusters and grid storage, and the EVL/UIC, TECH/NU, and LAC/UIC OM5200 sites; 10GE LAN PHY (Aug 04); WAN PHY interfaces; 1310 nm 10 GbE.]

  6. Data Management Service
     • Uses standard FTP (Jakarta Commons FTP client)
     • Implemented in Java
     • Uses OGSI calls to request network resources
     • Currently uses Java RMI for other remote interfaces
     • Uses the NRM to allocate lambdas
     • Designed for future scheduling
     [Diagram: a client application drives the DMS, which asks the NRM for a λ; an FTP client at the data receiver then pulls the data from an FTP server at the data source over the allocated path.]
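
A minimal sketch of how a data management service along these lines might sequence a transfer in Java, assuming a hypothetical NetworkResourceManager facade for the NRM and using the Apache Commons Net FTPClient (the successor to the Jakarta Commons FTP client named above). Everything except the FTPClient API is illustrative rather than taken from the original system.

    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;
    import java.io.FileOutputStream;
    import java.io.OutputStream;

    // Hypothetical NRM facade; the real service used OGSI calls and Java RMI.
    interface NetworkResourceManager {
        String allocateLambda(String sourceHost, String destHost); // returns a path ID
        void releaseLambda(String pathId);
    }

    public class DataManagementService {
        private final NetworkResourceManager nrm;

        public DataManagementService(NetworkResourceManager nrm) {
            this.nrm = nrm;
        }

        /** Allocate a lightpath, fetch the file over standard FTP, then release the path. */
        public void transfer(String ftpHost, String remoteFile, String localFile) throws Exception {
            String pathId = nrm.allocateLambda("data-source", "data-receiver"); // illustrative endpoints
            FTPClient ftp = new FTPClient();
            try {
                ftp.connect(ftpHost);
                ftp.login("anonymous", "dms@example.org");
                ftp.setFileType(FTP.BINARY_FILE_TYPE);
                try (OutputStream out = new FileOutputStream(localFile)) {
                    ftp.retrieveFile(remoteFile, out); // the bulk data transfer over the lambda
                }
                ftp.logout();
            } finally {
                if (ftp.isConnected()) {
                    ftp.disconnect();
                }
                nrm.releaseLambda(pathId); // always give the lightpath back
            }
        }
    }

The point of the structure is that the lambda is held only for the duration of the FTP session and is always released, mirroring the allocate/transfer/release cycle measured on the later timing slides.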

  7. Network Resource Manager
     [Diagram: a using application (the DMS) reaches the Network Resource Manager through an end-to-end-oriented allocation interface, while a scheduling/optimizing application uses a segment-oriented topology and allocation interface; beneath the NRM, an OMNInet data interpreter drives the OMNInet network manager (ODIN), with further network-specific data interpreters and network-specific network managers planned (items in blue are planned).]
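
The two interfaces named on this slide could be rendered roughly as below; the Java type and method names are assumptions for illustration, not the project's actual API.

    import java.util.Date;
    import java.util.List;

    /** End-to-end-oriented allocation interface: what a using application such as the DMS calls. */
    interface EndToEndAllocation {
        /** Request a lightpath between two endpoints within a time window; returns a reservation handle. */
        String requestPath(String endpointA, String endpointB,
                           long durationSeconds, Date startAfter, Date endBefore);
        void releasePath(String reservationId);
    }

    /** Segment-oriented topology and allocation interface: what a scheduling/optimizing application uses. */
    interface SegmentAllocation {
        List<String> listSegments(); // topology discovery
        boolean allocateSegment(String segmentId, Date from, Date to);
        void releaseSegment(String segmentId);
    }

    /**
     * The Network Resource Manager offers both views and delegates to a
     * network-specific manager (ODIN for OMNInet) through a data interpreter.
     */
    abstract class NetworkResourceManagerSkeleton implements EndToEndAllocation, SegmentAllocation {
    }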

  8. 20GB File Transfer

  9. Initial performance measure: end-to-end transfer time
     Timeline from the file transfer request arriving to the transfer completing and the path being released:
     • Path allocation request: 0.5 s
     • ODIN server processing: 3.6 s
     • Path ID returned: 0.5 s
     • Network reconfiguration: 25 s
     • FTP setup time: 0.14 s
     • Data transfer (20 GB): 174 s
     • Path deallocation request: 0.3 s
     • ODIN server processing: 11 s
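
Taking these figures at face value (the pairing of stages and times is a reconstruction of the slide's timeline), a quick calculation shows how the signalling and reconfiguration overhead compares with the 20 GB transfer itself:

    public class TransferOverhead {
        public static void main(String[] args) {
            // Times in seconds, as read from the timeline above (a reconstruction).
            double allocation   = 0.5 + 3.6 + 0.5; // request + ODIN processing + path ID returned
            double reconfig     = 25.0;            // network reconfiguration
            double ftpSetup     = 0.14;
            double dataTransfer = 174.0;           // 20 GB payload
            double deallocation = 0.3 + 11.0;      // request + ODIN processing

            double overhead = allocation + reconfig + ftpSetup + deallocation;
            double total    = overhead + dataTransfer;
            System.out.printf("overhead %.1f s of %.1f s total (%.0f%%)%n",
                              overhead, total, 100 * overhead / total);
            // Prints roughly: overhead 41.0 s of 215.0 s total (19%)
        }
    }

Roughly 41 s of the ~215 s end-to-end time is path handling rather than data movement, which is the amortization argument made on the later overhead slides.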

  10. Transfer transaction demonstration timeline (6-minute cycle time)
     [Timeline over roughly 0-660 s: Customer #1 and Customer #2 alternately accumulate transactions and run transfers, with the path allocated before each transfer and de-allocated after it.]

  11. From 100 Days to 100 Seconds

  12. Overall system
     [Diagram: mouse applications on top of an apps/middleware stack (GT3, OGSI, NMI, SRB, a meta-scheduler, and compute-, data-, and network-grid components) over the Lambda-Grid; our contribution is the DTS and the OGSI-fied NRS, sitting between the middleware and the control plane, the resource managers, and the underlying network(s) and resources.]

  13. DTS - NRS
     [Diagram of the DTS-NRS interaction: the DTS side shows an apps/middleware interface, data calculation, a data service, a replica service, proposal evaluation, scheduling logic, a proposal constructor and proposal evaluator, and GT3, NMI, and NRS interfaces; the NRS side shows a DTS interface, net calculation, a scheduling algorithm and scheduling service, a topology map, network allocation, and an optical control interface.]

  14. Layered architecture
     [Diagram: applications (BIRN Mouse application, BIRN toolkit, BIRN workflow); collaborative middleware (Lambda Data Grid, NMI, GridFTP, WSRF, OGSA); a resource layer (resource managers, NRS, optical control, ODIN); a connectivity layer (TCP/HTTP, UDP, IP, optical protocols); and a fabric layer (storage, computation, databases, optical hardware, OMNInet lambdas).]

  15. Control Interactions
     [Diagram: the scientific workflow, apps middleware, and Data Grid drive the DTS in the data grid service plane; the DTS calls the NRS, which with NMI and the resource managers forms the network service plane; optical control forms the network optical control plane; storage, compute, and database resources exchange data over lambdas (λ1 ... λn) in the data transmission plane.]

  16. NRS Interface and Functionality
     // Bind to an NRS service:
     NRS = lookupNRS(address);

     // Request cost function evaluation
     request = {pathEndpointOneAddress, pathEndpointTwoAddress,
                duration, startAfterDate, endBeforeDate};
     ticket = NRS.requestReservation(request);

     // Inspect the ticket to determine success, and to find the currently scheduled time:
     ticket.display();

     // The ticket may now be persisted and used from another location
     NRS.updateTicket(ticket);

     // Inspect the ticket to see if the reservation's scheduled time has changed,
     // or verify that the job completed, with any relevant status information:
     ticket.display();
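
In Java, the same interaction could look like the sketch below; the NRS, ReservationRequest, and Ticket types mirror the pseudocode above and are assumed for illustration rather than taken from a published API.

    import java.util.Date;

    // Types assumed for illustration, mirroring the slide's pseudocode.
    interface NRS {
        Ticket requestReservation(ReservationRequest request);
        void updateTicket(Ticket ticket);
    }

    record ReservationRequest(String pathEndpointOneAddress,
                              String pathEndpointTwoAddress,
                              long durationSeconds,
                              Date startAfterDate,
                              Date endBeforeDate) { }

    interface Ticket {
        void display(); // show success/failure, scheduled time, completion status
    }

    public class NrsClientExample {
        // Stand-in for the slide's lookupNRS(address) binding step.
        static NRS lookupNRS(String address) { throw new UnsupportedOperationException("stub"); }

        public static void main(String[] args) {
            NRS nrs = lookupNRS("nrs://example");                          // bind to an NRS service
            ReservationRequest request = new ReservationRequest(
                    "host-a", "host-b", 600,
                    new Date(), new Date(System.currentTimeMillis() + 3_600_000L)); // illustrative window
            Ticket ticket = nrs.requestReservation(request);
            ticket.display();         // did it succeed, and when is it scheduled?
            nrs.updateTicket(ticket); // the ticket may be refreshed from another location
            ticket.display();         // has the scheduled time changed, or did the job complete?
        }
    }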

  17. Overheads - Amortization
     For data-intensive applications the path-setup overhead becomes insignificant.
     [Chart: overhead amortization for a 500 GB transfer.]

  18. Network Scheduling – Simulation Study
     [Chart: blocking probability for under-constrained requests.]

  19. DWDM-RAM Service Control Architecture
     [Diagram: a data grid service plane (grid service requests handled by service control) above a network service plane (network service requests handled by ODIN service control); ODIN drives the OMNInet control plane and the optical control network through UNI-N interfaces; data path and connection control set up lambdas (λ1 ... λn) in the data transmission plane, connecting data centers through an L2 switch, an L3 router, and a data storage switch.]

  20. Path allocation overhead as a percentage of the total transfer time
     The knee point shows the file size beyond which the overhead is insignificant.
     [Chart: overhead curves marked at 1 GB, 5 GB, and 500 GB.]
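
A rough model of that knee: with a fixed setup/teardown cost and a fixed line rate, the overhead fraction falls as the file grows. The sketch below assumes a 41 s combined allocation and release overhead (from the earlier end-to-end measurement) and an effective throughput of about 1 Gbit/s; both numbers are illustrative.

    public class KneePoint {
        public static void main(String[] args) {
            double setupSeconds = 41.0;       // path allocation + release (illustrative)
            double rateBytesPerSec = 1e9 / 8; // assumed ~1 Gbit/s effective throughput
            double[] fileSizesGB = {1, 5, 20, 100, 500};

            for (double gb : fileSizesGB) {
                double transferSeconds = gb * 1e9 / rateBytesPerSec;
                double overheadPct = 100 * setupSeconds / (setupSeconds + transferSeconds);
                System.out.printf("%6.0f GB -> overhead %5.1f%%%n", gb, overheadPct);
            }
            // Overhead dominates for small files and becomes negligible for very large ones.
        }
    }

Under those assumptions the overhead is over 80% for a 1 GB file, about 20% at 20 GB, and roughly 1% at 500 GB, which is the knee behaviour the chart illustrates.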

  21. File transfer times

  22. Fixed Bandwidth List Scheduling
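
The slide gives only the title, so the following is a generic illustration of what fixed-bandwidth list scheduling usually means (requests taken in list order, each placed at the earliest time the single full-bandwidth path is free within the request's window), not necessarily the algorithm used in the study.

    import java.util.ArrayList;
    import java.util.List;

    /** A transfer request with an earliest start, a deadline, and a required duration (seconds). */
    record Request(String name, long earliestStart, long deadline, long duration) { }

    public class FixedBandwidthListScheduler {
        /**
         * Greedy list scheduling on a single fixed-bandwidth lightpath: take requests
         * in list order and give each the earliest slot that starts after the previous
         * assignment and still fits inside the request's window.
         */
        public static List<String> schedule(List<Request> requests) {
            List<String> plan = new ArrayList<>();
            long pathFreeAt = 0; // when the lambda next becomes free
            for (Request r : requests) {
                long start = Math.max(pathFreeAt, r.earliestStart());
                if (start + r.duration() <= r.deadline()) {     // fits inside the window
                    plan.add(r.name() + " @ " + start + "s");
                    pathFreeAt = start + r.duration();
                } else {
                    plan.add(r.name() + " rejected (blocked)"); // counts toward blocking probability
                }
            }
            return plan;
        }

        public static void main(String[] args) {
            System.out.println(schedule(List.of(
                    new Request("A", 0, 400, 300),
                    new Request("B", 0, 500, 300),
                    new Request("C", 100, 900, 300))));
            // Prints: [A @ 0s, B rejected (blocked), C @ 300s]
        }
    }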
