Outline
• Motivation
• Simulation Study
• Scheduled OFS
• Experimental Results
• Discussion

Optical Flow Switching Motivation
• OFS reduces the amount of electronic processing by switching long sessions at the WDM layer
• Lower costs, reduced delays, increased switch capacity
• Provides specific QoS for advanced services
OFS Motivation (cont.)
[Figure: histograms of total bytes and number of flows vs. flow size (1 KB to 100 MB), each split into an electronic domain and an optical domain]
• Internet traffic displays a "heavy-tail" distribution of connections
• More efficient optics => more transactions in the optical domain (the electronic/optical boundary, the red line, moves left)
Optical Flow Switching Study
• Short-duration optical connections
  • Access area, wide area
• Network architecture issues
  • Connection setup
  • Route/wavelength assignment
• Goal: efficient use of network resources, i.e. high throughput
• Previous work: "probabilistic" approaches
  • Difficulty: high arrival rates lead to high blocking probability
  • Problem: lack of timely network state information
• Our proposed solution: use of timing information in the network
  • Schedule connections
  • Gather timely network state information
• This demonstration
  • Demonstrate flow switching
  • Demonstrate viability of timing and scheduling connections
  • Investigate key sources of overhead
  • High efficiency
Connection Setup Investigation
• Key issue: how to learn optical resource availability?
  • Distribution problem
  • The "wavelength continuity" constraint makes it worse
• Previous work
  • Addresses issues one at a time
  • Assumes perfect network state information
  • Will these results be useful for an ONRAMP or WAN implementation?
• This work
  • Assesses the effects of distributed network state information
  • Models some current proposals: MPλS, ASON
Methodology
• Design distributed approaches
  • Combined routing, wavelength assignment
  • Connection setup
• Baseline flow switching architecture
  • Requested flows from user to user
  • Durations on the order of seconds
  • All-optical
• Simulate approaches on a WAN topology
  • End-to-end latency ("time of flight" only)
  • Approaches: Ideal, Tell-and-Go, Reverse Reservation
• Assess performance versus the idealized approach
  • Blocking probability
Ideal Approach Illustration
Assume a flow is requested from A to B.
[Figure: four-node network (A, B, C, D) with bidirectional multi-fiber links and λ-changers at each node; a "tell" control packet precedes the optical flow along the chosen route]
• Network infrastructure: λ-changers, bidirectional multi-fiber links
• LLR routing, connection setup
Tell-and-Go Approach Illustration
Assume a flow is requested from A to B.
[Figure: same four-node network; link-state updates distribute per-link available-wavelength sets (e.g. {2,3}, {2,3,4}, {1,2}, {1,2,3}); a "tell" packet on a single wavelength precedes the optical flow]
• Link-state protocol, connection setup
Reverse Reservation Approach Illustration
Assume a flow is requested from A to B.
[Figure: same four-node network; information packets travel forward from A collecting availability, B chooses the route and wavelength, and a reservation packet travels back along the chosen route]
• Route discovery; route and wavelength reservation
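To make the reverse-reservation idea concrete, here is a minimal sketch (not the simulated or demonstrated implementation) of how the destination could pick a route and wavelength: forward information packets record the free-wavelength set of each link on a candidate route, and the destination keeps any route whose links share at least one common wavelength (the wavelength-continuity constraint), then sends a reservation back along it. The function name and data layout are illustrative assumptions.

```python
# Illustrative sketch of reverse-reservation route/wavelength selection.
# Each candidate route is a list of links; each link advertises its set of
# currently free wavelengths in the forward "information" packets.

def choose_route_and_wavelength(candidate_routes):
    """candidate_routes: list of routes, each a list of per-link free-wavelength sets.
    Returns (route_index, wavelength) or None if every route is blocked."""
    for i, route in enumerate(candidate_routes):
        # Wavelength continuity: the flow must use one wavelength end to end,
        # so intersect the free sets of every link on the route.
        common = set.intersection(*route) if route else set()
        if common:
            return i, min(common)  # e.g. pick the lowest-numbered common wavelength
    return None  # destination reports blocking; no reservation packet is sent

# Example: two candidate routes from A to B, free wavelengths collected en route.
routes = [
    [{2, 3}, {2, 3, 4}],   # route via C
    [{1, 2}, {1, 2, 3}],   # route via D
]
print(choose_route_and_wavelength(routes))  # -> (0, 2)
```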
Simulation Description
• Results shown as blocking probability vs. traffic intensity
• Uniform, Poisson flow traffic per node
• Fixed WAN topology
• Parameters:
  • F = number of fibers/link
  • L = number of channels/link
  • K = number of routes considered for routing decisions
  • U = update interval (seconds)
  • μ = average service rate for flows (flows/second)
  • λ = average arrival rate of flows (flows/second)
  • ρ = traffic intensity, equal to λ/μ (not the utilization factor)
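A heavily simplified, single-link sketch of the kind of discrete-event loop behind such results, just to make λ, μ, ρ, and blocking probability concrete. The real study simulates a full WAN topology with routing, wavelength assignment, and control-plane latency; everything below is an assumption for illustration only.

```python
import random

def single_link_blocking(lam, mu, channels, n_flows=100_000, seed=1):
    """Toy estimate of blocking probability on a single link with `channels` channels.
    lam: arrival rate (flows/s), mu: service rate (flows/s)."""
    rng = random.Random(seed)
    t, busy_until, blocked = 0.0, [], 0
    for _ in range(n_flows):
        t += rng.expovariate(lam)                      # Poisson arrivals
        busy_until = [d for d in busy_until if d > t]  # release finished flows
        if len(busy_until) >= channels:
            blocked += 1                               # no free channel: flow is blocked
        else:
            busy_until.append(t + rng.expovariate(mu)) # hold a channel for the flow
    return blocked / n_flows

# Traffic intensity rho = lam / mu; sweep rho with one-second flows (mu = 1).
for rho in (4, 8, 12, 16):
    print(rho, single_link_blocking(lam=rho, mu=1.0, channels=16))
```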
Latency-free Control Network Results (1 sec flows)
[Figure: blocking probability vs. traffic intensity; RR and TG curves, both with F=1, L=16, K=10]
Control Network With Latency Results (1 sec flows)
[Figure: blocking probability vs. traffic intensity; TG and RR curves with U=0.1, F=1, L=16, K=10]
Interesting Phenomenon
• Why is TG performance better than RR?
  • 1 sec flows and large ρ => small inter-arrival times
  • Smaller than the round-trip time
  • Thus, with high probability, successive flows see the same state (at least locally)
  • Increases the chance of collision
  • An effect of distribution (latency)
• Why is Random (Rand) wavelength assignment better than First-Fit (FF)?
  • This is exactly the opposite of the analytical papers' claim
  • Combination of reasons
    • Nodes have imperfect information
    • FF makes them compete for the same wavelengths (false advertisement)
  • Not seen in the analysis because distribution was ignored
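For context, a hedged sketch of the two wavelength-assignment policies being compared; the acronyms FF and Rand are from the slide, while the function names and interfaces are illustrative assumptions.

```python
import random

def first_fit(free_wavelengths):
    """First-Fit: always pick the lowest-indexed free wavelength.
    With stale distributed state, many sources pick the same one and collide."""
    return min(free_wavelengths) if free_wavelengths else None

def random_fit(free_wavelengths, rng=random):
    """Random: pick uniformly among the free wavelengths,
    spreading simultaneous requests across the free set."""
    return rng.choice(sorted(free_wavelengths)) if free_wavelengths else None
```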
Scheduled OFS in ONRAMP
• Inaccurate information hurts performance
  • In this case: simply the speed of light
  • Biggest problem: core network resources are wasted
• Our proposal: use timing information to schedule flows
  • Deliver network information in time to make decisions
  • Exchange flow-based information
  • Maximize utilization of the core network
  • Possible small delay for the user
• Issues
  • Can timing be implemented cheaply and scaled?
  • Can schedules be implemented?
  • Must make use of current/future optical devices
  • Low cost
• ONRAMP OFS
  • Demonstration of scheduled OFS in an access-area network
  • One example of an implementation
Scheduling in ONRAMP
[Figure: ONRAMP ring connecting Access Node #1 and Access Node #2 through intermediate nodes; each node has an IP router, control plane, and an OXC with a scheduler; access nodes carry GigE flows through fixed and tunable transponders (transmitter X-a, receiver R-a) onto the optical ring]
ONRAMP Connection Setup: OXC Schedule
• Uses timeslotting and schedules for lightpaths
• X => λi busy on the output of node i in the corresponding slot
[Table: example OXC schedule with rows λ1–λ4 and columns Slot 1, Slot 2, Slot 3, ...; λ1 is busy in two slots, λ2 and λ4 in one slot each, λ3 is free]
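The sketch below illustrates how such a per-node schedule could be represented and queried for the first timeslot in which a wavelength is free at every node along a path (the scheduled analogue of the wavelength-continuity check). The data layout and function name are assumptions, not the ONRAMP implementation.

```python
# Illustrative sketch of per-node OXC schedules and a slot-availability check.

def earliest_common_slot(schedules, wavelength, n_slots):
    """schedules: per-node dicts mapping (wavelength, slot) -> True if busy.
    Returns the first slot in which `wavelength` is free at every node on the
    path, or None if no slot in the horizon is free end to end."""
    for slot in range(n_slots):
        if all(not sched.get((wavelength, slot), False) for sched in schedules):
            return slot
    return None

# Example: two nodes on the path; wavelength 1 is busy in slot 0 at the first
# node and in slot 1 at the second, so the earliest common free slot is 2.
node_a = {(1, 0): True}
node_b = {(1, 1): True}
print(earliest_common_slot([node_a, node_b], wavelength=1, n_slots=3))  # -> 2
```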
Algorithm Timeline
[Figure: timeline across Slot 1, Slot 2, Slot 3 showing the scheduling overhead within each slot; a request arriving early enough in a slot can go in the next timeslot, a later request cannot]
• Overhead is dependent on timing uncertainty
• Overhead includes all timing uncertainty
• Efficiency of any scheduled algorithm is related to timing uncertainty and switching/electronic overheads
• Rough efficiency = Flow duration / (Flow duration + Overhead)
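Plugging in the numbers reported later (one-second flows, roughly 0.10 s of overhead) gives a rough efficiency on the order of 90%; a trivial sketch of the calculation:

```python
def rough_efficiency(flow_duration_s, overhead_s):
    """Rough efficiency of scheduled OFS: flow time over flow time plus overhead."""
    return flow_duration_s / (flow_duration_s + overhead_s)

# One-second flows with the ~0.10 s overhead measured in the demonstration:
print(rough_efficiency(1.0, 0.10))  # ~0.91, roughly the 90% figure reported below
```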
Utilizing Link Capacity
• Sending GigE over a transparent optical channel
  • Clock rate 1.244 GHz
  • Rate 8/10 coding results in a raw bit rate of 995.2 Mb/s
• Payload capacity for UDP
  • Send MTU-sized packets (9000 bytes) to avoid fragmentation
  • Headers: Ethernet (26 bytes) + IP (20 bytes) + UDP (8 bytes) = 54 bytes
  • Result: 8946 bytes of payload per packet
  • Link payload limit: 989.2288 Mb/s
• Rate-limited UDP
  • Input: desired rate
  • Timed sends of UDP packets achieve the desired rate
  • Demonstrates transparency of the OFS channel
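A minimal sketch of a rate-limited UDP sender of the kind described: MTU-sized datagrams are paced so the payload rate approximates a desired target. The socket setup, pacing logic, destination address, and rate value are illustrative assumptions, not the demonstration code.

```python
import socket
import time

def send_rate_limited_udp(dest, rate_bps, payload_bytes=8946, duration_s=1.0):
    """Pace MTU-sized UDP datagrams so the payload rate approximates rate_bps."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = bytes(payload_bytes)
    interval = payload_bytes * 8 / rate_bps      # seconds between sends
    next_send = time.monotonic()
    deadline = next_send + duration_s
    while time.monotonic() < deadline:
        sock.sendto(payload, dest)
        next_send += interval
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)                    # timed sends achieve the desired rate

# Example (placeholder address): fill a ~989 Mb/s channel for one second.
# send_rate_limited_udp(("192.0.2.1", 5001), rate_bps=989_228_800)
```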
Experimental Setup • OFS implemented in lab • One second timeslots • Timing overhead negligible • Routing/wavelength selection • All available wavelengths (currently 14) • Both directions around ring • Gigabit Ethernet link layer • Flows achieve theoretical maximum link rate ~989 Mb/s • Rate limited UDP • Unidirectional flows • No packet loss (100s of flows) • Variable rate • Demonstrates transparent use of optical connection
Current Performance Limitations (cont.)
• Current overhead is 0.10 seconds
  • Efficiency for one-second flows is therefore about 90%
• Analysis of the overhead reveals a possible contribution from Gigabit Ethernet frame sync
  • Still under investigation
• Switching overhead and timing uncertainty are negligible
  • I.e. scheduling is viable and efficient
Algorithm Overhead Timeline
[Figure: timeline from flow request through the beginning of the slot to the start of the flow, with marks at roughly 10 ms, 100 ms, and 150 ms, showing scheduling, switching command, receiver laser, and possible GbE sync overheads]