Presentation Transcript


  1. Impact of Bottleneck Queue on Long Distant TCP Transfer. August 25, 2005, NOC-Network Engineering Session, Advanced Network Conference in Taipei. Masaki Hirabaru (NICT) <masaki@nict.go.jp> and Jin Tanaka (KDDI) <tanaka@kddnet.ad.jp>

  2. APAN Requirements on Transport. Advanced ► high speed; international ► long distance. The difficulty of congestion avoidance is in proportion to the Bandwidth-Delay Product (BWDP). A single TCP flow is considered; fairness is not considered.

  3. Long-Distance Rover Control. There is (at least) a 7-minute one-way delay between Earth and Mars for images and commands. When the operator saw a collision, it was already too late.

  4. Long-Distance End-to-End Congestion Control. Flows A and B from the sender (JP) merge at a bottleneck of capacity C toward the receiver (US); the queue overflows when A + B > C, and congestion feedback takes a 200 ms round trip. BWDP: the amount of data sent but not yet acknowledged. 64 kbps × 200 ms = 1,600 B ≈ 1 packet; 1 Gbps × 200 ms = 25 MB ≈ 16,700 packets.
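To make the BWDP arithmetic above concrete, here is a minimal Python sketch (assuming 1500 B packets and the 200 ms round-trip delay from the slide) that reproduces the two figures:

```python
# Bandwidth-delay product (BWDP): data sent but not yet acknowledged.
def bwdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    return bandwidth_bps * rtt_s / 8  # bits -> bytes

PACKET_SIZE = 1500  # bytes (Ethernet MTU, as in the testbed slides)

for bw in (64e3, 1e9):  # 64 kbps and 1 Gbps
    b = bwdp_bytes(bw, 0.200)  # 200 ms round-trip delay
    print(f"{bw:>13,.0f} bps x 200 ms = {b:,.0f} B ~ {b / PACKET_SIZE:,.0f} packets")
# 64 kbps -> 1,600 B ~ 1 packet
# 1 Gbps  -> 25,000,000 B ~ 16,667 packets
```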

  5. Analyzing Advanced TCP Dynamic Behavior in a Real Network (example: from Tokyo to Indianapolis at 1 Gbps with HighSpeed TCP). The graphs show throughput, RTT, window sizes, and packet losses; they were generated through web100, and the data was obtained during an e-VLBI demonstration at the Internet2 Member Meeting in October 2003.

  6. TCP Performance Measurement in a Testbed, focusing on the bottleneck queue. A dummynet box (FreeBSD 5.1) emulates the bottleneck between GbE links; the sender and receiver run Linux TCP with a 1500 B MTU, only 800 Mbps is available, and the RTT is 200 ms (100 ms one-way). The end-to-end delay is queuing delay (q) plus trip delay (t), with 1/2 RTT < t < RTT; when the queue overflows, packets are lost.
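The queuing delay q contributed by the bottleneck queue can be estimated from its length and its drain rate. A small sketch, assuming the 800 Mbps bottleneck and 1500 B packets from this slide (the queue sizes shown are illustrative):

```python
# Queuing delay added by a drop-tail queue of `qlen` packets draining at `bottleneck_bps`.
def queuing_delay_s(qlen_packets: int, bottleneck_bps: float, packet_bytes: int = 1500) -> float:
    return qlen_packets * packet_bytes * 8 / bottleneck_bps

for qlen in (100, 1000):  # illustrative switch-like and router-like queue sizes
    q_ms = queuing_delay_s(qlen, 800e6) * 1e3
    print(f"a full {qlen}-packet queue adds up to {q_ms:.1f} ms on top of the base RTT")
# 100 packets  -> ~1.5 ms
# 1000 packets -> ~15 ms
```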

  7. TCP Performance with Different Queue Sizes

  8. TCP’s Way of Rate Control (slow start). With a 200 ms RTT, the sender transmits back-to-back bursts at the 1 Gbps line rate whose duration doubles each RTT (20 ms, 40 ms, 80 ms, 160 ms); a 20 ms burst corresponds to a 100 Mbps average rate. At an average rate of around 150 Mbps, the burst already overflows a 1000-packet queue.
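A rough sketch of why a slow-start burst overflows a small queue: while the burst lasts, the queue fills at the difference between the sending line rate and the bottleneck drain rate. The bottleneck rate, initial window, and queue capacity below are assumptions for illustration (the exact crossover point depends on them), not the measured configuration:

```python
# Slow start doubles the window each RTT and sends it as a back-to-back burst at line rate.
LINE_RATE = 1e9        # sender interface, bps
BOTTLENECK = 800e6     # assumed bottleneck rate, bps
RTT = 0.200            # seconds
MTU = 1500             # bytes per packet
QUEUE_CAPACITY = 1000  # packets (drop-tail)

cwnd = 10  # assumed initial window, packets
for rnd in range(1, 15):
    avg_rate = cwnd * MTU * 8 / RTT                # average rate over one RTT
    backlog = cwnd * (1 - BOTTLENECK / LINE_RATE)  # packets queued by the end of the burst
    print(f"round {rnd:2d}: cwnd={cwnd:5d} pkts, avg rate={avg_rate/1e6:6.1f} Mbps, "
          f"queue backlog~{backlog:5.0f} pkts")
    if backlog > QUEUE_CAPACITY:
        print("  -> the queue overflows and packets are lost before the link is saturated")
        break
    cwnd *= 2
```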

  9. Bottleneck Bandwidth and Queue Size: TCP Burstiness. Panels: (a) HighSpeed, (b) Scalable, (c) BIC, (d) FAST.

  10. Measuring Bottleneck Queue Sizes. A packet train is sent from the sender to the receiver through a switch/router bottleneck of capacity C while cross traffic is injected for the measurement; the delays of the measured packets and the position of lost packets reveal the queue depth. Queue Size = C × (Delay_max − Delay_min). Switch/router queue size measurement results (* link set to 100 Mbps for the measurement).
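The formula on this slide translates directly into code. A minimal sketch, using illustrative delay samples (not the measured values) and the 100 Mbps capacity the link was set to for the measurement:

```python
# Estimate the bottleneck queue size from the spread of delays seen by a packet train:
#   Queue Size = C x (Delay_max - Delay_min)
def queue_size_packets(capacity_bps, delays_s, packet_bytes=1500):
    backlog_bytes = capacity_bps / 8 * (max(delays_s) - min(delays_s))
    return backlog_bytes / packet_bytes

# Illustrative one-way delay samples (seconds) from packets that arrived before a loss.
delays = [0.1002, 0.1031, 0.1060, 0.1089, 0.1118]
print(f"estimated queue ~ {queue_size_packets(100e6, delays):.0f} packets")  # ~97 packets
```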

  11. Typical Bottleneck Cases. a) A switch (queue ~100 packets) or router (queue ~1000 packets) where a 1 Gbps (or 10 Gbps) link steps down to 100 Mbps (or 1 Gbps). b-1)/b-2) A switch/router joining 10 Gbps LAN-PHY Ethernet (untagged) to 9.5 Gbps WAN-PHY carrying 802.1q-tagged VLANs.

  12. Solutions by Advanced TCPs. How can we foresee collision (queue overflow)?
  • Loss-based ► AQM (Active Queue Management): Reno, Scalable, HighSpeed, BIC, …
  • Delay-based: Vegas, FAST
  • Explicit router notification: ECN, XCP, Quick Start, SIRENS, MaxNet

  13. Queue Management Methods.
  • FIFO (First In, First Out): arriving packets are queued in order and dropped only when the queue is already full (tail drop).
  • RED (Random Early Detection): once the queue length exceeds a threshold, arriving packets are dropped probabilistically, before the queue fills.
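To contrast the two methods in code, here is a minimal sketch of the drop decisions; the RED thresholds and maximum drop probability are illustrative parameters, not values from the talk, and real RED additionally averages the queue length (EWMA) and spaces drops apart:

```python
import random

def fifo_drop(queue_len: int, capacity: int) -> bool:
    """Tail drop: drop an arriving packet only when the queue is already full."""
    return queue_len >= capacity

def red_drop(avg_queue: float, min_th: float, max_th: float, max_p: float) -> bool:
    """RED: drop probability rises linearly from 0 to max_p between min_th and max_th."""
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

# Example: a 1000-packet queue with RED thresholds at 25% and 75% occupancy.
print(fifo_drop(600, 1000))            # False: FIFO still accepts at 60% occupancy
print(red_drop(600.0, 250, 750, 0.1))  # True ~7% of the time: RED already signals congestion
```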

  14. PAUSE and HOLB (Head-of-Line Blocking). In the switch diagram, packets destined for a fast (empty) output queue are blocked in the shared input queue behind packets waiting for a slow (full) output queue. Note: Ethernet flow control (the PAUSE frame in 802.3x) may produce head-of-line blocking, resulting in lower performance at a backbone switch.

  15. Summary
  • Add an interface to a router, or
  • use a switch with an appropriate interface queue.
  • Let’s consider making use of AQM on a router.
  Future Plan
  • 10 Gbps congestion through TransPAC2 and JGN II with large delay (>= 100 ms)
