
TCP Performance over IPv6


Presentation Transcript


  1. TCP Performance over IPv6 Yoshinori Kitatsuji KDDI R&D Laboratories, Inc. kitaji@kddilabs.jp

  2. Summary • Background • Experiment • inside Tokyo XP • between Osaka University and Tokyo XP • TransPAC northern route • Lessons learned

  3. Background • Although IPv4/IPv6 dual stack was deployed in APAN Tokyo XP and on the TransPAC northern-route link in 2001, no performance test had been done. • Two main routers (Juniper M20) in Tokyo XP, GigabitEthernet, PoS • Osaka University was planning DV-over-IPv6 demonstrations at iGrid2002 and HDTV over IPv6 at SuperComputing2002 • iGrid2002 • Osaka University delivers the DV streams over IPv6 to Amsterdam University (venue) and the San Diego Supercomputer Center • Path: Abilene, APAN Tokyo XP, ASCnet, JGNv6, TransPAC, SURFnet • 100 Mbit/s for 3 DV streams • SC2002 • Osaka University delivers the HDTV stream and a DV stream over IPv6 to SCInet (venue in Baltimore) and the San Diego Supercomputer Center • Path: Abilene, APAN Tokyo XP, JGNv6, TransPAC • 140 Mbit/s for an HDTV stream and a DV stream

  4. Experiment • Inside APAN Tokyo XP • 2 Juniper M20 routers connected with a GigabitEthernet link • Path between APAN Tokyo XP and Osaka University • Osaka University connects to JGNv6 • TransPAC links (IPv4) • The northern route to Seattle is a POS OC12 link

  5. Examination1 • Inside APAN Tokyo XP • Routers and switches are connected with GigabitEthernet • PCs • Pentium III 1GHz • 1GB memory • 64bit 66MHz PCI • Linux Usagi kernel 2.4 • TCP options • Window size: 128KB • Enable SACK • Result • 750-900Mbit/s • varies with the regular traffic of other users • [Diagram: PCs attached through Foundry BigIron 4000, NetIron 400 and FastIron 400 switches to two Juniper M20 routers joined by a GigabitEthernet link, with external links toward Seattle and Chicago]
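
  A memory-to-memory test like this can be reproduced with a throughput tool such as iperf. The snippet below is only a sketch, assuming iperf 2 with IPv6 support; the slides do not name the tool actually used, and the host name is a placeholder.

      # Receiver on one PC, listening for IPv6 TCP (iperf 2 syntax)
      iperf -s -V

      # Sender on the other PC: IPv6 TCP stream with a 128 KB socket buffer,
      # matching the 128 KB window of Examination1; "pc-rx" is a placeholder host
      iperf -c pc-rx -V -w 128K -t 60 -i 5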

  6. Examination2 • Path between Tokyo XP and Osaka University • Osaka University connects to JGNv6 • JGNv6 is an ATM OC12 network • PCs • Pentium III 1GHz • 1GB memory • 64bit 66MHz PCI • Linux Usagi kernel 2.4 • TCP options • Enable window scaling & SACK • Result1: Tokyo XP -- Osaka University • 170Mbit/s • Window size: 200KB • RTT: 15ms • Result2: Tokyo XP -- JGN ROC • 750-900Mbit/s • Window size: 128KB • Result3: JGN ROC -- Dojima DC • 170Mbit/s • Window size: 180KB-1MB • RTT: 15ms • The bottleneck is the link between the JGN ROC and the Dojima DC • [Diagram: test paths Test1-Test3 between PCs across the APAN Tokyo XP Juniper M20, the JGN Research and Operation Center (Hitachi GR2000), the Dojima DC (Juniper M20) and Osaka University (Hitachi GR2000), over the ATM OC12 JGNv6 network]
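
  The window size needed to sustain a given rate over the 15 ms path follows from rate × RTT. The calculation below is illustrative only, using the 140 Mbit/s SC2002 requirement from the Background slide; "osaka-host" is a placeholder name and ping6 is just one way to measure the RTT.

      # Measure the round-trip time over IPv6 (placeholder host name)
      ping6 -c 5 osaka-host

      # Window needed for 140 Mbit/s at 15 ms RTT: rate * RTT, divided by 8 bits per byte
      echo $(( 140000000 / 8 * 15 / 1000 )) bytes    # 262500 bytes, roughly 256 KB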

  7. Examination3 • Path between Tokyo XP and TransPAC in Seattle • TCP over IPv4 • The link is POS OC12 • PC in Tokyo XP • Pentium III 1GHz • 1GB memory • 64bit 66MHz PCI • Linux Usagi kernel 2.4 • PC in TransPAC • Pentium III 866MHz • 768MB memory • 64bit 66MHz PCI • Linux kernel 2.4 • TCP options • Window size: 8MB • Enable SACK • RTT: 114ms • Result: • 360-460 Mbit/s • Window size: 16MB • RTT: 114ms • Upstream was not available, possibly due to the security policy of the PC in Seattle • [Diagram: PC -- APAN Tokyo XP (Juniper M20) -- POS OC12 -- TransPAC (Juniper M5) -- PC]
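
  The multi-megabyte windows follow from the bandwidth-delay product of the trans-Pacific path. The arithmetic below is a rough sketch, assuming about 600 Mbit/s of usable OC12 payload bandwidth.

      # Bandwidth-delay product for the Tokyo--Seattle path: bandwidth * RTT
      echo $(( 600000000 / 8 / 1000 * 114 )) bytes   # 8550000 bytes, roughly 8.5 MB,
      # consistent with the 8-16 MB windows used in Examination3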

  8. Examination3 (cont.)

  9. Lessons learned from the experiments • Enabling TCP options • Same as for IPv4 • Window scaling (for a maximum of 8MB) • net.ipv4.tcp_window_scaling=1 • net.ipv4.tcp_rmem="4096 87380 4194304" • net.ipv4.tcp_wmem="4096 65536 4194304" • net.core.wmem_max=8388608 • net.core.rmem_max=8388608 • net.core.wmem_default=65536 • net.core.rmem_default=65536 • Selective acknowledgement • net.ipv4.tcp_sack=1 • Linux congestion avoidance is also the same as for IPv4 • SACK is unavailable with FreeBSD (same as for IPv4) • A hop-by-hop measurement environment is important • End-to-end measurement does not reveal where the problem lies • Loss, loss and loss… • The buffer allocated by the driver for the NIC should be enlarged • Linux: ifconfig <dev> txqueuelen <# of packets> • FreeBSD: depends on the driver • Usually requires changing a parameter in the driver source code and rebuilding the kernel • Synchronization failure (in JGNv6) • Usually the ATM switch generates the clock and the router follows it • Wrong configuration • Multiple unshaped PVCs in a physical link cause congestion → packet loss…
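
  Collected into one runnable snippet, the Linux tuning above might look like the following. This is a minimal sketch assuming root access on a Linux host; on Linux the net.ipv4.tcp_* knobs also govern TCP over IPv6, and the interface name and queue length at the end are placeholder values.

      # Enable window scaling and selective acknowledgement
      sysctl -w net.ipv4.tcp_window_scaling=1
      sysctl -w net.ipv4.tcp_sack=1

      # Socket buffer limits for large windows
      sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
      sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"
      sysctl -w net.core.rmem_max=8388608
      sysctl -w net.core.wmem_max=8388608
      sysctl -w net.core.rmem_default=65536
      sysctl -w net.core.wmem_default=65536

      # Enlarge the NIC transmit queue (placeholder interface name and length)
      ifconfig eth0 txqueuelen 1000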

  10. Slow Start Threshold • The connection switches to the congestion-avoidance algorithm at this point because an ssthresh value cached from a past connection is reused. • After this, bandwidth increases only linearly.
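
  One way to keep a cached ssthresh from a previous run from distorting a new measurement is to clear or disable the kernel's per-destination TCP metrics. The commands below are a sketch for a reasonably recent Linux and iproute2; the 2.4 kernels used in these tests may not offer the second knob.

      # Flush cached per-route metrics (including ssthresh) before the next run
      ip route flush cache

      # Or stop the kernel from saving metrics between connections (newer kernels)
      sysctl -w net.ipv4.tcp_no_metrics_save=1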
