
Network Management


Presentation Transcript


  1. Network Management has always been and always will be essential to the Internet. FCC Broadband Industry Practices Hearing, WC Docket No. 07-52. Stanford University, April 17, 2008. Testimony of George Ou, Former Network Engineer, www.LANArchitect.net

  2. Internet meltdown in 1980s • Lack of adequate congestion control in TCP allowed too many FTP users to overload the Internet around 1986 • Van Jacobson created a congestion control algorithm for TCP in 1987 • Congested routers randomly dropped packets to force every TCP end-point (client) to cut its flow rate in half • TCP clients then slowly increased their flow rate with every successful transmission until the next packet drop • Caused all TCP streams to converge toward an equal flow rate • Fair bandwidth sharing, but only for the applications of its time • Jacobson’s algorithm saved the Internet in 1987 and remains the dominant standard after 20 years • Early example of managing network congestion (see the sketch below)
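
A minimal sketch of the additive-increase/multiplicative-decrease behavior described on this slide, assuming an idealized shared link; the capacity, flow count, and step sizes are illustrative and not from the testimony.

```python
# Idealized AIMD (Jacobson-style) congestion control sketch.
# Assumed, not from the testimony: a 100-unit bottleneck, every flow halves
# its rate when the link is congested, and adds 1 unit per round otherwise.

CAPACITY = 100.0

def simulate_aimd(initial_rates, rounds=200):
    rates = list(initial_rates)
    for _ in range(rounds):
        if sum(rates) > CAPACITY:
            # Congestion detected (packet drop): multiplicative decrease.
            rates = [r / 2.0 for r in rates]
        else:
            # No congestion: additive increase, probing for spare bandwidth.
            rates = [r + 1.0 for r in rates]
    return rates

# Two flows that start far apart end up at nearly equal rates.
print(simulate_aimd([90.0, 5.0]))
```

Whatever rates the flows start at, repeated halving on shared drops pulls them toward the same value, which is the fairness property the slide refers to.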

  3. World Wide Wait in 1990s • The first generation of web browsers was not optimized for the Internet • The World Wide Web turned into the World Wide Wait • Version 1.1 of HTTP was revamped to use network resources far more efficiently than 1.0 (see the sketch below)
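
A small illustration of one HTTP/1.1 efficiency gain, persistent connections: several requests reuse a single TCP connection instead of paying for a new connection (and TCP slow start) per object, as HTTP/1.0 clients typically did. The host and paths below are placeholders, not from the slides.

```python
# HTTP/1.1 persistent-connection sketch: one TCP connection, several requests.
# Host and paths are placeholders.
import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=10)
for path in ("/", "/style.css", "/logo.png"):
    conn.request("GET", path)   # reuses the same underlying socket each time
    resp = conn.getresponse()
    body = resp.read()          # drain the body before issuing the next request
    print(path, resp.status, len(body), "bytes")
conn.close()
```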

  4. Today’s crisis on the Internet • Video-induced congestion collapse • The efficient existing broadcast model is migrating to a bandwidth-intensive Video on Demand model over IP • Full migration of video could require a 100- to 1000-fold increase in Internet capacity • Exponentially more bandwidth is required as video bit-rate and resolution increase to improve quality • P2P is the dominant distribution model because most of its content is “free” (read: pirated) • Video can fill any amount of bandwidth

  5. More bandwidth doesn’t help: the few throttling the many • According to the Japanese government: 1% of users account for ~47% of traffic • 10% of users account for ~75% of traffic • The remaining 90% of users get the leftover ~25% • Bandwidth hogs (see the arithmetic sketch below)
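
A quick back-of-the-envelope reading of those percentages on a per-user basis; the population size and total traffic below are arbitrary, only the ratios matter.

```python
# Per-user traffic multiples implied by the figures cited above.
USERS = 1000          # arbitrary population
TOTAL = 1000.0        # arbitrary total traffic units

top_1pct  = 0.47 * TOTAL / (0.01 * USERS)   # per-user traffic in the top 1%
top_10pct = 0.75 * TOTAL / (0.10 * USERS)   # per-user traffic in the top 10%
bottom_90 = 0.25 * TOTAL / (0.90 * USERS)   # per-user traffic in the bottom 90%
average   = TOTAL / USERS

print(f"top 1%:     {top_1pct / average:.1f}x the average user")    # ~47x
print(f"top 10%:    {top_10pct / average:.1f}x the average user")   # ~7.5x
print(f"bottom 90%: {bottom_90 / average:.2f}x the average user")   # ~0.28x
```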

  6. Exploiting Jacobson’s algorithm [Chart: bandwidth split between two users as one opens more TCP streams: 50/50 Fair, 80/20 Unfair, 92/8 Unfair] (see the sketch below)
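
The ratios on that chart follow directly from per-flow fairness: if every TCP stream converges to an equal share, a user who opens more streams takes a proportionally larger slice of the link. A small sketch of the arithmetic, with illustrative stream counts:

```python
# Per-flow (Jacobson-style) fairness: a user's share of the link is simply
# their number of streams divided by the total number of streams.
def user_shares(streams_a, streams_b):
    total = streams_a + streams_b
    return streams_a / total, streams_b / total

# Illustrative stream counts chosen to reproduce the chart's ratios.
for a, b in [(1, 1), (4, 1), (11, 1)]:
    share_a, share_b = user_shares(a, b)
    print(f"{a} streams vs {b}: {share_a:.0%} / {share_b:.0%}")
# 1 vs 1  -> 50% / 50%  (fair)
# 4 vs 1  -> 80% / 20%  (unfair)
# 11 vs 1 -> 92% / 8%   (unfair)
```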

  7. Persistence advantage in P2P apps * Corporate VPN telecommuter using the G.722 codec @ 64 kbps payload and 33.8 kbps packetization overhead ** Vonage or Lingo SIP-based VoIP service with the G.726 codec @ 32 kbps payload and 18.8 kbps packetization overhead *** I calculated that I sent 29976 kilobytes of mail over the last 56 days, averaging 0.04956 kbps (arithmetic rechecked in the sketch below)
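
The e-mail figure in the last footnote can be reproduced with simple arithmetic; the sketch below redoes that division (taking 1 kilobyte as 1000 bytes, which matches the quoted result) and adds up the VoIP payload-plus-overhead figures for comparison.

```python
# Reproducing the bandwidth figures from the footnotes above.

# E-mail: 29976 kilobytes sent over 56 days (1 kilobyte = 1000 bytes assumed).
mail_kbits = 29976 * 8                 # kilobytes -> kilobits
seconds    = 56 * 24 * 3600            # 56 days in seconds
print(f"e-mail average:  {mail_kbits / seconds:.5f} kbps")   # ~0.04956 kbps

# VoIP: codec payload plus packetization overhead, per the footnotes.
print(f"G.722 VPN call:  {64 + 33.8:.1f} kbps")   # 97.8 kbps sustained
print(f"G.726 SIP call:  {32 + 18.8:.1f} kbps")   # 50.8 kbps sustained
```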

  8. Weighted TCP: Per-user fairness [Chart: 92/8 Unfair under per-flow fairness vs 50/50 Fair under per-user weighting] • BT chief researcher Bob Briscoe has proposed a TCP fix before the IETF to neutralize the multi-stream loophole • Changing TCP takes many years, and it’s even harder to get over a billion devices to switch to a new TCP client • Newer network-based solutions are being implemented (see the sketch below)
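
A sketch of the per-user weighting idea behind Briscoe's proposal and the network-based alternatives: capacity is divided equally between users first, and only then among each user's own streams, so opening extra streams no longer buys a bigger slice. The numbers are illustrative.

```python
# Per-user vs per-flow fairness on a shared link (illustrative capacity).
CAPACITY = 100.0

def per_flow_fair(stream_counts):
    # Classic per-flow sharing: more streams means a bigger slice.
    total = sum(stream_counts.values())
    return {user: CAPACITY * n / total for user, n in stream_counts.items()}

def per_user_fair(stream_counts):
    # Weighted / per-user sharing: every user gets the same slice,
    # however many streams they open.
    share = CAPACITY / len(stream_counts)
    return {user: share for user in stream_counts}

users = {"p2p_user": 11, "web_user": 1}
print("per-flow fairness:", per_flow_fair(users))   # ~92 vs ~8
print("per-user fairness:", per_user_fair(users))   # 50 vs 50
```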

  9. Present and future solutions • Present solutions use protocol throttling • P2P applications use disproportionately large amounts of bandwidth, so they’re throttled to balance things out • Use conventional router de-prioritization techniques on P2P • Use TCP resets to occasionally stop P2P seeders • May affect the extremely rare low-bandwidth P2P user • Can be fooled by protocol obfuscation techniques • Future solutions are protocol-agnostic • Weighted packet dropping at the router and/or fair upstream scheduling on the CMTS accomplishes per-user fairness • Only targets bandwidth hogs and forces them to back off • Cannot be fooled by protocol obfuscation (see the sketch below)
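
One way to picture the protocol-agnostic approach: during congestion, drop packets with a probability tied to each sender's share of recently observed bytes, so heavy users back off first no matter what protocol they run. This is only a schematic of the idea, not any particular vendor's implementation.

```python
# Protocol-agnostic weighted dropping, schematic only: the decision looks at
# per-user byte counts, never at the protocol, so obfuscation does not help.
import random
from collections import defaultdict

recent_bytes = defaultdict(int)   # per-user byte counters (decay omitted here)

def admit(user, size, congested, max_drop=0.9):
    recent_bytes[user] += size
    if not congested:
        return True
    share = recent_bytes[user] / sum(recent_bytes.values())
    # Heavier users face a higher drop probability, capped at max_drop.
    return random.random() > min(max_drop, share)

# A user sending 10x the bytes of a light user sees far more drops.
for _ in range(1000):
    admit("hog", 15000, congested=False)
    admit("light", 1500, congested=False)
drops = {u: sum(not admit(u, 1500, congested=True) for _ in range(1000))
         for u in ("hog", "light")}
print(drops)   # the hog is dropped far more often than the light user
```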

  10. What is reasonable network management? [Chart: a spectrum running from “Unfair / Dumb / Past” to “Fair / Intelligent / Future”: no network management (unreasonable), then metered Internet, then ISP throttling of bandwidth-hogging protocols, then protocol-agnostic per-user fairness (reasonable)]

  11. Network management ensures harmonious coexistence • P2P applications need volume, not priority • Interactive applications (Web) and real-time applications (VoIP) want priority, not volume • P2P, interactive, and real-time applications each get what they want under a managed network • Interactive and real-time apps have a small, fixed volume, so no matter how much they’re prioritized, they cannot slow down a P2P download • Unmanaged networks, regardless of capacity, will always be unfair and hostile to interactive and real-time applications (see the sketch below)
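
A toy illustration of the coexistence argument above: give a small, fixed-rate real-time flow strict priority on a shared link and the bulk (P2P-style) flow loses only that small fixed amount, never more. The link capacity and flow rates are made up for the example.

```python
# Strict-priority sharing sketch: real-time traffic is small and fixed, so
# prioritizing it costs the bulk flow almost nothing. Rates are illustrative.
LINK_KBPS = 10_000   # a 10 Mbps link

def allocate(realtime_kbps, bulk_demand_kbps):
    """Strict priority: real-time is served first, bulk gets the remainder."""
    realtime = min(realtime_kbps, LINK_KBPS)
    bulk = min(bulk_demand_kbps, LINK_KBPS - realtime)
    return realtime, bulk

# A ~100 kbps VoIP call prioritized over a P2P flow that wants everything:
print(allocate(realtime_kbps=100, bulk_demand_kbps=10_000))
# -> (100, 9900): the P2P download gives up only 1% of the link, while the
#    VoIP call gets the low-latency treatment it actually needs.
```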
