
Internet and Intranet Protocols and Applications




Presentation Transcript


  1. Internet and Intranet Protocols and Applications Section V: Network Application Performance Lecture 11: Why the World Wide Wait? 4/11/2000 Arthur P. Goldberg Computer Science Department New York University artg@cs.nyu.edu

  2. Performance definitions
  • Latency, delay: how long? (i.e., message transport latency)
  • Response time: how long to respond? (i.e., time from request to response)
  • Bandwidth
  • Utilization: fraction (%) of capacity used
  • Throughput: # processed / time
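The definitions above can be illustrated with a few invented numbers (a sketch; the values are not from the lecture):

```python
# Hedged sketch with invented numbers, illustrating the definitions above.

requests_completed = 500      # requests finished in the window
window_seconds = 10.0         # measurement window

# Throughput: # processed / time
throughput = requests_completed / window_seconds        # 50 requests/s

# Utilization: fraction of capacity (here, server busy time) used
busy_seconds = 7.5
utilization = busy_seconds / window_seconds             # 0.75

# Response time: time from request to response
request_sent_at = 2.000       # timestamps in seconds
response_received_at = 2.135
response_time = response_received_at - request_sent_at  # ~0.135 s
```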

  3. In Application Protocols, Where’s The Time Go?
  • Trace the steps
    • Client issues request
    • Network transports request
    • Server receives request
    • Server generates and sends response
    • Network transports response
    • Client receives and displays response
  • Complications in this model
    • Multiple concurrent requests (how long until the last finishes?)
    • Network protocol delays
    • Server/proxy hierarchy
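The steps above can be sketched as a sum of per-step delays; all values here are invented for illustration:

```python
# Invented per-step delays (seconds) for one request/response cycle.
steps = {
    "client issues request":           0.001,
    "network transports request":      0.035,
    "server receives request":         0.002,
    "server generates/sends response": 0.050,
    "network transports response":     0.035,
    "client receives and displays":    0.010,
}
total_response_time = sum(steps.values())   # 0.133 s for this example

# Complication: with multiple concurrent requests, the user waits
# until the LAST one finishes, not the average.
finish_times = [0.120, 0.133, 0.095]        # seconds, one per request
page_complete_at = max(finish_times)
```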

  4. Primary Components of Delay
  • “Critical path” computation
  • Propagation, i.e., signal travel time: signals travel at roughly (2/3)c in copper or fiber
  • Congestion (queueing)
    • Network (i.e., outgoing links on routers)
    • Servers: all components (CPU, disks, memory, synchronization, etc.)
  • Protocol (and, sometimes, system) waits, e.g., in TCP:
    • Delayed ACK
    • Slow start
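A quick check on the (2/3)c figure; the 4,000 km distance is an assumed coast-to-coast path, not a value from the slide:

```python
C = 299_792_458                 # speed of light in vacuum, m/s
signal_speed = (2 / 3) * C      # rough propagation speed in copper/fiber
distance_m = 4_000_000          # ~4,000 km, assumed cross-country path

one_way_delay = distance_m / signal_speed   # ~20 ms
round_trip = 2 * one_way_delay              # ~40 ms: propagation alone is a
                                            # large share of a 70 ms RTT
```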

  5. Tanenbaum, Section 6.6, Performance Problems in Networks
  • Problems
    • Congestion
      • Overload -> data loss -> retransmission
      • E.g., bandwidth mismatch
      • E.g., broadcast storm (errors, booting, anything synchronous)
    • Timeout
      • Too short: excessive retransmissions
      • Too long: slow recovery
    • Underutilization

  6. Underutilization
  • An underutilized pipe has less effective bandwidth
  • Bandwidth-delay product (capacity of a pipe) = bandwidth * delay
  • E.g., a cross-country T1 line:
    • 70 ms RTT
    • 1.544 Mbps
    • Bandwidth-delay product = 1.544 Mbps * 70 ms = 108,080 bits
  • Consider the maximum bits in flight
    • E.g., a TCP window of 8 KB (64,000 bits) caps effective bandwidth at (T1 speed) * 64,000/108,080 = 0.91 Mbps
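The slide's arithmetic, reproduced (treating 8 KB as 8,000 bytes, as the slide does):

```python
bandwidth_bps = 1_544_000          # T1 line
rtt_s = 0.070                      # 70 ms round-trip time

# Capacity of the pipe: bits "in flight" when the pipe is kept full.
bdp_bits = bandwidth_bps * rtt_s   # 108,080 bits

# A TCP window smaller than the BDP caps effective bandwidth:
# the sender can emit at most one window per round trip.
window_bits = 8_000 * 8            # 8 KB window = 64,000 bits
effective_bps = min(bandwidth_bps, window_bits / rtt_s)  # ~0.91 Mbps
```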

  7. Delay Estimate Rule-of-Thumb
  • Draw the delay vs. load function
  • Load ranges
    • Light: 0.0 to 0.5
    • Moderate: about 0.5 to 0.95
    • Heavy: 0.95 to 1.0
    • Overload: over 1.0
  • Under moderate load
    • Delay varies as 1/(1 - utilization)
    • (From first principles, distribution assumptions, and queueing theory)
  • Under heavy load
    • Avoid thrashing
  • Under overload
    • Shed work: Mogul and Ramakrishnan, “Eliminating Receive Livelock in an Interrupt-driven Kernel”
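The 1/(1 - utilization) rule of thumb as a small helper (a sketch, not code from the lecture):

```python
def delay_multiplier(utilization):
    """Rule-of-thumb queueing delay multiplier: service time is
    stretched by roughly 1/(1 - U) under light-to-moderate load."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("overload: shed work rather than queue it")
    return 1.0 / (1.0 - utilization)

# Delay grows gently, then explodes as load approaches capacity:
# U = 0.5 -> 2x, U = 0.9 -> 10x, U = 0.99 -> 100x
```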

  8. Tanenbaum 6.6.3: System Design for Better Performance
  • CPU speed is more important than network speed: OS and protocol overhead dominates
  • Reduce packet count to reduce software overhead (i.e., use bigger packets)
  • Minimize context switches
  • Minimize copying
  • Pre-compute whenever possible
    • Header prediction
    • Pre-compute partial checksum
  • You can buy more bandwidth, but not lower delay
  • Avoiding congestion is better than recovering from it
  • Avoid timeouts
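Why bigger packets reduce software overhead, with an assumed 40 bytes of IPv4 + TCP headers per packet (no options):

```python
def packet_count(payload_bytes, bytes_per_packet):
    """Packets needed to carry a payload (ceiling division)."""
    return -(-payload_bytes // bytes_per_packet)

HEADER_BYTES = 40                     # assumed IPv4 + TCP headers, no options

# Hypothetical 100 KB transfer, small vs. large packets:
small = packet_count(100_000, 512)    # 196 packets -> 7,840 header bytes
large = packet_count(100_000, 1460)   # 69 packets  -> 2,760 header bytes

# Fewer packets also means fewer interrupts and fewer trips through the
# protocol stack, which is where the software overhead actually goes.
```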

  9. Impact of Fiber In the race between computing and communication, communication won. The full implications of essentially infinite bandwidth (although not at zero cost) have not yet sunk in to a generation of computer scientists and engineers taught to think in terms of the low Nyquist and Shannon limits imposed by copper wire. The new conventional wisdom should be that all computers are hopelessly slow, and networks should try to avoid computation at all costs, no matter how much bandwidth that wastes. Andrew S. Tanenbaum, Computer Networks, 1996

  10. Padmanabhan & Mogul, Improving HTTP Latency
  • Distribution of document sizes (show curve)
  • Avoid extra round trips
  • Do pipelining
  • How do sequential requests and pipelining interact with a proxy server?
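A back-of-the-envelope comparison of sequential vs. pipelined requests (invented parameters; ignores slow start and server queueing):

```python
def sequential_time(n, rtt, service):
    # Each request waits for the previous response: one RTT per request.
    return n * (rtt + service)

def pipelined_time(n, rtt, service):
    # Requests sent back-to-back on one connection: roughly one RTT total.
    return rtt + n * service

n, rtt, service = 10, 0.070, 0.005      # 10 objects, 70 ms RTT, 5 ms each
seq = sequential_time(n, rtt, service)   # 0.75 s
pipe = pipelined_time(n, rtt, service)   # 0.12 s
```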

  11. OTHER
  • NYU WebPerf measurements
  • Live WebPerf demo?
  • Repair of HTTP 1.0 performance bugs in HTTP 1.1
  • Multiple concurrent requests
  • Cool paper on browser rendering
  • Delayed ACKs problem: “Interactions Between Delayed Acks and Nagle's Algorithm in HTTP and HTTPS: Problems and Solutions”
  • In http://WWW.CS.NYU.EDU/artg/

  12. Admin
  • Final?
  • Due date of Phase II
  • Changes to Phase II

  13. IIPA Sections
  • Intro to Application Protocols and Socket Programming
  • Web
  • Vacation
  • Email
  • Network Application Performance
