
Network Application Performance

This talk discusses network performance in local and wide areas and for advanced applications. Topics include throughput, delay, and contributors to delay, such as processing delay and retransmission delay. Suggestions for minimizing delay and optimizing performance are provided.


Presentation Transcript


  1. Network Application Performance. Deke Kassabian and Shumon Huque, ISC Networking & Telecommunications. February 2002 Super Users Group.

  2. Introduction: What this talk is all about • Network performance on the local area network and around campus • Network performance in the wide area and for advanced applications • Goal: acceptable performance and a positive user experience

  3. Who needs to be involved? • End Users • Researchers • Local Support Providers • Application Developers • System Programmers/Administrators • Network Engineers

  4. What is performance? “Performance” might mean … • Elapsed time for file transfers • Packet loss over a period of time • Percentage of data needing retransmission • Drop-outs in video or audio • Subjective “feeling” that feedback is “on time”

  5. Throughput. Throughput is the amount of data that arrives per unit time. “Goodput” is the amount of data that arrives per unit time, minus the amount of that data that was retransmitted.

  6. Delay. Delay is a time measurement for data transfer: • One-way network delay for a bit in transit (NIC to NIC) • Delay for a total transfer (stack to stack) • Time from mouse click to the screen message that the “operation is complete” (eyeball to eyeball)

  7. Jitter. Jitter is the variation in delay over time: • A non-issue for non-realtime applications • May be problematic for some applications with real-time interactive requirements, such as video conferencing • E2E delay of 70 ms +/- 5 ms -> low jitter • E2E delay of 35 ms +/- 20 ms -> higher jitter
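
  A rough sketch of one way to estimate jitter from measured one-way delays; the sample values below are made up for illustration and roughly match the slide's 70 ms +/- 5 ms case:

      # Estimate mean delay and jitter from a list of one-way delay
      # samples (milliseconds). The samples are made up for illustration.
      from statistics import mean, pstdev

      delay_samples_ms = [68.0, 71.5, 69.2, 72.8, 70.1, 67.9, 73.0, 70.4]

      avg = mean(delay_samples_ms)
      jitter = pstdev(delay_samples_ms)        # one simple jitter estimate
      peak_to_peak = max(delay_samples_ms) - min(delay_samples_ms)

      print(f"mean delay     : {avg:.1f} ms")
      print(f"std-dev jitter : {jitter:.1f} ms")
      print(f"peak-to-peak   : {peak_to_peak:.1f} ms")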

  8. Some Contributors to Delay • Slow networks • Slow computers • Poor TCP/IP stacks on end-stations • Poorly written applications

  9. Analysis of Delay. Diagram: data sent from host A to host B, showing insertion time, propagation delay, and processing delay.

  10. Analysis of Delay. Send 1,000 bits from A to B, with an acknowledgement, over 100 meters of fiber. The example message (From: Deke, To: Ira, Date: Mon Feb 12, 2002, 11:00AM EST, Subject: Lunch) reads: “Hey Ira, Meet you at the food trucks at noon! ^Deke”. Diagram values: insertion time 0.0001 sec, propagation delay 0.0000004 sec, processing delay 0.01 sec.

  11. Analysis of Delay. Send 1,000 bits from A to B, with an acknowledgement, over 100 meters of fiber. Total elapsed time: 0.0211008 seconds.

  12. Analysis of Delay. Add 2 switches and a router to the path (A - S - S - R - B): each switch adds 0.00002 sec and the router adds 0.002 sec. New total elapsed time: 0.0231408 seconds.
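
  A back-of-the-envelope version of the calculation above, assuming a 10 Mb/s link, a signal speed of about 2.5e8 m/s in fiber, and the slide's 0.01 sec processing delay; the slides' exact totals depend on bookkeeping details (such as ACK size) not shown here:

      # Rough delay calculation: send 1,000 bits from A to B over 100 m of
      # fiber, then get an acknowledgement back. Assumed numbers: 10 Mb/s
      # link, ~2.5e8 m/s propagation speed, 0.01 s processing per step.
      LINK_BPS   = 10_000_000      # 10 Mb/s (assumed)
      PROP_MPS   = 2.5e8           # signal speed in fiber (approx.)
      PROCESSING = 0.01            # seconds per processing step (from slide)

      bits, distance_m = 1_000, 100

      insertion   = bits / LINK_BPS          # ~0.0001 s, as on slide 10
      propagation = distance_m / PROP_MPS    # ~0.0000004 s, as on slide 10

      # One data transfer plus one acknowledgement, processed at each end:
      round_trip = 2 * (insertion + propagation + PROCESSING)
      print(f"insertion   : {insertion:.7f} s")
      print(f"propagation : {propagation:.7f} s")
      print(f"round trip  : {round_trip:.7f} s  (same order as slide 11's total)")

      # Slide 12: each switch adds 0.00002 s and the router adds 0.002 s.
      round_trip += 2 * 0.00002 + 0.002
      print(f"with 2 switches + 1 router: {round_trip:.7f} s")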

  13. Summary of Delay Analysis. Propagation delay is of little consequence in LANs, more of an issue for high-bandwidth WANs. Queueing delays are rarely major contributors. Processing delay is almost always an issue. Retransmission delays can be major contributors to poor network performance.

  14. Speaker Change

  15. What I’m going to talk about • More on delay contributors, their causes and how to minimize them • Protocol Stack behavior & tuning • Quality of Service (QoS) • Performance measurement tools • Operating System tuning examples • General comments about things you can do

  16. Recap: Delay Contributors • Processing Delay • Retransmission Delay • Queueing Delay • Propagation Delay

  17. Processing Delay • Time it takes to process a packet at an end-station or network node. Depends on: • Network protocol complexity, application code, computational power at node, NIC efficiency etc • Endstation Tuning • Application Tuning

  18. Endstation Tuning • Good network hardware/NICs • Correct speed/duplex settings • Auto-negotiation problems • Sufficient CPU • Sufficient Memory • Network Protocol Stack tuning • Path MTU discovery, Jumbo Frames, TCP Window Scaling, SACK etc
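
  As a concrete illustration of the stack-tuning knobs above, here is a small Linux-only sketch that reads a few of the relevant kernel settings (window scaling, SACK, and the socket buffer limits); on other operating systems these settings live elsewhere:

      # Linux-only sketch: read a few kernel settings related to the
      # stack-tuning bullets above. Prints "n/a" on systems without /proc.
      from pathlib import Path

      SYSCTLS = [
          "/proc/sys/net/ipv4/tcp_window_scaling",  # RFC 1323 window scaling
          "/proc/sys/net/ipv4/tcp_sack",            # RFC 2018 selective ACKs
          "/proc/sys/net/core/rmem_max",            # max receive buffer (bytes)
          "/proc/sys/net/core/wmem_max",            # max send buffer (bytes)
      ]

      for path in SYSCTLS:
          p = Path(path)
          value = p.read_text().strip() if p.exists() else "n/a (not Linux?)"
          print(f"{p.name:20s} {value}")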

  19. Ethernet Bandwidth/Duplex mode • Ethernet bandwidth: 10, 100, 1000 • 10 Gigabit Ethernet soon • Duplex modes: half-duplex, full-duplex • Auto-Negotiation • Mismatch Detection: • CRC/Alignment errors • Late Collisions

  20. Application Tuning • Optimize access to host resources • Pay attention to Disk I/O issues • Pay attention to Bus and Memory issues • Know what concurrent activity may be interfering with performance of app • Tuning application send/receive buffers • Efficient application protocol design • Positive end user feedback • Subjective perception of performance

  21. Retransmission Delay • Causes • Packet loss • Bad hardware: NICs, switches, routers, transmission lines • Congestion and Queue drops • Out of order packet delivery • May be considered packet loss from application’s perspective if it can’t re-order packets • Untimely delivery (delay) • Some apps may consider a packet to be lost if they don’t receive it in a timely fashion

  22. Retransmission Delay (cont) • Mitigating retransmission delay • Ensure working equipment • Although some packet loss is unavoidable; e.g. most transmission lines have a BER (Bit Error Rate) • Reduce time to recover from packet loss • E.g. a highly tuned network stack with more aggressive retransmission and recovery behavior • Forward Error Correction (FEC), sketched below • Very useful for time/delay-sensitive applications • Also for cases when it’s expensive to retransmit data
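
  To illustrate the FEC idea mentioned above, here is a minimal XOR-parity sketch; it is only a toy that shows the principle (one parity packet lets the receiver rebuild any single lost packet in a group), not a production FEC code:

      # Toy XOR-parity FEC: one parity packet per group lets the receiver
      # rebuild any single lost packet in that group without a retransmit.
      def parity(packets):
          """XOR equal-length packets together into one parity packet."""
          out = bytearray(len(packets[0]))
          for pkt in packets:
              for i, byte in enumerate(pkt):
                  out[i] ^= byte
          return bytes(out)

      group = [b"AAAA", b"BBBB", b"CCCC"]   # equal-sized data packets
      fec = parity(group)                    # sent alongside the group

      # Receiver got packets 0 and 2 plus the parity; packet 1 was lost.
      received = [group[0], group[2], fec]
      recovered = parity(received)
      print(recovered == group[1])           # True: lost packet rebuilt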

  23. Bit Errors on WAN paths • Bit Error Rate (BER) specs for networking interfaces/circuits may not be low enough: • 1 bit-error in 10 billion bits • Assuming 1500 byte packets • Packet error rate: 1 in 1 million • 10 hops => 1 in 100,000 packet drop rate
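
  The slide's numbers, worked through (the slide rounds; the exact figures are of the same order of magnitude):

      # Work through slide 23's numbers.
      BER         = 1e-10          # 1 bit error per 10 billion bits
      PACKET_BITS = 1500 * 8       # 1500-byte packets
      HOPS        = 10

      # Probability a packet crosses one link with no bit error:
      p_clean_link  = (1 - BER) ** PACKET_BITS
      per_link_loss = 1 - p_clean_link
      path_loss     = 1 - p_clean_link ** HOPS

      print(f"per-link packet error rate : {per_link_loss:.2e} (~1 in {1/per_link_loss:,.0f})")
      print(f"10-hop packet error rate   : {path_loss:.2e} (~1 in {1/path_loss:,.0f})")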

  24. Queueing Delay • Long queueing delays can be caused by weak or poorly designed hardware (switches/routers) • Head-of-line blocking • Insufficient switching fabric • Insufficient horsepower • Unfavorable QoS treatment

  25. Queueing Delay (cont) • How to reduce • Use good network hardware • Improved network architecture • Reduce number of switching/routing elements on the network path • Richer network topology, more interconnections • End user may not have influence over architecture • Employ preferential queue scheduling algorithms • Will discuss later in QoS section of talk

  26. Propagation Delay • Restricted by speed of light through transmission medium • Can’t be changed, but rarely a concern in the campus/LAN environment • A concern in long distance paths (WAN), but • Some steps can be taken to increase performance (throughput) on such paths

  27. Other delays and bottlenecks • Intermediary systems • DNS • Routing issues • Route availability, asymmetric routing, routing protocol stability and convergence time • Firewalls • Tunnels (IPSec VPNs, IP in IP tunnels etc) • Router hardware poor at encap/decap

  28. Throughput • Influenced by a number of variables: • All the delay factors we discussed • Window size (for TCP) • Bottleneck link capacity • End station processing and buffering capacity

  29. What I’m going to talk about next • Brief description of TCP/IP protocol • How to improve TCP/IP performance

  30. Transport: TCP vs UDP • Network apps use 2 main transport protocols: • TCP (Transmission Control Protocol) • Connection oriented (telephone like service) • Reliable: guarantees delivery of data • Flow control • Examples: Web (HTTP), Email (SMTP, IMAP) • UDP (User Datagram Protocol) • Connectionless (postal system like) • Unreliable: no guarantees of delivery • Examples: DNS, various types of streaming media
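
  A tiny sketch of the API difference between the two transports; the host name and ports below are placeholders, not real services:

      # TCP vs UDP from an application's point of view. "example.net" and
      # the port numbers are placeholders, not real services.
      import socket

      # TCP: connection-oriented, reliable byte stream.
      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
          tcp.connect(("example.net", 80))     # three-way handshake here
          tcp.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
          reply = tcp.recv(4096)               # delivery and ordering guaranteed

      # UDP: connectionless datagrams, no delivery guarantee.
      with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
          udp.sendto(b"ping", ("example.net", 9999))   # fire and forget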

  31. When to use TCP or UDP? • Many common apps use TCP because it’s convenient • TCP handles reliable delivery, retransmissions of lost packets, re-ordering, flow control etc • You may want to use UDP if: • Delays introduced by ACKs are unacceptable • TCP congestion avoidance and flow control measures are unsuitable for your application • You want more control of how your data is transported over the network • Highly delay/jitter sensitive apps often use UDP • Audio-video conferencing etc

  32. Network Stack Tuning • Jumbo Frames • Path MTU Discovery • TCP Extensions: • Window Scaling - RFC 1323 • Fast Retransmit Fast Recovery • Selective Acknowledgements

  33. Jumbo Frames • Increase MTU used at link layer, allowing larger maximum sized frames • Increases Network Throughput • Fewer larger frames means: • Fewer CPU interrupts and less processing overhead for a given data transfer size • Some studies have shown Gigabit Ethernet using 9000 byte jumbo frames provided 50% more throughput and used 50% less CPU! • (default Ethernet MTU is 1500 bytes)
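
  Rough arithmetic behind the jumbo-frame argument, assuming a hypothetical 1 GB transfer and 40 bytes of TCP/IP headers per packet:

      # The same transfer needs far fewer frames (and thus fewer interrupts
      # and less per-packet processing) at a 9000-byte MTU than at 1500.
      TRANSFER_BYTES = 1_000_000_000     # 1 GB transfer, for illustration
      TCP_IP_HEADERS = 40                # IPv4 + TCP headers, no options

      for mtu in (1500, 9000):
          payload = mtu - TCP_IP_HEADERS
          frames = -(-TRANSFER_BYTES // payload)      # ceiling division
          overhead = frames * TCP_IP_HEADERS
          print(f"MTU {mtu}: {frames:,} frames, "
                f"{overhead/TRANSFER_BYTES:.2%} header overhead")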

  34. Jumbo Frames (cont) • Pitfalls: • Not widely deployed yet • Many network devices may not be capable of jumbo frames (they’ll look like bad frames) • May cause excessive IP fragmentation • BER may have more impact on jumbo frames • E.g. a single bit-error can cause a large amount of data to be lost and retransmitted • May have negative impact on host processing requirements: • More memory for buffering, newer NICs

  35. Path MTU Discovery • MTU (Max Transmission Unit) • Max sized frame allowed on the link • Path MTU • Min MTU on any network in the path between 2 hosts • IP Fragmentation & Reassembly • Path MTU Discovery • MSS (Max Segment Size) • What happens without PMTU discovery? • Might select wrong MTU and cause fragmentation • Suboptimal selection of TCP MSS (536 default?)
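
  A Linux-only sketch of asking the kernel to perform PMTU discovery on a connected socket and reading back the learned path MTU; the numeric fallbacks are the standard Linux values for these socket options (used in case this Python build does not export the named constants), and the host name is a placeholder:

      # Linux-only: enable PMTU discovery on a connected TCP socket and ask
      # the kernel what path MTU it has learned so far.
      import socket

      IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)  # Linux value
      IP_PMTUDISC_DO  = getattr(socket, "IP_PMTUDISC_DO", 2)    # Linux value
      IP_MTU          = getattr(socket, "IP_MTU", 14)           # Linux value

      with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
          s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
          s.connect(("example.net", 80))       # placeholder host
          print("path MTU so far:", s.getsockopt(socket.IPPROTO_IP, IP_MTU))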

  36. Path MTU Discovery (cont). Diagram: host A reaches host B through routers R1, R2, and R3, over links with MTUs of 9000, 4474, 9000, and 1500 bytes; IP fragmentation may occur where the MTU drops, and the Path MTU is 1500.

  37. TCP Sliding Window • TCP uses a flow control method called “Sliding Window” • Allows the sender to send multiple segments before it has to wait for an ACK • Results in a faster transfer rate because the sender doesn’t have to wait for an ACK each time a packet is sent • Receiver advertises a window size that tells the sender how much data it can send without waiting for an ACK

  38. TCP Sliding Window (cont)

  39. Slow Start • In actuality, TCP starts with a small window and slowly ramps it up (up to rwin) • Congestion Window (cwnd) • Controls startup and limits throughput in the face of congestion • cwnd initialized to 1 segment • cwnd gets larger after every new ACK • cwnd gets smaller when packet loss is detected • Slow Start growth is actually exponential

  40. Congestion Avoidance • Assumption: packet loss is caused by congestion • When congestion occurs, slow down the transmission rate • Reset cwnd to 1 on a timeout • Use slow start until we reach the halfway point where congestion occurred (ssthresh) • Then use linear increase • Increase cwnd by ~1 segment/RTT
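
  A toy per-RTT simulation of slides 39-40 (slow start, then congestion avoidance, with a pretend loss); real TCP involves many more details, such as timeouts versus duplicate ACKs and fast recovery:

      # Toy model: exponential slow start up to ssthresh, then linear
      # congestion avoidance; on a (simulated) timeout, ssthresh drops to
      # half of cwnd and slow start begins again.
      def simulate(rtts=20, rwin=64, loss_at_rtt=10):
          cwnd, ssthresh = 1, rwin
          for rtt in range(1, rtts + 1):
              if rtt == loss_at_rtt:             # pretend a timeout happened
                  ssthresh = max(cwnd // 2, 2)
                  cwnd = 1                       # restart from slow start
              elif cwnd < ssthresh:
                  cwnd = min(cwnd * 2, ssthresh) # slow start: exponential
              else:
                  cwnd = min(cwnd + 1, rwin)     # congestion avoidance: linear
              print(f"RTT {rtt:2d}: cwnd = {cwnd:3d} segments (ssthresh = {ssthresh})")

      simulate()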

  41. TCP Behavior. Graph (from Peter O’Neill, NCAR): cwnd vs. time, showing slow start (exponential increase), congestion avoidance (linear increase), packet loss signaled by D-ACK or timeout, then retransmit and slow start again. • Recovery after a loss can be very slow on today’s high delay/bandwidth links

  42. TCP Throughput Acceleration (From Phil Dykstra)

  43. TCP Window Size Tuning • TCP performance depends on: • Transfer rate (bandwidth) • Round trip time • BW*Delay product • TCP Window should be sized to be at least as large as the BW*Delay product

  44. BW*Delay Product • BW*Delay product measures: • Amount of data that would fill the network pipe • Buffer space required at sender and receiver to achieve the max possible TCP throughput • Amount of unacknowledged data that TCP must handle in order to keep pipe full

  45. BW*Delay example • A path from Penn to Stanford has: • Round trip time: 60 ms • Bandwidth: 120 Mbps • BW * Delay = • 60/1000 sec * 120 * 1000000 bits/sec • = 7200000 bits = 7200 Kbits • = 900 Kbytes • So TCP window should be at least 900KB
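
  The same arithmetic in code:

      # Slide 45's bandwidth*delay calculation, done in code.
      rtt_seconds   = 60 / 1000          # 60 ms round trip time
      bandwidth_bps = 120 * 1_000_000    # 120 Mbps

      bdp_bits  = bandwidth_bps * rtt_seconds
      bdp_bytes = bdp_bits / 8

      print(f"BW*Delay = {bdp_bits:,.0f} bits = {bdp_bytes/1000:,.0f} KB")
      # -> 7,200,000 bits = 900 KB, so the TCP window should be at least ~900 KB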

  46. TCP Window Scaling • RFC 1323: TCP Extensions for High Performance • Allows scaling of TCP window size beyond 64KB (16 bit window field) • Introduces new TCP option • Note: In previous example, TCP needs to support Window Scaling to use 900KB window
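
  A sketch of requesting a receive buffer well above 64 KB; the OS must support RFC 1323 window scaling, system limits (such as net.core.rmem_max on Linux) cap what is actually granted, and modern stacks often auto-tune buffers, so this is only needed in special cases:

      # Ask for a receive buffer larger than 64 KB so the kernel can
      # advertise a scaled window. What you actually get is capped by
      # system limits and may be auto-tuned anyway.
      import socket

      WANTED = 900 * 1024          # ~900 KB, from the Penn-Stanford example

      s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, WANTED)
      granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
      print(f"asked for {WANTED} bytes, kernel granted {granted} bytes")
      s.close()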

  47. Window Scaling Pitfalls • Why not use large windows always? • Might consume large memory resources • May not be useful for all applications • Isn’t useful in the campus/LAN environment

  48. Fast Retransmit / Fast Recovery • TCP is required to send an immediate D-ACK (duplicate ACK) when an out-of-order packet is received • After 3 D-ACKs, the sending TCP retransmits only one segment • Also performs congestion avoidance, but not slow start. Diagram: segments 1 through 7 in flight, with one lost segment causing D-ACKs.

  49. TCP Selective Acks (SACK) • RFC 2018 • Allows TCP to efficiently recover from multiple segment losses within a window • Without retransmitting entire window
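
  A conceptual sketch of what SACK information buys the sender: given the ranges the receiver reports as received, only the holes need to be retransmitted. This illustrates the idea only; it is not how a real TCP stack stores its state:

      # From the receiver's SACK blocks, work out which byte ranges are
      # still missing, so only those are retransmitted.
      def missing_ranges(snd_una, snd_nxt, sack_blocks):
          """Return (start, end) byte ranges not yet acknowledged."""
          holes, cursor = [], snd_una
          for start, end in sorted(sack_blocks):
              if cursor < start:
                  holes.append((cursor, start))
              cursor = max(cursor, end)
          if cursor < snd_nxt:
              holes.append((cursor, snd_nxt))
          return holes

      # Window covers bytes 1000-9000; receiver SACKed 3000-5000 and 6000-8000.
      print(missing_ranges(1000, 9000, [(3000, 5000), (6000, 8000)]))
      # -> [(1000, 3000), (5000, 6000), (8000, 9000)]  (retransmit only these)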

  50. Enough about TCP
