
Estimating TCP Latency Approximately with Passive Measurements

Estimating TCP Latency Approximately with Passive Measurements. Sriharsha Gangam, Jaideep Chandrashekar, Ítalo Cunha, Jim Kurose.


Presentation Transcript


  1. Estimating TCP Latency Approximately with Passive Measurements — Sriharsha Gangam, Jaideep Chandrashekar, Ítalo Cunha, Jim Kurose

  2. Motivation: Network Troubleshooting
  • Home network or access link?
  • WiFi signal or access link?
  • ISP, enterprise, and content provider networks
  • Within the network or outside?
  • Latency and network performance problems? Where? Why?

  3. Passive Measurement in the Middle
  • A passive measurement device (gateway, router) sits on the path between endpoints A and B
  • Goal: decompose the path latency of TCP flows
  • Latency matters for interactive web applications
  • Other approaches: active probing, NetFlow
  • Challenge: constrained devices (limited resources) handling high data rates

  4. TCP Latency
  • Existing methods emulate TCP state for each flow
  • Accurate estimates, but expensive
  [Figure: the source sends segment 0–99 at time TM1; the destination's ACK 100 arrives at time TM2; RTT = TM2 − TM1]
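The exact per-flow bookkeeping described on this slide can be sketched as follows. This is a minimal illustration, not the tcptrace implementation: the class name `ExactRTTEstimator`, the string flow ids, and the timestamps are all assumptions.

```python
class ExactRTTEstimator:
    """Sketch of the exact (tcptrace-style) approach: remember the send
    time of every outstanding segment, keyed by (flow, expected ACK)."""

    def __init__(self):
        self.pending = {}  # (flow_id, expected_ack) -> send timestamp

    def on_segment(self, flow_id, seq, length, ts):
        # A segment covering bytes [seq, seq+length) expects ACK seq+length.
        self.pending[(flow_id, seq + length)] = ts

    def on_ack(self, flow_id, ack, ts):
        # If this ACK matches an outstanding segment, RTT = T_ack - T_segment.
        sent = self.pending.pop((flow_id, ack), None)
        return None if sent is None else ts - sent


est = ExactRTTEstimator()
est.on_segment("A->B", seq=0, length=100, ts=0.0)   # SEG 0-99 expects ACK 100
rtt = est.on_ack("A->B", ack=100, ts=0.042)          # ACK arrives 42 ms later
```

The per-entry timestamp dictionary is exactly the "large TCP state" the next slide argues is too expensive on constrained devices.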

  5. TCP Latency
  • Scaling challenges: large packet and flow arrival rates
  • Constrained devices (limited memory/CPU): large TCP state, expensive lookups
  • ALE – Approximate Latency Estimator

  6. Design Space
  • Existing solutions (e.g., tcptrace) are accurate but expensive for constrained devices
  • ALE's goal: a configurable tradeoff between accuracy and overhead, approaching the ideal solution

  7. ALE Overview
  • Eliminate packet timestamps: group packets into time intervals of granularity w
  • Store the expected ACK number: SEG 0–99 expects ACK 100
  • Use a probabilistic set-membership data structure: Counting Bloom Filters

  8. ALE Overview
  • Sliding window of buckets (time intervals); each bucket holds a counting Bloom filter (CBF)
  • On a segment, insert hash(FlowId, 99+1) into the current bucket; on ACK 100, look up hash(inv(FlowId), 100) across the buckets
  • A match in the bucket one shift old gives RTT = Δ + w + w/2, with maximum error w/2
  • Latencies larger than the window span W are missed
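The sliding window of CBF buckets can be sketched in Python. Everything here is an illustrative assumption: the class names `CountingBloomFilter` and `ALEUniform`, the SHA-256 double hashing, and the parameter values. A real deployment would also hash the inverted flow id on the ACK path; for simplicity the caller passes the data-direction flow id to both operations here.

```python
import hashlib


class CountingBloomFilter:
    """Bloom filter with counters instead of bits, so entries can be deleted."""

    def __init__(self, counters=1024, hashes=4):
        self.c = [0] * counters
        self.h = hashes

    def _idx(self, key):
        # Derive h counter indices per key (illustrative SHA-256 hashing).
        for i in range(self.h):
            d = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(d[:8], "big") % len(self.c)

    def add(self, key):
        for i in self._idx(key):
            self.c[i] += 1

    def contains(self, key):
        return all(self.c[i] > 0 for i in self._idx(key))

    def remove(self, key):
        for i in self._idx(key):
            if self.c[i] > 0:
                self.c[i] -= 1


class ALEUniform:
    """ALE-U sketch: n uniform buckets of width w seconds; bucket 0 is newest."""

    def __init__(self, n_buckets=12, w=0.020):
        self.w = w
        self.buckets = [CountingBloomFilter() for _ in range(n_buckets)]

    def shift(self):
        # Every w seconds: drop the oldest bucket, open a fresh current bucket.
        self.buckets.pop()
        self.buckets.insert(0, CountingBloomFilter())

    def on_segment(self, flow_id, seq, length):
        # Store the expected ACK number in the current bucket.
        self.buckets[0].add((flow_id, seq + length))

    def on_ack(self, flow_id, ack, elapsed_in_bucket):
        # A match in bucket i (i shifts old) yields the slide's midpoint
        # estimate: RTT ~= elapsed + i*w + w/2, with maximum error w/2.
        for i, b in enumerate(self.buckets):
            if b.contains((flow_id, ack)):
                b.remove((flow_id, ack))
                return elapsed_in_bucket + i * self.w + self.w / 2
        return None  # latency exceeded the window span, or a false negative


ale = ALEUniform(n_buckets=12, w=0.020)
ale.on_segment("A->B", seq=0, length=100)   # SEG 0-99 expects ACK 100
ale.shift()                                  # one bucket boundary passes
rtt = ale.on_ack("A->B", ack=100, elapsed_in_bucket=0.005)
# rtt = 0.005 + 1*0.020 + 0.010 = 0.035 s
```

Note how the state is fixed-size: no per-packet timestamps survive, only hashed counter increments in the window's CBFs.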

  9. Controlling Error with ALE Parameters: ALE-U (uniform)
  • Number of buckets = W/w
  • Increase W: higher coverage
  • Decrease w: higher accuracy

  10. ALE-Exponential (ALE-E)
  • Process large- and small-latency flows simultaneously
  • Bucket widths grow exponentially (w, 2w, 4w, …), so absolute error is proportional to the latency
  • Larger buckets shift slowly: adjacent CBFs are merged as buckets age
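The bucket merge that ALE-E performs as entries age into wider buckets can be sketched as element-wise counter addition, assuming both CBFs use the same counter-array size and hash functions (the function name `merge_cbfs` and the toy counter arrays are illustrative):

```python
def merge_cbfs(a, b):
    """Merge two counting Bloom filters by adding counters position-wise.
    The result answers membership queries for the union of both key sets,
    because each key's h counters remain >= 1 after the addition."""
    assert len(a) == len(b), "CBFs must share size and hash functions"
    return [x + y for x, y in zip(a, b)]


older = [0, 2, 1, 0, 3]   # toy counter arrays for two adjacent buckets
newer = [1, 0, 1, 0, 0]
merged = merge_cbfs(older, newer)   # [1, 2, 2, 0, 3]
```

Counter-wise addition is what makes the merge O(C) in the number of counters, as the bounds slide later states.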

  11. Error Sources
  • Bloom filters are probabilistic structures: false positives and false negatives
  • TCP artifacts: retransmitted packets and reordered packets
  • Erroneous RTT samples in excess of tcptrace's: ACK numbers not on segment boundaries, cumulative ACKs

  12. Evaluation Methodology
  • 2 Tier-1 backbone link traces (60 s each) from CAIDA
  • Flows: 2,423,461; packets: 54,089,453
  • Bi-directional flows: 93,791; packets: 5,060,357
  • Ground-truth/baseline comparison: tcptrace, which emulates the TCP state machine
  • Latencies of ALE vs. tcptrace, at the packet and flow level
  • Overhead of ALE vs. tcptrace: memory and compute

  13. Latency Distribution
  [Figure: latency distribution of the traces, with reference marks at 60 ms, 120 ms, 300 ms, and 500 ms covering the majority of the latencies]

  14. Accuracy: RTT Samples
  • Range 0–60 ms
  • Box plot of RTT differences relative to an exact match with tcptrace
  • More memory (smaller w, hence higher overhead) ⇒ closer to tcptrace

  15. Accuracy: RTT Samples
  • ALE-E(12) outperforms ALE-U(24) for small latencies
  [Figure: RTT differences relative to an exact match with tcptrace, for increasing overhead]

  16. Accuracy: Flow Statistics
  • Many applications use aggregated flow statistics
  • Small w (more buckets): the CDF of median flow latencies approaches an exact match with tcptrace

  17. Accuracy: Flow Statistics
  • Small w: the standard deviation of flow latencies approaches an exact match with tcptrace

  18. Compute and Memory Overhead
  • Sample flows uniformly at random at rates 0.1, 0.2, …, 0.7
  • 5 pcap sub-traces per rate
  • Higher sampling rate (data rate) ⇒ more state for tcptrace
  • GNU/Linux taskstats API measures compute time and memory usage

  19. Compute Time ALE scales with increasing data rates

  20. Memory Overhead
  • tcptrace: RSS ≈64 MB (rate 0.1) to ≈460 MB (rate 0.7); VSS ≈74 MB to ≈468 MB
  • ALE-U(96): RSS 2.0 MB and VSS 9.8 MB at all sampling rates
  • ALE uses a fixed-size data structure

  21. Conclusions
  • Current TCP measurement methods emulate TCP state: expensive at high data rates and on constrained devices
  • ALE provides a low-overhead data structure: a sliding window of CBF buckets
  • Improves on tcptrace's compute and memory costs while sacrificing only a small amount of accuracy
  • Tunable parameters
  • Simple hash functions and counters enable a hardware implementation

  22. Thank You

  23. ALE Compute and Memory Bounds
  • Insertion: O(h) time, for a CBF with h hash functions
  • Matching an ACK number: O(hn) time, across n buckets
  • Shifting buckets: O(1) time, using a linked list of buckets
  • ALE-E: O(C) time to merge CBFs, for C counters per CBF
  • Memory usage: n × C × d bits, for d-bit CBF counters
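The n × C × d memory bound is easy to evaluate concretely. The helper name and the example values plugged in below are assumptions: n = 12 buckets (as in ALE-E(12)), C = 40396 counters (the parameter slide's example), and d = 4-bit counters.

```python
def ale_memory_bytes(n_buckets, counters, counter_bits):
    """Total ALE memory from the bound n x C x d bits, converted to bytes."""
    return n_buckets * counters * counter_bits / 8


# Assumed example: 12 buckets x 40396 counters x 4 bits ~= 237 KiB.
mem = ale_memory_bytes(12, 40396, 4)
```

A fixed product of three small configuration constants, independent of traffic volume, is why ALE's footprint stays flat across sampling rates while tcptrace's grows into the hundreds of megabytes.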

  24. ALE Error Bounds
  • ALE-U: average-case error w/4, worst-case error w/2
  • ALE-E: for an ACK that matches in bucket i, average-case error (3w/16)·2^i, worst-case error 2^(i−1)·w

  25. ALE Parameters
  • Choose w based on accuracy requirements
  • Choose W as an estimate of the maximum latency expected
  • CBFs use h = 4 hash functions
  • Choose C based on the traffic rate, w, the false-positive rate, and h
  • E.g., for w = 20 ms, h = 4, and m = R × w inserted keys per bucket, the optimal-h constraint h = (C/m) ln 2 yields C = 40396 counters
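The sizing rule on this slide can be inverted to compute C directly. The function name is illustrative, and the packet rate R = 350,000 segments/s used below is an assumed value chosen so that m = R × w matches the slide's example result:

```python
import math


def counters_for_optimal_h(h, m):
    """Invert the optimal-hash-count relation h = (C/m) ln 2 to size the
    CBF: C = h * m / ln 2, where m is the expected number of keys
    inserted per bucket (m = R * w for packet rate R and bucket width w)."""
    return math.ceil(h * m / math.log(2))


# Slide example: w = 20 ms, h = 4; an assumed R of 350,000 segments/s
# gives m = R * w = 7000 and C = 40396 counters.
C = counters_for_optimal_h(4, 350_000 * 0.020)
```

Sizing C this way keeps h = 4 at the false-positive-optimal operating point for the expected per-bucket load, rather than picking the counter count by trial and error.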

  26. Compute Time 10 min trace ALE scales with increasing data rates

  27. Accuracy: RTT Samples

  28. Memory Overhead

  29. ALE Overview
  • Eliminate packet timestamps via time quantization: use a fixed number of intervals
  • Store the expected acknowledgement number
  [Figure: segments 0–99 through 800–899, sent during intervals T1, T2 = T1 + w, and T3 = T1 + 2w, are stored as their expected ACK numbers 100 through 900]

  30. Motivation: Network Troubleshooting
  • Latency-sensitive applications
  • Latency and network performance problems? Where? Why?
