
Multicast Performance Measurement on the vBNS




  1. Multicast Performance Measurement on the vBNS NANOG 20 (Washington, DC) October 24, 2000 Robert Beverly (rbeverly@mci.net)

2. Background
• End-to-End nightly performance tests run since early 1995 across vBNS
• Goal: Develop analogous tests for multicast
• No longer possible to rely on crontab entries for test synchronization (1:N vs 1:1)
• Developed out-of-band signaling protocol to control tests

3. Network Details
• Tests utilize Sun Ultra2 hosts with OC12c ATM interfaces in each network POP
• PVC to local Juniper M40
• Juniper M40s have both POS (OC48c) and ATM (OC12c) links to other backbone network nodes
• POS links preferred
• PIM-SM domain

4. Signaling Protocol
• Signaling protocol designed to allow maximum flexibility
• Allows for arbitrary multicast topologies
• Uses TCP for reliability
• Messages:
  • Health check
  • Send N packets of size S on group G at rate R
  • Receive N packets on group G
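The slides give only the semantics of these control messages, not their on-the-wire encoding. Below is a minimal, hypothetical sketch of a line-oriented encoding for the three message types carried over the TCP signaling channel; the verbs and field order are assumptions made purely for illustration, not the actual vBNS protocol.

    # Hypothetical encodings for the three control messages sent over the
    # TCP signaling channel; the real vBNS protocol format is not given.

    def health_check() -> str:
        return "HEALTH\n"

    def send_cmd(n_packets: int, size_bytes: int, group: str, rate_mbps: float) -> str:
        # "Send N packets of size S on group G at rate R"
        return f"SEND {n_packets} {size_bytes} {group} {rate_mbps}\n"

    def recv_cmd(n_packets: int, group: str) -> str:
        # "Receive N packets on group G"
        return f"RECV {n_packets} {group}\n"

    def parse(line: str):
        """Split a received control message into (verb, args) for a test daemon."""
        verb, *args = line.strip().split()
        return verb, args

For example, send_cmd(50000, 4000, "233.12.40.1", 10.0) would express the transmit instruction used in the nightly tests (the group address here is a placeholder).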

5. Test Operation
• Coordinator checks health of all daemons
• Coordinator selects one sender and ten receivers
• Coordinator sends receive control instructions
• Receivers send IGMP membership reports
• Coordinator sends transmit control instruction
• Receivers collect loss and packet misordering statistics

6. Test Operation
• Receiver receives last expected packet or times out waiting on final packet
• Coordinator waits for acknowledgements from all receivers
• Coordinator gathers loss information, generates graphs and tables
• Select different transmitter, repeat
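Slides 5 and 6 describe the coordinator's sequencing. The following is a rough sketch of that control loop, reusing the hypothetical message encodings above and assuming each test daemon listens on a fixed TCP control port and that a final "REPORT" request retrieves a receiver's results; none of these details (port number, REPORT verb, one connection per message) come from the deck.

    import socket

    CONTROL_PORT = 9999                 # assumed daemon control port
    N, SIZE, RATE = 50000, 4000, 10.0   # packets, bytes, Mbps (values from slide 18)

    def rpc(host: str, msg: str) -> str:
        """Send one control message to a daemon and return its one-line reply."""
        with socket.create_connection((host, CONTROL_PORT)) as s:
            s.sendall(msg.encode())
            return s.makefile().readline().strip()

    def run_round(sender: str, receivers: list, group: str) -> dict:
        # 1. Health-check every daemon before starting the round.
        for host in [sender] + receivers:
            assert rpc(host, "HEALTH\n") == "OK"

        # 2. Arm the receivers first, so their IGMP joins (and the resulting
        #    PIM-SM state) exist before any test traffic is transmitted.
        for host in receivers:
            rpc(host, f"RECV {N} {group}\n")

        # 3. Trigger the sender.
        rpc(sender, f"SEND {N} {SIZE} {group} {RATE}\n")

        # 4. Collect per-receiver loss/misordering reports; a receiver replies
        #    once it has seen the last expected packet or its timeout fires.
        return {host: rpc(host, "REPORT\n") for host in receivers}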

7. Test Details
• Addresses selected from GLOP (RFC 2770)
• Administratively scoped
• Why ATM?
  • Models actual vBNS customer access method
  • Already deployed across all vBNS POPs
  • Easily controlled traffic shaping
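GLOP (RFC 2770) maps a 16-bit AS number into a dedicated /24 of 233/8 group space: 233.X.Y.0/24, where X and Y are the high and low octets of the AS number. A small sketch of that mapping (the AS number in the example is arbitrary, not the vBNS's):

    def glop_prefix(asn: int) -> str:
        """Return the GLOP (RFC 2770) /24 derived from a 16-bit AS number."""
        if not 0 < asn < 65536:
            raise ValueError("GLOP is defined only for 16-bit AS numbers")
        return f"233.{asn >> 8}.{asn & 0xFF}.0/24"

    print(glop_prefix(3112))   # -> 233.12.40.0/24 (example AS only)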

8. Control Host (Washington) instructs each receiver: "Expect 50000 Multicast Packets from Group (G)"

9. Each receiver responds with an IGMPv2 Membership Report for Group (G)
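The Membership Reports in slide 9 are emitted by each receiver's IP stack when the test daemon joins the group. A minimal sketch of that join on a UDP socket is below; the group address and port are placeholders, not the addresses used on the vBNS.

    import socket
    import struct

    GROUP, PORT = "233.12.40.1", 12345   # placeholder group and UDP port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Joining the group is what causes the host to send an IGMP Membership
    # Report, pulling the group into the local PIM-SM domain.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)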

10. Control Host (Washington) instructs the sender: "Send 50000 to Group (G)"
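A sketch of what the transmit instruction asks of the sender: N sequence-numbered UDP packets of size S to group G, paced to rate R. The sleep-based pacing shown here is only illustrative (far too coarse for the multi-hundred-Mbps tests later in the deck), and the packet layout, group, and port are assumptions.

    import socket
    import struct
    import time

    def send_test_traffic(group: str, port: int, n: int, size: int, rate_mbps: float):
        """Send n UDP packets of `size` bytes to `group` at roughly rate_mbps."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)

        gap = (size * 8) / (rate_mbps * 1e6)      # target inter-packet spacing (s)
        payload = bytearray(size)
        next_send = time.monotonic()

        for seq in range(n):
            struct.pack_into("!I", payload, 0, seq)   # 32-bit sequence number
            sock.sendto(payload, (group, port))
            next_send += gap
            delay = next_send - time.monotonic()
            if delay > 0:
                time.sleep(delay)

    # e.g. the nightly test parameters: 50000 packets, 4000 bytes, 10 Mbps
    # send_test_traffic("233.12.40.1", 12345, 50000, 4000, 10.0)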

11. The sender begins transmitting test traffic to Group G; its first-hop router Registers the traffic with the RP

12. (S,G) State Installed at the RP

13. Test traffic to Group G reaches the receivers via the Shared Tree (rooted at the RP)

14. (S,G) State Installed along the path from the receivers toward the source (SPT built using PIM-SM)

15. Receivers now see traffic via SPT

16. Each receiver returns a Receiver Report (includes which packets were lost) to the Control Host (Washington)
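Given per-packet sequence numbers like those in the sender sketch above, the receiver report's contents (which packets were lost, plus any misordering) fall out of a simple pass over the observed sequence numbers. A minimal sketch, under the same assumed packet layout:

    def analyze(seqs_received: list, n_expected: int) -> dict:
        """Summarize loss and misordering from received sequence numbers."""
        seen = set(seqs_received)
        lost = [s for s in range(n_expected) if s not in seen]

        # Count a packet as misordered if it arrives with a lower sequence
        # number than one already seen (a simple, common definition).
        misordered, high = 0, -1
        for s in seqs_received:
            if s < high:
                misordered += 1
            else:
                high = s

        return {
            "expected": n_expected,
            "lost": len(lost),
            "lost_pct": 100.0 * len(lost) / n_expected,
            "misordered": misordered,
            "lost_seqs": lost,      # "which packets were lost"
        }

    # analyze([0, 1, 3, 2, 5], 6) -> 1 lost (seq 4), 1 misordered, ~16.7% loss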

17. Test Results
• Nightly test results available at: http://www.vbns.net/stats/mcast
• Both absolute and time relative loss presented
• Nature of loss (bursty, continuous, etc)
• Result data validated with OCxMONs

18. Test Results – Loss Report

Multicast Loss Percentage [Wed Oct 11 00:11:43 EDT 2000]
Packets: 49984   Pkt Size: 4000 Bytes   Rate: 10 Mbps

                                     Receiver
SRC |  AST    DNG    DNJ    HAY    HSJ    NOR    PYM    RTO    SEJ    WAE    WOR
----+----------------------------------------------------------------------------
ast |   -    0.074  0.094  0.166  0.016  0.006  0.006  0.016  0.182  0.006  0.006
dng | 0.046    -    0.006  0.166  0.006  0.008  0.222  0.022  0.132  0.022  0.022
dnj | 0.098  0.014    -    0.026  0.014  0.010  0.488  0.026  0.116  0.022  0.022
hay | 0.068  0.036  0.024    -    0.176  0.036  0.070  0.028  0.012  0.048  0.048
hsj | 0.018  0.010  0.010  0.090    -    0.008  0.018  0.006  0.006  0.018  0.018
nor | 0.040  0.016  0.016  0.066  0.016    -    0.172  0.016  0.018  0.028  0.028
pym | 0.024  0.038  0.084  0.172  0.040  0.026    -    0.040  0.236  0.024  0.026
rto | 0.048  0.046  0.086  0.036  0.016  0.046  0.198    -    0.004  0.048  0.070
sej | 1.283  0.114  0.086  0.158  0.388  0.114  1.296  0.070    -    1.280  1.280
wae | 0.016  0.124  0.140  0.184  0.140  0.016  0.016  0.140  0.704    -    0.016
wor | 0.492  0.480  0.486  0.568  0.504  0.448  0.450  0.504  0.572  0.492    -

19. Test Results – SNMP Polling

ROOT: jn1.ast.vbns.net (Null hostent.)
ROOT: 204.147.136.134 (jn1-so7-0-0-0.ast.vbns.net)
   1: 204.147.136.139 (jn1-so7-0-0-2.mej.vbns.net) [0:04:05] [56142]
   2: jn1.dng.vbns.net (Null hostent.) [0:04:39] [56142]
ROOT: 204.147.136.134 (jn1-so7-0-0-0.ast.vbns.net)
   1: 204.147.136.139 (jn1-so7-0-0-2.mej.vbns.net) [0:04:07] [56142]
   2: 204.147.136.144 (jn1-so7-0-0-2.dng.vbns.net) [0:04:40] [56142]
   3: jn1.dnj.vbns.net (Null hostent.) [0:04:08] [56096]
ROOT: 204.147.136.129 (jn1-so7-0-0-1.ast.vbns.net)
   1: 204.147.136.136 (jn1-so7-0-0-0.wae.vbns.net) [0:04:47] [53185]
   2: 204.147.136.133 (jn1-so7-0-0-0.wor.vbns.net) [0:04:12] [53149]
   3: jn1.nor.vbns.net (Null hostent.) [0:04:12] [53107]
ROOT: 204.147.136.129 (jn1-so7-0-0-1.ast.vbns.net)
   1: 204.147.130.162 (jn1-at1-0-0-13.wae.vbns.net) [0:04:50] [53185]
   2: jn1.pym.vbns.net (Null hostent.) [0:04:13] [50016]

20. Test Results – Loss Pattern

ast   Detected 484 lost pkts (50016 expected)   0.968% loss
dng   Detected 72 lost pkts (50016 expected)    0.144% loss

21. Test Results – Practical Application
• Detect performance problems
  • Loss
  • Reordering
• Determine vBNS backbone multicast performance
• Detect multicast routing anomalies
  • Detected lost tunnel PIC

22. Causes of Loss
• State initiation delay
• Congested network path or network element
• Routing instabilities
• Inherently unreliable protocol (UDP)

23. Practical Implementation Problems
• No way to get OSPF routes into Juniper MRIB (inet.2) in JunOS 4.x
• Forced to export Sun /30 routes into iBGP via a JunOS policy statement
• IGMP membership reports must be carried in optioned IP packets for the Juniper to recognize them (contrary to RFC)
• Danger in running native multicast on production routers

24. Multi-Megabit Multicast
• Successfully demonstrated high-data-rate multicast from 1 sender to 10 receivers
• 1 million 4 KB packets at 380 Mbps
• Between 0.443% and 0.830% loss
• Backbone M40 routers perform very well (shared memory architecture)
• Currently trying to scale Sun performance hosts to even higher rates
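As a back-of-the-envelope check on those numbers (assuming the 4 KB packets are 4000 bytes, as in the nightly tests): 1 million such packets carry 32 Gbit of payload, so a 380 Mbps burst lasts roughly 84 seconds, with a packet leaving about every 84 microseconds.

    n_pkts, pkt_bytes, rate_bps = 1_000_000, 4000, 380e6

    bits_total = n_pkts * pkt_bytes * 8       # 3.2e10 bits = 32 Gbit
    duration_s = bits_total / rate_bps        # ~84.2 s per test burst
    pkt_rate   = rate_bps / (pkt_bytes * 8)   # ~11,875 packets/s
    gap_us     = 1e6 / pkt_rate               # ~84.2 us between packets

    print(f"{duration_s:.1f} s, {pkt_rate:.0f} pps, {gap_us:.1f} us/pkt")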

25. Other Multicast Measurement Tools
• Netcom Systems SmartMulticastIP
  http://www.netcomsystems.com/solutions/products/applications/0300_0025RevE_SmartMulticast.asp
• NLANR Multicast Beacon
  http://dast.nlanr.net/Projects/Beacon
• MRM
  http://imj.ucsb.edu/mrm/

26. Multicast Benchmarking Documents
• RFC 2432: Terminology for IP Multicast Benchmarking
• draft-ietf-bmwg-mcastm-04.txt: Methodology for IP Multicast Benchmarking

27. Further Research
• Full line rate (~580 Mbps) testing
• Group capacity testing
• Mixed-Class Throughput
• Latency/Jitter Measurements

  28. Questions?
