
IETF BMWG Work Items


Presentation Transcript


  1. IETF BMWG Work Items 65th IETF Meeting Dallas, TX Tuesday 3/21/06

  2. BENCHMARKING NETWORK LAYER TRAFFIC CONTROL MECHANISMS draft-ietf-bmwg-dsmterm-11.txt draft-ietf-bmwg-dsmmeth-01.txt Co-authors are Scott Poretsky of Reef Point, Jerry Perser of Veriwave, Shobha Erramilli of Qnetworx, and Sumit Khurana of Telcordia 65th IETF Meeting – Dallas

  3. Terminology
     draft-ietf-bmwg-dsmterm-12.txt, Terminology for Benchmarking Network Layer Traffic Control Mechanisms
     • Terminology completed WGLC
     • Required Co-Chair review prior to IESG revealed a few issues that are now corrected:
       • Clarify that delay is Forwarding Delay
       • Minor grammar and format issues
     • Only remaining outstanding issue is reference to Jitter definition in obsoleted EF PHB RFC
     • Ready for IESG review?

  4. Methodology
     draft-ietf-bmwg-dsmmeth-01.txt, Methodology for Benchmarking Network Layer Traffic Control Mechanisms
     • Applies many of the terms from the Terminology draft
     • Test Cases:
       • Undifferentiated Response
       • Traffic Control Baseline Performance
       • Traffic Control Performance with Forwarding Congestion

  5. Methodology – Baseline Test Cases

                                          Expected Vector
                                                 |
                                                \/
        ---------       Offered Vector       ---------
       |         |<-------------------------|         |
       |         |                          |         |
       |   DUT   |                          | Tester  |
       |         |                          |         |
       |         |~~~~~~~~~~~~~~~~~~~~~~~~~>|         |
       |         |      Output Vector       |         |
        ---------                            ---------

     • Undifferentiated Response
       • This is the baseline case with:
         • Multiple flows of SA/DA pairs and DSCP=0 (BE)
         • Aggregate Offered Load is < Forwarding Capacity
     • Traffic Control Baseline Performance
       • This is the DSCP baseline case with:
         • Multiple flows of SA/DA pairs
         • Multiple DSCP values
         • Aggregate Offered Load is < Forwarding Capacity
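As an aside (not part of the draft), here is a minimal Python sketch of how a tester script could build the baseline offered vectors described above. The Flow fields, the addresses, the 1 Mpps forwarding-capacity figure, and the ~90% load target are all illustrative assumptions, not values from the methodology.

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class Flow:
    src: str       # source address (SA)
    dst: str       # destination address (DA)
    dscp: int      # DSCP marking (0 = best effort)
    rate_pps: int  # offered rate for this flow, packets per second

def baseline_offered_vector(num_flows, dscps, forwarding_capacity_pps):
    """Multiple SA/DA flows whose aggregate offered load stays below
    the DUT's forwarding capacity (the two baseline cases above)."""
    per_flow = (forwarding_capacity_pps // num_flows) * 9 // 10  # ~90% aggregate load
    dscp_iter = cycle(dscps)
    return [Flow(src=f"10.0.{i}.1", dst=f"10.1.{i}.1",
                 dscp=next(dscp_iter), rate_pps=per_flow)
            for i in range(num_flows)]

# Undifferentiated Response: every flow marked best effort (DSCP = 0)
undiff = baseline_offered_vector(8, [0], forwarding_capacity_pps=1_000_000)
# Traffic Control Baseline: same flows, multiple DSCP values
tc_base = baseline_offered_vector(8, [0, 10, 26, 46], forwarding_capacity_pps=1_000_000)
assert sum(f.rate_pps for f in undiff) < 1_000_000
```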

  6. Methodology – Congestion Test Cases

                                          Expected Vector
                                                 |
                                                \/
        ---------       Offered Vector       ---------
       |         |<-------------------------|         |
       |         |<-------------------------|         |
       |   DUT   |                          | Tester  |
       |         |                          |         |
       |         |~~~~~~~~~~~~~~~~~~~~~~~~~>|         |
       |         |      Output Vector       |         |
        ---------                            ---------

     • Traffic Control Performance with Forwarding Congestion
       • This is the DSCP congestion case with Link Congestion:
         • Multiple flows of SA/DA pairs
         • Multiple DSCP values
         • Aggregate Offered Load is > Forwarding Capacity
     • ADD Test Case: Traffic Control Performance with DSCP Congestion
       • No Link Congestion, but configured DSCP Bandwidth is Exceeded
     • Any input from WG?
       • Other test cases to add?
       • Any comments for methodology?
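Again as illustration only, and continuing the Flow objects from the previous sketch, the following shows how the two congestion cases differ. The per-DSCP bandwidth map and the function name are assumptions for the sake of the example.

```python
def classify_congestion(flows, forwarding_capacity_pps, dscp_bandwidth_pps):
    """Distinguish the two congestion test cases:
    - Forwarding (link) congestion: aggregate offered load > forwarding capacity.
    - DSCP congestion: aggregate load fits, but the load offered within some
      DSCP exceeds the bandwidth configured for that DSCP on the DUT."""
    aggregate = sum(f.rate_pps for f in flows)
    if aggregate > forwarding_capacity_pps:
        return "forwarding congestion"
    per_dscp = {}
    for f in flows:
        per_dscp[f.dscp] = per_dscp.get(f.dscp, 0) + f.rate_pps
    if any(load > dscp_bandwidth_pps.get(dscp, float("inf"))
           for dscp, load in per_dscp.items()):
        return "DSCP congestion"
    return "no congestion (baseline case)"

# DSCP 46 is offered more than its configured 50 kpps, but the link is not full
print(classify_congestion(tc_base, 1_000_000, {46: 50_000}))  # -> "DSCP congestion"
```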

  7. BENCHMARKING IGP DATA PLANE ROUTE CONVERGENCE draft-ietf-bmwg-igp-dataplane-conv-app-10.txt draft-ietf-bmwg-igp-dataplane-conv-term-10.txt draft-ietf-bmwg-igp-dataplane-conv-meth-10.txt Co-authors are Scott Poretsky of Reef Point and Brent Imhoff of Juniper Networks 65th IETF Meeting – Dallas

  8. Current Status
     draft-ietf-bmwg-igp-dataplane-conv-app-10.txt, Considerations for Benchmarking IGP Data Plane Route Convergence
     draft-ietf-bmwg-igp-dataplane-conv-term-10.txt, Terminology for Benchmarking IGP Data Plane Route Convergence
     draft-ietf-bmwg-igp-dataplane-conv-meth-10.txt, Benchmarking Methodology for IGP Data Plane Route Convergence
     • -08 successfully completed 2nd WGLC
     • -09 issued to correct IETF NITs and incorporate comments for formatting and clarification from Al Morton, Thomas Eriksson, and Timmons Player
     • -10 incorporates comments from Cross-Area Reviewer, Sue Hares (last step for IESG review)

  9. Changes for -09
     • Clean up Normative/Informative References
     • Clarify that time measurement granularity is to milliseconds
     • Specify that the packet size includes Payload, IP header, and Link-Layer header
     • Clarify the last sentence of the Convergence Packet Loss discussion
     • Fix figures with a formatting error of the ‘Tester’
     • Change "this draft describes" to "this document describes"
     • Make consistent use of the term Throughput (not Forwarding Rate)
     • Found RFC 3978 Section 5.4 paragraph 1 boilerplate (on line 696), which is fine, but *also* found RFC 2026 Section 10.4C paragraph 1 boilerplate on line 42; it should be removed
     • Considerations (Applicability) missing form feeds
     • Some lines between 73 and 77 characters long (26 instances), with control characters (52 instances), and with an extra space between words (5 instances)

  10. Cross Area Review
     • “Overall comment - very well done! Document is accurate and well thought out.”
     • A few document edits/nits found and fixed in -10
     • One comment not incorporated: “It would be very good to replicate the equations used by Cisco for ISIS or IGP convergence as an appendix:”

           LoC(p) = D + O + QSP + (h * F) + SPF(n) + RIB(p) + FIB(p) + DD + CRR

           D      = link outage
           O      = Originate OSPF
           QSP    = queue the LS updates
           h * F  = hops by flooding time
           SPF(n) = SPF calculation time
           RIB(p) = Routing RIB update time
           FIB(p) = FIB update time
           DD     = Logical circuit update time
           CRR    = Recursive Lookup for BGP

     • That equation, while being very useful, does not fit directly into this IGP work. It includes parameters that are White Box measurements, BGP time, and factors for multiple hops. Since it was suggested to be in an appendix, I felt more comfortable excluding it from this single-box, black-box IGP benchmark.
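For readers who want to see how the reviewer's cited equation composes, here is a throwaway sketch that simply sums its components. The numeric values are placeholders, not measurements, and none of this appears in the drafts.

```python
def loc_convergence(D, O, QSP, h, F, SPF_n, RIB_p, FIB_p, DD, CRR):
    """LoC(p) = D + O + QSP + (h * F) + SPF(n) + RIB(p) + FIB(p) + DD + CRR
    All terms are white-box component times (seconds) as listed above."""
    return D + O + QSP + (h * F) + SPF_n + RIB_p + FIB_p + DD + CRR

# Placeholder numbers only: 50 ms link outage (D), 3 flooding hops at 10 ms each, ...
print(loc_convergence(D=0.05, O=0.01, QSP=0.02, h=3, F=0.01,
                      SPF_n=0.10, RIB_p=0.20, FIB_p=0.30, DD=0.0, CRR=0.0))  # 0.71 s
```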

  11. Next Steps
     • -10 Ready for IESG Review?

  12. BENCHMARKING NETWORK DEVICES UNDER ACCELERATED STRESS draft-ietf-bmwg-acc-bench-term-08.txt draft-ietf-bmwg-acc-bench-meth-04.txt (draft-ietf-bmwg-acc-bench-meth-ebgp-00.txt draft-ietf-bmwg-acc-bench-meth-opsec-00.txt) Co-authors are Scott Poretsky of Reef Point and Shankar Rao of Qwest 65th IETF Meeting – Dallas

  13. Current Status
     Terminology
     • draft-ietf-bmwg-acc-bench-term-07.txt, Terminology for Accelerated Stress Benchmarking
     • -08 changes incorporate action items from IETF 64:
       • Specified the benchmark Recovery Time with microsecond resolution
       • Added discussion that benchmarks span multiple dimensions and each can be compared as the methodology user requires for the DUT application
       • Renamed "degraded forwarding rate" to "forwarding rate degradation"
     General Methodology
     • draft-ietf-bmwg-acc-bench-meth-05.txt, Methodology Guidelines for Accelerated Stress Benchmarking
     • -05 will incorporate action items from IETF 64. To be submitted by end of April.

  14. Next Steps
     • Is Terminology ready for WGLC?
     • -05 Methodology will incorporate comments from IETF 64 and BMWG mailing list. To be posted by end of April.

  15. Backup Slides

  16. Example Stress Test – Configuration Set
     Control Plane
     • 30 BGP Peers (2 EBGP, 28 IBGP)
     • 28 OSPF Adjacencies
     • 400K route instances
     • 175K routes in FIB
     • MPLS Disabled
     • Multicast Protocols Disabled
     • 16K IPsec Tunnels
     • 32K IPsec SAs
     • 16K IKE SAs
     • IPsec SA Lifetime = 8 hours
     • IKEv2 SA Lifetime = 8 hours
     • DPD Disabled
     Security Plane
     • 100K Stateful Firewall Sessions
     • 64K Firewall Rules
     • DOS-Protection Enabled
     Management Plane
     • 20 SSH Sessions
     • 4 RADIUS Servers with round-robin
     • Logging enabled
     • SysLog enabled
     • Statistics enabled
     Data Plane
     • Interfaces = qty 4 GigE
     • Data Rate = 4 Gbps
     • Packet Size = 1500 bytes
     • QoS Disabled
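Purely for illustration, the same example configuration set can be captured as structured data, which is the form a test harness might load and report alongside the benchmarks. Every key name below is an assumption of this sketch, not something defined by the drafts.

```python
# Illustrative only: the slide's example configuration set as structured data.
configuration_set = {
    "control_plane": {
        "bgp_peers": {"ebgp": 2, "ibgp": 28},
        "ospf_adjacencies": 28,
        "route_instances": 400_000,
        "fib_routes": 175_000,
        "mpls": False,
        "multicast": False,
        "ipsec_tunnels": 16_000,
        "ipsec_sas": 32_000,
        "ike_sas": 16_000,
        "ipsec_sa_lifetime_hours": 8,
        "ikev2_sa_lifetime_hours": 8,
        "dpd": False,
    },
    "security_plane": {
        "stateful_firewall_sessions": 100_000,
        "firewall_rules": 64_000,
        "dos_protection": True,
    },
    "management_plane": {
        "ssh_sessions": 20,
        "radius_servers": 4,  # round-robin
        "logging": True,
        "syslog": True,
        "statistics": True,
    },
    "data_plane": {
        "interfaces": "4 x GigE",
        "data_rate_gbps": 4,
        "packet_size_bytes": 1500,
        "qos": False,
    },
}
```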

  17. Example Stress Test – Test Conditions
     • Startup Conditions (as configured on Tester*)
       • BGP and OSPF pre-configured and negotiation starts immediately
       • 50 IPsec Tunnels established per second
       • 1500 Stateful Firewall Sessions established per second
     • Instability Conditions (as configured on Tester*)
       • 1 Interface Shut/No Shut per minute
       • 1 OSPF Interface Cost Change per hour
       • 100 IPsec Tunnels flapped (setup/teardown) per second
       • 20 IKEv2/IPsec Rekeys per second
       • RADIUS Server lost every 30 minutes
       • Continuous DOS Attacks (using Nessus)
       • Close/Open 1 SSH session per minute
       • Enter SHOW, Config, and Errored commands for every open session
       • 1 SNMP GET per second
       • 1 FTP File Transfer of 100 Mb every second
     * Tester is a Test Device or System of Test Devices
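Some back-of-the-envelope arithmetic (mine, not the slide's) on what those instability rates imply over a 24-hour run:

```python
# Rough event counts implied by the instability rates above over a 24-hour run.
HOURS = 24
MINUTES, SECONDS = HOURS * 60, HOURS * 3600  # 1,440 minutes / 86,400 seconds

instability_events = {
    "interface shut/no-shut":       1 * MINUTES,    # 1 per minute
    "OSPF interface cost changes":  1 * HOURS,      # 1 per hour
    "IPsec tunnel flaps":           100 * SECONDS,  # 100 per second
    "IKEv2/IPsec rekeys":           20 * SECONDS,   # 20 per second
    "RADIUS server losses":         2 * HOURS,      # one every 30 minutes
    "SSH session close/open":       1 * MINUTES,    # 1 per minute
    "SNMP GETs":                    1 * SECONDS,    # 1 per second
    "FTP transfers (100 Mb)":       1 * SECONDS,    # 1 per second
}
print(f"{sum(instability_events.values()):,}")  # roughly 10.5 million events
```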

  18. Example Stress Test – Benchmarks
     DEVICE #1
     1. Configuration Set achieved
     2. Startup Phase Benchmarks
        • Stable Aggregate Forwarding Rate = 4 Gbps
        • Stable Latency = 110 usec
        • Stable Session Count = 30 BGP Peers, 28 OSPF Adjacencies, 16K IPsec Tunnels
     3. Apply Instability Conditions
     4. Instability Phase Benchmarks*
        • Unstable Aggregate Forwarding Rate = 3.5 Gbps
        • Degraded Aggregate Forwarding Rate = 0.5 Gbps
        • Unstable Latency = 110 usec
        • Unstable Uncontrolled Sessions Lost = 126
     5. Stop applying Instability Conditions after X hours (24 for this test)
     6. Recovery Phase Benchmarks
        • Recovery Time = 22 seconds
        • Recovered Aggregate Forwarding Rate = 4 Gbps
        • Recovered Latency = 110 usec
        • Recovered Uncontrolled Sessions Lost = 0

     DEVICE #2
     1. Configuration Set achieved
     2. Startup Phase Benchmarks
        • Stable Aggregate Forwarding Rate = 4 Gbps
        • Stable Latency = 150 usec
        • Stable Session Count = 30 BGP Peers, 28 OSPF Adjacencies, 16K IPsec Tunnels
     3. Apply Instability Conditions
     4. Instability Phase Benchmarks*
        • Unstable Aggregate Forwarding Rate = 3.3 Gbps
        • Degraded Aggregate Forwarding Rate = 0.7 Gbps
        • Unstable Latency = 170 usec
        • Unstable Uncontrolled Sessions Lost = 4000
     5. Stop applying Instability Conditions after X hours (24 for this test)
     6. Recovery Phase Benchmarks
        • Recovery Time = Infinite
        • Recovered Aggregate Forwarding Rate = 3.9 Gbps
        • Recovered Latency = 150 usec
        • Recovered Uncontrolled Sessions Lost = 97

     * These are averages; it is recommended to record these values at 1-second intervals.
     Notes:
     • The Configuration Set in this test was reduced from a previous test because Device #2 crashed at 20 hours.
     • The test was repeated with a 3rd Configuration Set to obtain a Recovery Time for Device #2.
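To make the relationship between those numbers concrete, here is a small sketch (my own, not from the drafts): the Degraded Aggregate Forwarding Rate is the stable rate minus the unstable rate, and the Recovery Time is how long after the instability conditions stop the forwarding rate returns to its stable value. The sample data below is synthetic, shaped to match the Device #1 figures.

```python
def instability_benchmarks(stable_rate_gbps, unstable_rate_gbps, recovery_samples):
    """recovery_samples: (seconds since instability stopped, forwarding rate in Gbps).
    Returns (forwarding rate degradation, recovery time); recovery time is
    infinite if the rate never returns to the stable value."""
    degradation = stable_rate_gbps - unstable_rate_gbps
    recovery_time = next((t for t, rate in recovery_samples
                          if rate >= stable_rate_gbps), float("inf"))
    return degradation, recovery_time

# Synthetic 1-second samples that ramp back to 4.0 Gbps over 22 seconds,
# matching Device #1 above: 4.0 Gbps stable, 3.5 Gbps unstable.
samples = [(t, 3.5 + 0.5 * min(t / 22, 1.0)) for t in range(30)]
print(instability_benchmarks(4.0, 3.5, samples))  # -> (0.5, 22)
```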
