
Content-Aware Device Benchmarking Methodology (draft-hamilton-bmwg-ca-bench-meth-04)



Presentation Transcript


  1. BMWG Meeting, Maastricht, July 2010
     Mike Hamilton, mhamilton@breakingpoint.com, BreakingPoint Systems
     Content-Aware Device Benchmarking Methodology (draft-hamilton-bmwg-ca-bench-meth-04)

  2. Agenda
     • Why draft-hamilton?
     • Charter objections/responses
     • Goals reset
     • Explicit goals of this draft
     • Explicit non-goals of this draft

  3. Why draft-hamilton?
     • RFC 2544 doesn’t specifically apply to some modern, content-aware devices
     • Test vendors are already doing this in a one-off fashion
       • BreakingPoint, Spirent, Ixia, Agilent, etc.

  4. Charter Objections
     • “the scope of the BMWG is limited to technology characterization using simulated stimuli in a laboratory environment.”
     • “Said differently, the BMWG does not attempt to produce benchmarks for live, operational networks.”
     • This does not restrict BMWG from creating benchmark tests that are representative of VERY SPECIFIC live, operational networks.

  5. Goals Reset
     • Create a series of benchmark tests to MOST accurately predict device performance under realistic conditions FOR A SPECIFIC SIMULATED NETWORK
     • RFC 2544 quotes (Page 11, Section 18, “Multiple Frame Sizes”):
       • “The distribution MAY approximate the conditions on the network in which the DUT would be used.”
       • “The authors do not have any idea how the results of such a test would be interpreted other than to directly compare multiple DUTs in some very specific simulated network.”
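
To make the “Multiple Frame Sizes” quote concrete: one way to approximate “the conditions on the network in which the DUT would be used” is to bucket frame lengths observed on that specific network into standard test sizes. The sketch below is only an illustration; the function name, bin boundaries, and sample capture are assumptions, not taken from RFC 2544 or the draft.

    from collections import Counter

    def empirical_frame_mix(observed_lengths,
                            bins=(64, 128, 256, 512, 1024, 1280, 1518)):
        """Bucket observed Ethernet frame lengths into standard test sizes and
        return the fraction of traffic falling into each bucket."""
        counts = Counter()
        for length in observed_lengths:
            # Assign each frame to the smallest standard size that holds it.
            bucket = next((b for b in bins if length <= b), bins[-1])
            counts[bucket] += 1
        total = sum(counts.values())
        return {size: counts[size] / total for size in bins}

    # Hypothetical frame lengths captured on the specific network being simulated.
    sample = [64, 70, 1500, 576, 64, 1518, 300, 64, 1200, 90]
    print(empirical_frame_mix(sample))

The resulting per-size fractions can then drive a mixed-size throughput run against each DUT being compared.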

  6. Explicit Goals
     • Repeatable results
     • Compare multiple DUTs

  7. Explicit Non-Goals
     • Not a replacement of RFC 2544
     • Total input repeatability (discussion to follow)

  8. Test Run Setup
     • Methodologies run:
       • RFC 2544 Throughput (64B + 1518B)
       • RFC 3511 Throughput (1 kB + 512 kB)
       • IMIX Throughput
         • CAIDA
         • Spirent
         • Wikipedia
         • Agilent-simple
       • draft-hamilton-03 (random)
       • draft-hamilton-04 (shell)
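
The IMIX runs listed above differ from the single-size RFC 2544 runs in that each mix defines its own table of frame sizes and relative weights. A minimal sketch of building such an offered load follows; the 7:4:1 weighting shown is only an illustrative “simple IMIX”, and the actual CAIDA, Spirent, Wikipedia, and Agilent-simple tables are not reproduced here.

    import random

    # Illustrative 7:4:1 "simple IMIX" weighting; the CAIDA, Spirent, Wikipedia
    # and Agilent-simple mixes each define their own size/weight table.
    SIMPLE_IMIX = {64: 7, 576: 4, 1518: 1}   # frame size (bytes) -> relative weight

    def offered_load(mix, n_frames):
        """Return a list of frame sizes whose proportions follow the mix's weights,
        in contrast to the single-size RFC 2544 runs (all 64B or all 1518B)."""
        sizes = list(mix)
        weights = [mix[s] for s in sizes]
        return random.choices(sizes, weights=weights, k=n_frames)

    frames = offered_load(SIMPLE_IMIX, 1000)
    print({size: frames.count(size) for size in SIMPLE_IMIX})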

  9. Test Results

  10. Fuzzing Results

  11. Draft-04 Highlights and Reasons
     • “Shell” methodology
       • More reproducible
       • Backoff on ‘realistic’
       • Compromise
     • Dropped ‘security’
       • Difficult to scope and maintain currency
     • Maintain ‘fuzzing’ aspect
       • Random but repeatable
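
“Random but repeatable” is usually achieved by deriving every pseudo-random test input from an explicit seed that is reported alongside the results, so two labs can regenerate identical inputs. The helper below is a hypothetical sketch of that idea, not code from the draft.

    import random

    def fuzzed_payloads(seed, count, max_len=512):
        """Generate pseudo-random payloads deterministically from `seed`.
        Re-running with the same seed reproduces exactly the same inputs,
        which is the "random but repeatable" property the slide refers to."""
        rng = random.Random(seed)
        payloads = []
        for _ in range(count):
            length = rng.randint(1, max_len)
            payloads.append(bytes(rng.getrandbits(8) for _ in range(length)))
        return payloads

    # Same seed, same inputs: runs against different DUTs stay directly comparable.
    assert fuzzed_payloads(1234, 5) == fuzzed_payloads(1234, 5)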
