
SIP Performance Benchmarking draft-poretsky-sip-bench-term-04.txt


Presentation Transcript


  1. SIP Performance Benchmarking: draft-poretsky-sip-bench-term-04.txt, draft-poretsky-bmwg-sip-bench-meth-02.txt. BMWG, IETF-71, Philadelphia, March 2008. Carol Davids (IIT), Vijay Gurbani (Alcatel-Lucent), Scott Poretsky (NextPoint)

  2. Motivation
  • Problem Statement:
    • Service Providers are now deploying VoIP and multimedia services using the IETF-developed Session Initiation Protocol (SIP).
    • The industry lacks common terminology for SIP performance benchmarks.
    • SIP allows a wide range of configuration and operational conditions that can influence performance benchmark measurements.
  • Goals:
    • Service Providers can use the benchmarks to compare the performance of RFC 3261 network devices.
    • Vendors and others can use the benchmarks to ensure performance claims are based on common terminology and methodology.
    • The benchmark metrics can be applied when making deployment decisions for IETF SIP devices.

  3. Scope
  • [Slide figure: a Tester (Emulated Agents) exchanges SIP signaling with the SIP Server (DUT); the DUT together with a SIP ALG/NAT forms the SUT.]
  • The terminology draft defines performance benchmark metrics for black-box measurements of SIP networking devices.
  • The methodology draft describes how to measure the metrics for a DUT or SUT.
  • The DUT MUST be an RFC 3261-compliant device and MAY include SIP-aware Firewall/NAT and other functionality.
  • The SUT MAY be an RFC 3261-compliant device with a separate external SIP Firewall and/or NAT.
  • The benchmarks cover:
    • Control signaling in the presence of media, not the media itself
    • SIP transports (TCP, UDP, and TLS over these)
    • INVITE and non-INVITE scenarios
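  The scope above treats the DUT as a black box driven by emulated agents. As a rough illustration of what a single session attempt from an emulated agent can look like on the wire, here is a minimal Python sketch. The DUT address, tester address, and bare-bones INVITE are assumptions for illustration only; a real tester would implement full RFC 3261 transaction handling (retransmissions, ACK/BYE, authentication) or use an existing test tool.

```python
# Minimal black-box sketch: one SIP session attempt from an emulated agent
# toward a DUT over UDP. Addresses and the bare-bones INVITE are illustrative
# assumptions; they are not taken from the drafts.
import socket
import time
import uuid

DUT_ADDR = ("192.0.2.10", 5060)   # hypothetical DUT address (TEST-NET-1)
LOCAL_IP = "192.0.2.1"            # hypothetical tester (emulated agent) address


def build_invite(call_id, cseq):
    """Build a minimal INVITE request (no SDP body) for illustration only."""
    return (
        f"INVITE sip:uas@{DUT_ADDR[0]} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {LOCAL_IP}:5060;branch=z9hG4bK{uuid.uuid4().hex}\r\n"
        "Max-Forwards: 70\r\n"
        f"From: <sip:uac@{LOCAL_IP}>;tag={uuid.uuid4().hex[:8]}\r\n"
        f"To: <sip:uas@{DUT_ADDR[0]}>\r\n"
        f"Call-ID: {call_id}\r\n"
        f"CSeq: {cseq} INVITE\r\n"
        f"Contact: <sip:uac@{LOCAL_IP}:5060>\r\n"
        "Content-Length: 0\r\n\r\n"
    ).encode()


def attempt_session(timeout=2.0):
    """Send one INVITE; return seconds until the first response, or None."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    start = time.monotonic()
    sock.sendto(build_invite(uuid.uuid4().hex, 1), DUT_ADDR)
    try:
        sock.recvfrom(65535)        # any provisional or final response
        return time.monotonic() - start
    except socket.timeout:
        return None
    finally:
        sock.close()


if __name__ == "__main__":
    delay = attempt_session()
    if delay is None:
        print("no response from DUT within timeout")
    else:
        print(f"first response after {delay:.3f} s")
```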

  4. Industry Collaboration
  • BMWG to develop a standard for benchmarking the SIP performance of a single device.
  • The SIPPING and BMWG chairs met in Montreal to discuss this SIP Performance Benchmarking work item when it was first brought to BMWG.
  • The PMOL WG is developing a standard for benchmarking end-to-end SIP application performance.
  • SPEC to develop industry-available test code for SIP benchmarking in accordance with the IETF's BMWG and SIPPING standards.

  5. Supported in SIP and SIPPING
  • From: Dean Willis <dean.willis@softarmor.com>
    Date: December 13, 2007 2:24:10 PM CST
    To: acmorton@att.com, Dan Romascanu <dromasca@avaya.com>
    Cc: "Vijay K. Gurbani" <vkg@alcatel-lucent.com>
    Subject: Benchmarking for SIP in BMWG
    Vijay Gurbani just brought it to my attention that there are some drafts related to the benchmarking of SIP systems being discussed in the BMWG working group. In general, I'm very supportive of this concept. We've tried to do it in SIP on several occasions in the past and just don't have the right mindset.
    I think that the development of vocabulary and practice around the benchmarking of SIP will be very helpful in the operator sector, as sizing and performance planning with COTS systems are currently very difficult to estimate "on paper" and currently require building up labs just to get baselines.
    Further, I think that the SIP "overload" work (http://www.ietf.org/internet-drafts/draft-hilt-sipping-overload-03) and SIMPLE's "presence scaling" analysis (draft-houri-sipping-presence-scaling-requirements-01) need to have a benchmarking framework in place so that we can really talk about the issues.
    Dean
  • Subject: Work on SIP performance issues
    Date: Tue, 18 Dec 2007 08:30:31 +0200
    From: Gonzalo Camarillo <Gonzalo.Camarillo@ericsson.com>
    To: Al Morton <acmorton@att.com>, "Romascanu, Dan (Dan)" <dromasca@avaya.com>
    CC: bmwg@ietf.org, Jon Peterson <jon.peterson@neustar.biz>, Cullen Jennings <fluffy@cisco.com>, Mary Barnes <mary.barnes@nortel.com>
    Hi,
    I understand that BMWG is in the process of deciding whether or not to work on SIP performance and benchmarking issues. The following drafts were brought to my attention:
    http://tools.ietf.org/html/draft-poretsky-sip-bench-term-03
    http://tools.ietf.org/html/draft-poretsky-bmwg-sip-bench-meth-02
    In July 2006 (IETF 66 in Montreal), as you can see in the MoMs below, the SIPPING WG showed support for work on SIP performance and measurement issues. This was seen as an interesting and useful topic by the SIP community.
    http://www3.ietf.org/proceedings/06jul/minutes/sipping.txt
    Cheers,
    Gonzalo
    SIPPING WG co-chair

  6. Benchmarks
  • Maximum Session Establishment Rate
  • Maximum Registration Rate
  • Maximum IM Rate
  • Session Capacity
  • Session Attempt Performance
  • Session Setup Delay
  • Session Disconnect Delay
  • Standing Sessions
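  The delay and attempt-performance items in this list are derived from per-attempt timestamps gathered by the tester. The sketch below shows one plausible way to post-process such a log into setup-delay, disconnect-delay, and success-ratio figures; the field names and formulas are illustrative assumptions, not definitions taken from the terminology draft.

```python
# Illustrative post-processing of per-attempt timestamps collected by a
# tester. Field names and formulas are assumptions for illustration; the
# authoritative definitions are in draft-poretsky-sip-bench-term.
from dataclasses import dataclass
from statistics import mean
from typing import Optional


@dataclass
class Attempt:
    invite_sent: float                       # time INVITE was sent (s)
    answered: Optional[float] = None         # time final 2xx arrived, None if failed
    bye_sent: Optional[float] = None
    bye_confirmed: Optional[float] = None    # time 200 OK to the BYE arrived


def summarize(attempts):
    """Compute simple attempt-performance and delay figures from a log."""
    established = [a for a in attempts if a.answered is not None]
    setup = [a.answered - a.invite_sent for a in established]
    disc = [a.bye_confirmed - a.bye_sent
            for a in established
            if a.bye_sent is not None and a.bye_confirmed is not None]
    return {
        "attempts": len(attempts),
        "established": len(established),
        "attempt_success_ratio": len(established) / len(attempts) if attempts else 0.0,
        "mean_session_setup_delay_s": mean(setup) if setup else None,
        "mean_session_disconnect_delay_s": mean(disc) if disc else None,
    }


if __name__ == "__main__":
    sample = [
        Attempt(0.00, answered=0.12, bye_sent=5.00, bye_confirmed=5.03),
        Attempt(0.10, answered=0.25, bye_sent=5.10, bye_confirmed=5.12),
        Attempt(0.20),                       # attempt that never got a 2xx
    ]
    print(summarize(sample))
```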

  7. BMWG Input
  • Dec 18, 2007: Jim McQuaid
    • Questions regarding convergence of the capacity metric
    • The description of the back-off method is to be clarified in the next version
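  The back-off comment above concerns how a sequence of trials converges on a maximum-rate or capacity figure. As a rough sketch of one possible convergence procedure (an assumption for illustration, not the procedure the methodology draft will specify), the following ramps the offered rate up geometrically until a trial fails and then bisects between the last passing and first failing rates; run_trial is a hypothetical hook into the tester.

```python
# Sketch of a possible back-off/convergence search for a maximum-rate
# benchmark. This is an illustrative assumption, not the procedure defined
# in draft-poretsky-bmwg-sip-bench-meth; `run_trial` is a hypothetical hook
# that offers sessions at a given rate for a fixed duration and returns
# True when every attempt succeeded.
from typing import Callable


def find_max_rate(run_trial: Callable[[float], bool],
                  start: float = 100.0,
                  resolution: float = 1.0) -> float:
    """Geometric ramp-up, then binary back-off to within `resolution`."""
    low, high = 0.0, start
    # Double the offered rate until a trial fails.
    while run_trial(high):
        low, high = high, high * 2
    # Bisect between the last passing and the first failing rate.
    while high - low > resolution:
        mid = (low + high) / 2
        if run_trial(mid):
            low = mid
        else:
            high = mid
    return low   # highest offered rate observed to pass


if __name__ == "__main__":
    # Toy stand-in for a tester: pretend the DUT sustains at most 437 sessions/s.
    fake_capacity = 437.0
    print(f"converged on {find_max_rate(lambda r: r <= fake_capacity):.1f} attempts/s")
```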

  8. Next Steps
  • Incorporate comments from the meeting and the mailing list.
  • Consider as a BMWG work item?
