
Bluetooth Performance




Presentation Transcript


  1. CS 218 Fall 02 Oct 14, 2002 Bluetooth Performance

  2. Bluetooth Performance Piconets: • polling models and latency • TCP, Video over BT • BT and 802.11 performance comparison Scatternets: • formation • gateway scheduling Applications/Case studies: • Medical environment • Convention center

  3. Efficient Polling Schemes for Bluetooth Picocells

  4. ACL Access Scheme: Polling • Asynchronous connections [Figure: master polling slave 1 and slave 2 over slots 1-10; Bluetooth packet format: access code, header, payload]

  5. Access Scheme • Asynchronous connections (ACL) • MASTER solicits transmissions from slaves • SLAVE • transmits packets only after a POLL from the master • if the slave queue is empty, a NULL packet is transmitted • 1-, 3-, or 5-slot packets can be used, by either the master or the slave
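
A minimal sketch of the ACL exchange described above (illustrative names, not Bluetooth-spec code and under stated assumptions): the master sends a data packet or a bare POLL, and the addressed slave answers with a 1-, 3-, or 5-slot data packet, or NULL if its queue is empty.

```python
# Illustrative sketch of one master-slave ACL exchange (not spec code).
from collections import deque

class Slave:
    def __init__(self):
        # slave-to-master packets, e.g. {"type": "DH3", "slots": 3}
        self.queue = deque()

    def on_poll(self):
        """Return the next queued packet, or a single-slot NULL packet."""
        return self.queue.popleft() if self.queue else {"type": "NULL", "slots": 1}

def master_poll(master_queue, slave):
    """One exchange: downlink data packet (or bare POLL), then the uplink reply."""
    downlink = master_queue.popleft() if master_queue else {"type": "POLL", "slots": 1}
    uplink = slave.on_poll()
    return downlink, uplink
```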

  6. Optimal Polling Scheme • Optimal (min. avg. delay) polling policy: Exhaustive Round Robin (Liu et al., On Optimal Polling Policies, Queueing Systems, 1992): • exhaustive (never leave a non-empty queue) • visit order: longest queue first (when the server switches to another queue, the longest one must be selected)

  7. BT Polling Scheme • Bluetooth polling differs from classical polling in that: • a transmission from the master to a slave is always combined with a slave-to-master transmission opportunity • the master has only partial status knowledge (it knows only its own queues) • thus, classical polling models cannot be used directly (but we can use them as benchmarks)

  8. Comparing BT Polling Schemes Ideal Polling: Algorithm B1 (complete state knowledge): • the server does not switch to another master-slave pair until both queues are empty • the visit order is defined in decreasing order of the sum of the master and slave queue lengths Realizable schemes: • PRR (Pure Round Robin): one packet per visit • ERR (Exhaustive Round Robin): clean out the entire queue
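
A rough sketch of how the two realizable policies differ, building on the master_poll() helper sketched above; the ERR stop condition used here (quit when an exchange carries no data) is the natural one, not a detail given on the slide.

```python
def prr_cycle(pairs):
    """Pure Round Robin: exactly one master/slave exchange per pair per cycle."""
    for master_q, slave in pairs:
        master_poll(master_q, slave)

def err_cycle(pairs):
    """Exhaustive Round Robin: keep polling a pair until an exchange carries no data."""
    for master_q, slave in pairs:
        while True:
            down, up = master_poll(master_q, slave)
            if down["type"] == "POLL" and up["type"] == "NULL":
                break  # both queues of this pair are drained
```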

  9. Simulation model • ad hoc C++ simulator • single picocell with 7 slaves • 14 independent traffic generators (7 master-to-slave, 7 slave-to-master) • traffic model: Poisson arrivals, exponentially distributed message lengths (mean 8 or 50 packets) • packet size (1, 3, or 5 slots) dynamically chosen according to the queue length • infinite queue lengths
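
A sketch of one of the 14 traffic generators, under stated assumptions: the slide does not give the exact packet-size selection rule, so the greedy rule below (use the largest packet type the backlog can fill) is only a plausible choice.

```python
import random

def message_arrivals(rate, mean_len, horizon):
    """Poisson message arrivals with exponentially distributed lengths
    (mean 8 or 50 packets, as in the simulation model)."""
    t = 0.0
    while True:
        t += random.expovariate(rate)
        if t >= horizon:
            return
        yield t, max(1, round(random.expovariate(1.0 / mean_len)))

def choose_packet_slots(queue_len):
    """Dynamically pick a 1-, 3-, or 5-slot packet from the backlog (assumed rule)."""
    if queue_len >= 5:
        return 5
    if queue_len >= 3:
        return 3
    return 1
```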

  10. Simulation Results • PRR has the poorest performance at low/medium loads • But, at high loads PRR outperforms even B1!

  11. More Idealized schemes • At high loads the exhaustive scheme wastes slots when one queue of the pair is empty and the other is not • Idealized Algorithm B2: • similar to B1 but partially exhaustive: • when one queue of the pair is empty, the server switches to another slave; master-slave pairs with both queues non-empty are visited first • Idealized Algorithm PP (Garg et al., MoMuC '99): dynamic weighted round robin (if both queues are non-empty the weight is 4, if only one is non-empty the weight is 1, if both are empty the weight is 0)
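
The PP weight rule stated in the bullet above reduces to a small function. Note that PP is idealized: it needs the slave-queue state, which a real Bluetooth master does not have.

```python
def pp_weight(master_has_data, slave_has_data):
    """Dynamic weight for the PP weighted round robin (idealized scheme)."""
    if master_has_data and slave_has_data:
        return 4   # both directions have traffic
    if master_has_data or slave_has_data:
        return 1   # only one direction has traffic
    return 0       # idle pair: skipped
```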

  12. Results: simple model [Plots: average message length = 8 packets; average message length = 50 packets] • B2 and PP also perform well at high loads • BUT: again, a very simple scheme like ERR performs almost as well as the ideal ones • the results presented so far suggest that we do not need complex polling schemes Thus: stick with Round Robin schemes!

  13. Improved exhaustive scheme: Limited Round Robin (LRR) • At high loads the exhaustive scheme ERR wastes slots when one queue of the pair is empty and the other is not • Worse yet, the channel can be captured by stations generating very high traffic (as shown later) • To avoid these problems, we modify ERR by limiting the number t of transmissions that each pair can perform per cycle • the new scheme is called Limited Round Robin (LRR)
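
A sketch of LRR on top of the ERR sketch above; the only change is the per-pair cap of t exchanges per cycle (t = 4 in the results on the next slide).

```python
def lrr_cycle(pairs, t=4):
    """Limited Round Robin: like ERR, but at most t exchanges per pair per cycle."""
    for master_q, slave in pairs:
        for _ in range(t):
            down, up = master_poll(master_q, slave)
            if down["type"] == "POLL" and up["type"] == "NULL":
                break  # pair went idle before using its budget
```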

  14. LRR Results [Plots: average message length = 8 packets; average message length = 50 packets] • with t = 4, LRR forms the "lower envelope" of ERR and PRR

  15. Fairness in Mixed Traffic Scenario: • one master-slave pair generating long bursts (l = 500 packets) • 6 master-slave pairs generating short bursts (l = 8 packets) Average burst transmission delays at load = 0.84: • the separate homogeneous cases (i.e., 8-packet and 500-packet bursts alone) yield: • 78 ms for l = 8 • 3051 ms for l = 500 ERR is obviously unfair! LRR performance is close to optimal (the homogeneous case)

  16. TCP traffic experiments • So far, we considered Poisson UDP sources • these sources are passive • the source rate does not depend on the service received • For more realistic scenarios we used TCP sources, which react to MAC-level scheduling • We propose a modification of the LRR scheme to improve performance with reactive (TCP) traffic

  17. LWRR • LWRR (Limited Weighted Round Robin): • as in LRR, we limit the number t of transmissions per master-slave pair per visit • in addition, we measure slave "activity" • each slave is assigned a weight wi (initially wi = MP) • each time a pair is visited and no packet is exchanged, the weight is reduced by 1 (minimum weight wi = 1) • when packets show up, the weight wi is reset to MP • basically, the weight is a kind of "memory" • Action: the weight determines the polling frequency
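
A sketch of the LWRR bookkeeping, under stated assumptions: the slides give the weight update (decay by 1 on an idle visit, reset to MP on activity) but not the exact rule that maps the weight to a polling frequency; the cycle_no test below (a pair with weight w is visited every MP - w + 1 cycles) is one simple interpretation, not the authors' exact rule.

```python
MP = 4   # maximum weight (slide 18)
T  = 3   # exchange limit per visit used in the TCP experiments (slide 18)

def lwrr_cycle(pairs, weights, cycle_no):
    """One LWRR cycle: pairs with lower weight (recently idle) are polled less often."""
    for idx, (master_q, slave) in enumerate(pairs):
        # assumed mapping: weight w -> poll every (MP - w + 1) cycles
        if cycle_no % (MP - weights[idx] + 1) != 0:
            continue
        active = False
        for _ in range(T):
            down, up = master_poll(master_q, slave)
            if down["type"] == "POLL" and up["type"] == "NULL":
                break
            active = True
        # the weight acts as a "memory" of recent activity
        weights[idx] = MP if active else max(1, weights[idx] - 1)
```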

  18. TCP simulation model • The simulations were run using NS-2, augmented with Bluetooth modules • packet formats are DH1, DH3, and DH5 (payload lengths of 224, 1464, and 2712 bits, respectively), as specified in the BT recommendations • TCP Reno • t = 3 (maximum number of transmissions per queue) • MP = 4 (maximum activity weight)

  19. TCP Results [Plots: ON/OFF ratio 1/2; ON/OFF ratio 1/6] • "non-persistent" ON/OFF TCP sources • reset to slow start at the beginning of every new ON period • average ON time equal to 2 s

  20. Conclusions • Simple schemes can achieve performance very close to that of complex, even non-realizable, schemes • Proposed: a weighted round robin algorithm with dynamic weights and limited service (LWRR) • LWRR performs well with TCP (close to ERR performance) • Moreover, LWRR does not suffer from the channel "capture effect" exhibited by ERR

  21. Case Study #1 TCP on BT MAC Layer

  22. Case study #1: TCP on BT and 802.11 Experiments focused on TCP performance • Experiment #1: TCP in a single piconet. Each TCP connection starts from a different slave on the common piconet and goes through the access point (the BT master). • Experiment #2: TCP over multiple piconets. Each piconet supports a separate TCP connection.

  23. Exp #1: TCP on a single piconet

  24. Exp #2: TCP over multiple piconets, i.e., one TCP connection per piconet; also TCP on 802.11

  25. Exp #2 (cont): Throughput of individual flows

  26. TCP Experiment Conclusions • Bluetooth performance under TCP is predictable and dependable • Fair sharing across TCP connections • IEEE 802.11, on the other hand, is unfair and "capture"-prone • BT aggregate throughput can exceed IEEE 802.11 throughput if multiple piconets are present

  27. Case Study #2: Adaptive Multimedia over Bluetooth

  28. iMASH: Interactive Mobile Application Support for Heterogeneous clients CS: R. Bagrodia, M. Gerla, S. Lu, L. Zhang Medical School: D. Valentino, M. McCoy Campus Admin: A. Solomon UCLA Supported by NSF

  29. Motivation: Trends in Medical Care • Emphasis on primary care; fewer sub-specialists • Distributed multi-facility health care enterprises • Increasing volume and complexity of multimedia patient information (e.g., imaging, correlation of Oncology data) • Primary care physicians must frequently consult a limited number of (remote) sub-specialist physicians • Need for ubiquitous and nomadic access to the multimedia patient record • Many other industries will soon be facing this ‘need’

  30. Diverse Display Devices Use of different devices for different components of medical care • Imaging Workstation: high-quality medical imagery and multimedia patient records • Physician's PDA: for messaging and scheduling • Mobile Medical Notes: for reviewing and taking medical notes • Medical Workstation: multimedia patient records, including moderate-resolution images

  31. Adaptive Middleware Functions Wireless adaptive middleware functions include: • Mobility adaptation • Security adaptation • Data content adaptation • Multimedia rate adaptation The focus of this talk is on middleware for multimedia stream rate adaptation

  32. Traditional E-to-E adaptation scenario [Figure: MPEG/H.263 server and client; loss from congestion vs loss from error (channel fading, mobility, multi-hop, internal and external interference, jamming, environment noise); end-to-end QoS feedback via RTP, RTCP, SCTP, RAP, RCS, etc. (mostly packet loss, some use a bandwidth estimate)] • Problems with the E-to-E approach: • congestion loss vs random loss (e.g., interference, errors) • end-to-end measurements introduce latency

  33. New approach: Network Feedback [Figure: MPEG/H.263 server and client connected over multiple network links; each link runs Measure(), and Propagate() carries the results up to the application API; loss from congestion vs loss from error (channel fading, mobility, multi-hop, internal and external interference, jamming, environment noise)] • Network QoS feedback • Accurate available-bandwidth measurement and advertising
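
The slide names two primitives, Measure() and Propagate(); the minimal sketch below illustrates the idea, but the signatures and the min-combining rule are assumptions of this sketch, not the authors' API.

```python
def measure(per_hop_links):
    """Per-link available-bandwidth estimates in kbps (one Measure() per hop)."""
    return [link["avail_kbps"] for link in per_hop_links]

def propagate(per_hop_estimates):
    """Advertise the path bottleneck to the sender's application API:
    the path can sustain no more than its tightest link."""
    return min(per_hop_estimates)
```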

  34. Case study: 64-node network • Adaptation schemes evaluated: • End-to-End Adaptation: • RTP Adaptation • PP (Packet Pair) Adaptation • AB (Available Bandwidth) Adaptation • Network Feedback Adaptation • 7 video rates: 180, 128, 88, 64, 32, 16 Kbps • Bluetooth and 802.11b
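
Given the advertised bandwidth, the sender drops to one of the discrete video rates listed above; the "highest rate that fits" rule used here is an assumption of the sketch, not a rule stated on the slide.

```python
VIDEO_RATES_KBPS = [180, 128, 88, 64, 32, 16]   # rates as listed on the slide

def pick_video_rate(advertised_kbps):
    """Pick the highest encoding rate not exceeding the advertised bandwidth."""
    for rate in VIDEO_RATES_KBPS:        # already sorted high to low
        if rate <= advertised_kbps:
            return rate
    return VIDEO_RATES_KBPS[-1]          # otherwise fall back to the lowest rate
```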

  35. Loss vs Load: 802.11 single hop • Network Feedback shows the minimum loss rate • AB probing is the next-best performer • RTP does better than direct PP measurements

  36. Loss vs Load: 802.11 multi-hop • The Network Feedback scheme is affected by latency and routing overhead as the number of connections grows

  37. Loss vs Load: Bluetooth scatternet • Bluetooth performance: • End-to-End adaptation is almost as efficient as Network Feedback adaptation, thanks to Bluetooth's master-centric control

  38. Conclusions • Loss rates are lower in both 802.11 and Bluetooth when adaptation is used • End-to-End adaptation in 802.11 improves QoS but does not scale • Only Network Feedback strategies are suitable (and scalable) in 802.11 multihop networks • In Bluetooth, End-to-End adaptation works well, in contrast to the asynchronous 802.11 MAC

  39. Bluetooth Scatternets

  40. Bluetooth Scatternets • The current Bluetooth specification defines the notion of interconnected piconets, called scatternets • It does not define the actual mechanisms and algorithms needed to set them up and maintain them

  41. Why Scatternets? A natural extension of the piconet concept to support: • Workgroups • Sensor "fabrics" • Interconnection of PANs • Piconet range and node-count extension

  42. Outline • Scatternet Architecture • Inter-Piconet Scheduling Algorithms • Gateway Forwarding Throughput (affects capacity) • Optimal Rendezvous Point Assignment • Conclusions and Open Issues

  43. Scatternet Architecture Gateway nodes divide their (slotted) time (superframe) between many Piconets

  44. Scatternet Research • Research on scatternets is currently limited by: • proprietary work • unresolved intra-piconet issues • the present market focus on PANs and piconets • Challenge: • Architecture: how can scatternets be made to work in Bluetooth systems?

  45. Inter-piconet Scheduling Algorithms • The rendezvous point (RP): • how do the master and slave agree on the RP, and how strict is the commitment to this RP? • The rendezvous window (RW): • given that the master and slave units are both present at an RP, how much data will they be able to exchange (i.e., how large should the RW be)?

  46. Rendezvous Point Assignment Problem [Figure: superframe cycle showing gateways G1, G2, G3 and the slaves present during each gateway's presence] RV points impact scatternet capacity. How to coordinate RV points across the scatternet? • Parameters: • the home master piconet has existing RV points • the visiting master piconet has existing RV points • the gateway has RV points • other piconets connected to the gateway have RV points • RV info is exchanged at link establishment time

  47. Forwarding Throughput [Figure/equation: gateway G forwarding from piconet i to piconet j; terms include the slaves present during the gateway's presence, the presence fraction as defined by the RVs, and the piconet-pair forwarding throughput; the equation itself did not survive extraction] Compute the forwarding throughput as a function of the RV points.
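
Since the equation was lost, the following is only a hedged reconstruction built from the labels that remain (presence fractions defined by the RV points, slaves present during the gateway's presence); it is not the authors' exact expression.

```latex
% Hedged reconstruction, not the original formula: gateway G cannot forward
% from piconet i to piconet j faster than the smaller of the capacity shares
% it receives while present in each piconet.
T_G(i \rightarrow j) \;\lesssim\; C \cdot
    \min\!\left( \frac{f_i^G}{n_i}, \; \frac{f_j^G}{n_j} \right)
```

Here $f_i^G$ would be the fraction of the superframe G spends in piconet $i$ (set by the RV points), $n_i$ the number of slaves present during that time, and $C$ the effective piconet data rate.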

  48. Validation of forwarding throughput • All nodes participate in 640 Kbps CBR traffic • the superframe is 120 slots

  49. Validation of forwarding throughput • From the equation: 98.3 Kbps • Simulation (CBR from M8 to M24): throughput 99 Kbps

  50. Algorithm 1: RV-Separation (non-optimal scheme) • When a slave or gateway senses a new master • Decision: will a gateway connection be established? • If yes, pick the most separated points in the gateway's superframe • for example, 0 and 50 in a 100-slot superframe for a new gateway • Extra step: if many such points exist, pick the ones most distant from the RV points in the home and visiting piconets
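
A sketch of the "most separated points" step, under stated assumptions: the slide gives only the 0/50 example for a 100-slot superframe, and the circular-distance maximization below is one natural reading of "most separated".

```python
def most_separated_point(existing_rvs, superframe_len=100):
    """Pick the slot that maximises the minimum circular distance to every
    existing RV point (e.g. 50 when the only existing RV point is 0)."""
    if not existing_rvs:
        return 0
    def min_dist(slot):
        return min(min((slot - rv) % superframe_len,
                       (rv - slot) % superframe_len) for rv in existing_rvs)
    return max(range(superframe_len), key=min_dist)
```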
