
Analytic Evaluation of Quality of Service for On-Demand Data Delivery




  1. Analytic Evaluation of Quality of Service for On-Demand Data Delivery Hongfei Guo (guo@cs.wisc.edu) Haonan Tan (haonan@cs.wisc.edu)

  2. Outline • Background • Two Multicast Protocols • Customized MVA Analysis • Validation • Model Improvement (Interpolation) • Evaluation of Different Multicast Protocols • Conclusion & Future Work CS747 Project Presentation

  3. Background • Eager et al. derived minimum server bandwidth requirements, but what about quality of service? – Balking probability – Waiting time • Given: – Server bandwidth – Multicast protocol

  4. Two Multicast Protocols • Grace Patching – Shared multicast stream (current data) – Unicast “patch” stream (missed data) – Average required server bandwidth

  5. Two Multicast Protocols (cont’d) • Hierarchical Multicast Stream Merging – Each data transmission stream is multicast – Clients accumulate data faster than the file play rate – Clients are merged into larger and larger groups – Once merged, clients listen to the same streams – Average required server bandwidth
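The bandwidth scalings behind the two protocols can be sketched numerically. This is a rough illustration assuming Poisson arrivals, with n = λT the mean number of client arrivals per file duration; the square-root form for patching and the logarithmic form for stream merging (with the constant η ≈ 1.62 taken as illustrative) follow commonly cited approximations from Eager et al.'s analysis, not the slides' exact formulas:

```python
import math

def patching_bandwidth(n):
    """Approximate average server bandwidth (in units of the file play
    rate) for optimized patching; n = lambda * T, the mean number of
    arrivals per file duration. Grows as sqrt(n)."""
    return math.sqrt(2 * n + 1) - 1

def hmsm_bandwidth(n):
    """Approximate average server bandwidth for hierarchical multicast
    stream merging. Grows only logarithmically in n; the constant eta
    is illustrative, not taken from the slides."""
    eta = 1.62
    return eta * math.log(n / eta + 1)

# Compare the two scalings as client load grows.
for n in (1, 10, 100, 1000):
    print(n, round(patching_bandwidth(n), 2), round(hmsm_bandwidth(n), 2))
```

The gap widens with load, which previews the later conclusion that stream merging outperforms patching.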

  6. CMVA Analysis • Customer Balking Model – Fixed number of streams in the server – An arriving customer leaves if no stream is available • Customer Waiting Model – Fixed number of streams in the server – An arriving customer waits until it is served – Customers with the same request coalesce in the waiting queue
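The balking model's behavior (a fixed pool of C streams, arrivals lost when all are busy) resembles a classic loss system. As a point of reference, a minimal sketch of the Erlang-B blocking probability, assuming Poisson arrivals and an offered load a = arrival rate × mean stream duration; this is a simplification that lumps all files together, not the slides' per-file CMVA model:

```python
def erlang_b(c, a):
    """Blocking probability for a loss system with c servers (streams)
    and offered load a, computed with the standard stable recurrence
    B(k) = a*B(k-1) / (k + a*B(k-1)), B(0) = 1."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

# e.g. a server with 10 streams offered 8 stream-durations of work
print(erlang_b(10, 8.0))
```

The recurrence avoids the factorial overflow of the closed-form Erlang-B expression, so it stays accurate even for large C.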

  7. Input Parameters • C server capacity • λ external customer arrival rate • M number of file categories. For i = 1, 2, …, M: • Ki total number of distinct files in category i • Ti mean duration of the entire file in category i • θi Zipfian parameter of category i • Pi probability of accessing category-i files
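The per-file request probabilities implied by these inputs can be sketched as follows; `zipf_probs` and `file_probs` are illustrative helper names, not from the slides, and the category tuples in the example are made-up values:

```python
def zipf_probs(k, theta):
    """Request probability p_j proportional to 1/j**theta for the k
    files of a category, ranked by popularity (theta = 0 is uniform)."""
    weights = [1.0 / (j ** theta) for j in range(1, k + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def file_probs(categories):
    """Combine per-category Zipf distributions into overall per-file
    probabilities; categories is a list of (K_i, theta_i, P_i) tuples,
    where P_i is the probability of accessing category i."""
    probs = []
    for k, theta, p_cat in categories:
        probs.extend(p_cat * p for p in zipf_probs(k, theta))
    return probs

# Two hypothetical categories: 3 files with theta=1.0 (70% of requests),
# 2 files accessed uniformly (30% of requests).
probs = file_probs([(3, 1.0, 0.7), (2, 0.0, 0.3)])
```

The resulting vector is the pi used by both output-parameter slides that follow.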

  8. Output Parameters (Balking) • S1 average service time at center 1 • R0 mean residence time at center 0 • X system throughput. For i = 1, 2, …, #files on the server: • pi fraction of customer requests for file i • Ci’ average bandwidth for file i • S1i mean service time of file-i streams at center 1 • S0 mean service time at center 0 • Q0 mean queue length at center 0 • Xi throughput of streams serving file i • PB mean incoming customer balking probability

  9. Output Parameters (Waiting) • W mean waiting time for a request (not coalesced) • U system utilization • S overall mean stream duration estimate. For i = 1, 2, …, #files on the server: • pi fraction of customer requests for file i • Si mean stream duration for file i • Qi mean number of waiting requests (not coalesced) for file i • Xi mean throughput of requests (not coalesced) for file i • Ri mean residence time of a request (not coalesced) for file i • Ci’ average number of active streams for file i • Ri’ mean residence time adjusted for coalescing • Wi’ mean waiting time adjusted for coalescing

  10. (1) Customer Balking Model [Figure: queueing network with Center 0 feeding Center 1 (C streams); system throughput X] • Center 0 – SSFR center – Represents the waiting state of a stream • Center 1 – Delay center – Represents the active state of a stream

  11. CMVA Equations (protocol result; interarrival time) [equations not captured in transcript]

  12. (2) Customer Waiting Model [Figure: Center 0, a multi-channel server with C streams; system throughput X] • Center 0 – multi-channel server with C streams • Two kinds of measurements, from two perspectives: – The server sees only non-coalesced customer requests – Customers count both coalesced and non-coalesced requests

  13. CMVA Equations • Measurements for the server

  14. CMVA Equations (cont’d) • Measurements for the customers

  15. Validation (1)

  16. Validation (2)

  17. Validation (3)

  18. Comparison of Patching Results: Average Stream Duration – big error here!

  19. Interpolation of Stream Duration • g(Ni) – threshold for patching • Exact in two extreme cases: Wi → ∞ or Wi → 0 • Exact in other cases?
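The interpolation idea above can be sketched generically: blend two limiting estimates of mean stream duration so that the blend is exact as Wi → 0 and as Wi → ∞. The exponential weight below is one plausible choice of blend, not the slides' actual g(Ni), and `scale` is an assumed tuning parameter:

```python
import math

def interpolate_duration(w, s_zero_wait, s_long_wait, scale=1.0):
    """Blend two exact limiting estimates of mean stream duration:
    s_zero_wait applies as the wait w -> 0, s_long_wait as w -> infinity.
    alpha is 1 at w = 0 and decays to 0 as w grows, so both extremes
    are reproduced exactly."""
    alpha = math.exp(-w / scale)
    return alpha * s_zero_wait + (1 - alpha) * s_long_wait
```

Intermediate waits give a value strictly between the two limits, which is the behavior the slide asks about for the non-extreme cases.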

  20. Evaluation of Two Protocols (1)

  21. Evaluation of Two Protocols (2)

  22. Evaluation of Two Protocols (3)

  23. Evaluation of Two Protocols (4)

  24. Conclusion • Balking model – big relative error when utilization is low • Waiting model – good for HSMS, but underestimates Patching when utilization is high • Interpolation helps! • C* is a good trade-off between QoS and server utilization • HSMS is always better than Patching

  25. Future Work • Further investigate the discrepancy between model results and simulation results • Use the models to evaluate QoS of stream servers with multiple categories

  26. Comparison of Patching Results (1): Coalesce Fraction
