
Presentation of M.Sc. Thesis Work

Presentation of M.Sc. Thesis Work. Presented by: S. M. Farhad [040405056P], Department of Computer Science and Engineering, BUET. Supervised by: Dr. Md. Mostofa Akbar, Associate Professor, Department of Computer Science and Engineering, BUET.


Presentation Transcript


  1. Presentation of M.Sc. Thesis Work Presented by: S. M. Farhad [040405056P] Department of Computer Science and Engineering, BUET Supervised by: Dr. Md. Mostofa Akbar Associate Professor Department of Computer Science and Engineering, BUET

  2. Thesis Title Multicast Video-on-Demand Service in Enterprise Networks with Client Assisted Patching

  3. Video-on-Demand Service Architectures • Centralized: star-bus topology [diagram: one server with clients C1–C6] • Enterprise network: multiple servers [diagram: servers at switch nodes S1–S6 serving clients C1–C6]

  4. Video-on-Demand Service Architectures • Internet: overlay topology [diagram: a server reaching clients C1–C3 over overlay nodes S1–S6]

  5. Characteristics of VoD Services • Long-lived sessions (90–120 minutes) • High bandwidth requirements • MPEG-1 (1.5 Mbps) and MPEG-2 (3–10 Mbps) • VCR-like functionality • Pause • Fast-forward • Rewind • Slow motion, etc. • Quality of service (QoS) • Jitter-free delivery • No service interruption

  6. Classification of VoD Services • True VoD (TVoD) • User has complete control • High quality of service • Full-function VCR capabilities • Not scalable • Near VoD (NVoD) • Stream sharing • Service latency • Service interruption • Scalable

  7. Video-on-Demand Service Techniques • Unicast • Dedicated channel for each client • Easier to implement • Expensive to operate • Not scalable • Multicast • One-to-many data transmission • Complex system • Cost-effective • Scalable • Broadcast • Periodic time-shifted multicast over some fixed channels • Suitable only for popular movies • Increases initial service latency

  8. Unicast VoD Service in an Enterprise Network by M. M. Islam et al. [2005] • Designing an admission controller • Batching • Several media servers • K-shortest path (MD) • SLA (multi-choice) • Considers both network and server parameters • Profit maximization • MMKP (heuristics) [diagram: servers, switch nodes, clients, and the ADC]

  9. Limitations of Unicast Model • Not scalable • Expensive to operate • Profit maximization, hence might be unfair to some clients

  10. Multicast VoD Service by S. Deering [1989] • Avoids transmitting the same packet more than once on each link • Branch routers duplicate and send the packet over multiple downstream branches • Still not very scalable [diagram: a multicast tree from the server to clients through switch nodes, with the ADC]

  11. Batching Technique by A. Dan et al. [1994] • Multiple client requests for the same movie arriving within a short time • Can be batched together • Can be serviced using a single stream • More scalable than simple multicasting • Batching incurs a service latency (NVoD) • Increasing the batch duration increases the reneging probability but also increases the probability of forming a larger group
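The batching rule on this slide can be sketched in a few lines. The request format, the per-movie queues, and the interval semantics are illustrative assumptions, not the thesis's exact data structures:

```python
from collections import defaultdict

def batch_requests(requests, batch_interval):
    # Hypothetical request format: (arrival_time, client_id, movie_id).
    # Requests for the same movie arriving within `batch_interval` seconds
    # of a batch's first request join that batch and share one stream.
    batches = defaultdict(list)              # movie_id -> list of batches
    for t, client, movie in sorted(requests):
        open_batches = batches[movie]
        if open_batches and t - open_batches[-1][0][0] <= batch_interval:
            open_batches[-1].append((t, client))   # join the current batch
        else:
            open_batches.append([(t, client)])     # open a new batch
    return dict(batches)
```

With a 60-second interval, two requests for the same movie 30 seconds apart fall into one batch, while a request 200 seconds later opens a new one.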

  12. Dynamic Multicasting • The multicast tree is expanded dynamically to accommodate new clients • Eliminates service latency incurred by “Batching” technique • But it requires client side cache • Some works • Adaptive Piggybacking [1996] • Stream Tapping [1997] • Chaining [1997] • Patching [1998] • Some variants of patching

  13. Patching by K. Hua et al. [1998] [diagram: a regular stream and short patching streams from the server to clients C1–C7] • After the missing portion is patched, the patching channel is released
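A rough sketch of the patching decision just illustrated; the function and its return shape are illustrative, and the thesis's admission logic is richer than this:

```python
def patch_plan(session_start, request_time, patch_window):
    # A late client joins the ongoing regular stream immediately and receives
    # the missed prefix on a patching channel, which is released once the
    # prefix has been delivered; outside the patching window a new regular
    # stream must be started instead.
    missed = request_time - session_start
    if missed > patch_window:
        return (False, 0.0)   # too late: start a fresh regular stream
    return (True, missed)     # patch `missed` seconds of the prefix
```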

  14. Shortcoming of Patching • Incurs heavy server load (patching window effect) • Addressing this is one of our objectives [diagram: many simultaneous patching streams from the server]

  15. Multicast VoD in Enterprise Network with Client Assisted Patching • Our proposal • Features • Using multicast • Batching • Client Assisted Patching • Fair scheduling • Several media servers • Admission controller [diagram: servers, switch nodes, clients, and the ADC]

  16. Using Multicast with Batching • Multicast on the shortest path tree rooted at each server • Batching introduces service latency but increases the possibility of forming a larger group [diagram: shortest path multicast tree in the enterprise network]

  17. Client Assisted Patching: A New Patching Technique • Each client maintains a buffer and caches an initial part of a movie • When a new client request arrives shortly afterward, the Admission Controller selects a nearby client to supply the patching stream • The newly admitted client in turn can supply the patching stream to later clients • A client serves at most one other client • Benefits • Significantly decreases server load • Increases the scalability of the system • Eliminates the service latency incurred by batching

  18. Client Assisted Patching: A New Patching Technique [diagram: the regular stream from the server plus a client-supplied patching stream] • The outcome is an alleviated server load

  19. Multicast with VCR Functionality • If another session of the multicast group playing the same movie has an actual start-time within the interval [displaced start-time, displaced start-time + threshold], the client can be shifted to that session • VCR functionality is granted if resources are available [diagram: multiple multicast sessions in the enterprise network]
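The interval test above can be sketched as follows; the session representation as (movie, actual start-time) pairs is an assumed simplification:

```python
def find_mergeable_session(sessions, movie, displaced_start, threshold):
    # `sessions` is a hypothetical list of (movie_id, actual_start_time)
    # pairs. Return the first session of the same movie whose actual
    # start-time lies in [displaced_start, displaced_start + threshold],
    # else None (meaning a dedicated stream would be needed).
    for m, start in sessions:
        if m == movie and displaced_start <= start <= displaced_start + threshold:
            return (m, start)
    return None
```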

  20. Admission Policy • Maximum Factored Queue Length First (MFQLF) • A request queue is maintained for each movie • The pending batch with the largest queue length divided by √(associated access frequency) is served next • A profit-maximizing but fair policy • Largest queue size maximizes profit • The access-frequency factor ensures fairness
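A sketch of MFQLF ordering using the queue size / √(access frequency) factor that slide 28 also states; the dictionary layout and names are illustrative:

```python
import math

def mfqlf_order(queue_len, access_freq):
    # queue_len:   movie -> number of pending requests
    # access_freq: movie -> relative access frequency in (0, 1]
    # The largest queue length divided by sqrt(frequency) is served first,
    # so a long queue for a rarely requested movie can beat a slightly
    # longer queue for a very popular one (fairness).
    return sorted(queue_len,
                  key=lambda m: queue_len[m] / math.sqrt(access_freq[m]),
                  reverse=True)
```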

  21. Workflow of the Admission Controller (ADC) • The servers advertise their available multimedia data and other resources to the ADC • The users submit their requests to the ADC • The ADC accepts or rejects any client's request according to an admission control principle • Patch the request • If patching is not possible, then batch the request • The client is notified about the acceptance of the request • Clients cache data received from the server and forward the data to a client if requested

  22. Database of Admission Controller • The ADC maintains a central database containing the following information • Resources of the EN (server and network bandwidths) • Network topology • Shortest path multicast trees rooted at each server • It also maintains detailed information about ongoing sessions • The multicast tree of each session • Each client's information (client source) • Whether any client is serving another client (patching parent)

  23. Architectural Environment • Connectivity • Switch to switch → Gigabit Ethernet • Switch to server → Gigabit Ethernet • Switch to clients → LAN or ADSL • Switch nodes • Layer-3 switch • Capacity: several million pps • Servers • Network Attached Storage (NAS) • Workstations • Capacity sufficient to play movies and run related software • Admission Controller • High-performance machine • NAS is attached to it

  24. The Architecture

  25. Procedure Admission-Control • Procedure Initialize-Admission-Controller • Create shortest path trees rooted at each server node (Dijkstra's algorithm) • Online request-processing thread • Process Movie Requests • Process VCR Requests • Process Session End Requests • Process Patch Parent Requests • Batched-requests processing thread
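The initialization step, creating a shortest path tree per server node with Dijkstra's algorithm, might look like this sketch; the adjacency-list format is an assumption:

```python
import heapq

def shortest_path_tree(adj, root):
    # Dijkstra's algorithm from `root`, as the ADC would run for each server
    # node. `adj` maps node -> list of (neighbor, link_cost); returns the
    # tree as parent pointers plus the distance of each reached node.
    dist = {root: 0}
    parent = {root: None}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent, dist
```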

  26. Procedure Process-Movie-Request • Select a patchable session (patching window) • Select a patching parent • If there is a patchable session and a patching parent available, admit the client into the session • Else enqueue the request for future processing
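The admit-or-enqueue logic of this procedure can be sketched as follows; the session and request dictionary layouts, and the two helper callables, are assumed simplifications of the ADC's internals:

```python
def process_movie_request(request, sessions, pick_parent, enqueue):
    # `pick_parent` stands in for Procedure Select-Patching-Parent and
    # `enqueue` for the batch queue used by the batching thread.
    patchable = [s for s in sessions
                 if s["movie"] == request["movie"]
                 and request["time"] - s["start"] <= s["patch_window"]]
    for session in patchable:
        parent = pick_parent(session, request)
        if parent is not None:
            session["clients"].append(request["client"])
            return ("admitted", parent)
    enqueue(request)                      # no session or parent: batch it
    return ("batched", None)
```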

  27. Procedure Select-Patching-Parent • Form the shortest path tree rooted at the requesting client node • For each client of the session • Find the client at the shortest distance from the requesting client node • Thus the total complexity is [formula not preserved in the transcript]

  28. Batched Requests Processing Thread • Sort the batched lists in descending order of the factor queue size / √(associated access frequency) • For each movie of the batches, admit each request • Thus the complexity of the thread is [formula not preserved in the transcript]

  29. Client Buffer Requirement • Case 1: [timing diagram]

  30. Client Buffer Requirement Contd. • Patching window of … seconds • Case 1 • Session starts at t0 • Client requests the movie at t1 and … • The missing portion is made up at time … • The initial portion of the buffer is not needed for patching after time … • Thus the buffer requirement is at most … • M is the stream rate of each stream
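The slide's formulas were images and did not survive the transcript. Under the assumptions it states (session starts at t0, request at t1 with t1 − t0 inside the patching window, every stream delivered at rate M), the standard patching timing works out as below; this is a sketch of the arithmetic, not necessarily the thesis's exact bound:

```latex
% Missed prefix of the movie:
\Delta = t_1 - t_0
% The patch also runs at rate M, so delivering the prefix takes \Delta
% seconds and the missing portion is made up at
t_{\mathrm{patched}} = t_1 + \Delta = 2t_1 - t_0
% While patching, the client buffers regular-stream data it cannot yet
% play, so at most \Delta seconds of it accumulate:
B \le \Delta \cdot M = (t_1 - t_0)\,M
```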

  31. Client Buffer Requirement Contd. • Case 2: [timing diagram]

  32. Client Buffer Requirement Contd. • Case 2 • Session starts at t0 • Client requests the movie at t1 and … • The initial portion of the buffer is not needed for patching after time … • The missing portion is made up at time … • Thus the buffer requirement is at most …

  33. Simulation Parameters • No of switch nodes: 20 • No of links: 32 • Total Clients: 400-1200 • No of servers: 5-10 • No of movies: 30 • Batch interval: 1-5 Minutes • Movie length: 1 hour

  34. Simulation Parameters • Replication of popular movies: 1–3 • Bandwidth per link: 1 Gbps • Server I/O bandwidth: 1 Gbps • Patch window: 5–10 minutes • Movie stream type: MPEG-2 (5 Mbps) • Simulation language: PERSEC

  35. Some Probability Distributions • Client requests are generated in our simulation according to a Poisson process • The videos are requested with frequencies following a Zipf-like distribution • We consider different interactive rates in the system
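The workload model on this slide can be sketched as follows; the Zipf skew value, the seed, and the generator structure are illustrative assumptions rather than the thesis's simulation code:

```python
import random

def zipf_popularity(n_movies, skew=0.729):
    # Zipf-like popularity: movie i is requested with probability
    # proportional to 1 / i^skew (skew value chosen for illustration).
    weights = [1.0 / (i ** skew) for i in range(1, n_movies + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def generate_requests(rate, duration, popularity, rng=random.Random(1)):
    # Poisson arrivals at `rate` requests/second over `duration` seconds;
    # each request picks a movie by the Zipf-like popularity vector.
    t, requests = 0.0, []
    movies = range(len(popularity))
    while True:
        t += rng.expovariate(rate)        # exponential inter-arrival times
        if t >= duration:
            break
        requests.append((t, rng.choices(movies, weights=popularity)[0]))
    return requests
```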

  36. We Compare with Proxy-Prefix Caching • The server accepts and rejects the requests • A proxy caches the initial portion of the ongoing movies • Proxy servers serve the missing portion [diagram: proxies at switch nodes between the server and clients]
