
Distributed Multimedia Systems

Resource Management • Stream Adaptation • Case Study: The Tiger video file server


Presentation Transcript


  1. Distributed Multimedia Systems • Resource Management • Stream Adaptation • Case Study: The Tiger video file server

  2. Resource Management • Resource Scheduling: To provide Quality of Service (QoS) to an application, the system must not only have sufficient resources (performance), it must also make those resources available to the application when they are needed (scheduling).

  3. Resource Scheduling • Fair Scheduling • Real-time Scheduling

  4. Fair Scheduling • If several streams compete for the same resource, it is necessary to consider fairness and to prevent ill-behaved streams from taking too much bandwidth. • A round-robin method applied on a bit-by-bit basis provides fairness with respect to varying packet sizes and arrival times.
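
To make the bit-by-bit round-robin idea concrete, here is a minimal Python sketch of fair queuing, which approximates bit-by-bit round robin by stamping each packet with a virtual finish time. The class and field names are illustrative assumptions for this example, not part of the presentation.

```python
import heapq

class FairQueueScheduler:
    """Sketch of fair queuing: packets are served in order of their virtual
    finish times, approximating bit-by-bit round robin across streams."""

    def __init__(self):
        self.finish = {}        # last virtual finish time assigned per stream
        self.queue = []         # heap of (finish_time, seq, stream, size)
        self.virtual_time = 0.0
        self.seq = 0            # tie-breaker for packets with equal finish times

    def enqueue(self, stream, packet_size):
        # A packet's virtual finish time grows with its size, so one stream's
        # large packets cannot crowd out another stream's small ones.
        start = max(self.virtual_time, self.finish.get(stream, 0.0))
        finish = start + packet_size
        self.finish[stream] = finish
        heapq.heappush(self.queue, (finish, self.seq, stream, packet_size))
        self.seq += 1

    def dequeue(self):
        if not self.queue:
            return None
        finish, _, stream, size = heapq.heappop(self.queue)
        self.virtual_time = finish      # advance the virtual clock
        return stream, size

# An ill-behaved stream sending large packets does not starve the small ones.
sched = FairQueueScheduler()
sched.enqueue("bulk", 1500)
sched.enqueue("audio", 200)
sched.enqueue("audio", 200)
while (packet := sched.dequeue()) is not None:
    print(packet)   # both audio packets are served before the bulk packet
```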

  5. Real-time Scheduling • The scheduling algorithm assigns CPU time slots to a set of processes in a manner that ensures they complete their tasks on time. • A common choice is earliest-deadline-first (EDF), which always runs the ready process with the closest deadline.
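
A hedged, non-preemptive sketch of earliest-deadline-first in Python; the task tuples and names are assumptions made for this example, not taken from the slides.

```python
import heapq

def edf_schedule(tasks):
    """Non-preemptive earliest-deadline-first sketch.

    tasks: list of (name, release_time, processing_time, deadline) tuples.
    Returns (name, finish_time, met_deadline) in the order the tasks ran.
    """
    tasks = sorted(tasks, key=lambda t: t[1])    # order by release time
    ready, timeline, time, i = [], [], 0, 0
    while i < len(tasks) or ready:
        # Move every task that has been released onto the ready heap,
        # keyed by its deadline so the nearest deadline is popped first.
        while i < len(tasks) and tasks[i][1] <= time:
            name, _release, cost, deadline = tasks[i]
            heapq.heappush(ready, (deadline, name, cost))
            i += 1
        if not ready:                            # idle until the next release
            time = tasks[i][1]
            continue
        deadline, name, cost = heapq.heappop(ready)
        time += cost
        timeline.append((name, time, time <= deadline))
    return timeline

print(edf_schedule([("video", 0, 3, 5), ("audio", 1, 1, 3), ("ui", 2, 2, 10)]))
```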

  6. Stream Adaptation • Adjustment of QoS • Dropping pieces of information (e.g., audio samples) • Dropouts in an MPEG video stream • Scaling methods are used to handle dropouts. For video streams, the following scaling methods (or a combination of them) are used.

  7. Video Scaling methods • Temporal Scaling • Spatial Scaling • Frequency Scaling • Amplitudinal Scaling • Color space Scaling

  8. Scaling • Temporal Scaling reduces the resolution of a video stream in the time domain by decreasing the number of video frames transmitted within an interval.
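
For illustration, a temporal-scaling sketch in Python that drops frames to reach a lower frame rate; the function name and the frame representation are assumptions of the example.

```python
def temporal_scale(frames, target_fps, source_fps=30):
    """Keep only enough frames per interval to hit target_fps, dropping the rest."""
    if target_fps >= source_fps:
        return list(frames)
    step = source_fps / target_fps            # keep roughly one frame per step
    kept, next_keep = [], 0.0
    for index, frame in enumerate(frames):
        if index >= next_keep:
            kept.append(frame)
            next_keep += step
    return kept

# 30 fps -> 10 fps: one frame in three survives.
print(len(temporal_scale(list(range(30)), target_fps=10)))   # 10
```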

  9. Scaling • Spatial Scaling reduces the number of pixels of each image in a video stream. • Frequency Scaling modifies the compression algorithm applied to an image.
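
As an illustration of spatial scaling, a sketch that averages each 2×2 block of pixels into one; the grayscale row-list image representation is an assumption of this example.

```python
def spatial_scale(image, factor=2):
    """Average each factor x factor block of pixels into one, reducing resolution.
    image: list of rows of grayscale pixel values."""
    height, width = len(image), len(image[0])
    scaled = []
    for y in range(0, height - height % factor, factor):
        row = []
        for x in range(0, width - width % factor, factor):
            block = [image[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) // len(block))
        scaled.append(row)
    return scaled

print(spatial_scale([[10, 20, 30, 40],
                     [50, 60, 70, 80]]))       # [[35, 55]]
```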

  10. Scaling • Amplitudinal Scaling reduces the color depth of each image pixel. • Color-space Scaling reduces the number of entries in the color space (e.g., color → grayscale).
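
Both ideas fit in a few lines of Python; the 8-bit RGB tuple representation and the Rec. 601 luminance weights are assumptions of this sketch, not details from the slides.

```python
def amplitudinal_scale(pixel, bits=4):
    """Reduce color depth by dropping low-order bits of each 8-bit channel."""
    mask = 0xFF & ~((1 << (8 - bits)) - 1)
    return tuple(channel & mask for channel in pixel)

def to_grayscale(pixel):
    """Color-space scaling: collapse RGB to a single luminance value."""
    r, g, b = pixel
    return int(0.299 * r + 0.587 * g + 0.114 * b)

print(amplitudinal_scale((200, 123, 37)))   # (192, 112, 32)
print(to_grayscale((200, 123, 37)))         # 136
```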

  11. Filtering • Scaling modifies the stream at the source, so it is not suitable for applications that involve several receivers: if a bottleneck occurs on the route to one target, that target sends a scale-down message to the source, and then all targets receive the degraded quality, although some of them do not require it.

  12. Filtering • Filtering is a method that provides the best possible quality of service to each target by applying the adaptation on the path to that target rather than at the source. • Filtering requires that a stream be partitioned into a set of hierarchical substreams, each adding a higher level of quality.
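
A minimal sketch of the substream idea, assuming each layer is described by a name and the bandwidth it needs: a filter on the path to a target keeps the base layer plus as many enhancement layers as that target's bandwidth allows. The layer names and numbers are illustrative.

```python
def filter_substreams(layers, target_bandwidth):
    """Select the base layer plus the enhancement layers that fit in the
    bandwidth available on the path to one target.
    layers: list of (name, bandwidth) ordered from base to highest quality."""
    chosen, used = [], 0
    for name, bandwidth in layers:
        if used + bandwidth > target_bandwidth:
            break                      # drop this layer and all higher ones
        chosen.append(name)
        used += bandwidth
    return chosen

layers = [("base", 500), ("enhance-1", 300), ("enhance-2", 700)]
print(filter_substreams(layers, target_bandwidth=900))   # ['base', 'enhance-1']
print(filter_substreams(layers, target_bandwidth=1600))  # all three layers
```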

  13. Filtering [Diagram: a source stream is filtered along the paths to individual targets.]

  14. Case Study • A video storage system that supplies multiple real-time video streams simultaneously is seen as an important system component for supporting consumer-oriented multimedia applications. • The Tiger video file server was developed by Microsoft Research.

  15. Case Study : The Tiger video file server • Design goals • Architecture • Storage organization • Distributed schedule • Network support

  16. Design Goals • Video on demand for a large number of users • Quality of service • Scalable and distributed • Low cost hardware

  17. Architecture • The cub computers are identical PCs with the same number of standard hard disk drives attached to each. They are equipped with Ethernet and ATM network cards. • The controller is another PC; it handles client requests and manages the work schedule of the cubs.

  18. Architecture [Diagram: a controller connected to cubs Cub 0, Cub 1, Cub 2, …, Cub n by a low-bandwidth network; the cubs' disks hold striped blocks (Cub 0: 0…n+1, Cub 1: 1…n+2, Cub 2: 2…n+3, …, Cub n: n…2n+1); start/stop requests arrive from clients at the controller; video is distributed to clients through an ATM switching network.]

  19. Storage organization • Video data forms large files; to share the load, the data is distributed among the disks attached to the cubs. • A movie is divided into blocks (1 second ≈ 0.5 MB, so a 2-hour movie has approximately 7,000 blocks). A movie can start on any disk; whenever the highest-numbered disk is reached, the movie wraps around so that the next block is placed on disk 0.
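
A small sketch of that striping rule, assuming blocks are laid out round-robin across the disks starting wherever the movie begins; the function and parameter names are illustrative.

```python
def block_location(movie_start_disk, block_index, num_disks):
    """Block i of a movie that starts on disk s lives on disk (s + i) mod
    num_disks, wrapping past the highest-numbered disk back to disk 0."""
    return (movie_start_disk + block_index) % num_disks

# A 2-hour movie at one 0.5 MB block per second is 7200 blocks (~7,000).
blocks = 2 * 60 * 60
print(blocks)                               # 7200
print(block_location(3, 0, 8))              # first block on disk 3
print(block_location(3, 5, 8))              # disk 0, after wrapping around
```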

  20. Distributed Schedule • The schedule describes the workload of the cubs. • It is organized as a list of slots. • Each slot represents the work that must be done to play one block of a movie: read it from the relevant disk and transfer it to the ATM network.
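
A hedged sketch of what one schedule slot might carry; the field names are assumptions made for this example, since the actual Tiger schedule format is not given in the slides.

```python
from dataclasses import dataclass

@dataclass
class Slot:
    """One entry in the distributed schedule: the work needed to play one
    block of one movie for one viewer."""
    viewer: str           # which client stream this block belongs to
    movie: str
    block_index: int
    disk: int             # disk the block must be read from
    play_deadline: float  # time by which the block must reach the ATM network

# The schedule is a list of such slots; each cub works through the slots that
# refer to its own disks.
schedule = [
    Slot("client-7", "movie-a", 0, disk=3, play_deadline=1.0),
    Slot("client-7", "movie-a", 1, disk=4, play_deadline=2.0),
]
my_disks = {3}
print([s for s in schedule if s.disk in my_disks])
```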

  21. Network support • The blocks of each movie are simply passed to the ATM network by the cubs that hold them, together with the address of the relevant client. • The client needs sufficient buffer storage to hold two blocks: one is being played while the other is arriving from the network.
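
A toy Python sketch of the client-side double buffering described above: while one buffer is being played, the other is filled from the network. The generator structure is purely illustrative.

```python
def play_stream(incoming_blocks):
    """Double buffering: one buffer plays while the other fills."""
    buffers = [None, None]
    playing, filling = 0, 1
    for block in incoming_blocks:
        buffers[filling] = block               # the network fills one buffer...
        if buffers[playing] is not None:
            yield buffers[playing]             # ...while the other is played
        playing, filling = filling, playing    # swap roles for the next block
    if buffers[playing] is not None:
        yield buffers[playing]                 # drain the last buffered block

for played in play_stream(["block-0", "block-1", "block-2"]):
    print("playing", played)
```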

  22. Problem 1 • Outline the design of a QoS manager to enable desktop Computers connected by an ATM network to support several concurrent multimedia applications. Define an API for your QoS manager, giving the main operations with their parameters and results.

  23. Problem 2 • In order to specify the resource requirements of software components that process multimedia data, we need estimates of their processing loads. How can this information be obtained without undue effort?

  24. Problem 3 • The Tiger schedule is potentially a large data structure that changes frequently, but each cub needs an up-to-date representation of the portions it is currently handling. Suggest a mechanism for the distribution of the schedule to the cubs.

  25. Problem 4 • When Tiger is operating with a failed disk or cub, secondary data blocks are used in place of the missing primaries. Secondary blocks are n times smaller than primaries (where n is the decluster factor). How does the system accommodate this variability in block size?
