
Multimedia Proxy Caching Mechanism for Quality Adaptive Streaming Applications in the Internet
Reza Rejaie, Haobo Yu, Mark Handley, and Deborah Estrin
AT&T Labs – Research, USC/ISI, ACIRI at ICSI



  1. Multimedia Proxy Caching Mechanism for Quality Adaptive Streaming Applications in the Internet. Reza Rejaie, Haobo Yu, Mark Handley, and Deborah Estrin. AT&T Labs – Research, USC/ISI, ACIRI at ICSI. IEEE INFOCOM 2000, pp. 980-989

  2. this paper proposes a proxy caching mechanism for layered-encoded multimedia streams in the Internet, to maximize the delivered quality of popular streams to interested clients
- a proxy cache resides close to a group of clients
- requested streams are always delivered from the original servers through the proxy to the clients; thus, the proxy is able to intercept and cache these streams

  3. - the proxy can significantly increase the delivered quality of popular streams to high-BW clients despite the presence of a bottleneck on its path to the original server
- the proxy can also reduce the startup delay, facilitate VCR functionalities, and reduce the load on the server and network
- compared to other enhancing approaches, such as mirror servers, proxy caches have lower storage and processing requirements

  4. recall that the architecture of the layered quality adaptation mechanism without the proxy is shown in the following figure:

  5. with the proxy, the big picture of the system is as follows:

  6. [Figure: proxy internals: a cache holding Layer0, Layer1, Layer2, Layer3; congestion control (ACKER); a quality adaptation module; and a cache controller issuing prefetching requests, driven by the available BW]

  7. - to maximize the delivered quality to clients while obeying congestion-controlled rate limits, streaming applications should match the quality of the delivered stream with the average available BW on the path
- thus the quality of cached streams will depend on the available BW (on the path between the server and the proxy) to the first client that retrieved the stream

  8. - however, the quality variations of the cached stream are not correlated with the quality variations required by quality adaptation (on the proxy-client path) during a new playback
- this means that the quality adaptation module at the proxy may attempt to send some data that do not exist in the cache (the missing data may be caused by either congestion or the quality adaptation module of the server)
- this creates the need for data prefetching (done by the proxy) during idle times or when delivering a higher-quality flow

  9. - note that performing "data prefetching" for a cached stream can be viewed as quality adjustment of that stream
- to allow fine-grain adjustment, each layer of the encoded stream is divided into equal-sized pieces called "segments"
- the proxy prefetches the segments that are required by the quality adaptation along the proxy-client path but are missing from the cache
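As a sketch, the per-segment bookkeeping described above can be modeled like this (a toy illustration; the class and method names are ours, not from the paper):

```python
# Toy model of a cached layered stream: each layer holds the set of
# segment indices currently present in the cache.
class CachedStream:
    def __init__(self, num_layers):
        self.layers = [set() for _ in range(num_layers)]

    def store(self, layer, seg):
        """Cache one segment of one layer."""
        self.layers[layer].add(seg)

    def missing(self, layer, first, last):
        """Segments of `layer` in [first, last) that are not cached."""
        return [s for s in range(first, last) if s not in self.layers[layer]]

stream = CachedStream(num_layers=4)
for seg in range(10):
    stream.store(0, seg)           # base layer fully cached
for seg in (0, 1, 4, 5):
    stream.store(1, seg)           # enhancement layer has gaps
print(stream.missing(1, 0, 8))     # → [2, 3, 6, 7]
```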

  10. - the main contributions of this paper are novel prefetching and fine-grain cache replacement algorithms for multimedia proxy caching
- the interaction of the two algorithms causes the state of the cache to converge to an efficient state, such that the quality of a cached stream is proportional to its popularity, and its quality variations are inversely proportional to its popularity

  11. Main Assumptions:
1. using the Rate Adaptation Protocol (RAP) without a packet retransmission mechanism
2. using hierarchical encoding
3. all streams are linear-layered encoded, where all layers have the same constant BW (for simplicity)

  12. Delivery Procedure:
- clients always send their requests for a particular stream to their corresponding proxy
- when a proxy receives a request, it checks the availability of the requested stream
- the rest of the delivery procedure differs for a cache miss and a cache hit

  13. Relaying on a cache miss:
- if the requested stream is missing from the cache, the request is relayed to the original server
- the stream is played back from the server to the proxy via a RAP connection
- the proxy then relays the data packets toward the client through a separate RAP connection
- in the cache-miss scenario, the client does not observe any benefit from the presence of the proxy cache

  14. Prefetching on a cache hit:
- on a cache hit, the proxy acts as a server and starts playing back the requested stream
- as a result, the client observes a shorter startup latency
- in case of a mismatch between the quality of the cached stream and the requirements of the proxy's quality adaptation module, the proxy should prefetch the missing segments from the server ahead of time

  15. - two possible scenarios:
1. PlaybackAvgBW <= StoredAvgBW
2. PlaybackAvgBW > StoredAvgBW
- all the segments prefetched during a session are cached in both scenarios

  16. [Figures: the two scenarios, PlaybackAvgBW <= StoredAvgBW and PlaybackAvgBW > StoredAvgBW]

  17. Prefetching Mechanism: "sliding window"
- prefetching a segment from the server takes at least one RTT of the server-proxy path
- thus, the proxy must predict a missing segment that may be required by the quality adaptation module in the future
- since quality adaptation adjusts the number of active layers according to random changes in the available BW, the time of the next adjustment is not known a priori

  18. - tradeoff: the earlier the proxy prefetches a missing segment, the less accurate the prediction, but the higher the chance of receiving the prefetched segment in time
- the server should deliver the requested segments based on their priority; otherwise the prefetching stream will fall behind the playback
- note that prefetched segments are always cached, even if they arrive after their playout times

  19. - the proxy maintains the playout time for each active client
- at playout time tp, the proxy examines the interval [tp+T, tp+T+Δ], which is called the "prefetching window" of the stream, and identifies all missing segments within this window
- the proxy then sends a single "prefetching request" that contains an ordered list of all these missing segments in a prioritized fashion

  20. - to loosely synchronize the prefetching stream with the playback stream, after Δ seconds the proxy examines the next prefetching window and sends a new prefetching request to the server
- when the server receives a prefetching request, it starts to send the requested segments based on their priorities (layer numbers)
- a new prefetching request preempts the previous one: if the server receives a new prefetching request before delivering all the segments requested in the previous one, it ignores the old request and starts to deliver the segments of the new request (this preemption keeps the prefetching and the playback proceeding at the same rate)
- the average quality improvement of a cached stream after each playback is determined by the average prefetching BW
- multiple prefetching sessions can be multiplexed
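The sliding-window request generation and the server-side preemption rule can be sketched as follows (illustrative Python; the function and class names, and the fixed per-segment duration, are our assumptions, not the paper's interface):

```python
def prefetch_request(cached, active_layers, t_p, T, delta, seg_dur):
    """Build one prioritized prefetching request: all segments missing
    from the cache in the window [t_p + T, t_p + T + delta], ordered by
    layer number (lower layers first), as in the sliding-window scheme.
    `cached[l]` is the set of cached segment indices of layer l;
    `seg_dur` is the (assumed fixed) playout duration of one segment."""
    first = int((t_p + T) / seg_dur)
    last = int((t_p + T + delta) / seg_dur)
    request = []
    for layer in range(active_layers):          # priority = layer number
        for seg in range(first, last):
            if seg not in cached[layer]:
                request.append((layer, seg))
    return request

# Server-side preemption: a newly arrived request discards the old one.
class Server:
    def __init__(self):
        self.pending = []
    def receive(self, request):
        self.pending = list(request)            # old request is ignored
    def send_next(self):
        return self.pending.pop(0) if self.pending else None

cached = [set(range(20)), {0, 1, 2}, set()]     # layer 0 complete, gaps above
req = prefetch_request(cached, active_layers=3, t_p=0.0, T=2.0,
                       delta=2.0, seg_dur=1.0)
print(req)   # layer-1 gap first, then layer 2: [(1, 3), (2, 2), (2, 3)]
```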

  21. Replacement Algorithm:
- goal: converge the cache state to an efficient state
- the conditions of the "efficient" state:
1. the average quality of each cached stream is directly proportional to its popularity; furthermore, the average quality of the stream must converge to the average BW across the most recent playbacks
2. the quality variations of each cached stream are inversely proportional to its popularity

  22. Replacement Pattern:
- layered-encoded streams are structured into separate layers, and each layer is further divided into segments with unique IDs
- as the popularity of a cached stream decreases, its quality (and consequently its size) is gradually reduced before it is completely flushed out
- once a victim layer is identified, its cached segments are flushed from the end

  23. - if flushing all segments of the victim layer does not provide sufficient space, the proxy identifies a new victim layer and repeats this process
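A minimal sketch of this fine-grain replacement loop (illustrative; the data layout and victim-selection interface are our assumptions, and the 1 KB segment size is taken from the simulation setup described later):

```python
SEG_SIZE = 1024  # 1 KB segments, as in the simulations

def free_space(cache, needed, victim_order):
    """Flush segments until `needed` bytes are freed. `cache` maps
    (stream, layer) -> list of cached segment ids in playback order;
    `victim_order` lists victim layers, least valuable first (i.e. the
    highest in-cache layer of the least popular stream). Segments of
    the current victim layer are flushed from the end; a new victim is
    chosen only once the current layer is exhausted."""
    freed = 0
    for victim in victim_order:
        segs = cache.get(victim, [])
        while segs and freed < needed:
            segs.pop()            # drop the tail segment first
            freed += SEG_SIZE
        if freed >= needed:
            break
    return freed

cache = {("s0", 2): [0, 1, 2, 3], ("s0", 1): [0, 1, 2, 3, 4, 5]}
freed = free_space(cache, needed=6 * 1024,
                   victim_order=[("s0", 2), ("s0", 1)])
print(freed, cache)   # layer 2 fully flushed, then two tail segments of layer 1
```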

  24. Popularity Function:
- in the context of streaming applications, the client can interact with the server and perform VCR functionalities (i.e., Stop, FF, Rewind, Play)
- intuitively, the popularity of each stream should reflect the level of interest that is observed through this interaction

  25. - assume that the total playback time of each stream indicates the level of interest in that stream; for example, if a client only watches half of a stream, its level of interest is half that of a client who watches the entire stream. This approach can also include the duration of fast-forward and rewind with proper weighting
- define the term "weighted hit": whit = PlaybackTime / StreamLength, where
PlaybackTime: the total playback time of a session (seconds)
StreamLength: the length of the entire stream (seconds)

  26. - the proxy calculates weighted hits on a per-layer basis for each playback
- the cumulative value of whit during a recent window (called the popularity window) is used as the popularity index of the layer
- the popularity of each layer is re-calculated at the end of a session as P(l) = Σ whit(l) over all playbacks of layer l during the last Δ seconds, where P denotes popularity and Δ denotes the width of the popularity window
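The per-layer popularity bookkeeping can be sketched as follows (illustrative Python; the session representation is our assumption, and the popularity index is taken as the plain cumulative whit over the window, as stated above):

```python
def whit(playback_time, stream_length):
    """Weighted hit: the fraction of the stream actually watched."""
    return playback_time / stream_length

def popularity(sessions, now, delta):
    """Popularity of one layer: cumulative whit over the last `delta`
    seconds (the popularity window). `sessions` is a list of
    (end_time, playback_time, stream_length) tuples for that layer."""
    return sum(whit(p, l) for t, p, l in sessions if now - t <= delta)

sessions = [(100, 300, 600),   # watched half the stream  -> whit 0.5
            (200, 600, 600),   # watched the whole stream -> whit 1.0
            (900, 150, 600)]   # recent partial playback  -> whit 0.25
print(popularity(sessions, now=1000, delta=500))  # → 0.25 (older sessions expired)
```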

  27. Note!
- adding and dropping layers by quality adaptation results in different PlaybackTimes within a session and consequently affects the popularity of each cached layer
- applying the definition of popularity on a per-layer basis is compatible with the proposed fine-grain replacement mechanism: layered encoding guarantees that the popularity of the layers of a stream monotonically decreases with the layer number, so a victim layer is always the highest in-cache layer of one of the cached streams
- the length of a layer does not affect its popularity, because whit is normalized by the stream length

  28. Simulation Setup:
- ns-2 simulator
- RAP without an error control mechanism (no retransmission scheme)
- two sets of simulations:
1. focusing on evaluating the prefetching algorithm
2. focusing on the replacement algorithm

  29. - in all simulations the server-proxy link is shared by 10 RAP and 10 long-lived TCP flows; one of the RAP flows is used to deliver multimedia streams from the server to the proxy's cache, while the other 19 flows form background traffic, whose dynamics result in available-BW changes that trigger the adding and dropping of layers
- to limit the number of parameters, all streams have 8 layers, the same segment size of 1 KB, and a layer consumption rate of 2.5 KB/s

  30. Evaluation Metrics:
Completeness
* the percentage of a stream residing in the cache
* this metric allows us to trace the quality evolution of a cached stream after each playback
* defined on a per-layer basis
* the completeness of layer l in cached stream s is defined as:

  31. completeness(l) = (Σi Ll,i) / RLl, where
"chunk": a contiguous group of segments of a single layer of a cached stream
chunk(l): the set of all chunks of layer l
Ll,i: the length (in segments) of the ith cached chunk of layer l
RLl: the original length of layer l
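As a quick illustration of the completeness metric (the function name and sample numbers are ours):

```python
def completeness(chunk_lengths, layer_length):
    """Completeness of one layer: the fraction of the layer's original
    length (in segments) present in the cache. `chunk_lengths` lists
    the cached chunk lengths L_{l,i}; `layer_length` is RL_l."""
    return sum(chunk_lengths) / layer_length

# A 100-segment layer cached as three chunks of 40, 20 and 10 segments:
print(completeness([40, 20, 10], 100))   # → 0.7
```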

  32. Continuity
* the level of smoothness of a cached stream
* also defined on a per-layer basis
* the continuity of layer l in cached stream s is defined in terms of layer breaks, where a layer break occurs when there is a missing segment in the layer

  33. Results: - Prefetching

  34. Replacement Algorithm
* to examine the hypothesis that the state of the cache gradually converges to an "efficient" state as a result of the interaction between the prefetching and replacement algorithms
* the resulting quality due to cache replacement depends on two factors: stream popularity and the BW between the requesting clients and the proxy
* 10 video streams with lengths uniformly distributed between 1 and 10 minutes; stream #0 is the most popular one

  35. - the cache size is set to half of the total size of all 10 streams
- to study the effect of stream popularity: BWsp >= BWpc
- to study the effect of client bandwidth: BWsp < BWpc
