
Client Cache Management


Presentation Transcript


  1. Client Cache Management • Tuning the broadcast for one client's access probability distribution will hurt the performance of other clients with different access distributions. • Therefore, client machines need to cache pages obtained from the broadcast.

  2. Client Cache Management • With traditional caching, clients cache the data most likely to be accessed in the future. • With Broadcast Disks, traditional caching may lead to poor performance if the server's broadcast is poorly matched to the client's access distribution.

  3. Client Cache Management • In the Broadcast Disk system, clients cache the pages whose local probability of access is higher than their frequency of broadcast. • This leads to the need for cost-based page replacement.

  4. Client Cache Management • One cost-based page replacement strategy, called PIX, replaces the page with the lowest ratio between its probability of access (P) and its frequency of broadcast (X). • PIX requires: (1) perfect knowledge of access probabilities, and (2) comparison of PIX values for all cache-resident pages at replacement time.

  5. Example • One page is accessed 1% of the time by a particular client and is also broadcast 1% of the time (PIX = 1). • A second page is accessed only 0.5% of the time but is broadcast only 0.1% of the time (PIX = 5). • If a page must be evicted, the first page is replaced in favor of the second, even though the second has the lower access probability.
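The PIX rule on the two slides above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the page names and the (P, X) pairs are the example's values.

```python
# Sketch of PIX cache replacement. Each cached page carries a pair
# (P, X): its probability of access and its frequency of broadcast.

def pix_victim(cache):
    """Return the cached page with the lowest P/X ratio (the PIX victim)."""
    return min(cache, key=lambda page: cache[page][0] / cache[page][1])

# Example from the slides: page "a" has P = 1%, X = 1% (PIX = 1);
# page "b" has P = 0.5%, X = 0.1% (PIX = 5).
cache = {"a": (0.01, 0.01), "b": (0.005, 0.001)}
print(pix_victim(cache))  # "a" is evicted in favor of keeping "b"
```

Even though "a" is accessed more often, "b" is rarer on the broadcast, so missing it is more expensive; PIX keeps it.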

  6. Client Cache Management • Another page replacement strategy adds the frequency of broadcast to an LRU-style policy; this policy is known as LIX. • LIX maintains a separate list of cache-resident pages for each logical disk. • A page enters the chain corresponding to the disk on which it is broadcast. • Each list is ordered based on an approximation of the access probability (L) for each page.

  7. Cont. • When a page is hit, it is moved to the top of its chain. • When a new page enters the cache, a LIX value is computed for the page at the bottom of each chain by dividing its L by X, its frequency of broadcast. • The page with the lowest LIX value is replaced.
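The chain mechanics of slides 6 and 7 can be sketched as below. This is a simplification: the real LIX policy estimates L from inter-access times with a running average, whereas here a crude hit count stands in for L, and the class name, capacity, and frequencies are illustrative.

```python
from collections import OrderedDict

# Sketch of LIX replacement: one chain per broadcast disk, hit pages move
# to the top of their chain, and eviction compares LIX = L / X for the
# bottom page of each chain. The hit-count proxy for L is an assumption.

class LixCache:
    def __init__(self, num_disks, capacity, broadcast_freq):
        # one ordered chain per disk; most recently used pages at the front
        self.chains = [OrderedDict() for _ in range(num_disks)]
        self.capacity = capacity
        self.freq = broadcast_freq  # broadcast frequency X for each disk

    def access(self, page, disk):
        chain = self.chains[disk]
        if page in chain:
            chain[page] += 1.0                   # crude proxy for L
            chain.move_to_end(page, last=False)  # move to top of chain
            return
        if sum(len(c) for c in self.chains) >= self.capacity:
            self._evict()
        chain[page] = 1.0
        chain.move_to_end(page, last=False)

    def _evict(self):
        # compute LIX = L / X for the bottom page of each non-empty chain
        candidates = []
        for d, chain in enumerate(self.chains):
            if chain:
                bottom = next(reversed(chain))
                candidates.append((chain[bottom] / self.freq[d], d, bottom))
        _, d, victim = min(candidates)
        del self.chains[d][victim]
```

With two disks broadcasting at frequencies 0.5 and 0.1, a page at the bottom of the slow disk's chain gets a higher LIX and survives eviction, mirroring the PIX intuition without scanning the whole cache.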

  8. Prefetching • PIX and LIX handle only demand-driven pages; prefetching is an alternative approach to obtaining pages from the broadcast. • The goal is to improve the response time of clients that access data from the broadcast. • Methods of prefetching: tag-team caching and a simple prefetching heuristic.

  9. Prefetching • Tag-team caching: pages continually replace each other in the cache. • For example, with two pages x and y being broadcast, the client caches x as it arrives on the broadcast, then drops x and caches y when y arrives.

  10. Expected delay in demand-driven caching • Suppose a client is interested in accessing pages x and y, with Px = Py = 0.5, and has a single cache slot. • In demand-driven caching, the client caches x; when it needs y, it waits for y on the broadcast and replaces x with y. • The expected delay on a cache miss is 1/2 of a rotation of the disk. • The expected delay over all accesses is the sum of Ci · Mi · Di over pages i, where Ci is the access probability, Mi is the probability of a cache miss, and Di is the expected broadcast delay for page i. • For pages x and y this is 0.5 · 0.5 · 0.5 + 0.5 · 0.5 · 0.5 = 0.25 rotations.

  11. Expected delay in tag-team caching • 0.5 · 0.5 · 0.25 + 0.5 · 0.5 · 0.25 = 0.125 rotations, i.e., the average cost is 1/2 that of the demand-driven scheme. • Why: in demand-driven caching a miss can occur at any point in the broadcast, whereas with tag-team caching misses can only occur during one half of the broadcast, so the expected delay on a miss is 1/4 of a rotation.
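The two expected-delay figures above can be checked with a few lines of arithmetic; the (C, M, D) triples are the slides' values, in units of one broadcast rotation.

```python
# Expected delay = sum over pages i of Ci * Mi * Di.
# Demand-driven: miss probability 0.5, miss delay 0.5 rotation.
# Tag-team: miss probability 0.5, but misses can only occur during half
# of the broadcast, so the miss delay drops to 0.25 rotation.

def expected_delay(pages):
    return sum(c * m * d for c, m, d in pages)

demand_driven = expected_delay([(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)])
tag_team = expected_delay([(0.5, 0.5, 0.25), (0.5, 0.5, 0.25)])
print(demand_driven, tag_team)  # 0.25 0.125
```

The factor-of-two improvement comes entirely from the smaller miss delay D, since C and M are identical in both schemes.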

  12. Prefetching • Simple prefetching heuristic. • Performs a calculation for each page that arrives on the broadcast, based on the page's probability of access (P) and the amount of time that will elapse before the page comes around again (T). • If the PT value of the page being broadcast is higher than the lowest PT value among cached pages, the cached page with the lowest PT value is replaced.
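The replacement test in the heuristic above can be sketched as follows. The function name and the cache layout are illustrative, and in a real system the T values would shrink as the broadcast advances rather than staying fixed.

```python
# Sketch of the simple PT prefetching heuristic: when a page arrives on
# the broadcast, compare its P*T value (access probability times time
# until its next broadcast) against the lowest P*T among cached pages,
# and swap them if the arriving page scores higher.

def maybe_prefetch(cache, page, p, t):
    """cache maps page -> (P, T); replace the lowest-PT page if beaten."""
    lowest = min(cache, key=lambda pg: cache[pg][0] * cache[pg][1])
    if p * t > cache[lowest][0] * cache[lowest][1]:
        del cache[lowest]
        cache[page] = (p, t)
    return cache

# Illustrative values: "a" has PT = 0.2, "b" has PT = 0.4; the arriving
# page "c" has PT = 0.9, so it displaces "a".
cache = {"a": (0.1, 2.0), "b": (0.4, 1.0)}
maybe_prefetch(cache, "c", 0.3, 3.0)
print(sorted(cache))  # ['b', 'c']
```

Unlike PIX, this test runs once per broadcast slot against a single cached page (the current PT minimum), which is what makes it a cheap heuristic.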

  13. Example
