Destage Algorithms for Disk Arrays with Non-Volatile Caches
Presentation Transcript

  1. Destage Algorithms for Disk Arrays with Non-Volatile Caches • Anujan Varma, Quinn Jacobson

  2. Presentation Outline • Introduction • Write Cache • Purpose of the scheduling algorithm • Parameters determining destages • Different scheduling algorithms • Performance Evaluation • Conclusion

  3. Introduction • Parity logging attempts only to reduce the overhead of parity updates. • A non-volatile write cache, in contrast, is used to reduce write latency. • The process of writing data or parity blocks from the write cache to the disks is called destaging.

  4. Non-Volatile Write Cache (Advantages) • Lowers the service time seen by write requests to the array. • Locality in the writes of the workload can be exploited: temporal and spatial. • Lowers the response time for read requests serviced by the disks.

  5. Non-Volatile Write Cache (Disadvantages) • Reliability: data loss can occur if the cache fails. • Destages from the cache must be scheduled.

  6. How a scheduling algorithm improves disk performance • Can reduce the number of destages by capturing most of the re-writes in the write cache. • Can reduce the number of destages by aggregating blocks that lie physically close on a disk and destaging them as a single large read and/or write. • Can reduce the average time for a destage by ordering destage requests so that the service times at the individual disks are minimized.

  7. Parameters for determining the block to destage next • The probability that the block will be re-written in the near future. • The number of blocks to be read/updated on the same track. • The service time of the requests in the destage queue. • The current level of cache occupancy.

  8. Least-Cost Scheduling • Modeled after the shortest-seek-time-first disk scheduling algorithm. • Each disk is scheduled independently. • The pending request with the shortest estimated access time is serviced next (see the sketch below). • Exploits spatial locality.
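A minimal Python sketch of the least-cost selection step follows. The cost model (seek time per cylinder plus an average rotational delay), the DestageRequest fields, and all constants are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

# Assumed, illustrative device parameters.
SEEK_COST_PER_CYL = 0.01      # ms of seek time per cylinder traversed
AVG_ROTATIONAL_DELAY = 4.17   # ms, roughly half a rotation at 7200 rpm

@dataclass
class DestageRequest:
    cylinder: int   # target cylinder on the disk
    block: int      # block to be written

def estimate_access_time(head_pos: int, req: DestageRequest) -> float:
    """Hypothetical cost model: seek distance plus average rotational delay."""
    return abs(req.cylinder - head_pos) * SEEK_COST_PER_CYL + AVG_ROTATIONAL_DELAY

def pick_least_cost(head_pos: int, queue: list) -> DestageRequest:
    """Pick the pending destage with the smallest estimated access time,
    analogous to shortest-seek-time-first disk scheduling."""
    return min(queue, key=lambda r: estimate_access_time(head_pos, r))
```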

  9. High/Low Mark Algorithm • Designed after a cache-purging algorithm. • Each disk is scheduled independently. • Two cache-occupancy thresholds are used to enable and disable destages (see the sketch below). • Least-cost scheduling is used to minimize the service time of individual disk accesses. • Exploits spatial locality.
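A minimal sketch of the two-threshold (hysteresis) control, assuming illustrative mark values; the class and method names are hypothetical. While should_destage() returns True, the least-cost request from the previous sketch would be issued to the disk.

```python
class HighLowMarkController:
    """Destaging is enabled when cache occupancy rises above the high mark
    and disabled again once it falls below the low mark."""

    def __init__(self, high_mark: float = 0.8, low_mark: float = 0.4):
        self.high_mark = high_mark
        self.low_mark = low_mark
        self.destaging = False

    def should_destage(self, occupancy: float) -> bool:
        # Hysteresis: once started, keep destaging until the low mark is reached.
        if occupancy >= self.high_mark:
            self.destaging = True
        elif occupancy <= self.low_mark:
            self.destaging = False
        return self.destaging
```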

  10. Linear Threshold Scheduling • Matches the rate of destaging from the cache to the current level of cache occupancy. • Each disk is scheduled independently. • Parity and data destages are treated in the same manner. • Least-cost scheduling is used. • A trade-off exists in the choice of the threshold function (see the sketch below). • Exploits spatial locality. • Temporal locality is not explicitly maximized.
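A minimal sketch of the occupancy-dependent decision, reusing estimate_access_time() and pick_least_cost() from the least-cost sketch above. The linear threshold function and its ceiling constant are assumptions for illustration only, not the paper's parameters.

```python
MAX_COST_MS = 20.0   # assumed cost ceiling at 100% cache occupancy

def cost_threshold(occupancy: float) -> float:
    """Maximum destage cost accepted at the current occupancy (0.0 to 1.0).
    The threshold grows linearly, so a nearly empty cache issues only very
    cheap destages while a nearly full cache destages aggressively."""
    return occupancy * MAX_COST_MS

def maybe_destage(head_pos: int, queue: list, occupancy: float):
    """Issue the least-cost destage only if its estimated cost is below
    the occupancy-dependent threshold; otherwise stay idle this round."""
    if not queue:
        return None
    best = pick_least_cost(head_pos, queue)
    if estimate_access_time(head_pos, best) <= cost_threshold(occupancy):
        queue.remove(best)
        return best
    return None
```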

  11. Approximation to Linear Threshold Scheduling • Linear threshold scheduling faces the problem of scanning the entire queue of destage requests to select the minimum-cost request. • Divides the disk into regions. • Maintains a separate queue of requests for each region. • Queues are searched from the region closest to the current head position to the farthest. • Uses a function to specify the maximum destage cost for each level of cache occupancy. • Regions searched for destages are selected using the condition Th(w) >= cost(i, j) (see the sketch below).
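A minimal sketch of the region-queue idea, reusing cost_threshold() from the previous sketch as Th(w). The number of regions, the region_cost callable, and the queue layout are illustrative assumptions about how the Th(w) >= cost(i, j) test might be applied.

```python
NUM_REGIONS = 16   # assumed number of disk regions

def region_of(cylinder: int, num_cylinders: int) -> int:
    """Map a cylinder to one of NUM_REGIONS equal-sized regions."""
    return cylinder * NUM_REGIONS // num_cylinders

def pick_destage(head_region: int, region_queues: dict, occupancy: float,
                 region_cost):
    """Scan regions from closest to farthest from the head and return the
    first queued request whose estimated cost satisfies Th(w) >= cost."""
    threshold = cost_threshold(occupancy)   # Th(w) for the current occupancy
    for r in sorted(range(NUM_REGIONS), key=lambda x: abs(x - head_region)):
        queue = region_queues.get(r, [])
        if queue and region_cost(head_region, r) <= threshold:
            return queue.pop(0)
    return None
```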

  12. The Model • The read cache holds unmodified copies of disk blocks. • The write cache holds newer information. • Requests from the host are serviced FCFS. • Host requests have higher priority than destage requests. • Write accesses never bypass the write cache (see the sketch below).
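A minimal sketch of the read and write paths in this model, assuming dict-like cache objects and a hypothetical disk.read() interface; it mirrors the bullets above but is not the paper's simulator.

```python
def handle_write(write_cache: dict, block_id: int, data: bytes) -> None:
    """Writes never bypass the non-volatile write cache."""
    write_cache[block_id] = data   # destaged to disk later by the scheduler

def handle_read(write_cache: dict, read_cache: dict, disk, block_id: int):
    """Reads return the newest data: write cache first, then read cache,
    then the disk (host reads are serviced ahead of pending destages)."""
    if block_id in write_cache:
        return write_cache[block_id]
    if block_id in read_cache:
        return read_cache[block_id]
    data = disk.read(block_id)     # hypothetical disk interface
    read_cache[block_id] = data
    return data
```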

  13. Performance Metrics • Response time of host reads: the average delay experienced by a read request from the host. • Disk utilization: the fraction of time a disk is busy servicing requests. • Burst tolerance: the ability to tolerate short bursts in the workload without causing a write-cache overflow.

  14. Response Time of Host Reads • Linear threshold has the best response times for reads serviced by the disks under moderate to heavy loads. • High/low mark drops in performance at high workloads. • Linear threshold and least-cost converge at high I/O rates.

  15. Burst Tolerance • FCFS and least-cost show the best burst tolerance. • Linear threshold recovers from bursts at higher background loads than the other algorithms.

  16. Disk Utilization • The high/low mark and linear threshold algorithms perform the least work for destaging.

  17. Conclusion • Scheduling algorithms for destaging blocks from the write cache in a RAID-5 array are presented. • Linear threshold scheduling provides the best read performance while still maintaining a high degree of burst tolerance. • The approximation to linear threshold scheduling maintains the performance of linear threshold but can be implemented with lower overhead.

  18. Questions • What are the characteristics of the linear threshold scheduling algorithm? • Compare the performance of the linear threshold algorithm with the other algorithms on the following criteria: response time of host reads, burst tolerance, and disk utilization.