
Trace-Driven Analysis of Power Proportionality in Storage Systems



  1. Trace-Driven Analysis of Power Proportionality in Storage Systems Sara Alspaugh and Arka Bhattacharya

  2. Why trace-driven analysis • Lots of published proposals • Giant design space

  3. Some related work

  4. Method • Implementation (in the laboratory or in production) is infeasible when considering many system types • Instead: trace-driven analysis, evaluating components and algorithms

  5. [Diagram: traces feed an analysis over components and algorithms]

  6. Quantifying Inherent Opportunity • gain = (peak × length − sum(bandwidth)) / (peak × length) • waste factor = (peak × length) / sum(bandwidth) • waste factor = peak : avg
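The two metrics on this slide can be sketched in a few lines of Python. This is an illustrative reading of the formulas, assuming a trace is a list of bandwidth samples taken at equal intervals (so "length" is the sample count); the function names are ours, not the authors'.

```python
# Sketch of the slide's "inherent opportunity" metrics over a bandwidth trace.
# Assumes equally spaced samples, so peak * len(trace) is the work a system
# provisioned for peak bandwidth could have done over the whole trace.

def gain(trace):
    """Fraction of peak-provisioned capacity the trace never uses."""
    peak = max(trace)
    return (peak * len(trace) - sum(trace)) / (peak * len(trace))

def waste_factor(trace):
    """Peak-to-average ratio: how far peak provisioning overshoots demand."""
    peak = max(trace)
    avg = sum(trace) / len(trace)
    return peak / avg  # equivalently (peak * len(trace)) / sum(trace)

trace = [10, 50, 100, 20, 20]  # bandwidth samples, e.g. in MB/s
print(gain(trace))          # 0.6
print(waste_factor(trace))  # 2.5
```

Note that the two forms of the waste factor on the slide agree: dividing peak × length by sum(bandwidth) is the same as dividing peak by the average.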

  7. [Figure: bandwidth over time]

  8. [Figure: bandwidth over time]

  9. [Quadrant chart: bandwidth requirements (B/s) vs. data set size (B). One regime: bw_app >> bw_component, cap_app < cap_component. Another: bw_app <= bw_component, cap_app >> cap_component.]
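The quadrants above suggest a simple sizing rule: whichever requirement dominates (bandwidth or capacity) determines whether a deployment is effectively replicating for bandwidth or partitioning for capacity. A hypothetical helper, with all names and numbers invented for illustration:

```python
import math

# Illustrative sketch: how many units of a component type an application
# needs, and which regime from the quadrant chart it falls into.
# Tie-breaking and naming are our choices, not from the talk.

def units_needed(bw_app, cap_app, bw_unit, cap_unit):
    for_bandwidth = math.ceil(bw_app / bw_unit)  # units to meet bandwidth
    for_capacity = math.ceil(cap_app / cap_unit)  # units to hold the data
    n = max(for_bandwidth, for_capacity)
    regime = "replicate" if for_bandwidth > for_capacity else "partition"
    return n, regime

# Web-farm-like: high bandwidth, small data set -> replicate
print(units_needed(bw_app=2000, cap_app=100, bw_unit=50, cap_unit=500))
# Analytics-like: modest bandwidth, huge data set -> partition
print(units_needed(bw_app=100, cap_app=12000, bw_unit=200, cap_unit=500))
```

The point of the slide's inequalities is visible here: when bw_app >> bw_component the bandwidth term sets the unit count (replication), and when cap_app >> cap_component the capacity term does (partitioning).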

  10. [Chart: bandwidth (bytes/s) vs. capacity (bytes), unit = disks (~50 MB/s, ~500 GB). Replicate vs. partition regions; examples: NFS filer, laptop, DB server.]

  11. [Chart: bandwidth vs. bytes, unit = servers (~1 GB/s or ~200 MB/s bandwidth; ~32 GB RAM, ~12 TB disk). Replicate vs. partition regions; examples: memory cache, DFS, DB server.]

  12. [Quadrant chart: bandwidth requirements (B/s) vs. data set size (B). Examples: NAS/NFS (NetApp) and disk arrays; web farms (Wikipedia); data analytics / DFS (Hadoop).]

  13. [Quadrant chart, repeated from slide 9: bandwidth requirements (B/s) vs. data set size (B), with the bw_app >> bw_component, cap_app < cap_component and bw_app <= bw_component, cap_app >> cap_component regimes.]

  14. Challenges • Case 1: writes • Case 2: latency to access inactive components • Case 3: both of the above, a set cover problem
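Case 3 frames component activation as set cover: choose a small set of components to keep active so every requested object is reachable. The classic greedy heuristic can be sketched as follows; the component-to-object mapping is invented for the example, and the talk does not specify this particular algorithm.

```python
# Illustrative greedy set-cover sketch for Case 3: activate components
# until every needed object is stored on at least one active component.

def greedy_cover(needed, components):
    """components: dict mapping component name -> set of objects it stores."""
    needed = set(needed)
    active = []
    while needed:
        # Activate the component covering the most still-needed objects.
        best = max(components, key=lambda c: len(components[c] & needed))
        if not components[best] & needed:
            raise ValueError("some objects are stored on no component")
        active.append(best)
        needed -= components[best]
    return active

components = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
print(greedy_cover([1, 2, 4, 5], components))  # e.g. ['A', 'C']
```

Greedy gives a logarithmic approximation to the optimal cover, which is usually acceptable here since exact set cover is NP-hard.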

  15. • write-through: write to all components (even if that requires waking some) • write-offloading: write to active components only (propagate on wake) • write log: propagate when the log is nearly full • reaper: write to all components, but only wake them when the queue is full
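The contrast between the first two policies can be made concrete with a toy sketch. The Component class and policy details below are invented for illustration, not the authors' implementation; real systems would track dirty data and propagate the offload log on wake.

```python
# Toy sketch contrasting write-through and write-offloading.

class Component:
    def __init__(self, active=True):
        self.active = active
        self.data = []

    def wake(self):
        self.active = True

    def write(self, item):
        self.data.append(item)

def write_through(item, components):
    # Write to all components, waking any that are inactive.
    for c in components:
        c.wake()
        c.write(item)

def write_offloading(item, components, offload_log):
    # Write only to already-active components; log writes destined for
    # inactive components, to be propagated when those components wake.
    for c in components:
        if c.active:
            c.write(item)
        else:
            offload_log.setdefault(c, []).append(item)

awake, asleep = Component(active=True), Component(active=False)
log = {}
write_offloading("block-1", [awake, asleep], log)
# asleep stays inactive; its copy of block-1 sits in the offload log
```

The power trade-off is visible even in the sketch: write-through forces every sleeping component awake on each write, while write-offloading defers that wake-up at the cost of a redirection log.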

  16. [Figure: active units and request bandwidth over time, under write-through vs. write-offloading]

  17. Next steps • data not pictured here • latencies • ramp times • unit sizes • etc. • ways to slice it • how to visualize it • more workloads • go back to related work to compare • case 3 • object popularity

  18. The End. Questions?
