
Optimal Power-Down Strategies


Presentation Transcript


  1. Optimal Power-Down Strategies
  Chaitanya Swamy (Caltech), John Augustine and Sandy Irani (University of California, Irvine)

  2. Dynamic Power Management
  • A machine/server serves jobs/requests in the active state, which has a high power consumption rate.
  • An idle period is the gap between Request i and Request i+1; its length is a priori unknown.
  • During an idle period the machine can transition to a low-power state, incurring a power-down cost.
  • Idle power management: determine when to transition so as to minimize the total power consumed.

  3. Active state s0: power consumption rate = 1. Sleep state s1: power consumption rate = 0.
  • Transition cost d0,1 = cost to power down from s0 to s1.
  • Idle period length = t (not known in advance); decide when to transition from the active state to the sleep state.
  • A(t), OPT(t): total power consumed by strategy A and by the offline optimum when the idle period length is t.
  • Competitive ratio (c.r.) of A = max_t A(t)/OPT(t). Powering down at time d0,1 gives c.r. = 2 (it pays 2·d0,1 when OPT pays d0,1).
  • Suppose t is generated by a probability distribution. Expected power ratio (e.p.r.) of A = E_t[A(t)] / E_t[OPT(t)].
  • This is simply a continuous version of the ski-rental problem: you want to try skiing but are unsure of the number of ski trips, and can rent at $10/trip or buy for $100.
  [Figure: A and OPT vs. idle period length t for the two-state model]
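To make the two-state bound concrete, here is a minimal Python sketch (the function names and the sampling grid are mine, not from the slides) that compares the threshold strategy against the offline optimum; with the threshold set to d0,1 the worst-case ratio comes out to 2.

```python
# Assumed helper names; active rate 1, sleep rate 0, power-down cost d01.

def opt_cost(t, d01):
    # Offline optimum: either stay active for the whole idle period (cost t)
    # or power down immediately (cost d01), whichever is cheaper.
    return min(t, d01)

def alg_cost(t, d01, threshold):
    # Online threshold strategy: stay active until `threshold`, then power down.
    return t if t < threshold else threshold + d01

def worst_case_ratio(d01, threshold, samples=10_000):
    # Worst-case ratio of the online strategy to OPT over a grid of idle-period
    # lengths; the supremum is attained just after the threshold.
    ratios = []
    for i in range(1, samples + 1):
        t = 3 * d01 * i / samples          # sample idle periods up to 3*d01
        ratios.append(alg_cost(t, d01, threshold) / opt_cost(t, d01))
    return max(ratios)

if __name__ == "__main__":
    d01 = 100.0
    print(worst_case_ratio(d01, threshold=d01))   # prints 2.0: the c.r. of this strategy
```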

  4. DPM with multiple sleep states
  • Set of states S = (s0, s1, …, sk); s0 is the active state, the rest are sleep states.
  • ri : power consumption rate of si, with r0 > r1 > … > rk.
  • di,j : cost of transitioning from si to sj.
  • A power-down strategy is a tuple (S,T): S is a sequence of states starting at s0, and T is the corresponding sequence of transition times starting at t = 0.
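A minimal sketch of this model, using assumed names (DPMInstance, strategy_cost) and a plain list-of-lists representation of the transition costs; it just evaluates the power consumed by a power-down strategy (S, T) on an idle period of length t.

```python
from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class DPMInstance:
    rates: List[float]            # rates[i] = r_i, with rates[0] the active state
    d: List[List[float]]          # d[i][j] = cost of transitioning from s_i to s_j

def strategy_cost(inst: DPMInstance, states: Sequence[int],
                  times: Sequence[float], t: float) -> float:
    """Power used by strategy (S, T) on an idle period of length t.

    states[0] = 0 and times[0] = 0: the strategy starts in the active state,
    moves to states[m] at time times[m], and stays there until the next
    transition or until the request arrives at time t.
    """
    cost = 0.0
    for m, s in enumerate(states):
        start = times[m]
        if start >= t:
            break
        end = times[m + 1] if m + 1 < len(times) else float("inf")
        cost += inst.rates[s] * (min(end, t) - start)      # running cost in s
        if m + 1 < len(times) and times[m + 1] < t:        # next transition happens
            cost += inst.d[s][states[m + 1]]
    return cost

# Example with hypothetical numbers: two sleep states, power down to s1 at
# time 5 and to s2 at time 20, idle period of length 30.
# inst = DPMInstance(rates=[1.0, 0.4, 0.0], d=[[0, 3, 10], [0, 0, 8], [0, 0, 0]])
# strategy_cost(inst, states=[0, 1, 2], times=[0.0, 5.0, 20.0], t=30.0)
```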

  5. [Figure: power consumed vs. idle period length t for the strategies that power down to s1, s2, or s3, whose costs start at d0,1, d0,2, d0,3 respectively]

  6. Follow-OPT Strategy
  • OPT is the lower envelope of the lines d0,i + ri·t: for a known idle period length t, the offline optimum powers down directly to the state minimizing d0,i + ri·t.
  • Follow-OPT occupies, at every time t, the state that OPT would use if the idle period ended exactly at t, moving through s0, s1, s2, … at the envelope's breakpoints and paying d0,1, d1,2, d2,3, … along the way.
  [Figure: the lines for s0 through s3, their lower envelope OPT, and the Follow-OPT transitions]
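The lower-envelope picture translates directly into code. A small sketch (my own names; it assumes strictly decreasing rates and direct power-down costs d0[i] from s0 with d0[0] = 0) that computes OPT(t) and the times at which Follow-OPT changes state:

```python
def opt_cost(rates, d0, t):
    # OPT knows t, so it powers down immediately to the single best state:
    # the lower envelope of the lines d0[i] + rates[i] * t.
    return min(d0[i] + rates[i] * t for i in range(len(rates)))

def follow_opt_schedule(rates, d0):
    """(state, time) pairs at which Follow-OPT transitions: at every time t it
    occupies the state OPT would use if the idle period ended exactly at t."""
    schedule = [(0, 0.0)]
    cur, now = 0, 0.0
    while True:
        nxt, nxt_time = None, float("inf")
        for j in range(len(rates)):
            if rates[j] < rates[cur]:
                # time at which line j crosses below the current state's line
                t = (d0[j] - d0[cur]) / (rates[cur] - rates[j])
                if now <= t < nxt_time:
                    nxt, nxt_time = j, t
        if nxt is None:
            return schedule
        schedule.append((nxt, nxt_time))
        cur, now = nxt, nxt_time

# Example with hypothetical numbers (three sleep states):
# follow_opt_schedule([1.0, 0.5, 0.2, 0.0], [0.0, 40.0, 100.0, 300.0])
# returns [(0, 0.0), (1, 80.0), (2, 200.0), (3, 1000.0)]
```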

  7. Two Types of Bounds
  • Global bound: what is the smallest c.r. (or e.p.r.) r* such that every DPM instance has a power-down strategy of c.r. (or e.p.r.) at most r*?
  • Instance-wise bound: given a DPM instance I, what is the best c.r. (or e.p.r.) r(I) for that instance?
  • Clearly r* = max over instances I of r(I).
  • We would like an algorithm that, given an instance I, computes a strategy with c.r. (or e.p.r.) equal to r(I).

  8. Related Work
  • 2-state DPM (the ski-rental problem):
    • Karlin, Manasse, Rudolph & Sleator: global bound of 2 for the competitive ratio.
    • Karlin, Manasse, McGeoch & Owicki: global bound of e/(e-1) for the expected power ratio.
    • It is easy to give instance-wise optimal strategies.
  • Multi-state DPM:
    • Irani, Gupta & Shukla: global bounds for additive transition costs, di,k = di,j + dj,k for all i < j < k (called DPM-A). They show that Follow-OPT has c.r. = 2 and give a strategy with expected power ratio e/(e-1).
  • Other extensions: the capital investment problem (Azar et al.) can be viewed as DPM where states "arrive" over time, but with more restrictive transition costs.

  9. Our Results
  • We give the first bounds for (general) multi-state DPM.
  • Global bounds: a simple algorithm that computes a strategy with competitive ratio r* ≤ 5.83.
  • Instance-wise bounds: given an instance I,
    • find a strategy with c.r. r(I) + ε in time O(k² log k · log(1/ε)); use this to show a lower bound of r* ≥ 2.45;
    • find a strategy with the optimal expected power ratio for the instance.

  10. Finding the Optimal Strategy
  • A DPM instance I is given; we want to find a strategy with the optimal competitive ratio for I.
  • Decision procedure: given r, find a strategy with c.r. ≤ r or report that none exists.
  • We need to determine (a) the state sequence and (b) the transition times.
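Given such a decision procedure, the instance-wise bound of slide 9 comes from a standard binary search over r. A minimal sketch, assuming decide(r) answers whether some strategy achieves c.r. ≤ r on the instance (for example, the dynamic program sketched after slide 14):

```python
def best_competitive_ratio(decide, eps=1e-3):
    """Return (approximately) the smallest r for which decide(r) is True.

    decide(r) -> True iff some power-down strategy achieves c.r. <= r on the
    instance; it is assumed to be monotone in r.
    """
    lo, hi = 1.0, 2.0
    while not decide(hi):            # first find any feasible upper bound
        hi *= 2.0
    while hi - lo > eps:             # then binary-search down to +/- eps
        mid = (lo + hi) / 2.0
        if decide(mid):
            hi = mid
        else:
            lo = mid
    return hi                        # a ratio of at most r(I) + eps
```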

  11. Claim: For any strategy A, c.r.(A) = max over the transition times t of A of A(t)/OPT(t).
  [Figure: power consumed by A and by OPT vs. idle period length t]

  12. Suppose A = (S,T) has c.r. ≤ r and transitions to a state s ∈ S at a time t1 ∈ T such that A(t1) < r·OPT(t1). Then we can find new transition times T' such that (a) A' = (S,T') has c.r. ≤ r, and (b) A' transitions to s at a time t' < t1.
  [Figure: A, OPT and r·OPT vs. idle period length t, with A's transition at t1 lying strictly below r·OPT]

  13. tA(s) = transition time into s in strategy A.
  • Strat(s) = set of (partial) strategies A ending at s such that c.r.(A) ≤ r on [0, tA(s)].
  • E(s) = min over A' ∈ Strat(s) of tA'(s) = the early transition time of s.
  • Let A be a strategy attaining this minimum. Properties of A:
    • (a) A(E(s)) = r·OPT(E(s)).
    • (b) All transitions before s are at early transition times: for all states q before s, tA(q) = E(q).
  [Figure: A, OPT and r·OPT vs. idle period length t; A meets r·OPT at its transition into s, at time tA(s) = E(s)]

  14. Dynamic Programming
  • Compute the E(s) values by dynamic programming. Suppose we know E(s') for all states s' before s. Then E(s) = min over states s' before s of (the time at which the partial strategy through s' transitions to s).
  • To calculate the quantity in brackets, use that:
    • the transition into s' was at t' = E(s') with A(t') = r·OPT(t'), and
    • the transition into s must be at a time t such that A(t) = r·OPT(t).
  • Finally, if E(s) is finite for some state s with power consumption rate rs ≤ r·rk, then we have a strategy ending at s with c.r. ≤ r. A sketch of this procedure follows below.
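A hedged sketch of this dynamic program, assuming the model of slide 4 with d[0][0] = 0 and d[i][j] the direct transition costs; the names are mine, and this unoptimized version takes roughly O(k³) time per call rather than the O(k² log k) of slide 9. For each earlier state s' it extends the tight partial strategy of slide 13 and solves A(t) + ds',s = r·OPT(t) segment by segment on the piecewise-linear function r·OPT.

```python
import math

def r_opt(rates, d0, r, t):
    # r * OPT(t), where OPT(t) is the lower envelope of the lines d0[i] + rates[i]*t.
    return r * min(d0[i] + rates[i] * t for i in range(len(rates)))

def earliest_transition_times(rates, d, r):
    """E(s) for every state s: the earliest time at which some partial strategy
    that keeps A(t) <= r*OPT(t) on its prefix can have transitioned into s."""
    k = len(rates)
    d0 = d[0]
    E = [math.inf] * k
    E[0] = 0.0                      # the strategy starts in the active state s0
    for s in range(1, k):
        for sp in range(s):         # candidate predecessor s' with E(s') known
            if math.isinf(E[sp]):
                continue
            base = r_opt(rates, d0, r, E[sp])   # property (a): A(E(s')) = r*OPT(E(s'))
            # Earliest t >= E(s') with base + rates[sp]*(t - E(s')) + d[s'][s] = r*OPT(t),
            # found by intersecting with each linear segment d0[i] + rates[i]*t of OPT.
            for i in range(k):
                den = r * rates[i] - rates[sp]
                if den <= 0:
                    continue        # A grows at least as fast as r*OPT here: no crossing
                num = base - rates[sp] * E[sp] + d[sp][s] - r * d0[i]
                t = num / den
                on_segment = r_opt(rates, d0, r, t) >= r * (d0[i] + rates[i] * t) - 1e-9
                if t >= E[sp] and on_segment:
                    E[s] = min(E[s], t)
    return E

def has_strategy_with_cr(rates, d, r):
    # Decision procedure: some state s with finite E(s) and rate <= r*rk
    # can be held forever while keeping the competitive ratio at most r.
    E = earliest_transition_times(rates, d, r)
    rk = rates[-1]
    return any(not math.isinf(E[s]) and rates[s] <= r * rk for s in range(len(rates)))
```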

  15. Global Bound
  • We may assume that there are no power-up costs and that di,j ≤ d0,j.
  • Scale/prune the states to ensure that d0,i / d0,i+1 ≤ c, where c < 1 (see the sketch below).
  • Run the Follow-OPT strategy on the resulting instance.
  • Theorem: this gives a competitive ratio of 5.83.
  [Figure: the lines for s0 through s3, their lower envelope OPT, and the Follow-OPT transitions paying d0,1, d1,2, d2,3]
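One plausible reading of the scaling step is a pruning of the sleep states so that consecutive power-down costs from s0 grow by at least a factor 1/c. The sketch below is my own construction under that reading, not necessarily the paper's exact rule: scan from the deepest state toward s0 and keep a state only if its cost is at most c times the cost of the last kept state.

```python
def prune_states(rates, d0, c):
    """Indices of kept states, always including s0 and the deepest sleep state.

    Kept states satisfy d0[i] <= c * d0[j] for consecutive kept i < j. A dropped
    state can be replaced by the next deeper kept state: its power consumption
    rate is lower and its power-down cost from s0 is larger by at most a
    factor of 1/c.
    """
    k = len(rates) - 1
    kept = [k]                          # always keep the deepest sleep state
    for i in range(k - 1, 0, -1):       # scan toward the active state
        if d0[i] <= c * d0[kept[-1]]:
            kept.append(i)              # cheap enough relative to the deeper kept state
    kept.append(0)                      # always keep the active state
    return list(reversed(kept))

# Example with hypothetical costs:
# prune_states([1, .6, .5, .1, 0], [0, 10, 12, 90, 100], c=0.5) returns [0, 2, 4];
# s1 and s3 are dropped because their costs are within a factor 1/c of a
# deeper state's cost.
```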

  16. Open Questions • Randomized strategies: global or instance-wise bounds for randomized strategies. • Better lower bounds.

  17. Thank You.
