
Resource Management for Computer Operating Systems


Presentation Transcript


  1. Resource Management for Computer Operating Systems. Sarah Bird, PhD Candidate, Berkeley EECS; Burton Smith, Technical Fellow, Microsoft

  2. Background • Management of computing resources is a primary operating system responsibility • It lets multiple applications share a single system • These days, we face new challenges • Increasing numbers of cores (hardware threads) • Emergence of parallel applications • Quality of Service (QoS) requirements • The need to manage power and energy • Current practice fails to address these issues • Thread management is a good example

  3. Windows Thread Management • Windows maintains queues of runnable threads • One queue per priority per core • Each core chooses a thread from the head of its nonempty queue of highest priority and runs it • The thread runs for a “time quantum” unless it blocks or a higher priority queue becomes nonempty • Thread priority can change at scheduling events, with a new priority based on what just happened: • One thread unblocking another • I/O completion (keyboard, mouse, storage, network) • Preemption by a higher priority thread • Time quantum expiration • New thread creation • Other operating systems are pretty similar
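As a rough illustration of the per-core decision just described, here is a minimal Python sketch of a multilevel priority queue: scan from the highest priority down and run the thread at the head of the first nonempty queue. The 32 priority levels and all class and method names are assumptions made for the example, not Windows internals.

```python
from collections import deque

NUM_PRIORITIES = 32              # assumed level count; Windows uses priorities 0-31

class Core:
    def __init__(self):
        # one FIFO queue of runnable threads per priority level, for this core
        self.queues = [deque() for _ in range(NUM_PRIORITIES)]

    def enqueue(self, thread, priority):
        self.queues[priority].append(thread)

    def pick_next(self):
        # choose the thread at the head of the highest-priority nonempty queue;
        # it would then run for a time quantum unless it blocks or is preempted
        for priority in range(NUM_PRIORITIES - 1, -1, -1):
            if self.queues[priority]:
                return self.queues[priority].popleft()
        return None              # no runnable thread: the core idles
```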

  4. Some Obvious Shortcomings • Operating system thread scheduling is costly • It incurs (needless) changes in protection • User mode thread scheduling is much cheaper • Thread progress is unpredictable • Lock-free techniques were invented to avoid this • Processes have no control over threads • Despite their big role in memory management • Response times are impossible to guarantee • What number of threads is enough for a process? • Power and energy are ignored

  5. A Way Forward • Resources should be allocated to processes • Cores of various types • Memory (working sets) • Bandwidths, e.g. to shared caches, memory, storage and interconnection networks • The operating system should: • Optimize the responsiveness of the system • Respond to changes in user expectations • Respond to changes in process requirements • Maintain resource, power, and energy constraints. What follows is a scheme to realize this vision.

  6. Responsiveness • Run times determine process responsiveness • The time from a mouse click to its result • The time from a service request to its response • The time from job launch to job completion • The time to execute a specified amount of work • The relationship is usually a nonlinear one • Achievable responses may be needlessly fast • There is generally a threshold of acceptability • Run time depends on the resources • Some resources will have more effect than others • Effects will often vary with computational phase

  7. Penalty Functions • The penalty function of a process defines how run times translate into responsiveness • Its shape expresses the nonlinearity of the relationship • The shape will depend on the application and on the current user interface state (e.g. minimized) • We let the total penalty be the instantaneous sum of the current penalties of the running processes • Resources determine run times and therefore penalties • Assigning resources to processes to minimize the total penalty maximizes system responsiveness
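To make the idea concrete, here is a small sketch of two plausible penalty shapes and of the total penalty as an instantaneous sum over running processes. The quadratic and deadline forms, and all of the numbers, are illustrative assumptions rather than functions taken from the talk.

```python
def smooth_penalty(run_time, slope=2.0):
    # penalty grows convexly with run time (an assumed quadratic shape)
    return slope * run_time ** 2

def deadline_penalty(run_time, deadline=16.7, slope=5.0):
    # zero until the deadline (e.g. a 60 Hz frame budget in ms), then linear
    return slope * max(0.0, run_time - deadline)

def total_penalty(run_times, penalty_fns):
    # the total penalty is the instantaneous sum over the running processes
    return sum(fn(t) for fn, t in zip(penalty_fns, run_times))

# example: one latency-sensitive process and one batch-like process
print(total_penalty([20.0, 3.0], [deadline_penalty, smooth_penalty]))
```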

  8. Typical Penalty Functions [Figure: two typical penalty functions, each plotted against run time; one of them has a deadline marked.]

  9. Adjusting Penalty Functions • Penalty functions are like priorities, except: • They apply to processes, not kernel threads • They are explicit convex functions of the run time • The operating system can adjust their slopes • Up or down, based on user behavior or preference • The deadlines can probably be left alone • The total penalty is easy to compute • The objective is to keep minimizing it

  10. Run Time Functions • Generally, they exhibit “diminishing returns” as resources are added to a process • That is, run times are generally convex functions • They vary with input and must be measured • Sometimes run time increases with some resource • We may see “plateaus” or various types of “noise” [Figure: run time as a function of memory allocation.]

  11. Resource Allocation As Optimization Continuously minimize Σ_{p∈P} π_p(τ_p(r_{p,0}, …, r_{p,n-1})) with respect to the resource allocations r_{p,j}, where • P, π_p, τ_p, and the r_{p,j} are all time-varying; • P is the set of runnable processes; • r_{p,j} ≥ 0 is the allocation of resource j to process p; • The penalty π_p depends on the run time τ_p; • τ_p depends in turn on the allocated resources r_{p,j}; • Σ_{p∈P} r_{p,j} = A_j, the available quantity of resource j. • All unused (slack) resources are allocated to process 0
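A minimal sketch of this formulation using the cvxpy modeling library, under assumed run-time and penalty models: each process gets a diminishing-returns run time τ_p(r) = base_p + Σ_j work_{p,j}/r_{p,j} and a deadline-style penalty π_p(τ) = slope_p · max(0, τ - deadline_p). All names and numbers below are illustrative, not PACORA's actual models.

```python
import cvxpy as cp
import numpy as np

P, J = 3, 2                              # processes and resource types
A = np.array([16.0, 8.0])                # available quantity A_j of each resource
base = np.array([1.0, 0.5, 2.0])         # serial component of each run time
work = np.array([[30.0, 10.0],           # work_{p,j}: demand of process p on resource j
                 [12.0,  4.0],
                 [40.0, 20.0]])
deadline = np.array([10.0, 6.0, 20.0])
slope = np.array([3.0, 5.0, 1.0])

r = cp.Variable((P, J), nonneg=True)     # allocations r_{p,j}
tau = [base[p] + cp.sum(cp.multiply(work[p], cp.inv_pos(r[p]))) for p in range(P)]
pi = [slope[p] * cp.pos(tau[p] - deadline[p]) for p in range(P)]

# minimize the total penalty subject to the available resources;
# anything left over is the slack allocation of "process 0"
problem = cp.Problem(cp.Minimize(sum(pi)), [cp.sum(r, axis=0) <= A])
problem.solve()
print("allocations r_{p,j}:\n", r.value)
```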

  12. The Optimization Picture [Diagram: continuously minimize the total penalty, the sum of Penalty_i(Runtime_i(r_{0,i}, …, r_{n-1,i})) over all processes i, subject to the total available resources.]

  13. Convexity and Concavity • A function f is convex if for all x, y in ℝⁿ, λ in ℝ, and 0 ≤ λ ≤ 1, f(λx + (1-λ)y) ≤ λf(x) + (1-λ)f(y) • Chords of the function’s graph must lie on or above it • A function f is concave if for all x, y in ℝⁿ, λ in ℝ, and 0 ≤ λ ≤ 1, f(λx + (1-λ)y) ≥ λf(x) + (1-λ)f(y) • Alternatively, f is concave if and only if -f is convex • Affine functions Ax + b are both convex and concave • If functions f_i are convex so is their sum Σ_i f_i • If functions f_i are convex so is their maximum max_i f_i • If f is both convex and monotone non-decreasing and g is convex then h(x) = f(g(x)) is convex • If f is convex, so is the set C = { x ∈ dom(f) | f(x) ≤ α } • C is called the α-sublevel set of f
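A quick numerical illustration of the chord condition, using f(x) = x² as an example convex function (the particular function and points are arbitrary choices):

```python
import numpy as np

f = lambda x: x ** 2                            # a convex function
x, y = -1.0, 3.0
for lam in np.linspace(0.0, 1.0, 11):
    chord = lam * f(x) + (1 - lam) * f(y)       # point on the chord between (x, f(x)) and (y, f(y))
    graph = f(lam * x + (1 - lam) * y)          # point on the graph of f
    assert graph <= chord + 1e-12               # the chord lies on or above the graph
```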

  14. Convex Optimization • A convex optimization problem has the form: Minimize f_0(x_1, …, x_m) subject to f_i(x_1, …, x_m) ≤ 0, i = 1, …, k, where the functions f_i: ℝᵐ → ℝ are all convex • Convex optimization has several virtues • Any local minimum is guaranteed to be a global minimum • It is not much slower than linear programming • In our formulation, resource management is a convex optimization problem

  15. Managing Power and Energy • Total system power can be limited by an affine resource constraint Σ_j w_j · Σ_{p≠0} r_{p,j} ≤ W • Total power can also be limited using π_0 and τ_0 • Assume all slack resources r_{0,j} are powered off • Instead of being a run time, τ_0 is total power • It will be convex in each of the slack resources r_{0,j} • π_0 has a slope that can depend on the battery charge • Low-penalty work loses to π_0 when the battery is depleted [Figure: the penalty π_0 as a function of total power; as the charge depletes, the slope increases.]
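Continuing the illustrative cvxpy sketch from slide 11, here is a hedged example of adding an affine power cap and a battery-dependent penalty π_0 on total power. The watt weights w_j, the cap W, the battery slope, and the simplified per-process penalties are all made-up values for the sketch.

```python
import cvxpy as cp
import numpy as np

P, J = 3, 2
A = np.array([16.0, 8.0])                 # available resources A_j
work = np.array([[30., 10.], [12., 4.], [40., 20.]])
slope = np.array([3.0, 5.0, 1.0])
w = np.array([2.0, 1.5])                  # watts per allocated unit of resource j
W = 40.0                                  # hard power cap
battery_slope = 4.0                       # slope of pi_0; raised as the battery depletes

r = cp.Variable((P, J), nonneg=True)
tau = [cp.sum(cp.multiply(work[p], cp.inv_pos(r[p]))) for p in range(P)]
pi_procs = sum(slope[p] * tau[p] for p in range(P))

allocated = cp.sum(r, axis=0)             # slack r_{0,j} = A_j - allocated_j is powered off
total_power = w @ allocated               # tau_0: total power, affine in the slack resources
pi_0 = battery_slope * total_power        # convex power penalty that competes with low-penalty work

problem = cp.Problem(cp.Minimize(pi_procs + pi_0),
                     [allocated <= A, total_power <= W])
problem.solve()
print("power draw:", total_power.value)
```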

  16. Modeling Run Times • Run time measurements can be used to construct a model, yielding several advantages • “Noise” can be removed while building the model • Interpolation among measurements is avoided • Rates of change with resources are computable • We have constructed several types of model • Linear and quadratic (rate) models • Kernel Canonical Correlation Analysis • Genetically Programmed Response Surfaces • Most recently, convex run time models

  17. Convex Models [Model equation (legend): w: learned parameters; b: bandwidth allocations; bandwidth amplifications; m: memory allocations.]

  18. Modeling Results • Our convex models fit the data very well • We are now seeing how they work in practice • We think linear least-squares applied to past run time data can learn the model parameters • The QR matrix factorization can be updated and downdated to track changes in process behavior • We have also discovered some interesting artifacts along the way…
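A minimal sketch of the least-squares idea, with an assumed two-parameter diminishing-returns model τ(r) ≈ w_0 + w_1/r fit to made-up measurements; numpy's batch solver stands in here for the incremental QR update/downdate mentioned above.

```python
import numpy as np

# measured (allocation, run_time) pairs for one process, e.g. cores vs. frame time
allocs = np.array([1, 2, 4, 8, 16], dtype=float)
times  = np.array([41.0, 22.5, 12.8, 8.1, 6.0])

# features [1, 1/r], so the fitted model is tau(r) ~= w0 + w1 / r
X = np.column_stack([np.ones_like(allocs), 1.0 / allocs])
w, *_ = np.linalg.lstsq(X, times, rcond=None)

predict = lambda r: w[0] + w[1] / r
print(predict(12.0))   # query the model between measurements, smoothing out noise
```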

  19. Nbodies Frame Time

  20. Nbodies Frame Time

  21. PACORA • PACORA stands for “Performance-Aware Convex Optimization for Resource Allocation” • See our paper in the poster session of HotPar ‘11 • It manages resource allocation policy in the Berkeley ParLab’s Tessellation operating system • One of us (Bird) is currently implementing it • Nearly all of the ideas presented here are being prototyped in PACORA and in Tessellation • If these ideas work they may be used elsewhere

  22. PACORA in Tessellation [Architecture diagram showing: global and user policies and preferences, which set penalty functions and issue major change requests; admission control (ACK/NACK) for app creation and resizing requests from users; minor changes, offline models, and behavioral parameters feeding the Policy Service, which builds and refines resource value functions and performs convex optimization over all system resources (the current application store and current resources); app groups receiving fractions of the resources, with decision validation; and performance reports from user/system performance/energy counters flowing back alongside the resource allocations.]

  23. Acknowledgements • Stephen Boyd and Lieven Vandenberghe, whose book taught us about convex optimization • Krste Asanović, David Patterson, and John Kubiatowicz at Berkeley, for their strong support • The Microsoft Windows developers, for giving us real applications and ways to measure them • Colleagues at the UC Berkeley ParLab and at Microsoft Research, for helping us tell the story

  24. Thank you. Any Questions?
