
Load Balancing



Presentation Transcript


  1. Load Balancing
  Definition: A load is balanced if no processes are idle.
  • How?
    • Partition the computation into units of work (tasks or jobs)
    • Assign tasks to different processors
  • Load Balancing Categories
    • Static (load assigned before the application runs)
    • Dynamic (load assigned as the application runs)
      • Centralized (tasks assigned by the master or root process)
      • Decentralized (tasks reassigned among slaves)
    • Semi-dynamic (application periodically suspended and load balanced)
  • Load balancing algorithms are:
    • Adaptive if they adapt to system load levels using thresholds
    • Stable if load-balancing traffic is independent of load levels
    • Symmetric if both senders and receivers initiate action
    • Effective if load-balancing overhead is minimal
  Note: Load balancing is an NP-complete problem.

  2. Improving the Load Balance
  By realigning processing work, we improve speed-up.

  3. Static Load Balancing
  Done prior to executing the parallel application (a sketch of the two simplest schemes follows below)
  • Round Robin: tasks are assigned sequentially to processors; if tasks > processors, the allocation wraps around
  • Randomized: tasks are assigned randomly to processors
  • Partitioning: tasks are represented by a graph
    • Recursive Bisection
    • Simulated Annealing
    • Genetic Algorithms
    • Multi-level Contraction and Refinement
  • Advantages
    • Simple to implement
    • Minimal run-time overhead
  • Disadvantages
    • Execution times often cannot be predicted in advance
    • The effect of communication dynamics is often ignored
    • The number of iterations processors need to converge on a solution is often indeterminate
  Note: The Random algorithm is a popular benchmark for comparison
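  A minimal sketch of the Round Robin and Randomized schemes, assuming tasks are identified by integer indices and that task costs are unknown when the assignment is made (Python is used here for illustration only):

    import random

    def round_robin_assign(num_tasks, num_procs):
        # Task i goes to processor i % num_procs; the allocation wraps around
        # when there are more tasks than processors.
        return {t: t % num_procs for t in range(num_tasks)}

    def random_assign(num_tasks, num_procs, seed=0):
        # Each task goes to a uniformly random processor.
        rng = random.Random(seed)
        return {t: rng.randrange(num_procs) for t in range(num_tasks)}

    print(round_robin_assign(10, 4))   # {0: 0, 1: 1, 2: 2, 3: 3, 4: 0, ...}
    print(random_assign(10, 4))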

  4. A Load Balancing Partitioning Graph
  • The nodes represent tasks
  • The edges represent communication cost
  • The node values represent processing cost
  • A second node value could represent reassignment cost
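  As a rough illustration of how such a graph is scored (the node and edge weights below are made up, not taken from the slide's figure), a processor's load can be computed as the processing cost of its tasks plus the communication cost of edges cut by the partition; this mirrors the per-processor load computation shown later in slide 15:

    # Hypothetical task graph: node -> processing cost, edge -> communication cost.
    processing_cost = {"A": 9, "B": 4, "C": 7, "D": 2}
    communication_cost = {("A", "B"): 3, ("B", "C"): 1, ("C", "D"): 7}

    def partition_load(partition, processing_cost, communication_cost):
        # Load = sum of node costs in the partition + cost of edges cut by it.
        compute = sum(processing_cost[n] for n in partition)
        comm = sum(w for (u, v), w in communication_cost.items()
                   if (u in partition) != (v in partition))   # edge crosses the cut
        return compute + comm

    print(partition_load({"A", "B"}, processing_cost, communication_cost))   # 13 + 1 = 14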

  5. Dynamic Load Balancing
  Done as a parallel application executes
  • Centralized
    • A single process hands out tasks
    • Processes ask for more work when their processing completes
    • Double buffering (asking for more work while still working) can be effective
  • Decentralized
    • Processes detect that their work load is low
    • Processes sense an overload condition
      • When new tasks are spawned during execution
      • When a sudden increase in task load occurs
  • Questions
    • Which neighbors should participate in the rebalancing?
    • How should the adaptive thresholds be set?
    • What communications are needed to balance?
    • How often should balancing occur?

  6. Centralized Load Balancing
  Work Pool, Processor Farm, or Replicated Worker Algorithm
  • Master processor: maintains the work pool (queue, heap, etc.)
        WHILE ((task = Remove()) != null)
            Receive(pi, request_msg)
            Send(pi, task)
        WHILE (more processes)
            Receive(pi, request_msg)
            Send(pi, termination_msg)
  • Slave processor: performs a task and then asks for another
        task = Receive(pmaster, message)
        WHILE (task != terminate)
            Process task
            Send(pmaster, request_msg)
            task = Receive(pmaster, message)
  In this case, the slaves do not spawn new tasks.
  How would the pseudo code change if they did?
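  The pseudo code above assumes message passing (MPI-style Send/Receive). A rough, runnable stand-in using Python threads and queues to mimic the same request/reply pattern; names such as work_pool and request_q are illustrative, not from the slide:

    import queue, threading

    NUM_WORKERS = 4
    work_pool = queue.Queue()          # master's pool of tasks
    for t in range(10):
        work_pool.put(t)

    request_q = queue.Queue()          # workers post (worker_id, reply_q) requests here

    def master():
        # Hand out tasks on request; once the pool is empty, answer each request
        # with a termination marker (None) until every worker has been told to stop.
        served_terminations = 0
        while served_terminations < NUM_WORKERS:
            worker_id, reply_q = request_q.get()
            try:
                reply_q.put(work_pool.get_nowait())    # Send(pi, task)
            except queue.Empty:
                reply_q.put(None)                      # Send(pi, termination_msg)
                served_terminations += 1

    def worker(worker_id):
        reply_q = queue.Queue()
        while True:
            request_q.put((worker_id, reply_q))        # Send(pmaster, request_msg)
            task = reply_q.get()                       # task = Receive(pmaster, message)
            if task is None:
                break
            print(f"worker {worker_id} processed task {task}")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_WORKERS)]
    threads.append(threading.Thread(target=master))
    for th in threads:
        th.start()
    for th in threads:
        th.join()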

  7. Decentralized Load Balancing
  (Worker processes interact among themselves)
  • There is no master processor
  • Each processor maintains a work queue
  • Processors interact with neighbors to request and distribute tasks

  8. Decentralized Mechanisms
  Balancing is among a subset of the total running processes
  [Figure: each process runs the application, a balancing algorithm, and a local task queue]
  • Receiver Initiated
    • A process requests tasks when it is about to go idle
    • Effective when the load is heavy
    • Unstable when the load is light (a request frequency threshold is necessary)
  • Sender Initiated
    • A process with a heavy load distributes the excess
    • Effective when the load is light
    • Can cause thrashing when loads are heavy (synchronizing system load with neighbors is necessary)

  9. Process Selection
  • Global or Local?
    • Global involves all of the processors of the network
      • May require expensive global synchronization
      • May be difficult if the load dynamic is rapidly changing
    • Local involves only neighbor processes
      • Overall load may not be balanced
      • Easier to manage and less overhead than the global approach
  • Neighbor selection algorithms
    • Random: randomly choose another process
      • Easy to implement, and studies show reasonable results
    • Round Robin: select among neighbors using modular arithmetic
      • Easy to implement; results are similar to random selection
    • Adaptive Contracting: issue bids to neighbors; the best bid wins
      • A handshake between neighbors is needed
      • It is possible to synchronize loads

  10. Choosing Thresholds
  • How do we estimate system load?
    • A synchronization step averages the task queue lengths across processes
    • Average number of tasks or projected execution time
  • When is the load low?
    • When a process is about to go idle
    • Goal: prevent idleness, not achieve perfect balance
    • A low threshold constant is sufficient
  • When is the load high?
    • When some processes have many tasks and others are idle
    • Goal: prevent thrashing
    • Synchronization among processors is necessary
    • An exponentially growing threshold works well
  • What is the job request frequency?
    • Goal: minimize load balancing overhead
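  A toy encoding of these rules, with made-up constants, just to show where a constant low threshold and an exponentially growing high threshold would sit:

    LOW_THRESHOLD = 2            # assumed constant: request work when the queue is nearly empty

    def high_threshold(imbalance_rounds):
        # Exponentially growing "high load" threshold: tolerate more imbalance
        # each round before triggering another (expensive) rebalance.
        return 4 * (2 ** imbalance_rounds)

    def should_request_work(local_queue_len):
        # Receiver-initiated: ask neighbors just before going idle.
        return local_queue_len <= LOW_THRESHOLD

    def should_shed_work(local_queue_len, avg_queue_len, imbalance_rounds):
        # Sender-initiated: shed excess only when well above the current high threshold.
        return local_queue_len > avg_queue_len + high_threshold(imbalance_rounds)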

  11. Gradient Algorithm
  Maintains a global pressure grid
  [Figure: a grid of nodes, each labeled with its hop distance to the lightly-loaded node L]
  • Node data structures (per neighbor)
    • Distance, in hops, to the nearest lightly-loaded process
    • A load status flag indicating whether the current processor is lightly loaded or normal
  • Routing
    • Spawned jobs go to the nearest lightly-loaded process
  • Local synchronization
    • Node status changes are multicast to the node's neighbors
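  A rough sketch of how each node's pressure value (hop distance to the nearest lightly-loaded node) can emerge from repeated local updates; the grid size, neighbor function, and cap value are illustrative assumptions:

    WIDTH, HEIGHT = 3, 3
    CAP = WIDTH + HEIGHT                 # upper bound on any hop distance

    def neighbors(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < WIDTH and 0 <= ny < HEIGHT:
                yield nx, ny

    lightly_loaded = {(0, 0)}            # the node marked 'L' in the slide's figure

    pressure = {(x, y): 0 if (x, y) in lightly_loaded else CAP
                for x in range(WIDTH) for y in range(HEIGHT)}

    changed = True
    while changed:                       # each pass mimics nodes exchanging status with neighbors
        changed = False
        for (x, y), p in list(pressure.items()):
            if (x, y) in lightly_loaded:
                continue
            best = min(pressure[n] for n in neighbors(x, y)) + 1
            if best < p:
                pressure[(x, y)] = best
                changed = True

    print(pressure)   # spawned jobs are routed toward decreasing pressure values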

  12. Symmetric Broadcast Networks (SBN)
  [Figure: an 8-node SBN rooted at node 5; Stage 2 contains node 1, Stage 1 contains nodes 3 and 7, Stage 0 contains nodes 4, 2, 0, and 6; the network provides global synchronization]
  • Characteristics
    • A unique SBN starts at each node
    • Each SBN is lg P stages deep
    • Simple algebraic operations compute each node's successors
    • Easily adapts to the hypercube
  • Algorithm
    • Starts with a lightly loaded process
    • Phase 1: SBN broadcast
    • Phase 2: gather task queue lengths
    • Load is balanced during the broadcast and gather phases
  • Successor computation for node p at stage s (here P = 8, lg P = 3)
    • Successor 1 = (p + 2^(s-1)) % P, for 1 ≤ s ≤ 3
    • Successor 2 = p - 2^(s-1), for 1 ≤ s < 3 (if Successor 2 < 0, add P)
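  Using the successor formulas as reconstructed above, a small check that expanding the SBN rooted at node 5 reproduces the stages in the slide's figure (Python used only for illustration):

    P = 8
    STAGES = 3            # lg P

    def successors(p, s):
        # Successors of node p at stage s; the top stage has a single successor.
        succ = [(p + 2 ** (s - 1)) % P]
        if s < STAGES:
            s2 = p - 2 ** (s - 1)
            if s2 < 0:
                s2 += P
            succ.append(s2)
        return succ

    frontier = [5]                          # stage 3: the root node
    for s in range(STAGES, 0, -1):
        frontier = [q for p in frontier for q in successors(p, s)]
        print(f"stage {s - 1}: {frontier}")
    # Output: stage 2: [1], stage 1: [3, 7], stage 0: [4, 2, 0, 6]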

  13. Line Balancing Algorithm
  Uses a pipeline approach
  [Figure: pi requests a task when its queue is not full, receives the task, and delivers tasks to pi+1 when pi+1 posts a request; each processor dequeues and processes tasks from its own queue]
  • Master or slave processors adjust the pipeline
  • Slave processors
    • Request and receive tasks if their queue is not full
    • Pass tasks on if a task request is posted
  • Non-blocking receives are necessary to implement this algorithm
  Note: This algorithm easily extends to a tree topology

  14. Semi-dynamic
  • Pseudo code
        Run algorithm
        WHEN it is time to check the balance
            Suspend the application
            IF the load is balanced, resume the application
            ELSE
                Re-partition the load
                Distribute data structures among processors
                Resume execution
  • Partitioning
    • Model application execution by a partitioning graph
    • Partitioning is an NP-complete problem
    • Goals: balance processing and minimize communication and relocation costs
  • Partitioning heuristics
    • Recursive Bisection, Simulated Annealing, Multi-level, MinEx
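  As one example of the heuristics named above, a much-simplified recursive bisection over a one-dimensional list of task weights; real partitioners work on the task graph and also account for communication and relocation costs, while this sketch balances processing weight only and assumes the part count is a power of two:

    def recursive_bisection(weights, parts):
        # Split a list of task weights into `parts` contiguous groups of
        # roughly equal total weight by repeatedly halving at the weight median.
        if parts == 1:
            return [weights]
        total, running, split = sum(weights), 0, 0
        for i, w in enumerate(weights):
            if running + w > total / 2:
                split = i
                break
            running += w
        return (recursive_bisection(weights[:split], parts // 2) +
                recursive_bisection(weights[split:], parts // 2))

    print(recursive_bisection([9, 4, 7, 2, 6, 2, 4, 8], 4))
    # [[9], [4, 7], [2, 6, 2], [4, 8]] -> group loads 9, 11, 10, 12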

  15. Partitioning Graph
  [Figure: a task graph split between processors P1 and P2; each task is labeled PxRy (processing cost x, reassignment cost y, e.g. P9R6) and each edge cz (communication cost z, e.g. c4)]
  P1 load = (9 + 4 + 7 + 2) + (4 + 3 + 1 + 7) = 37
  P2 load = (6 + 2 + 4 + 8 + 5) + (4 + 3 + 1 + 7) = 40
  Question: When can we move a task to improve load balance?
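  One way to answer the slide's question: a move helps only if the larger of the two processor loads goes down once the moved processing cost and the changed edge cuts are accounted for. The helper and figures below are illustrative, not taken from the slide's graph:

    def move_improves_balance(load_src, load_dst, delta_src, delta_dst):
        # delta_src / delta_dst: the change each processor's load would see
        # if the task moved from the source to the destination processor.
        before = max(load_src, load_dst)
        after = max(load_src + delta_src, load_dst + delta_dst)
        return after < before

    # Example with the slide's totals: P2 (load 40) considers shedding a task to P1 (load 37).
    # Suppose the move removes 4 units from P2 but adds 5 to P1 (processing plus new edge cuts):
    print(move_improves_balance(40, 37, -4, +5))   # False: the maximum would rise to 42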

  16. Distributed Termination
  • Insufficient condition for distributed termination
    • Empty task queues at every process
  • Sufficient conditions for distributed termination
    • All local termination conditions are satisfied
    • No messages are in transit that could restart an inactive process
  • Termination algorithms
    • Acknowledgment
    • Ring
    • Tree
    • Fixed energy distribution

  17. Acknowledgement Termination
  [Figure: Pi sends a first task to Pj; Pj moves from inactive to active, and acknowledges that first task only when it returns to inactive]
  • When a process receives a task
    • It immediately acknowledges the task if the source is not its parent
    • It acknowledges its parent as it goes idle
  • A process goes idle after it
    • completes processing its local tasks
    • sends all of its acknowledgments
    • receives all of its pending acknowledgments
  • Notes
    • The process whose initial task activates another process becomes that process's parent
    • A process always goes inactive before its parent
    • When the master goes inactive, termination occurs

  18. Single Pass Ring Termination
  • Pseudo code
        P0 sends a token to P1 when it goes idle
        Pi receives the token
            IF Pi is idle, it passes the token to Pi+1
            ELSE Pi sends the token to Pi+1 when it goes idle
        P0 receives the token
            Broadcast the final termination message
  • Assumptions
    • Processes cannot reactivate after going idle
    • Processes cannot pass new tasks to an idle process
  [Figure: the token travels around the ring P0, P1, P2, ..., Pn and back to P0]
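  A toy, sequential walk-through of the single-pass rule under its stated assumptions (idle processes stay idle and receive no new work); the process count and workloads are made up:

    import itertools

    NUM_PROCS = 4
    remaining_work = [3, 1, 2, 0]          # pending tasks per process; no new tasks arrive

    def idle(p):
        return remaining_work[p] == 0

    # Each "round" every busy process finishes one task; the token leaves P0 once P0
    # is idle and is forwarded by each Pi only when Pi itself is idle.
    token_at = None
    for round_no in itertools.count():
        for p in range(NUM_PROCS):
            if remaining_work[p] > 0:
                remaining_work[p] -= 1
        if token_at is None and idle(0):
            token_at = 1                                # P0 sends the token to P1
        while token_at is not None and token_at < NUM_PROCS and idle(token_at):
            token_at += 1                               # Pi forwards the token to Pi+1
        if token_at == NUM_PROCS:                       # token back at P0: terminate
            print(f"terminated after round {round_no}")
            break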

  19. Dual Pass Ring Termination
  Handles tasks sent to a process that has already passed the token on
  Key point: processors pass tokens (black or white) on only when they are idle
  • Pseudo code (only idle processors send tokens)
        WHEN P0 goes idle and has the token, it sends a white token to P1
        IF Pi sends a task to Pj where j < i
            Pi becomes a black process
        WHEN Pi (i > 0) receives the token and goes idle
            IF Pi is a black process
                Pi colors the token black, becomes white, and passes the token to P(i+1)%P
            ELSE Pi sends the token to P(i+1)%P unchanged in color
        IF P0 receives the token and is idle
            IF the token is white, the application terminates
            ELSE P0 sends a white token to P1
  • Process colors: white = ready for termination; black = sent a task to a lower-numbered process
  • Token colors: white = ready for termination; black = communication is still possible

  20. Tree Termination
  [Figure: terminated leaf nodes send tokens upward; each internal node ANDs its children's tokens]
  • When a leaf process terminates, it sends a token to its parent process
  • Internal nodes send tokens to their parent when all of their child processes have terminated
  • When the root node receives the token, the application can terminate

  21. Fixed Energy Termination
  Energy is defined by an integer or long value
  • P0 starts with the full energy
  • When Pi receives a task, it also receives an energy allocation
  • When Pi spawns tasks, it assigns them additional energy allocations from within its own allocation
  • When a process completes, it returns its energy allotment
  • The application terminates when the master becomes idle
  • Implementation
    • Problem: integer division eventually drives an allocation to zero
    • Solution:
      • Use a two-level energy allocation <generation, energy>
      • The generation increases each time the energy value goes to zero
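  A sketch of the two-level <generation, energy> idea: when an integer energy share can no longer be halved, a new generation is started instead of handing out zero. The names, the halving policy, and the FULL constant are illustrative assumptions:

    from typing import Tuple

    Energy = Tuple[int, int]        # (generation, amount)

    FULL = 1 << 16                  # assumed initial energy per generation

    def split(alloc: Energy) -> Tuple[Energy, Energy]:
        # Give half of the current allocation to a spawned task, keeping the rest.
        gen, amount = alloc
        if amount > 1:
            child = (gen, amount // 2)
            parent = (gen, amount - amount // 2)
        else:
            # Cannot halve an amount of 1: open a new, deeper generation instead.
            child = (gen + 1, FULL // 2)
            parent = (gen + 1, FULL - FULL // 2)
        return parent, child

    alloc = (0, FULL)
    for _ in range(20):                    # repeated spawning eventually crosses generations
        alloc, child = split(alloc)
    print(alloc, child)                    # the generation has increased; energy never hits zero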

  22. Fair Scheduling in Web Servers
  CS 213, Lecture 17, L.N. Bhuyan

  23. Objective
  • Create an arbitrary number of service quality classes and assign a priority weight to each class
  • Provide service differentiation for different user classes in terms of the allocation of CPU and disk I/O capacities
