
# Heuristics Summary for Heterogeneous Computing Systems - PowerPoint PPT Presentation


### Heuristics Summary for Heterogeneous Computing Systems

By: Chris Klumph and Kody Willman

References

Terminology

• Static Mappings

• 6 Example mappings

• 4 Graph chromosome mappings

• 1 Tree mapping

• Dynamic Mappings

• Immediate mode (5)

• Batch mode (3)


Types of Static Mappings

Static mapping is used when the full set of tasks to schedule is known in advance; the problem is simply to choose the best way to map them onto machines.

• Opportunistic Load Balancing (OLB)

• Minimum Execution Time (MET)

• Minimum Completion Time (MCT)

• Min-Min

• Max-Min

• Duplex

• Genetic Algorithms (GA)

• Simulated Annealing (SA)

• Genetic Simulated Annealing (GSA)

• Tabu

• A*


Example:

This example will be used to demonstrate six of the static heuristics:

OLB

MET

MCT

Min-Min

Max-Min

Duplex


Static Heuristics details

Opportunistic Load Balancing (OLB)

Completion Times of Example

• Assigns each task, in arbitrary order, to the machine expected to be available next, regardless of the task's execution time on that machine.

• The aim is simply to keep all machines as busy as possible, with little regard for optimization. Its advantage is its simplicity, but it can result in poor makespans.
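The OLB rule can be sketched in a few lines of Python. The ETC (expected execution time) matrix and ready times below are illustrative values, not the example from the slides.

```python
# OLB sketch: assign each task, in arrival order, to the machine that
# becomes available soonest, ignoring execution times entirely.
etc = [[20, 30], [15, 45], [10, 12]]   # etc[task][machine] (illustrative)
ready = [0, 0]                         # machine ready times
mapping = []
for times in etc:
    m = ready.index(min(ready))        # machine available next
    mapping.append(m)                  # execution time plays no role here
    ready[m] += times[m]
print(mapping, max(ready))             # mapping and resulting makespan
```

Both machines stay busy, but the resulting makespan of 45 is well above the optimum of 30 for this matrix.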


Static Heuristics details

Minimum Execution Time (MET)

Completion Times of Example

• Assigns each task, in arbitrary order, to the machine with the best expected execution time for that task, regardless of that machine's availability.

• This gives each task its best machine but can cause severe load imbalance across machines. For example, if tasks 1–20 all run fastest on machine A, all twenty are assigned to it, while machine B, fastest for none, receives no tasks.
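A minimal MET sketch, using the same illustrative ETC matrix as above (in which, as in the bullet's scenario, every task happens to be fastest on the same machine):

```python
# MET sketch: each task goes to its fastest machine, ignoring load.
etc = [[20, 30], [15, 45], [10, 12]]   # every task is fastest on machine 0
mapping = [min(range(len(row)), key=row.__getitem__) for row in etc]
print(mapping)   # all tasks pile onto machine 0; machine 1 stays idle
```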


Static Heuristics details

Minimum Completion Time (MCT)

Completion Times of Example

• Assigns each task in arbitrary order to the machine with the minimum expected completion time for that task.

• Because completion time combines execution time with machine availability, tasks can end up assigned to machines that do not have the minimum execution time for them.
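An MCT sketch (illustrative ETC matrix and ready times, not the slides' example). Completion time is the machine's ready time plus the task's execution time on it:

```python
# MCT sketch: each task (in arrival order) goes to the machine with the
# minimum completion time = machine ready time + execution time.
etc = [[20, 30], [15, 45], [10, 12]]   # etc[task][machine] (illustrative)
ready = [0, 0]
mapping = []
for times in etc:
    m = min(range(len(ready)), key=lambda j: ready[j] + times[j])
    mapping.append(m)
    ready[m] += times[m]
print(mapping, max(ready))
```

Note how the last task is sent to machine 1 even though machine 0 would execute it faster, because machine 0 is already loaded; that is exactly the trade-off described above.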


Static Heuristics details

Min-Min

Completion Times of Example

• Consider all unmapped tasks together with their known minimum completion times. The task with the overall minimum completion time is selected and assigned to the corresponding machine; machine ready times are updated and the process repeats until all tasks are mapped.

• In short: over all unmapped tasks, find the minimum completion time and assign. (Similar to MCT, but all tasks are considered together rather than one at a time.)
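A Min-Min sketch on a small illustrative ETC matrix (values assumed, not from the slides):

```python
# Min-Min sketch: repeatedly pick, over ALL unmapped tasks, the
# (task, machine) pair with the overall minimum completion time.
etc = [[20, 30], [15, 45], [10, 12]]   # etc[task][machine] (illustrative)
ready = [0, 0]
mapping = {}
unmapped = set(range(len(etc)))
while unmapped:
    ct, t, j = min((ready[j] + etc[t][j], t, j)
                   for t in unmapped for j in range(len(ready)))
    mapping[t] = j                     # assign the winning task...
    ready[j] = ct                      # ...and update that machine's ready time
    unmapped.remove(t)
print(mapping, max(ready))
```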


Static Heuristics details

Max-Min

Completion Times of Example

• Consider all unmapped tasks together with their known minimum completion times. The task with the overall maximum completion time is selected and assigned to the corresponding machine; machine ready times are updated and the process repeats until all tasks are mapped.

• In short: over all unmapped tasks, find the maximum of the minimum completion times and assign.


Static Heuristics details

Duplex

Completion Times of Example

• This is a literal combination of the Min-Min and Max-Min heuristics. It performs both and uses the better solution.

• Running both exploits the conditions in which either heuristic performs well, at a negligible extra cost.
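Since Min-Min and Max-Min differ only in whether the smallest or the largest per-task minimum is chosen, Duplex can be sketched with one parameterized helper (illustrative ETC matrix, not the slides' example):

```python
# Duplex sketch: run Min-Min (select=min) and Max-Min (select=max),
# then keep whichever mapping has the smaller makespan.
def greedy_batch(etc, n_machines, select):
    ready = [0] * n_machines
    unmapped = set(range(len(etc)))
    mapping = {}
    while unmapped:
        # earliest completion time (and machine) for each unmapped task
        best = {t: min((ready[j] + etc[t][j], j) for j in range(n_machines))
                for t in unmapped}
        t = select(unmapped, key=lambda u: best[u][0])
        ct, j = best[t]
        mapping[t], ready[j] = j, ct
        unmapped.remove(t)
    return mapping, max(ready)

etc = [[20, 30], [15, 45], [10, 12], [25, 8]]   # etc[task][machine]
results = [greedy_batch(etc, 2, sel) for sel in (min, max)]
mapping, makespan = min(results, key=lambda r: r[1])
print(makespan)   # the better of the two makespans
```

On this matrix Max-Min wins (makespan 35 versus Min-Min's 38), illustrating why running both is worthwhile.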


Static Heuristics details

Genetic Algorithms (GA)

• Technique used to search large solution spaces.

• Pseudo code:

  • Initial population generation;
  • Evaluation;
  • While (stopping criteria not met)
    • Selection;
    • Crossover;
    • Mutation;
    • Evaluation;
  • Output best solution;

• Initial population: 200 randomly generated chromosomes, produced with a uniform distribution or seeded with the Min-Min solution.

• Stopping criteria: usually 1,000 total iterations, or 150 iterations with no change in the best solution.

• Evaluation – determines which chromosomes are better and keeps them for subsequent populations.

• Selection – duplicates the better chromosomes and deletes the others.

• Crossover – selects two random chromosomes and random cut point(s) within them, then swaps the sections between those points. Each chromosome is chosen with 60% probability.

• Mutation – randomly selects a chromosome, randomly selects a task within it, and reassigns that task to a new machine. Each chromosome is chosen with 40% probability.
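The loop above can be sketched as a toy GA; the population size, rates, iteration count, and ETC matrix here are scaled-down illustrations, not the 200-chromosome setup from the slides.

```python
import random

random.seed(1)                                   # reproducible run
etc = [[20, 30], [15, 45], [10, 12], [25, 8]]    # etc[task][machine]
MACHINES, TASKS = 2, len(etc)

def makespan(chrom):                             # fitness: lower is better
    ready = [0] * MACHINES
    for t, m in enumerate(chrom):
        ready[m] += etc[t][m]
    return max(ready)

# initial population: random chromosomes (task -> machine vectors)
pop = [[random.randrange(MACHINES) for _ in range(TASKS)] for _ in range(20)]
for _ in range(100):                             # stopping criterion: 100 iterations
    pop.sort(key=makespan)                       # evaluation
    pop = pop[:10] + [c[:] for c in pop[:10]]    # selection: duplicate the better half
    if random.random() < 0.6:                    # crossover: swap a random tail
        a, b = random.sample(range(len(pop)), 2)
        p = random.randrange(TASKS)
        pop[a][p:], pop[b][p:] = pop[b][p:], pop[a][p:]
    if random.random() < 0.4:                    # mutation: reassign one task
        c = random.choice(pop)
        c[random.randrange(TASKS)] = random.randrange(MACHINES)

best = min(pop, key=makespan)
print(best, makespan(best))
```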


Static Heuristics details

Simulated Annealing (SA)

• An iterative technique that considers only one possible solution (mapping) for each meta-task at a time. It uses the same chromosome representation as GA.

• Poorer solutions may be accepted, which lets the search move into other regions of the solution space.

• Uses a cooling process that makes it harder to accept a poorer solution the longer the search runs; initially a poorer solution is accepted with roughly 50% probability.

• After each mutation, the system temperature is reduced to 90% of its previous value for the next iteration.

Example: trying to find the minimum on the graph. Starting in Area 1, the nearest minimum is in Area 2, but that is not the overall lowest; by accepting a temporarily poorer solution, the search can find the true overall minimum in Area 5. This allows possibilities in other parts of the solution space to be found.
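The acceptance rule can be sketched with the standard Metropolis criterion (the slides' exact probability formula may differ). At a high temperature worse makespans are accepted freely; once the system has cooled, almost never:

```python
import math, random

def accept(old, new, temperature):
    if new <= old:
        return True                    # better (or equal) makespan: always take it
    # worse makespan: accept with a probability that shrinks as we cool
    return random.random() < math.exp((old - new) / temperature)

random.seed(0)
hot = sum(accept(100, 110, 1000.0) for _ in range(1000))
cold = sum(accept(100, 110, 1.0) for _ in range(1000))
print(hot, cold)   # worse moves: frequent while hot, rare once cooled
```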


Static Heuristics details

Genetic Simulated Annealing (GSA)

• A combination of SA and GA: it follows procedures similar to GA but uses a cooling process similar to SA for accepting or rejecting new chromosomes.

• Each iteration, the system temperature is reduced to 90% of its current value.

• This makes it progressively harder for poorer solutions to be accepted.

Example: as with SA, starting in Area 1 the nearest minimum is in Area 2, but that is not the overall lowest; accepting a temporarily poorer solution can lead to the true overall minimum in Area 5, allowing other parts of the solution space to be explored. The cooling process, however, means that after a certain time no more poorer possibilities are accepted.


Static Heuristics details

Tabu

• This search keeps track of the regions of the solution space that have already been searched, so as not to repeat a search near those areas. It uses the same chromosome representation as the GA approach.

• Beginning with a random mapping generated from a uniform distribution, perform short hops to find a local minimum, then perform long hops to see whether there is a better minimum elsewhere in the solution space.

Example: trying to find the minimum on the graph. Starting in Area 1, make short hops (one area at a time) to find the local minimum in Area 2. Make a long hop to Area 6 and find the local minimum in Area 5, which is lower than Area 2's. Make a long hop to Area 10, whose local minimum is larger than Area 5's. Conclude that Area 5 holds the minimum for the solution space.


Static Heuristics details

A*

• This search technique is based on a µ-ary tree, beginning at a root node representing the null solution. As the tree grows, nodes representing partial mappings (subsets of tasks assigned to machines) are generated, each child having one more task mapped than its parent. After generating its µ children, a parent becomes inactive. A limit on the number of active nodes bounds the maximum execution time. Children are evaluated to find the best partial mapping; the best children become parents in turn, and the process continues until the active-node limit is hit or the best complete mapping is found.
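A best-first sketch of that tree search (illustrative ETC matrix; the bound used here is simply the partial mapping's current makespan, which can only grow as tasks are added, rather than the paper's exact cost function). Without pruning, the first complete mapping popped is optimal; the active-node cap trades that guarantee for bounded time.

```python
import heapq

etc = [[20, 30], [15, 45], [10, 12]]   # etc[task][machine] (illustrative)
MACHINES, MAX_ACTIVE = 2, 50           # cap on active nodes

def bound(mapping):                    # lower bound: makespan of the partial mapping
    ready = [0] * MACHINES
    for t, m in enumerate(mapping):
        ready[m] += etc[t][m]
    return max(ready)

heap = [(0, [])]                       # root node: the null solution
best = None
while heap:
    b, mapping = heapq.heappop(heap)
    if len(mapping) == len(etc):       # first complete mapping popped
        best = (b, mapping)
        break
    for m in range(MACHINES):          # generate this node's children
        child = mapping + [m]
        heapq.heappush(heap, (bound(child), child))
    if len(heap) > MAX_ACTIVE:         # prune to limit execution time
        heap = heapq.nsmallest(MAX_ACTIVE, heap)
        heapq.heapify(heap)

print(best)
```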


Types of Dynamic Mappings

Dynamic mapping is used when tasks arrive over time, so the mapper must account for a changing situation while mapping.

• Immediate Mode Mappings

• Minimum Completion Time

• Minimum Execution Time

• Switching Algorithm

• K-Percent Best

• Batch Mode Mappings

• Min-Min

• Max-Min

• Sufferage

• Notes on Batch Mode Mapping


Dynamic Immediate Mode Mapping

Minimum Completion Time

Minimum Execution Time

• Assigns each task to the machine that results in that task's earliest completion time. This fast greedy heuristic is considered a benchmark for immediate mode.

• Similar to the static MCT, but it must react to a changing situation rather than work from a fixed task list.

• Takes only O(m) time to map a given task.

• Assigns each task to the machine that performs that task's computation in the least amount of time. Also known as Limited Best Assignment.

• Does not consider machine ready times, which can cause a severe imbalance in loads across the machines.

• It is a very simple heuristic needing only O(m) time to find the machine that has the minimum execution time.


Dynamic Immediate Mode Mapping

Switching Algorithm

K-Percent Best

• This uses the MCT and MET heuristics in a cyclic fashion depending on the load distribution across the machines. Takes O(m) time.

• The idea is to use MET to push tasks onto their fastest machines, then switch to MCT to smooth out the load across the machines.

• A load-balance index is needed to decide when to switch between the two, for example via lower and upper threshold limits.

• This only considers a subset of machines while mapping a task. The task is assigned to a machine that provides the earliest completion time in that subset.

• The purpose is to avoid putting the current task onto a machine that might be more suitable for a yet-to-come task ("foresight").

• For each task, O(m log m) time is spent ranking the machines for the subset and O(m) in assigning the task to a machine, so KPB takes O(m log m) overall.
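A single KPB step might look like this (machine counts, times, and k are illustrative):

```python
# K-Percent Best sketch for one task: rank machines by execution time,
# keep the best k percent, then pick by earliest completion time.
exec_times = [20, 35, 12, 50, 18]   # this task's execution time per machine
ready = [40, 0, 100, 5, 30]         # machine ready times
k = 0.6                             # keep the best 60% of machines

n = max(1, int(k * len(exec_times)))
subset = sorted(range(len(exec_times)), key=lambda j: exec_times[j])[:n]
m = min(subset, key=lambda j: ready[j] + exec_times[j])
print(m)   # chosen machine
```

With k = 100% this degenerates to MCT, and with a one-machine subset to MET, which is the spectrum KPB is designed to sit between.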


Dynamic Immediate Mode Mapping

Opportunistic Load Balancing (OLB)

• Assigns a task to the machine that becomes available next, without considering the execution time of the task on that machine. If multiple machines become available at the same time, then one is arbitrarily chosen.

• Depending on the implementation the mapper may need to examine all m machines to find the one that will be available next. Therefore it takes O(m) to find the assignment.


Dynamic Batch Mode Mapping

Min-Min

• Begins by scheduling the tasks that change the machines' expected ready-time status by the least amount, letting each such task finish at its earliest completion time.

• The percentage of tasks assigned to their best machine is higher in Min-Min than in the other batch-mode heuristics.

• Takes O(S^2 m) to complete, where S is the average meta-task size; because it is iterative, it checks all tasks against all machines each time.

Max-Min

• Once the machine that provides the earliest completion time is found for every task, the task with the maximum earliest completion time is selected and assigned to the corresponding machine.

• This has the same complexity as Min-Min and takes O(S^2 m).

• It is likely to do better than Min-Min when there are many more short tasks than long tasks, because the short tasks can be filled in around the long tasks to even out the system.


Dynamic Batch Mode Mapping (cont.)

Sufferage

Other notes on Batch mode

• Based on the idea of assigning each task to the machine that would "suffer" most in terms of expected completion time if that task were not assigned to it. An example with two machines and two tasks:

• M1 does t1 = 20 and t2 = 50

• M2 does t1 = 25 and t2 = 90

• If t1 goes to M1 and t2 to M2, the machines finish at 20 and 90 (makespan 90); if t2 goes to M1 and t1 to M2, they finish at 50 and 25 (makespan 50).

• By not giving t1 its minimum-time machine, the overall result is better: t2 would have "suffered" far more.

• The complexity of this heuristic makes its total time to completion O(wSm), where 1 ≤ w ≤ S.
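The two-task example above can be worked through in code; a task's sufferage is its second-earliest completion time minus its earliest:

```python
# Sufferage sketch on the slide's two-task, two-machine example.
etc = {('t1', 'M1'): 20, ('t1', 'M2'): 25,
       ('t2', 'M1'): 50, ('t2', 'M2'): 90}
machines = ['M1', 'M2']
ready = {m: 0 for m in machines}

def sufferage(t):
    cts = sorted(ready[m] + etc[(t, m)] for m in machines)
    return cts[1] - cts[0]             # how much worse the 2nd-best machine is

assignment = {}
unmapped = {'t1', 't2'}
while unmapped:
    t = max(unmapped, key=sufferage)   # pick the task that would suffer most
    m = min(machines, key=lambda j: ready[j] + etc[(t, j)])
    assignment[t] = m
    ready[m] += etc[(t, m)]
    unmapped.remove(t)
print(assignment)   # t2 (sufferage 40) claims M1; t1 (sufferage 5) takes M2
```

The resulting makespan is 50, versus 90 if t1 had greedily taken its own best machine first.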

• For batch mode, two mapping strategies are used:

• Regular time interval – meta-tasks are mapped at fixed intervals (for example, every ten seconds), and redundant mapping is avoided.

• Fixed count – the meta-task M is mapped when one of the following conditions is met:

• an arriving task makes M larger than or equal to a predetermined number K, or

• all tasks have arrived, a task finishes, and the number of tasks yet to begin is larger than or equal to K.


Terminology

• Makespan – the total completion time of a schedule, i.e., the time at which the last machine finishes; mapping heuristics aim to minimize it.

• Fast greedy – makes the locally optimal choice at each stage and doesn't look back at previous results.

• O(m) time – Big O notation; it describes how the size of the input affects an algorithm's resource usage.

Mapping heuristics are like Tetris: fit the best task (piece) into the best machine (slot) for the optimal result (score). Any black spaces are wasted computational resources.


References

• M. Maheswaran, S. Ali, H. J. Siegel, D. A. Hensgen, and R. F. Freund, "Dynamic Mapping of a Class of Independent Tasks onto Heterogeneous Computing Systems," Journal of Parallel and Distributed Computing, vol. 59, pp. 107–131, 1999.

• T. D. Braun, H. J. Siegel, N. Beck, L. L. Boloni, M. Maheswaran, A. I. Reuther, J. P. Robertson, M. D. Theys, and B. Yao, "A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Computing Systems," Journal of Parallel and Distributed Computing, vol. 61, pp. 810–837, 2001.


Howard Jay Siegel

• He is a professor in the School of Electrical and Computer Engineering at Colorado State University. He is a fellow of the IEEE and a fellow of the ACM. He received two BS degrees from MIT and an MA, MSE, and Ph.D. from Princeton University. Professor Siegel has coauthored over 280 technical papers, has co-edited seven volumes, and wrote the book Interconnection Networks for Large-Scale Parallel Processing. He was a Coeditor-in-Chief of the Journal of Parallel and Distributed Computing and was on the Editorial Boards of the IEEE Transactions on Parallel and Distributed Systems and the IEEE Transactions on Computers. He was Program Chair/Co-Chair of three conferences, General Chair/Co-Chair of four conferences, and Chair/Co-Chair of four workshops. He is an international keynote speaker and tutorial lecturer and a consultant for government and industry.

End of Presentation