
Presentation Transcript


  1. APPLYING LEARNING CLASSIFIER SYSTEMS for MULTIPROCESSOR SCHEDULING PROBLEM Franciszek Seredynski, Damian Kurdej Polish Academy of Sciences and Polish-Japanese Institute of Information Technology

  2. Motivations • New scheduling algorithms are proposed nearly every day • In the light of the NP-completeness of the scheduling problem and the no-free-lunch theorem concerning metaheuristics, this situation may last forever, or at least until quantum computers appear • Can we use the knowledge gained from experience with already known scheduling algorithms (a hyperheuristics approach)? • We will use GA-based Learning Classifier Systems (LCS) to extract some of this knowledge and use it in the scheduling process

  3. Plan of the presentation • Multiprocessor Scheduling Problem • The idea of LCS • The concept of LCS-based scheduling • Experimental results • Conclusions

  4. Multiprocessor scheduling problem • [Figure: examples of a precedence task graph (a) and a system graph (b)] • Multiprocessor system: an undirected, unweighted graph Gs=(Vs,Es), called a system graph • Parallel program: a weighted, directed, acyclic graph GP=<VP,EP>, called a precedence task graph or a program graph • The purpose of scheduling is to distribute the tasks among the processors in such a way that the precedence constraints are preserved and the response time T (the total execution time) is minimized: T = f(allocation, scheduling_policy = const) (see the sketch below)
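Below is a minimal sketch (not the authors' code; the toy instance and the simple execution policy are illustrative assumptions) of the model on slide 4: a weighted precedence task graph, a fixed allocation of tasks to processors, and the response time T obtained when the scheduling policy is held constant.

```python
# Program graph: task -> computation weight, edge -> communication weight.
# The instance below is a toy example, not the one in the figure.
from collections import defaultdict

comp = {"t1": 2, "t2": 3, "t3": 5, "t4": 4}
comm = {("t1", "t2"): 3, ("t1", "t3"): 5, ("t2", "t4"): 5, ("t3", "t4"): 1}
preds = defaultdict(list)
for (u, v) in comm:
    preds[v].append(u)

def response_time(allocation, order):
    """Execute tasks in the given topological order; a task starts when its
    processor is free and all messages from its predecessors have arrived
    (communication costs nothing between tasks on the same processor)."""
    proc_free = defaultdict(float)   # processor -> time it becomes idle
    finish = {}                      # task -> finish time
    for t in order:
        ready = 0.0
        for p in preds[t]:
            delay = 0 if allocation[p] == allocation[t] else comm[(p, t)]
            ready = max(ready, finish[p] + delay)
        start = max(ready, proc_free[allocation[t]])
        finish[t] = start + comp[t]
        proc_free[allocation[t]] = finish[t]
    return max(finish.values())      # response time T

# T = f(allocation) with the scheduling policy held constant.
alloc = {"t1": "P1", "t2": "P1", "t3": "P2", "t4": "P1"}
print(response_time(alloc, ["t1", "t2", "t3", "t4"]))
```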

  5. Problem formulation • Given a set of program graph instances • Given a multiprocessor system • Given a number of scheduling algorithms (heuristics) solving instances of the scheduling problem with some efficiency • Is it possible to train an LCS system to match a given instance of the scheduling problem with the scheduling algorithm that is best for it (minimizes the total execution time) from the set of scheduling algorithms?

  6. Idea of GA-based Learning Classifier System (LCS) • [Diagram: a Learning Classifier System, composed of a decision system, an evaluation system and a system for discovery of new rules, interacting with the environment]

  7. Idea of Learning Classifier System (LCS) • [Diagram: the environment sends a state or message, e.g. 10100, to the LCS]

  8. Idea of Learning Classifier System (LCS) • [Diagram: given the environment state, e.g. 10100, the LCS responds with an action, e.g. "turn right"]

  9. Idea of Learning Classifier System (LCS) • [Diagram: the environment returns a reward, e.g. 120, for the action taken; the interaction loop is sketched below]
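The interaction loop on slides 6-9 can be summarized with a small sketch; the lcs and env objects and their methods are hypothetical placeholders for the decision, evaluation and rule-discovery components, not an API from the paper.

```python
def run_episode(lcs, env, steps=100):
    """State -> action -> reward loop between an LCS and its environment."""
    state = env.observe()                # e.g. the bit string "10100"
    for _ in range(steps):
        action = lcs.decide(state)       # decision system picks an action
        reward, state = env.step(action) # e.g. reward 120 and the next message
        lcs.update(reward)               # evaluation system credits classifiers
        lcs.discover()                   # GA occasionally creates new rules
```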

  10. Classifier (rule) in classical LCS • Example: #011 : 01 : 43 (condition C : action A : strength S) • The structure of a classifier: condition part C, action A, strength S • The strength S is used when a classifier is selected from a set of classifiers to perform an action, and when the GA creates new rules

  11. Classifier in XCS • Example: C = 010##0#####, a = 0, p = 1000, ε = 2.504, f = 0.77, exp = 499, ts = 19924, as = 146.76, num = 109 • C: condition part • a: action • p: expected reward (prediction) • ε: prediction error • f: fitness • exp: experience of the classifier • ts: the most recent time when the GA was applied to this classifier • as: estimated size of the action sets [A] in which the classifier appears • num: numerosity of the classifier (a data-structure sketch follows below)
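A minimal sketch of such a classifier record follows; the class name, the field names and the matches helper are illustrative, not taken from the paper. A classical LCS rule keeps only (condition, action, strength); XCS replaces strength with prediction, prediction error and fitness, plus bookkeeping fields for the GA.

```python
from dataclasses import dataclass

@dataclass
class XCSClassifier:
    condition: str          # ternary string over {0, 1, #}, e.g. "010##0#####"
    action: int             # e.g. 0
    prediction: float       # p   - expected reward
    error: float            # ε   - prediction error
    fitness: float          # f   - accuracy-based fitness
    experience: int         # exp - number of updates the classifier received
    time_stamp: int         # ts  - last time the GA acted in its action set
    action_set_size: float  # as  - estimated size of action sets it joins
    numerosity: int         # num - number of identical (macro)classifier copies

    def matches(self, state: str) -> bool:
        """A '#' matches any bit; other positions must be equal."""
        return all(c == "#" or c == s for c, s in zip(self.condition, state))

example = XCSClassifier("010##0#####", 0, 1000, 2.504, 0.77, 499, 19924, 146.76, 109)
print(example.matches("01010001100"))  # True
```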

  12. XCS performance cycle • [Figure: the XCS architecture interacting with the environment] • The detector reads the environment state σ (e.g. 0011); classifiers from the population [P] whose conditions match it form the match set [M] (classifiers are shown as C : a : p : ε : f, e.g. #011:01:43:.01:99) • For each action a prediction array PA is built (e.g. 37.49 for action 01, 12.75 for action 11); the selected action is sent to the environment through the effector • The matching classifiers advocating the chosen action form the action set [A]; the reward ρ returned by the environment reinforces the previous action set [A]-1 • Covering, the GA and subsumption create, evolve and compress classifiers (a code sketch of one cycle follows below)
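The following sketch walks through one such cycle under simplifying assumptions (greedy action selection, no covering, no GA or subsumption); the learning-rate and discount constants are assumed values, and the XCSClassifier class from the previous sketch is reused.

```python
from collections import defaultdict

BETA, GAMMA = 0.2, 0.71   # assumed learning rate and discount factor

def decision_cycle(population, state, env_step, prev_action_set=None, prev_reward=0.0):
    # 1. Match set [M]: classifiers whose condition matches the state.
    match_set = [cl for cl in population if cl.matches(state)]
    # (A full XCS would invoke covering here if the match set is too small.)

    # 2. Prediction array PA: fitness-weighted average prediction per action.
    num, den = defaultdict(float), defaultdict(float)
    for cl in match_set:
        num[cl.action] += cl.prediction * cl.fitness
        den[cl.action] += cl.fitness
    pa = {a: num[a] / den[a] for a in num}

    # 3. Select an action (greedy here; XCS alternates with exploration)
    #    and form the action set [A].
    action = max(pa, key=pa.get)
    action_set = [cl for cl in match_set if cl.action == action]

    # 4. Execute and reinforce: the previous action set [A]-1 is updated with
    #    the previous reward plus the discounted best current prediction.
    reward, next_state = env_step(action)
    if prev_action_set:
        target = prev_reward + GAMMA * max(pa.values())
        for cl in prev_action_set:
            cl.error += BETA * (abs(target - cl.prediction) - cl.error)
            cl.prediction += BETA * (target - cl.prediction)
    # (Fitness update, GA invocation and subsumption are omitted.)
    return action_set, reward, next_state
```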

  13. Features of XCS • Creates a population of classifiers • Processes messages received from the environment • Applies a GA to evolve classifiers • Sends actions to the environment • Learns, generalizes and modifies the set of classifiers

  14. Our problem • Given 200 program graph instances created on the basis of a 15-node tree graph: the training set • Each instance is a tree with different random task and communication weights • A two-processor system is considered • Given 5 scheduling heuristics • We want to train the LCS system to select, for each given instance of the scheduling problem, the scheduling heuristic that provides the best possible solution (a sketch of generating such instances follows below)
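A small sketch of how such a training set could be generated; the tree-shape generator and the weight ranges are assumptions, not the authors' benchmark.

```python
import random

def make_tree_shape(n_tasks=15, seed=0):
    """One fixed tree: every task except the root gets a parent among the
    earlier tasks. The shape is shared by all instances."""
    rng = random.Random(seed)
    return [(rng.randrange(t), t) for t in range(1, n_tasks)]

def weighted_instance(edges, n_tasks=15, max_weight=10, rng=None):
    """Same tree shape, fresh random task and communication weights."""
    rng = rng or random.Random()
    comp = {t: rng.randint(1, max_weight) for t in range(n_tasks)}  # task weights
    comm = {e: rng.randint(1, max_weight) for e in edges}           # edge weights
    return comp, comm

edges = make_tree_shape()
rng = random.Random(42)
training_set = [weighted_instance(edges, rng=rng) for _ in range(200)]
```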

  15. Set of list algorithms • ISH (Insertion Scheduling Heuristic) • MCP (Modified Critical Path) • STF (Shortest Time First) • LTF (Longest Time First) • the authors' own list algorithm, in which the priority of a task depends on the size of its subgraph • We know how each algorithm performs (its response time) on the set of scheduling instances (a generic list-scheduling sketch follows below)
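The sketch below shows a generic list scheduler in which the priority function is pluggable; it is an assumed simplification that only illustrates how STF and LTF differ, not the published implementations of ISH or MCP.

```python
import heapq
from collections import defaultdict

def list_schedule(comp, comm, processors, priority):
    """Greedy list scheduling: repeatedly pick the ready task with the best
    priority and place it on the processor where it can start earliest."""
    preds, succs = defaultdict(list), defaultdict(list)
    for (u, v) in comm:
        preds[v].append(u)
        succs[u].append(v)
    indeg = {t: len(preds[t]) for t in comp}
    ready = [(priority(t, comp), t) for t in comp if indeg[t] == 0]
    heapq.heapify(ready)
    proc_free = {p: 0.0 for p in processors}
    finish, place = {}, {}
    while ready:
        _, t = heapq.heappop(ready)
        # Earliest start on each processor, accounting for communication.
        best_p, best_start = None, None
        for p in processors:
            arrive = max([finish[u] + (0 if place[u] == p else comm[(u, t)])
                          for u in preds[t]] or [0.0])
            start = max(arrive, proc_free[p])
            if best_start is None or start < best_start:
                best_p, best_start = p, start
        place[t], finish[t] = best_p, best_start + comp[t]
        proc_free[best_p] = finish[t]
        for v in succs[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                heapq.heappush(ready, (priority(v, comp), v))
    return max(finish.values())   # response time of the produced schedule

stf = lambda t, comp: comp[t]     # shortest time first
ltf = lambda t, comp: -comp[t]    # longest time first
# e.g.: T = list_schedule(comp, comm, ["P1", "P2"], stf)
```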

  16. XCS-based scheduling system • XCS receives information about an instance of the scheduling problem • [Diagram: (1) program graph + system graph -> XCS]

  17. XCS-based scheduling system • XCS receives information about an instance of the scheduling problem • XCS selects the best available heuristic • [Diagram: (1) program graph + system graph -> XCS, (2) XCS -> scheduling algorithm]

  18. XCS-based scheduling system • XCS receives information about an instance of the scheduling problem • XCS selects the best heuristic from the set of available heuristics • The program graph and the system graph become the input data of the scheduling algorithm • [Diagram: (1) program graph + system graph -> XCS, (2) XCS -> scheduling algorithm, (3) program graph + system graph -> scheduling algorithm]

  19. XCS-based scheduling system • XCS receives information about an instance of the scheduling problem • XCS selects the best heuristic from the set of available heuristics • The program graph and the system graph become the input data of the scheduling algorithm • The scheduling algorithm delivers a solution (see the wiring sketch below) • [Diagram: (1) program graph + system graph -> XCS, (2) XCS -> scheduling algorithm, (3) program graph + system graph -> scheduling algorithm, (4) scheduling algorithm -> Gantt diagram]
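A hypothetical wiring sketch of this pipeline, assuming the list scheduler and the signature encoder sketched near slides 15 and 20, and an xcs object exposing a decide method; none of these names come from the paper.

```python
HEURISTIC_NAMES = {0: "ISH", 1: "MCP", 2: "STF", 3: "LTF", 4: "own"}

def schedule_with_xcs(xcs, instance, processors, heuristics):
    comp, comm = instance
    signature = encode_signature(comp, comm)         # 43-bit message (slide 20)
    action = xcs.decide(signature)                   # index of the chosen heuristic
    T = heuristics[action](comp, comm, processors)   # run it; T = response time
    return HEURISTIC_NAMES[action], T                # heuristic used + Gantt length
```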

  20. Program graph signature: the basic information concerning the program graph • LCS receives from the environment a signature of the program graph • The signature encodes some static properties of the program graph • comm/comp: the averaged communication-to-computation time ratio for the program graph (3 bits) • Information about the distribution of tasks with given computational requirements (12 bits) • Information about the distribution of communication time requirements between tasks (12 bits) • Information about the critical path, based on evaluation of comp/comm (16 bits) • The length of the signature: 43 bits (a layout sketch follows below)
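A layout sketch of such a signature; the bucket boundaries and the histogram scheme are assumptions, and only the 3 + 12 + 12 + 16 = 43-bit split comes from the slide.

```python
def encode_signature(comp, comm):
    # 3 bits: averaged communication-to-computation ratio, bucketed
    # (thresholds are assumed, not taken from the paper).
    ratio = (sum(comm.values()) / len(comm)) / (sum(comp.values()) / len(comp))
    thresholds = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
    bits = format(sum(1 for t in thresholds if ratio > t), "03b")
    # 12 bits each: occupancy histogram of task / communication weights
    # (bit i set when some weight falls into the i-th range) - an assumption.
    for weights in (comp.values(), comm.values()):
        lo, hi = min(weights), max(weights) + 1e-9
        step = (hi - lo) / 12
        occupied = {min(int((w - lo) / step), 11) for w in weights}
        bits += "".join("1" if i in occupied else "0" for i in range(12))
    # 16 bits: critical-path comp/comm coding (worked example after slide 22);
    # a placeholder here.
    bits += "0" * 16
    return bits   # 43 bits in total, used as the XCS environment message
```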

  21. Distribution of tasks with given computational requirements / distribution of communication time requirements [figure]

  22. Coding the information concerning the critical path, based on evaluation of comp/comm • Computing the ratios on the critical path: ratios[0] = 1/4, ratios[1] = 5/3, ratios[2] = 1/3 • Normalization: ratios[0] = 3/27, coded as 01; ratios[1] = 20/27, coded as 11; ratios[2] = 4/27, coded as 01 • Coded signal concerning the critical path: 0111010000000000
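The worked example can be reproduced with a short sketch; the 2-bit level boundaries and the Gray-style codes are an assumption chosen so that the slide's numbers come out exactly.

```python
from fractions import Fraction as F

def two_bits(x):
    """Map a normalized ratio to one of four 2-bit codes (assumed levels)."""
    if x < F(1, 10):
        return "00"
    if x < F(1, 2):
        return "01"
    if x < F(9, 10):
        return "11"
    return "10"

def encode_critical_path(ratios, width=16):
    total = sum(ratios)
    normalized = [r / total for r in ratios]         # 3/27, 20/27, 4/27
    bits = "".join(two_bits(x) for x in normalized)  # "01" + "11" + "01"
    return bits.ljust(width, "0")                    # pad to the 16-bit field

ratios = [F(1, 4), F(5, 3), F(1, 3)]                 # comp/comm ratios on the path
print(encode_critical_path(ratios))                  # -> 0111010000000000
```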

  23. Training LCS: number of correctly matched scheduling algorithms to instances as a function of the number of training cycles

  24. Training LCS: population size of rules as a function of the number of cycles

  25. Training: summary of experiments • The untrained system correctly matched heuristics with scheduling instances in 40-50% of cases • The system was able to learn to match heuristics to instances correctly in 100% of cases • This means that information about the matching process was extracted during the learning process • The classifiers contain this information, and during the learning process a generalization of the rules was observed • Learning is a costly process, but the gained information can be used in scheduling

  26. LCS-based scheduling system: normal operation mode • The testing set is obtained by modifying instances (program graphs) from the training set • All computation and communication weights were scaled by 10 • Next, the weights of k tasks or communications were changed by a constant d (a sketch follows below)
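A sketch of this perturbation; the random choice of which weights to change is an assumption, the scaling by 10 and the change by d follow the slide.

```python
import random

def perturb_instance(instance, k, d, rng=None):
    """Scale all weights by 10, then change k randomly chosen task or
    communication weights by the constant d."""
    rng = rng or random.Random()
    comp, comm = instance
    comp = {t: w * 10 for t, w in comp.items()}
    comm = {e: w * 10 for e, w in comm.items()}
    keys = [("comp", t) for t in comp] + [("comm", e) for e in comm]
    for kind, key in rng.sample(keys, k):
        if kind == "comp":
            comp[key] += d
        else:
            comm[key] += d
    return comp, comm

# e.g. the k = 1, d = 1 experiment from slide 27:
# testing_set = [perturb_instance(inst, k=1, d=1) for inst in training_set]
```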

  27. Experiment: k=1, d=1

  28. Experiment: k=2, d=2

  29. Experiment: k=3, d=3

  30. Normal operation mode: summary of experiments

  31. Conclusions • LCS has been proposed to learn the optimal matching of scheduling algorithms to instances • Instances were represented by specially designed signatures • During the learning process the knowledge about the matching was extracted in the shape of LCS rules, and then generalized • Creating the signatures is one of the most crucial issues in the proposed approach • The performance of the system also depends on many parameters of the LCS • We believe that the encouraging results of the experiments open new possibilities in developing hyperheuristics
