
Extensions of the Basic Model


  1. Extensions of the Basic Model Chapter 6 Elements of Sequencing and Scheduling by Kenneth R. Baker Byung-Hyun Ha R1

  2. Outline • Introduction • Nonsimultaneous arrivals • Minimizing the makespan • Minimizing maximum tardiness • Other measures of performance • Dependent jobs • Minimizing maximum tardiness • Minimizing total flowtime with strings • Minimizing total flowtime with parallel chains • Sequence-dependent setup times • Dynamic programming solutions • Branch and bound solutions • Heuristic solutions • Summary

  3. Introduction • Basic single-machine model • Assumptions C1. A set of n independent, single-operation jobs is available simultaneously (at time zero). C2. Setup times for the jobs are independent of job sequence and are included in processing times. C3. Job descriptors are deterministic and known in advance. C4. One machine is continuously available and never kept idle while work is waiting. C5. Once an operation begins, it proceeds without interruption. • An opportunity to study a variety of scheduling criteria as well as a number of solution techniques • Possible generalizations (in this chapter) • C1 by nonsimultaneous job arrivals • C1 by dependent job sets • C2 by sequence-dependent setups • C3 by use of probabilistic methods

  4. Nonsimultaneous Arrivals • Static version of single-machine problem • All jobs are simultaneously available for processing • e.g., 1 || ΣCj , 1 || ΣwjUj • Dynamic version • Allowing different ready times (rj) • Examples of scheduling with ready times • Basic model (C4 and C5 retained) -- T1* = 3 • Inserted idle time allowed -- 1 | rj | ΣTj , T2* = 1 • Job preemption allowed (preempt-resume mode) -- 1 | rj, prmp | ΣTj , T3* = 0 [Gantt charts of the three example schedules omitted]

  5. Nonsimultaneous Arrivals • Scheduling with preemption allowed • Preempt-resume mode • Schedules without inserted idle time constitute a dominant set for regular measures. • Properties associated with transitive rules are often essentially unchanged • Dispatching as decision making is possible for optimality -- no look-ahead needed • Example -- 1 | rj, prmp | Tmax • Keep the machine assigned to the available job with the earliest due date • Example -- 1 | rj, prmp | ΣCj • Keep the machine assigned to the available job with the minimum remaining processing time (SRPT: shortest remaining processing time) • Preempt-repeat mode • A job must be restarted each time it is interrupted • Schedules without preemption (permutation schedules) constitute a dominant set • Inserted idle time is determined uniquely by the choice of permutation • Need to use look-ahead information (which makes the solution approach complex)
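The SRPT rule for 1 | rj, prmp | ΣCj can be sketched as an event-driven dispatcher: the machine always works on the released job with the least remaining work, and a preemption decision is only needed when a new job arrives. A minimal sketch (the function name and the (rj, pj) input format are illustrative, not from the text):

```python
import heapq

def srpt_total_completion(jobs):
    """Preempt-resume SRPT for 1|rj,prmp|sum Cj.
    jobs: list of (r_j, p_j) pairs. Returns the total completion time."""
    n = len(jobs)
    order = sorted(range(n), key=lambda j: jobs[j][0])  # jobs by ready time
    remaining = [p for _, p in jobs]
    heap, t, i, total, done = [], 0, 0, 0, 0            # heap of released jobs
    while done < n:
        while i < n and jobs[order[i]][0] <= t:         # release arrivals
            j = order[i]
            heapq.heappush(heap, (remaining[j], j))
            i += 1
        if not heap:                                    # idle until next arrival
            t = jobs[order[i]][0]
            continue
        rem, j = heapq.heappop(heap)                    # shortest remaining work
        next_release = jobs[order[i]][0] if i < n else float("inf")
        run = min(rem, next_release - t)                # run until done or arrival
        t += run
        rem -= run
        if rem == 0:
            total += t                                  # job j completes at t
            done += 1
        else:
            heapq.heappush(heap, (rem, j))              # preempted, resume later
    return total
```

Because the measure is regular and preemption is preempt-resume, checking for preemption only at release epochs loses nothing.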

  6. Nonsimultaneous Arrivals • Minimizing the makespan • 1 | rj | Cmax • Makespan -- denoted by M or Cmax • Related to throughput of the schedule (Cmax is constant in the basic model 1 || Cmax) • M is minimized by the Earliest Ready Time (ERT) rule • A nondelay dispatching procedure • Yielding blocks of jobs • Generalization of the problem • Each job with a delivery time qj • Delivery takes place immediately after the job completes, in parallel • Makespan includes delivery time, e.g., M = C2 + q2 when job 2's tail finishes last • Symmetric property -- equivalent to the reversed problem [diagram of jobs 1, 2, 3 with delivery tails omitted]

  7. Nonsimultaneous Arrivals • Generalization of the problem (cont’d) • Head-body-tail problem • Job specification with triples (rj, pj, qj) • NP-hard -- equivalent to 1 | rj | Lmax (discussed later) • A good heuristic solution • Nondelay dispatching procedure that always selects the available job with the largest tail qj • ALGORITHM 1 -- The Largest Tail (LT) Procedure 1. Initially, let t = 0. 2. If there are no unscheduled jobs available at time t, set t equal to the minimum ready time among unscheduled jobs; otherwise, proceed. 3. Find job j with the largest qj among unscheduled jobs available at time t. Schedule j to begin at time t. 4. Increase t by pj . If all n jobs are scheduled, stop; otherwise return to Step 2. • Exercise -- head-body-tail problem with 5 jobs • Algorithm 1 should be executed twice (why?)
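Algorithm 1 translates directly into a short dispatching loop. A sketch, assuming jobs are given as (rj, pj, qj) triples (function and variable names are my own):

```python
def largest_tail(jobs):
    """ALGORITHM 1 (Largest Tail) for the head-body-tail problem.
    jobs: list of (r_j, p_j, q_j). Returns (sequence, makespan M),
    where M includes the delivery tails."""
    unscheduled = set(range(len(jobs)))
    t, seq, M = 0, [], 0                                 # Step 1
    while unscheduled:
        available = [j for j in unscheduled if jobs[j][0] <= t]
        if not available:                                # Step 2: advance clock
            t = min(jobs[j][0] for j in unscheduled)
            continue
        j = max(available, key=lambda j: jobs[j][2])     # Step 3: largest tail
        seq.append(j)
        t += jobs[j][1]                                  # Step 4
        M = max(M, t + jobs[j][2])                       # completion + delivery
        unscheduled.remove(j)
    return seq, M
```

Being a heuristic, LT need not find the optimal makespan; the optimality test of the next slide tells us when it has.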

  8. Nonsimultaneous Arrivals • Generalization of the problem (cont’d) • Optimality condition • Makespan M = ri + Σj=i..k pj + qk , • for some job i that initiates a block and for some job k in the block called the critical job (jobs are renumbered according to sequence) • If qk ≥ qj for all jobs j from i to k, M is optimal. (Theorem 1, discussed later) • This is a sufficient condition, i.e., if it fails, the schedule may or may not be optimal. [diagram of a block from job i to critical job k, with M = Ck + qk, omitted]

  9. Nonsimultaneous Arrivals • Minimizing maximum tardiness • 1 | rj | Lmax • Strongly NP-hard (p. 44 of Pinedo, 2008) • 3-PARTITION reduces to 1 | rj | Lmax (cf. EDD solves 1 || Lmax) • Equivalence to head-body-tail problem • Let qj = D – dj , where D = max{dj} • min Lmax = max{Cj – dj} = max{Cj – (D – qj)} = max{Cj + qj} – D • Theorem 1 • In the dynamic Lmax-problem, a nondelay implementation of the EDD rule yields Lmax = ri + Σj=i..k pj – dk for some job i that initiates a block, and for some job k in the same block, where the jobs are numbered in order of appearance in the schedule. If dk ≤ dj for all jobs j from i to k, then Lmax is optimal. • Proof of Theorem 1 • Relaxation by considering only the jobs from i to k, with all their ready times set to ri
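The nondelay EDD rule in Theorem 1 is the same dispatching skeleton with a different priority. A minimal sketch (names and the (rj, pj, dj) format are illustrative):

```python
def nondelay_edd(jobs):
    """Nondelay EDD dispatching for 1|rj|Lmax (the Theorem 1 schedule).
    jobs: list of (r_j, p_j, d_j). Returns (sequence, Lmax)."""
    unscheduled = set(range(len(jobs)))
    t, seq, lmax = 0, [], float("-inf")
    while unscheduled:
        available = [j for j in unscheduled if jobs[j][0] <= t]
        if not available:                                # start a new block
            t = min(jobs[j][0] for j in unscheduled)
            continue
        j = min(available, key=lambda j: jobs[j][2])     # earliest due date
        t += jobs[j][1]
        lmax = max(lmax, t - jobs[j][2])                 # lateness C_j - d_j
        seq.append(j)
        unscheduled.remove(j)
    return seq, lmax
```

Checking the Theorem 1 condition (dk ≤ dj for all j from the block's first job i to the critical job k) on the returned schedule certifies optimality when it holds.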

  10. Nonsimultaneous Arrivals • Other measures of performance • Mostly NP-hard, unless in preempt-resume mode • e.g., 1 | rj | Lmax , 1 | rj | ΣCj , 1 | rj | ΣUj (then, how about 1 | rj | ΣTj?) • Preempt-resume mode problem as a lower bound for branch and bound • Not clear in the case of 1 | rj | ΣUj or 1 | rj | ΣTj • 1 | rj | Lmax • Theorem 2 • In the dynamic Lmax-problem, suppose that the nondelay implementation of EDD yields a sequence of the jobs in EDD order. Then this nondelay schedule is optimal. • Proof of Theorem 2 • Relaxation that sets all ready times to zero • Theorem 3 • In the dynamic Lmax-problem, if the ready times and due dates are agreeable, then the nondelay implementation of EDD is optimal.

  11. Nonsimultaneous Arrivals • Other measures of performance (cont’d) • 1 | rj | ΣCj • Theorem 4 • In the dynamic F-problem, if the ready times and processing times are agreeable, then the nondelay implementation of SPT is optimal. • Some heuristics for general cases • Nondelay adaptation of SPT • First Off First On (FOFO) rule • Exploiting look-ahead information (so, not pure dispatching) • Priority to the job with the smallest sum of earliest start time (rj) and earliest finish time (rj + pj), i.e., smallest (2rj + pj) • 1 | rj | ΣTj • Theorem 5 • In the dynamic T-problem, if the ready times, processing times and due dates are all agreeable, the nondelay implementation of MDD is optimal. • 1 | rj | ΣUj • ALGORITHM 2 -- Minimizing ΣU (Dynamic Version) • Optimal in the case of agreeable ready times and due dates
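The nondelay adaptation of SPT mentioned above is again the standard dispatching loop; a sketch (the (rj, pj) format and names are my own). Note it is only a heuristic for 1 | rj | ΣCj, so it can be beaten by schedules that insert idle time:

```python
def nondelay_spt(jobs):
    """Nondelay SPT dispatching heuristic for 1|rj|sum Cj.
    jobs: list of (r_j, p_j). Returns (sequence, sum of completion times)."""
    unscheduled = set(range(len(jobs)))
    t, seq, total = 0, [], 0
    while unscheduled:
        available = [j for j in unscheduled if jobs[j][0] <= t]
        if not available:                                # no idle time inserted
            t = min(jobs[j][0] for j in unscheduled)     # beyond forced idleness
            continue
        j = min(available, key=lambda j: jobs[j][1])     # shortest available job
        t += jobs[j][1]
        total += t
        seq.append(j)
        unscheduled.remove(j)
    return seq, total
```

In the test instance below, the rule commits to the long job at t = 0 (ΣCj = 16), whereas waiting one unit for the two short jobs would give ΣCj = 14, illustrating why look-ahead rules such as FOFO can help.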

  12. Dependent Jobs • Constraints in scheduling • Machine capacity (in basic model) + technological restrictions -- specified by the admissible sequences of pairs of jobs • Reduction of the set of feasible solutions ⇒ dominance between jobs • Precedence constraint, i → j • Job j is not permitted to begin until job i is complete • Job i is a predecessor of job j; job j is a successor of job i • Direct predecessor, direct successor • Example -- 1 | prec | ΣCj • Three jobs a, b, c with pa ≤ pb ≤ pc • Optimal without precedence: a-b-c • With additional precedence c → a • Clearly, c-b-a is not optimal (why?) • Then, c-a-b or b-c-a?

  13. Dependent Jobs • Minimizing maximum tardiness • 1 | rj , prec | Lmax -- NP-hard (why?) • Apply any optimization approach for 1 | rj | Lmax after the following revision • Dominance property • Job j follows job i in an optimal sequence if ri ≤ rj and di ≤ dj . • For each precedence i → j, revise rj and di to rj' and di' such that • rj' = max{rj , ri + pi} and di' = min{di , dj – pj} • i.e., make ready times and due dates agreeable and consistent with the precedence ⇒ no need to design a new algorithm for this special case • Justification of the revision -- when di > dj – pj • Li = Ci – di' = Ci – (dj – pj) ≤ (Cj – pj) – (dj – pj) = Cj – dj = Lj • 1 | prec | Lmax • Revise only due dates, and apply EDD • 1 | prec | max gj(Cj) -- extension of Theorem 1 of Ch. 3 • When the objective is to minimize the maximum penalty, job i may be assigned the last position in sequence if job i has no unscheduled successors and gi(P) ≤ gj(P) for all jobs j ≠ i.
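The revision step above can be sketched as a small fixed-point loop; iterating until nothing changes also propagates the revisions along chains of arcs. A sketch under the assumption that precedence is given as a list of (i, j) arcs (names are illustrative):

```python
def revise_release_due(r, p, d, prec):
    """Make ready times and due dates consistent with precedence arcs.
    r, p, d: per-job lists; prec: list of arcs (i, j) meaning i -> j.
    Applies r_j' = max(r_j, r_i + p_i) and d_i' = min(d_i, d_j - p_j)
    repeatedly until stable. Returns (revised r, revised d)."""
    r, d = list(r), list(d)
    changed = True
    while changed:
        changed = False
        for i, j in prec:
            if r[i] + p[i] > r[j]:        # j cannot start before i finishes
                r[j] = r[i] + p[i]
                changed = True
            if d[j] - p[j] < d[i]:        # i must leave room for j's tardiness
                d[i] = d[j] - p[j]
                changed = True
    return r, d
```

After the revision, an algorithm for 1 | rj | Lmax (e.g., nondelay EDD plus branch and bound) can be applied as if the precedence constraints were absent.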

  14. Dependent Jobs • 1 | prec | ΣCj • Strongly NP-hard for arbitrary precedence structures • Some special cases with polynomial-time algorithms • Precedence structures with strings and chains • Minimizing total flowtime with strings • String • A set of jobs that must appear together (contiguously) and in a fixed order • e.g., 4 jobs with a string (1-2-3) • Only two possible sequences: 1-2-3-4 or 4-1-2-3 • Some applications • Conflict between sorting and precedence constraints • e.g., single relevant precedence constraint i → j but pj ≤ pi • j is preferred to i for the F criterion • There exists an optimal sequence in which jobs i and j are adjacent, in that order (why?) • Contiguity constraints, e.g., a group of jobs sharing a common major setup • Chains and series-parallel networks (discussed next)

  15. Dependent Jobs • Minimizing total flowtime with strings (cont’d) • Problem with s strings • nk -- number of jobs in string k (1 ≤ k ≤ s) • pkj -- processing time of job j in string k (1 ≤ j ≤ nk) • Let • pk = Σj=1..nk pkj -- total processing time in string k • F(k, j) -- flowtime of job j in string k • F(k) = F(k, nk) -- flowtime of string k • Objective -- to minimize the total flowtime (of jobs) • min F = Σk=1..s Σj=1..nk F(k, j) • Theorem 6 • In the single-machine problem with job strings, total flowtime is minimized by sequencing the strings in the order p[1]/n[1] ≤ p[2]/n[2] ≤ ... ≤ p[s]/n[s] • Proof of Theorem 6 • F = Σk=1..s Σj=1..nk F(k, j) = Σk=1..s Σj=1..nk (F(k) – Σi=j+1..nk pki) = Σk=1..s nk F(k) – c, where c is a constant independent of the string sequence
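Theorem 6 gives a one-line rule: sort strings by the ratio of total processing time to job count. A sketch, representing each string as a list of its jobs' processing times (names are my own):

```python
def sequence_strings(strings):
    """Theorem 6: order strings by nondecreasing p(k)/n(k) to minimize
    total flowtime. strings: list of lists of processing times.
    Returns (ordering of string indices, total flowtime of all jobs)."""
    order = sorted(range(len(strings)),
                   key=lambda k: sum(strings[k]) / len(strings[k]))
    t, total = 0, 0
    for k in order:
        for pj in strings[k]:             # jobs inside a string stay contiguous
            t += pj
            total += t                    # accumulate each job's flowtime
    return order, total
```

With strings of length one this reduces to SPT, as expected.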

  16. Dependent Jobs • Minimizing total flowtime with parallel chains • Chain • Precedence structure in which each job has at most one direct predecessor and at most one direct successor • The jobs in a chain do not necessarily have to be sequenced contiguously (the jobs in a string do) • Example with 9 jobs • Feasible sequences: 4-1-2-3-7-8-9-5-6, 7-1-4-2-5-6-3-8-9, ... • ALGORITHM 3 -- Parallel Chain Algorithm for F 1. Initially, each job is a string. 2. Find a pair of strings, u and v, such that u directly precedes v and pv/nv ≤ pu/nu . Replace the pair by the string (u, v). Then repeat this step. When no such pair can be found, proceed to Step 3. 3. Sort the strings in nondecreasing order of p/n. This is an optimal schedule. • Justification of Algorithm 3 • Extended from Theorem 6 and related analysis [diagram of the 9-job parallel-chain example omitted]
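Algorithm 3 can be sketched for the parallel-chain case as follows. Each chain is a list of processing times in precedence order; after the merge step, ratios within a chain are strictly increasing, so an ordinary (stable) sort in Step 3 cannot violate the chain order. Names and the input format are illustrative:

```python
def parallel_chains_flowtime(chains):
    """ALGORITHM 3 for minimizing total flowtime under parallel chains.
    chains: list of chains, each a list of processing times in precedence
    order. Returns (job sequence as processing times, total flowtime)."""
    strings = []
    for chain in chains:
        work = [[pj] for pj in chain]          # Step 1: each job is a string
        merged = True
        while merged:                          # Step 2: merge u with successor v
            merged = False                     # whenever p_v/n_v <= p_u/n_u
            for u in range(len(work) - 1):
                pu, pv = work[u], work[u + 1]
                if sum(pv) / len(pv) <= sum(pu) / len(pu):
                    work[u] = pu + pv          # replace the pair by string (u, v)
                    del work[u + 1]
                    merged = True
                    break
        strings.extend(work)
    strings.sort(key=lambda s: sum(s) / len(s))   # Step 3: sort by p/n
    seq = [pj for s in strings for pj in s]
    t, total = 0, 0
    for pj in seq:
        t += pj
        total += t
    return seq, total
```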

  17. Dependent Jobs • Minimizing total flowtime with parallel chains (cont’d) • Series-parallel precedence structure • Network N with a single node, or one that can be partitioned into two subnetworks N1 and N2 which are themselves series-parallel and where either: • N1 is in series with N2 (if i ∈ N1 and j ∈ N2 , then i → j), or • N1 is in parallel with N2 (if i ∈ N1 and j ∈ N2 , then neither i → j nor j → i) • Example with 8 jobs • Optimal sequence construction -- recursively apply the following from the leaves of the decomposition tree: • Series-type N: form the string (N1, N2) • Parallel-type N: apply Algorithm 3 [diagrams of the series-parallel structure and its decomposition tree omitted]

  18. Sequence-dependent Setup Times • 1 | sjk | Cmax • Setup time that cannot be absorbed into a job’s processing time • Examples • Production of different chemical compounds, colors of paint, strengths of detergent, blends of fuel (with cleansing required for switching) • Process line for four types of gasoline • Setup times -- sij matrix • Makespan • 1-2-3-4-1: p1 + 30 + p2 + 20 + p3 + 60 + p4 + 20 = Σj pj + 30 + 20 + 60 + 20 • 1-2-4-3-1: p1 + 30 + p2 + 80 + p3 + 10 + p4 + 30 = Σj pj + 30 + 80 + 10 + 30 • ... • Objective -- to minimize makespan ⇔ to minimize total setup time • min M = F[n] + s[n],[n+1] = Σj=1..n+1 s[j–1],[j] + Σj=1..n pj ⇔ min Σj=1..n+1 s[j–1],[j]

  19. Sequence-dependent Setup Times • 1 | sjk | Cmax (cont’d) • Strongly NP-hard • The traveling salesman problem (TSP) reduces to 1 | sjk | Cmax . • TSP -- mathematical programming model • Decision variables • xij = 1 if path (i, j) is part of the tour; xij = 0 otherwise • Objective • z = Σi Σj≠i sij xij • Constraints • the xij’s must form a tour • Representation of a solution by a selection of paths • Cost of a solution -- length of the tour • Sum of lengths of selected paths • Example • x12 = x24 = x43 = x31 = 1; 0 otherwise -- tour 1-2-4-3, length 150 [graph representation of the setup-time matrix omitted]

  20. Sequence-dependent Setup Times • Dynamic programming solutions • Let • n -- number of cities • X -- the set of all cities • J -- a subset of X • i -- arbitrarily chosen origin of the tour • A representation of the optimal tour • Sequence of sets {i}, S, {k}, J, {i} • where i ≠ k, S ∩ J = ∅, |S| + |J| = n – 2, {i, k} ∪ S ∪ J = X • The tour begins at city i, proceeds through the cities in S, visits city k, then proceeds through the cities in J, and finally returns to i. • Formulation • f(k, J) = minj∈J {skj + f(j, J – {j})} • The length of the shortest path from city k that passes through the cities in J and finishes at city i • f(k, ∅) = ski -- base case • f(i, X – {i}) -- the length of the optimal tour
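The recursion f(k, J) = min over j in J of {skj + f(j, J – {j})} is the Held-Karp dynamic program. A minimal memoized sketch with the subset J encoded as a bitmask and city 0 as the origin i (function name is my own):

```python
from functools import lru_cache

def tsp_dp(s):
    """Dynamic program f(k, J) = min_{j in J} { s_kj + f(j, J - {j}) },
    with base case f(k, {}) = s_k0 and origin city 0.
    s: square matrix of setup times. Returns the optimal tour length."""
    n = len(s)

    @lru_cache(maxsize=None)
    def f(k, J):                       # J encoded as a bitmask of cities
        if J == 0:
            return s[k][0]             # base case: return to the origin
        best = float("inf")
        for j in range(n):
            if J & (1 << j):
                best = min(best, s[k][j] + f(j, J & ~(1 << j)))
        return best

    # f(i, X - {i}): start at the origin and visit every other city
    return f(0, ((1 << n) - 1) & ~1)
```

The state space has O(n * 2^n) entries, so this is practical only for modest n, which is why branch and bound and heuristics follow.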

  21. Sequence-dependent Setup Times • Branch and bound solutions • Branching scheme • Creating two subproblems at each level • One containing a specific path constrained to be part of the solution • The other subproblem prohibiting that same path • e.g., partition the solutions into those with path (2,1) and those without (2,1), then each side is partitioned again on another path [branching tree diagram omitted]

  22. Sequence-dependent Setup Times • Branch and bound solutions (cont’d) • Reduction of the sij matrix • Subtracting the minimum row element from each row • Subtracting the minimum column element from each column • Lower bound • Sum of the subtraction constants for the reduction (a better bound is obtained by solving the relaxed assignment problem) • Example • Root node (original problem) reduction: LB = 20 • z = Σi Σj≠i sij xij = Σi Σj≠i s'ij xij + 4 + 5 + 4 + 2 + 5 = Σi Σj≠i s'ij xij + 20 • Selection of paths with x12 = x21 = x35 = x43 = x54 = 1? • Σi Σj≠i sij xij = Σi Σj≠i s'ij xij + 20 = 20 • Optimal? No! It contains subtours!
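The reduction step can be sketched in a few lines. This assumes forbidden entries (e.g., the diagonal) are stored as float("inf"), which survives the subtractions unchanged; the function name is my own:

```python
def reduce_matrix(s):
    """Row-then-column reduction of a setup-time matrix.
    s: list of lists, with float('inf') on forbidden entries.
    Returns (reduced matrix, lower bound = sum of reduction constants)."""
    m = [row[:] for row in s]               # work on a copy
    n = len(m)
    lb = 0
    for i in range(n):                      # subtract each row's minimum
        c = min(m[i])
        lb += c
        m[i] = [x - c for x in m[i]]
    for j in range(n):                      # subtract each column's minimum
        c = min(m[i][j] for i in range(n))
        lb += c
        for i in range(n):
            m[i][j] -= c
    return m, lb
```

The bound is valid because every tour uses exactly one entry per row and per column, so the subtracted constants are costs no tour can avoid.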

  23. Sequence-dependent Setup Times • Branch and bound solutions (cont’d) • Justification of reduction and lower bound • Exactly one element in each row is contained in a solution • Exactly one element in each column is contained in a solution • Lower bound as the distance that is unavoidable in any solution • Example of reduction • Original and reduced lengths (LB = 120): row reduction by 30 + 20 + 30 + 10, column reduction by 0 + 0 + 0 + 30 • Analysis from the perspective of node 4 [original and reduced setup-time matrices omitted]

  24. Sequence-dependent Setup Times • Branch and bound solutions (cont’d) • Branching scheme (cont’d) • Selection of a zero element in the reduced matrix for the two subproblems (why?) • A possible element-selection criterion • The one that would permit the largest possible further reduction when prohibited [branching tree diagram omitted]

  25. Sequence-dependent Setup Times • Heuristic solutions • Simple greedy procedures • Closest unvisited city • Variations • Closest unvisited city based on the reduced matrix (relative distance) • Closest unvisited pair of cities (using look-ahead) • Applying the procedure with every city as the origin • Insertion procedure 1. Select two cities arbitrarily and form a partial tour. 2. Insert each remaining city, one by one, at every possible place in the current partial tour, and choose the best. 3. Repeat Step 2 until a complete tour is constructed. • General search methods • A huge variety of such methods exists
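The two construction heuristics above can be sketched as follows. This is one plausible reading of the insertion procedure (starting from cities 0 and 1 and inserting the rest in index order at the cheapest position); function names are my own:

```python
def nearest_neighbor_tour(s, start=0):
    """Closest-unvisited-city greedy tour construction."""
    n = len(s)
    tour, visited = [start], {start}
    while len(tour) < n:
        last = tour[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: s[last][j])       # closest unvisited city
        tour.append(nxt)
        visited.add(nxt)
    return tour

def insertion_tour(s):
    """Insertion procedure: grow a partial tour by inserting each
    remaining city at its cheapest position."""
    n = len(s)
    tour = [0, 1]                                 # arbitrary initial pair
    for city in range(2, n):
        def cost(k):                              # extra length if city goes
            a, b = tour[k], tour[(k + 1) % len(tour)]   # between a and b
            return s[a][city] + s[city][b] - s[a][b]
        best = min(range(len(tour)), key=cost)
        tour.insert(best + 1, city)
    return tour

def tour_length(s, tour):
    """Total length of a closed tour."""
    return sum(s[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))
```

Either tour can then seed a general search method (e.g., as an initial incumbent for branch and bound).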

  26. Summary • Generalization of basic single-machine model • More applicability and new difficulties • Dynamic models • Job preemption • Preempt-resume, preempt-repeat • Inserted idle times • Look-ahead procedures • Precedence constraints • Strings and chains • Series-parallel precedence • Sequence-dependent setup times • Traveling salesman problem • END OF SINGLE MACHINE!! AT LAST!!
