CHAPTER 8 MACHINE SCHEDULING
8.1 Basic Concepts of Machine Scheduling
• Machine scheduling concerns the allocation of jobs (or tasks) T1, T2, …, Tn to be performed on processors (or machines) P1, P2, …, Pm.
• The allocation is to be made so as to optimize some objective.
• Machine scheduling problems include sequencing, which determines the order in which tasks are processed, and scheduling, which determines at what time the processing of a task begins and ends.

Machine scheduling models fall into three classes:
• single machine scheduling,
• parallel machine scheduling, and
• dedicated machine scheduling.

Dedicated machine scheduling in turn comprises three models:
• the open shop, in which each task must be processed by all machines, but there is no specific order in which the processing has to take place,
• the flow shop, in which each job must be processed on all machines, and each task is processed by the machines in the same specified order, and
• the job shop, in which each job must be processed on a specific set of machines, and the processing order is also job-specific.

Scheduling problems are characterized by a number of parameters:

• the processing time pij of task Tj on machine Pi (if there is only a single machine, or all machines have the same processing time for a job, we simplify the notation to pj),

• the ready time (also referred to as arrival time or release time) rj, the time at which task Tj is ready for processing (in the simplest case, rj = 0), and

• the due time dj, the time at which task Tj should be finished.

A schedule is described by a number of variables:

• the completion time cj of task Tj, the time at which task Tj is completely finished,

• the flow time fj of task Tj, defined as fj = cj − rj, the time that a job is in the system, waiting to be processed or being processed, and

• the lateness lj = cj − dj of task Tj, and its tardiness tj = max{lj, 0}.

There are a number of alternative performance criteria; three important criteria are:

• the makespan (or schedule length) Cmax = max{cj: j = 1, …, n}, the time at which the last of the tasks has been completed,

• the mean flow time F, the unweighted average F = (1/n)(f1 + f2 + … + fn), which differs from the mean completion time (1/n)(c1 + c2 + … + cn) only by the constant (1/n)(r1 + r2 + … + rn) and refers to the average time that a job is in the system, either waiting to be processed or being processed, and

• the maximal lateness Lmax = max{lj: j = 1, …, n}, the longest lateness among any of the jobs.
8.2 Single Machine Scheduling

• Minimizing the makespan is not meaningful: each sequence of tasks will result in the same value of Cmax, which will equal the sum of the processing times of all tasks.

• Minimizing the mean flow time: the simple Shortest Processing Time (SPT) algorithm solves the problem optimally:
SPT Algorithm: Schedule the task with the
shortest processing time first. Delete it and
repeat until all tasks have been scheduled.
An extension of the rule considers not only the processing times pj, but also weights wj associated with the tasks. The objective is then to minimize the average weighted flow time, defined for task Tj as wjfj. The weighted generalization of the SPT algorithm is:
WSPT Algorithm (Smith’s Ratio Rule):
Schedule the task with the shortest weighted
processing time pj/wj first. Delete it and repeat
until all tasks have been scheduled.
Example 1: There are seven machines in a manufacturing unit. Maintenance has to be performed periodically. Costs are incurred for downtime, regardless of whether a machine waits for service or is being served. The processing times of the machines, the downtime costs, and the weighted processing times are given in the table below.
Job #                            T1    T2    T3     T4     T5    T6    T7
Service time (minutes)           30    25    40     50     45    60    35
Downtime cost ($ per minute)      2     3     6      9      4     8     3
Weighted processing time pj/wj   15    8⅓    6⅔    5 5/9   11¼   7½   11⅔
Applying WSPT, we first schedule T4 (with the lowest weighted processing time of 5 5/9), followed by T3 with the next-lowest weighted processing time of 6⅔, followed by T6, T2, T5, T7, and T1. The schedule is shown in the Gantt chart (named after the American engineer Henry L. Gantt (1861 – 1919), who developed these charts in 1917).
Tasks T4, T3, T6, …, T1 now have waiting times of 0, 50, 90, 150, 175, 220, and 255 minutes. Adding the processing times results in completion times (= flow times) of 50, 90, 150, 175, 220, 255, and 285. Multiplying by the individual per-minute costs and adding up results in a total of $4,930.
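The WSPT computation can be reproduced in a few lines of Python. This is my own sketch, not part of the text; the downtime costs used are the ones consistent with the quoted ratios and the $4,930 total.

```python
# WSPT (Smith's ratio rule): sort tasks by p_j / w_j, nondecreasing.
# Data from Example 1 (service times in minutes, downtime costs in $/min).
tasks = {  # name: (processing time p_j, weight w_j)
    "T1": (30, 2), "T2": (25, 3), "T3": (40, 6), "T4": (50, 9),
    "T5": (45, 4), "T6": (60, 8), "T7": (35, 3),
}

# Schedule in nondecreasing order of the ratio p_j / w_j.
order = sorted(tasks, key=lambda j: tasks[j][0] / tasks[j][1])

# Total weighted completion time: sum of w_j * c_j.
t, total_cost = 0, 0
for j in order:
    p, w = tasks[j]
    t += p                # completion time c_j of task j
    total_cost += w * t   # downtime cost accrued by task j

print(order)       # ['T4', 'T3', 'T6', 'T2', 'T5', 'T7', 'T1']
print(total_cost)  # 4930
```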
• Minimizing maximal lateness Lmax is done optimally by the earliest due date algorithm (or EDD algorithm, or Jackson’s rule):
EDD Algorithm: Schedule the task with the
earliest due date first. Delete it and repeat
until all tasks have been scheduled.
Example 2: A firm processes book-keeping jobs.
Eleven tasks are to be completed by a single team, one
after another. Processing times and due dates are in
the table below.
Job #                     T1  T2  T3  T4  T5  T6  T7  T8  T9  T10  T11
Processing time (hours)    6   9   4  11   7   5   5   3  14    8    4
Due date                  25  15  32  70  55  10  45  30  30   80   58
The EDD rule schedules task T6 first (its due date of 10 is the earliest of all), followed by T2 and T1. The Gantt chart for the schedule is shown below (the tie between T8 and T9, which have the same due date, is broken arbitrarily).
Tasks T6, T2, T1, and T8 are finished before their due dates; T9 is late by 7 hours, T3 by 9, and T7 by 1; T5, T11, T4, and T10 are again finished before they are due. The maximal lateness occurs for job T3, so that Lmax = 9. There are other optimal schedules.
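The EDD computation for Example 2 can be checked with a short Python sketch (mine, using the table data above):

```python
# EDD (Jackson's rule): sequence tasks in nondecreasing order of due date.
# Data from Example 2 (times in hours).
p = {"T1": 6, "T2": 9, "T3": 4, "T4": 11, "T5": 7, "T6": 5,
     "T7": 5, "T8": 3, "T9": 14, "T10": 8, "T11": 4}
d = {"T1": 25, "T2": 15, "T3": 32, "T4": 70, "T5": 55, "T6": 10,
     "T7": 45, "T8": 30, "T9": 30, "T10": 80, "T11": 58}

# Ties (T8 and T9 both due at 30) are broken arbitrarily; the stable
# sort keeps dictionary order, so T8 precedes T9.
order = sorted(p, key=lambda j: d[j])

t = 0
lateness = {}
for j in order:
    t += p[j]
    lateness[j] = t - d[j]   # l_j = c_j - d_j (negative means early)

L_max = max(lateness.values())
print(L_max)  # 9, attained by T3
```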
8.3 Parallel Machine Scheduling

• Minimizing the makespan is quite hard computationally, even for just two machines, so we need heuristic methods; one is the longest processing time first (or LPT) algorithm.
LPT is a list scheduling method: it makes a priority list of jobs, starting with the highest priority. Jobs are then assigned one at a time to the first available machine.
LPT Algorithm: Put the tasks in order of
nonincreasing processing times. Assign the first task to
the first available machine. Repeat until all tasks have
been scheduled.
Example 3: Use again Example 1 above, but now we have three servicemen available to perform the tasks. With the seven tasks in order of processing time, starting with the longest, we obtain the sequence T6, T4, T5, T3, T7, T1, and T2 with processing times of 60, 50, 45, 40, 35, 30, and 25 minutes. In the beginning, all three machines are available, so we first assign the longest task T6 to machine P1 (P1 will be available again at time 60, when T6 is completely processed). Next T4 goes to P2, which is now available (P2 will be available again at time 50, when T4 is completely processed). Next T5 is assigned to P3, which will be available again at time 45. The next task to be scheduled is T3. The three machines become available again at 60, 50, and 45, respectively, so T3 is scheduled on P3, and so on. (Shaded areas in the Gantt chart indicate idle time.)
The schedule length is Cmax = 110. This schedule is not optimal, but the LPT algorithm is not an exact algorithm, only a heuristic. (The optimal solution schedules T6 and T7 on P1, T4 and T5 on P2, and T3, T1, and T2 on P3. This schedule has no idle time and finishes at time 95. Note that optimality does not require that there is no idle time. On the other hand, if there is no idle time, the schedule must be optimal.)
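List scheduling with LPT is conveniently implemented with a min-heap of machine free times; the following sketch (my own) reproduces the Cmax = 110 schedule of Example 3:

```python
import heapq

# LPT list scheduling: longest task first, to the first available machine.
# Data from Example 3: seven service tasks, three servicemen.
p = {"T1": 30, "T2": 25, "T3": 40, "T4": 50, "T5": 45, "T6": 60, "T7": 35}
m = 3

order = sorted(p, key=lambda j: -p[j])     # nonincreasing processing times

# Min-heap of (time machine becomes free, machine index).
machines = [(0, i) for i in range(m)]
heapq.heapify(machines)
assignment = {}
for j in order:
    free_at, i = heapq.heappop(machines)   # first available machine
    assignment[j] = (i, free_at, free_at + p[j])
    heapq.heappush(machines, (free_at + p[j], i))

C_max = max(end for (_, _, end) in assignment.values())
print(C_max)  # 110 for the LPT schedule; the optimum here is 95
```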
The LPT algorithm is not guaranteed to find an optimal solution that truly minimizes the schedule length Cmax. Instead, we may try to estimate how far from an optimal solution the heuristic solution found by the LPT algorithm might be. Define the performance ratio
R = Cmax(heuristic)/Cmax(optimum),
where Cmax(heuristic) is the schedule length obtained by the heuristic method and Cmax(optimum) is the true minimal schedule length. Since Cmax is a minimization objective, Cmax(optimum) ≤ Cmax(heuristic), so that R ≥ 1, and the smaller the value of R, the closer the obtained schedule will be to the true minimum. The performance ratio RLPT for the LPT algorithm applied to an n-job, m-machine problem satisfies
RLPT ≤ 4/3 − 1/(3m).
For a two-machine (m = 2) problem, RLPT ≤ 4/3 − 1/(3·2) = 7/6 ≈ 1.167, meaning that in the worst case, the LPT algorithm will find a schedule that is 16.7% longer than optimal. This bound is actually tight, as shown in the following example.
Example 4: Let a two-machine, five-job scheduling problem have the task processing times 3, 3, 2, 2, and 2, respectively. Applying the LPT algorithm to this problem results in the schedule shown in (a), whereas an optimal schedule is shown in (b). With Cmax = 7 for the LPT schedule in (a), and Cmax = 6 for the optimal schedule in (b), we see that the worst-case bound of RLPT = 7/6 is achieved.
Having shown that the worst-case scenario can occur for the performance ratio bound of RLPT ≤ 4/3 − 1/(3m), we can look at the positive side and conclude that for a two-machine problem an LPT schedule can never be poorer than 16.7% above the optimal value. For a three-machine problem, this becomes slightly worse with RLPT ≤ 4/3 − 1/(3·3) = 11/9 ≈ 1.222, i.e., Cmax is no more than 22.2% longer than its optimal value. For four machines, we obtain 25%, and for five machines 26.7%; for a large number of machines the value approaches 33⅓%.
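These percentages follow directly from the bound; a quick exact-arithmetic check (my own sketch):

```python
from fractions import Fraction

# Worst-case LPT performance ratio: R_LPT <= 4/3 - 1/(3m).
def lpt_bound(m):
    return Fraction(4, 3) - Fraction(1, 3 * m)

for m in (2, 3, 4, 5):
    b = lpt_bound(m)
    print(m, b, float(b))
# m = 2: 7/6, m = 3: 11/9, m = 4: 5/4, m = 5: 19/15
```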
When scheduling tasks on several parallel processors, it is sometimes possible to allow preemption, whereby a task may be preempted, i.e., stopped, and restarted later on any (possibly another) processor. This makes the problem of minimizing Cmax on parallel machines easy. The so-called Wrap-Around Rule of McNaughton finds an optimal schedule when preemption is permitted. Specifically, assume that there are m parallel processors on which n tasks are to be performed with processing times pj, j = 1, 2, …, n. Clearly, no schedule exists with a makespan Cmax shorter than the longest of the processing times max{pj: j = 1, …, n}. Also, Cmax cannot be shorter than the total processing time divided by the number of machines, i.e., (1/m)(p1 + p2 + … + pn). Therefore
Cmax ≥ max{ max{pj: j = 1, …, n}, (1/m)(p1 + p2 + … + pn) }.
The algorithm can be described as follows.
McNaughton’s Wrap-Around Rule: First
sequence the tasks in arbitrary order,
obtaining a sequence of length p1 + p2 + … +
pn time units. Then compute
C*max := max{ max{pj: j = 1, …, n}, (1/m)(p1 + p2 + … + pn) }
and break the time sequence at the points
iC*max, i = 1, 2, …, m−1. Schedule all tasks in the
interval ((i−1)C*max; iC*max] on processor Pi, i = 1,
2, …, m, noting that preempted tasks may be
at the beginning and/or the end of each
processor schedule. Finally, any task that is
preempted and processed on two different
processors Pi and Pi+1 will have to be
processed first on Pi+1, then preempted, and
finished on Pi.
It is clear that there will be no idle time on any of the processors, unless there is a job with a processing time longer than (1/m)(p1 + p2 + … + pn).
Example 5: Consider the processing times for the eleven tasks in Example 2, but ignore the due dates. With two processors, we first compute
C*max = max{ max{pj: j = 1, …, 11}, ½(p1 + p2 + … + p11) } = max{p9, ½(76)} = max{14, 38} = 38.
The Wrap-Around Rule will then produce the following optimal schedule:
In a practical application of this optimal schedule, job T6 would start on processor P2 at time t = 0 and be preempted at t = 4, then continue on P1 at t = 37 and finish at t = 38.
Assume now that three processors (or teams) were available, and that p9 increases from 14 to 34. Then C*max = max{34, ⅓(96)} = max{34, 32} = 34. Some idle time is now inevitable due to the long processing time p9. Using the Wrap-Around Rule, we obtain the schedule below.
For the preempted jobs in this optimal schedule, T5 will commence being processed at t = 0 on P2, be preempted at t = 3, and be continued on P1 at t = 30, until it is finished at t = 34. Job T9 will start on P3 at t = 0 and be processed there without interruption until it is finished at t = 34. Job T10 is processed on P2 from t = 16 to 24, immediately followed by T11 from t = 24 to 28, after which P2 is idle until the end of the schedule at t = 34.
Now consider minimizing the mean flow time F. For identical parallel processors, preemption is not profitable, so we will only consider nonpreemptive schedules. The problem can then be solved to optimality by means of a simple technique:
Algorithm for minimizing mean flow time: Sort the
jobs in order of nondecreasing processing time (ties
are broken arbitrarily). Renumber them as
T1, T2, …, Tn. Then assign tasks T1, T1+m, T1+2m, … to
machine P1, tasks T2, T2+m, T2+2m, … to P2, tasks T3, T3+m, T3+2m, … to
P3, and so on. Tasks are processed in the order they
are assigned.
Consider again Example 1 above. The reordered sequence of tasks is (T1, T2, T3, T4, T5, T6, T7) = (T2, T1, T7, T3, T5, T4, T6) with processing times (25, 30, 35, 40, 45, 50, 60). With three machines, assign to P1 the tasks T1, T4, and T7 (or, renumbering them again, T2, T3, and T6); P2 is assigned the jobs T2 and T5 (i.e., T1 and T5); and P3 will process T3 and T6, i.e., T7 and T4.
The mean flow time is F = (1/7)[25 + 65 + 125 + 30 + 75 + 35 + 85] = 440/7 ≈ 62.8571.
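The dealing-out scheme is easily sketched in Python (my own code); applied to the service times of Example 1 with three machines, it reproduces the total completion time of 440:

```python
# Minimizing mean flow time on m identical machines: sort by processing
# time (SPT) and deal the tasks out cyclically, machine i taking every
# m-th task. Data: the service times of Example 1, m = 3.
p = {"T1": 30, "T2": 25, "T3": 40, "T4": 50, "T5": 45, "T6": 60, "T7": 35}
m = 3

order = sorted(p, key=lambda j: p[j])      # (T2, T1, T7, T3, T5, T4, T6)
loads = [order[i::m] for i in range(m)]    # job lists of P1, P2, P3

total = 0
for jobs in loads:
    t = 0
    for j in jobs:
        t += p[j]
        total += t                          # completion time of job j

print(total)            # 440
print(total / len(p))   # mean flow time = 440/7 ≈ 62.857
```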
• Minimization of the maximal lateness turns out to be difficult, and we leave its discussion to specialized books, such as Eiselt and Sandblom (2004).
8.4 Dedicated Machine Scheduling
In an open shop, each task must be processed on each
of a number of different machines. The sequence of
machines is immaterial. We deal only with the case of
two machines, which happens to be easy, while
problems with more machines are difficult.
Minimizing the schedule length (makespan) Cmax is
easy. Optimal schedules can be found by the Longest
Alternate Processing Time (LAPT) Algorithm:
LAPT Algorithm: Whenever a machine
becomes idle, schedule the task on it with the
longest processing time on the other machine,
if it has not yet been processed on that
machine and is available at that time. If a task
is not available, the task with the next-longest
processing time on the other machine is
scheduled. Ties are broken arbitrarily.
Example 6: In an assembly shop, each semi-finished
product goes through two phases, assembly of
components, and checking of them. The sequence of
these tasks is immaterial. The times (in minutes) to
assemble and check the six products are:
Job #                    T1  T2  T3  T4  T5  T6
Processing time on P1    30  15  40  30  10  25
Processing time on P2    35  20  40  20   5  30
Using LAPT, we begin by scheduling a task on machine P1. The task with the longest processing time on P2 is T3, so this is scheduled first on P1. Next we schedule a task on P2, which is still idle. The task with the longest processing time on P1 is again T3, but it is not available now, so we schedule the task with the next-longest processing time on P1. This is T1 or T4; choose T1. With T3 and T1 scheduled on the two machines, T1 is the first to finish, at time 35, and P2 is idle again. Now, the available task with the next-longest processing time on P1 is T4, which is scheduled next on P2. Continuing, the optimal schedule with Cmax = 155 is obtained:
Minimizing the mean completion time and the maximal lateness is computationally difficult, even for two machines.
Now consider the flow shop model, in which each task has to be processed by all machines in the same, prespecified order. The schedule above does not satisfy this condition, since T3 is processed on P1 first and later on P2, while T1 is processed on P2 first and then on P1. Consider two machines, for which the makespan is to be minimized. The famous Johnson’s rule, first described in the early 1950s, does this. All jobs have to be processed on P1 first and then on P2, and the processing time of task Tj is p1j on machine P1 and p2j on machine P2.
Johnson’s Algorithm: For all jobs with
processing time on P1 the same or less than
their processing time on P2 (i.e., p1j ≤ p2j), form
the subschedule S1 with tasks in
nondecreasing order of their p1j values. For all
other jobs (i.e., with p1j > p2j), form the
subschedule S2 with tasks in nonincreasing
order of their p2j values. The sequence of jobs
is then (S1, S2).
Consider Example 6, where now each job has to be assembled first and then checked, i.e., processed on machine P1 first and then on P2. The tasks for which the condition p1j ≤ p2j holds are T1, T2, T3, and T6; those with p1j > p2j are T4 and T5. With the former four tasks in nondecreasing order of processing times on P1 we get the subsequence S1 = (T2, T6, T1, T3), and with the latter two in nonincreasing order of processing times on P2 we get S2 = (T4, T5). The resulting sequence is (T2, T6, T1, T3, T4, T5), and the schedule is shown in the Gantt chart below:
The schedule length is Cmax = 175. Since a flow shop is
more restrictive than an open shop, the increase of the
schedule length from 155 to 175 minutes is not
surprising.
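Johnson's rule plus a simulation of the two-machine flow shop takes only a few lines; this sketch (my own) uses the assembly and checking times from the example:

```python
# Johnson's rule for the two-machine flow shop (minimize makespan).
# Data: assembly times (P1) and checking times (P2) of Example 6.
p1 = {"T1": 30, "T2": 15, "T3": 40, "T4": 30, "T5": 10, "T6": 25}
p2 = {"T1": 35, "T2": 20, "T3": 40, "T4": 20, "T5": 5,  "T6": 30}

S1 = sorted((j for j in p1 if p1[j] <= p2[j]), key=lambda j: p1[j])
S2 = sorted((j for j in p1 if p1[j] > p2[j]), key=lambda j: -p2[j])
sequence = S1 + S2

# Simulate the flow shop: each job runs on P1 first, then on P2.
t1 = t2 = 0
for j in sequence:
    t1 += p1[j]               # completion of j on P1
    t2 = max(t2, t1) + p2[j]  # P2 waits for both the job and the machine

print(sequence)  # ['T2', 'T6', 'T1', 'T3', 'T4', 'T5']
print(t2)        # C_max = 175
```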
The last model is a job shop: not all tasks need to be
performed on all machines, and the sequence in which
a job is processed on the machines is job-specific. We
will only consider two machines and minimize the
makespan. Jackson described an exact algorithm for
this in 1955. It uses Johnson’s (flow shop) algorithm as
a subroutine.
Jackson’s Job Shop Algorithm: Divide the set
of jobs into four categories:
J1 includes all jobs requiring processing only
on P1,
J2 includes all jobs requiring processing only
on P2,
J12 includes all jobs requiring processing on P1
first and then on P2, and
J21 includes all jobs requiring processing on P2
first and then on P1.
Apply Johnson’s rule to jobs in J12, resulting
in the sequence S12. Then apply the rule to
jobs in the set J21, but with p1j and p2j
swapped. The result is the subsequence S21.
Jobs in J1 and J2 are sequenced in arbitrary
order; denote their sequences by S1 and S2.
The job order on P1 is (S12, S1, S21), and the
job order on P2 is (S21, S2, S12).
We modify Example 6 as displayed in the table:

Job #                  T1      T2      T3    T4    T5      T6
Processing time p1j    30      15      —     30    10      25
Processing time p2j    35      20      40    —      5      30
Processing sequence    P2, P1  P1, P2  P2    P1    P1, P2  P2, P1
We have J1 = {T4}, J2 = {T3}, J12 = {T2, T5}, and J21 = {T1, T6}. Since J1 and J2 contain only one job each, the subsequences are S1 = (T4) and S2 = (T3). Applying Johnson’s algorithm to J12, we obtain the sequence S12 = (T2, T5). For J21, applying Johnson’s rule to T1 and T6 with the processing times p1j and p2j switched, we get S21 = (T1, T6). The overall sequence on P1 is then (S12, S1, S21) = (T2, T5, T4, T1, T6), while the overall sequence on P2 is (S21, S2, S12) = (T1, T6, T3, T2, T5). The resulting schedule has an overall schedule length of Cmax = 130 minutes.
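Jackson's procedure can be sketched in Python with Johnson's rule as the subroutine. The route encoding ("12" for P1-then-P2, etc.) is my own shorthand for the table above, and the simulation at the end is a straightforward greedy dispatcher, not part of Jackson's algorithm itself:

```python
# Jackson's two-machine job shop algorithm, with Johnson's rule as a
# subroutine. Data from the modified example table (times in minutes).
p1 = {"T1": 30, "T2": 15, "T4": 30, "T5": 10, "T6": 25}  # no P1 op for T3
p2 = {"T1": 35, "T2": 20, "T3": 40, "T5": 5, "T6": 30}   # no P2 op for T4
route = {"T1": "21", "T2": "12", "T3": "2", "T4": "1", "T5": "12", "T6": "21"}

def johnson(jobs, a, b):
    """Johnson's rule: times a on the first machine, b on the second."""
    g1 = sorted((j for j in jobs if a[j] <= b[j]), key=lambda j: a[j])
    g2 = sorted((j for j in jobs if a[j] > b[j]), key=lambda j: -b[j])
    return g1 + g2

J1 = [j for j in route if route[j] == "1"]
J2 = [j for j in route if route[j] == "2"]
S12 = johnson([j for j in route if route[j] == "12"], p1, p2)
S21 = johnson([j for j in route if route[j] == "21"], p2, p1)  # roles swapped

order1 = S12 + J1 + S21   # job order on P1: (S12, S1, S21)
order2 = S21 + J2 + S12   # job order on P2: (S21, S2, S12)

# Simulate both machines to obtain the makespan.
t1 = t2 = 0
i1 = i2 = 0
done1, done2 = {}, {}     # completion time of each job's P1 / P2 operation
while i1 < len(order1) or i2 < len(order2):
    moved = False
    if i1 < len(order1):
        j = order1[i1]
        if route[j] != "21" or j in done2:     # P1 op may need the P2 op first
            t1 = max(t1, done2.get(j, 0)) + p1[j]
            done1[j], i1, moved = t1, i1 + 1, True
    if i2 < len(order2):
        j = order2[i2]
        if route[j] != "12" or j in done1:     # P2 op may need the P1 op first
            t2 = max(t2, done1.get(j, 0)) + p2[j]
            done2[j], i2, moved = t2, i2 + 1, True
    assert moved                               # Jackson's orders never deadlock

print(order1)       # ['T2', 'T5', 'T4', 'T1', 'T6']
print(order2)       # ['T1', 'T6', 'T3', 'T2', 'T5']
print(max(t1, t2))  # C_max = 130
```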