AN INTELLIGENT HEURISTIC SEARCH METHOD
FOR
FLOW-SHOP PROBLEMS
A thesis submitted in fulfilment of the requirements
for the award of the degree
DOCTOR OF PHILOSOPHY
from
UNIVERSITY OF WOLLONGONG
by
JOSHUA POH-ONN FAN
B.MATH., B.E. (HONS.)
Department of Information Systems
AUGUST 2002
DECLARATION
This is to certify that the work presented in this thesis was carried out by the author in the Department of Information Systems at the University of Wollongong, Australia, and is the result of original research that has not been submitted for a degree to any other university or institution.
ACKNOWLEDGEMENTS
I would like to express my deepest gratitude to my supervisor Professor Graham
Winley for his thoughtful guidance and invaluable input and ideas throughout the
course of this research. Besides being a kind and caring supervisor, he constantly
encouraged me in my research and motivated me to keep the quality of this research
to the highest standard possible. His constant support enabled me to conclude the
work of this research. I wish to thank him for being a great supervisor, mentor and
friend.
I would also like to thank Dr. Li-Yen Shue for his guidance during the early stages of this research.
I would like to thank Mr. Ang Yang Ang for being a dedicated friend who proofread the thesis throughout every stage of my writing. I would like to thank Mr. Teh Kang Hai for spending his valuable time proofreading the final version of my thesis. Finally, I would like to thank all my family members for their dedicated support and patience throughout these many years of my work.
ABSTRACT
The flow-shop problem has been a challenge for production management ever since the manufacturing industry began. It involves the scheduling of production processes for manufacturing orders (referred to as jobs), where each job requires the same processing sequence and thus the same processing facilities (referred to as machines), although the processing time for a job on a machine may differ from that of a different job. The objective is to find a schedule that completes all jobs in the shortest possible time under the constraints of machine availability. The fact that more than one job may be chosen to be processed on a machine at any one time makes this problem combinatorial in nature. The most unwanted characteristic of a combinatorial problem is the explosive growth of the solution space and hence the computation time. In mathematical terms, the number of possible schedules for n jobs and m machines is (n!)^m.
Early research in this area is mainly based on Johnson's theorem (Johnson, 1954). For flow-shop problems with only two machines, or three machines with certain characteristics, and an arbitrary number of jobs, Johnson's theorem provides simple procedures for finding an optimal solution. The optimal solution is the sequence of an arbitrary number of jobs that will minimise the maximum flow-time (makespan or schedule time). Unfortunately, this theorem does not extend to general three-machine problems or problems involving more than three machines.
The aim of this research is to define a new solution procedure (algorithm) for finding an optimal solution for the three-machine flow-shop problem. In this research, a significant contribution has been made by developing an algorithm that can solve flow-shop scheduling problems beyond two machines. The three-machine flow-shop heuristic algorithm we have developed will always produce an optimal sequence. One characteristic of the algorithm developed in this research is its ability to learn from past history to improve future search processes. The process of learning from past history reduces the disadvantage of the large computation time associated with most flow-shop scheduling algorithms. In the process of developing the algorithm, we have introduced two important improvements related to the way we estimate the initial distance to the goal at the start of the search and during the search. In addition, with minor modifications the algorithm we have developed can efficiently find multiple optimal solutions when they exist. We envisage that the algorithm developed in this research will contribute significantly to the manufacturing, computer, digital data communication and transportation industries. The results of this research will be presented in five chapters.
In chapter 1 we discuss the basic concepts related to scheduling and we define the special terms used in the field. Then, we discuss the two-machine flow-shop problem and its solution as well as developments to date relating to the three-machine flow-shop problem. Finally we consider the development of modern heuristic algorithms for searching, especially the A* algorithm and modifications to A* which make it more efficient.
In chapter 2 we gradually develop our new heuristic algorithm for solving the three-machine problem. The new algorithm is based on the Search and Learning A* algorithm and is developed to ensure that it is optimal and complete. We begin with the development of a heuristic algorithm for the two-machine problem and, after introducing an important improvement to that algorithm, we are able to extend the algorithm for use with problems involving three machines.
As is usually the case with algorithms using heuristic functions, we can improve the performance of the algorithm by choosing a high quality heuristic function which underestimates the minimum makespan (is admissible) but is very close to it. In chapter 3 we analyse and compare a set of heuristic functions suitable for use with the three-machine problem. We establish the admissibility of the functions and describe conditions which enable us to choose from amongst them a heuristic function which is closest to the minimum makespan. Chapter 3 concludes with a statement of the new heuristic algorithm incorporating improvements developed in chapter 2 as well as guidelines for choosing a high quality heuristic function.
In chapter 4 we compare the use of the flow-shop heuristic algorithm developed in chapter 3 with the use of the genetic algorithm described in chapter 1. We discuss other improvements we can apply to the algorithm in terms of memory usage. We modify the flow-shop heuristic algorithm to enable us to find multiple optimal solutions if they exist. We discuss the possibility of applying the algorithm to four-machine and five-machine flow-shop problems and some possible applications of the algorithm to practical industrial problems.
Chapter 5 presents the conclusions and the findings of this research as well as
identifying areas for further related research.
LIST OF PUBLICATIONS
Shue, L. and Fan, J.P.O. (1996) "The development of an expert system to assist laboratory scheduling," Proceedings of the IASTED International Conference on Artificial Intelligence, Expert Systems and Neural Networks, Hawaii, USA, pp 223-226.

Shue, L. and Fan, J.P.O. (1997) "An object oriented expert system for scheduling a computer laboratory," Proceedings of the IASTED International Conference on Software Engineering, San Francisco, USA, pp 268-272.

Fan, J.P.O. (1999) "The development of a heuristic search strategy for solving the flow-shop scheduling problem," Proceedings of the IASTED International Conference on Applied Informatics, Innsbruck, Austria, pp 516-518.

Fan, J.P.O. (1999) "An intelligent search strategy for solving the flow-shop scheduling problem," Proceedings of the IASTED International Conference on Software Engineering, Scottsdale, Arizona, USA, pp 99-103.
TABLE OF CONTENTS

DECLARATION
ACKNOWLEDGEMENTS
ABSTRACT
LIST OF PUBLICATIONS
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
LIST OF SEARCH PATH DIAGRAMS
LIST OF PANELS

CHAPTER 1: BACKGROUND RESEARCH OF FLOW-SHOP SCHEDULING
1.1 Introduction
1.2 General Overview of Shop Scheduling
1.3 Classification of Shop Scheduling Problems
1.4 The Flow-Shop Problem
1.5 Two-Machine Flow-Shop Problem
1.6 The Johnson's Two-Machine Flow-Shop Solution (Johnson, 1954)
1.7 Three-Machine Flow-Shop Problem
1.8 Some Current Findings Regarding the Three-Machine Flow-Shop Problem
    1.8.1 Extensions of Johnson's Rule to Three Machines (Johnson, 1954)
    1.8.2 Branch and Bound Algorithms for Makespan Problems
    1.8.3 Further Developments and Modifications to the Branch and Bound Algorithm
    1.8.4 Linear Programming (LP)
    1.8.5 Integer Linear Programming (ILP)
    1.8.6 Heuristic Approaches for Flow-Shop Problems
    1.8.7 Palmer Method (1965)
    1.8.8 Gupta Method (1970, 1971)
    1.8.9 CDS Method (1970)
    1.8.10 Palmer-Based Fuzzy Flow-Shop Scheduling Algorithm (Hong & Chuang, 1999; Hong & Wang, 1999)
    1.8.11 Gupta-Based Flexible Flow-Shop Scheduling Algorithm (Hong et al., 2000)
    1.8.12 Flow-Shop Scheduling with Genetic Algorithm
    1.8.13 Genetic Algorithm
    1.8.14 Genetic Algorithm for Scheduling Problems
1.9 Uninformed Search Methods and Informed Search Methods
1.10 Formulation of Search Space for Scheduling Problems
    1.10.1 State-Space Problem Representations (Russell & Norvig, 1995)
    1.10.2 Criteria to Evaluate the Effectiveness of a State-Space Search Method
1.11 Uninformed Search Methods (Blind Search)
    1.11.1 Breadth-First Search (Cohen & Feigenbaum, 1982)
    1.11.2 Uniform Cost Search (Dijkstra, 1959)
    1.11.3 Depth-First Search (Cohen & Feigenbaum, 1982)
    1.11.4 Depth-Limited Search (Cohen & Feigenbaum, 1982)
    1.11.5 Iterative Deepening Search (Slate & Atkin, 1977)
    1.11.6 Bi-Directional Search (Pohl, 1969, 1971)
    1.11.7 Comparison of Uninformed Search Methods (Russell & Norvig, 1995)
1.12 Informed (Heuristic) Search Methods
    1.12.1 Introduction to Heuristics
    1.12.2 Best-First Search (Cohen & Feigenbaum, 1982)
    1.12.3 Minimising the Estimated Cost to Reach a Goal: Greedy Search (Hart et al., 1968)
    1.12.4 Minimising the Total Path Cost: A* Search (Hart et al., 1968)
    1.12.5 Iterative Deepening A* Search (IDA*) (Korf, 1985a, 1985b)
    1.12.6 SMA* Search (Russell, 1992)
    1.12.7 Searching and Learning A* (SLA*) (Zamani, 1995)

CHAPTER 2: THE DEVELOPMENT OF THE THREE-MACHINE HEURISTIC FLOW-SHOP ALGORITHM
2.1 Introduction
    2.1.1 The State Transition Process
    2.1.2 Heuristic Estimates and Backtracking
2.2 The Algorithm for the Two-Machine Flow-Shop Problem
    2.2.1 The Algorithm (Version 1)
    2.2.2 Search Path Diagrams and Panels
    2.2.3 Calculating Heuristic Estimates at Nodes on a Search Path
    2.2.4 Example (Table 2.3(b): Two Machines, Three Jobs)
2.3 Improving Minimum Heuristic Estimates when the Search has Commenced
2.4 The Three-Machine Problem

CHAPTER 3: AN ANALYSIS AND COMPARISON OF PLAUSIBLE HEURISTIC FUNCTIONS
3.1 Introduction
3.2 Definitions and Notations
3.3 Plausible Heuristics
3.4 The Admissibility of H1, H2 and H3
3.5 The Relative Magnitudes of H1, H2, H3
3.6 Guidelines for Selecting the Best Admissible Heuristic
3.7 Modifications to Procedure (Version 1)
3.8 The Algorithm (Final Version)
3.9 An Example

CHAPTER 4: RELATED FINDINGS AND DISCUSSION
4.1 Introduction
4.2 Comparison with the Genetic Algorithm
4.3 Memory Management
    4.3.1 Pruning
    4.3.2 Memory-Bounded Approach
4.4 Multiple Optimal Solutions
4.5 Scalability
    4.5.1 Four-Machine Flow-Shop Problems
    4.5.2 Five-Machine Flow-Shop Problems
4.6 Possible Applications of the Three-Machine Flow-Shop Algorithm
    4.6.1 Manufacturing Production Line and Three-Cell Robot Processing
    4.6.2 Data Communication and Transportation
    4.6.3 Three-CPU Multiprocessor Computer Systems

CHAPTER 5: CONCLUSIONS
REFERENCES
BIBLIOGRAPHY
APPENDICES
A1: Using Heuristic Function H1
A2: Using Heuristic Functions H2, H3
    A2.1: Using H2
    A2.2: Using H3
A3: Finding Multiple Optimal Solutions
A4: Solving Four-Machine Flow-Shop Problem
A5: Solving Five-Machine Flow-Shop Problem

LIST OF FIGURES
Figure 1.1: Makespan of problem in Table 1.1 in natural sequence
Figure 1.2: Makespan of the optimal sequence
Figure 2.1: Time Line for the State Transition Process
Figure 3.1: Sequence of n jobs
Figure 4.1: Graph for Number of Jobs (n) Versus Time (t)
Figure 4.2: Unproductive nodes N1, N2 and N3

LIST OF TABLES
Table 1.1: A two-machine flow-shop problem
Table 1.2: Optimal sequence of problem in Table 1.1
Table 1.3: A five-job three-machine flow-shop problem
Table 1.4: Comparison of the uninformed search methods
Table 2.1: Two Machines, Three Jobs
Table 2.2: State Transition Process
Table 2.3 (a): Two Machines, n Jobs
Table 2.3 (b): Two Machines, 3 Jobs
Table 2.4: Comparison of Algorithm (Version 1) and Algorithm (Version 2)
Table 2.5 (a): Three Machines, n Jobs
Table 2.5 (b): Three Machines, 3 Jobs
Table 3.1: Three machines and n jobs
Table 3.2: Three Machines, Three Jobs
Table 3.3 (a): Best Admissible Heuristic for c_k = c_m
Table 3.3 (b): Best Admissible Heuristic(s) for c_k > c_m
Table 3.4: Comparison of H1, H2, H3 when c_k > c_m, R > S
Table 4.1: Behaviour of Genetic Algorithm (Mulkens, 1994)
Table 4.2: Three Machines, Three Jobs
Table 4.3 (a): Four Machines, n Jobs
Table 4.3 (b): Four Machines, 3 Jobs
Table 4.4 (a): Five Machines, n Jobs
Table 4.4 (b): Five Machines, 3 Jobs

LIST OF SEARCH PATH DIAGRAMS
Search Path Diagram 2.1
Search Path Diagram 2.2: Continued from Search Path Diagram 2.1
Search Path Diagram 2.3: Continued from Search Path Diagram 2.2
Search Path Diagram 2.4
Search Path Diagram 2.5: Continued from Search Path Diagram 2.4

LIST OF PANELS
Panel 2.1
Panel 2.2: Continued from Panel 2.1
Panel 2.3: Continued from Panel 2.2
Panel 2.4
Panel 2.5: Continued from Panel 2.4
Chapter 1: Background Research of Flow-shop Scheduling
1.1 Introduction
This chapter is divided into several topics. We first discuss the basic concepts of scheduling with the intention to define some of the special terms used in the field. Then, we discuss the two-machine flow-shop problem and its solution, and the three-machine flow-shop problem with its developments so far. Finally, we consider developments in modern heuristic algorithms for searching, especially the development of the admissible heuristic A* algorithm and some of the modifications involved in making the A* algorithm more efficient.
1.2 General Overview of Shop Scheduling
Scheduling is the allocation of resources over time to perform a collection of tasks. It is a decision-making function that helps the process of determining a schedule. The theory of scheduling is a collection of principles, models, techniques and logical conclusions that provide insight into the scheduling function (Baker, 1974).

Shop scheduling is a special term given to any shop floor scheduling dealing with jobs that arrive in a shop and that need to be arranged for maximum utilisation of the available machines within the shop. This type of scheduling is not restricted to shop floor scheduling; it is also applicable to other types of scheduling problems. It is common practice in any shop scheduling that resources are usually referred to as "machines" and basic task modules are referred to as "jobs". Jobs may consist of several elementary tasks that are called "operations" (Baker, 1974).
1.3 Classification of Shop Scheduling Problems
A specific shop scheduling problem is described by four types of information:
1) The jobs and operations to be processed.
2) The number and types of machines that comprise the shop.
3) Disciplines that restrict the manner in which assignments of jobs to machines can be made.
4) The criteria by which a schedule will be evaluated.
Shop scheduling problems differ in the number of jobs that are to be processed, the manner in which the jobs arrive at the shop, and the order in which the different machines are used for the operations of the individual jobs. The nature of the job arrivals provides the distinction between static and dynamic problems. In a static problem a certain number of jobs arrive simultaneously in a shop that is idle and immediately available for work. No further jobs will arrive, so attention can be focused on scheduling this completely known and available set of jobs. In a dynamic problem the shop is a continuous process. Jobs arrive intermittently, at times that are predictable only in a statistical sense, and arrivals will continue indefinitely into the future. Due to the nature of the problems, entirely different methods are required to analyse questions of sequence in these two problems (Conway, 1967).
The order in which the machines are used in the operations of individual jobs determines whether a shop is a flow-shop or a job-shop. A flow-shop is one in which all the jobs follow essentially the same path from one machine to another. This means that in a flow-shop there is a numbering of the machines such that the machine number for operation y is greater than the machine number for operation x of the same job, if x precedes y (Johnson, 1954). On the other hand, the randomly routed job-shop has no common pattern of movement from machine to machine. Each machine number is equally likely to appear for each operation of each job. Both the flow-shop and the job-shop are considered extreme views of shop scheduling. Most real shop scheduling falls somewhere between these two limits, but most of the research in shop scheduling has assumed one of the two extreme cases as a starting point.
A common practice in shop scheduling discussion is that when the type of shop is not mentioned the assumption will always be job-shop unless stated otherwise. For simplicity in identifying the different shop scheduling problems, a common notation has been developed as a standard for describing the problem. A four-parameter notation is used to identify individual shop scheduling problems, written as A/B/C/D (Conway, 1967):

A is used to describe the job-arrival process. For dynamic problems, A will identify the probability distribution of the times between arrivals. For static problems, it will specify the number of jobs, assumed to arrive simultaneously unless stated otherwise. When n is used it denotes an arbitrary, but finite, number of jobs in a static problem.

B is used to describe the number of machines in the shop. When m is used it denotes an arbitrary number of machines.

C is used to describe the flow pattern in the shop. The principal symbols used are F for the flow-shop limiting case, R for the randomly routed job-shop limiting case, and G for a completely general or arbitrary flow pattern. In the case of a shop with a single machine there is no flow pattern, and the third parameter of the description is omitted.

D is used to describe the criterion by which the schedule is to be evaluated.
Using this notation Johnson's problem (Johnson, 1954) is described as n/2/F/Fmax: sequence an arbitrary number of jobs in a two-machine flow-shop so as to minimise the maximum flow-time (or schedule time).

The general job-shop problem is described as n/m/G/Fmax: schedule n jobs in an arbitrary shop of m machines so that the last job is finished as soon as possible.

The general three-machine flow-shop problem is described as n/3/F/Fmax: sequence an arbitrary number of jobs in a three-machine flow-shop so as to minimise the maximum flow-time (or schedule time).
1.4 The Flow-Shop Problem
Consider a special case of shop scheduling where the machines in the shop follow a natural order. In most cases there are two or more machines, and at least some of the jobs have a sequence of operations to be performed before they are completed and leave the shop. In such cases the collection of machines is said to constitute a flow-shop and the machines are numbered in such a way that, for every job to be considered, operation J is performed on a lower numbered machine than operation K where J < K, or J precedes K (Johnson, 1954). This type of structure is called a linear precedence structure.

An obvious example of such a shop is an assembly line, where the workers or workstations represent the machines. Any group of machines served by a unidirectional, non-cyclic conveyor would be considered a flow-shop, and a strict example would be a case in which all materials handling was accomplished by the conveyor. The only requirement is that all movement between machines within the shop be in a uniform direction.
We now discuss the flow-shop model in more detail. Consider a shop that contains m different machines, where each job consists of m operations and each operation requires a different machine. As mentioned before, the flow-shop contains a natural machine order. Hence, it is possible to number the machines so that if the jth operation of a job precedes its kth operation, then the machine required by the jth operation has a lower number than the machine required by the kth operation. The machines in a flow-shop are numbered 1, 2, ..., m; and the operations of job i are correspondingly numbered (i, 1), (i, 2), ..., (i, m). Each job can be treated as if it had exactly m operations (Johnson, 1954).
The conditions that characterise a flow-shop problem are:
1. A set of n multiple-operation jobs is available for processing at time zero. (Each job requires m operations and each operation requires a different machine.)
2. Setup times for the operations are sequence independent and are included in processing times. (Setup time is the time taken before the first machine starts to work on the first job, and processing time is the time taken for an operation to be completed on a machine.)
3. Job descriptors are known in advance. (Job descriptors are used to describe the sequence of machines that are needed to carry out each operation in order to complete a job.)
4. m different machines are continuously available.
5. Individual operations are not preemptable. In other words, operations must follow the order of the machines to complete the job.
In any flow-shop problem, there are n! different job sequences possible for each machine, and therefore (n!)^m different schedules to be examined. Of the (n!)^m different schedules, there is at least one schedule which will produce the optimal solution: the minimum makespan or minimum of the maximum flow-time. In some instances, there may be more than one schedule that will produce an optimal solution. The aim of solving the flow-shop problem is to find one of the optimal solutions within the (n!)^m different possible schedules.
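To make the objective concrete, the following Python sketch (an illustration added here; the function names and data layout are assumptions, not part of the thesis's own procedures) evaluates the makespan of one schedule and finds an optimum by brute force. It searches only the n! permutation schedules (the same job order on every machine) rather than all (n!)^m schedules, which is sufficient when m <= 3, as noted again in Section 1.8.2:

    from itertools import permutations

    def makespan(times, seq):
        # times[j][k] = processing time of job j on machine k; seq = job order
        m = len(times[0])
        finish = [0] * m    # completion time of the latest operation on each machine
        for j in seq:
            for k in range(m):
                # this job's completion time on the previous machine, if any
                prev = finish[k - 1] if k > 0 else 0
                finish[k] = max(finish[k], prev) + times[j][k]
        return finish[-1]

    def brute_force_optimum(times):
        # exhaustive search over all n! permutation schedules
        jobs = range(len(times))
        return min(permutations(jobs), key=lambda seq: makespan(times, seq))

Even this restricted enumeration grows factorially with n, which illustrates why the exhaustive approach becomes unusable beyond a handful of jobs.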
1.5 Two-Machine Flow-Shop Problem
The two-machine flow-shop problem (n/2/F/Fmax) with the objective of minimising makespan (maximum flow-time) is also known as Johnson's problem. As the name of the problem suggests, the number of machines in the shop is restricted to only two. The two different machines are continuously available. The number of jobs to be processed by the two machines is unlimited but the number must be known at time zero before the processing commences. Each job will have two operations. Each operation will be processed by a different machine and the two operations are not preemptable; they must follow from machine 1 to machine 2.
1.6 The Johnson's Two-Machine Flow-Shop Solution (Johnson, 1954)
In 1954, Johnson developed a method to solve the two-machine flow-shop problem. The results originally obtained by Johnson are now standard fundamentals in the theory of scheduling. In the formulation of this problem, job j is characterised by processing time t_j1, required on machine 1, and t_j2, required on machine 2 after the operation on machine 1 is completed. An optimal sequence can be characterised by the following rule for ordering pairs of jobs.

Johnson's rule states that job i precedes job j in an optimal sequence if,

min{t_i1, t_j2} ≤ min{t_i2, t_j1}.

An optimal sequence can be constructed using the above result. The positions in sequence can be filled by a one-pass mechanism that identifies, at each stage, a job that should fill either the first or the last available position.
The following is the implementation of Johnson's rule:
1. Find min_j {t_j1, t_j2}.
2. If the required processing time is the least for machine 1, place the associated job in the first available position in sequence. Go to step 4.
3. If the required processing time is the least for machine 2, place the associated job in the last available position in sequence. Go to step 4.
4. Remove the assigned job from consideration and return to step 1 until all positions in the sequence are filled.
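The 4-step procedure translates directly into code. The following Python sketch (an illustrative implementation added here, not taken from the thesis) builds the sequence from both ends:

    def johnson(times):
        # times[j] = (tj1, tj2); returns a job sequence minimising makespan
        n = len(times)
        seq = [None] * n
        front, back = 0, n - 1
        remaining = set(range(n))
        while remaining:
            # Step 1: smallest remaining processing time on either machine
            j, k = min(((j, k) for j in remaining for k in (0, 1)),
                       key=lambda jk: times[jk[0]][jk[1]])
            if k == 0:               # Step 2: minimum on machine 1 -> first available position
                seq[front] = j
                front += 1
            else:                    # Step 3: minimum on machine 2 -> last available position
                seq[back] = j
                back -= 1
            remaining.remove(j)      # Step 4: repeat until all positions are filled
        return seq

Applied to the data of Table 1.1 below (with jobs renumbered from 0), johnson([(3, 6), (5, 2), (1, 2), (6, 6), (7, 5)]) returns the sequence 3-1-4-5-2 with makespan 24, agreeing with the worked example that follows.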
To illustrate the algorithm, consider the five-job problem shown below:
Table 1.1: A two-machine flow-shop problem

Job j :  1   2   3   4   5
t_j1  :  3   5   1   6   7
t_j2  :  6   2   2   6   5
Before applying Johnson's rule, the makespan according to the natural sequence of jobs 1-2-3-4-5 is 27, as shown in Figure 1.1 below:

[Gantt chart omitted]
Figure 1.1: Makespan of problem in Table 1.1 in natural sequence
Table 1.2: Optimal sequence of problem in Table 1.1

Stage | Unscheduled Jobs | Minimum t_jk | Position Assignment | Partial Schedule
  1   | 1, 2, 3, 4, 5    | t_31         | 3 = [1]             | 3 x x x x
  2   | 1, 2, 4, 5       | t_22         | 2 = [5]             | 3 x x x 2
  3   | 1, 4, 5          | t_11         | 1 = [2]             | 3-1 x x 2
  4   | 4, 5             | t_52         | 5 = [4]             | 3-1 x 5-2
  5   | 4                | t_41 = t_42  | 4 = [3]             | 3-1-4-5-2
The above worksheet shows how an optimal sequence is constructed in five stages using the 4-step implementation of Johnson's rule. At each stage, the minimum processing time among unscheduled jobs must be identified. Then step 2 or step 3 assigns one more position in sequence, and the process is repeated. The sequence that emerges is 3-1-4-5-2. The schedule produced by the algorithm has a makespan of 24 as shown in Figure 1.2 below.
[Gantt chart omitted]
Figure 1.2: Makespan of the optimal sequence
Johnson's algorithm is the standard method for finding the optimal sequence that will produce minimum makespan for the two-machine flow-shop problem n/2/F/Fmax.
1.7 Three-Machine Flow-Shop Problem
The three-machine flow-shop problem (n/3/F/Fmax) is similar to the two-machine flow-shop problem (n/2/F/Fmax). The objective is to find the minimum makespan (minimum of the maximum flow-time). As suggested by the name of the problem, the number of machines in the shop is restricted to three. The three different machines are continuously available. The number of jobs to be processed by the three machines is unlimited but the number must be known at time zero before the processing commences. Each job will have three operations. Each operation will be processed by a different machine and the three operations are not preemptable; they must follow from machine 1 to machine 2 and then to machine 3. Although the three-machine flow-shop problem looks only slightly more complicated than the two-machine flow-shop problem, the complexity of finding the optimal sequence is of the order (n!)^3 as compared to (n!)^2 in the two-machine flow-shop problem. This increase in complexity is characteristic of combinatorial problems and results in an explosive growth of the solution space and the computation time needed to find an optimal solution.
1.8 Some Current Findings Regarding the Three-Machine Flow-Shop Problem
1.8.1 Extensions of Johnson's Rule to Three Machines (Johnson, 1954)
Johnson's rule can find the optimal sequence for the two-machine flow-shop problem. Unfortunately, it is not possible to generalise Johnson's two-machine rule to the three-machine flow-shop problem. In his original presentation Johnson showed that a generalisation is only possible if the second machine is "dominated", when no bottleneck could possibly occur on the second machine. This special case does not happen often in real shop problems. The rules become:

1. If min_k {t_k1} ≥ max_k {t_k2}, then job i precedes job j in an optimal schedule provided

min{t_i1 + t_i2, t_j2 + t_j3} ≤ min{t_i2 + t_i3, t_j1 + t_j2}.

2. If min_k {t_k3} ≥ max_k {t_k2}, then job i precedes job j in an optimal schedule provided

min{t_i1 + t_i2, t_j2 + t_j3} ≤ min{t_i2 + t_i3, t_j1 + t_j2}.

In terms of applying the 4-step implementation of Johnson's rule as described in the two-machine problem, the first step is to find the minimum in the form,

min_j {t_j1 + t_j2, t_j2 + t_j3}.

This new form has transformed the three machines into two "compound" machines, where t_j1 + t_j2 relates to the first machine and t_j2 + t_j3 relates to the second machine. The rest of the steps will be the same as stated for Johnson's two-machine rule. This three-machine flow-shop solution is only valid when the second machine is "dominated" and it is not a general solution that can apply to all three-machine flow-shop problems.

Johnson also found that in the three-machine case, if the application of the 4-step implementation of Johnson's rule yields the same optimal sequence for the two-machine sub-problem represented by the set {t_j1, t_j2}, where job i precedes job j if

min{t_i1, t_j2} ≤ min{t_i2, t_j1},

and for the separate two-machine sub-problem represented by the set {t_j2, t_j3}, where job i precedes job j if

min{t_i2, t_j3} ≤ min{t_i3, t_j2},

then the sequence is optimal for the full three-machine problem.
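For the dominated-machine special case, the reduction to two compound machines can be sketched as follows (an illustration added here, reusing the johnson() function from Section 1.6; the dominance test mirrors rules 1 and 2 above):

    def johnson_three_machine_special(times):
        # times[j] = (tj1, tj2, tj3); valid only when machine 2 is dominated
        t1 = [t[0] for t in times]
        t2 = [t[1] for t in times]
        t3 = [t[2] for t in times]
        if not (min(t1) >= max(t2) or min(t3) >= max(t2)):
            raise ValueError("machine 2 is not dominated; the reduction does not apply")
        # two compound machines: tj1 + tj2 and tj2 + tj3
        compound = [(a + b, b + c) for (a, b, c) in times]
        return johnson(compound)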
1.8.2 Branch and Bound Algorithms for Makespan Problems
Except for the very special cases mentioned in the previous section, the makespan problem with three machines (m = 3) cannot be solved more efficiently than by controlled enumeration. Nonetheless, branch and bound methods have been employed with some success. For some larger flow-shop problems, the same branch and bound approaches have also been used to find an optimal sequence. Even though sequence schedules are not a dominant set for makespan problems when m ≥ 4, it is intuitively plausible that the best sequence schedule should be at least very close to the true optimum.
The basic branch and bound procedure was developed by Ignall and Schrage (1965), and independently by Lomnicki (1965). To illustrate the procedure for the three-machine flow-shop problem, where m = 3, let d' denote the set of jobs that are not contained in the partial sequence d. For a given partial sequence d, let:

q1 = the latest completion time on machine 1 among jobs in d (hence the earliest time at which some job j ∈ d' could begin processing),
q2 = the latest completion time on machine 2 among jobs in d,
q3 = the latest completion time on machine 3 among jobs in d.

The amount of processing yet required on machine 1 is Σ_{j∈d'} t_j1.

Let us suppose that a particular job k is last in sequence. After job k completes on machine 1, an interval of at least (t_k2 + t_k3) must elapse before the whole schedule can possibly be complete. In the most favourable situation, the last job encounters no delay between the completion of one operation and the start of its direct successor, and the last job has the minimal sum (t_j2 + t_j3) among jobs j ∈ d'. Hence one lower bound on the makespan is,

b1 = q1 + Σ_{j∈d'} t_j1 + min_{j∈d'} {t_j2 + t_j3}.

Similar reasoning applied to the processing yet required on machine 2 yields a second lower bound,

b2 = q2 + Σ_{j∈d'} t_j2 + min_{j∈d'} {t_j3}.

Finally, a lower bound based on the processing yet required on machine 3 is,

b3 = q3 + Σ_{j∈d'} t_j3.

If we employ these calculations, the lower bound proposed by Ignall and Schrage is,

B = max {b1, b2, b3}.
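The bound B is cheap to compute at any node of the branching tree. A Python sketch (illustrative only, following the notation above; it assumes at least one unscheduled job remains):

    def ignall_schrage_bound(times, d):
        # d = partial sequence of job indices; returns B = max(b1, b2, b3)
        q1 = q2 = q3 = 0                  # completion times of d on machines 1, 2, 3
        for j in d:
            q1 = q1 + times[j][0]
            q2 = max(q1, q2) + times[j][1]
            q3 = max(q2, q3) + times[j][2]
        rest = [j for j in range(len(times)) if j not in d]   # the set d'
        b1 = q1 + sum(times[j][0] for j in rest) + min(times[j][1] + times[j][2] for j in rest)
        b2 = q2 + sum(times[j][1] for j in rest) + min(times[j][2] for j in rest)
        b3 = q3 + sum(times[j][2] for j in rest)
        return max(b1, b2, b3)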
1.8.3 Further Developments and Modifications to the Branch and Bound Algorithm
A variety of extensions and refinements have been developed for the branch and bound procedure. The major modifications are summarised below.

The first modification was a refinement made by the two original developers of the branch and bound algorithm. In their original paper, Ignall and Schrage (1965) noted that their bounds b2 and b3 could be strengthened slightly. The use of q2 in the calculation of b2 ignores the possibility that the starting time of job j on machine 2 may be constrained by commitments on machine 1. Hence, in the calculation of b2, it is possible to replace q2 with,

q2' = max {q2, q1 + min_{j∈d'} t_j1}.

Similarly, the use of q3 in the calculation of b3 ignores the possibility that the starting time of job j on machine 3 may be constrained by commitments on the earlier machines. Hence, in the calculation of b3, it is possible to replace q3 with,

q3' = max {q3, q2' + min_{j∈d'} t_j2, q1 + min_{j∈d'} (t_j1 + t_j2)}.

The form of the algorithm originally tested by Ignall and Schrage contained these refinements.
Job-based bounds were introduced by McMahon and Burton (1967). They referred to the lower bound B as a machine-based bound, in that the bound is determined from unsatisfied requirements on the three machines. A complementary approach is to use a job-based bound, which is determined from unsatisfied requirements for unscheduled jobs. Let job k represent some unscheduled job. An amount of time (t_k1 + t_k2 + t_k3) must still be added to the schedule, at or after time q1. In addition, time might also be spent processing other jobs j ∈ d' that either precede or follow job k in sequence. To construct the most favourable case for minimum makespan, let J1 denote the set of jobs j ∈ d', excluding job k, for which t_j1 ≤ t_j3, and let J2 denote the set of jobs j ∈ d', excluding job k, for which t_j1 > t_j3. Now suppose that all jobs in J1 precede job k in the sequence and that all jobs in J2 follow job k. The makespan M must satisfy,

M ≥ q1 + (t_k1 + t_k2 + t_k3) + Σ_{j∈J1} t_j1 + Σ_{j∈J2} t_j3.

But an inequality of this form could be written for each job k ∈ d'. Therefore another lower bound is,

b4 = q1 + max_{k∈d'} {t_k1 + t_k2 + t_k3 + Σ_{j∈d', j≠k} min(t_j1, t_j3)}.

Another job-based bound is developed by applying similar reasoning to commitments on machines 2 and 3, leading to a bound of,

b5 = q2' + max_{k∈d'} {t_k2 + t_k3 + Σ_{j∈d', j≠k} min(t_j2, t_j3)}.

As the Branch and Bound algorithm is primarily concerned with expanding tree nodes and limiting the search space in a search tree, the job-based bounds introduced above produce the final lower bound associated with any node in the branching tree as,

B' = max {B, b4, b5}.
An obvious observation from the above development is that B' ≥ B. This means that the combination of machine-based and job-based bounds represented by B' will lead to a more efficient search of the branching tree, in the sense that fewer nodes will be created for expansion.

The decision to use stronger bounds involves a trade-off between time spent computing bounds and time spent searching the branching tree. The McMahon-Burton contribution is representative of results that occur in other scheduling problems as well: in large combinatorial problems, the increased effort required to obtain strong bounds frequently tends to be offset by greater savings in reduced tree searching.
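A sketch of the job-based bounds (again illustrative, continuing the notation of the previous sketch; q1 and q2 are the partial-schedule completion times computed there):

    def job_based_bounds(times, d, q1, q2):
        # returns (b4, b5) of McMahon and Burton (1967); B' = max(B, b4, b5)
        rest = [j for j in range(len(times)) if j not in d]
        # refined q2': no job can start on machine 2 before q1 + min tj1
        q2r = max(q2, q1 + min(times[j][0] for j in rest))
        b4 = q1 + max(times[k][0] + times[k][1] + times[k][2] +
                      sum(min(times[j][0], times[j][2]) for j in rest if j != k)
                      for k in rest)
        b5 = q2r + max(times[k][1] + times[k][2] +
                       sum(min(times[j][1], times[j][2]) for j in rest if j != k)
                       for k in rest)
        return b4, b5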
Along with any given flow-shop problem, it is possible to construct a "reversed" problem that is equivalent with respect to makespan. The reversed problem simply assumes that the operations must be processed in the machine order 3-2-1 instead of 1-2-3. For a given set of processing times, the minimum makespan for the reversed problem is identical to that of the original problem and, of course, the optimal sequence for the reversed problem is the reverse of that for the original problem. This being the case, it is natural to ask under what conditions the reversed problem can be solved more efficiently than the original.
McMahon and Burton (1967) proposed one answer to this question, based on the concept of a dominant machine. Machine 3 is said to "dominate" machine 1 if,

Σ_{j=1}^{n} t_j3 > Σ_{j=1}^{n} t_j1,

and machine 1 is said to "dominate" machine 3 if,

Σ_{j=1}^{n} t_j1 > Σ_{j=1}^{n} t_j3.

They argued that when machine 3 is dominant, the idle time on machine 3 is largely determined somewhat earlier in the job sequence than when machine 1 is dominant. Since minimising makespan corresponds to minimising idle time on machine 3, they concluded that lower bounds tend to be more efficient when machine 3 is dominant. Therefore they proposed that the reversed problem be solved whenever machine 1 is dominant. In their experimental investigation, McMahon and Burton found this guideline to be very reliable, even though machine 1 dominant problems could occasionally be solved more rapidly than their reversed versions.
As can be seen above, the Branch and Bound algorithm requires a considerable amount of effort to find the lower and upper bounds that will help in reducing the search space in a search tree. This effort can sometimes offset the benefits that the algorithm can bring. If the lower and upper bounds are not "strong" bounds, the algorithm is reduced to no more than an efficient exhaustive search algorithm. Another difficulty with the Branch and Bound algorithm is that it has no mechanism to differentiate a local optimum from the overall optimum. A local optimum refers to the optimum within a branch of a search tree, but it is not the optimum in the whole search tree, which contains many branches.
1.8.4 Linear Programming (LP)
The success of an Operations Research technique is ultimately measured by the spread of its use as a decision-making tool. During the late 1940s, linear programming (LP) proved to be a most effective operations research tool. Its success stems from its flexibility in describing multitudes of real-life situations in the following areas: military, industry, agriculture, transportation, economics, health systems, and even behavioural and social sciences. Additionally, the availability of very efficient computer codes for solving very large LP programs is an important factor in the widespread use of the technique. Since the use of computers is a necessity for solving LP problems of any practical size, certain conventions must be observed in setting up the LP problem with the objective of reducing the adverse effect of computer round-off errors.

Linear programming is a deterministic tool, meaning that all the model parameters are assumed to be known with certainty. However, in real life, it is rare that one encounters a problem in which true certainty prevails. The LP technique compensates for this "deficiency" by providing systematic post-optimal and parametric analyses that allow the decision maker to test the sensitivity of the "static" optimum solution to discrete or continuous changes in the parameters of the model. In effect, these additional techniques add a dynamic dimension to the property of the optimum LP solution.
A special class of linear programming techniques called Integer Linear Programming has been employed to solve the three-machine flow-shop problem. The algorithm does not produce consistent results, as explained in the following section.
1.8.5 Integer Linear Programming (ILP)
Integer linear programming (ILP) essentially deals with linear programs in which some or all of the variables assume integer or discrete values. An ILP is said to be mixed or pure depending on whether some or all the variables are restricted to integer values. Although several algorithms have been developed for the flow-shop problem using ILP, none of these methods is totally reliable from a computational standpoint, particularly as the number of integer variables increases exponentially as the number of machines increases. The computational difficulty with available ILP algorithms has led users to find other methods to solve the three-machine flow-shop problem. One such attempt is to use continuous LP and then round the optimal solution (minimum makespan) to the closest feasible integer values. This method can solve some three-machine flow-shop problems. However, there is no guarantee in this case that the resulting rounded solution will satisfy the constraints. This is always true when the original ILP has one or more equality constraints. Unfortunately, the three-machine flow-shop problem always has one or more equality constraints. From the theory of linear programming, a rounded solution cannot be feasible, since it implies that the same basis can yield two distinct solutions. Consequently, ILP and LP cannot guarantee an optimal solution (minimum makespan) for three-machine flow-shop problems.
1.8.6 Heuristic Approaches for Flow-Shop Problems
The branch and bound approach and the elimination approach (modified branch and bound) discussed above have two inevitable disadvantages, which are typical of implicit enumeration methods. First, the computational requirements will be severe for large problems. Second, even for relatively small problems, there is no guarantee that the solution can be obtained quickly, since the extent of the partial enumeration depends on the data in the problem. Heuristic algorithms avoid these two drawbacks: they can obtain solutions to large problems with limited computational effort, and their computational requirements are predictable for problems of a given size. The drawback of heuristic approaches is, of course, that they do not guarantee optimality, and in some instances it may even be difficult to judge their effectiveness. The following three heuristic methods are representative of quick, sub-optimal solution techniques for flow-shop makespan problems with more than two machines.
1.8.7 Palmer Method (1965)
The first heuristic method is called the Palmer method. The guidelines of the method can be stated qualitatively as follows: give priority to jobs having the strongest tendency to progress from short times to long times in the sequence of operations. While there might be many ways of implementing this precept, Palmer proposed the calculation of a slope index, s_j, for each job where,

s_j = (m-1) t_jm + (m-3) t_j,m-1 + (m-5) t_j,m-2 + ... - (m-3) t_j2 - (m-1) t_j1.

Then a sequence schedule is constructed using the job ordering,

s_1 ≥ s_2 ≥ ... ≥ s_n.

For m = 2, Palmer's heuristic sequences the jobs in non-increasing order of (t_j2 - t_j1). This method is slightly different from Johnson's rule and will not guarantee minimum makespan.
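Palmer's index is a single weighted sum per job, so the heuristic amounts to one sort. A Python sketch (illustrative; the coefficient 2k - m - 1 for machine k reproduces the (m-1), (m-3), ... pattern above):

    def palmer_sequence(times):
        m = len(times[0])
        def slope(job):
            # machine k (1-indexed) gets coefficient 2k - m - 1
            return sum((2 * (k + 1) - m - 1) * t for k, t in enumerate(job))
        return sorted(range(len(times)), key=lambda j: slope(times[j]), reverse=True)

On the data of Table 1.3 below, this yields the sequence 3-5-4-2-1, matching the comparison in Section 1.8.9.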
1.8.8 Gupta Method (1970, 1971)
Gupta sought a transitive job ordering, also in the form s_1 ≥ s_2 ≥ ... ≥ s_n, that would produce good schedules. He noted that when Johnson's rule is optimal in the three-machine case, it is in the form s_1 ≥ s_2 ≥ ... ≥ s_n where,

s_j = e_j / min{t_j1 + t_j2, t_j2 + t_j3},

where e_j = 1 if t_j1 < t_j3, and e_j = -1 if t_j1 ≥ t_j3.

Generalising from this structure, Gupta proposed that for m > 2, the job index is,

s_j = e_j / min_{1≤k≤m-1} {t_jk + t_j,k+1},

where e_j = 1 if t_j1 < t_jm, and e_j = -1 if t_j1 ≥ t_jm.

Thereafter the jobs are sequenced according to s_1 ≥ s_2 ≥ ... ≥ s_n. Gupta compared his method with Palmer's method extensively and found that his method generated better schedules than Palmer's in a substantial majority of cases. In addition, Gupta investigated a set of other heuristics that are also based on schedule construction via transitive rules.
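A corresponding sketch of Gupta's index (illustrative; the tie case t_j1 = t_jm is resolved here as e_j = -1, following the general-m definition above):

    def gupta_sequence(times):
        m = len(times[0])
        def index(job):
            e = 1 if job[0] < job[m - 1] else -1
            return e / min(job[k] + job[k + 1] for k in range(m - 1))
        return sorted(range(len(times)), key=lambda j: index(times[j]), reverse=True)

On the data of Table 1.3 below, this yields the sequence 5-3-4-1-2, as reported in the comparison in Section 1.8.9.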
1.8.9 CDS Method (1970)
Perhaps the most significant heuristic method for makespan problems is the CDS algorithm, which was developed by Campbell, Dudek, and Smith in 1970. The strength of CDS lies in two properties. First, it uses Johnson's rule in a heuristic fashion, and second, it generally creates several schedules from which the "best" schedule is to be chosen.

The CDS algorithm corresponds to a multistage use of Johnson's rule applied to a new problem, derived from the original, with processing times t'_j1 and t'_j2. At stage 1,

t'_j1 = t_j1 and t'_j2 = t_jm.

In other words, Johnson's rule is applied to the first and mth operations and intermediate operations are ignored. At stage 2,

t'_j1 = t_j1 + t_j2 and t'_j2 = t_jm + t_j,m-1.

That is, Johnson's rule is applied to the sums of the first two and last two operation processing times. In general at stage i,

t'_j1 = Σ_{k=1}^{i} t_jk and t'_j2 = Σ_{k=1}^{i} t_j,m-k+1.

For each stage i (i = 1, 2, ..., m-1), the job sequence obtained is used to calculate a makespan for the original problem. After m-1 stages, the best makespan among the m-1 schedules is identified. It is quite common that some of the m-1 sequences may be identical. Hence, a tie situation may arise. To deal with ties at any stage, one approach might be to evaluate the makespan for all alternatives at a given stage. In the original presentation, the authors propose breaking ties between pairs of jobs using the ordering of the two jobs in the previous stage.

Campbell, Dudek, and Smith tested the CDS algorithm extensively and examined its performance as compared to Palmer's method on several problems. They found that the CDS algorithm was more effective for both small and large problems. In addition, the computer times required were of the same order of magnitude for n ≤ 20. Only in some larger problems would the question of trading off solution value for computing time arise.
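Because each stage is just Johnson's rule on compound processing times, CDS is short to express in code. The following sketch (illustrative) reuses the johnson() and makespan() functions from the earlier sketches:

    def cds_sequence(times):
        m = len(times[0])
        best = None
        for i in range(1, m):             # stages 1 .. m-1
            compound = [(sum(job[:i]), sum(job[m - i:])) for job in times]
            seq = johnson(compound)       # Johnson's rule on the compound problem
            if best is None or makespan(times, seq) < makespan(times, best):
                best = seq
        return best

On the data of Table 1.3 below, stage 1 gives 3-5-4-1-2 (makespan 35) and stage 2 gives 5-3-4-1-2 (makespan 36), so the method returns 3-5-4-1-2.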
The following is a comparison of these three heuristic methods for a five-job (j = 1, 2, 3, 4, 5) three-machine problem (t_jm, where m = 1, 2, 3) as shown in Table 1.3 below.

Table 1.3: A five-job three-machine flow-shop problem

Job j :  1   2   3   4   5
t_j1  :  6   4   3   9   5
t_j2  :  8   1   9   5   6
t_j3  :  2   1   5   8   6
Palmer's method with m = 3 sets the slope index to s_j = 2 t_j3 - 2 t_j1. As a result, s_1 = -8, s_2 = -6, s_3 = 4, s_4 = -2 and s_5 = 2. That will produce the job sequence 3-5-4-2-1, in which the makespan M = 37. Gupta's method produces a better outcome compared to Palmer's method. Gupta's method yields the job sequence 5-3-4-1-2, in which the makespan M = 36. The CDS algorithm provides the best result compared to the previous two methods. The CDS method yields the job sequence 3-5-4-1-2, in which M = 35. Although these three heuristic methods provide reasonably good approximations to the minimum makespan, they only produce sub-optimal solutions. The job sequence for the optimal solution in this case is 3-4-5-1-2 and the minimum makespan is 34 (M = 34).
1.8.10 Palmer-Based Fuzzy Flow-Shop Scheduling Algorithm (Hong & Chuang, 1999; Hong & Wang, 1999)
This algorithm represents one of the latest efforts in developing a solution for the flow-shop problem with more than two machines. The algorithm is based on the fuzzy Longest Processing Time (LPT) scheduling algorithm (Hong et al., 1998) and Palmer's (1965) algorithm. In this algorithm, the possible processing times for each job are represented by a fuzzy set. The fuzzy LPT is used to assign jobs to each machine in the flow-shop. Finally, Palmer's algorithm is used to deal with job sequencing and timing.

The fuzzy LPT scheduling algorithm does not produce an optimal solution according to Hong et al. (Hong & Chuang, 1999; Hong & Wang, 1999) and we know that Palmer's algorithm does not produce an optimal solution either. As a result, the Palmer-based fuzzy flexible flow-shop scheduling algorithm is not an optimal algorithm.
The following is a brief description of the LPT scheduling algorithm (Hong et al., 1998).

Given a set of n independent jobs (J1 to Jn), each with arbitrary processing time (t1 to tn), and a set of k homogeneous machines (m1 to mk), the LPT scheduling algorithm assigns the job with the longest execution time from among those not yet assigned to a free machine whenever this machine becomes free. For cases where there is a tie, arbitrary tie-breaking can be assumed. The following are the steps involved:

Step 1: Sort the jobs in descending order according to processing time.
Step 2: Initialise the current finishing time of each machine to zero.
Step 3: Assign the first job in the job list to the machine with the minimum finishing time.
Step 4: Set the new finishing time of the machine = the old finishing time of the machine + the processing time of the job.
Step 5: Remove the job from the job list.
Step 6: Repeat Steps 3 to 5 until the job list is empty.
Step 7: Among the finishing times of the machines, choose the longest as the final finishing time.

The finishing time produced by using the LPT scheduling algorithm is in general not minimal. Hence, the LPT scheduling algorithm is not an optimal algorithm.
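The assignment step itself is straightforward. A Python sketch (illustrative; a heap of machine finishing times makes Step 3 a constant-time lookup):

    import heapq

    def lpt_finish_time(proc_times, k):
        # proc_times = processing time of each job; k = number of homogeneous machines
        jobs = sorted(proc_times, reverse=True)   # Step 1: descending processing times
        finish = [0] * k                          # Step 2: all machines free at time zero
        heapq.heapify(finish)
        for t in jobs:                            # the loop is Step 6
            earliest = heapq.heappop(finish)      # Step 3: machine with minimum finishing time
            heapq.heappush(finish, earliest + t)  # Steps 4-5: assign the job, update the machine
        return max(finish)                        # Step 7: the longest finishing time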
1.8.11 Gupta-Based Flexible Flow-Shop Scheduling Algorithm (Hong et al., 2000)
This algorithm represents the latest effort in developing solutions for the flow-shop problem with more than two machines. It is a combination of the LPT algorithm (Hong et al., 1998) and Gupta's (1970, 1971) algorithm for solving the flow-shop problem. This is a continuing effort by Hong et al. to combine the LPT scheduling algorithm with existing flow-shop scheduling methods to solve the flow-shop problem with more than two machines. In this algorithm, the LPT scheduling algorithm is used to assign jobs to each machine in the flow-shop and Gupta's algorithm is used to deal with the job sequencing and timing. The procedure is similar to the Palmer-based fuzzy scheduling algorithm described above, except that this algorithm does not use fuzzy sets to represent the processing times of the jobs, and it uses Gupta's (1970, 1971) algorithm instead of Palmer's (1965) algorithm.

The Gupta-based flexible flow-shop scheduling algorithm produces the same outcome as the Palmer-based algorithm. Both algorithms do not produce an optimal solution (Hong et al., 2000). Like the LPT algorithm, the Gupta-based flexible flow-shop scheduling algorithm is not an optimal algorithm.
1.8.12 Flow-Shop Scheduling with Genetic Algorithm
In recent years, genetic algorithms have become a popular field for research, especially for problems that were deemed unsolvable by using conventional Operational Research methods. Scheduling of flow-shop problems is one such area where genetic algorithms are being tried as an alternative way to find solutions for the problem. The application of genetic algorithms to scheduling problems does not follow all the rules of classical genetic algorithms. Before we look at how a genetic algorithm is used in solving flow-shop problems, we review the basic ideas of the genetic algorithm.
1.8.13 Genetic Algorithm
The genetic algorithm (GA) is a global optimisation heuristic based on the principles of natural selection and population evolution (Holland, 1975). The principle is to identify high quality properties which can be combined into a new population so that the new generation of solutions is better than the previous population (Kolen and Perch, 1994). Unlike other heuristics which consider only one solution, the GA considers a population of feasible solutions. The algorithm consists of four components: selection, crossover, mutation and replacement. The following outlines the simple mechanism of the algorithm based on the four components:
    initialise population p
    evaluate fitness of population p
    repeat {
        select parents from population p based on fitness values
        perform the crossover step to generate children solutions for population (p+1)
        perform the mutation step for children solutions in population (p+1)
        evaluate fitness of population (p+1)
    } until some convergence criteria are satisfied
The population can be initialised by randomly generating feasible solutions. Each feasible solution is assigned a fitness value that is used to determine the probability of choosing it as a parent solution for the next generation. The solutions with high fitness values will be selected to "breed" with other parent solutions in the crossover step. This can be achieved by exchanging parts of one solution with another. The aim is to combine good characteristics of parent solutions to create new children solutions. Thus an important consideration in this step is to determine the crossover point, such as how to choose which edges (high quality properties) are to be passed to the children solutions. Solutions are also mutated by making a change to the children solutions. The mutation step is to ensure diversity in the population and to prevent premature convergence. Not every solution needs to perform the mutation step. A portion of the solutions can have one or more edges (high quality properties) exchanged using some assigned probability. The last step is the replacement of the current population, in which the parent generation is replaced with the new population of children solutions. The process is repeated until a convergence criterion is satisfied. This can be achieved by repeating the process for a specific number of generations or until the population does not show any further improvement.
Crossover and mutation are important steps in the genetic algorithm. Crossover ensures that better children solutions are generated in the new generation and mutation ensures that continuous improvement of the solution is achievable. These two steps constitute the strength of the genetic algorithm, which is its ability to find a global optimal solution. More importantly, the genetic algorithm conducts the search for an optimal solution based on the population, in contrast to the single feasible solution used in other optimisation techniques. This allows the genetic algorithm to take advantage of the fittest solutions by assigning them higher probability, which can result in a better solution.

It is important to find good parameter settings for the genetic algorithm to work. One of the factors is the determination of population size. The population size is important because if the population is too small, a premature convergence may occur which leads to a local optimum. On the other hand, if the population is too large, then there may be a significant increase in computation time, as there will be too many solutions that need to be considered. Other factors include the determination of a crossover point for parent solutions and a strategy for mutation. The construction of a crossover operator should not result in children solutions that are too far away from the parent solutions. Similarly, if too many edges (high quality properties) are selected during the mutation step, it will also increase computation time. This is because a very high rate of mutation may result in too much diversity. On the other hand, a very low mutation rate may result in a sub-optimal solution. Selection of a fitness function is also important. This is because an oversimplistic fitness function may lead to convergence to a local optimum.
1.8.14 Genetic Algorithm for Scheduling Problems
As mentioned above, the application of the genetic algorithm to scheduling problems does not follow all the rules of the classic genetic algorithm. When dealing with scheduling problems, we need to modify some of the rules in the genetic algorithm. The chromosome (one unit of the population) will be used to code the order of the optimisation. A simple representation, such as a vector of integers, is a sufficient and efficient representation.

The crossover operator has to be modified in order to avoid confusion with the order vectors, since classical crossover will create a vector with double order numbering. The mutation operator, for the same reason, cannot simply change one order number into another one. Hence a partial sub-list sequence operator is used.

The fitness function will be related to the ordering problem that needs to be optimised. This will often be a minimisation problem. For example, in manufacturing, the aim is to minimise the total time needed to complete a series of jobs. The fitness function will often be the computed makespan or a measurement related to it.

The computation of the fitness function will also include all the constraints of the problem. Violation of some constraints may be taken into account by giving heavy weight to those in the fitness, and hence these will not be favoured in a problem that is aimed at minimising the total time needed to complete a series of jobs.
Based on Mulkens's (1994) findings, the following genetic algorithm parameters were modified for solving the flow-shop problem:
• the population size n was chosen as twice the length of the vector to be ordered, in other words, twice the number of jobs to be scheduled.
• the initial population is made of n randomly defined order vectors.
• the crossover operator was applied between pairs of randomly selected individuals (order vectors) according to their fitness.
• the population used at every new generation was made of the n best individuals from the parent and child populations, hence not all children were kept.
• the mutation operator was used with a rate of 10%, as higher rates were found to destroy the convergence mechanism and lower rates imply the risk of missing good solutions.
• the stopping criterion is set for the population size to grow up to 10 times that of the original population, whose global fitness (i.e. the sum of the fitness values of each chromosome (one unit of the population)) is the minimum value obtained at that point in time.
When applying the genetic algorithm based on the parameters mentioned above to two-machine flow-shop problems, the solution did rapidly converge to the correct solution: the makespan is the same as the one given by Johnson's algorithm and the schedules do match the ones given by Johnson's algorithm (Mulkens, 1994).
The same algorithm applied to the three-machine flow-shop problem produces the
optimal sequence and the minimum makespan. However, the algorithm has a serious
deficiency. When the number of jobs is less than the number of machines, the
algorithm produces optimal sequence and minimum makespan with good efficiency.
29
W h e n the number of jobs is twice the number of machines, the efficiency decreases
drastically and it takes three times as much effort to produce the optimal solution a
when the number of jobs is less than the number of machines (Mulkens,1994; Nawaz
et al., 1988). As the number of jobs increases, the time to produce an optimal solutio
will be even longer and the efficiency of the algorithm continues to decline.
One disadvantage of using the genetic algorithm is the massive calculation involved
in producing each generation of the population. This coupled with the scheduling
problems which are generally computation intensive imposes much greater demand
on computational resources in order to produce a solution. The genetic algorithm is
seen to be less effective in terms of efficiency in producing a timely solution.
Overtime, the genetic algorithm might become a powerful method to find solutions
for scheduling problems when the computation and storage capabilities of the
computer increase.
1.9 Uninformed Search Methods and Informed Search Methods
So far, we have reviewed some of the most important efforts in developing solutions
for the three-machine flow-shop problem. In most cases, the solutions obtained are
close approximations to the optimal solution. Besides using the exhaustive search
method, there is so far no algorithm that can produce an optimal schedule for the
three-machine flow-shop problem. The exhaustive search method is extremely
inefficient and it will not be able to solve any three-machine flow-shop problem that
has more than 10 jobs. Consequently, we need to consider other ways to find the
optimal schedule for the three-machine flow-shop with minimum makespan.
30
In view of the combinatorial nature of most scheduling problems, the growth factor of
the solution search space is exponential. It will not be wise to search the whole
solution search space to find the optimal solution. It will be more beneficial if we
know the likelihood of where the optimal solution will be and search that portion of
the solution search space eliminating the unnecessary search wasted on areas that wil
not have an optimal solution. This quest for knowledge before conducting a search in
any solution search space has prompted the development of informed search methods
sometimes also referred to as Heuristic search methods. Most of the informed search
methods or heuristic search methods employ some form of intelligent mechanism to
guide the search within the solution search space by using information obtained prior
to commencing the search. In most cases, the prior knowledge can be derived from
known factors in the scheduling problem or from a previously unsuccessful search.
By using this prior knowledge, the search can be guided to the area where the solutio
is most likely to be and in contrast to an exhaustive search the informed search is
more efficient.
The algorithm developed in this research involves using a heuristic search method to
find the optimal solution for the three-machine flow-shop problem with minimum
makespan. Before we develop the heuristic search method used in this research, we
need to understand some of the fundamental principles in conventional uninformed
search methods. This is due to the fact that, most of the heuristic search methods
draw their foundation principles from conventional search methods. In the following
section we discuss some of the common principles involved in search space
formulation, uninformed search methods and informed search methods or heuristic
search methods.
31
1.10
Formulation of Search Space for Scheduling Problems
Most of the existing search methods are based on state-space search. There are two
broad categories of state-space search methods, uninformed search methods and
informed search methods, also referred to as heuristic search methods. Each of these
methods need to represent the problem with a state-space representation. State-space
representation splits a problem into subsets of individual sub-problems, which
represent possible candidates to lead to a solution. Each subset of the problem is
called a state. The following section explains the state-space representation in detail.
1.10.1 State-Space Problem Representations
(Russell & Norvig, 1995)
A problem is a collection of information that the method or algorithm will act upon.
For a single-state problem, the basic elements of a problem definition are the states
and actions. The following defines some of the important terms and criteria related to
state-space problem formulation:
• the initial state is the state that the method or algorithm knows itself to be in.
• the set of possible actions available to the method and algorithm.
• an operator is used to denote the description of an action in terms of which state
will be reached by carrying out an action in a particular state.
• A path in the state space is simply any sequence of actions leading from one state
to another.
In other words, the set of all states reachable from the initial state by any sequence o
actions following a path is defined as the state space of the problem (Russell, 1995).
32
It is meaningless to travel from one state to another state without any guidelines on
when the travelling will stop. Therefore, there must be a goal state to determine the
outcome of the problem. Thus we require:
• A goal test, which the method or algorithm can apply to a single state description
to determine if it is a goal state. Sometimes there are more than one possible goal
state, and the test has to ensure that we have reached one of them. In some
instances, the goal can be specified as an abstract property rather than an explicitly
enumerated set of states. As an example, in chess, the goal is to reach a state called
"checkmate," where the opponent's king can be captured on the next move no
matter what the opponent does.
In a practical problem, it may be the case that one solution is preferable to another,
even though they both reach the goal and solve the problem. In a state-space problem
one way to differentiate the quality of the solution is to examine the path cost.
• A path cost function is a function that assigns a cost to a path. The cost of a path
the sum of the costs of the individual actions along the path. The path cost
function is often denoted by g in a state-space problem.
In summary, the initial state, operator set, goal test, and path cost function define a
state-space problem. The output of a search algorithm is a solution. The solution is a
path from the initial state to a state that satisfies the goal test. The state that sa
the goal test is also known as the goal state.
33
1.10.2
Criteria to Evaluate the Effectiveness of a State-Space
Search Method
The effectiveness of a state-space search can be measured in three ways:
• Does it find a solution at all?
• Is it a good solution (one with a low path cost)?
• What is the search cost associated with the time and memory required to find a
solution?
The total cost of the search is the sum of the path cost and the search cost. The sear
cost is the amount of work you carry out before interacting with the search space, it
also called the offline cost. The path cost is called the online cost (Russell & Norvi
1995).
There are four criteria that are used to evaluate a search method (Russell & Norvig,
1995):
• Completeness: is used to measure whether a method can guarantee to find a
solution when there is one.
• Time complexity: is used to measure how long a method takes to find a solution.
• Space complexity: is used to measure how much memory a method needs to
perform the search.
• Optimality: is used to measure whether a method finds the highest-quality solution
when there are several different solutions.
In the following section, we describe some of the well-known state-space search
methods that have been developed in the last thirty years. The evaluation criteria
34
mentioned above will also be used to evaluate some of those methods. The first
category is uninformed search methods.
1.11 Uninformed Search Methods (Blind Search)
Uninformed search methods are designed to search recursively with little information
generated from the search itself to help in guiding the search. Most uninformed
search has a systematic routine to follow in regards to how the search should be
carried out. The search routine will always be the same regardless of whether it is a
complex or a simple problem. In most cases, uninformed search methods can be very
efficient if the problem is restricted to a non-combinatorial problem.
We consider some of the well-known uninformed search methods.
1.11.1 Breadth-First Search (Cohen & Faigenbaum, 1982)
According to this method, the root node of the search tree is expanded first, then all
the nodes generated by the root node are expanded next, and then their successors.
The process will be repeated until there are no more nodes to expand. In general, all
the nodes at depth d in the search tree are expanded before the nodes at depth d + 1.
Breadth-first search is a very systematic search method because it considers all the
paths of length 1 first, then all those of length 2 and so on. If there is a solution,
breadth-first search is guaranteed to find it, and if there are several solutions, bre
first search will always find the first goal state with the shallowest depth. In terms
the four evaluation criteria, breadth-first search is complete, and is optimal provide
the path cost is a non-decreasing function of the depth of the node, that is, when all
operators have the same cost.
35
Breadth-first search is not always the method to use when w e consider the amount of
time and memory it takes to complete a search. Let us consider a hypothetical state
space where every state can be expanded to yield b new states. The branching factor
of these states (and of the search tree) is b. The root of the search tree generate
nodes at the first level, each of which generates b more nodes in the second level.
a result, the total number of nodes in the second level is b2. Each of these nodes
continues to generate b more nodes, yielding b3 nodes at the third level and this
expansion continues on until the last level of the search tree. Now consider that t
solution for this problem has a path length of J. Then the maximum number of nodes
expanded before finding a solution is
l+b + b2+b3+.... + bd
This is the maximum number, but the solution could be found at any point on the dth
level. In the best case, the number could be smaller. The space complexity of the
n=d
breadth-first search is ^b" nodes. The time complexity for breadth-first search is
0(bd).
1.11.2 Uniform Cost Search (Dijkstra, 1959)
Breadth-first search finds the shallowest goal state, but that may not be the least
solution for a general path cost function. Uniform cost search modifies the breadth
first strategy by always expanding the lowest-cost node as measured by the path cos
g(n), rather than the lowest-depth node.
If the cost of the path never decreases as we travel along the path, then the first
solution that is found is guaranteed to be the cheapest solution. This is because,
36
there were a cheaper path that was a solution, it would have been expandedfirstand
would have been found earlier. Uniform cost search finds the cheapest solution
provided the cost of a path never decreases as we travel along the path.
1.11.3 Depth-First Search (Cohen & Faigenbaum, 1982)
The Depth-first search method always expands one of the nodes at the deepest level of
the tree. When the search hits a dead end at a non-goal node with no expansion, then
the search will go back and expand nodes at the previous (shallower) levels.
Depth-first search has very modest memory requirements. It needs to store only a
single path from the root to a leaf node, with the remaining unexpanded nodes along
the path. For a search tree with branching factor of b and the maximum depth of m,
depth-first search requires the storage of only bm nodes, in contrast to breadth-first
search which in the case where the shallowest goal is at depth d needs to store bd
nodes.
The time complexity for depth-first search is 0(bm). For problems that have very
many solutions, depth-first may actually be faster than breadth-first because it has
good chance of finding a solution after exploring only a small portion of the whole
search space.
The drawback of depth-first search is that it can get stuck going down the wrong path.
Many problems have very deep or even infinite search trees. If depth-first search has
chosen one of these deep search trees or an infinite search tree from the root node, t
search will always continue downward without backing up, even when a shallow
solution exists. When dealing with these kind of problems, depth-first search will
either get stuck in an infinite loop and never return a solution, or it may eventually
37
find a solution path that is longer than the optimal solution. A s a result, the depth-first
search method is incomplete and it is not optimal. The depth-first search method
should be avoided for search trees with large or infinite maximum depths.
1.11.4 Depth-Limited Search (Cohen & Faigenbaum, 1982)
The Depth-limited search method avoids the difficulty of depth-first search by
imposing a "cut-off' on the maximum depth of a path. This cut-off can be
implemented with a special depth-limited search algorithm, or by using the general
search algorithm with operators that keep track of the depth. With this new operator
set, the algorithm guarantees to find the solution if it exists, but it cannot guarante
find the cheapest solution first. Hence, depth-limited search is complete but not
optimal. If for any reason, the depth limit that was chosen is too small then depthlimited search is not even complete because it may never find the solution. The time
and space complexity of depth-limited search is similar to depth-first search. Let / be
the depth limit, then the time complexity of depth-limited search is 0(b) and the
space complexity is 0(b).
1.11.5 Iterative Deepening Search (Slate & Atkin, 1977)
The difficult task of depth-limited search is picking a good limit. However, for most
problems, we will not know a good depth limit until we have solved the problem.
Iterative deepening search is a method that will not choose a limit but will try all
possible depth limits: starting with depth 0, then depth 1, then depth 2 and so on.
This algorithm has combined the benefits of both the depth-first and breadth-first
search. Like the breadth-first search, the iterative deepening search is complete and
optimal but it only requires the memory storage of depth-first search. The order of
38
expansion of nodes in the search tree is similar to breadth-first search, except that
some nodes are expanded multiple times. For this matter, iterative deepening search
may seem wasteful because many nodes are expanded multiple times. However, for
most problems, the overhead of this multiple expansion is actually quite small. The
reason is that, in an exponential search tree, almost all of the nodes are in the bott
level, so it does not matter much that the upper levels are expanded multiple times. I
an iterative deepening search, the nodes on the bottom level are expanded once, those
on the next to bottom level are expanded twice, and continue up to the root of the
search tree, which is expanded d+\ times. So the total number of expansions in an
iterative deepening search is
(d+ 1)1 + (d)b + (d-l)b2 +...+ 3bd'2 + 2bd-! + \bd
The higher the branching factor, the lower the overhead of repeatedly expanded
nodes, but even when the branching factor is 2, iterative deepening search only takes
about twice as long as a complete breadth-first search. This shows that the time
complexity of iterative deepening is still 0(bd), and the space complexity is 0(bd). In
general, iterative deepening is the preferred search method when the problem has a
large search space and the depth of the solution is unknown.
1.11.6 Bi-Directional Search (Pohl, 1969,1971)
The idea behind bi-directional search is to simultaneously search both forward from
the initial state and backward from the goal state, and stop when the two searches
meet. For problems where the branching factor is b in both directions, bi-directional
search can make a big difference. If we assume as usual that there is a solution of
depth d, then the solution will be found in 0(bd/2) steps, because the forward and
39
backward searches each have to go only half way. This is a very efficient search
method compared to those methods mentioned before. But, several issues must be
addressed before the algorithm can be implemented:
• We must define the predecessors of a node n to be all those nodes that have n as a
successor. Searching backwards means we must generate predecessors
successively starting from the goal node.
• When all operators are reversible, the predecessor and successor sets are identical
for some problems. As a result, calculating predecessors can be very difficult.
• If there are many possible goal states, then we may apply a predecessor function to
the state set. If we only have a description of the set, it may be possible to derive
the possible descriptions out of sets of states that would generate the goal set, but
this is very difficult to accomplish.
• There must be an efficient way to check each new node to see if it already appears
in the search tree of the other half of the search.
• Before the search takes place, a decision needs to be made as to what kind of
search is going to take place in each half.
The bi-directional search method has time complexity of 0(b ). In order for the two
searches to meet, the nodes of at least one of them must be retained in memory. This
A/0
means that the space complexity of uninformed bi-directional search is 0(b ).
Although the bi-directional search method is very efficient, it is very difficult to m
all the criteria needed to set up the search. This is especially true, if the depth of
search tree is unknown and the goal state is an abstract property rather than an
explicitly enumerated state.
40
1.11.7
Comparison of Uninformed Search Methods
(Russell & Norvig, 1995)
Table 1-4 provides a comparison of all the uninformed search methods described
previously. The comparison is based on the four evaluation criteria for state-space
search methods, with b as the branching factor, d the depth of solution, m the
maximum depth of the search tree and / the depth limit.
Table 1.4: Comparison of the uninformed search methods
Criterion
Complete?
Breadthfirst
Yes
UniformCost
Yes
Depthfirst
No
Depthlimited
Yes,
Iterative Bidirection
deepening
Yes
Yes
if l>d
d
d
m
b>
bd
bm
Time
b
Space
bd
bd
bm
bl
bd
bd/2
Yes
Yes
No
No
Yes
Yes
Optimal?
1.12
b
b
Informed (Heuristic) Search Methods
1.12.1 Introduction to Heuristics
Heuristics are commonly referred to as the criteria, methods or principles for decidi
the most effective course of action among several promising alternatives in order to
achieve some goals (Pearl, 1984). Heuristics are a compromise between striving for
simple criteria and accuracy by eliminating good and bad choices. Heuristics play an
effective role in problems that require the evaluation of a large number of possibili
to determine an exact solution. The time required to find an exact solution using
conventional mathematical methods is often more than a lifetime. Heuristics are able
to solve such problems by indicating a way to reduce the number of evaluations and to
41
obtain solutions within reasonable time limits. Heuristics are able to provide a simple
means of indicating which is the preferred action among several courses of action. In
most cases, heuristics do identify the most effective course of action but there is no
guarantee that is always the case. In other words, heuristics do not always produce an
optimal solution (Pearl, 1984).
The following section describes some of the well-known informed or heuristics search
methods.
1.12.2 Best-First Search (Cohen & Faigenbaum, 1982)
The name "best-first search" is not an accurate description for the method. In fact, if
we could actually expand the best node first, it would be a straight march to the goal.
In the "best-first search", all we are doing is choosing the node that "appears" to be
best according to an evaluation function. If the evaluation function is accurate, then
this will be the best node. If the evaluation function is inaccurate, then the search ca
go astray.
Similar to the General-Search algorithm with a whole family of different ordering
functions, there is also a whole family of Best-First-Search algorithms with different
evaluation functions. These algorithms aim to find low-cost solutions. Like the
depth-first search using the path cost g to decide which path to extend, the best-first
search methods typically use some estimate of the cost of the solution and try to
minimise it. However, this does not direct the search toward the goal. In order to
guide the search toward the goal, the measure of the cost must incorporate some
estimate of the cost of the path from a state to the closest goal state.
42
W e look at two basic approaches that will guide the search toward the goal state. The
first approach tries to expand the node closest to the goal. The second approach tries
to expand the node on the least-cost solution path.
1.12.3 Minimising the Estimated Cost to Reach a Goal: Greedy
Search (Hart et al, 1968)
One of the simplest best-first search strategies is to minimise the estimated cost to
reach the goal. In short, the node whose state is judged to be closest to the goal state
is always expanded first. Unfortunately for most problems, the cost of reaching the
goal from a particular state can be estimated but cannot be determined exactly. A
function that calculates such cost estimates is called a heuristic function, and is
usually denoted with the letter h:
h(n) = estimated cost of the cheapest path from the state at node n to a goal state.
Greedy search is the name given to the best-first search that uses h to select the next
node to expand. In this case, h can be any function. The requirement for n to be the
goal is h(n) = 0.
Greedy search is similar to depth-first search in the way that it prefers to follow a
single path all the way to the goal, but will only back up when it hits a dead end. It
suffers from the same defects as depth-first search, it is not optimal, and it is
incomplete. As with depth-first search, Greedy Search can start down an infinite path
and never return to try other possibilities. The worst case time complexity for Greedy
search is 0{bm), where m is the maximum depth of the search space. Greedy search
retains all nodes in memory, therefore its space complexity is the same as its time
complexity. If careful effort is put in to developing a good heuristic function, the
43
space and time complexity can be reduced substantially. The amount of reduction in
space and time complexity depends on the particular problem and quality of the
heuristic function.
1.12.4 Minimising the Total Path Cost: A* Search
(Hart et al, 1968)
Greedy search minimises the estimated cost to the goal, h(n), and hence reduces the
search cost significantly. Unfortunately, the algorithm is not optimal and is
incomplete. On the other hand, Uniform-cost search minimises g(n) which is the cost
of the path that has been followed so far. The algorithm is optimal and complete but
it can be very inefficient. By combining the Greedy search and the Uniform-cost
search, we can have the advantages of both the algorithms. We can do that by simply
summing the two evaluation functions together:
f(n) = g(n) + h(n)
If g(n) gives the path cost from the start node to node n, and h(n) is the estimated
of the cheapest path from n to the goal then,
f(n) = estimated cost of the cheapest solution through n
As a result, if we try to find the cheapest solution, a reasonable thing to try first
node with the lowest value off Given a simple restriction on h, Korf (1985a, 1985b)
has proved that the strategy is complete and optimal. The restriction is to choose a
function that never overestimates the cost to reach the goal. Such an h is called
admissible heuristics. Admissible heuristics are by nature optimistic, because they
think the cost of solving the problem is less than it actually is. This optimism also
44
transfers to the /function. If h is admissible,/^ never overestimates the actual cost
of the best solution through n. The Best-first search using/as the evaluation functi
and an admissible h function is known as the A * search algorithm.
The following is a simple proof that demonstrates that the A* search algorithm is an
optimal and complete algorithm (Korf, 1985a, 1985b).
Let G be an optimal goal state, with path cost/*. Let G2 be a sub-optimal goal state
with path cost g(G2) > /*. Imagine that the A* search method has selected G2 to be
the goal state, this will terminate the search with a sub-optimal solution. The
following will show that this leads to a contradiction.
Consider a node n that is currently a leaf node on an optimal path to G. Because h i
admissible, we must have /* > /(«).
If n is not chosen for expansion over G2, we must have f(n) > f(G2).
Combining these two conditions together, we will have /* > f(G2).
Because G2 is a goal state h(G2) = 0; hence/fG^ = g(G2). Thus we have proved from
the assumptions that /* > g(G2).
This contradicts the assumption that G2 is sub-optimal, therefore the A* search
algorithm never selects a sub-optimal goal for expansion. Hence, because it only
returns a solution after selecting it for expansion, the A* search method is an opti
algorithm.
Because A* expands nodes in order of increasing path cost/ it must eventually reach
a goal state unless there are infinitely many nodes with f(n) </*. The only way ther
45
could be an infinite number of nodes is either there is a node with an infinite
branching factor, or there is a path with finite path cost/ but with an infinite numbe
of nodes along it. Therefore, A* is complete provided that the branching factor of an
expansion node is finite and a path with a finite path cost must also have a finite
number of nodes.
Although, the A* search algorithm is complete and optimal, it does not mean that the
A* search algorithm is the answer to all our searching problems. For most problems,
the number of nodes within the goal contour search space is still exponential in
relation to the length of the solution. Unless the error in the heuristic function gro
no faster than the logarithm of the actual path cost, this exponential growth will
always occur. The condition for sub-exponential growth is that,
\h(n)-h\n)\<0(\ogh*(n)),
where h (n) is the true cost of getting from n to the goal. For most heuristics in
practical use, the error is at least proportional to the path cost, and the resulting
exponential growth will eventually surpass the capability of any computer. However,
compared to uninformed search methods, the use of a good heuristic search method
still provides tremendous savings in terms of computation time.
The main drawback of the A* search algorithm is the amount of memory needed to
store generated nodes, because it keeps all generated nodes in memory, the method
usually runs out of memory storage space long before it runs out of time.
46
1.12.5
Iterative Deepening A * Search (IDA*) (Korf, 1985a, 1985b)
The Iterative Deepening A* (IDA*) search algorithm addresses the problem of
memory storage space in the original A* search algorithm. The Iterative Deepening
A* (IDA*) search algorithm is derived from the same iterative deepening search
principle described earlier. In this algorithm, similar to the regular iterative
deepening, each iteration is a depth-first search. The depth-first search is modified
use an/cost limit instead of the usual depth limit. As a result, each iteration expan
all the nodes inside the contour for the current /cost, investigating and testing ove
the contour to find out where the next contour lies. Once the search inside a given
contour has been completed, a new iteration is started using a new/cost for the next
contour.
IDA* is complete and optimal with the same characteristics as A* search, but because
it is depth-first, it only requires space proportional to the longest path that it ex
If S is the smallest operator cost and/* the optimal solution cost, then in the worst
bf *
case, I D A * will require — — nodes of storage. In most cases, M i s a good estimate of
8
the storage requirements.
The time complexity of IDA* depends strongly on the number of different values that
the heuristic function can have. Commonly,/only increases two or three times along
any solution path. As a result, IDA* only goes through two or three iterations, and it
efficiency is similar to that of the A* search method. Furthermore, IDA* does not
insert and delete nodes on a priority queue during searching as compared to the
original A* search method, its overhead per node can be much less than that for A*.
47
Optimal solutions for m a n y practical problems were first found by IDA*, which for
several years was the only optimal, memory-bounded heuristic algorithm.
Unfortunately, IDA* has difficulty in more complex domains. For example, in the
two-machine flow-shop problem and the travelling salesperson problem, the heuristic
value is different for every state. This means that each search path contour only
includes one more state than the previous search path contour. If A* expands N
nodes, IDA* will have to go through N iterations and will therefore expand 1 + 2 + ...
+ N = 0(N2 ) nodes. Now if N is too large for the computer's memory, then N2 is
almost certainly too long to wait.
One way around this is to increase the /cost limit by a fixed amount e on each
iteration, so that the total number of iterations is proportional to g_1. This can red
the search cost, at the expense of returning solutions that can be worse than optimal
by at most e. Such an algorithm is called e-admissible.
1.12.6 SMA* Search (Russell, 1992)
Some of IDA*'s difficulties in certain problem spaces can be attributed to using too
little memory. In between iterations, IDA* retains only a single number, the current/
cost limit. Because it cannot remember its history, IDA* is destined to repeat it agai
in future. This is especially true in state spaces that are graphs rather than trees.
IDA* can be modified to check the current path for repeated states, but it will not be
able to avoid repeated states generated by alternative paths.
The SMA* (Simplified Memory-Bounded A*) algorithm can make use of all
available memory to carry out the search. Using more memory can improve search
48
efficiency. It is better to remember a node than to have to regenerate it w h e n needed.
SMA* has the following properties:
• It will make use of whatever memory is available to it.
• If given enough memory, it will avoid repeated states.
• If given enough memory to store the shallowest solution path, the SMA* algorithm
is complete.
• If given enough memory to store the shallowest solution path, the SMA* algorithm
is optimal. If there is insufficient memory to store the solution path, SMA* will
return the best solution that can be reached with the available memory.
• If given enough memory to store the whole search tree, the SMA* search is
optimally efficient.
The design of SMA* is simple. When it needs to generate a successor but has no
more memory left, the algorithm will make space on the storage queue by dropping a
node from the storage queue. Nodes that are dropped are called forgotten nodes. The
SMA* algorithm will drop nodes with high/cost. To avoid re-exploring sub-trees
that it has dropped from memory, the algorithm retains in the ancestor nodes
information about the quality of the best path in the forgotten sub-tree. In this wa
the algorithm will only need to regenerate the sub-tree when all other paths have be
shown to look worse than the path it has forgotten.
Given a reasonable amount of memory, the SMA* search method can solve
significantly more difficult problems than the A* search method without incurring
considerable overhead in terms of extra nodes generated. SMA* performs well on
problems with highly connected state spaces and real-valued heuristics, on which
IDA* has difficulty. On very difficult problems, very often SMA* is forced to
49
continually switch back and forth between a set of candidate solution paths. In this
case, the extra time required for repeated regeneration of the same nodes means that
problems that could be solved by A* easily, given unlimited memory, become
intractable for SMA*. Therefore, memory limitations can make a problem intractable
for SMA* from the point of view of computation time. The only way out is to drop
the optimality requirement.
1.12.7 Searching and Learning A* (SLA*) (Zamani, 1995)
Searching and Learning A* algorithm (SLA*) is an algorithm which is capable of
learning from past heuristic estimates in order to improve on future heuristic
estimates. This improvement is guaranteed by carrying out repetitively updates to the
heuristic estimate of any state when backtracking the search path (Zamani, 1995).
The method is an A* algorithm which means that it leads to an optimal solution for
any state-space problem in which a heuristic estimate (h(n)) of the state does not
overestimate the optimal value (/*).
SLA* can be modified to become Learning and Controlled Backtracking A*
(LCBA*) which finds solutions within a specified range of the optimal solution.
LCBA* is useful if there is a constraint on computation time and optimality is not an
issue.
Korf (1990) developed a Learning Real Time A* algorithm (LRTA*) which is similar
to the SLA* algorithm. The major difference betweeen SLA* and LRTA* lies in the
implementation of the backtracking process which occurs when updating the heuristic
estimate of any state (Zamani, 1995). In addition, the SLA* algorithm will update the
heuristic estimate more often whenever a state is close to the optimal path. This
50
improved characteristic means that only the close neighbourhood to thefinaloptimum
path will need to be considered seriously. This improvement reduces the search space
significantly in comparison to the LRTA* algorithm and the original A* algorithm.
In the SLA* algorithm, the process of learning from past heuristic estimates removes
the disadvantage of large computation time associated with most backtracking
algorithms. The worst scenario for the SLA* algorithm in terms of node expansions
and memory usage is ns, where n is the number of states and s is the final cost from
the initial state to the goal state (Zamani, 1995). In this case, the branching factor
the depth of the expanded tree nodes may not be the best measurements for the
complexity and memory usage of the algorithm, because the search space may only be
expanded along the optimal path. In that case, the number of states involved will be
small and memory usage will be minimal.
SLA* can be implemented as follows (Zamani, 1995):
Let k(x,y) be the positive edge cost from state x to a neighbouring state y, and h(x)
the non-overestimating heuristic estimate of the cost from state x to the goal state.
StepO: Apply an heuristic function to generate a non-overestimating initial
heuristic h(x) for every state x to the goal state.
Step 1: Put the root state on the backtrack list called OPEN.
Step 2: Call the top-most state on the OPEN list x. If x is the goal state, stop.
Step 3: If x is a dead-end state, replace its h(x) with a very large value. Remove x
from OPEN list, and go to Step 2.
51
Step 4:
Evaluate the compound
value
f(x,y) = k(x,y) + h(y)
for every
neighbouring state y ofx, and find the state with the minimum value. Call
this state x'. If ties occur, break ties randomly.
Step 5: If h(x)>k(x,x') + h(x'), then add x' to the OPEN list as the top-most
state and go to Step 2.
Step 6: Replace h(x) with k(x,x') + h(x').
Step 7: If x is not the root state, remove x from the OPEN list.
Step 8: Go to Step 2.
Note that Step 3 of the algorithm takes care of problems with dead-end states. When
the dead-end state is reached, no further expansion is possible. Thus by assigning a
very large value to the heuristic estimate of a dead-end state, this will ensure that n
future visit to the same state is possible.
The process of selecting a neighbouring state with the minimum compound value of
fk(x,y)+h(y)J continues until the heuristic estimate of the current state is less than
minimum of the compound values of its neighbouring state. In this case, the heuristic
estimate of the current state is updated to this minimum compound value. The
learning component of this algorithm is that a state may learn from the heuristic
estimate of its neighbouring states. Hence the improvement of the heuristic estimate
for one of its neighbours may lead to the improvement of its own heuristic estimate.
In addition, when the heuristic estimate of a selected state is changed, this state may
no longer be the most appropriate state to be selected for the search path. Thus, the
process of backtracking is carried out by applying the learning principle to the states
in the reverse order of the search path. This is designed to see if the heuristic estim
of an earlier selected state can be improved. Any state whose heuristic estimate has
52
changed during the backtracking process is removed from the search path. This
learning process continues until either the root state is reached or the first state wh
heuristic estimate remains unchanged after the revision is reached. After that, the
search path resumes from this state onwards. When the search path is resumed, it is
not necessary that the same state will be selected again. This is because the heuristic
estimates of all the preceding states have changed as a result of learning.
The SLA* algorithm forms the backbone of the three-machine flow-shop heuristic
algorithm developed in this research. In chapter 2, we discuss and illustrate the
development of the three-machine flow-shop heuristic algorithm.
53
Chapter 2: The Development of the Three-Machine
Heuristic Flow-Shop Algorithm
2.1 Introduction
In this chapter we develop a new heuristic algorithm for solving a flow-shop problem
involving three machines and n jobs. The development proceeds in stages where we
begin with the problem involving two machines. Next, we consider an important
improvement related to the calculation of heuristic estimates during the search proces
and we demonstrate this improvement for the two-machine problem. Finally, we
consider the three-machine problem and present the algorithm incorporating the
improved procedure for calculating heuristic estimates during the search process.
We introduce definitions and notations as we proceed through these stages of
development in the context where they are required and we employ search path
diagrams, panels and narrative descriptions in combination as aids to presenting the
development of the algorithm. Our presentation spirals in the sense that we revisit
previously introduced ideas and develop them gradually as we progress through the
chapter.
The algorithm developed in this chapter is based on the SLA* (Search and Learning
A*) algorithm (Zamani, 1995) which is a modified and improved version of the
LRTA* algorithm (Korf, 1990). Both these algorithms are introduced and discussed
in chapter 1 (Section 1.12.7). However, the SLA* algorithm presented there needs to
be further modified and interpreted in order to be applied to flow-shop scheduling
problems.
54
2.1
The State Transition Process
SLA* is a state based search algorithm and consequently for flow-shop scheduling we
must identify the possible states and specify how state transitions occur. In the
process we begin to relate the terminology, definitions and notations of SLA* to flowshop scheduling problems.
We recall that machines (labelled Mi, M2, M3) perform operations on jobs (labelled
Ji, h, ••• , Jn) so that each job is subject to an operation on each machine. We identi
these operations using MjJi to represent machine Mj operating on job Ji where for
example if/ = 1, 2, 3 ; / = 1, 2, ... , n then there are 3n different operations. The
number of units of time required for each of these operations is known for any given
problem and is typically represented in the form of a table where for i = 1, 2, ... , n
the times required for the operations Mi Ji, M2Ji, M3J1 are ai, bi, Ci respectively.
^\Machines
Jobs
N.
Mi
M2
M3
Ji
ai
bi
Cl
h
a2
b2
C2
•
•
•
•
Jn
an
b„
Cn
W e n o w define three states so that at any time during the scheduling process each of
the operations will be in one and only one of these states. We use the "finished" state
to include all operations that have been completed at that time; the "in-progress" stat
to include all operations that have started but are not completed at that time and the
"not scheduled" state to include all operations which have not commenced at that
time. At any time we can specify the state level (0 of the scheduling process by
counting the number of operations in the "finished" state.
55
A state transition occurs whenever one or more operations is completed on a machine
which means that at that time those operations move from the "in-progress" state to
the "finished" state.
We now illustrate the state transition process associated with a scheduling problem by
describing the scheduling of jobs Ji, J2, J3 (in that order) on machines Mi and M2 for
the problem specified in Table 2.1.
Table 2.1: T w o Machines, Three Jobs
^^^Machines
Jobs
Mi
M2
Ji
3
6
h
h
5
2
1
2
^v.
W e note that this problem will be used as an example in the following sections of this
chapter.
Initially (at time t = 0) operations MiJi, MiJ2, M1J3, M2Ji, M2J2, M2J3 are in the "not
scheduled" state. We start M1J1 and it will enter the "in-progress" state where it
remains until time t = 3 when it is completed and enters the "finished" state and this
triggers a state transition. The state level changes from 0 to 1 at t= 3 and MiJ2, M2Ji
enter the "in-progress" state. The next state transition occurs at t = 8 when MiJ2 is
completed with edge cost 5. The state level changes from 1 to 2 at t= 8 and M1J3
starts. At t = 9 the next state transition occurs since both M1J3 and M2Ji are
completed and M2J2 starts. The state level changes to 4. A transition occurs next at
t = 11 when M2J2 is completed and M2J3 starts and the state level is now 5. The last
state transition occurs at t = 13 when M2J3 is completed and the final state level,
56
which is equal to the total number of operations, is 6. The makespan for this job
sequence Ji, J2, J3 is 13.
Changing this job sequence will produce a different state transition pattern and
makespan. Finding the optimal solution means finding the state transition process
which leads to the minimum makespan. In Table 2.2 we summarise the narrative
description presented above and Figure 2.1 is a common way to represent the state
transition process against a time line.
Table 2.2: State Transition Process
Time
(t)
t= 0
State
Level (1)
0
Finished
State
empty
In-Progress
State
empty
Not Scheduled
State
M1J1, MiJ 2 , M1J3
M 2 Ji, M 2 J 2 , M 2 J 3
0<t<3
0
M1J1
empty
MiJ 2 , M1J3
M 2 Ji, M 2 J 2 , M 2 J 3
3<t<S
1
MiJi
M 2 Ji, MiJ 2
M1J3
M2J2, M 2 J 3
S<t<9
2
MiJi, MiJ 2
9<t<U
4
MiJi, MiJ2, M1J3 M 2 J 2
M 2 J h M1J3
M2J2, M 2 J 3
M2J3
M 2 Ji
11<?<13
5
M1J1, MiJ2, M1J3
empty
M2J3
M 2 Ji, M 2 J 2
t = \3
6
M1J1, MiJ2, M1J3
M 2 Ji, M 2 J 2 , M 2 J 3
57
empty
empty
M^
M,J 2
M,J3
M 2 Jj
1
2
3
4
5
6
M2J3
M2J2
7
8
9
10
11
12
13
Figure 2.1: Time Line for the State Transition Process
2.1.2 Heuristic Estimates and Backtracking
Throughout the process of searching for the optimal solution we are guided by
considering heuristic estimates associated with operations. The heuristic estimate
associated with an operation estimates the time that will be required in order to
complete that operation and all operations that have not yet finished. Since we are
concerned with finding the job sequence with the minimum makespan we will focus
on the operation that has the smallest heuristic estimated associated with it unless tha
estimate together with the time required to complete the operation exceeds the
minimum heuristic estimate that we selected when we started the previous operation.
In this case we backtrack and update (increase) the previous minimum heuristic
estimate.
The guiding principle is that along the final search path representing the optimal
sequence that leads from scheduling the first operation on the first machine to
completing the last operation on the last machine the minimum heuristic estimates
form a decreasing sequence and the heuristic estimate associated with scheduling the
last operation on the last machine in this optimal sequence will have a value of zero.
58
In order to ensure that w e find an optimal sequence it is necessary that the m i n i m u m
heuristic estimate that we select to identify the next stage of the search process has a
value which underestimates the minimum makespan (is admissible). This follows
from the proof (presented in chapter 1, section 1.12.4, Korf, 1985a, 1985b) that with
this restriction on the minimum heuristic estimate the algorithm that we develop will
be optimal and complete.
The time we take to find the optimal sequence will be reduced if we can start the
search with a minimum heuristic estimate that is not only admissible but is as close as
possible to the minimum makespan. There may be several ways in which we can
calculate admissible heuristic estimates and the selection of the best of these in term
of its proximity to the minimum makespan is an important issue which we discuss in
detail in chapter 3. In presenting the first version of our algorithm for the two
machines problem in section 2.2 of this chapter we propose and justify one method for
calculating heuristic estimates.
We conclude this subsection by introducing some further definitions and notations in
preparation for the next section 2.2.
At state level /:
(i) h MjJi represents the heuristic estimate associated with the operation MjJi and
/MjJi = h MjJi + k where k (the edge cost) is the time that has elapsed since the
preceding state transition occurred.
(ii) The smallest heuristic estimate amongst all the heuristic estimates associated
with possible operations at state level / is represented by h(l) and the
corresponding/value is represented by f(I) = Kl) + k. When there is no
ambiguity we use/and h for simplicity.
59
(iii) The smallest heuristic estimate at the preceding state level is represented by h'.
We note that if the problem involves m machines then
h}& {h(l -1),h(l - 2),...,h(l - m)} because at least one and possibly as many as m
operations entered the "finished" state and triggered the state transition to bring
us from the preceding state level to the current state level /.
(iv) t MjJi is the time needed to complete the operation MjJj. If we are going to start
MjJi then t MjJj is one of ai, bi or cj. If MjJi is already in progress and
consequently was started at the time of a previous state transition then t MjJj will
be the remaining time needed for MjJi to finish.
We are now in a position to introduce our algorithm for the two-machine problem.
2.2 The Algorithm for the Two-Machine Flow-Shop Problem
In Tables 2.3 (a), (b) we represent the two-machine problem involving n jobs and the
particular example we use in this and subsequent sections to illustrate our algorithm.
Table 2.3 (a): T w o Machines, n Jobs
Table 2.3 (b): T w o Machines, 3 Jobs
v
v
Machines
Jobs^v
Ji
h
'•
Jn
x
Machines
Jobsx.
Mi
M2
ai
bi
J,
a2
b2
•
:
an
bn
60
Mi
M2
3
6
h
5
2
h
1
2
^
W h e n w e commence the search process at state level 0 it is necessary to find initial
heuristic estimates for all the operations which involve machine Mi. We propose
using:
/*MiJi = ai+ |>M2Jj = ai+ Zfy; for/ = 1, 2, ..., n. (2.1)
Hence we choose h(0), the smallest of these, to be :
h(0) = min(ai, a2, ..., an) + £Dj , (2.2)
y'=i
which identifies MJk, where ak = min(ai, ..., an), as the operation with the smallest
heuristic estimate.
Intuitively we see that h(0) is admissible since it assumes that there is no idle time
machine M2 and we have started the search with the operation on machine Mi which
requires the least time of all operations on that machine.
As the search progresses we will need to calculate or update heuristic estimates. We
consider this in detail in section 2.2.3 after we have presented our algorithm and
introduced search path diagrams and panels.
2.2.1 The Algorithm (Version 1)
Step 1: At the current state level calculate heuristic estimates h for all the possibl
nodes at this level using (2.3) (see section 2.2.3 below) provided the
heuristic estimate has not been updated by backtracking in which case use
the updated value. [Note: h = h Mi Ji]
61
Step 2:
Select the node with the smallest heuristic estimate h amongst all those in
Step 1. [Note: h = h(l), Break ties randomly.]
Step 3: Calculate/= h + k where h was found in Step 2 and A: (edge cost) is the time
that has elapsed since the preceding state transition occurred. [Note: f=f[l)]
Step 4: If / > h' where h' is the minimum heuristic estimate calculated at the
preceding state level then backtrack to that preceding state level and
increase the value of h' to the current value of/ at the previous node.
Repeat Step 3 at the preceding state level.
Step 5: If / < K then proceed to the next state level and repeat from Step 1.
Step 6: If/= 0 and h = 0 then Stop.
We have introduced the idea of "nodes" in this first version of the algorithm. This
idea is related to search path diagrams which we introduce next followed by the
development of (2.3) to be used in Step 1.
2.2.2 Search Path Diagrams and Panels
The algorithm describes the procedure which will take us from one state level to
another. This can be illustrated using a search path diagram which has the following
characteristics:
(i) A search path consists of nodes drawn at each state level / connected by
arcs to nodes at the next state level. At each state level (I) we select a node
which has the minimum heuristic estimate, amongst all the nodes at that
state level, associated with it and we backtrack to the preceding state level
or we move forward to the next state level by expanding the selected node
to arrive at new nodes at the next state level. The expanded node is
62
connected to these new nodes by arcs. Since only one of these n e w nodes
at the next state level is selected for the path it follows that on a searc
path each node is connected by an arc to at most one node at the next state
level and at most one node at the preceding state level.
Each node contains cells where the number of cells corresponds to the
number of machines. In the first cell we record information about a
possible operation on machine Mj. The second cell is used for information
about a possible operation on machine M2 and so on for each machine.
The information recorded in a cell identifies a possible operation MjJi and
the remaining time needed to complete that operation t MJi.
Near the nodes at any state level (/) we record the heuristic estimate
associated with that node (h) and f = h + k. Near the node with the
minimum value of h at level / we record h(l) and f(l) = h(l) + k and we
show the result of the comparison between f[l) and h' which will determine
whether we backtrack to the previous state level or expand this node to
produce new nodes at the next state level. If there is only one node at sta
level / then h(l) = h,f[l) =/and we record only h(l) and/7)- If we expand
a node at state level / then it is connected to the possible nodes at the n
state level by arcs and just below the node that we are expanding we
record the value of A: (the edge cost) which is the remaining time needed b
the operation which will finish next thus taking us to the next state level
Thus k will be the smallest of the t MJi values at the node.
When we are at state level 0 there will be n possible nodes which we call
root nodes (one for each job that could be started on machine Mi) but there
63
will only be information recorded in cell 1 of each node since no operation
can start on any machine other than machine Mi.
In order to be able to demonstrate the use of our algorithm and the development of the
final search path we start a new search path diagram wherever backtracking stops on
the current search path. This leads to a set of search path diagrams and we indicate on
each how it is connected to the previous diagram.
The final search path diagram represents the optimal solution and traces a path from
one of the root nodes to a final node (called the terminal node) where h(l) =J{1) = 0
and / has a value which is one more than the total number of operations in the
problem. At the root node on this path the value of h(0) =f(0) will be the minimum
makespan which can also be calculated by summing the times (k) recorded below the
nodes on the path. The optimal sequence can be read by sequentially recording the
operations completed as we move down the path from the root node.
Associated with each search path diagram there is a panel in which we record the full
details of the application of the algorithm some of which we include on the associated
search path diagram. The search path diagrams provide a graphical representation of
the search procedure while the panels provide a fully detailed narrative description
showing all calculations.
2.2.3 Calculating Heuristic Estimates at Nodes on a Search Path
Before presenting an example in section 2.2.4 we need to continue our discussion of
the procedure for calculating heuristic estimates at nodes along a search path. We
started this discussion at the beginning of section 2.2.
64
At the root nodes, at the start of the search, w e calculate heuristic estimates for these
nodes based on a method which we have selected. In the current discussion we have
selected a method specified by (2.1). As the search path develops from a root node,
selected using (2.2), we expand that root node. If we arrive at a node at state level /
then we will either backtrack to the preceding state level or we will expand the node
and thus proceed to the next state level.
Backtracking updates the heuristic estimate (h") at the preceding state level with the
current value of/ Consequently, we need to specify a procedure for calculating
heuristic estimates at nodes at a state level that we have reached by expanding a node
at the preceding state level. The basic idea behind the procedure, which we detail in
(2.3) below, is to calculate the time that is needed to complete the current operation
machine Mi plus the total time still needed for this and subsequent operations on
machine M2. This procedure uses the rationale which lead to (2.1) and ensures that
the minimum heuristic estimate amongst all the nodes is admissible.
65
In detail w e have:
At each node calculate an heuristic estimate h, where
A
(i) If cell 1 is labelled with MiJi use,
ai + By; for MiJi in the "not scheduled" state,
h = h MiJi =
t M ^ + bj + BIJ; for MiJj in the "in-progress" state,
where B y = E b j for ally such that MiJj is in the "not scheduled" state, y (2.3)
(ii) If cell 1 is blank and cell 2 is labelled with M 2 Jj use,
r,
B 2 J; for M 2 Jj in the "not scheduled" state,
h = h M 2 Ji =
t M 2 Ji + B 2 J; for M 2 Ji in the "in-progress" state,
where B 2 j = Z b j for ally such that M 2 Jj is in the "not scheduled" state
J
If both cells are blank then h = 0 and the node will be expanded to reach the terminal
node next.
The procedure (2.3) is used in Step 1 of our algorithm (Version 1). W e n o w apply our
algorithm (Version 1) to the example in Table 2.3(b) and we present a narrative
description to accompany the search path diagrams and associated panels.
66
2.2.4
Example (Table 2.3(b): T w o Machines, Three Jobs)
We start by calculating heuristic estimates for MJi, MjJ2, M1J3 (13, 15, 11
respectively) using (2.3). The minimum of these is 11 and identifies M1J3 as the one
to start. Thus h(0) =/0) = 11. On the search path Diagram 2.1, at state level / = 0
three nodes are used to represent these operations. In each node cells are used to
record operations which may start next as well as the times required for their
completion.
When operation M1J3 is finished at time t = 1 we are at state level / = 1 and M2J3
together with one of the operations Mi Ji or Mi J2 may start. This is indicated in t
diagram by two nodes at state level 1 connected to the chosen node (operation M1J3)
at state level 0 with the time required for Mj J3 (k = 1) as a label on the connect
Next we calculate heuristic estimates for M1J1 and MiJ2 guided by (2.3). Thus for
M1J1, h = t M1J1 + t M2Ji + t M2J2 = 3 + 6 + 2=11 and by similar reasoning for MiJ2,
h = 13. These have associated/values ( h + k where k = 1) of 12 and 14 respectively
We select the minimum A(l) = 11 (for MJi) and the corresponding/value/1) = 12.
Since h' = h(0) =11 was the heuristic estimate associated with the previous operati
selected (M1J3) and/1)
>
h' then from Step 4 of the algorithm we backtrack to state
level 1 = 0 and replace the value of A' = h(0) =11 with the value of/1) = 12. On th
Search Path Diagram 2.1 we have highlighted the backtracking procedure.
We are now back at state level 1=0 and the minimum heuristic estimate h(0) is 12 fo
M1J3. Again/0) = h(0)= 12.
67
M, M 2
Jl 3
h 5
h 1
3 /? = 3+6+2+2
6
2
2
h = 5+6+2+2
= 15
h(0)=12 W p ^ l l
=/
/
= f ( 0 ) / =f(0^
= 13
=/
1
h= 1+6+2+2
= 11
State Level 0
=/
k=\
M,J, h=5+2+6
=13
State Level 1
/ = 14
backtrack and update
Search Path Diagram 2.1
To document the narrative description presented above w e introduce panels. These
panels summarise and record the key characteristics of the search procedure using the
notation we have introduced so far.
Panel 2.1 documents the search procedure diagrammed in Search Path Diagram 2.1
and the narrative above.
o.
&
CO
States
CO
CO
Edge
Cost
CD
1
(k)
3
CD
-£Z
CO
8
'tz
i
CO
14=
t=0
00
CD
T3
O
-o
<D
Q.
not
scheduled
3
75
>
Time
(t)
C
Values of h, f
Smallest h and
and Operation Times Associated f.
z
<•—
0
<—
CD
M1J1
M2J1
M1J2
M2J2
M1J3
M2J3
-Q
E
68
Comments
1
2
0
t=0+,
k=0
1 t=1,
k=1
3
0 t=0+
k=0
M1J3
M1J3
M1J1
M2J3
M1J3
M1J1
M2J1
M1J2
M2J2
M2J3
M2J1
M1J2
M2J2
M1J1
M2J1
M1J2
M2J2
M2J3
flMui = 3+6+2+2 = 13
1 fMui = 0+13 = 13
tM1J1 = 3
hMU2= 5+6+2+2 = 15
2 fMU2=0+15 = 15
tM1J2= 5
hMU3= 1+6+2+2 = 11
3 fM1J3=0+11 =11
tM1J3= 1
hMui = 3+6+2 = 11
1 fM1J1= 1+11 = 1 2
tiuiui = 3
hMU2= 5+6+2 = 13
2 fiuiU2= 1+13 = 14
tM1J2=5
hMui = 3+6+2+2 = 13
1 fMui = 0+13 = 13
tM1J1 = 3
hMU2 = 5+6+2+2 = 15
2 fMU2=0+15 = 15
tM1J2=5
hM1J3=f(1) = 12
3 fMU3=0+12 = 12
tlW1J3= 1
W e select M1J3.
h(0)=h M u3=11
f(0)=fM1J3=11
h(1)=h M ui=11
f(1)=fM1J1=12
Since
f(1)>h' = h(0) = 11
backtrack.
Backtracking stops.
h(0)=h M u3=12
f(0)=fMu3=12
Panel 2.1
W e continue the search by expanding the node for M1J3 at state level 0 to the two
nodes at state level 1 in Search Path Diagram 2.2 where M1J3 is completed and using
the minimum heuristic estimate we identify M1J1 as the next operation to start along
with M2J3. Since h(l) = 11 and/1) = 12 = h' = h(0) we will not backtrack but we
will proceed to state level 2 where Mi J3 and M2J3 are finished at time t = 3 but M1J
still in progress with 1 unit of time required for it to be finished. Since MJi has
in progress for 2 units of time so far the heuristic estimate for the single cell
representing M1J1 in progress at state level 2 is its previous heuristic estimate h(
11 reduced by 2. Hence A(2) becomes 11 - 2 = 9 and the associated value of/
becomes/2) = h(2) + 2 since M2J3 required 2 units of time as recorded in the second
cell of the previous node at state level 1. No backtracking occurs since/2) = 11 = h
= h(l) = 11 and we proceed to state level 3 when all of M1J3, M2J3 and MJi are
finished at time t = 4. We may start MiJ2 and M2Ji. For MiJ2 h and /are 7 and 8
69
respectively so/3) = 8 < h' = h(2) = 9 means no backtracking and w e m a y m o v e to
state level 4 where at time t = 9 MJ2 is finished but M2Ji still has 6 - 5 = 1 unit
time until it is finished which means that the heuristic estimate for M2Ji becomes
7-5 = 2 with a corresponding / value of 2 + 5 = 7. Consequently, h(4) = 2 and
/4) = 7 = h(3) = h' and no backtracking occurs.
We move to state level 5 at time t = 10 with all operations finished except for M2J2
with heuristic estimate t M2J2 = 2 and/= 2+1=3. Thus h(5) = 2,/5) = 3 > A(4) = h'
= 2. Hence we will backtrack to the previous node at state level 4 and change h(4)
the value of/5) which means that now h(4) = 3 and/4) will become 3 + 5 = 8 > A(3)
= 7. So we need to backtrack to the previous node and h(3) =/4) = 8 and/3) = 8 + 1
= 9. Since/3) = 9 = h' = h(2) the backtracking stops and at time t = 4 we are at st
level 3 with: M1J3, M2J3 and MJi finished; MiJ2 and M2Ji in progress; M2J2 not
scheduled.
In the Search Path Diagram 2.2 and Panel 2.2, which follow, we represent the
continuation of the search from the previous diagram and panel.
70
A=13
h(0)= 12
f(0) = 12
=/
State Level 0
k=l
From Search
Path Diagram 2.1
h(l)=ll
A=5+2+6
=13
State Level 1
f(l)=ll+l
State Level 2
f(3)=8+l
State Level 3
State Level 4
State Level 5
f>h'
backtrack and update
Search Path Diagram 2.2: Continued from Search Path Diagram 2.1
71
Time
(t)
CO
States
Values of h, f
Smallest h and
and Operation Times Associated f.
Comments
O
1 Edge
Q-
CO
4
CD
_l
0>
TO
CO
Cost
(*)
CO
CO
T3
a>
x:
CO
1 t=1
M1J3
k=1
S>
a>
S
0.
1
M1J1
M2J3
•a
T3
CD
O "O
C CO
M2J1
M1J2
M2J2
a>
-Q
E
1
2
5
6
2 t=3
k=2
3 t=4
k=1
7
4 t=9
k=5
8
5 t=10
k=1
9
4 t=9
k=5
10 3 t=4
k=1
M1J3
M2J3
M1J3
M2J3
M1J1
M1J3
M2J3
M1J1
M1J2
M1J3
M2J3
M1J1
M1J2
M2J1
M1J3
M2J3
M1J1
M1J2
M1J3
M2J3
M1J1
M1J1
M1J2
M2J1
M2J1
M2J1
M1J2
M2J2
M2J2
hMui = 3+6+2 = 11
fM1J1= 1+11 = 1 2
tM1J1 = 3
hMU2= 5+6+2 = 13
fM1J2= 1+13 = 14
tM1J2=5
tMui-k=3-2 = 1
1
1
hMU2=5+2 = 7
fM1J2=1+7 = 8
tM1J2=5
1
tM2Ji-k=6-5 = 1
1
llM2J2 = 2
fM2J2= 1+2 = 3
tM2J2 = 2
M2J2
f(1) = h' = h(0) = 12
h(1)=hMui=11
f(1)=fM1J1=12
h(2)=h(1)-k
= 11-2 = 9
ff2)= 2+9= 11
h(3)= hM2Ji= 7
f(3)= fM2J1= 8
h(4)= h(3)-k
=7-5=2
f(4)= 5+2= 7
f(2) = h' = h(1) = 11
f(3) < h' = h(2) = 9
f(4) = h' = h(3) = 7
M2J2
M2J1
M1J2
M2J1
h(5)= hM2J2= 2
f(5)= fM2J2= 3
h(4)=f(5) = 3
f(4)= 5+3= 8
Since
f(4) > h' = h(3) = 7
backtrack.
h(3)=f(4)=8
f(3)= 1+8 = 9
f(3) = h' = h(2) = 9
M2J2
1
tM2Ji-k=6-5 = 1
1
tlUI1J2 = 5
M2J2
Since
f(5) > h' = h(4) = 2
backtrack.
Panel 2.2: Continued from Panel 2.1
The final Search Path Diagram 2.3 and Panel 2.3 document the steps leading to the
conclusion of the search.
72
M,J,
3
h =13
=/
M,J2
5
M,J3
A=15
=/
h(0)=12
f(0)=12
State Level 0
k=l
h(l)=ll
f(l)=12
= h'
h=5+2+6
=13
M,J,
/= 13+1
= 14
M,J,
k=2
State Level 1
h(2)=9
State Level 2
f(2)=ll
= h'
k=l
h(3)=8
f(3)=9
= h'
M,J,
From Search
Path Diagram 2.2
M 2 J,
State Level 3
6
k=5
h(4)=3
State Level 4
f(4)=8
= h'
k=l
h(5)=2
State Level 5
f(5)=3
= h'
2
k=2
h(6)=0
State Level 6
f(6)=2
= h'
k=0
h(7)=0
Terminal
Node
-• f(7)=0
= h'
Optimal Solution
h(7) = 0, f(7) = 0
M i n i m u m makespan = 1 2
Optimal sequence
•
JJJJJJ
Search Path Diagram 2.3: Continued from Search Path Diagram 2.2
73
State Level 7
Time
(t)
"53 Edge
>
CD
_l
a.
3
3
CO
11
Cost
(A)
4CO t=9
k=5
12
5 t=10
k=1
13
6 t=12
k=2
14
T3
CD
x:
CO
CO
05
£
CO
CO
7 t=12+
k=0
CO
CD
T3
O
States
M1J3
M2J3
M1J1
M1J2
M1J3
M2J3
M1J1
M1J2
M2J1
M1J3
M2J3
M1J1
M1J2
M2J1
M2J2
M1J3
M2J3
M1J1
M1J2
M2J1
M2J2
M2Jl
•a
cu
z
T3
CD
CD
-Q
o€
E
C CO
M2J2
Values of h, f
Smallest h and
and Operation Times Associated f.
Comments
=1
z
h(4)=3
f(4)= 5+3= 8
f(4) = h' = h(3) = 8
h(5)= hM2j2= 2
f(5)= fM2J2= 3
f(5) = h' = h(4) = 3
h(6)= 0
f(6)= 2+0=2
f(6) = h' = h(5) = 2
1
h(7)=0
f(7)=0
f(7) = h' = h(6) = 0
1
1
tM2Ji -k = 6 - 5 = 1
1
hM2J2=2
fM2J2=1+2 = 3
tM2J2 = 2
M2J2
Minimum makespan
= 12, and
Optimal sequence
= J3J1J2.
Panel 2.3: ContinuedfromPanel 2.2
From Search Path Diagram 2.3 w e can read the optimal sequence (from the root node
Mi J3 to the last operation to be scheduled M2J2). We can find the minimum makespan
by summing the edge costs (k) along the final path (1+2+1+5+1+2) or we can read it
from the value of/(0) = 12 at the root node of the final path. The minimum makespan
and the optimal sequence can also be read from the final Panel 2.3 in the last row o
the panel.
The fact that J3J1J2 is an optimal sequence is also easily checked by applying
Johnson's rule which was introduced and illustrated in chapter 1, section 1.6.
74
In the next section 2.3 w e formulate an improvement that w e can use in calculating
heuristic estimates once the search has commenced. This improvement will then be
incorporated into all the subsequent versions of our algorithm.
2.3 Improving Minimum Heuristic Estimates when the Search has
Commenced
The proof that our algorithm is optimal and complete was presented in chapter 1,
Section 1.12.4 and required that at any stage of the search the minimum heuristic
estimate must not exceed the minimum makespan.
For example, we see that this condition is satisfied by the values of h(l) in all Sear
Path Diagrams 2.1, 2.2 and 2.3.
We also see from these diagrams (and the associated panels) that backtracking
updates some previous minimum heuristic estimates on the path by increasing their
values. We note from the search Path Diagram 2.3, which represents the optimal
solution, that along the path the minimum heuristic estimates decrease in value from
h(0) =f(0) = minimum makespan at the root node to h =f= 0 at the terminal node.
Consequently, we are motivated to consider how we might increase the minimum
heuristic estimate at the time we first calculate it rather than returning to it as a
of backtracking. The procedure which we develop below achieves this outcome
sufficiently often for us to incorporate it into our algorithm.
The procedure in (2.3), used in Step 1 of our algorithm, calculates one heuristic
estimate at each node and is based on whether or not an operation is identified in cell
1 at the node. If both cell 1 and cell 2 are labelled with an operation then (2.3)
75
focuses on the operation in cell 1 and ignores the operation in cell 2. W e n o w modify
this procedure so that if both cells are labelled with an operation we calculate a
heuristic estimate for both cells hi (for cell 1), h2 (for cell 2) and finally we use
h = max {hi, /J2) as the heuristic estimate for the node. Heuristic estimates for nodes
where only one (or none) of the cells is labelled with an operation will be calculated
as before in (2.3) noting that if h = 0 then the node will be expanded to reach the
terminal node next. The rationale is, as for (2.3), that we calculate (at either cell) th
time required to complete the current operation (on either machine) plus the total time
required to complete this and subsequent operations on machine M2 assuming no idle
time on machine M2. Thus h will be admissible. The revised procedure is specified
in (2.4).
76
For each node at the current state level:
A
(a) For cell 1:
(i) if the cell is blank then set hi = 0,
(ii) otherwise use,
^ + By; for MiJi in the "not scheduled" state,
hj = h MiJi =
t MiJi + bi + BIJ; for MiJi in the "in-progress" state,
<_
where B y = S b j for ally such that MiJj is in the "not scheduled" state.
(b) For cell 2:
(i)
if the cell is blank then set h2 = 0,
(ii) otherwise use,
B2J; for M2Ji in the "not scheduled" state,
h2 = h M23[ =
t M2Ji + B2J; for M2J1 in the "in-progress" state,
where B2J = Z b j for ally such that M2JJ is in the "not scheduled" state. ,
j
J
If hi = h2 = 0 then the node will be expanded to give the terminal node next.
At this stage w e incorporate this procedure (2.4) into Version 2 of our algorithm:
77
Algorithm (Version 2)
Step 1: At the current state level:
If the heuristic estimate of one of the nodes has been updated by backtracking use the
updated value as the heuristic estimate h for the node and proceed to Step 2.
Otherwise, for each node calculate a heuristic estimate for each cell using (2.4). At
each node find h = max {hi, hi) and use h as the heuristic estimate for the node.
Step 2: Select the smallest heuristic estimate h amongst those resulting from Step 1
(Break ties randomly).
Step 3: Calculate/= h + k (edge cost).
Step4: If/> h' then backtrack to the previous node and replace the heuristic
estimate for that node h' with the value of/ Repeat Step 3 at the preceding
state level.
Step 5: If / < h' then expand the node to the next state level and repeat Step 1.
Step 6: If/= h = 0 then Stop.
We note that the minimum heuristic estimate in Step 2 is admissible because our
procedure for calculating /?.? assumes no idle time on machine M2 for the operations i
the "not scheduled" state using this machine and times on machine Mi are not
involved.
We present the results of using our algorithm (Version 2) to solve the same twomachine three-jobs problem (Table 2.3(b)) that we solved previously in section 2.2.4
using our algorithm (Version 1).
The search path diagrams and associated panels are presented noting that an
additional column has been included in panels to show max {hi, hi) and where
78
applicable heuristic estimates for both cells are shown on nodes in the search path
diagrams.
Example
M, M 2
Jl 3
J2 5
h 1
M,J,
3
A =13
6
2
2
h=15
h(0)=12 h ^ = l l
/»= 1+6+2+2
= 11
State Level 0
State Level 1
f>h'
backtrack and update
Search Path Diagram 2.4
79
Time
(t)
CO
CD
States
Values of h, f
and Operation Times
Largest h
and f at a
node
hMui = 3+6+2+2 = 13
fMui = 0+13 = 13
tM1J1 = 3
hMU2= 5+6+2+2 = 15
fMU2=0+15 = 15
tM1J2=5
hMU3= 1+6+2+2 = 11
fM1J3=0+11 = 1 1
tM1J3= 1
h M ui = 3+6+2 = 11
fM1J1 = 1+11 = 1 2
tM1J1 = 3
hM2J3 = 2+6+2 = 10
fM2J3= 1+10 = 11
tM2J3 = 2
hMU2= 5+6+2 = 13
fiwu2= 1+13 = 14
tM1J2=5
hM2J3= 2+6+2 = 10
fM2J3=1+10 = 11
tM2J3=2
hiwui = 3+6+2+2 = 13
fMui = 0+13 = 13
tM1J1 = 3
hMU2= 5+6+2+2 = 15
fMU2=0+15 = 15
tM1J2=5
hiwui =13
fM1J1=13
•O
Q.
3
C/3
1
Cost
(k)
3
73
CD
3
co
"c
CD
_i
t=0
x:
CO
0 t=0+,
MlJ3
k=0
O
not
scheduled
"55 Edge
>
CO
CO
M1J1
M2J1
M1J2
M2J2
M1J3
M2J3
M1J1
M2J1
M1J2
M2J2
M2J3
1
t=1,
k=1
M1J3
MiJi
M2J3
M2J1
M1J2
M2J2
E
1
2
1
2
3
0
t=0 +
k=0
MlJ 3
M1J1
M2J1
M1J2
M2J2
M2J3
Comments
0
3
2
Smallest h and
Associated f.
1
2
3
hM1J2=15
fM1J2=15
llM1J3=11
fM1J3=11
Panel 2.4
80
Since
f(1)>h' = h(0) = 11
backtrack.
hMui=11
frwiui =12
h(1)=hMui=11
f(1)=fM1J1=12
hM1J2=13
fM1J2=14
hMui=13
fM1J1 =13
hM1J2=15
fM1J2=15
hiuiu3=12
fM1J3=12
tM1J3= 1
h(0)=hMu3=11
f(0)=fM1J3=11
h(0)= h M u 3 = 1 2
f(0)=fM1J3=12
MJ,
A = 13
h = 1+6+2+2
= 11
h(0)=12
State Level 0
k=\
From Search
Path Diagram 2.4
h=3+6+2
=11
f(l)=ll+l M,J,
h=2+6+2
=12
=10
= h'
)t=2
h(l)=ll
3
M[J2
A = 5+2+6
5
=13
MJ3 h=2+6+2
2
=10
State Level 1
h(2)=9
f(2)=9+2
=11
= h'
h(3)=8
f(3)=8+l
=9
= h'
State Level 2
k=l
h = 5+2
=7
MJ, h = 6+2
=8
State Level 3
£=5
h(4)=3
f(4)=3+5
=8
= h'
State Level 4
M 2 J,
1
k=\
h(5)=2
f(5)=2+l
=3
= h'
State Level 5
k=2
h(6)=0
State Level 6
f(6)=2
= h'
k=0
h(7)=0
f(7)=0
= h'
Optimal Solution
State Level 7
h(7) = 0, f(7) = 0
Minimum makespan
12
Optimal sequence
•
JJJJJJ
Search Path Diagram 2.5: Continued from Search Path Diagram 2.4
81
Time
(t)
States
1 Edge
CD
_l
Cost
a.
3 (k)
3
TO
CO
4
1CO t=1
k=1
5
2 t=3
k=2
6
3 t=4
k=1
7
4 t=9
k=5
8
5 t=10
k=1
9
6 t=12
k=2
10
7 t=12 +
k=0
CO
CO
T3
CD
-£Z
CO
£
CD
"cz
e
M1J3
a.
MiJi
CZ
M2J3
M1J3
M2J3
MiJi
M1J3
M2J3
M1J1
MlJ2
M2J1
M1J3
M2J3
M1J1
M1J2
M1J3
M2J3
M1J1
M1J2
M2J1
M1J3
M2J3
M1J1
M1J2
M2J1
M2J2
M1J3
M2J3
M1J1
M1J2
M2J1
M2J2
M 2 Jl
T3
CD
ZJ
T3
CD
co
CD
-0
0
•z.
Values of h, f
and Operation Times
Largest h
and f at a
node
Smallest h and
Associated f.
Comments
M—
0
CD
-Q
o-g
E
CZ CO
Z3
M2J1
hMui = 3+6+2 = 11
-z.
M1J2
fM1J1 = 1 + 1 1 = 1 2
M2J2 1 tM1J1 = 3
ilM2J3= 2+6+2 = 10
fM2J3= 1+10 = 11
tM2J3=2
hMU2= 5+6+2 = 13
fM1J2= 1+13 = 14
2 tM1J2=5
hM2J3= 2+6+2 = 10
fM2J3= 1+10 = 11
tM2J3=2
M2J1
M1J2 1 tMui-k=3-2 = 1
M2J2
hMU2=5+2 = 7
M2J2
fM1J2 = 1 + 7 = 8
1 tM1J2=5
hM2Ji = 6+2 = 8
fM2J1 = 1 + 8 = 9
tM2J1 = 6
M2J2
1 tiui2ji -k = 6 - 5 = 1
f(1) = h' = h(0) = 12
hn/iui=11
fM1J1 =12
h(1)=h M ui=11
f(1)=fM1J1=12
riMU2=13
flM1J2=14
h(2)=h(1)-k
= 11-2 = 9
f(2)=2+9=11
hM2J1 = 8
fM2J1 = 9
h(3)= hM2Ji= 8
f(3)= fM2J1= 9
h(4)= h(3)-k
=8-5=3
f(4)= 5+3= 8
f(2) = h' = h(1) = 11
From this step on w e
see an improvement
over algorithm
(Version 1).
f(4) = h' = h(3) = 8
M2J2
h(5)= hiw2J2= 2
f(5)= fM2J2= 3
f(5) = h' = h(4) = 3
f(6) = h' = h(5) = 2
1
h(6)= 0
f(6)= 2+0=2
h(7)= 0
f(7)=0
f(7) = h' = h(6) = 0
1
tlM2J2=2
1 fM2J2=1+2 = 3
hM2J2=2
fM2J2 = 3
tM2J2 = 2
Minimum makespan
= 12, and
Optimal sequence
= J3J1J2.
Panel 2.5: Continued from Panel 2.4
82
In Table 2.4 w e compare our algorithm (Version 1) and our algorithm (Version 2) for
the problem in Table 2.3(b) involving two machines and three jobs.
Table 2.4: Comparison of Algorithm (Version 1) and Algorithm (Version 2)
Number of Nodes
Expanded
Number of
Backtracks
Number of Steps to
Complete Search
10
3
14
8
1
10
Algorithm
(Version 1)
Algorithm
(Version 2)
The improvement w e have introduced into version 2 of our algorithm is effective and
has reduced backtracking but not eliminated it entirely.
W e will incorporate this improvement into subsequent versions of our algorithm. The
improvement is based on the idea that having decided on a method for calculating
heuristic estimates for operations on machine Mi (cell 1 at a node) we can improve
the performance of the algorithm (reduce backtracking) by also calculating heuristic
estimates for both cells at a node where operations are possible on both machines.
The maximum of these estimates is then chosen as the heuristic estimate for that node.
Another way of improving the performance of the algorithm is to choose the "best"
method for calculating heuristic estimates for operations on machine Mi at the start o
the search and this is analysed in chapter 3.
We now extend our algorithm (Version 2) for use with problems involving three
machines.
83
2.4
The Three-Machine Problem
W e n o w consider the three-machine problem involving n jobs presented in Table
2.5(a) and we illustrate version 3 of our algorithm, which we develop in this section,
using the example in Table 2.5(b) which we continue to use in chapter 3.
Table 2.5 (a): Three Machines, n Jobs Table 2.5 (b): Three Machines, 3 Jobs
v
Machines
""N. Machines
JobsN.
Mi
M2
M3
Ji
\ ai
bi
Cl
a2
b2
•
an
h
•
Jn
Mi
M2
M3
Ji
2
3
4
C2
h
7
8
2
•
:
h
8
9
7
b„
Cn
Jobs
^v.
Since w e n o w have three machines w e need to introduce additional notation and
definitions, analogous to those used for the two-machine problem, for M3J{, t M3Ji and
h M3Ji. Also, our search path diagrams will now involve nodes consisting of three
cells (cell 1 for possible operations on machine Mi, cell 2 for possible operations on
machine M2 and cell 3 for possible operations on machine M3).
In order to incorporate the improvement, which lead to version 2 of our algorithm, we
will calculate heuristic estimates for all cells at a node. This enables us to present
revised, simpler statements analogous to (2.4) and incorporating machine M3.
Also, we introduce a new method for finding heuristic estimates at the start of the
search when we are only considering possible operations to start on machine Mi.
84
The method proposed for calculating heuristic estimates for cell 1 of the root nodes at
the start of the search uses,
/z = /jMiJi = ai + bi+ iq. (2.5)
J=i
So that the minimum value ofh amongst the heuristic estimates at the root nodes is,
h(0) = min (ai+bi, a2+b2,... , an+bn) + ECJ •
j=i
Intuitively we see that this minimum heuristic estimate h{0) is admissible since it
assumes that we start with MiJi where a; + bi is as small as possible and there is no
idle time on machine M3. In chapter 3 we prove that h{0) is admissible and compare
it to other methods for calculating an admissible heuristic estimate at the start of th
search.
We now extend the procedure for calculating heuristic estimates presented in (2.4) to
incorporate machine M3. This leads to Procedure (Version 1) which is based on using
(2.5) as the method for calculating initial heuristic estimates at the root nodes.
85
Procedure (Version 1)
For a node at the current state level:
(a) For cell 1:
(i)
if the cell is blank then set hi = 0,
(ii) otherwise use,
a; + bi + CIJ; for MiJi in the "not scheduled" state,
hi = h MiJi
t MiJi + bi + Ci + CIJ; for MiJ; in the "in-progress" state,
where Cij = £ C J for ally such that MiJj is in the "not scheduled" state.
(b) For cell 2:
(i)
if the cell is blank then set h2 = 0,
(ii) otherwise use,
bi + C2J; for M2Ji in the "not scheduled" state,
h2 = h M2Ji =
t M 2 Ji + Ci + C2j-; for M 2 Ji in the "in-progress" state,
where C 2 j = E C J for ally such that M 2 Jj is in the "not scheduled" state.
86
(c) For cell 3:
(i) if the cell is blank then set h3 = 0,
(ii) otherwise use,
C3j; for M3Ji in the "not scheduled" state,
hs = h M 3 J; =
t M 3 Jj + C 3 J; for M 3 J; in the "in-progress" state,
where C 3 j = £ C J for ally such that M 3 Jj is in the "not scheduled" state.
j
At each state level these calculations lead to values of hi, h2, A3 at each node which
ensure that minimum heuristic h{l) amongst all the nodes is admissible. We note that
at the start of the search hi in Procedure (Version 1) is the same as h in (2.5) becaus
at that time all operations are in the "not scheduled" state.
Now we present version 3 of our algorithm for the three-machine problem where we
calculate heuristic estimates at the nodes using Procedure (Version 1).
Algorithm (Version 3)
Step 1: At the current state level:
If the heuristic estimate of one of the nodes has been updated by backtracking use the
updated value as the heuristic estimate h for that node and proceed to Step 2.
Otherwise, for each node calculate a heuristic estimate for each cell using Procedure
(Version 1). At each node find h = max {hi, h2, h3) and use h as the heuristic estimate
for that node.
Step 2: Select the smallest heuristic estimate h amongst those resulting from Step 1
(Break ties randomly).
87
Step 3:
Calculate/= h + k (edge cost).
Step 4: If / > h' then backtrack to the previous node and replace the heuristic
estimate for that node {h') with the value of f Repeat Step 3 at the
preceding state level.
Step 5: If / < K then expand the node to the next state level and repeat from Step
1.
Step 6: If/= h = 0 then Stop.
We illustrate the use of version 3 of our algorithm for the problem in Table 2.4(b)
involving three machines and three jobs. The search path diagrams and panels are in
the Appendix (Al) and we note:
(i) the optimal sequence JiJ3J2, with makespan 29, was found after
expanding 22 nodes and using 12 backtracks involving a total of 35
steps.
(ii) the problem was solved also by using an exhaustive search which
considered all possible sequences of jobs and this verified that the
solution found by our algorithm (version 3) was optimal.
In chapter 3 we seek to improve the performance of version 3 of our algorithm by
considering how to choose a method for calculating heuristic estimates at the start of
the search so that the minimum of the heuristic estimates for the root nodes is not on
admissible but is as close as possible to the minimum makespan.
88
Chapter 3: A n Analysis and Comparison of Plausible
Heuristic Functions
3.1 Introduction
In chapter 2 we developed a new algorithm to solve the three-machine flow-shop
problem. One of the important aspects of this heuristic search algorithm is the initial
choice of a heuristic function. A high quality heuristic function will not overestimate
the minimum makespan but will be as close as possible to it.
A heuristic function with this property will improve the performance of the algorithm
by reducing the number of nodes expanded and/or the number of backtracks
performed thus reducing the number of steps required to complete the search for an
optimal sequence.
In this chapter we analyse three heuristic functions which are plausible in the sense
that under simple conditions they are reasonable underestimates of the minimum
makespan (ie. they are admissible). We provide proofs for the conditions under which
these heuristics are admissible and we establish a set of guidelines for choosing a hig
quality heuristic for any given three-machine problem. These guidelines are then
incorporated into our algorithm.
To facilitate our analysis and discussion we introduce some definitions and notations.
89
3.2
Definitions and Notations
The problem we are addressing is represented by Table 3.1.
Table 3.1: Three machines and n jobs
^^^Machine
Mi
M2
M3
Ji
ai
b,
Cl
h
a2
b2
c2
•
;
•
;
Jk
ak
bk
Ck
•
•
Jn
an
Jobs
^^^
•
•
b„
Cn
W e define,
ak = min(ai, a2, a3,..., an),
ap + bp = min(ai+bi, a2+b2, a3+b3,..., an+bn),
Cn = min(ci, ca, c3,..., Cn),
cij = min(ci, c2, c3,..., Cj-i, cj+i,..., cn), for j = 1,2,3,... ,n ,
T* = minimum makespan,
(/)s= Js,..., Jt a sequence of the n jobs Ji, J2, ..., Jn where job Js is scheduled fi
the sequence and Jt is last,
T( (/)st) = makespan for the sequence ^st,
S( <j) ) = the time at which all jobs in (f)st axe completed on machine M2.
For example, continuing the three-machine problem, which we have used in chapter 2,
shown again in Table 3.2, we have:
90
Table 3.2: Three Machines, Three Jobs
N
v
Machines
Mi
M2
M3
Ji
2
3
4
h
7
8
2
h
8
9
7
Jobs ^ v
ak = ai = 2, ap + b p = ai + bi = 2 + 3 = 5,
Cm = c2 = 2, cn = min (c2, c3) = c2 = 2, cc = min (ci, c3) = ci = 4,
c!3 = min (ci, c2) = c2 = 2.
As shown in chapter 2 for this problem T* = 29 and the optimal sequence is JiJ3J2.
Suppose 0st = JiJ2J3 then it is easily verified that T( ^st) = 33 and S( ^st) = 26.
3.3
Plausible Heuristics
In Figure 3.1 w e represent the process of completing the job sequence ^ - Js,..., Jt
of n jobs on three machines Mi, M2, M3.
S(^)>as+Zb
t
i
W*S(0+ct
*Ib
Figure 3.1: Sequence of n jobs ^st
91
O n Figure 3.1 w e have noted that the earliest time job Jt can be completed on machine
M2 is as + £b (which assumes no idle time on machine M2). Similarly, the earliest
time all jobs could be completed on machine M3 is S(^st) + Ct (which assumes that
job Jt, the last in the sequence, can proceed to machine 3 immediately it is finishe
machine M2).
We now formulate three plausible heuristic functions for estimating the minimum
makespan T* at the start of the search.
From Figure 3.1, we see that if job Js is scheduled first on each machine then,
assuming no idle time on machine M3, T( ^st) = as + bs + £ c. Hence, if we evaluate
as + bs + £c for s = 1, 2, ... , n and select the minimum of these values it is plau
that this will be an underestimate of the minimum makespan T*. This leads to the
heuristic function Hi where,
Hi =min(ai + bi+i;c, a2 + b2+£c,.., an+bn+Xc)
= min(ai + bi, a2 + b2,.., a„+ bn) + £c
= ap + bp+ £c .
Using this heuristic function Hi means that we would start the search for the optim
sequence by assigning job Jp first on machine Mi.
We note from (2.5) that Hi is the heuristic function that we have used in developing
the current version 3 of our algorithm in chapter 2, section 2.4.
Again from Figure 3.1, we see that if we assume no idle time on machine M2 then
T(</> ) > ^ + Ct + £b (s *• t). This suggests that if we evaluate as + Ct+Zb(s*t)
for s = 1, 2, 3, ... , n and select the minimum of these then this will be an initia
92
underestimate of the m i n i m u m makespan T*.
This gives the plausible heuristic
function H2 where,
H2 = min[ai + min(c2,c3,.., cO + £b, a2 + min(ci,c3,.., cO + Zb,..., an +
min(ci,C2,.., cn) + £b]
= aj + min(ci, c2,..., Cj.h cj+i,..., cn) + Sb
= aj + cij+Eb.
Using H2 means job Jj will be assigned first on machine Mi.
A third plausible heuristic function H3 is derived by assuming that we can assign jo
Jk first and job Jm last on machine Mi where job Jk requires the least time for
completion on machine Mi and job Jm requires the least time for completion on
machine M3 and there is no idle time on machine M2. Consequently,
H3 = min(ai, a2, a3,..., an) + min(ci, c2,..., c) + £b
= ak + Cm+ Eb.
Although these heuristic functions Hi, H2 and H3 are plausible, in the sense that it
seems reasonable that they provide initial high quality underestimates of the minimu
makespan T*, we need to prove that they are admissible and analyse the conditions
under which one or more of them may be selected as the "best" heuristic function for
a given problem. By "best" we mean admissible and the closest to T*. We begin that
analysis in section 3.4.
Note:
In our research we also considered two additional heuristic functions H4 and H5. Our
analysis of all five heuristic functions revealed that H4 and H5 may be discarded si
93
they do not emerge as the best heuristic functions under any conditions.
Consequently, for simplicity we have excluded H4 and H5 from the analysis presented
in subsequent sections. However, the reasoning behind the formulation of H4 and H5
is presented below.
The heuristic function H4 recognises that in the formulation of H3 job Jk may also b
the job which requires the least time for completion on machine M3. In which case
we assign job Jk first on machine Mi and the job requiring the next smallest
completion time on machine M3 is scheduled last on machine Mi. This gives,
H4 = ak + min(ci,.., ck.h ck+i,.., d) + Xb
= ak+cik+ Zb.
The heuristic function H5 assumes no idle time on machine M2 and assigns job Ji firs
on machine Mi where job Ji requires the least total time on machine Mi and M3 for
completion. Consequently,
H5 = min(ai+ci, a2+C2, a3+c3, ..., an+Cn) + Zb
= ai + c+ Xb-
3.4 The Admissibility of Hi, H2 and H3
From Figure 3.1 we note that,
T(^st) > S(^st) + c > as + C+ Xb. (3-1)
If T*{(j) ) = the minimum makespan for all sequences starting with Ji and ending wi
Jt where t = 2, 3, ..., n then from (3.1),
94
T*(^ x ) > ai + min(c2, c3,..., cn) + X b
Similarly,
T*(^2) > a2 + min(ci, c3,..., c) + Xb
T*(^3 ) > a3 + min(ci, c2, c4,..., cn) + Xb (3.2)
T*(^n) > an + min(ci, c2,..., cn_i) + Xb
Now T* = the minimum makespan
= min[T*(^),T*(^2),...,T*(^n)]
> min[ai + min(c2,c3,..., c„), a2+ min(ci,c3,..., c),...,
an+ min(ci,c2,..., Cn-i)]+ Xb (from (3.2))
= aj + cIj+Xb (3.3)
= H2.
Also, from Figure 3.1 we see that
T*(^s)>as + bs+Xc
So,
T*(^)>ai+bi + Xc
T*(^2)>a2 + b2+Xc
T*(^n)>an + bn+Xc
95
Hence,
T*=min[T*(^),T*(^2),...,T*(^n)]
> min[ai + bi, a2 + b2, a3 + b3,..., an + bn] + Xc
= ap + bp+ Xc=Hi.
From (3.3), since aj > ak and cy > cm for j = 1, 2,..., n , we see that,
T* >aj + cij+Xb >ak + Cm+Xb=H3.
Consequently, Hi, H2 and H3 are admissible heuristic functions. This vali
use of Hi in the development of the current version 3 of our algorithm i
We now address the question as to which of these admissible heuristic fu
closest to the minimum makespan T* for a given problem involving three m
and n jobs.
3.5 The Relative Magnitudes of H1? H2, H3
In this section we derive a set of results which in combination will ena
specify a procedure for selecting the best initial heuristic function am
Result 3.1
(a) If ck = Cn then H2 > H3
(b) Ifck>CnthenH2 = H3
96
Proofs:
(a) If ck = cm then cy > Cn fory* = k and cy = Cn fory * k. Hence,
H2 = min(ai + cn, a2 + CE,..., ak+ dk,..., an+ cin) + Xb
= min(ai + d, a2+ Cn,..., ak+ cik,..., an+ Cn) + Xb
= aj + cij+ Xb
> ak + Cm+ Xb
= H3.
(b) If ck > Cn then cy > Cm fory = m and cy = Cn fory 5= m. Hence,
H2 = min(ai+cn, a2+CLz, ..., am+cim,... ,ak+cik,..., an+cm) + Xb
= ak + Cn+ Xb
= H3.
Now we need to find the relative position of Hi with respect to H2 and H
We introduce the following definitions:
P = [Xb-(ap+bp)] + aj,
Q= Xc -Cy,
R=[Xb-(ap+bp)] + ak, (3-4)
S= XC "Cn-
97
Result 3.2
(a) If c k = Cm then H i > H 2 iff P < Q and H i > H 3 iff R < S . Also, w e see that Q < S ,
P > R and H2 > H3 from Result 3.1 (a).
Under these conditions we display in Table 3.3(a):
(i) the possible relationships amongst P, Q, R and S,
(ii) the ordering of H i, H2, H3 for each of these relationships,
(iii) the best admissible heuristic function for each of the relationships.
Table 3.3(a): Best Admissible Heuristic for ck = c,
Relationships
Ordering
Best Heuristic
P>R>S>Q
H2>H3>Hi
H2
P>S>R>Q
H 2 >Hi > H 3
H2
P>S>Q>R
H2>Hi>H3
H2
S>P>R>Q
H 2 >Hi > H 3
H2
S>P>Q>R
H 2 >Hi > H 3
H2
S>Q>P>R
Hi > H 2 > H 3
Hi
(b) If ck > Cm then Hi > H 2 iff R < S and using Result 3.1(b) we have in Table 3.3(b)
similar detail to that displayed in Table 3.3(a).
Table 3.3(b): Best Admissible Heuristic(s) for ck > c,
Relationships
Ordering
Best Heuristic
R>S
H2=H3>Hi
H 2 or H 3
R<S
Hi>H2=H3
Hi
W e n o w develop guidelines for selecting the best initial heuristic function.
98
3.6
Guidelines for Selecting the Best Admissible Heuristic
The information displayed in Tables 3.3(a) and (b) provides us with guidelines for
selecting the best initial heuristic function to incorporate into our heuristic search
algorithm.
Guidelines (Version 1):
(1) Find ck and Cn from the problem and calculate P, Q, R and S from (3.4).
(2) If ck = Cn and S > Q > P > R then select Hi. Otherwise select H2.
(3) If ck > Cn and R>S then select H2 or H3. Otherwise select Hi.
Extensive testing of these guidelines for a wide variety of problems involving three
machines and n jobs has validated their reliability and highlighted the following issu
which we emphasise and illustrate.
In the particular case where ck > Cn and R> S we are offered a choice between H2 and
H3 as the best heuristic function to use in order to start the search.
We recall from chapter 2, section 2.4, that we introduced an improvement in the
process for calculating heuristic estimates at a node which is used after we set out
from a root node along a particular search path. In this case where the guidelines
offer a choice between H2 and H3 we have found from extensive testing on a wide
variety of problems that H2 is generally better than H3 as the search progresses and
consequently in the case where ck > Cn and R> S we recommend using H2. An insight
into why H2 is generally better is discussed in the notes following Procedure 1 in
section 3.7 below. We also illustrate the better performance of H2 over H3 in the
example in section 3.9.
99
Consequently, w e arrive at the following set of guidelines for selecting the heuristic
function to use at the start of the search where for the first time we are seeking the
minimum heuristic estimate amongst heuristic estimates associated with the root
nodes.
Guidelines 1
Using ak = min(ai, a2, a3,..., an) and Cm = min(ci, c2,..., cn):
(1) Find ck and Cm and evaluate P, Q, R and S from (3.4)
(2) If ck = Cm and S > Q > P > R then select Hi. Otherwise select H2.
(3) If ck > Cm and R> S then select H2. Otherwise select Hi.
The use of these guidelines will become the first step in the next version of our
algorithm which also needs to incorporate some further modifications to Procedure
(Version 1).
3.7 Modifications to Procedure (Version 1)
Procedure (chapter 2, section 2.4) requires modification in order to incorporate
Guidelines 1 above.
Depending on which of Hi, H2 is chosen, according to these guidelines, the
calculation of hi for cell 1 at a node will need to be specified. At present Procedure
(Version 1) only specifies the calculation of hi if Hi was selected. The calculations
h2 and h3 for cells 2 and 3 will not change.
This leads to Procedure 1 which will be used in our final version of the algorithm
presented in the next section.
100
Procedure 1
For a node at the current state level:
(a) For cell 1:
(i) if the cell is blank then set hi = 0,
(ii)
otherwise,
• if Hi is being used then calculate,
ai + bi + Cy; for MiJj in the "not scheduled" state,
hi = h MiJi =
t MiJj + bi + Ci + Cy; for M J i in the "in-progress" state,
where Cij = X C J for ally* such that MiJj is in the "not scheduled" state.
j
• if H2 is being used then calculate,
lii + dii + BIJ; for MiJi in the "not scheduled" state,
h} = h MiJi=
t MiJi + du + bi + BIJ; for MiJi in the "in-progress" state,
J
*v_
where, By = Xbj and du = mine; for ally such that MiJj is in the "not scheduled"
j
j*i
state.
(b) For cell 2:
(i) if the cell is blank then set h2 = 0,
(ii) otherwise use,
'bj + C2j; for M2Ji in the "not scheduled" state,
h2 = h M2Ji =
t M2Ji + c + C2j; for M2Ji in the "in-progress" state,
where C2j = XCJ for ally such that M2Jj is in the "not scheduled" state.
101
(c) For cell 3:
(i) if the cell is blank then set h3 = 0,
(ii) otherwise use,
C3J; for M3Ji in the "not scheduled" state,
h3 = h M 3 Ji =
t M 3 Jj + C 3 J; for M 3 Jj in the "in-progress" state,
v
where C3j- = X C J for ally* such that M 3 Jj is in the "not scheduled" state.
j
Notes on Procedure 1
(i) If we were using H3 as our heuristic function to start the search then the
calculation of hi at cell 1 in Procedure 1 would be exactly as for H2 with du =
max(cy) replaced by dsi = max(c7) and in both cases we use ally* such that
j*i
J
MiJj is in the "not scheduled" state.
When we start the search H2 and H3 both expand the same root node and start
MiJk, where ak = min(ai, a2, a3,... , an). At state level 1 we see the possibility
for the two search paths to head in different directions because at the node
where we consider starting MiJm and M2Jk, where cm = min(ci, 02, ... , d),
du > dsi = Cn and so:
• for H2, hi = h MiJm = am + dK + By and,
• For H3, hi = h MiJm = am + Cn + By < hh using H2.
At all the other nodes at state level 1 using H2 or H3 gives hi = aj + Cm + BIJ;
/ ?£ m, k. Also h2 = bk + C2j at every node at state level 1 using H2 or H3. At
each node we find a heuristic estimate h using h = max {hi, h2, hi) and since
hi = 0 at the state level we are discussing it is possible that at the node for
102
M i J m the value of A will be different for H 2 and H 3 . Finally w e select a node
to expand or backtrack from and consequently a different node may be
selected using H2 than using H3.
(ii) We note that at the start of the search when all operations are in the "not
scheduled" state that:
du = dsi
=
min(ci, c2,..., c) = Cn as in Result 3.1(b).
3.8 The Algorithm (Final Version)
We now modify version 3 of our algorithm (section 2.4) to include Guidelines 1
(section 3.6) and Procedure 1 (section 3.7). This leads to the final version of the
algorithm and we conclude with an example where we use the algorithm to solve the
problem in Table 3.2 using Hi, H2 and H3.
Algorithm (Final Version)
Step 1: Select Hi or H2 following Guidelines 1.
Step 2: At the current state level:
If the heuristic estimate of one of the nodes has been updated by backtracking use the
updated value as the heuristic estimate h for that node and proceed to Step 3.
Otherwise, for each node calculate a heuristic estimate for each cell using Procedure
1. At each node find h = max {hi, h2, hi) and use h as the heuristic estimate for that
node.
Step 3: Select the smallest heuristic estimate h amongst those resulting from Step 2
(Break ties randomly).
Step 4: Calculate/= h + k (edge cost) for the node associated with h from Step 3.
103
Step 5: If / > h' then backtrack to the previous node and replace the heuristic
estimate for that node {h') with the value of f. Repeat Step 4 at the
preceding state level.
Step 6: If / < /*' then expand the node to the next state level and repeat from Step
2.
Step 7: If/= h = 0 then Stop.
We now apply the algorithm to a problem which will also enable us to provide further
detail to the discussion in section 3.6 relating to the choice between H2, H3 when
ck>CmandR>S.
3.9 An Example
For the problem in Table 3.2 we see that:
Ck
= 4 = Cl > C2 = 2 = cm, Hi = 18 < H2 = H3 = 24 and R= 17 > 11 = S.
Our guidelines propose using H2 as the method for finding the best admissible
heuristic estimate at the start of the search. We note that H2 = H3 and we see that i
Procedure 1 if we needed to calculate hi during the search process using H3 then we
would use,
"ai + Cm + BIJ; for MiJj in the "not scheduled" state,
hi = h MiJi
t M i Ji + c m + bi + BIJ; for M i Ji in the "in-progress" state.
(3.5)
For the present example for the purpose of making comparisons w e have used Hi, H 2
and H3 noting that we have already solved the problem using Hi in chapter 2 with the
detailed search path diagrams and panels provided in Appendix (Al).
104
Similarly w e provide in Appendix (A2) the search path diagrams and panels
associated with the problem using H2 and H3.
From the results in the Appendices A l , A 2 using these three different heuristic
functions we compare them in Table 3.4.
Table 3.4: Comparison of Hi, H2, H3 when ck > cm, R> S.
Heuristic Function
N u m b e r of Nodes
Expanded
N u m b e r of
Backtracks
N u m b e r of Steps
Hi
22
12
35
H2
13
3
17
H3
16
6
23
Clearly as predicted Hi is inferior to either H 2 or H 3 in this problem and H 2 is better
thanH3.
In the next chapter, we compare the use of the flow-shop heuristic algorithm
developed in chapter 3 with the use of the genetic algorithm described in chapter 1.
We discuss other improvements we can apply to the algorithm in terms of memory
usage. We will modify the flow-shop heuristic algorithm to enable us to find multiple
optimal solutions if they exist. We discuss the possibility of applying the algorithm
four-machine and five-machine flow-shop problems and some possible applications of
the algorithm to practical industrial problems.
105
Chapter 4: Related Findings and Discussion
4.1 Introduction
In this chapter, we compare the use of the flow-shop heuristic algorithm developed in
chapter 3 with the use of the genetic algorithm described in chapter 1. We discuss
other improvements we can apply to the algorithm in terms of memory usage. The
flow-shop heuristic algorithm is modified to find multiple optimal solutions for flow
shop problems that have more than one optimal solution. We then discuss the
possibility of applying the flow-shop heuristic algorithm to solve m-machine flow-
shop problems by carrying out a preliminary application of the algorithm developed in
this research to solve a four-machine and a five-machine flow-shop problem. The mmachine flow-shop problem will be a topic for future research. Finally, we discuss
the possible applications of the algorithm to practical industrial problems.
4.2 Comparison with the Genetic Algorithm
For the genetic algorithm described in chapter 1, when the number of jobs is less tha
the number of machines, the algorithm produces an optimal sequence and a minimum
makespan with good efficiency. When the number of jobs is twice the number of
machines, the efficiency decreases markedly and it takes three times as much effort t
produce the optimal solution as when the number of jobs is less than the number of
machines (Mulkens,1994; Nawaz et al., 1988). As the number of jobs increases, the
time to produce an optimal solution will be even longer and the efficiency of the
algorithm continues to decline. In practical situations, the number of jobs will alwa
be many times more than the number of machines and hence genetic algorithms will
not be suitable.
106
In comparison, the algorithm developed in this research does not demonstrate the
same deficiency. The processing time for the algorithm developed in this research
does increase as the number of jobs increases but this increase in processing time is
only gradual. No major increase in the processing time occurs when the number of
jobs becomes twice the number of the machines. Figure 4.1 illustrates the gradual
increase in processing time in response to an increase in the number of jobs in a thr
machine flow-shop problem using the algorithm developed in this research. The time
units for each problem were randomly generated and the same random seed was used
to generate each of the problems. The problem scenario is the same as the one used in
the genetic algorithm described in Chapter 1 (Mulkens,1994; Nawaz et al., 1988).
Therefore, the conditions and the requirements are the same as used by the genetic
algorithm. Table 4.1 demonstrates the behaviour of the genetic algorithm in solving
the flow-shop problem:
Table 4.1: Behaviour of Genetic Algorithm (Mulkens, 1994)
N u m b e r of Jobs
N u m b e r of
machines
5
10
Average number of
iterations
38.4
10
5
110.4
The above table illustrates the fact that as the number of jobs is double, the number of
iterations is greatly increased. It is reasonable to assume that if the number of job
increased by n times (n = 3, 4, 5, ...) then the rate of increase of the number of
iterations will be greater as n increases.
107
Number of Jobs Versus Time
30 -i
*
25-
TO
-
-
-
"
•
"
o
.
cn
*
^
••• • "
. • • •
—
~
~
"
^
3
Time in
§ 20 o
o
w 15 -
VJ
I
0
50
I
100
I
150
Number of Jobs
-•-GO — ^ G n -^-G1
Figure 4.1: Graph for Number of Jobs (n) Versus Time {i)
The three graphs in Figure 4.1 were constructed based on the "best fit" scenario using
data generated from the three-machine flow-shop algorithm. Each graph is
constructed by using 50 data points. The top graph Gl represents the "worst time"
taken to produce the optimal sequence for a randomly generated problem in response
to the number of jobs. The bottom graph GO represents the "best time" taken to
produce the optimal sequence for a randomly generated problem in response to the
number of jobs. The middle line Gn is the median of the other two lines. The median
exhibits a slope of approximately 1/8. These results show that the three-machine
flow-shop algorithm developed in this research behaves more consistently than the
genetic algorithm, noting that the comparison is only to demonstrate the behaviour of
the two algorithms. This also means that the three-machine flow-shop algorithm is
better in terms of applicability in normal practical situations where the number of jo
is always many times more than the number of machines. This algorithm can always
produce one optimal solution.
108
4.3
Memory Management.
As the number of nodes increases in accordance to the number of jobs, it is apparent
that memory management will become an important issue in relation to the number of
nodes expanded. With the current computer technology, the memory of the most
12
powerful computer is in the order of terabytes (10 ). It is an unusual condition to run
out of memory in any three-machine flow-shop problem with less than a thousand
jobs. If the number of jobs is larger than a thousand jobs then any flow-shop
algorithm will not be efficient enough to solve such a problem based on the problem
size and the number of iterations needed to perform the search. In such a case it is
more efficient to break the problem into smaller clusters and solve them individuall
In reality, it is rare to have a few thousands jobs sitting in the queue to be rearr
before being put through any processing systems. A more realistic perspective will b
the possibility of a few jobs through to a few hundreds jobs sitting in the queue
waiting to be processed. This is because, as the jobs pass through to each of the
machines, the number of jobs in the queue grows while waiting for the first job on th
queue to be processed. When the first job on the queue gets onto machine 1 then resequencing will occur. Similarly when a job is added to the queue re-sequencing will
also occur. As a result, we only need to consider the implications for memory
management of three-machine flow-shop problems that involve from a few jobs to a
few hundred jobs.
We propose the following two methods to address the memory requirements of the
three-machine flow-shop problem. The first method involves pruning (section 4.3.1),
which can be applied to the algorithm developed in this research in order to increase
the efficiency of memory usage. If the memory available is not enough to
109
accommodate the m e m o r y requirements of a three-machine flow-shop problem, and
reducing the problem into multiple clusters of smaller problems is not an option, the
the second method (section 4.3.2) which deals with limited memory will be useful to
find the near optimal solution using whatever memory capacity is available.
4.3.1 Pruning
One logical approach to improve the efficient use of memory is to reclaim the used
memory that is no longer needed. This can be achieved by pruning away the
unproductive branches in the search tree. Whenever a node is expanded and all the
subsequent nodes are visited and the goal is not among those nodes, then this node
together with all the subsequent expanded nodes is considered an unproductive
branch. An unproductive branch can be marked by assigning a very large value to the
heuristic estimate of the node. As a result of this large heuristic estimate, the
unproductive branch will not be visited again in the future. Hence, all the memory
used by the subsequent nodes in the unproductive branch is reclaimed. A node is
considered as being unproductive when the heuristic estimate of a node is larger tha
the largest heuristic estimate of a root node.
110
Mt M 2 M 3
3 4
Jl 2
7
J,
8 2
9 7
8
J3
/j = 28
h = 30
h(0)=28 hjfj^26
State Level 0
node 2
h=8+9+7+2
=26
h=3+4+2+7
=16
State Level 1
f>h'
backtrack and update
M,J,
ft = 28
State Level 2
k=4
h=8+9+7=24
A = 24
8
State Level 3
M2J2
8
h=8+2+7=17
N3^*
Figure 4.2: Unproductive nodes Nl, N 2 and N 3
For example in the above Figure 4.2, node 1 (denoted as Nl) of the left most branch
can be considered an unproductive node as the heuristic estimate h = 31 is larger tha
the largest heuristic estimate at a root node, h = 30. Hence, the memory used by the
subsequent nodes - N2 and N3 can be reclaimed. We continue to keep Nl in the
search tree in case the node is needed again due to backtracking. In the worst case
scenario, if the node is needed again due to the fact that the heuristic estimate cha
through backtracking, the node can always be expanded again. Reclaiming memory
through pruning the unproductive branches of the search tree will allow the algorithm
111
to use that reclaimed m e m o r y to expand nodes needed as the search path proceeds. If
memory is the limiting factor, pruning of unproductive branches will allow the
algorithm to solve larger problems and also will lead to more efficient use of memory.
4.3.2 Memory-Bounded Approach
Adding the memory-bounded feature as used in SMA* (Russell, 1992) to the threemachine flow-shop algorithm developed in this research is quite simple. When the
three-machine flow-shop algorithm needs to generate a successor node along the
search path but has no more memory left, the algorithm will make space on the search
tree by dropping a node from the tree. Nodes that are dropped are called "forgotten
nodes". The algorithm developed in this research will drop nodes that have large
heuristic estimates compared to the heuristic estimate of the root node on the curren
search path. To avoid re-exploring sub-trees that have been dropped from memory,
the algorithm will retain in the ancestor nodes information related to the quality of
best path in the forgotten sub-tree. In this way, the algorithm will only need to
regenerate the sub-tree when all other paths are worse than the path it has forgotten.
If all available memory has been used and it is not possible to drop nodes from the
search tree then the algorithm will return the best solution it can find. This soluti
may not be the optimal solution but will be nearly optimal.
If memory is the limiting factor, the memory-bounded three-machine flow-shop
algorithm can solve significantly more difficult problems compared to the non
memory-bounded algorithm without incurring considerable overhead in terms of extra
nodes generated. The memory-bounded three-machine flow-shop algorithm should
perform well on problems with highly connected state spaces. On very difficult
problems with a large number of jobs, the memory-bounded three-machine flow-shop
112
algorithm can be forced to continually switch back and forth between a set of
candidate solution paths similar to the SMA* algorithm. These candidate solution
paths can either be new search paths or forgotten sub-trees. In this case, the extra
time required for repeated regeneration of the same nodes means that problems that
could be solved by non memory-bounded algorithm easily, if given unlimited
memory, become intractable for the memory-bounded algorithm. Therefore, the
memory-bounded three-machine flow-shop algorithm will only be used when memory
limitations are a problem.
Similar to the SMA* algorithm, the memory-bounded three-machine flow-shop
algorithm will inherit the following properties:
• It will make use of whatever memory is available to it.
• If given enough memory, it will avoid repeated states.
• If given enough memory to store only the solution search path (the path leading to
the goal), the algorithm is complete.
• If given enough memory to store only the solution search path, it is optimal. If
there is insufficient memory to store the solution search path, it will return the b
solution that can be reached with the available memory.
• If given enough memory to store the whole search tree, it is optimally efficient.
4.4 Multiple Optimal Solutions
In practical flow-shop problems, it is sometimes necessary to find multiple optimal
sequences if the problem has more than one optimal solution. This is because
sometimes it is necessary to compare all the optimal sequences to determine which
sequence will give the shortest waiting time for jobs in the queue waiting to be
113
processed. With minor modifications to the three-machine flow-shop algorithm, other
optimal sequences can be found if the problem contains more than one optimal
solution.
The "Final Version" of the algorithm presented in chapter 3 (section 3.8) must be
modified in order to find multiple optimal solutions.
Algorithm for Multiple Optimal Solutions
Step 1: Select Hi or H2 following Guidelines 1.
Step 2: At the current state level:
If the heuristic estimate of one of the nodes has been updated by backtracking use the
updated value as the heuristic estimate h for that node and proceed to Step 3.
Otherwise, for each node calculate a heuristic estimate for each cell using Procedure
1. At each node find h = max {hi, h2, h3) and use h as the heuristic estimate for that
node.
Step 3: Select the smallest heuristic estimate h amongst those resulting from Step 2
(Break ties randomly).
Step 4: Calculate/= h + k (edge cost) for the node associated with h from Step 3.
Step 5: If / > W then backtrack to the previous node and replace the heuristic
estimate for that node {K) with the value of / Repeat Step 4 at the
preceding state level.
Step 6: If / < h' then expand the node to the next state level and repeat from Step
2.
Step 7: If/= h = 0, an optimal solution has been achieved. If there exists a node
along the optimal solution path at which a tie was randomly broken then
return to the state level of that node and repeat from Step 2 ignoring the
114
node that lead to the previous optimal solution(s). If any of the remaining
values of h at root nodes (state level 0) is less than or equal to the minimum
makespan then return to state level 0 and repeat from Step 2 ignoring the
root node that lead to the previous optimal solution(s). If all remaining
values of h at root nodes are greater than the minimum makespan then Stop.
We illustrate the algorithm for finding multiple optimal solutions using the problem
Table 4.2.
Table 4.2: Three Machines, Three Jobs
\v Machines
Jobs \ .
Mi
M2
M3
Ji
4
3
5
h
5
4
3
h
3
5
3
We have:
ck = 3 = cm=c3,P = Q = R = S = 8,Hi = H2 = H3=18.
Our Guidelines 1 propose using Hi as the method for finding the best admissible
heuristic estimate at the start of the search.
The search path diagrams and panels are in the Appendix (A3) and it is noted:
(i) two optimal sequences JiJ3J2 and hhh, with makespan 19, were found
after expanding 18 nodes and using 2 backtrack involving a total of 21
steps.
(ii) the problem was solved also by using an exhaustive search which
considered all possible sequences of jobs.
115
It can be seen that, if a flow-shop problem contained multiple optimal solutions, the
three-machine flow-shop algorithm developed in this research with minor
modifications can find all the optimal solutions.
4.5 Scalability
We now discuss the issue as to whether or not our algorithm can be extended to sol
m-machine problems where m = 4, 5, 6,.... As a preliminary study, we applied the
algorithm developed in this research to four-machine flow-shop problems and five-
machine flow-shop problems. The algorithm produced optimal solutions in both fourmachine and five-machine flow-shop problems. The outcomes from the preliminary
study show that the algorithm developed in this research is potentially scalable f
machine flow-shop problems. The following two sections demonstrate the use of the
algorithm for four-machine and five-machine flow-shop problems.
4.5.1 Four Machines Flow-Shop Problems
Given the four-machine flow-shop problem as follows:
Table 4.3 (a): Four Machines, n Jobs Table 4.3 (b): Four Machines, 3 Jobs
v
\w
Machines
Mi
M2
M3
M4
Ji
2
3
4
5
d2
h
5
4
2
3
•
h
4
5
4
2
Mi
M2
M3
M4
ai
bi
Cl
di
h
a2
b2
d
•
•
•
Jn
an
b„
Jobs X .
Ji
x
'•
Cn
Machines
Jobs
dn
116
^v
For a four-machine problem, additional notation and definitions, analogous to those
used for the three-machine problem, must be introduced for M4Ji, t M4Ji and h M4Ji.
Also, our search path diagrams will now involve nodes consisting of four cells (cell 1
for possible operations on machine M1, cell 2 for possible operations on machine M2,
cell 3 for possible operations on machine M3 and cell 4 for possible operations on
machine M4).
We now introduce a new method to find heuristic estimates at the start of the search
when we are only considering possible operations to start on machine M1.
The method proposed for calculating heuristic estimates for cell 1 of the root nodes at
the start of the search uses,

    h = h M1Ji = ai + bi + ci + Σ(j=1,...,n) dj                          (4.1)

so that the minimum value of h amongst the heuristic estimates at the root nodes is,

    h(0) = min(a1+b1+c1, a2+b2+c2, ..., an+bn+cn) + Σ(j=1,...,n) dj.

Intuitively we see that this minimum heuristic estimate h(0) is admissible since it
assumes that we start with M1Ji where ai + bi + ci is as small as possible and there is no
idle time on machine M4.
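As a quick check of (4.1) on the data of Table 4.3(b), a few lines of Python reproduce the root estimates (a sketch; job rows indexed 0 to 2 for J1 to J3):

    # Root estimates h = a_i + b_i + c_i + (sum of all d_j), as in (4.1)
    t = [(2, 3, 4, 5), (5, 4, 2, 3), (4, 5, 4, 2)]   # rows of Table 4.3(b)
    d_total = sum(row[3] for row in t)               # 5 + 3 + 2 = 10
    h_roots = [sum(row[:3]) + d_total for row in t]
    print(h_roots)                                   # [19, 21, 23]
    print(min(h_roots))                              # h(0) = 19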
We need to modify our Procedure 1 for calculating heuristic estimates to incorporate
machine M4. This leads to the following procedure for four machines:
Procedure for Four Machines
For a node at the current state level:
(a) For cell 1:
(i) if the cell is blank then set h1 = 0,
(ii) otherwise use,

    h1 = h M1Ji = ai + bi + ci + D1j,              for M1Ji in the "not scheduled" state,
                = t M1Ji + bi + ci + di + D1j,     for M1Ji in the "in-progress" state,

where D1j = Σ dj for all j such that M1Jj is in the "not scheduled" state.

(b) For cell 2:
(i) if the cell is blank then set h2 = 0,
(ii) otherwise use,

    h2 = h M2Ji = bi + ci + D2j,                   for M2Ji in the "not scheduled" state,
                = t M2Ji + ci + di + D2j,          for M2Ji in the "in-progress" state,

where D2j = Σ dj for all j such that M2Jj is in the "not scheduled" state.

(c) For cell 3:
(i) if the cell is blank then set h3 = 0,
(ii) otherwise use,

    h3 = h M3Ji = ci + D3j,                        for M3Ji in the "not scheduled" state,
                = t M3Ji + di + D3j,               for M3Ji in the "in-progress" state,

where D3j = Σ dj for all j such that M3Jj is in the "not scheduled" state.

(d) For cell 4:
(i) if the cell is blank then set h4 = 0,
(ii) otherwise use,

    h4 = h M4Ji = D4j,                             for M4Ji in the "not scheduled" state,
                = t M4Ji + D4j,                    for M4Ji in the "in-progress" state,

where D4j = Σ dj for all j such that M4Jj is in the "not scheduled" state.
At each state level these calculations lead to values of h1, h2, h3, h4 at each node and a
heuristic estimate h = max(h1, h2, h3, h4) at each node. Thus the minimum heuristic
estimate amongst all the nodes at a state level is admissible. Now we present our
preliminary algorithm for the four-machine problem where we start the search using
heuristic estimates at the root nodes given by the method which leads to (4.1).
Preliminary Algorithm for Four-Machine Flow-Shop Problem
Step 1: At the current state level:
If the heuristic estimate of one of the nodes has been updated by backtracking use the
updated value as the heuristic estimate h for that node and proceed to Step 2.
Otherwise, for each node calculate a heuristic estimate for each cell using the
Procedure for Four Machines. At each node find h = max(h1, h2, h3, h4) and use h
as the heuristic estimate for that node.
Step 2: Select the smallest heuristic estimate h amongst those resulting from Step 1
(Break ties randomly).
Step 3: Calculate f = h + k (edge cost).
Step 4: If f > h' then backtrack to the previous node and replace the heuristic
estimate for that node (h') with the value of f. Repeat Step 3 at the
preceding state level.
Step 5: If f ≤ h' then expand the node to the next state level and repeat from Step
1.
Step 6: If f = h = 0 then Stop.
We illustrate the use of the above algorithm for the problem in Table 4.3(b) involving
four machines and three jobs. The search path diagrams and panels are in the
Appendix (A4) and we note:
(i) the optimal sequence J1J3J2, with makespan 20, was found after
expanding 13 nodes and using 3 backtracks involving a total of 16
steps.
(ii) the problem was also solved by using an exhaustive search which
considered all possible sequences of jobs.
4.5.2 Five-Machine Flow-Shop Problems

Consider the following five-machine flow-shop problem:
Table 4.4 (a): Five Machines, n Jobs

    Jobs \ Machines    M1    M2    M3    M4    M5
    J1                 a1    b1    c1    d1    e1
    J2                 a2    b2    c2    d2    e2
    ...                ...   ...   ...   ...   ...
    Jn                 an    bn    cn    dn    en

Table 4.4 (b): Five Machines, 3 Jobs

    Jobs \ Machines    M1    M2    M3    M4    M5
    J1                  4     5     4     3     3
    J2                  5     4     2     3     4
    J3                  2     3     4     5     3
Since there are now five machines we need to introduce additional notation and
definitions, analogous to those used for the three-machine problem, for M5Ji,
t M5Ji and h M5Ji. Also, the search path diagrams will now involve nodes consisting
of five cells (cell 1 for possible operations on machine M1, cell 2 for possible
operations on machine M2, cell 3 for possible operations on machine M3, cell 4 for
possible operations on machine M4 and cell 5 for possible operations on machine M5).
We need to introduce a new method for finding heuristic estimates at the start of the
search when we are only considering possible operations to start on machine M1.
The method proposed for calculating heuristic estimates for cell 1 of the root nodes at
the start of the search uses,

    h = h M1Ji = ai + bi + ci + di + Σ(j=1,...,n) ej                     (4.2)

so that the minimum value of h amongst the heuristic estimates at the root nodes is,

    h(0) = min(a1+b1+c1+d1, a2+b2+c2+d2, ..., an+bn+cn+dn) + Σ(j=1,...,n) ej.

Intuitively it is seen that this minimum heuristic estimate h(0) is admissible since it
assumes that we start with M1Ji where ai + bi + ci + di is as small as possible and there
is no idle time on machine M5.
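The construction in (4.1) and (4.2) generalises directly: for m machines, take all of job i's operations except the last, plus every job's last-machine time. The sketch below is our extrapolation of that pattern, not a formula stated in this chapter:

    def root_estimate(times, i):
        """Root estimate for job i in an m-machine problem (cf. (4.1), (4.2)):
        job i's first m-1 operations plus every job's last operation,
        i.e. no idle time is assumed on the last machine."""
        m = len(times[0])
        last_total = sum(row[m - 1] for row in times)
        return sum(times[i][:m - 1]) + last_total

h(0) is then the minimum of root_estimate(times, i) taken over all jobs i.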
We need to modify our procedure for calculating heuristic estimates to incorporate
machine M5. This leads to the following procedure for five machines:
Procedure for Five Machines
For a node at the current state level:
(a) For cell 1:
(i) if the cell is blank then set h1 = 0,
(ii) otherwise use,

    h1 = h M1Ji = ai + bi + ci + di + E1j,          for M1Ji in the "not scheduled" state,
                = t M1Ji + bi + ci + di + ei + E1j, for M1Ji in the "in-progress" state,

where E1j = Σ ej for all j such that M1Jj is in the "not scheduled" state.

(b) For cell 2:
(i) if the cell is blank then set h2 = 0,
(ii) otherwise use,

    h2 = h M2Ji = bi + ci + di + E2j,               for M2Ji in the "not scheduled" state,
                = t M2Ji + ci + di + ei + E2j,      for M2Ji in the "in-progress" state,

where E2j = Σ ej for all j such that M2Jj is in the "not scheduled" state.

(c) For cell 3:
(i) if the cell is blank then set h3 = 0,
(ii) otherwise use,

    h3 = h M3Ji = ci + di + E3j,                    for M3Ji in the "not scheduled" state,
                = t M3Ji + di + ei + E3j,           for M3Ji in the "in-progress" state,

where E3j = Σ ej for all j such that M3Jj is in the "not scheduled" state.

(d) For cell 4:
(i) if the cell is blank then set h4 = 0,
(ii) otherwise use,

    h4 = h M4Ji = di + E4j,                         for M4Ji in the "not scheduled" state,
                = t M4Ji + ei + E4j,                for M4Ji in the "in-progress" state,

where E4j = Σ ej for all j such that M4Jj is in the "not scheduled" state.

(e) For cell 5:
(i) if the cell is blank then set h5 = 0,
(ii) otherwise use,

    h5 = h M5Ji = E5j,                              for M5Ji in the "not scheduled" state,
                = t M5Ji + E5j,                     for M5Ji in the "in-progress" state,

where E5j = Σ ej for all j such that M5Jj is in the "not scheduled" state.
At each state level these calculations lead to values of h1, h2, h3, h4, h5 at each node
and h = max(h1, h2, h3, h4, h5) is the heuristic estimate at a node. Thus the minimum
heuristic estimate amongst all the nodes at a state level is admissible. Now we present
our preliminary algorithm for the five-machine problem where we start the search using
heuristic estimates at the root nodes given by the method which leads to (4.2).
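The two procedures differ only in how far each job's remaining operations extend, which suggests the m-machine form needed for the scalability question. A sketch, using the same hypothetical node record as in the four-machine sketch (the generalisation itself is an extrapolation, since only m = 4 and m = 5 are demonstrated here):

    def cell_estimates_m(times, node, m):
        """Cell estimates h_1..h_m for one node of an m-machine search."""
        h = [0] * m
        for c in range(m):
            if node.cell[c] is None:          # blank cell
                continue
            job, state, t_rem = node.cell[c]
            # last-machine times of jobs not yet scheduled on machine c
            last = sum(times[j][m - 1] for j in node.unscheduled[c])
            if state == "not scheduled":
                h[c] = sum(times[job][c:m - 1]) + last
            else:                             # "in progress"
                h[c] = t_rem + sum(times[job][c + 1:m]) + last
        return h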
Preliminary Algorithm for Five-Machine Flow-Shop Problem
Step 1: At the current state level:
If the heuristic estimate of one of the nodes has been updated by backtracking use the
updated value as the heuristic estimate h for that node and proceed to Step 2.
Otherwise, for each node calculate a heuristic estimate for each cell using the Procedure
for Five Machines. At each node find h = max(h1, h2, h3, h4, h5) and use h as the
heuristic estimate for that node.
Step 2: Select the smallest heuristic estimate h amongst those resulting from Step 1
(Break ties randomly).
Step 3: Calculate f = h + k (edge cost).
Step 4: If f > h' then backtrack to the previous node and replace the heuristic
estimate for that node (h') with the value of f. Repeat Step 3 at the
preceding state level.
Step 5: If f ≤ h' then expand the node to the next state level and repeat from Step
1.
Step 6: If f = h = 0 then Stop.
The use of the above algorithm for the problem in Table 4.4(b), involving five
machines and three jobs, is illustrated. The search path diagrams and panels are in the
Appendix (A5) and we note:
(i) the optimal sequence J3J1J2, with makespan 25, was found after
expanding 19 nodes and using 6 backtracks involving a total of 25
steps.
(ii) the problem was also solved by using an exhaustive search which
considered all possible sequences of jobs.
4.6 Possible Applications of the Three-Machine Flow-Shop Algorithm

4.6.1 Manufacturing Production Lines and Three-Cell Robot Processing
This algorithm is naturally suited for any manufacturing line or assembly line that
involves three machines. A common scenario is where jobs are coming through on a
conveyor belt into those three machines for processing. In order to maximise the
productivity of those three machines, a special effort is needed to ensure those
machines will not be idle during the processing cycle. By applying the three-machine
flow-shop algorithm developed in this research, those jobs on the conveyor belt can
be re-arranged into an optimal sequence so that the three machines will be processing
those jobs with minimum idle time. This application will contribute significantly to
the production efficiency of the manufacturing industry.
The algorithm can also be applied to parts sequencing in an assembly line or in
manufacturing robotic cells. Imagine a robot needing to place parts on three different
production cells. Each part needs to be processed by the three production cells in
turn, and each production cell will spend a different amount of time processing the part.
The robot takes a part from an input station and places it in the first production cell.
When processing is completed by the first cell, the robot picks up the part and places
it in the second production cell. The first cell has now become idle, so the robot picks
up the next part from the input station and places it in the first production cell. When
the part has finished processing in the second cell, it is picked up again and placed in
the third cell. It is important for the first cell to finish processing its part as soon as
possible in order for the second cell not to be idle. When the third cell has completed
its processing, the part is picked up once more and placed in the output station.
Similarly, the part currently being processed by the second cell should be ready for
pickup by the robot and placement in the third cell as soon as possible. It is quite
obvious that this is a three-machine flow-shop problem; compared to the three-machine
manufacturing line, the robot is acting as the conveyor belt. In order to maximise the
efficiency of the three production cells, the three-machine flow-shop algorithm can be
applied to determine the order in which the robot takes parts from the input station.
4.6.2 Data Communication and Transportation
Many different algorithms can be used to solve logistics problems related to data
communication and transportation. One approach is the travelling salesman approach,
where we need to find the shortest path for a salesman to visit every city once.
Although the three-machine flow-shop algorithm developed in this research is not a
conventional approach like the travelling salesman approach, it presents a possible
alternative for logistics problems related to data communication and transportation.
The Internet is made up of many computers connected together and some of these
computers are used purely to manage the data communication traffic of the Internet.
A computer designed to monitor and manage the traffic of an Internet site is called a
Gateway. A Gateway is also the entrance and the exit for all communication at an
Internet site.
The basic principles of Internet communication can be summarised as follows. The
information travelling on any Internet route needs to be packaged into an Internet
packet and each Internet packet contains information related to the address of origin,
address of destination, packet identity, packet size, packet content and error checking
data. The size of the packet can vary depending on the amount of information in the
packet. Each packet also carries error-checking data that need to be verified by every
gateway along the communication route. If the information is too big to fit into one
Internet packet, it will be broken into multiple packets and sent individually and wil
be reassembled at the destination using information from the packet identity.
When a Gateway receives an Internet packet, if the information originates from a
computer within the local site then the Gateway will analyse the destination
information and forward the information to the next Gateway. The next Gateway will
carry out a number of procedures. First, it determines whether the integrity of the data
is intact by going through error checking routines using the error checking data. If the
data contains an error, a "resend" request will be sent to the address of origin and the
Internet packet will be discarded. If the integrity of the Internet packet is intact then
the Gateway will analyse the destination information to determine whether the packet
is destined for the local site. If the Internet packet is destined for the local site, it
will be forwarded to the destination computer via the internal communication
structure of the local site. If the Internet packet is not destined for the local site,
the Gateway will forward the Internet packet to the next Gateway based on the
destination information provided by the Internet packet. The length of time for each
Internet packet to be processed by a Gateway differs depending on the size of the
Internet packet. A large Internet packet will take a longer time to deliver to the next
Gateway compared to a small Internet packet. The Internet packet will be processed
and passed on from one Gateway to the next until it arrives at its destination. Internet
packets are processed by any Gateway on a first come, first served basis.
Imagine a special scenario of three Internet Gateways: Gateway 1, Gateway 2 and
Gateway 3. Gateway 1 is responsible for handling all the Internet packets of a
province, Gateway 2 is responsible for all the Internet packets of a state and Gateway
3 is responsible for all the Internet packets nationally and is the door to international
destinations. All international Internet packets from a province will be processed by
Gateway 1, then forwarded to Gateway 2 and finally to Gateway 3 before they can be
delivered to an international destination. The special scenario described here is a very
simplistic view of Internet communication. Nevertheless, it represents a three-machine
flow-shop problem where each Gateway represents a machine and all Internet
packets need to go through each Gateway before they can be delivered to an
international destination. If the information is too large to fit into one Internet packet,
it will be broken into multiple Internet packets, and these form the basis of a job with
multiple operations. Instead of using the current first come, first served strategy in
processing the Internet packets, which is rather inefficient, the algorithm developed in
this research can be applied to find the optimal sequence within a given "real-time"
window to improve the efficiency of the Gateway in processing the Internet packets.
4.6.3 Three CPUs Multiprocessor Computer Systems
Up until now, multiprocessor computers have been designed in multiples of two
processors due to the difficulty involved in load balancing. Currently, load balancing in
multiprocessor systems mainly involves Johnson's two-machine flow-shop algorithm
and some other pre-emptive scheduling algorithms. It is not efficient to design
multiprocessor computers with three processors, or any odd number of processors, due
to the problem of load balancing. Using the three-machine flow-shop algorithm
developed in this research, it is possible to design multiprocessor computer systems
with three processors or any odd number of processors. With the current
multiprocessor computer system, jobs in the job queue can be piped through the two
processors by using the two-machine flow-shop algorithm. Each job can either use
one processor entirely, or a large job can be split into two operations and each
operation will be carried out by the two processors in turn. A job that uses only one
processor can use either of the processors whenever a processor is idle. A job that
needs to use two processors can be split into two operations at any point where it needs
to access the hardware. It would be ideal if both processors had no idle time and
the load were evenly distributed across both of them. In reality, the first processor
will always carry a heavier load compared to the second processor. Currently, a job
can only be split into two operations (even though the job could be split into more than
two operations) due to the limitation of balancing load on a two-processor system. If
multiprocessor computers are equipped with four processors, this is equivalent to two
two-machine systems, each with its own job queue. With the three-machine flow-shop
algorithm developed in this research, it is possible to design a three-processor
computer system that can handle jobs that are split into three operations and carried
out by three different processors. The possibility of balancing load in three-processor
computer systems will also make upgrading of multiprocessor computers easier. This
is because, currently, we can upgrade from one processor to two processors; if there is
a need to upgrade again, the next step is from two processors to four processors. An
upgrade from two processors to three processors is not currently possible. The
algorithm developed in this research will make odd-number processor upgrades a real
possibility.
Chapter 5: Conclusions
For many years, flow-shop problems have been considered difficult to solve. We
have Johnson's algorithm for the two-machine flow-shop problem but anything more
than two machines is considered complex due to the problem being combinatorial in
nature. The undesirable characteristic of a combinatorial problem is the explosive
growth of the solution space and hence the computation time. For many years
researchers have been exploring ways of using Johnson's two-machine flow-shop
algorithm to solve more complex flow-shop problems. In order to use Johnson's
algorithm, one must be able to convert the problem into a two-machine problem or
reduce a large problem into multiple two-machine problems. The result of such a
conversion may not always be accurate, since the resulting multiple two-machine
problems usually do not reflect the true nature of the original flow-shop problem, and
this has become a serious problem for shop floor scheduling.
In this research, we have made a significant contribution by developing an algorithm
that can solve flow-shop scheduling problems beyond two machines. The three-machine
flow-shop heuristic algorithm we have developed will always produce an
optimal sequence. One characteristic of the three-machine flow-shop heuristic
algorithm developed in this research is its ability to learn from its current heuristic
estimate and determine accuracy by comparing it with the past history of the search in
order to improve the heuristic estimate. The process of learning from past heuristic
estimates reduces the disadvantage of the large computation time associated with most
flow-shop scheduling algorithms. In addition, the algorithm will update a heuristic
estimate more often whenever a search path is close to the optimal path. This
characteristic means that only paths close to the optimal path are considered seriously.
This significantly reduces the amount of memory used for the search. As a result, the
algorithm uses an efficient procedure in developing the optimal solution and also uses
memory in an efficient manner.
In the process of developing the algorithm, based on the SLA* algorithm (Zamani,
1995), we have introduced two important improvements related to the calculation of
heuristic estimates at the start of the search and during the search. In chapter 3 we
identified plausible heuristic functions which may be used to start the search process,
and we identified the conditions under which one of these may be selected as the best
admissible heuristic function to use. The proofs that these heuristics are admissible
ensured that the algorithm was complete and optimal. In chapter 2 we introduced an
improvement to the adapted SLA* algorithm which reduces backtracking once the
search has started and consequently improves the efficiency of the algorithm.
Unlike the genetic algorithm, the three-machine flow-shop heuristic algorithm behaves
quite consistently in terms of the number of jobs versus the time to find the optimal
solution. When the number of jobs in a flow-shop problem is more than twice the
number of machines, the genetic algorithm can take up to three times more effort to
find the optimal solution than when the number of jobs is less than the number of
machines. The effort required by the algorithm developed in this research does not
change drastically when the number of jobs increases beyond the number of machines,
and consequently it is better suited to practical industrial flow-shop problems than the
genetic algorithm.
The algorithm can be applied to many industrial problems. Production scheduling is a
major concern in manufacturing. Load balancing in odd-number multi-processor
computers is an "off-limits" challenge. Reducing congestion and eliminating data
spamming in data communication networks requires more intelligent "switchers",
"routers" or "Gateways". Based on the discussion presented in Chapter 4, we
envisage that the algorithm developed in this research will contribute significantly to
reducing current problems in the manufacturing industry, the computer industry, and
the digital data communication and transportation industries.
We have also demonstrated that, with minor modifications, the three-machine flow-shop
heuristic algorithm can efficiently find multiple optimal solutions when they
exist. This feature is important since it provides decision makers with options.
One area for future research is the scalability of the three-machine flow-shop heuristic
algorithm. Can the algorithm be applied to solve m-machine flow-shop problems? We
have demonstrated the use of the algorithm to solve a four-machine and a five-machine
flow-shop problem, and the results indicated that the algorithm is potentially
scalable.
The general job-shop problem is described as n/m/G/Fmax: schedule n jobs in an
arbitrary shop of m machines so that the last job is finished as soon as possible. A flow-shop
is one in which all the jobs follow essentially the same path from one machine to
another. The randomly routed job-shop has no common pattern of movement from
machine to machine. As a result, the number of possible job sequences for using
machines in a job-shop scheduling problem is much larger than for the flow-shop
scheduling problem. At the moment, there is no defined mathematical solution for
job-shop scheduling similar to the algorithm developed by Johnson for flow-shop
problems. Currently, an exhaustive search can be used to find the optimal solution for
the two-machine job-shop problem. Most other existing algorithms can find near-optimal
solutions for two-machine job-shop problems, but very few can find optimal solutions.
It is hoped that the research presented here and the heuristic algorithm that has been
developed can provide a basis for an approach to the complex job-shop problem.
References
Baker, K.R. (1974) Introduction to Sequencing and Scheduling, New York: Wiley
Publishing.

Campbell, H.G., Dudek, R.A. and Smith, M.L. (1970) "A heuristic algorithm for the
n job, m machine sequencing problem," Management Science, Vol. 16, No. 10, pp
B630-B637.

Cohen, P.R. and Feigenbaum, E.A. (1982) The Handbook of Artificial Intelligence,
Vol. III, Los Altos, California: William Kaufmann, Inc.

Conway, R.W., Maxwell, W.L. and Miller, L.W. (1967) Theory of Scheduling,
Reading, Massachusetts: Addison-Wesley.

Dijkstra, E.W. (1959) "A note on two problems in connexion with graphs",
Numerische Mathematik, Vol. 1, pp 269-271.

Gupta, J.N.D. (1970) "Optimality criteria for flow-shop schedules," AIIE
Transactions, Vol. 2, No. 4, pp 309-314.

Gupta, J.N.D. (1971) "A functional heuristic algorithm for the flow-shop scheduling
problem," Operational Research Quarterly, Vol. 22, No. 1, pp 39-47.

Hart, P.E., Nilsson, N.J. and Raphael, B. (1968) "A formal basis for the heuristic
determination of minimum cost paths", IEEE Transactions on Systems Science and
Cybernetics, Vol. SSC-4, No. 2, pp 100-107.

Holland, J. (1975) Adaptation in Natural and Artificial Systems, University of
Michigan Press.

Hong, T.P., Huang, C.M. and Yu, K.M. (1998) "LPT scheduling for fuzzy tasks,"
Fuzzy Sets and Systems, Vol. 97, pp 277-286.

Hong, T.P. and Chuang, T.N. (1999) "Fuzzy Palmer scheduling for flow shops with
more than two machines," Journal of Information Science and Engineering, Vol. 15,
pp 397-406.

Hong, T.P. and Wang, T.T. (1999) "A heuristic Palmer-based fuzzy flexible flow-shop
scheduling algorithm," Proceedings of the IEEE International Conference on
Fuzzy Systems, Vol. 3, pp 1493-1497.

Hong, T.P., Wang, C.L. and Wang, S.L. (2000) "A heuristic Gupta-based flexible
flow-shop scheduling algorithm," Proceedings of the IEEE International Conference
on Systems, Man and Cybernetics, Vol. 1, pp 319-322.

Ignall, E. and Schrage, L.E. (1965) "Application of the branch and bound technique
to some flow-shop scheduling problems," Operations Research, Vol. 13, No. 3, pp
400-412.

Johnson, S.M. (1954) "Optimal two- and three-stage production schedules with setup
times included," Naval Research Logistics Quarterly, Vol. 1, No. 1, pp 61-68.

Kolen, A. and Pesch, E. (1994) "Genetic local search in combinatorial optimization,"
Discrete Applied Mathematics, Vol. 48, pp 273-284.

Korf, R.E. (1985a) "Depth-first iterative-deepening: An optimal admissible tree
search", Artificial Intelligence, Vol. 27, pp 97-109.

Korf, R.E. (1985b) "Iterative-deepening A*: An optimal admissible tree search",
Proceedings of the 9th International Joint Conference on Artificial Intelligence
(IJCAI-85), Los Angeles, California: Morgan Kaufmann, pp 1034-1036.

Korf, R.E. (1990) "Real-time heuristic search", Artificial Intelligence, Vol. 42, pp
189-211.

Lomnicki, Z. (1965) "A branch and bound algorithm for the exact solution of the
three-machine scheduling problem," Operational Research Quarterly, Vol. 16, No. 1,
pp 89-100.

McMahon, G.B. and Burton, P.G. (1967) "Flow-shop scheduling with the branch and
bound method," Operations Research, Vol. 15, No. 3, pp 473-481.

Mulkens, H. (1994) "Revisiting the Johnson algorithm for flow-shop scheduling with
genetic algorithms" in Szelke, E. and Kerr, R.M. (eds.), Knowledge-based Reactive
Scheduling, IFIP Transactions B-15, Amsterdam: Elsevier Science B.V., North-Holland,
pp 69-80.

Nawaz, M., Enscore, E.E. and Ham, I. (1983) "A heuristic algorithm for the m-machine,
n-job flow-shop sequencing problem", Omega, Vol. 11, No. 1, pp 91-95.

Palmer, D.S. (1965) "Sequencing jobs through a multi-stage process in the minimum
total time - A quick method of obtaining a near optimum," Operational Research
Quarterly, Vol. 16, No. 1, pp 101-107.

Pearl, J. (1984) Heuristics: Intelligent Search Strategies for Computer Problem
Solving, Massachusetts: Addison-Wesley.

Pohl, I. (1969) "Bi-directional and heuristic search in path problems", Stanford,
California: SLAC (Stanford Linear Accelerator Center), Technical Report 104.

Pohl, I. (1971) "Bi-directional search" in Meltzer, B. and Michie, D. (eds.), Machine
Intelligence 6, Edinburgh, Scotland: Edinburgh University Press, pp 127-140.

Russell, S.J. (1992) "Efficient memory-bounded search methods", ECAI 92: 10th
European Conference on Artificial Intelligence Proceedings, pp 1-5.

Russell, S.J. and Norvig, P. (1995) Artificial Intelligence: A Modern Approach,
Englewood Cliffs, New Jersey: Prentice Hall.

Slate, D.J. and Atkin, L.R. (1977) "CHESS 4.5 - The Northwestern University Chess
Program" in Frey, P.W. (ed.), Chess Skill in Man and Machine, Berlin: Springer-Verlag,
pp 82-118.

Zamani, M.R. (1995) Intelligent Graph-search Techniques: An Application to Project
Scheduling Under Multiple Resource Constraints, PhD thesis, University of
Wollongong.
Appendices

A1: Using Heuristic Function H1

The worked example throughout Appendix A1 is a three-machine, three-job problem
with processing times J1 = (2, 3, 4), J2 = (7, 8, 2) and J3 = (8, 9, 7) on machines
M1, M2 and M3.

[Search Path Diagram A1(1) and Panel A1(1): search tree and step-by-step tables; the
extracted figure content is not recoverable.]
[Search Path Diagram A1(2) and Panel A1(2): continued from A1(1).]
[Search Path Diagram A1(3) and Panel A1(3): continued from A1(2).]
[Search Path Diagram A1(4) and Panel A1(4): continued from A1(3).]
[Search Path Diagram A1(5) and Panel A1(5): continued from A1(4).]
[Search Path Diagram A1(6) and Panel A1(6): continued from A1(5). Optimal
solution: minimum makespan = 29, optimal sequence J1J3J2.]

A2: Using Heuristic Functions H2, H3

A2.1: Using H2

[Search Path Diagram A2.1(1) and Panel A2.1(1): search tree and step-by-step tables;
the extracted figure content is not recoverable.]
[Search Path Diagram A2.1(2) and Panel A2.1(2): continued from A2.1(1).]
[Search Path Diagram A2.1(3) and Panel A2.1(3): continued from A2.1(2). Optimal
solution: minimum makespan = 29, optimal sequence J1J3J2.]

A2.2: Using H3

[Search Path Diagram A2.2(1) and Panel A2.2(1): search tree and step-by-step tables;
the extracted figure content is not recoverable.]
[Search Path Diagram A2.2(2) and Panel A2.2(2): continued from A2.2(1).]
[Search Path Diagram A2.2(3) and Panel A2.2(3): continued from A2.2(2).]
[Search Path Diagram A2.2(4) and Panel A2.2(4): continued from A2.2(3); the
remainder of the appendix is cut off in the source.]
at a node
Associated f.
*+—
1
k=7
M2J1
VI3J1
V11J3
VI1J2
•\ 1 = 10 VI1J1
( = 1 IM2J1
IVI3J1
1VI1J3
T3
3
=s
-a
CD
'c
CO
1 t=2,
15 6
Values of h,f
and Operation Times
. -a
CO
CD
States
1
hM2J2= 8+2=10
fM2J2=2+10 = 12
tM2J2 = 8
1M3J3 = 7+2= 9
fM3J3=2+9 = 11
!M3J3 ~ 7
k>j3-k= 9 - 7= 2
•hM1J2 = f(2) = 31
fMU2=2+31=33
h(1)=h M u3=27
f(1)=fM1J3=29
hMU3 =27
fM1J3 =29
hMU3=24
fruiU3 = 27
f(2)= ( M U 3 = 27
hMU3=2Q
fiwiu3 = 24
h(3)= h M u3= 20
f(3)=fMij3=24
t>M2J3 = 1 8
fM2J3 = 1 9
h(4)=h M 2J3=18
f(4)=fM2J3=19
flM2J3 =11
fM2J3= 18
h(5)= hM2j3= 11
f(5)=fM2J3=18
1M2J2 = 10
fM2J2= 12
h(2)=h M u3=24
h(6)=h M 2j2=10
f(6)=fM2J2=12
Since
f(6)>h' = h(5) = 11
backtrack.
Since
f(5)>h' = h(4) = 18
hM2J3 = f(6)=12 h(5)= hM2j3= 12 backtrack.
fM2J3 =7+12=19 f(5)=fM2J3=19
VI1J2
VI2J3
VI2J2
V13J2
VI3J3
M1J2=7
1
QM2J3=f(5)=19 h(4)=h M 2j3=19
fM2J3 =1+19=20 f(4)=fM2J3=20
1M2J3=9
Panel A2.2(4): Continued from Panel A2.2(3)
f(4) = h' = h(3) = 20
A25
M,J2
7
h(0)=29
6 = 29
M,J3
8
6 = 30
State Level 0
k=2
M,J,
6 = 31
M,J,
h(l)=27
State Level 1
State Level 2
State Level 3
From Search
Path Diagram A2.2(4)
State Level 4
State Level 5
1
Continue on next page '
Continue from previous diagram |
\k-= 2
h(6)=10 M J
2 2 6 = 8+2= 10
f(6)=10+2 8
M 3 J, h = 7+2= 9
= 12
7
= h'
k-= 7
h(7)=3 M 2 J 2
f(7)=3+7
= 10
= h'
State Level 6
6=10-7=3
State Level 7
1
k= 1
h(8)=2
State Level 8
f(8)=2+l M J
3 2
6 =2
=3
2
= h'
k=2
h(9)=0
State Level 9
f(9)=0+2
=2
= h'
h(10)=0
f(10)=0+0
=0
= h'
""*
=0
Optimal Solution:
h(10) = 0,f(10) = 0
Minimum makespan = 29
Optimal sequence
State Level 10
•
JJJJJJ
Search Path Diagram A2.2(5): Continued from Search Path Diagram A2.2(4).
A27
Time
(t)
to
CD
States
Values of h, f
and Operation Times
Largest h and f Smallest h and
at a node
Associated f.
tM2J3-k=9-7=2
hM2J3 = f(6) = 12h(5)= hM2j3= 12
fM2J3=7+12=19 f(5)=fM2J3=19
-a
"I Edge
CDCD
CO
18
19
20
21
22
23
CD
__l
3
CO
CO
b
6
7
8
9
10
Cost
(k)
to
to
T3
CD
XZ
CO
"<z
t = 17 M1J1
k = 7 M2J1
M3J1
M1J3
M1J2
t = 19 M1J1
k = 2 M2J1
M3J1
M1J3
M1J2
M2J3
t = 26 M1J1
k = 7 M2J1
M3J1
M1J3
M1J2
M2J3
M3J3
t = 27 M1J1
k = 1 M2J1
M3J1
M1J3
M1J2
M2J3
M3J3
M2J2
t = 29 M1J1
k = 2 M2J1
M3J1
M1J3
M1J2
M2J3
M3J3
M2J2
M3J2
t = 29 + M1J1
k = 0 M2J1
M3J1
M1J3
M1J2
M2J3
M3J3
M2J2
M3J2
•0
CD
3
T3
CD
s>
03
2
c
0.
M2J3
•
M2J2
M3J3
to
M2J2
M3J2
M3J3
0
z
M—
O
I—
CD
-Q
E
3
1z
M3J2
1
M2J2
Comments
hM2j2= 8+2=10
fM2J2=2+10 = 12
tM2J2 = 8
hM3J3=7+2=9
fM3J3 = 2+9 = 11
tM3J3 = 7
hM2J2=10
fM2J2=12
h(6)= hM2j2= 10
f(6)=fM2J2=12
M3J2
1
hM2j2=10-k=3
fM2J2 = 7+3 = 10
tM2J2-k = 8-7=1
tlM2J2 = 3
flW2J2=10
h(7)= hM2J2= 3
f(7)=fM2J2=10
1
tlM3J2=2
fM3J2= 1+2 = 3
tM3J2= 2
hM3J2=2
fM3J2=3
h(8)= hM3J2= 2
f(8)= fM3J2= 3
h=0
f=2
h(9)= h = 0
f(9)=f=2
M3J2
1
f(10) = h' = h(9) = 0
1
h=0
f=0
h(10)=h = 0
f(10)=f=0
Panel A2.2(5): Continued from Panel A2.2(4).
Minimum makespan
= 29, and
Optimal sequence
= J1J3J2.
A28
A3:
Finding Multiple Optimal Solutions
First O p t i m a l Solution
M, M 2 M 3
J, 4
h 5
h 3
3
4
5
5
3
3
h = 20
h(0)= =19
hjP)48
h=\9
State Level 0
State Level 1
flr=15
State Level 2
f>h'
backtrack and update
Search Path Diagram A3(l)
Time
(t)
Edge
Cost
(k)
o.
CD
_l
3
3
3 t=0
CO
CO
States
CO
to
TD
CD
e
xz
O)
CO
"c
1=:
o
L-
Q.
i
c
not
scheduled
"35
>
to
CD
TD
O
M1J1
M2J1
M3J1
M1J2
M2J2
M3J2
M1J3
M2J3
M3J3
z
"5
53
E
3
Largest h and fSmallest h and Comments
Values of h, f
Associated f.
and Operation Times at a node
A29
1
0 t=0+,
M1J1
k=0
2
1 t=4,
MiJi
k=4
M1J3
M2J1
M2J1
M3J1
M1J2
M2J2
M3J2
M1J3
M2J3
M3J3
M3J1
M2J2
M3J2
M1J2
M2J3
M3J3
hMui=4+3+5+3+3=18
1 fMui = 0+18 = 18
2
3
1
2
3
2 t=7,
k=3
M1J1
M1J3
M2J1
M1J2
M2J3
M3J1
M2J2
M3J2
M3J3
1
4
1 t=4,
k=4
M1J1
M1J3
M2J1
M3J1
M2J2
M3J2
M1J2
M2J3
M3J3
1
tM1J1 = 4
hwu2= 5+4+5+3+3=20
fMU2=0+20 = 20
tM1J2=5
hMU3= 3+5+5+3+3=19
fMU3= 0+19=19
tM1J3= 3
hMU2= 5+4+3+3=15
fMU2 = 4+15 = 19
tM1J2=5
hM2Ji = 3+5+3+3 = 14
fM2ji = 4+14 = 18
tM2J1 = 3
hMU3= 3+5+3+3=14
fMU3 = 4+14 = 18
tM1J3=3
hM2ji = 3+5+3+3 = 14
fM2Ji = 4+14 = 18
tM2J1 = 3
hMU2= 5+4+3=12
fMU2=3+12 = 15
tM1J2=5
hM2J3= 5+3+3=11
fM2J3=3+11=14
tM2J3 = 5
hM3Ji = 5+3+3=11
fM3J1 = 3 + 1 1 = 1 4
tlW3J1 = 5
hMU2 = 5+4+3+3=15
fMU2=4+15 = 19
tM1J2=5
hM2ji = 3++5+3+3 = 14
fM2Ji = 4+14 = 18
tM2J1 = 3
tvi1J3=3
2
hMui=18
fM1J1 =18
h M u2=20
fM1J2=20
h(0)= hMui=18
f(0)=fMui=18
hM1J3=19
fM1J3=19
flM1J2=15
flW1J2=19
h(1)= h M u3= 14
f(1)=fM1J3=18
hMU3=14
fM1J3=18
Since
f(2) > h' = h(1) = 14
backtrack.
tlM1J2=12
fM1J2=15
h(2)=hMU2=12
f(2)=fM1J2=15
Since
f(1)>h' = h(0) = 18
backtrack.
hM1J2=15
fM1J2=19
h(1)=h M u3=15
f(1)=fM1J3=19
hM1J3=f(2)=15
fM1J3=19
tM2J1 = 3
5
0 t=0+,
k=0
M1J1
M2J1
M3J1
M1J2
M2J2
M3J2
M1J3
M2J3
M3J3
hruiui =19
fM1J1=19
hMU2= 5+4+5+3+3=20 hMU2=20
fM1J2=20
f M u2=0+20 = 20
tM1J2=5
h M u3= 3+5+5+3+3=19
hM1J3=19
fain = 0+19=19
fM1J3=19
tM1J3= 3
Break ties randomly.
1 twui = 4
2
3
Panel A3(l).
h(0)= hMui=19
f(0)=fMui=19
M,J2
5
h = 19
6 = 20
M,J3
3
h =19
State Level 0
From Search
Path Diagram A3(l)
15
State Level 1
State Level 2
f(2)=12+3
= 15
k=5
h(3)=7
M,J,
f(3)=7+5
= 12
= h'
h = 4+3=7
State Level 3
h=3+3=6
\k = 3
h(4)=4
h = 7-3=4
f(4)=4+3
=7
= h'
£=1
h(5)=3
f(5)=3+l
=4
= h'
State Level 4
State Level 5
/z = 3
k=3
T
First Optimal Solution:
h(7) = 0, f(7) = 0
Minimum makespan = 1 9
Optimal sequence
•
JjJ 3 J 2
Search Path Diagram A3(2): Continued from Search Path Diagram A3(l).
A31
Time
(t)
Q.
CD
3:
"35
Edge
>
CD Cost
_l
(k)
CO
3
B
1CO
CO
t=4,
k=4
to
CD
TD
O
States
TD
CD
to
'c
M1J1
to
to
CD
D)
2
O.
M1J3
M2J1
TD
CD
3
TD
CD
o-g
c: co
M3J1
M2J2
M3J2
M1J2
M2J3
M3J3
Values of h, f
and Operation Times
Largest h and f Smallest h and
at a node
Associated f.
z
"o
1—
CD
-Q
E hMU2=5+4+3+3=15
3
z fMU2=4+15 = 19
1
Break ties randomly.
tM1J2=5
hMU2=15
fM1J2=19
hM2ji = 3++5+3+3 = 14
fM2ji = 4+14 = 18
h(1)=h M u3=15
f(1)=fM1J3=19
tM2J1 = 3
tM1J3=3
2
tM2J1 = 3
7
2
t=7,
k=3
M1J1
M1J3
M2J1
M1J2
M2J3
M3J1
M2J2
M3J2
M3J3
hMu3=f(2)=15
fM1J3=19
hMU2= 5+4+3=12
fMU2=3+12 = 15
tM1J2=5
1
hM2J3= 5+3+3=11
fM2J3=3+11=14
flU1J2=15
h(2)=h M u2=12
f(2)=fM1J2=15
tlM2J2=7
fM2J2=12
h(3)= flM2J2= 7
f(3)=fM2J2=12
hM2J2=4
fM2J2 =7
h(4)= hi«2J2= 4
f(4)=fM2J2=7
hM1J2=12
tM2J3=5
hM3ji = 5+3+3=11
fM3J1 = 3 + 1 1 = 1 4
tM3J1 = 5
8
9
3
4
t=12,
k=5
t=15,
k=3
M1J1
M2J1
M3J1
M1J2
M1J3
M2J3
M1J1
M2J1
M3J1
M1J2
M1J3
M2J3
M3J3
M2J2
M3J3
M3J2
1
M2J2
Comments
h M 2j2=4+3=7
fM2J2=5+7 = 12
tM2J2=4
tlM3J3= 3+3=6
fM3J3=5+6 = 11
tM3J3=3
M3J2
hM2J2-k= 7-3=4
fM2J2=4+3 = 7
tM2J2-k= 4-3 = 1
1
A32
10 5 i=16.
M1J1
M2J1
M3J1
M1J2
M2J2
M1J3
M2J3
M3J3
t = 19 M1J1
k = 3 M2J1
M3J1
M1J3
M1J2
M2J3
M3J3
M2J2
M3J2
t=19 + M1J1
k = 0 M2J1
M3J1
M1J3
M1J2
M2J3
M3J3
M2J2
M3J2
M3J2
k=1
11 6
12 7
1
1M3J2 = 3
fM3j2 = 1 + 3 = 4
tM3J2 = 3
1M3J2=3
FM3J2 =4
l(5)= flM3J2= 3
f(5)= fM3J2= 4
h=0
f=3
h(6)=h = 0
f(6)=f=3
1
f(7) = h* = h(6) = 0
1
h= 0
f=0
h(7)=h = 0
f(7)=f=0
Minimum makespan
= 19, and
Optimal sequence
= J1J3J2.
This is the first Optimal
| Solution.
Panel A3(2): Continued from Panel A3(l).
A33
Second Optimal Solution
M,JL
M,J,
h = 20
h=l9
A =19
State Level 0
From Search
Path Diagram A3(l)
k=3
First Optimal Solution
Search Path Diagram A3(2).
M,J,
h(l)=16
f(l)=16+3
= 19
= h'
M2J3
5
h = 4+3+5+3
=15
h =5+5+3+3
=16
h= n
M,J,
State Level 1
k=4
M,J2
h=5+4+3=12
5
h(2)=12 M,J
h = 16-4=12
f(2)=12+4
= 16
State Level 2
= h'
M,J.
h(3)=ll
h= 12-1=11
M,J
h = 3+5+3=11
f(3)=ll+l
M,X ^ = 5+3+3=11
= 12
= h'
k=3
State Level 3
M,J
A =11-3=8
h(4)=8
State Level 4
f(4)=8+3
= 11
= h'
h(5)=7
M,J,
h = 5+3=8
k=\
M,J,
h = 4+3=7
f(5)=7+l
= 8
= h'
State Level 5
MJ, h = 8-1=7
k=4
h(6)=3
State Level 6
f(6)=3+4
= 7
= h'
MJ, h = 3
--3
Second Optimal Solution:
h(8) = 0, f(8) = 0
Minimum makespan = 1 9
Optimal sequence
•
Search Path Diagram A3(3): Second Optimal Solution Continued from
Search Path Diagram A3(2)
A34
Time
(t)
o.
CD
CO
"55
Edge
>
CD Cost
_l
(k)
3
CO
13
14
0CO
1
t=0+,
k=0
t=3,
k=3
CO
CD
T3
O
States
TD
CD
XZ
CO
CO
CO
CD
OJ
s
"c
Q.
IE
1
M1J3
T3
CD
3
"O
CD
8%
M1J1
M2J1
M3J1
M1J2
M2J2
M3J2
M2J3
M3J3
MlJ3
M1J1
M2J3
M2J1
M3J1
M2J2
M3J2
M1J2
M3J3
2
16 3
t=7,
k=4
t=8,
k=1
MiJi
MlJ3
MiJi
MlJ3
M2J3
M1J2
M2J3
M1J2
M2J1
M3J3
M2J1
M3J1
M2J2
M3J2
M3J3
18
4
5
t=11,
k=3
t=12,
k=1
MiJi
M 2 J1
MlJ3
M2J3
M3J3
M1J2
M3J1
MiJi
M 2 J1
MlJ2
MlJ3
M2J3
M3J3
M2J2
M3J1
Comments
L—
CD
-O
E
z
1 tM1J1 = 4
2
3
1
1
M3J1
M2J2
M3J2
1
17
Largest h and f Smallest h and
at a node
Associated f.
z
"o
2
15
Values of h, f
and Operation Times
M2J2
M3J2
1
M3J2
1
hMui=19
fM1J1=19
hMU2= 5+4+5+3+3=20 tlM1J2 =20
fM1J2=20
fMU2=0+20 = 20
tM1J2=5
h M u3 = 3+5+5+3+3=19
hM1J3=19
fMU3= 0+19=19
fM1J3=19
tM1J3= 3
hMui = 4+3+5+3=15
fMui = 3+15 = 18
hM2J3=16
tM1J1 = 4
hM2J3 = 5+5+3+3 = 16 fM2J3=19
fM2J3=3+16 = 19
tM2J3=5
hMU2=5+4+5+3=17
fMU2 = 3+17 = 20
hwu3=17
tM1J2=5
hM2J3= 5+5+3+3 = 16 fiwu3 =20
f M 2J3=3+16 = 19
tM2J3=5
hMU2= 5+4+3=12
fMU2 = 4+12 = 16
hM1J2=12
tM1J2=5
fM1J2=16
hM2J3-k=16-4 = 12
fM2J3 = 4 + 1 2 = 1 6
tM2J3-k = 5-4 = 1
r>MU2-k= 12-1=11
fM1J2=1+11=12
tMU2-k=5-1 = 4
tlM1J2=11
hM2Ji = 3+5+3 = 11
fM1J2=12
fM2J1=1+11=12
tM2J1 = 3
hM3J3= 5+3+3 = 11
fM3J3= 1+11 = 12
tM3J3=3
h M u2-k= 11-3=8
fMu2 = 3+8 = 11
hM1J2=8
tMU2-k=4-3 = 1
fM1J2=11
hM3Ji = 5+3 = 8
fM3J1 = 3+8 = 11
tM3J1 = 5
h M 2J2=4+3=7
fM2J2 = 1 + 7 = 8
hM2J2 =7
tM2J2 = 4
fM2J2=8
hM3Ji-k=8-1= 7
fM3J1 = 1+7 = 8
tM3ji-k = 5 - 1 = 4
h(0)=h M u3=19
f(0)=fM1J3=19
h(1)=hM2J3=16
f(1)=fM2J3=19
h(2)=h M u2=12
f(2)=fM1J2=16
h(3)=h M u2=11
f(3)=fM1J2=12
h(4)= hMU2= 8
f(4)=fM1J2=11
h(5)= hM2j2= 7
f(5)= fM2J2= 8
Since w e have found
thefirstoptimal
solution using M1J1.
W e now use M1J3
because hMU3 is equal
to the minimum
makespan of the first
optimal solution
A35
19 6
20
21
t=16,
k=4
M1J1
M2J1
M3J1
M1J2
M2J2
M1J3
M2J3
M3J3
/ t = 19 M1J1
k = 3 M2J1
M3J1
M1J3
M1J2
M2J3
M3J3
M2J2
M3J2
8 t = 19 + M1J1
k = 0 M2J1
M3J1
M1J3
M1J2
M2J3
M3J3
M2J2
M3J2
M3J2
1
tlM3J2 = 3
fM3J2=4+3 = 7
tM3J2=3
hlUI3J2 =3
fM3J2=7
h(6)= hM3J2= 3
f(6)=fM3J2=7
h=0
f=3
h(7)= h = 0
f(7)=f=3
1
f(8) = h' = h(7) = 0
1
h=0
f=0
h(8)= h = 0
f(8)= f = 0
Panel A3(3): Second Optimal Solution Continued from Panel A3(2)
Minimum makespan
= 19, and
Optimal sequence
= J3J1J2.
This is the Second
Optimal Solution.
A36
A4:
Solving Four-Machine Flow-Shop Problem
M, M 2 M 3 M 4
J2
2
5
3
4
h
4
5
h
h = \9
4
2
4
5
3
2
M,J,
M,J2
2
5
A = 21
M,J3
4
A = 23
State Level 0
it = 2
h= 5+4+2+3+2
=16
h = 3+4+5+3+2
'=
17
M,J3
4
M,J,
A =18
h=4+5+4+3+2
=18
h = 2+4+5+3+2
=17
State Level 1
State Level 2
State Level 3
f>h'
backtrack and update
Search Path Diagram A4(l).
A37
Time
(t)
f Edge
Q.
3
CO
CD
_l
3
3
CO
1
Cost
(k)
TD
CD
XZ
to
to
to
-a
e
CD
3
T3
CD
Q.
'c
0 t=0+
M1J1
k=0
M1J1
c to
M2J1
M3J1
M4J1
M1J2
M2J2
M3J2
M4J2
M1J3
M2J3
M3J3
M4J3
M2J1
M3J1
M4J1
M1J2
M2J2
M3J2
M4J2
M1J3
M2J3
M3J3
M4J3
Largest h and f Smallest h and
at a node
Associated f.
z
"o
k_
CD
-Q
O-g
t=0
Values of h, f
and Operation Times
CO
CD
TD
O
States
E
zz
Z
1
2
hiuiui =2+3+4+5+3+2
=19
fMui = 0+19 = 19
tiviui = 2
hMU2=5+4+2+5+3+2
=21
fiwu2=0+21 =21
flM1J1=19
fMui =19
hM1J2=21
fM1J2=2l
h(0)=hMui=19
f(0)=fMui=19
tM1J2=5
3
hMU3=4+5+4+5+3+2
=23
fMU3= 0+23=23
hMU3=23
fM1J3 =23
tiVI1J3= 4
2
1 t=2,
M1J1
k=2
M1J2
M2J1
M3J1
M4J1
M2J2
M3J2
M4J2
M1J3
M2J3
M3J3
M4J3
hMU2= 5+4+2+3+2=16
fMU2 = 2+16 = 18
1 tiVI1J2= 5
hM2ji =17
hM2Ji = 3+4+5+3+2=17 fM2J1=19
fM2ji = 2+17 = 19
h(1)=hM2ji=17
f(1)=fM2J1=19
tiVI2J1 = 3
hMU3=4+5+4+3+2=18
fMU3=2+18 = 20
2 tlM1J3 = 4
hlM1J3=18
hM2ji = 3+4+5+3+2=17 fM1J3=20
fM2Ji = 2+17 = 19
tM2J1 = 3
3
2 t=5,
k=3
M1J1
M2J1
M1J2
M3J1
M4J1
M2J2
M3J2
M4J2
M1J3
W2J3
VI3J3
VI4J3
1
hMU2-k= 16-3=13
fMU2=3+13 = 16
tMU2-k = 5-3 = 2
hM3Ji = 4+5+3+2=14
fM3Ji = 3+14 = 17
toJi = 4
hM3J1=14
fM3J1=17
h(2)= hM3ji= 14
f(2)=fM3J1=17
Comments
A3 8
4
3 t=7
k=2
5
2 t=5
k=3
6
1 t=2
k=2
M1J1
M2J1
M1J2
M1J1
M2J1
M1J1
M1J3
M2J2
M3J1
M1J2
M3J1
M1J3
M2J1
M4J1
M3J2
M4J2
M2J3
M3J3
M4J3
M4J1
M2J2
M3J2
M4J2
M1J3
M2J3
M3J3
M4J3
M3J1
M4J1
M1J2
M2J2
M3J2
M4J2
M2J3
M3J3
M4J3
1
hMU3=4+5+4+2=15
fMU3=2+15 = 17
tM1J3 = 4
hM2Ji = 4+2+3+2=11
fM2J1 = 2+11 = 13
tiVI2J1 = 4
hM3Ji-k= 14-2=12
fM3Ji = 2+12 = 14
tM3Ji-k = 4-2 = 2
Since
f(3) > h' = h(2) = 14
backtrack.
hMU3=15
fM1J3=17
h(3)= hMU3=15
f(3)=fM1J3=17
Since
f(2) > h' = h(1) = 17
backtrack.
tMU2-k=5-3 = 2
1
hM3J1=f(3)=17 h(2)=hM3Ji=17
f(2)= fM3ji= 20
fM3J1 =20
tM3J1 = 4
tiVI1J2=5
1
2
hM2Ji =f(2)=20
=22
h(1)=hMU3=18
fwi2Ji
tM2J1 = 3
f(1)=fM1J3=20
hMU3= 4+5+4+3+2=18
fMU3=2+18 = 20
hM1J3=18
tM1J3= 4
hM2Ji = 3+4+5+3+2=17 fM1J3=20
fM2Ji = 2+17 = 19
tM2J1 = 3
Since
f(1)>h' = h(0) = 19.
backtrack.
Panel A4(l).
h = 2\
h = 23
h(0)=20
State Level 0
k=2
h = 20
h=4+5+4+3+2
=18
h = 2+4+5+3+2
=17
State Level 1
From Search
Path Diagram A4(l)
backtrack and update
Search Path Diagram A4(2): Continued from Search Path Diagram A4(l).
A3 9
"CD
>
QCD
CO
7
CD
—1
Edge
Cost
(k)
3
CO
+
0CO t=0 ,
k=0
to
CD
-O
O
States
T3
CD
XZ
CO
"c
ti=
CO
CO
CD
o>
s
Q.
M1J1
1
not
scheduled
2;
Time
(t)
M2J1
M3J1
M4J1
M1J2
M2J2
M3J2
M4J2
M1J3
M2J3
M3J3
M4J3
Values of h, f
and Operation Times
Largest h and f Smallest h and
at a node
Associated f.
Z
O
CD
-Q
E
3
1z tM1J1 = 2
2
hMui=f(1)=20
fiwui =20
hwiu2= 5+4+2+5+3+2 tlM1J2=21
fM1J2=2l
=21
fMU2=0+21 =21
tlW1J2=5
3
h(0)= h M ui=20
f(0)=fMui=20
hMU3=4+5+4+5+3+2
hMU3=23
=23
fMU3 =23
fMU3= 0+23=23
tM1J3= 4
Panel A4(2): Continued from Panel A4(l).
Comments
A40
M,J 2
6 = 20
2
5
6 = 21
M,J3
4
6 = 23
State Level 0
* =2
M,J,
M 2 J,
6 = 20
h(l)=18
3
6 = 4+5+4+3+2
=18
6 = 2+4+5+3+2
=17
f(l)=18+2
= 20
= h'
State Level 1
From Search
Path Diagram A4(2)
k=3
M,J3
16=18-3=15
h(2)=15
f(2)=15+3
= 18
= h'
State Level 2
M 3 J,
4
6=4+5+3+2=14
k=\
M.X
h 5+4+2+3=14
h = 5+4+2+3=14
h(3)=14
f(3)=14+l
= 15
= h'
State Level 3
M 3 J,
3
6 = 14-1=13
k=3
M,J2
2 6=14-3=11
M,J.
6=14-3=11
h(4)=ll
f(4)=ll+3
= 14
= h'
6 = 5+3+2=10
k=2
M2J2
h(5)=9
f(5)=9+2
= 11
= h'
State Level 4
4
M,J,
MJ,
6 = 4+2+3=9
State Level 5
6 = 4+3+2=9
6 = 10-2=8
Jfc = 3
' Continue on next page ]
A41
J Continue from previous diagram |
\k = 3
M2J2
h = 9-3=6
1
h(6)=6
State Level 6
6 = 9-3=6
f(6)=6+3
=9
= h'
k=\
h(7)=5
State Level 7
h = 2+3=5
f(7)=5+l
=6
6 = 2+3=5
= h'
k=2
h(8)=3
State Level 8
f(8)=3+2
M4J2 6 = 3
=5
3
= h'
k=3
Optimal Solution:
h(10) = 0,f(10) = 0
Minimum makespan = 20
Optimal sequence
JjJ 3 J 2
Search Path D i a g r a m A4(3): Continued from Search Path D i a g r a m A4(2).
Time
(t)
3
CO
8
co
1 t=2,
k=2
co
to
TD
CD
XZ
CO
"c
M1J1
2
Q.
MlJ 3
M2Jl
not
scheduled
Edge
Cost
3 (k)
CO
CD
_l
to
CD
TD
O
States
M3J1
M4J1
M1J2
M2J2
M3J2
M4J2
M2J3
M3J3
M4J3
Values of h, f
Largest h and f Smallest h and Comments
and Operation Times at a node
Associated f.
z
CD
E
3
tM1J2=5
1
2
tM2J1 = 3
hM2Ji=f(2)=20
fn/i2Ji =22
hMU3 = 4+5+4+3+2=18
fMU3= 2+18 = 20
hMU3=18
tM1J3 = 4
fM1J3=20
hM2Ji = 3+4+5+3+2=17
fM2Ji = 2+17 = 19
tiV2J1 = 3
h(1)=flM1J3=18
f(1)=fM1J3=20
A42
9
2 t=5,
k=3
10 3 t=6,
k=1
10
4 t=9,
k=3
11
5 t=11,
k=2
12
6 t=14,
k=3
13
7 t=15,
k=1
M1J1
M2J1
M1J1
M2J1
M1J3
M1J3
M3J1
M1J2
M2J3
M3J1
M1J1
M2J1
M3J1
M1J3
M1J2
M2J3
M4J1
M1J1
M2J1
M3J1
M1J2
M2J3
M1J3
M2J2
M3J3
M4J1
M1J1
M2J1
M3J1
M4J1
M1J2
M2J3
M1J3
M2J2
M3J3
M1J1
M2J1
M3J1
M4J1
M1J2
M2J2
M2J3
M1J3
M3J3
M3J2
M4J3
M4J1
M1J2
M2J2
M3J2
M4J2
M2J3
M3J3
M4J3
M4J1
M2J2
M3J2
M4J2
M3J3
M4J3
M2J2
M3J2
M4J2
M3Ja
M4J3
1
1
1
M3J2
M4J2
M4J3
1
M3J2
M4J2
M4J3
1
hMU3-k= 18-3=15
fMU3=3+15 = 18
tMU3-k=4-3 = 1
hM3ji = 4+5+3+2=14
fM3Ji = 3+14 = 17
tiVI3J1 = 4
hMU2=5+4+2+3=14
fM1J2= 1+14 = 15
tiVI1J2= 5
hM2J3= 5+4+2+3=14
fM2J3= 1+14 = 15
tiM2J3 = 5
hM3Ji-k= 14-1=13
fM3J1= 1+13 = 14
tM3Ji-k=4-1 = 3
h M u2-k=14-3=11
fM1J2 = 3+11 = 1 4
tMU2-k = 5-3 = 2
hM2J3-k= 14-3=11
fM2J3=3+11 = 14
tM2J3-k = 5-3 = 2
hM4ji = 5+3+2=10
fM4ji = 3+10 = 13
tM4J1 = 5
hM2J2 =4+2+3=9
fM2J2 = 2+9 = 11
tM2J2 = 4
flM3J3= 4+3+2=9
fM3J3=2+9 = 11
tiVI3J3 = 4
hM4ji-k= 10-2=8
fM4Ji = 2 + 8 = 10
tM4Ji-k=5-2 = 3
hM2J2-k =9-3=6
fM2J2 = 3+6 = 9
tM2J2-k = 4 - 3 = 1
tlM1J3=15
fM1J3=18
h(2)=h M u3=15
f(2)=fMij3=18
hM1J2=14
fM1J2=15
h(3)= h M u2= 14
f(3)=fM1J2=15
hM1J2=11
flM1J2=14
h(4)=h M u2=11
f(4)=fMij2=14
hM2J2 =9
fiVI2J2=11
h(5)= hM2J2= 9
f(5)=fM2J2=11
hM2J2=6
fM2J2=9
h(6)= hM2j2= 6
f(6)= fM2J2= 9
flM3J2 =5
fM3J2 =6
h(7)= hM3J2= 5
f(7)= fM3J2= 6
hM3J3-k =9-3=6
fivi3J3 = 3+6 = 9
tM3J3-k = 4-3=1
M4J2
1
I1M3J2 =2+3=5
fM3J2= 1+5 = 6
tM3J2=2
1M4J3 =2+3=5
fM4J3 = 1 + 5 = 6
tM4J3 = 2
A43
14
15
16
8
9
10
t=17,
k=2
M1J1
M2J1
M3J1
M4J1
M1J2
M2J2
M3J2
M2J3
M1J3
M3J3
M4J3
t=20, M1J1
k=3
M2J1
M3J1
M4J1
M1J2
M2J2
M3J2
M4J2
M2J3
M1J3
M3J3
M4J3
t=20+, M1J1
k=0
M2J1
M3J1
M4J1
M1J2
M2J2
M3J2
M4J2
M2J3
M1J3
M3J3
M4J3
M4J2
1
1
llM4J2=3
flM4J2=2+3 = 5
tM4J2 = 3
hM4J2=3
fM4J2=5
h(8)= hM4J2= 3
f(8)= fM4J2= 5
h=0
f=3
h(9)= h= 0
f(9)=f=3
f(10) = h' = h(9)=0
1
h=0
f=0
Panel A4(3): Continued from Panel A4(2).
h(10)=h=0
f(10)=f=0
Minimum makespan
= 20, and
Optimal sequence
= J1J3J2.
A44
A5:
Solving Five-Machine Flow-Shop Problem
M, Mi M 3 M 4 M 5
J, 4
J2 5
J3 2
5
4
3
4
2
4
3
3
3
5
4
3
M,J,
M,J2
M,J3
4
5
2
A = 26
h = 24
h(0)=25 Wp)=24
State Level 0
k= 5
M,J,
A =\4+5+4+3+3+3
4
M2J2
4
h = 2+3+4+5+3+:
=20
M 2 J 2 h = 4+2+3+3+4+:1
4
=19
M,J3
2
h = 4+2+3+3+4+3
=19\
\
A = 22
h(l)=20
State Level 1
\ f(l)=20+5
—C£H)
backtrack and update
Search Path Diagram A5(l).
states
"CD
O-
3
CO
Edge
>
CD Cost
_l
(k)
3!
-#—'
CO
re-
to
CD
TD
O
T3
CD
XZ
to
'c
to
co
<D
L.
cn
o
not
scheduled
3
Time
(t)
z
"S
M1J1
M2J1
M3J1
M4J1
M5J1
M1J2
M2J2
M3J2
M4J2
M5J2
M1J3
M2J3
M3J3
M4J3
M5J3
E
Q.
i
C
CD
X3
3
z
Largest h and f Smallest h and
Values of h, f
Associated f.
and Operation Times at a node
Comments
A45
1
0 t=0+,
M1J2
k=0
2
1 t=5,
k=5
M1J2
M1J3
M2J2
M1J1
M2J1
M3J1
M4J1
M5J1
M2J2
M3J2
M4J2
M5J2
M1J3
M2J3
M3J3
M4J3
M5J3
M1J1
M2J1
M3J1
M4J1
M5J1
M3J2
M4J2
M5J2
M2J3
M3J3
M4J3
M5J3
1
2
3
hmji =4+5+4+3+3+4+3
hiwui =26
=26
fMui =26
fMui = 0+26 = 26
tM1J1 = 4
hMU2=5+4+2+3+3+4+3 hMU2 =24
=24
fMU2 =24
fMU2=0+24 = 24
tM1J2 = 5
hMU3=2+3+4+5+3+4+3
hMU3=24
=24
fiuiu3 =24
fMU3= 0+24=24
tiVI1J3= 2
himui =4+5+4+3+3+3
=22
1 foui = 5+22 = 27
tM1J1 = 4
hM2J2 =4+2+3+3+4+3
Break ties randomly.
h(0)= h M U2=24
f(0)= f M u2=24
Since
f(1)>h' = h(0) = 24
backtrack.
hiuiui =22
fMui =27
=19
2
fM2J2=5+19 = 24
tM2J2 = 4
hMU3= 2+3+4+5+3+3
=20
fMU3 = 5+20 = 25
tM1J3 = 2
hM2J2 =4+2+3+3+4+3
h(1)=hMU3=20
f(1)=fM1J3=25
hMU3=20
fn/iu3=25
=19
3
0 t=0+,
k=0
M1J3
M1J1
M2J1
M3J1
M4J1
M5J1
M1J2
M2J2
M3J2
M4J2
M5J2
M2J3
M3J3
M4J3
M5J3
1
fM2J2=5+19 = 24
tM2J2=4
hMui=4+5+4+3+3+4+3
hMui=26
=26
fMui =26
foui = 0+26 = 26
tiwui = 4
2 tiV1J2 = 5
3
hMu2=f(1)=25
fn/!1J2=25
hMU3=2+3+4+5+3+4+3
hMU3=24
=24
fM1J3=24
fMU3= 0+24=24
tiVI1J3= 2
Panel A5(l).
h(0)= h M u3=24
f(0)=f M u3=24
A46
ft = 26
ft = 25
ft = 24
State Level 0
k=2
From Search
Path Diagram A5(l)
h = 4+5+4+3+3+4
=23
ft = 3+4+5+3+4+3
=22
ft= 5+4+2+3+4+3
=21
ft = 3+4+5+3+4+3
=22
State Level 1
k=3
h = 21-3=18
h=4+5+3+4+3
=19
State Level 2
k=2
h = 4+5+4+3+3=19
/? = 4+2+3+4+3=16
ft =19-2=17
State Level 3
backtrack and update
Search Path Diagram A5(2): Continued from Search Path Diagram A5(l).
A47
Time
(t)
o.
CD
CO
4
5
6
7
> Edge
CD
_ l Cost
3 (k)
3
1CO
2
3
2
t=2.
k=2
t=5,
k=3
t=7,
k=2
t=5,
k=3
to
CD
TD
O
States
to
to
TD
CD
XZ
to
"c
t=
M1J3
M1J3
M2J3
M1J2
M1J3
M2J3
M1J3
M2J3
s?
Q.
1
C
MlJ 2
M2J3
MlJ 2
M3J3
MiJi
M2J2
M3J3
MlJ2
M3J3
TD
CO
ZJ
T3
CD
0-6
c to
M1J1
M2J1
M3J1
M4J1
M5J1
M2J2
M3J2
M4J2
M5J2
M3J3
M4J3
M5J3
M1J1
M2J1
M3J1
M4J1
M5J1
M2J2
M3J2
M4J2
M5J2
M4J3
M5J3
M2J1
M3J1
M4J1
M5J1
M3J2
M4J2
M5J2
M4J3
M5J3
M1J1
M2J1
M3J1
M4J1
M5J1
M2J2
M3J2
M4J2
M5J2
M4J3
M5J3
Values of h, f
and Operation Times
Largest h and f Smallest h and
at a node
Associated f.
Comments
z
**—
0
CD
_Q
E
z
1
2
hMui=4+5+4+3+3+4
=23
ftnui = 2+23 = 25
tiviui = 4
hM2j3 =3+4+5+3+4+3
=22
fM2J3=2+22 = 24
tM2J3 = 3
hMU2=5+4+2+3+4+3
=21
fMU2=2+21=23
tM1J2 = 5
hM2J3 =3+4+5+3+4+3
=22
fM2J3=2+22 = 24
tM2J3=3
hMui=23
fMui =25
h(1)=h M2 j 3 =22
f(1)=fM2J3=24
hM2J3 =22
fM2J3 =24
hMU2-k=21-3 = 18
fMU2=3+18 = 21
tMU2-k= 5-3=2
1
hM3J3=19
fiVI3J3 =22
h(2)=hM3J3=19
f(2)=fM3J3=22
hM3J3 =4+5+3+4+3=19
fM3J3=3+19 = 22
tM3J3=4
1
hMui=4+5+4+3+3= 19
fMui = 2+19 = 21
tM1J1 = 4
hM2J2 =4+2+3+4+3= 16
hMui=19
fM2J2=2+16 = 18
fM1J1 =21
tM2J2 = 4
hM3J3-k=19-2=17
fM3J3=2+17 = 19
tM3J3-k=4-2 = 2
Since
f(3)>h' = h(2) = 19
backtrack.
h(3)=h M ui=19
f(3)=fMui=21
Since
f(2)>h' = h(1) = 22
backtrack.
tMU2-k= 5-3=2
hM3J3=f(3)=21
fM3J3 =24
1
tfc!3J3=4
h(2)= hM3J3= 21
f(2)=fM3J3=24
A48
t=2
k=2
M1J3
M1J1
M2J3
M2J1
M3J1
M4J1
M5J1
M1J2
M2J2
M3J2
M4J2
M5J2
M3J3
M4J3
M5J3
Since
f(1)>h' = h(0) = 24.
backtrack.
hMui=4+5+4+3+3+4
=23
fMui = 2+23 = 25
tM1J1 = 4
hM2J3 =3+4+5+3+4+3
hMui=23
fMui =25
h(1)= hMui=23
f(1)=fMui=25
=22
fM2J3=2+22 = 24
tM2J3 = 3
hM2J3=f(2)=24
fM2J3 =26
tM1J2= 5
tl\/12J3=3
Panel A5(2): Continued from Panel A5(l).
M,J,
ft = 26
ft = 25
h(0)=25 h(j))=24
State Level 0
k= 2
h=4+5+4+3+3+4
=23
4
ft = 3+4+5+3+4+3
=22
M,J,
h(l)=23
ft = 24
5
M2J3
3
State Level 1
f(l)=23+2
= 25
f>h'
backtrack and update
From Search
Path Diagram A5(2)
Search Path D i a g r a m A5(3): Continued from Search Path Diagram A5(2).
A49
Time
(t)
CL
3
CO
9
CD
_l
Cost
(k)
3
3 t=0+,
0CO
k=0
to
CO
T3
CD
XZ
CO
'cz
(<=
S>
D)
2
not
scheduled
3;
"55
Edge
>
to
CD
T3
O
States
i
M1J2
M1J1
M2J1
M3J1
M4J1
M5J1
M2J2
M3J2
M4J2
M5J2
M1J3
M2J3
M3J3
M4J3
M5J3
Values of h, f
and Operation Times
Largest h and f Smallest h and
at a node
Associated f.
Comments
Z
**—
O
u
CD
-O
hMiji =4+5+4+3+3+4+3
hiuiui =26
=26
fiuiui =26
z fMi JI = 0+26 = 26
1E3
tM1J1 = 4
2
tiVI1J2 = 5
hMu2=f(1)=25 h(0)= h M u2=25
f(0)=fMu2=25
fMU2=25
3
tM1J3= 2
hMU3=f(1)=25
fMU3=25
Panel A5(3): Continued from Panel A5(2).
Break ties randomly.
From Search
Path Diagram A5(3)
State Level 0
ft = 22
State Level 1
h(l)=21 h^)=20
f(l)=20+5
= 25
= h'
k= 2
h=4+5+4+3+3
4 =19
ft =19-2=17
h(2)=19
State Level 2
f(2)=19+2
f>h'
backtrack and update
Search Path Diagram A5(4): Continued from Search Path Diagram A5(3).
A51
Time
(t)
States
to
"CD
o.
CD
CO
10
Edge
>
CD Cost
_l
CD
W
*—*
13 t=5,
CO
TD
CO
ZD
TD
CD
<n
TD
CD
CO
S>
'cz
S
ot3
CL
1=
t=
MlJ 2
k=5
03
1
M1J3
CZ
M2J2
CO
M1J1
M 2 J1
M3Jl
M4J1
M5Jl
M3J2
M4J2
M5J2
M2J3
M3J3
M4J3
M5J3
CO
TD
O
Z
0
IValues of h, f
and Operation Times
_argest h and f Smallest h and
Associated f.
at a node
Comments
w_
CD
XZt
E
3
z
IIMUI =4+5+4+3+3+3
=22
1 fiaui = 5+22 = 27
tM1J1 = 4
hM2J2 =4+2+3+3+4+3
hiviui =22
FMUI =27
=19
2
fM2J2=5+19 = 24
tiM2J2 = 4
hMU3=2+3+4+5+3+3
=20
f M u3=5+20 = 25
tiuiU3= 2
hM2J2 =4+2+3+3+4+3
h(1)=hMU3=20
f(1)=fM1J3=25
htvnj3=20
fM1J3=25
=19
fM2J2=5+19 = 24
tM2J2=4
11
2 t=7,
k=2
12
1 t=5,
k=5
MlJ 2
MlJ 3
MlJ 2
M1J1
M2J2
M1J3
M2J2
M 2 J1
M3J1
M4J1
M5J1
M3J2
M4J2
M5J2
M2J3
M3J3
M4J3
M5J3
MiJi
M 2 J1
M 3 Jl
M4J1
M5J1
M3J2
M4J2
M5J2
M2J3
M3J3
M4J3
M5J3
1
hMui=4+5+4+3+3=19
fMui = 2+19 = 21
hMui=19
tM1J1 = 4
fM1J1 =21
h M 2J2-k=19-2=17
fM2J2=2+17 = 19
tM2j2-k=4-2 = 2
Since
f(2)>h' = h(1) = 20
backtrack.
h(2)=h M ui=19
f(2)=fMui=21
Since
f(1)>h' = h(0) = 25.
backtrack.
JIMUI =4+5+4+3+3+3
=22
1 fwiui = 5+22 = 27
tiviui = 4
hM2J2 =4+2+3+3+4+3
=19
f M 2j 2 =5+19 = 24
tM2J2=4
hMui=22
fiviui =27
tiVI1J3= 2
hwu3=f(2)=21
fMU3 =26
2
tll/l2J2=4
h(1)= hMU3= 21
f(1)=fM1J3=26
A52
13
0 t=0+,
k=0
M1J3
M1J1
M2J1
M3J1
M4J1
M5J1
M1J2
M2J2
M3J2
M4J2
M5J2
M2J3
M3J3
M4J3
M5J3
=4+5+4+3+3+4+3
hMui=26
=26
fMui =26
frwui = 0+26 = 26
tM1J1 = 4
IIMIJI
1
2 tM1J2=5
3 tM1J3= 2
hMu2=f(1)=26
fMU2=26
hMu3=f(1)=25
fM1J3=25
Panel A5(4): Continued from Panel A5(3).
h(0)= hMU3=25
f(0)=fMij3=25
A53
4
ft = 26
MA
2
M[J 2
5
h = 25
ft = 26
State Level 0
fc = 2
h = 4+5+4+3+3+4
=23
ft = 3+4+5+3+4+3
=22
M,J2
5
M,J,
ft = 24
State Level 1
From Search
Path Diagram A5(4)
ft = 4+5+3+4+3
=19
h(2)=20
f(2)=20+3
= 23
= h'
State Level 2
k=\
5
ft = 5+4+2+3+4=18
M,Jh= 5+4+3+3+4=19
ft =19-1=18
h(3)=19
f(3)=19+l
= 20
= h'
State Level 3
k=3
M.J.
^ = 18-3=15
ft = 19-3=16
State Level 4
h(4)=16
MA
h =5+3+4+3=15
f(4)=16+3
= 19
= h'
,
1
\k = 2
,
Continue on next page J
Continue from previous diagram !
1*
=2
M2J2
4 ft = 4+2+3+4=13
h(5)=14
M3J,
4 ft = 4+3+3+4=14
M 4^3
f(5)=14+2
= 16
= h'
3
State Level 5
ft = 15-2=13
k= 3
M2J2
1 ft = 13-3=10
h(6)=ll
M3J,
1 ft = 14-3=11
State Level 6
f(6)=ll+3
M 5J3
ft = 3+4+3=10
= 14
3
= h'
k=1
h(7)=10
M3J2
2
M .J,
f(7)=10+l
= 11
= h'
M,h
ft = 2+3+4=9
ft = 3+3+4=10
ft = 10-1=9
I
k=2
State Level 8
h(8)=8
M4J,
f(8)=8+2
= 10
= h'
i
1
State Level 7
1
ft = 10-2=8
1*
= 1
i
Continue on next page '
A55
Continue from previous diagram
*=1
h(9)=7
State Level 9
h = 3+4=7
f(9)=7+l
= 8
= h'
M 5 J,
h = 3+4=7
3
k=3
h(10)=4
f(10)=4+3
= 7
= h'
State Level 10
MJ, ft = 4
k =4
1
Optimal Solution:
h(12) = 0,f(12) = 0
Nfcnirnum makespan = 25
Optimal sequence
•
J3JjJ2
Search Path Diagram A5(5): Continued from Search Path Diagram A5(4).
Time
(t)
>
a.
CD
_l
CD
CO
3
13
0CO
co
Edge
Cost
(k)
t=0+,
k=0
-o
CD
XZ
CO
CO
CO
CD
03
e
o.
M1J3
not
scheduled
"CD
to
CD
TD
O
States
M1J1
M2J1
M3J1
M4J1
M5J1
M1J2
M2J2
M3J2
M4J2
M5J2
M2J3
M3J3
M4J3
M5J3
Values of h, f
and Operation Times
Largest h and f Smallest h and
at a node
Associated f.
z
"o
CD
XZt
E
hMui=4+5+4+3+3+4+3
liM-m =26
1z
=26
fMui =26
fMui = 0+26 = 26
tM1J1 = 4
hMu2=f(1)=26
flM1J2 =26
2 tM1J2 = 5
3
3
tM1J3= 2
hMU3=f(1)=25
fM1J3=25
h(0)= h M u3=25
f(0)= fMu3=25
Comments
A56
14
1 t=2.
M1J3
k=2
1b
1 t=5,
k=3
16
3 t=6,
k=1
17
4 t=9,
k=3
18
5 t=11,
k=2
19
6 t=14,
k=3
M1J3
M2J3
M1J1
M1J3
M2J3
M1J1
M1J3
M2J3
M3J3
M1J1
M2J3
M1J1
M3J3
M1J2
M2J1
M3J3
M1J2
M2J1
M4J3
M1J1
M2J1
M1J2
M1J3
M2J3
M3J3
M2J2
M3J1
M4J3
M1J1
M2J1
M1J2
M1J3
M2J3
M3J3
M4J3
M2J2
M3J1
M5J3
M2J1
M3J1
M4J1
M5J1
M1J2
M2J2
M3J2
M4J2
M5J2
M3J3
M4J3
M5J3
M2J1
M3J1
M4J1
M5J1
M1J2
M2J2
M3J2
M4J2
M5J2
M4J3
M5J3
M3J1
M4J1
M5J1
M2J2
M3J2
M4J2
M5J2
M4J3
M5J3
M3J1
M4J1
M5J1
M2J2
M3J2
M4J2
M5J2
M5J3
M4J1
M5J1
M3J2
M4J2
M5J2
M5J3
M4J1
M5J1
M3J2
M4J2
M5J2
1
hniui =4+5+4+3+3+4
=23
ftnui = 2+23 = 25
tM1J1 = 4
hM2J3 =3+4+5+3+4+3
=22
fM2j3=2+22 = 24
tM2J3=3
tM1J2= 5
2
hiviui =23
fMui =25
h(1)=h M ui=23
f(1)=fMui=25
hM2J3=f(2)=24
fM2J3 =26
tiM2J3 = 3
hiiui-k =23-3=20
fwi JI = 3+20 = 23
tMui-k = 4-3=1
1
hiviui =20
fiviui =23
h(2)= h M ui= 20
f(2)=fMui=23
hM2J1 =19
fM2J1 =20
h(3)= hM2Ji= 19
f(3)= fM2ji= 20
tlM2J1 =16
fM2J1 =19
h(4)=h M2 ji=16
f(4)=fM2J1=19
hiu3Ji =14
fiVI3J1 =16
h(5)= hM3Ji= 14
f(5)=fM3J1= 16
hiVI3J1 =11
flW3J1=14
h(6)= hwi3Ji= 11
f(6)=fM3J1=14
hM3J3 =4+5+3+4+3=19
fM3J3=3+19 = 22
tM3J3=4
1
1
1
1
hMU2=5+4+2+3+4=18
fM1J2= 1+18 = 19
tM1J2=5
hM2ji =5+4+3+3+4=19
frwi2Ji = 1+19 = 20
tM2J1 = 5
hM3J3-k=19-1=18
fM3J3= 1+18 = 19
tM3J3-k = 4-1 = 3
hMu2-k =18-3=15
fMU2=3+15 = 18
tMU2-k = 5-3 = 2
hM2Ji-k =19-3=16
fM2Ji = 3+16 = 19
tM2Ji-k = 5-3 = 2
hM4j3 =5+3+4+3=15
fM4J3=3+15 = 18
tM4J3=5
hM2j2 =4+2+3+4=13
fM2J2=2+13 = 15
tiVI2J2 = 4
hM3Ji =4+3+3+4=14
fM3Ji = 2+14 = 16
tiVI3J1 = 4
hM4J3-k =15-2=13
fM4J3=2+13=15
tM4J3-k = 5-2= 3
hM2J2-k=13-3=10
fM2J2=3+10= 13
Wj2-k=4-3 = 1
hM3ji-k=14-3=11
fM3J1 = 3 + 1 1 = 1 4
tM3Ji-k= 4-3 = 1
hM5J3 =3+4+3=10
fM5J3=3+10 = 13
tiVI5J3 = 3
A57
20
21
22
23
7
8
9
t=15,
k=1
t=17,
k=2
t=18,
k=1
10 t=21,
k=3
M1J1
M2J1
M3J1
M1J2
M2J2
M1J3
M2J3
M3J3
M4J3
M1J1
M2J1
M3J1
M1J2
M2J2
M3J2
M1J3
M2J3
M3J3
M4J3
M5J3
M1J1
M2J1
M3J1
M4J1
M1J2
M2J2
M3J2
M1J3
M2J3
M3J3
M4J3
M5J3
M1J1
M2J1
M3J1
M4J1
M5J1
M1J2
M2J2
M3J2
M4J2
M1J3
M2J3
M3J3
M4J3
M5J3
M3J2
M4J1
M5J3
M5J1
M4J2
M5J2
1
M4J1
hM4J1=10
fM4J1=11
h(7)=h M 4Ji=10
f(7)=fM4J1=11
tlM4J1 =8
fM4J1=10
h(8)= hM4Ji= 8
f(8)=fM4J1=10
tlM4J2 =7
fM4J2=8
h(9)= hM4j2= 7
f(9)= fM4J2= 8
hM5J2=4
fM5J2=7
h(10)=h M 5j2=4
f(10)=fM5J2=7
M5J1
M4J2
M5J2
1
M4J2
M5J1
hM3J2 =2+3+4=9
flUI3J2= 1+9 = 10
tM3J2 = 2
hM4Ji =3+3+4=10
fM4J1= 1+10 = 11
tM4J1 = 3
hM5J3-k=10-1=9
fM5J3 = 1 + 9 = 10
tM5J3-k=3-1 = 2
hM4ji-k =10-2=8
frui4ji = 2+8 = 10
tM4ji-k= 3-2 = 1
M5J2
hM4J2 =3+4=7
fM4J2 = 1 + 7 = 8
tM4J2 = 3
1
hnisji =3+4=7
fwi5Ji = 1+7 = 8
tM5J1 = 3
M5J2
1
hM5J2=4
fwi5J2 = 3+4 = 7
tM5J2=4
A58
24
11 t=25,
M1J1
M2J1
M3J1
M4J1
M5J1
M1J2
M2J2
M3J2
M4J2
M5J2
M1J3
M2J3
M3J3
M4J3
M5J3
t=25+, M1J1
k=0
M2J1
M3J1
M4J1
M5J1
M1J2
M2J2
M3J2
M4J2
M5J2
M1J3
M2J3
M3J3
M4J3
M5J3
k=4
25
12
1
h=0
f=4
h(11)=h=Q
f(11)=f=4
f(12) = h' = h(11)=0
1
h=0
f=0
h(12)=h=Q
f(12)=f=0
Minimum makespan
= 25, and
Optimal sequence
= J3J1J2.
Panel A5(5): Continued from Panel A5(4).