!"#"$$%$ &"'()*% +'(%,-$)*. /#01$%&+ 2)3( /"#3)"$
)*40#&"3)0*5 6)+3#)1-3%, ,%')+)0* &0,%$+ "*,
"$.0#)3(&+
L. Adacher ¤, P. Detti
y
January 24, 2001
Abstract
Manufacturing scheduling problem characteristics generally traslate into large-scale, discrete, dynamic, and/or stochastic formulations, which are generally considered impossible
to solve in most practical situations. The logical strategy is to pursue scheduling methods which generate good solutions e±ciently. In this paper a °exible distributed decision
architecture for a parallel machine scheduling problem is presented. The aim is to investigate the performance degradation caused by a distributed decision scheme with partial
information, respect to a centralized, full information, approach. The implementation
issues and the e®ectiveness of distributed decision algorithms, based on di®erent system
knowledge degrees and information exchanges, are analyzed by means of a simulation
model. Extensive experimental results are reported, allowing to evaluate the trade{o®
between knowledge degree and system performance.
Keywords: distributed decision models, scheduling, Lagrangian relaxation, simulation.
1
Introduction
Manufacturing scheduling systems are characterized by a high complexity degree and
su®er of dynamic changes in the system, such as new jobs arrive, machines break down,
job processing times may be higher or lower than originally anticipated. As consequence,
in manufacturing, traditional control models, based on scheduling and control theory, are
not always applicable in practice.
The aim of this paper is to investigate the performance degradation of scheduling
algorithms, caused by a distributed, partial information, decision scheme respect to a
centralized, full information, approach. Distributed decision paradigms have been receiving an increasing amount of attention because of the wide range of applications, from
!
Dipartimento di Informatica e Automazione - Universitµ
a di Roma Tre, via della vasca navale, 79 00146 Roma (Italy), e{mail: [email protected]
"
Dipartimento di Ingegneria dell'Informazione - Universitµ
a degli Studi di Siena, via Roma 56 - 53100
Siena (Italy), e{mail: [email protected]
1
biology to computer science, logistics and manufacturing. Several studies showed that a
distributed decision system is competitive with that of a centralized system with respect
crucial issues like robustness, modularity and simplicity. On the other hand, the major
drawback is basically due to communication congestion and sub{optimality.
In this paper, we consider a parallel machine scheduling problem, in which a set of
independent jobs, J1 ; : : : ; Jn , have to be processed on m parallel machines, M1 ; : : : ; Mm .
Machines can handle at most one job at the time, and each job can be executed on at
most one machine at time without interruption. A solution (schedule) is an assignment
of each job to exactly one machine. The machines are assumed to be unrelated, i.e., the
execution of job Jj on machine Mi requires a processing time pij , which we assume a
positive integer. The objective is to minimize the maximum machine completion time.
Referring to the three{¯eld deterministic scheduling classi¯cation, this problem is known
as RjjCmax .
We suppose that all information on the system is not completely available at a centralized level, but it is distributed, at local levels, among local decision makers (DMs), and we
analyze, via simulation, the performance degradation caused by lack of information and
communication among DMs. In particular, a comparison study of distributed algorithms
characterized by di®erent local knowledge degrees of the system, is presented, allowing the
evaluation of the trade{o® between the degree of information and system performance.
The proposed algorithms are based on the Lagrangian relaxation and decomposition
technique. The Lagrangian approach is fully exploited to meet requirements of limited
information accessibility and autonomy of decision makers.
In Section 2, the literature reviews on parallel machine scheduling and on distributed
scheduling models are respectively presented. In Section 3, we show a problem decomposition based on a integer linear programming formulation and on the Lagrangian relaxation
technique. In Section 4 and 5, the basic framework and di®erent distributed algorithms
are respectively presented. Section 6 describes the simulation experiments that have been
carried out to evaluate our algorithms. Finally, in Section 7, some conclusions are drawn.
2
2.1
Literature Review
Parallel Machine Scheduling
The problem of scheduling jobs on parallel machines, minimizing the maximum machine
completion time, is already N P {hard in the case of two identical machines. As consequence, a polynomial{time algorithm cannot be guarantee to produce the optimal solution, and the traditional problem is, then, to balance solution quality with running
time.
On parallel scheduling, research e®orts have been invested in the development of approximation algorithms with guaranteed accuracy. An algorithm is ½{approximated if it
asymptotically never delivers a solution value of more than ½ times the optimal solution.
Graham [10], Chen and Vestjens [4] show that, when the machines are identical, with the
presence or not of release dates of the jobs, assigning the jobs according to the longest
processing time rule (LP T ) is an approximation algorithm (½ = 4=3 ¡ 1=m, and ½ = 3=2
with release dates of the jobs). Unrelated parallel machine problems are signi¯cantly
2
harder than identical machine problems. Ibarra and Kim [15] present some heuristics for
general parallel machine with dynamic machine availability but static arrivals. Potts [24]
and Lenstra et al. [16] present 2{approximation algorithms for RjjCmax , polynomial only
for ¯xed m. Lenstra et al. [17], using Potts algorithm as the basis, present an approximation algorithm polynomial in m, and demonstrate that no ½{approximation algorithm
exists for any ½ < 3=2. Van de Velde [28], following Potts algorithm main ideas, presents
a duality{based 2{approximation algorithm that employes Lagrangian relaxation.
2.2
Distributed Decision Architectures
Manufacturing systems su®er of problems caused by unplanned events, uncertainty and
complexity. Distributed decision architectures (DDAs) has been recently introduced in
manufacturing to overcome these problems guaranteeing an high degree of modularity,
simplicity, °exibility and robustness.
A recent approach providing a decomposed and modular framework is based on the
paradigm of Autonomous Agents (AA), in which the decisions are not established by
a global controller, but are the result of a co{operation process among several entities
(called agents). In the AA approach the focus is on the coordination and the negotiation among intelligent Autonomous Agents (Maturana and Norrie [22]). When several
agents (sub-systems, or resources) are able to execute the same tasks or operations, a
negotiation mechanism (e.g. Contract Net Protocol (Smith, [27])) is needed to establish
relationships between a seller agent and a buyer agent. Malone and Smith [21] developed
basic models for comparing the performance of coordination structures that appear in a
wide variety of systems, including human organizations and computer systems. Ramos
and Sousa [25] propose a distributed architecture for dynamic scheduling of manufacturing systems. They also present a suitable negotiation protocol for the dynamic scheduling
of tasks. Bongaerts et al. [3] describe a multi-agents architecture, where the autonomy of
the agents provides the system with the ability to react to disturbances, while a certain
degree of hierarchical control still allows global optimization. Lin and Solberg [18] present
a framework for integrating shop °oor control using autonomous agents. Their framework
utilizes distributed decision making and information °ow and highlights competition and
cooperation. Maturana and Norrie [22] develop a multi-agent architecture with a coordinator between intelligent components of the distributed system. The focus is on the
integration of activities across heterogeneous environments and real-time adaptation of the
system to environmental changes. Sousa and Ramos [26] deal with a new multi-agent architecture and negotiation mechanism for the dynamic scheduling of production systems.
Each agent pursues its individual objective, under a set of constraints, on the ground
of the local available information. Adacher et. al. [1] present di®erent implementations
of the AA concept in °exible manufacturing and investigate several control architectures
by an extensive simulation study. Lucertini et. al. [19] present an optimization{based
co-ordination protocol among autonomous workstations in a furniture production process.
Known approaches for optimization problems have been successfully employed in the
designing of distributed decision algorithms. Arbib and Rossi [2] propose a distributed
heuristic based on primal{dual method for the solution of a generalized covering problem
arising in a manufacturing environment. The Lagrangian decomposition is well known in
3
literature as an useful technique to approach large scale mixed-integer linear programming
problem. Della Croce et al. [7], Graves [11], Gou et al. [12] and Luh [20] employ
Lagrangian relaxation to develop distributed and hierarchical decision architectures for
manufacturing scheduling and production planning problems. In particular, in Gou et al.
[12] and Luh [20], the Lagrangian technique is employed to decompose the problem and
to realize a co{ordination level among subproblems. In this framework, a subproblem
identify a subsystem having decisional autonomy. Decisions are taken on the ground of
the current values of the Lagrangian multipliers imposed by a supervisor.
3
Problem formulation and Lagrangian decomposition
The parallel machine scheduling problem RjjCmax can be formulated introducing the
following integer variables:
xij =
(
1 if job Jj is scheduled on machine Mi
0 otherwise
(1)
A mixed{integer formulation of the problem, in the following referred to as P M , is
then:
min Cmax
(2)
subject to
n
X
j=1
pij xij ∙ Cmax
m
X
xij
=1
i = 1; : : : ; m
(3)
j = 1; : : : ; n
(4)
i = 1; : : : ; m j = 1; : : : ; n
(5)
i=1
xij 2 f0; 1g
where Cmax is a real positive variable containing the maximum machine completion time.
The conditions (3) ensure that the completion time of each machine is less than or equal
to the length of the schedule, the conditions (4) are assignment constraints and guarantee
that each job is assigned. The conditions (5) ensure that each job is scheduled on exactly
one machine, thereby precluding preemption.
The Lagrangian relaxation of constraints (3) yields the following Lagrangian problem
L(¸):
minfCmax ¡
m
X
i=1
¸i (Cmax ¡
n
X
j=1
pij xij )g = minf(1 ¡
m
X
i=1
¸i )Cmax +
m
X
i=1
¸i
n
X
j=1
pij xij g(6)
subject to
m
X
xij
=1
j = 1; : : : ; n
(7)
i = 1; : : : ; m j = 1; : : : ; n
(8)
(9)
i=1
Cmax
xij 2 f0; 1g
¸0
4
where ¸i ¸ 0, i = 1; : : : ; m, are the Lagrangian multipliers associated to the relaxed
constraints. Problem L(¸), for any ¯xed vector ¸, provides a lower bound on the original problem P M , and the classical goal is to ¯nd the best lower bound computing the
Lagrangian dual L¤ :
L¤ = maxfL(¸)
: ¸i ¸ 0 i = 1; : : : ; mg:
(10)
In order to solve the Lagrangian problem L(¸), we note that the variable Cmax is
constrained to be not negative and it appears in the objective function (6) with coe±cient
(1 ¡
m
P
i=1
¸i ). Hence, for ¸ = ¸1 ¯xed, three cases are possible:
m
P
i
i=1
1
¸1i > 1. Then the minimization respect to Cmax requires Cmax = +1 implying
L(¸ ) = ¡1, regardless the value of the variables xij .
m
P
ii
i=1
¸1i < 1. Then Cmax = 0. Note that, since the maximum machine completion
time Cmax is equal to 0, the Lagrangian problem always yields a not feasible solution
for P M .
m
P
iii
i=1
¸1i = 1. Then Cmax can assume any positive value.
In case i, L(¸1 ) = ¡1 providing a worthless lower bound. Hence, we can restrict
to consider only ¸ vectors such that
m
P
i=1
problem L(¸) is:
L(¸) = min
n
m X
X
¸i ∙ 1. In the cases ii and iii, the Lagrangian
¸i pij xij
(11)
i=1 j=1
subject to (7) and (8).
The following Proposition suggests a simple algorithm for solving the problem (11),
(7) and (8).
Proposition 1 If
m
P
i=1
¸i ∙ 1, problem L(¸) is solvable in O(nm) steps by assigning each
job Jj to machine Mh for which ¸h phj = min1∙i∙m ¸i pij .
In fact, L(¸) is separable and can be decomposed into n independent subproblems.
The subproblem Pk (¸) associated to job k is:
min
m
X
¸i pik xik
(12)
i=1
subject to
m
X
xik
=1
(13)
i=1
xik 2 f0; 1g
i = 1; : : : ; m
(14)
5
Each problem Pk (¸), for ¯xed value of the ¸ vector, is solvable in m steps, selecting
machine i for which ¸i pik is minimum, and hence the overall complexity of the algorithm
is O(nm).
In the following, it will be shown that is possible to consider only Lagrangian multim
m 0
P
P
0
0
pliers vector such that
¸i = 1. Let ¸ be a vector such that
¸i < 1 (case ii), let x
i=1
00
0
i=1
m
P
be the resulting job assignment, and consider the vector ¸i = ¸i =
0
00
i=1
0
¸i , i = 1; : : : ; m, i.e.,
00
obtained normalizing of multipliers ¸i . Let x be the resulting job assignment in L(¸ ).
00
0
Then, ¸i > ¸i 8i = 1; : : : ; m,becouse
0
00
00
m
P
i=1
0
¸i < 1, while the job assignment is the same
0
0
(x = x ), and following that L(¸ ) > L(¸ ). In other words, for each ¸ , in case ii, it is
00
always possible consider a vector ¸ , in case iii, yielding the same job assignment and a
better lower bound value. Hence, we can restrict to consider only Lagrangian multipliers
m
P
such that
¸i = 1. The Lagrangian problem L(¸) is then:
i=1
m X
n
X
minf
i=1 j=1
¸i pij xij g
(15)
subject to
m
X
xij = 1
j = 1; : : : ; n
(16)
j = 1; : : : ; n
(17)
i=1
xij 2 f0; 1g
¸i ¸ 0
i = 1; : : : ; m
i = 1; : : : ; m
m
X
¸i = 1
(18)
i=1
Problem L(¸), for any value of ¸ provides a lower bound on the original problem P M,
and, since any assignment x obtained solving L(¸) corresponds to a job assignment to
the machines, it also provides a feasible solution for P M , in general not optimal. Usually,
integer optimization problems do not satisfy the Strong Lagrangian Duality property, and
in the case of problem P M a duality gap between L¤ and the optimal solution of problem
P M is expected.
3.1
Computing the Lagrangian Dual
One way to compute the Lagrangian dual (10) consists in solving a not di®erentiable
optimization problem. The function L :¸ ! L(¸) is, in fact, continuous in ¸ and not
everywhere di®erentiable. Moreover, if only Lagrangian multipliers ¸ in case iii are considered, L boils down to a piecewise linear and concave function.
Specialized methods for optimizing not di®erentiable functions exists, such as, for
example, the subgradient method [23]. It basically consists in iteratively generating a
series of Lagrangian multipliers by moving a speci¯ed step size µ along the direction
6
provides by a subgradient s of the function. At the generic iteration t, the Lagrangian
multipliers vector is generated by the following recursive rule:
¸t = ¸t¡1 + µ t¡1 st¡1 :
(19)
Theoretical conditions for convergence of L(¸) to the Lagrangian dual L¤ are limt!1 µ(t) = 0
P
and kt=0 µ(t) ! 1(k ! 1). A formula that has been proven in practice for µ is as follows
(Polyak [23], Held and Karp [14]):
µ(t) = ®t
U B ¡ º(L(¸t ))
jjst jj2
(20)
where ®t is a scalar satisfying 0 < ®t < 2, and U B is an upper bound value of the
Lagrangian dual (the value of a feasible solution, for instance) which can be updated over
the iterations and º(L(¸t )) is the value of the optimal solution in L(¸t ) . Usually, ®t is
decreased if there is no improvement over a certain number of iterations. The method is
terminated when there is no signi¯cant improvement of the objective value (i.e., when ®
becomes very small), or when a speci¯ed number of iterations is reached. Although the
subgradient method has low convergence speed, it is easy to implement and it has been
successful employed in a large number of problems, providing, in general, good and fast
lower bounds.
3.2
Computing feasible solutions
Lagrangian relaxation has been successfully employed in the designing of heuristic methods trying to ¯nd good feasible solutions (Fisher [8, 9]). It is not uncommon, in fact, that
the Lagrangian solution is nearly feasible for the original (not relaxed) problem, and can
be made feasible with some minor modi¯cations.
As it has been shown, a nice characterization of problem P M is that any Lagrangian
solution is also feasible for the original problem. However, although this peculiarity eliminates the making solutions feasible, it does not solve the problem of ¯nding good solutions.
In according with the requirements of limited information availability, in Session 5 we propose di®erent heuristic algorithms for ¯nding and improving solutions of the problem.
4
A Distributed Decision Architecture
The Lagrangian approach, presented in Section 3, decomposes the overall problem into
job{related subproblems, easier to solve than the original. The job{subproblems are
independent and, given a Lagrangian multipliers vector ¸, their solution can be obtained,
choosing, for each job Jj , the machine Mi such that is minimum the product ¸i pij . Each
subproblem solution is computed assigning a job Jj on the machine Mi , on the basis of
the following information:
i the processing times pkj ; k = 1; : : : ; m, that the job Jj requires to be performed
on the di®erent machines;
7
ii the current Lagrangian multipliers ¸i ;
constraints.
i = 1; : : : ; m, associated to the relaxed
In according with the hypothesis of partial information and local decision autonomy,
we suppose that the number and the processing times of the jobs, and the state of each
machine (i.e., the actual workload and the working speed) are not exactly known at a
global level, while they are supposed known at a job and a machine level. In this context,
a DDA can be developed, in which jobs and machines, throughout information exchanges,
cooperate to meet the global requirements of system e±ciency (i.e., minimizing the overall
completion time and balancing workload among machines). In Figure 1 a DDA scheme
for the problem and the information °ows are drawn. In this framework the Lagrangian
multipliers, associated to the machines, act as prices representing the cost paid by a job
for using a machine. Machines, with the local objective of minimizing their completion
times, perform the global requirements, i.e., the minimization of the objective function
(2), adjusting iteratively the Lagrangian multipliers. A job assignment can be obtained
by an iterative procedure, in which the following steps are performed, at each iteration:
1 the Lagrangian multipliers are transmitted to the job sub{problems;
2 each job solves the related sub{problems and sends the resulting assignment decision
to the chosen machine;
3 machines, on the basis of the current job assignment and by information exchange,
update the Lagrangian multipliers vector to meet the global requirements.
In particular, the multipliers updating is related to the available information about
the system. Obviously, if all system information is assumed known in a centralized way,
multipliers can be updated by a global controller, employing one of the optimization
methods known in literature to compute the Lagrangian dual (e.g. subgradient methods,
bundle methods etc: : :). Vice versa, if only local and partial information is available and
limited information exchanges are allowed, then the traditional optimization methods
cannot be used in practice.
In the next session, a simulation study of distributed algorithms based on di®erent
multipliers updating schemes is presented.
5
Decision Strategies
Di®erent distributed scheduling algorithms, based on the architecture presented in section 4, have been implemented to analyze the performance degradation due to di®erent
knowledge degrees. In all the algorithms the job assignment is performed in two phases.
Firstly, in a iterative bargaining phase, a job assignment is obtained. Then, in a second
phase, called re{contracting phase, machines throughout co{operation try to improve the
solution found in the ¯rst phase. In Figure 2, a basic scheme of the algorithm is reported.
Phase 1 basically consists in repeating steps from 1 to 3 of Section 4, until a certain number of iterations ITmax is reached. The purpose of the iterative phase 2 is, if possible, to
re {assign to other machines some of the jobs assigned to the overloaded machine Mmax .
8
λ1
machines
M1
C1
λq
λm
Mq C
q
Mm
λ
assignment
jobs
J1
Cm
Jk
J2
Jn
Machine exchange information (multipliers adjustment)
Figure 1: A DDA architecture.
Di®erent distributed algorithms, based on the algorithm of Figure 2 have been developed. In the following, the Lagrangian multipliers updating schemes (step 1.3) and the
re{contracting rules (step 2.1) of each algorithm are shown.
ALGORITHM A
step 1.3
¸itMmax = ¸it¡1
Mmax + ±
where Mmax is the overloaded machine at iteration (it ¡ 1), and ± is a suitable positive
constant value. The parameter ± is decreased if there is not solution improvement over a
certain number of iterations.
step 2.1
The overloaded machine Mmax have the possibility to trade one of its assigned jobs at time.
The rejected job Jj will have to choose a di®erent machine. This decision is taken solving
= maxg,
the job related subproblem Pj (¸), on the machine set SM = fMi ; i = 1; : : : ; m; i 6
bargaining
8i 2 SM and ¸Mmax = +1. The re{assignment of job Jj is really
where ¸i = ¸i
performed, only if the new job assignment yields a decreasing of the maximum machine
¤
¤
completion time Cmax
. Note that, in this case, Cmax
must be re-computed, in order to ¯nd
the new overloaded machine. Step 2.1 is stopped when the current overloaded machine
cannot improve the solution.
ALGORITHM B
step 1.3
The multipliers updating is performed in according to algorithm A.
step 2.1
The following two steps are iteratively repeated.
step a
Step 2.1 of the algorithm A is performed.
9
Algorithm
Phase 0 (Initialization)
¤
:= 1.
it = 0; ¸iti = 1=m 8i = 1; : : : ; m; Cmax
Phase 1 (bargaining phase)
input ¸it ;
output the job assignment xbargaining , the related multipliers vector ¸bargaining ;
while (it ∙ ITmax ) do
begin
Step 1.1
input ¸it ;
output a job assignment xit ;
Each job j solves the related sub{problem Pj (¸it ) choosing a machine k and sets
xitjk = 1;
Step 1.2;
input a job assignment xit ;
¤
it
output Cmax
, the overloaded machine Mmax
;
Each machine i computes its completion time Ciit ;
it
Cmax
is computed by machine information exchanges;
¤
¤
it
it
if (Cmax
> Cmax
) then Cmax
:= Cmax
, xbargaining := xit , ¸bargaining := ¸it ;
it := it + 1;
Step 1.3;
input ¸it¡1 ;
output ¸it ;
Lagrangian multipliers updating;
end
Phase 2 (re{contracting phase)
input the job assignment xbargaining , ¸bargaining ;
output the job assignment xre¡contracting ;
Step 2.1;
The overloaded machine Mmax contracts with the other machines in order to balance
the workloads and to improve the current solution.
Figure 2: The basic algorithm scheme.
10
step b
Step a is repeated, in which the machine that trades the jobs is M2max , i.e., the
=
machine having the second highest completion time, and SM = fMi ; i = 1; : : : ; m; i 6
max; 2maxg. The re{contracting phase is stopped when no job re{assignment, in steps a
and b, is really performed.
ALGORITHM C
step 1.3
The multipliers updating is performed in according to algorithm A.
step 2.1
The overloaded machine Mmax trades one of its assigned jobs at time. The rejected job
Jj chooses the machine Mi , i 6
= max, for which is minimum the value Ci + pij . Machine
¤
¤
Mi accepts the job only if Ci + pij < Cmax
, where Cmax
is the best current solution. The
re{contracting phase is stopped when the current overloaded machine tries to reject all
its jobs without solution improvement.
ALGORITHM D
step 1.3
Lagrangian multipliers are updated, in according to a subgradient method, (see relations
(20) and (19)). In particular, a subgradient s of the Lagrangian function at the generic
iteration it, is:
siti =
n
X
j=1
it¡1
¡Cmax
pij xit¡1
ij
i = 1; : : : ; m;
(21)
it¡1
are obtained solving the Lagrangian problem L(¸it¡1 ).
where xit¡1 and Cmax
step 2.1
It is performed in according to algorithm C.
PH ASE 1
Mul tip lier s
upd atin g
ALG ORI THM A
ALG ORI THM B
ALG ORI THM C
Info rma tion :
Info rma tion :
Info rma tion :
it
it
, M max
C max
Up date :
*
λ itM max , C max
PH ASE 2
Info rma tion :
it
it
, M max
C max
Re -con tra ctin g
Up date :
*
C max
it
it
, M max
C max
Up date :
*
λ itM max , C max
it
it
, M max
C max
Up date :
*
λ itM max , C max
ALG ORI THM D
Info rma tion :
it
, C iit i = 1,...,m
C max
sub gra dien t s
Up date :
*
λ iti i = 1,...,m C max
Info rma tion :
Info rma tion :
Info rma tion :
it
it
, M max
,
C max
it
, C iit i = 1,...,m
C max
it
, C iit i = 1,...,m
C max
it
M max
Up date :
it
M max
Up date :
C 2it max ,M 2it max
Up date :
*
C max
*
C max
*
C max
Table 1: The information degrees of the distributed algorithms.
In Table 1, the information degree required by the di®erent algorithms is shown. From
algorithm A to algorithm D an increasing information is required. For example, the
algorithm A, in phase 2, only requires to know the overloaded machine Mmax , while the
algorithm B needs information about the two overloaded machines Mmax and M2max , and
the algorithms C and D require information about the completion times of all machines.
11
6
Simulation Analysis
The four algorithms presented in Section 5 have been coded in the C computer language
and their performances have been evaluated using a single PC Pentium II, 64 Mb of
RAM, 200 Mhz, on two sets of instances. For each set, small and large instances have
been generated. In the small instances, problems with m = 3, 4, and 5, and n = 10, 20,
30, 40, and 50, while, in the large instances, problems with m = 10, 20, and 30 and n =
50, 75, 100, 125 and 150 have been respectively considered. For each combination of n
and m, 20 instances have been generated and, hence, 600 instances have been tested for
each set.
In order to quantitatively evaluate the performances of the algorithms, we compared
them with a Lagrangian lower bound computed by means a subgradient method. The
algorithms have been compared also with an improved version of the best heuristic proposed by Ibarra and Kim [15] (see the computational study of De and Morton [6]).
The basic steps of the heuristic of Ibarra and Kim are the following:
1 Find j ¤ ; i¤ = arg mini;j fCi + pij g, where Ci is the current partial completion time
of machine i.
2 Schedule job j ¤ next on machine i¤ ; Ci¤ = Ci¤ + pi¤ j ¤ .
The above algorithm has been included in a iterative procedure, associating a weight wi
to each machine, and repeating at each iteration the above steps 1 and 2, in which j ¤ ; i¤ =
argmini;j fwi (Ci + pij )g. The weights wi , play the rule of the Lagrangian multipliers of
algorithm of Figure 2, and are modi¯ed to perturb the current solution, trying to balance
machine workloads. In particular, at each iteration, the weight related to the overloaded
machine is increased of a suitable constant positive value ±. At the end of the iterative
procedure, the re-contracting phase of algorithms C and D is executed. In the following
this algorithm will be labeled with E.
In Tables 2 and 3, the simulation results on the small instances related to the ¯rst
and the second set are respectively reported. Similarly, Tables 4 and 5 show results on
the large instances related to the ¯rst and the second set. The ¯rst column of each table
contains the dimension of the problem, i.e. the number of the machines m and of the
jobs n. GAP is the optimality gap (computed as GAP = 100 ¤ (SolALG ¡ LB)=LB,
where SolALG is the solution of the related heuristic algorithm, LB is the Lagrangian
Lower Bound computed by means a subgradient method) and tav is the computational
time in seconds. In columns 2{3, 4{5, and 6{7 the computational results related to the
distributed algorithms A, C and D are respectively reported. In the last two columns
the performance of algorithm E is reported. Note that GAP and tav are average values
over the 20 generated instances. The GAP values in bold are related to the best results.
It is possible to note that the algorithm C presents a good behavior in almost all
the instances respect to both the other distributed algorithms and the algorithm E. The
most di±cult problems for the distributed heuristics are the large instances related to the
12
second set (see Table 5), in which algorithm E ¯nds the best solution in almost all the
cases. The distributed algorithms have been designed for solving problems in which the
processing times pij for i = 1; : : : ; m are really related to the particular machine. Hence, a
worse performance of the algorithms from A to Drespect to E was expected in the second
set of instances, in which the standard deviation of job length is of 10 %. However, the
algorithm C, also in this case, ¯nds a satisfactory solution in a reduced amount of time.
In Figure 3 and 4, a comparison of the optimality gaps related to algorithms A and
B is reported. Obviously, since more information is involved in algorithm B, A has, in
general, a better performance. However, it is interesting to note that the two algorithms
have a very similar behavior on the instances of the second set (Figure 4).
In Figures 5 and 6, solutions, on the large instances, of algorithms C (represented
by the black bars) and E (represented by the gray bars) related to phases 1 and 2 are
reported. For each problem dimension, the ¯rst two bars are related to the solutions of
the phase 1, and the last two bars are related to phase 2. Note that the re-contracting
phase (i.e., phase 2) signi¯cantly improves the solution.
A
m¡n
3 - 10
3 - 20
3 - 30
3 - 40
3 - 50
4 - 10
4 - 20
4 - 30
4 - 40
4 - 50
5 - 10
5 - 20
5 - 30
5 - 40
5 - 50
GAP
18.99
7.61
5.03
2.84
2.55
36.0
12.7
7.45
6.45
4.86
52.01
20.05
14.25
8.45
8.17
C
tav
0.01
0.03
0.04
0.04
0.06
0.01
0.03
0.04
0.06
0.08
0.02
0.04
0.05
0.06
0.10
GAP
18.99
7.01
5.24
3.05
2.83
29.97
12.36
7.17
5.85
4.84
48.89
19.32
12.72
8.34
7.60
D
tav
0.02
0.02
0.03
0.06
0.07
0.02
0.04
0.04
0.07
0.08
0.02
0.03
0.05
0.06
0.09
GAP
19
7.22
5.24
3.55
2.96
29.06
14.06
6.9
5.83
5.06
48.74
21.18
12.35
8.6
7.97
E
tav
0.01
0.01
0.02
0.02
0.04
0.02
0.03
0.03
0.04
0.05
0.02
0.03
0.04
0.05
0.07
GAP
18.36
7.71
5.51
3.86
2.67
26.23
13.24
7.14
6.74
5.56
49.52
19.48
12.93
8.56
7.84
tav
0.02
0.07
0.14
0.25
0.37
0.02
0.07
0.05
0.29
0.46
0.03
0.1
0.19
0.31
0.56
Table 2: Results on small uniform distributed instances
7
Conclusions
Distributed decision algorithms for the problem of scheduling jobs on unrelated parallel
machines, minimizing the maximum machine completion time, with the hypothesis of
partial information availability, have been presented. The simulation study, presented in
Section 6, shows that the proposed approach is competitive, both for solution quality and
for the reduced computation times, respect to a centralized one.
The distributed algorithms are characterized by a high degree of °exibility and robustness to system changes. They can be modi¯ed, without signi¯cant changes, to consider
release dates of the jobs and to handle machine failures. In fact, since, no centralized
information about the number of the jobs to be processed is required, when new jobs are
13
m¡n
3 - 10
3 - 20
3 - 30
3 - 40
3 - 50
4 - 10
4 - 20
4 - 30
4 - 40
4 - 50
5 - 10
5 - 20
5 - 30
5 - 40
5 - 50
A
GAP
7.38
3.18
1.91
1.63
1.18
12.10
5.14
3.10
2.34
1.83
18.84
7.47
4.15
3.19
2.41
tav
0.03
0.06
0.13
0.23
0.35
0.03
0.07
0.17
0.27
0.43
0.03
0.08
0.18
0.03
0.05
C
GAP
8.45
2.74
1.41
0.83
0.70
10.61
5.35
2.60
2.06
1.39
21.84
7.13
3.70
2.37
1.75
tav
0.02
0.03
0.03
0.04
0.05
0.01
0.03
0.04
0.06
0.06
0.01
0.03
0.04
0.06
0.07
D
GAP
7.41
3.90
1.98
1.31
0.99
13.63
5.08
2.53
2.19
1.56
22.57
8.08
3.70
2.70
2.27
tav
0.01
0.01
0.02
0.03
0.04
0.02
0.02
0.03
0.04
0.05
0.01
0.02
0.03
0.03
0.06
E
GAP
12.08
3.76
2.42
1.13
1.07
19.02
8.46
4.69
2.84
2.20
29.04
13.03
5.95
5.84
3.15
tav
0.01
0.02
0.03
0.04
0.05
0.01
0.02
0.04
0.04
0.06
0.01
0.03
0.04
0.06
0.08
Table 3: Results on small 10% instances
A
m¡n
10 - 50
10 - 75
10 - 100
10 - 125
10 - 150
20 - 50
20 - 75
20 - 100
20 - 125
20 - 150
30 - 50
30 - 75
30 - 100
30 - 125
30 - 150
GAP
20.98
15.4
8.93
7.16
5.39
63.76
39.8
29.68
25.75
15.69
118.36
71.34
52.57
39.54
30.28
C
tav
0.16
0.23
0.32
0.41
0.48
0.29
0.46
0.66
0.77
0.97
0.43
0.62
0.93
1.22
1.41
GAP
19.02
15.24
8.08
7.12
5.16
59.21
34.59
26.28
23.75
14.12
116.8
69.43
49.36
33.79
27.76
D
tav
0.16
0.23
0.31
0.42
0.47
0.31
0.52
0.66
0.78
0.98
0.41
0.64
0.93
1.24
1.66
GAP
19.74
15.72
9.32
7.57
6.91
68.69
39.8
30.53
25.08
18.26
124.6
76.43
57.15
41.77
30.91
E
tav
0.13
0.18
0.25
0.32
0.38
0.26
0.41
0.54
0.63
0.75
0.39
0.59
0.81
1.1
1.21
GAP
20.67
14.84
10.5
8.52
7.30
64.52
37.61
30.95
25.92
18.97
120.7
80.25
53.49
37.78
26.81
tav
0.99
1.99
3.71
5.83
8.35
1.57
3.47
7.4
11.9
16.92
2.1
5.07
10.32
10.36
23.63
Table 4: Results on large uniform distributed instances
14
A
m¡n
10 - 50
10 - 75
10 - 100
10 - 125
10 - 150
20 - 50
20 - 75
20 - 100
20 - 125
20 - 150
30 - 50
30 - 75
30 - 100
30 - 125
30 - 150
GAP
18.64
13.68
8.68
7.18
4.70
50.20
33.28
21.44
16.74
16.04
79.25
49.14
41.71
29.40
26.03
C
tav
0.13
0.19
0.26
0.33
0.39
0.25
0.35
0.50
0.60
0.74
0.36
0.52
0.72
0.90
1.06
GAP
7.10
3.90
2.51
1.97
1.45
22.89
13.92
8.93
5.89
5.22
26.36
26.36
16.95
13.33
8.60
D
tav
0.13
0.19
0.27
0.35
0.38
0.26
0.4
0.53
0.65
0.93
0.36
0.6
0.86
1.02
1.25
GAP
7.54
4.38
2.67
2.32
1.85
28.22
14.62
9.53
7.85
6.02
27.7
30.15
16.4
14.83
11.36
E
tav
0.12
0.18
0.22
0.27
0.3
0.22
0.32
0.41
0.53
0.58
0.32
0.5
0.6
0.76
0.9
GAP
5.85
3.76
2.76
2.08
1.78
19.95
11.46
6.97
5.80
3.87
19.87
22.22
10.05
8.76
7.92
tav
0.96
2.05
3.67
6.01
8.45
1.5
3.38
7.97
11.43
15.85
2.03
4.84
9.51
16.03
21.47
Table 5: Results on large 10% instances
30
25
20
G A P _A
15
G A P _B
10
5
0
3 -10
3 -20
3 -30
3 -40
3 -50
4 -10
4 -20
4 -30
4 -40
4 -50
5 -10
5 -20
5 -30
5 -40
5 -50
Figure 3: Gap values for algorithms A and B in uniform instances.
15
60
50
40
G A P_ A
30
G A P_ B
20
10
0
3- 10
3- 20
3- 30
3- 40
3- 50
4- 10
4- 20
4- 30
4- 40
4- 50
5- 10
5- 20
5- 30
5- 40
5- 50
Figure 4: Gap values for algorithms A and B in 10% deviation instances.
1 60
1 40
1 20
1 00
E _ P h a s e1
C_ P h a s e1
80
E _ P h a s e2
C_ P h a s e2
60
40
20
0
10 - 50
10 - 75
1 0- 1 0 0
1 0- 1 2 5
1 0- 1 5 0
20 - 50
20 - 75
2 0- 1 0 0
2 0- 1 2 5
2 0- 1 5 0
30 - 50
30 - 75
3 0- 1 0 0
3 0- 1 2 5
3 0- 1 5 0
Figure 5: Objective function values for algorithms E and C in uniform instances.
16
9 00
8 00
7 00
6 00
E _ P ha s e1
5 00
C_ P ha s e1
E _ P ha s e2
4 00
C_ P ha s e2
3 00
2 00
1 00
0
10 - 50
10 - 75
1 0-1 00
1 0-1 25
1 0-1 50
20 - 50
20 - 75
2 0-1 00
2 0-1 25
2 0-1 50
30 - 50
30 - 75
3 0-1 00
3 0-1 25
3 0-1 50
Figure 6: Objective function values for algorithms E and C in 10% deviation instances.
ready, new sub{problems are generated and a new bargaining phase and re{contracting
phase can start. When a machine failure occurs, the jobs assigned to the broken machine
can be considered as new arrivals in the system, in which the Lagrangian multiplier related to the broken machine is set to a suitable large value. Hence, the proposed approach
can be used also to solve real-time problems.
References
[1] Adacher, L., Agnetis, A., and Meloni, C. (2000). Autonomous agents architectures
and algorithms in °exible manufacturing systems. IIE Transaction, 32, (10), 941{951.
[2] Arbib, C. and Rossi, F. (1999). Optimal Resource Assignment through negotiation
in a Multi{agent Manufacturing System. IIE Transaction, to appear.
[3] Bongaerts L., Wyns J., Detand J., Van Brussel H., Valckenaers P.,(1996), Identi¯cation of Manufacturing Holons, Preprints of European Workshop on Agent-Oriented
Systems in Manufacturing, Berlin.
[4] Chen, B. and Vestjens, A.P.A. (1997). Scheduling on identical machines: How good
is LPT in an on{line setting? Operations research letters, 21, 165.
[5] Davis, E. and Ja®e, J.M. (1981). Algorithms for scheduling tasks on unrelated parallel
processors. Journal of the Association for Computing Machinery, 28, 721{736.
[6] De, P. and Morton, T.E. (1980). Scheduling to minimize the makespan on unequal
parallel processors. Decision Science, 11, 586{602.
17
[7] Della Croce, F., Menga, G., Tadei, R., Cavalotto, M. and Petri, L. (1993). Cellular
control of manufacturing systems. European Journal of Operational Research 69, 498{
509.
[8] Fisher, M.L. (1981). The Lagrangian relaxation method for solving integer programming. Management Science 27, 1, 1{18.
[9] Fisher, M.L. (1985). An applications oriented guide to Lagrangian relaxation. Interfaces 15, 10{21.
[10] Graham, R.L. (1969). Bounds on multiprocessing timing anomalies. SIAM J. Appl.
Math. 17, 416{429.
[11] Graves, S.C. (1982). Using Lagrangean Techniques to solve Hierarchical Production
Planning Problems. Management Science 28, 260{275.
[12] Gou,L., Luh, P.B. and Kyoya, Y. (1998). Holonic Manufacturing Scheduling: Architecture, Cooperation Mechanism, and Implementation. Computers in Industry, 27,
213{231.
[13] Kutanoglu,E., Wu, S.D. (1999).On combinatorial auction and Lagrangean relaxation
for distributed resource scheduling. IIE Transactions, 31, 813{826.
[14] M. Held, R.M. Karp, The traveling salesman problem and minimun spanning trees:
Part II, Mathematical Programming, 1, 62{88, 1971.
[15] Ibarra, O.H. and Kim, C.E. (1977). Heuristic algorithms for scheduling independent
tasks on nonidentical processors. J. Assoc. Comput.Mach. 24, 280{289.
[16] Lenstra, J.K., A.H.G. Rinnooy Kan and Brucker, P. (1987). Complexity of machine
scheduling unrelated parallel machines. Annals of Discrete Mathematics 1, 343{362.
[17] Lenstra, J.K., Shmoys, D.B. and Tardos, E. (1990). Approximation algorithms for
scheduling unrelated parallel machines. Mathematical Programming 46, 259{271.
[18] Lin, G.Y., Solberg, J.J. (1992). Integrated shop °oor control using autonomous
agents. IIE transactions, 24, 57{71.
[19] Lucertini, M., Nicolµo, F. and Smriglio, S. (2000) Assignment and sequencing of parts
by Autonomous workstations, Control and Cybernetics, 29,(1).
[20] Luh, P.B. (1993). Scheduling of Manufacturing Systems Using the Lagrangian Relaxation Technique. IEEE Trans. Automatic Control, 38, (6), 1066{1080.
[21] Malone T.W., Smith S.A.,(1988) Modeling the performance of organizational structures, Operation Research, (36), 3, 421-436.
[22] Maturana F.P., Norrie D.H.,(1996) Multi-agent Mediator architecture for distributed
manufacturing, Journal of Intelligent Manufacturing, 7, 257-270.
18
[23] Polyak, B.T. (1969), Minimization of unsmooth functionals, USSR Comput. Math.,
and Math. and Physics, 9,14{29.
[24] Potts, C.N. (1985). Analysis of linear programming heuristic for scheduling unrelated
parallel machines. Discrete Appl. Math. 10, 155{164.
[25] Ramos C., Sousa P.,(1996). Scheduling Orders in Manufacturing Systems using a
Holonic approach, in Pre-proceedings of European Workshop on Agent-oriented Systems in Manufacturing, Berlin.
[26] Sousa P., Ramos C.,(1999). A distributed architecture and negotiation protocol for
scheduling in manufacturing systems. Computers in Industry,(38), 2, 103-113.
[27] Smith R.G., 1980, The contract Net protocol: High Level Communication and Control in a Distributed Problem Solver, in IEEE transactions on computers,(12), 11041113.
[28] Van de Velde, S. L. (1991). Machine scheduling and Lagrangian relaxation, 4, CWI,
Amsterdam.
19
© Copyright 2026 Paperzz