CHINESE JOURNAL OF MECHANICAL ENGINEERING
Vol. 30, No. 2, 2017
10.1007/s10033-017-0083-7
Parallel Construction Heuristic Combined with Constraint Propagation
for the Car Sequencing Problem
ZHANG Xiangyang1,2,*, GAO Liang2, WEN Long2, and HUANG Zhaodong1,3
1 Faulty of Maritime and Transportation, Ningbo University, Ningbo 315211, China
2 State Key Lab. of Digital Manufacturing Equipment & Technology,
Huazhong University of Science & Technology, Wuhan 430074, China
3 National Traffic Management Engineering & Technology Research Centre Ningbo University Sub-center,
Ningbo University, Ningbo 315211, China
Received November 16, 2016; revised December 26, 2016; accepted January 5, 2017
Abstract: For the car sequencing (CS) problem, the drawbacks of the "sliding windows" technique used in the objective function have not been rectified, and no high-quality initial solution has been acquired to accelerate the improvement of solution quality. Firstly, the objective function is improved to resolve the widely discussed double and bias counting of violations. Then, a new method combining heuristic with constraint propagation is proposed, which constructs initial solutions under a parallel framework. Based on constraint propagation, three filtering rules are designed to work with three greedy functions, so the variable domain is narrowed during the construction. The parallel framework demonstrates robustness in terms of solution quality, since it greatly increases the chance of obtaining the best solution. In the computational experiments, 109 instances from 3 sets of the CSPLib benchmarks are used to test the performance of the proposed method. Experimental results show that the proposed method outperforms others in acquiring the best-known results, as 85 of the 109 best-known results are obtained with only one construction. The proposed research provides an avenue to remedy the deficiencies of the "sliding windows" technique and to construct high-quality initial solutions.
Keywords: car sequencing problem, constraint propagation, parallel construction heuristic, filtering rule
Chinese Journal of Mechanical Engineering (2017) 30: *–*; DOI: 10.3901/***; published online January *, 2017
1 Introduction
The car sequencing (CS) problem was first described by PARRELLO, et al[1]. It involves determining an order of cars as an input for assembly line production; this sequence is also the result of the short-term planning of each day's production. A mixed-model assembly line allows various types of cars, each consisting of common components and different options, like a sunroof or air conditioner, to be assembled sequentially at different work stations. Each of these work stations is specially designed for the assembly of a particular option associated with certain types, and more time is usually spent on the
* Corresponding author. E-mail: [email protected]
Supported by National Natural Science Foundation of China(Grant
Nos. 51435009, 71302085); Zhejiang Provincial Natural Science
Foundation of China (Grant No.LQ14E080002); K. C. Wong Magna
Fund in Ningbo University
© Chinese Mechanical Engineering Society and Springer-Verlag Berlin Heidelberg 2017
assembly of the option than on that of the common components. When each position in a sequence is assigned one type of car, a sequence of cars is constructed by this successive assignment. However, the assigned sequence should not only meet diverse market requirements, but also satisfy the capacity constraints of the work stations. To minimize overload of the work stations assembling the same options, the associated types should not be assigned to successive positions in the sequence; otherwise, some labor-intensive options would not be assembled on time and the capacity constraints may be violated.
The CS problem aims to find a sequence of a fixed number of cars, i.e., an explicit assignment of car types to each position variable of the sequence, that meets all option capacity constraints. Usually, an option ratio constraint is used to express an option's capacity constraint (e.g., r_k/s_k stands for: no more than r_k cars having option k are allowed in any subsequence of s_k successive cars). To avoid ratio violations, cars of the same type (identical cars with the
same options) must be spaced adequately on the stations to
ensure their capacity is not exceeded. In real production,
however, overload is inevitable because more options have
to be assembled than the limited capacity number of
options. Hence, the main objective is to decrease these
violations. But when violations occur, conventional counter-measures are either (i) using additional utility workers to help or (ii) leaving the work until the end of the day[2]. Usually, a mathematical model is first constructed to embody the reality, and then a method is applied to solve the problem when confronted with large-scale instances.
The CS problem has been proven NP-hard by KIS[3]. It
was modeled as a constraint satisfaction problem(CSP) by
DREXL, et al[4]. CSP concerns assigning a set of values to
a set of variables from their respective domains, in order to
satisfy a set of constraints related to the variables. The
problem is referenced as Prob001 in the CSPLib[5], serving
as a typical benchmark for CSP solvers. The CSP version of
car sequencing is established as a decision-making problem,
aiming to get a feasible solution without violation or to
prove that there is no such solution. When no feasible solution can be found, the underlying decision problem is transformed into an optimization problem by relaxing these constraints and turning violations into objective function values[2]. However, the commonly employed objective
function based on the “sliding windows” technique has
intrinsic flaws[2, 6], which will be discussed in the next
section. The optimization version has been extended to several variants by adding other related constraints and optimization objectives. One of these considers color constraints in a paint shop, aiming to decrease color changes so as to conserve the solvent needed to wash the spray gun. More details about this multi-objective optimization problem, the ROADEF'2005 challenge, and methods of solving it are given by SOLNON, et al[6].
integrate part usage constraints[4], level scheduling[7], buffer
constraints[8] and smooth constraints[9], or involve
two-sided assembly lines[10] and energy consumption[11].
In the published literature, exact methods are an important direction in solving the CS problem. Related research can be divided into CSP approaches and other exact approaches, although there is no clear boundary between them. For CSP, the constraint-handling principle with domain reduction and constraint propagation is critical. This is because the evaluation of one variable not only reduces the domains of the associated variables through systematic consistency checking, but this domain reduction is also propagated to their associated variables in the tree search space. Thus, this constraint-handling principle effectively discards most unpromising solutions early in the search[12-13]. RÉGIN, et al[14], proposed a global filtering algorithm that acquired zero-violation solutions for some difficult instances or proved the nonexistence of such solutions. BUTARU, et al[15], proposed a search algorithm based on forward checking for the CS problem, using the fail-first and success-first strategies in variable and value selection respectively. GRAVEL, et al[13], pointed out that the efficiency of CSP approaches decreases with problem size and difficulty. The other exact approaches usually use certain delimitation strategies to reduce the solution space effectively and find the optimal solution for small instances.
Integral linear program(ILP)[13], beam search(BS)[16],
branch & bound(B&B)[2] yielded some better solutions in
acceptable time. Recently, GOLLE, et al[17], proposed an
iterative beam search(IBS) algorithm that solves nearly all
known zero violation instances in the benchmark sets.
Their algorithm is superior to other exact methods based on
their new lower bound rule. However, for other instances,
the time cost exceeds one minute. Exact methods are very
effective in small or easy instances, but the performance is
not good for more difficult or larger instances.
In addition to exact methods, various heuristics have been studied that provide an opportunistic way to search for approximately optimal solutions. Among them, the initial solution is either randomly generated or constructed using certain strategies. For solution initialization, GOTTLIEB, et al[18], compared six different heuristics based on option utilization rates and pointed out that the initial solution can significantly improve the solution process. They found DSU (dynamic sum of utilization rates, expressing the difficulty of adding a certain type after the partial sequence created without violation) to be the best heuristic. Ant colony optimization (ACO)[18-19] is an iterative construction method for this problem. SOLNON[20] put forward a particularly effective ACO in which two different pheromone structures were integrated to select the most promising candidate solution. One was for learning from "good" solutions acquired; the other was for learning "critical" cars, the cars that are difficult to assign without violating the associated option constraints. In addition, the greedy randomized adaptive search procedure (GRASP)[21] used a multi-start construction framework for the CS problem, but the results showed that it is not competitive.
Besides construction heuristics, other heuristics usually improve initial solutions iteratively with a hybrid strategy that takes full advantage of different search methods. This gives them superiority over single heuristics or exact methods. PERRON, et al[22], first demonstrated a portfolio of algorithms for the CS problem. Given the diversity and intensity of search methods, they cannot be in perfect harmony under the same search principle; however, it is not complicated to coordinate two or more complementary methods to exploit their specialties. Actually, the more competitive algorithms are usually a combination of different heuristics, or of an exact method with a heuristic. ZINFLOU, et al[23], proposed a genetic algorithm that incorporates a crossover using an integral linear programming model (ILPGA) for the solution construction. The hybrid GA outperforms others, although it costs more time. ESTELLON, et al[24], presented an improvement on large neighborhood search (LNS) for the CS problem. The improved LNS consists of very fast local search (VFLS) and very large neighborhood search (VLNS), both of which incorporate a special integral programming (IP) technique. The hybrid LNS outperforms VFLS, and the latter won the ROADEF'2005 challenge[25]. PRANDTSTETTER, et al[26], proposed a hybrid algorithm that combines integral linear programming and variable neighborhood search (ILP+VNS) for the problem. They claimed that the hybrid algorithm outperforms other approaches in the challenge, even VNS/ILS[27], which won the second prize. Other algorithms have likewise shown the efficiency of combination for the CS problem. THIRUVADY, et al[28], proposed an algorithm consisting of constraint programming, beam search and ACO, and the algorithm is better than any other combination. Recently, they tried a hybrid algorithm named Lagrangian-ACO[29], which even outperforms the method of PRANDTSTETTER, et al[26]. ARTIGUES, et al[30], proposed a combination of two exact methods, SAT and CP, and proved its efficiency. SIALA, et al[31], investigated all combinations of four heuristics and designed a filtering rule to prune the solution space. They proved that specific heuristics are very important for efficiently solving the CS problem, and that the filtering rule is very efficient to implement in practice.
Although different complementary methods have shown their efficiency in solving the CS problem, no methods dedicated to generating high-quality initial solutions have been reported in the literature. Therefore, a simple and efficient heuristic combined with constraint propagation to construct initial solutions is necessary, and this research focuses mainly on that.
The organization of the paper is as follows. In section 2, we revise the mathematical model with an improved objective function. In section 3, we present three filtering rules and three greedy functions used in the solution construction. In section 4, we propose a parallel construction heuristic combined with constraint propagation for the CS problem. Computational results on benchmark instances are presented in section 5. Conclusions appear in section 6.
2 Problem and Mathematical Model
An instance of car sequencing is defined by a tuple ⟨T, O, V, d_v, r_k/s_k, a_vk⟩, where T denotes the number of cars to be arranged, O is the set of all possible options, V represents the set of types possessing the same options, and d_v is the demand of type v ∈ V, with Σ_{v∈V} d_v = T. The binary constant a_vk indicates whether type v ∈ V requires option k ∈ O, and the ratio constraint r_k/s_k means that no more than r_k cars may require option k in any subsequence of length s_k. The decision version of the CS problem introduces x_vt ∈ {0, 1}, v ∈ V, t = 1, …, T, a set of binary decision variables whose value is 1 if type v is assigned to position t and 0 otherwise. A solution is an assignment to the sequence of variables that satisfies all related constraints.
The mathematical model is shown as Eqs. (1)-(5):

Σ_{t=1}^{T} x_vt = d_v, ∀v ∈ V,  (1)

Σ_{v∈V} x_vt = 1, t = 1, …, T,  (2)

Σ_{t=j}^{j+s_k−1} Σ_{v∈V} a_vk x_vt ≤ r_k, ∀k ∈ O; j = 1, …, T − s_k + 1,  (3)

x_vt ∈ {0, 1}, ∀v ∈ V; t = 1, …, T,  (4)

a_vk ∈ {0, 1}, ∀v ∈ V; k ∈ O.  (5)
Eq. (1) shows that each type must meet its demand. Eq. (2) indicates that exactly one type is assigned to every position of the solution sequence. Eq. (3) requires that every subsequence [j, j + s_k − 1] of the associated option k ∈ O satisfies its ratio constraint; there is a total of Σ_{k∈O} (T − s_k + 1) such subsequences.
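To make the model concrete, the decision version can be sketched in a few lines of Python. The instance data below (RATIOS, A, and the demand passed to the function) is illustrative only, not taken from the paper's benchmarks:

```python
from collections import Counter

RATIOS = [(2, 5), (1, 3), (1, 2)]               # (r_k, s_k) per option k
A = {1: (1, 0, 1), 2: (1, 1, 0), 4: (1, 0, 0)}  # a_vk: option vector per type

def is_feasible(seq, demand, ratios=RATIOS, a=A):
    """Decision version of CS, Eqs. (1)-(3): the sequence must meet every
    type's demand, and no window of s_k consecutive cars may carry option k
    more than r_k times.  Eq. (2) holds by construction, since a Python list
    assigns exactly one type to each position."""
    if Counter(seq) != Counter(demand):          # Eq. (1): demand per type
        return False
    for k, (r, s) in enumerate(ratios):          # Eq. (3): sliding windows
        for j in range(len(seq) - s + 1):
            if sum(a[v][k] for v in seq[j:j + s]) > r:
                return False
    return True
```

Any permutation of the demanded cars satisfies Eqs. (1) and (2); only the window constraint of Eq. (3) distinguishes good orders from bad ones.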
An optimized version of CS is usually formed by adding
an objective function as Eq. (6), which is derived from Eq.
(3), and the objective value equals the sum of all violated
subsequences for all options.
min Σ_{k∈O} Σ_{t=s_k}^{T} min( max( 0, ( Σ_{t′=t−s_k+1}^{t} Σ_{v∈V} a_vk x_vt′ ) − r_k ), 1 ).  (6)
Eq. (6) uses the “sliding windows” technique to evaluate
objective value, but the flaws of the technique are obvious
in its double and bias counting of violations, which is
discussed by FLIEDNER, et al[2]. Double counting means
that one option usually appears in more than one
subsequence, so one option may yield more than one
violation. Additionally, bias counting means that a car in the middle of the sequence is involved in more subsequences than one at either end, so more violations associated with that car are counted. In addition, these defects of the "sliding windows" technique make the economic meaning of the violation count ambiguous, because it does not reflect the true violation expenditure. GAGNÉ, et al[19], and BOYSEN, et al[32], however, have made valuable contributions to rationalizing the counting rule.
To remedy these flaws, we devised an improved objective function as per Eq. (7), by introducing a set of new binary violation variables z_kt ∈ {0, 1}, ∀k ∈ O, t = 1, …, T, used in Eqs. (8)-(10). Every new variable is associated with one position and counted as a violation based on three conditions. First, the type assigned to the position must carry option k, so that the new variable can relate to it. Second, the number of occurrences of the option in the subsequence of the current "sliding window" must exceed the number allowed without violation. Lastly, in the same subsequence the excess of the option must be larger than the sum of the already counted violations of the related option; this also means a violating option cannot be counted twice. Thus the new optimization model consists of Eqs. (1), (2), (4), (5) and (7)-(10). The improved objective function now computes violations dynamically based on options instead of subsequences. Not only does it escape double and bias counting, but it also accurately reflects the economic meaning of the objective value: the number of utility workers needed to deal with the violating cars.
min Σ_{k∈O} Σ_{t=1}^{T} z_kt,  (7)

z_kt = max( 0, Σ_{t′=max(1, t−s_k+1)}^{t} Σ_{v∈V} a_vk x_vt′ − r_k − Σ_{t′=max(1, t−s_k+1)}^{t−1} z_kt′ ) · Σ_{v∈V} a_vk x_vt, k ∈ O; t = r_k + 1, …, T,  (8)

z_kt ∈ {0, 1}, k ∈ O; t = r_k + 1, …, T,  (9)

z_kt = 0, k ∈ O; t = 1, …, r_k.  (10)
The improved objective function removes the double and bias counting of violations. Take Table 1 for instance. The solution sequence consists of 12 cars of 5 types and 3 options whose ratio constraints are 2/5, 1/3 and 1/2. The numbers in Row 1 denote the different types. The next 3 rows use the sign * to denote the type-option relationship, meaning the type possesses the option. Rows 5-7 show whether a violation occurs for each option's subsequence, where 1 means a violation occurs according to Eq. (6). Rows 8-10, however, give each position a value of 0 or 1 depending on whether it results in a new violation, with violations counted by Eq. (8). For example, for constraint 2/5, 8 out of the 12 cars have the option and 7 subsequences violate the constraint in Row 5, but there are only 3 violations in Row 8. Theoretically, the true violation count approaches a lower bound, which equals 8 − [2·floor(12/5) + min(mod(12, 5), 2)] = 2. Thus an extra 3 person-time units of utility workers are required. Moreover, bias counting of violations is also eliminated, since each position variable has only one chance to be counted as a violation of the associated option.
Table 1. Violations using traditional and improved objective functions for a sequence

[Table: a 12-car sequence of the 5 types (Row 1); Rows 2-4 mark with * the types carrying the options with ratio constraints 2/5, 1/3 and 1/2; Rows 5-7 list the per-window violations of Eq. (6) for each option; Rows 8-10 list the per-position violations of Eq. (8).]
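The gap between the two counting schemes can be reproduced with a small sketch. The one-option instance below is illustrative, not the 12-car example of Table 1:

```python
RATIOS = [(1, 2)]                  # a single option with ratio constraint 1/2
A = {"opt": (1,), "plain": (0,)}   # illustrative types: with / without the option

def window_violations(seq, ratios=RATIOS, a=A):
    """Traditional objective, Eq. (6): one unit per violated sliding window."""
    total = 0
    for k, (r, s) in enumerate(ratios):
        for j in range(len(seq) - s + 1):
            total += min(1, max(0, sum(a[v][k] for v in seq[j:j + s]) - r))
    return total

def improved_violations(seq, ratios=RATIOS, a=A):
    """Improved objective, Eqs. (7)-(10): a position is charged for option k
    only if its car carries the option and the window's excess over r_k is
    not already covered by violations counted earlier in the same window."""
    total = 0
    for k, (r, s) in enumerate(ratios):
        z = [0] * len(seq)                       # z_kt for this option
        for t in range(len(seq)):
            if t < r or a[seq[t]][k] == 0:       # Eq. (10); option test
                continue
            lo = max(0, t - s + 1)
            excess = sum(a[v][k] for v in seq[lo:t + 1]) - r
            z[t] = min(1, max(0, excess - sum(z[lo:t])))
        total += sum(z)
    return total

# four consecutive option cars: three violated windows, but only two extra cars
# window_violations(["opt"] * 4) -> 3; improved_violations(["opt"] * 4) -> 2
```

Four consecutive option cars exceed the 1/2 capacity by two cars, yet the window count reports three violations because the middle cars fall in two windows each; the improved count charges exactly the two excess cars.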
3 Three Filtering Rules Based on Constraint Propagation
Constraint propagation is beneficial in tackling CSP optimization. It achieves domain reduction by filtering out some values from the domains of the associated variables, and this constraint-handling mechanism is propagated throughout the related constraints[13]. Based on constraint propagation, filtering rules are embedded in heuristics to construct a solution to the problem[31].
In the process of construction, it is important to rule out undesired values from the variable domain. To ensure a series of constraints are satisfied, some types should no longer appear in the domain after an assignment. The assignment must satisfy not only the constraints of the arranged position variables, but also those of the unarranged position variables.
For example, in Table 1, the ratio constraint 2/5 means that the 12-car sequence holds at most 6 cars with the option: 2·floor(12/5) + min(mod(12, 5), 2) = 6. When exactly 6 such cars are to be put in the sequence, at least 2 of them must be arranged in the first 2 positions.
Three filtering rules effectively narrow the domain of the
current position variable in order to assign a type to the
variable. Firstly, when one type is used up, it will be ruled
out from the domain of the current variable. Secondly, with
constraint propagation, the assignment of certain type to the
current position may lead to violations of the remaining
positions. Therefore, it is necessary to exclude these types
in advance to keep the process theoretically feasible for the
unarranged cars. Thirdly, as the ratio constraint implies, the
assignment should satisfy the constraints of the following
subsequence. It seems reasonable because it can avoid
double counting of violations by ruling out the next
violation in advance. The filtering rules filter out some inferior solutions in advance and help select favorable ones, but randomized selection alone cannot obtain better solutions, especially for instances with tight constraints, because it lacks anticipation of the future arrangement.
The filtering rules are dynamically used as follows. After
the assignment of one variable, its domain must be updated
according to the first filtering rule, and then a greedy
function prescribed later updates the domain again,
selecting the types leading to the minimum violations of the
current subsequences. Then, the domain is updated again
according to the second rule, filtering out the violated types.
It means the domain must discard the types explicitly
leading to violation. It is possible to use the third filtering
rule to narrow the domain further. Two or more types may still be left after the three rules. Conversely, the last two filtering rules are skipped when none or only one type remains; the greedy functions are then ready to select one type to assign to the variable.
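As a rough sketch of how such domain filtering might be coded, the fragment below applies the first rule (exhausted types) together with an immediate window check; the helper names and data layout are assumptions, and the look-ahead bound used by the second rule is omitted:

```python
def filter_domain(domain, partial, remaining, ratios, a):
    """Drop types that are used up (first rule) and types whose assignment
    to the next position would overload the window of s_k cars ending there."""
    kept = []
    t = len(partial)                               # index of the next position
    for v in domain:
        if remaining.get(v, 0) == 0:               # first rule: type exhausted
            continue
        ok = True
        for k, (r, s) in enumerate(ratios):
            if not a[v][k]:
                continue                           # v does not carry option k
            window = partial[max(0, t - s + 1):t]  # last s-1 assigned cars
            if sum(a[u][k] for u in window) + 1 > r:
                ok = False                         # assigning v would violate
                break
        if ok:
            kept.append(v)
    return kept
```

The narrowed list is then handed to the greedy functions, which make the final choice among the surviving types.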
The first and the third rules are easy to operate. The second, however, is somewhat complicated, because the last subsequence of the partial solution influences the assignment of the adjacent variables, as shown in Fig. 1.
Fig. 1. Remaining options filtering rule with ratio constraint
In Fig. 1, the solution consists of the following parts: positions 1 to t − 1, the current position t, and the remaining positions t + 1 to T. Suppose the current position t is assigned a type having option k; it should be determined whether a violation occurs for the remaining cars. If the number of remaining positions is an exact integral multiple of the subsequence length s_k, the violations are easy to count, because the arrangement so far has no effect on them. Otherwise, let the remaining positions split into two parts. The right part consists of continuous subsequence blocks with total length floor((T − t)/s_k)·s_k, and the left part is an incomplete subsequence with length T − t − floor((T − t)/s_k)·s_k. The right part can hold at most floor((T − t)/s_k)·r_k option cars by putting r_k of them from right to left in each subsequence. The left part and some of the last arranged cars together constitute one subsequence, so the arranged cars limit the number of option cars the left part can hold. Combining the two parts, the remaining positions can hold at most the number of cars with the associated option expressed on the right-hand side of Eq. (11):
Σ_{t′=t+1}^{T} Σ_{v∈V} a_vk x_vt′ ≤ floor((T − t)/s_k) · r_k + min( mod(T − t, s_k), max( 0, r_k − Σ_{t′=max(1, t−s_k+mod(T−t, s_k)+1)}^{t} Σ_{v∈V} a_vk x_vt′ ) ), ∀k ∈ O.  (11)
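The right-hand side of Eq. (11) can be sketched directly: the unassigned tail splits into complete blocks of length s_k plus a leftover piece whose capacity is reduced by the option cars already placed in the window it completes. The data layout below is an assumption:

```python
def remaining_capacity(prefix, T, r, s, has_opt):
    """Upper bound of Eq. (11) on how many cars carrying option k the
    unassigned positions len(prefix)+1 .. T can still hold.  `has_opt[v]`
    is 1 if type v carries the option."""
    t = len(prefix)
    n = T - t                                # number of unassigned positions
    full, leftover = divmod(n, s)            # complete blocks + leftover piece
    # option cars already placed in the window the leftover piece completes
    lo = max(0, t - s + leftover)
    used = sum(has_opt[v] for v in prefix[lo:t])
    return full * r + min(leftover, max(0, r - used))
```

For the capacity example above (empty sequence, T = 12, constraint 2/5), the bound is 2·2 + min(2, 2) = 6, matching the count given in the text.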
4 Parallel Construction Heuristic
4.1 An initial solution construction
An initial solution construction is fundamental to the parallel solution construction. Three greedy functions working with the three filtering rules help to complete the construction, aiming to deal with the different outcomes of the filtering rules and to complete the final decision of each assignment in the process.
An initial solution is constructed from empty slots to a complete solution by successive value assignments to the position variables. The assignment follows the natural order of positions. One type is chosen arbitrarily and assigned to the first variable. For the other variables, the filtering rules must be used before each assignment. Three greedy functions are applied to deal with the different situations after a filtering rule. These functions further narrow the domain of the variable in a greedy fashion and help to make the decision, in a different way from the filtering rules.
There are three situations to deal with. The first is that several types are still left after a rule; in this case, the next filtering rule or a greedy function is applied to break the tie, since only one type will finally be arranged. The second is that only one type is left after one of the rules, so the current variable is assigned that type. The third situation is that no type satisfies the rule, so the domain remains unchanged and a greedy function is applied to select the type with the least violations for the assignment.
The construction utilizes three greedy functions to reduce the domain and help assign the variable correctly. The first greedy function GF(v, t) follows the first rule and sums up all the violations caused by the current car; it is used to select the types resulting in the minimum violations. Note that only the options possessed by the type can cause constraint violations, as Eq. (12) shows.
Both the second and the third greedy function are based on option utilization rates[18], i.e.,

UtilRate(k) = (s_k/r_k) · ( Σ_{v∈V} Σ_{t∈π} a_vk x_vt ) / |σ|,

a ratio relating the number of cars requiring option k to the number of cars that the sequence σ can accommodate while satisfying r_k/s_k. It shows the scheduling difficulty of the option in the sequence: the larger the value of UtilRate, the more difficult the problem becomes. It is static and denotes the difficulty level of the instance when σ is regarded as the sequence of all cars, or dynamic and denotes the difficulty of assigning the remaining cars when σ is regarded as the subsequence of the remaining cars.
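In code, the utilization rate and the dynamic sum used by the second greedy function might be sketched as follows (a sketch under an assumed data layout; the instance data in the examples is illustrative):

```python
def util_rate(k, sigma, ratios, a):
    """Option utilization rate: demand for option k among the cars in
    `sigma`, relative to the capacity r_k/s_k of a sequence of that length."""
    r, s = ratios[k]
    need = sum(a[v][k] for v in sigma)       # cars in sigma requiring option k
    return (s / r) * need / len(sigma)

def dsu(v, remaining, ratios, a):
    """Dynamic sum of utilization rates (DSU) for type v: add the rates of
    the options v carries, measured over the cars still to be sequenced."""
    return sum(util_rate(k, remaining, ratios, a)
               for k in range(len(ratios)) if a[v][k])
```

A rate of exactly 1 means the remaining cars saturate the option's capacity; above 1, violations are unavoidable, which is why the fail-first principle assigns the highest-DSU types early.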
The second greedy function SR(v, t) computes the dynamic sum of option utilization rates (DSU) of type v at position t, as shown in Eq. (13), or alternatively the static sum of option utilization rates (SSU[18]). It expresses the difficulty of assigning the type. According to the fail-first principle, the type with the greatest value is selected for the current variable. When more than one type is left after the second function, the third function HG(v, t) completes the assignment. This function aims to select the type with the greatest difficulty with respect to changing the assignment. As shown in Eq. (14), it expresses the change before and after the assignment. In the expression, σ1 and σ2 represent the arranged subsequence and the remaining one before the assignment, while σ1′ and σ2′ represent the corresponding ones after the assignment.
GF(v, t) = Σ_{k∈O} min( 1, max( 0, Σ_{t′=max(1, t−s_k+1)}^{t} Σ_{v′∈V} a_v′k x_v′t′ − r_k ) ) · a_vk, ∀v ∈ V; t ∈ {2, …, T},  (12)

SR(v, t) = Σ_{k∈O} a_vk · (s_k/r_k) · ( Σ_{v′∈V} Σ_{t′∈π} a_v′k x_v′t′ ) / |σ|, ∀v ∈ V; t ∈ {2, …, T},  (13)

HG(v, t) = Σ_{k∈O} [ ( UtilRate(k, σ1) − UtilRate(k, σ2) ) − ( UtilRate(k, σ2′) − UtilRate(k, σ1′) ) ] · a_vk x_vt, ∀v ∈ V; t ∈ {2, …, T}.  (14)

A heuristic is proposed to construct an initial solution. This heuristic combines the filtering rules and greedy functions in the construction. The pseudo-code of the heuristic (IniCH) is shown in Procedure 1.

Procedure 1. [solIni, obj] = IniCH(solIni, T, O)
1  solIni ← ∅;
2  solIni ← randSel(O);
3  for j = 2, …, T
4      O1 ← fil1(O, solIni);
5      O2 ← {v | v ∈ O1, GF(v, j) = min GF(·, j)};
6      if |O2| = 1
7          solIni ← [solIni, O2]; continue;
8      else
9          O3 ← fil2(O2, solIni);
10         if |O3| = 1
11             solIni ← [solIni, O3]; continue;
12         elseif |O3| = 0
13             O3 ← O2;
14         end
15         O4 ← fil3(O3, solIni);
16         if |O4| = 1
17             solIni ← [solIni, O4]; continue;
18         elseif |O4| = 0
19             O4 ← O3;
20         end
21         O5 ← {v | v ∈ O4, SR(v, j) = max SR(·, j)};
22         if |O5| = 1
23             solIni ← [solIni, O5]; continue;
24         end
25         O5 ← {v | v ∈ O5, HG(v, j) = max HG(·, j)};
26         solIni ← [solIni, randSel(O5)];
27     end
28 end
29 obj ← objFun(solIni);
30 end
4.2 Parallel construction heuristic
Parallelization of the construction can yield multiple solutions with one operation. This is beneficial for choosing the best one to improve in a later phase. In contrast to searching a complete search tree with an exact method, the parallelization scheme explores different subspaces to obtain approximate solutions in acceptable time, which greatly increases the chance of reaching the best solution compared with a single construction.
Parallel construction mainly embodies the robustness of
approaching the best-known objective value, compensating
partly for the shortcomings of the greedy heuristic in the
CS problem. In the schedule, though certain types satisfy
the constraints, they are rejected since only one type will be
finally selected. However, this arrangement in the early
scheduling can greatly limit the later assignment, which
easily leads to favoring some types and delaying the hard
choice. Therefore, violations easily accumulate and congest
in the later part of the sequence, which should be avoided
in the construction.
By compulsorily assigning a different type to the first position, parallel construction alleviates this weakness of a single initial solution construction. By putting each different type of car in turn in the first position of the sequence, the number of initial solutions equals the number of types. Thus, the parallel construction heuristic acts as a framework to construct as many initial solutions as there are types.
A parallel construction heuristic(ParalCH) is proposed to
generate different initial solutions and the pseudo-code is
given in Procedure 2. In line 1, all types take turns at
participating in the construction of the initial solution. At
the beginning of the construction, the solution is empty in
line 2. Each type is given a turn at the first position of the
solution in line 3. Then, each type’s construction continues
until the whole solution and its objective value are acquired
in line 4. This is completed in Procedure 2 by using the
IniCH function presented as in Procedure 1.
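The parallel framework itself reduces to a loop over seed types. A minimal skeleton, with `construct` standing in for IniCH seeded by a given first type, could be:

```python
def paral_ch(types, construct):
    """ParalCH skeleton: build one solution per possible first type and keep
    the one with the smallest objective value.  `construct(first)` plays the
    role of IniCH and must return a (sequence, objective) pair."""
    results = [construct(first) for first in types]
    return min(results, key=lambda res: res[1])
```

Because each seeded run shares no state with the others, the constructions also parallelize trivially across processes if wall-clock time matters.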
Procedure 2. [SOL, OBJ] = ParalCH(SOL, T, O)
1 for i = 1, …, |O|
2     SOL(i) ← ∅;
3     SOL(i)(1) ← i;
4     [SOL(i), OBJ(i)] ← IniCH(SOL(i), T, O);
5 end
6 end

5 Computational Experiments

5.1 Test suite and experimental setup
All instances come from the CSPLib (www.CSPLib.org) and are divided into three sets. Set 1 consists of 70 instances of different difficulty levels, ordered by their option utilization rates[15]. Every instance consists of 200 cars and 5 options combined to form 17 to 30 types, and can be solved without violation. Set 2 consists of 9 instances with 100 cars, 5 options and 19-26 types. Here, 4 of the 9 instances have solutions with zero violations, whereas the other 5 have no violation-free solution[13]. Besides the number of cars, option utilization rates are another important factor, as mentioned; some may exceed 0.95 or even equal 1. Set 3 contains 30 instances of large size (200, 300 and 400 cars), with the other characteristics as per Set 2, and is even more difficult to solve. Among the 30 instances in Set 3, 7 instances have zero-violation solutions. Our approach of Procedure 2 (ParalCH) is implemented in MATLAB 12.0 and tested on a Core™ i3 CPU at 2.13 GHz with 2.0 GB RAM. An approximate comparison was completed to discern any obvious variations.
The tests focus on the solution quality, computational effort, robustness and a comparison of the two objective function values. No parameters are required in ParalCH. To compare the performance of the dynamic and the static option utilization rates, ParalCH was combined with DSU and SSU respectively to form ParalCH(DSU) and ParalCH(SSU). The traditional objective function is used to compare with other methods, and the proposed objective function is evaluated on Set 3.
The methods compared with ParalCH are highly cited in the literature. They are exact methods, local search methods, or combinations of both. All use the CSPLib benchmarks for the CS problem.

5.2 Test solution and performance analysis
Set 1 is easy because the proposed heuristic gets a zero-violation solution for each instance, and the time cost is less than one second. Two points are worth noting. Firstly, the parallel scheme is unnecessary, because a zero-violation solution was easily achieved with IniCH (Procedure 1) with any type put in the first position. Secondly, both DSU and SSU used in the second greedy function SR(v, t) did not change the objective value.
In contrast with other methods[2, 13, 22, 26], ParalCH obtained all the best-known results, and the time cost is negligible. Although other methods claim that Set 1 is easy to solve, Table 2 shows the difference in average time cost (seconds) between our approach and the others.
Table 2.  Average time (seconds) of different methods for each instance of Set 1

Method          P60    P65    P70    P75    P80    P85    P90
Rand-LNS*[22]   0-9    1-10   1-9    2-11   2-19   2-27   1-92
ILP[13]         31.5   65.5   84.8   106.6  173.8  174.7  208.8
ACO*[13]        0.02   0.02   0.02   0.02   0.03   0.04   0.03
SB&B[2]         0.85   1      2.62   4.73   2.62   5.96   4.08
CPLEX[2]        2.43   3.25   7.93   12.2   25.51  38.57  90.37
C-ILP[26]       4.85   6.65   6.65   13.16  18.65  18.53  33.63
k-ILP[26]       9.38   9.2    15.6   22     30.43  37.3   61.03
HybrVNS*[26]    0.4    0.5    0.6    0.8    1.2    1.4    1.7
ParalCH         0.24   0.27   0.31   0.34   0.36   0.42   0.48
Table 2 shows our results in the last row, so comparison can be made with the results of the other methods. The names of all the methods are shown on the left, and the top row gives the names of the instance groups in Set 1. Each cell in the remaining table is the average time used to complete an instance. All methods except ACO are inferior to ParalCH; ACO actually needs less time. Additionally, the three methods marked by *, which generate an initial solution randomly and improve it subsequently, can guarantee neither repetition of the same solution nor a zero violation solution each time. ACO is nearly as good as our approach, as no big differences exist.
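The static and dynamic option utilization rates behind ParalCH(SSU) and ParalCH(DSU) can be sketched as follows. The definitions use the usual form from the CS literature, rate(o) = n_o·q_o/(p_o·N), with the dynamic variant recomputed over the cars still unscheduled; the function names are illustrative, not the authors' implementation.

```python
def utilization_rate(n_required, p, q, n_cars):
    """Demand n_required for an option with p-out-of-q capacity among n_cars cars."""
    return (n_required * q) / (p * n_cars)

def static_rates(demand, options, ratios):
    """Rates fixed once from the whole instance (SSU ingredient)."""
    n_cars = sum(demand.values())
    return {o: utilization_rate(sum(n for t, n in demand.items() if o in options[t]),
                                p, q, n_cars)
            for o, (p, q) in ratios.items()}

def dynamic_rates(remaining, options, ratios):
    """Rates recomputed from the unscheduled cars only (DSU ingredient)."""
    n_left = sum(remaining.values())
    return {o: utilization_rate(sum(n for t, n in remaining.items() if o in options[t]),
                                p, q, n_left)
            for o, (p, q) in ratios.items()}

# 10 cars, option 'a' on 4 of them, capacity 1 out of 2: static rate 0.8.
demand = {"A": 4, "B": 6}
options = {"A": {"a"}, "B": set()}
ratios = {"a": (1, 2)}
print(static_rates(demand, options, ratios))
# After scheduling three B cars, the dynamic rate rises: 4*2/(1*7) ≈ 1.14.
print(dynamic_rates({"A": 4, "B": 3}, options, ratios))
```

A rate above 1 signals that the remaining cars can no longer be sequenced without violating that option's constraint, which is why the dynamic variant reacts to earlier placement decisions while the static one does not.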
Set 2, a classical set of instances, is appropriate for evaluating the performance of our approach, because its higher option utilization rates make it difficult to solve.
CHINESE JOURNAL OF MECHANICAL ENGINEERING
Vol. 30,aNo. 2,a2017
Based on the parallel scheme as in Procedure 2, ParalCH is
applied to solve Set 2. Table 3 presents the difference
between ParalCH and the methods[2, 13, 17, 22, 23, 26], which are
the most representative methods used as references for
comparison.
Table 3.  Objective value and time cost (seconds) of different methods for Set 2

Method                10-93  16-81  19-71  21-90  26-82  36-92  4-72   41-66  6-76
Best                  3      0      2      2      0      2      0      0      6
LNS*[22]        Val   3      0      2      2      0      2      0      0      6
                Time  228    296    51     38     93     241    412    23     12
ACO*[13]        Val   4.2    0.1    2.1    2.6    0      2.3    0      0      6
                Time  10.10  4.23   9.26   8.73   0.74   8.50   0.5    0.04   7.94
ACO*+LS[13]     Val   3.8    0      2      2      0      2      0      0      6
                Time  13.79  1.75   13.04  12.6   0.26   12.66  0.27   0.03   11.76
SB&B[2]         Val   5      0      3      2      0      2      0      0      6
                Time  >600   8      >600   >600   6      >600   7      5      >600
CPLEX[2]        Val   5      2      4      3      0      2      0      0      6
                Time  >600   >600   >600   >600   125    >600   27     13     >600
k-ILP[26]       Val   11     0      4      2      0      7      0      0      6
                Time  600    280    600    600    23     600    13     36     600
ILPGAncpx[23]   Val   3      0      2      2      0      2      0      0      6
                Time  585    29     530    >600   36     406    11     1      >600
HybrVNS*[26]    Val   8      0      3.2    3.0    0      3.7    0      0.2    6
                Time  >600   >600   >600   >600   >600   >600   >600   >600   >600
IBS[17]         Val   3      0      2      2      0      2      0      0      6
                Time  >600   1.64   >600   >600   0.58   >600   0.27   0.94   >600
ParalCH(DSU)    Val   5      1      2      2      2      2      0      0      6
                Time  5.23   4.84   5.13   4.66   6.46   4.65   1.41   0.17   4.00
ParalCH(SSU)    Val   4      2      3      2      0      2      0      0      6
                Time  5.17   4.96   5.16   4.61   2.25   4.65   1.45   1.21   4.03

The column on the left shows the methods used; the last two rows are ours, using DSU and SSU respectively. The top rows show the instance names and the best-known objective values for Set 2. The remaining cells show the average objective value and time cost (seconds), either quoted from the literature or acquired in our experiments. 7 of the 9 instances reached the best-known values. Among them, 21-90, 36-92, 4-72, 41-66 and 6-76 reached the best-known values with both DSU and SSU. Moreover, 19-71 and 26-82 reached the best-known values using SSU and DSU respectively. Although 10-93 and 16-81 did not reach the best-known values, each was only one violation away from the best-known value. Additionally, the time spent on each instance in reaching the best-known value is relatively short, far less than that of the other methods except for ACO+LS.
The values we obtain are very close to the best in each run. Our approach is therefore competitive in its robustness and in the high quality of its solutions, obtained in only a few seconds. Regarding exact methods such as IBS, SB&B, CPLEX and k-ILP, which use complete search methods, they could not obtain more best-known values than ParalCH (DSU or SSU), except for 16-81. Their strength may lie in searching for zero violation solutions, but the time they need to acquire the best non-zero violation solutions exceeds 600 seconds. Among the heuristics, ACO+LS and ILPGAncpx compete well with ours; they outperform ours on 16-81 and 10-93, but the gap is small. Methods marked with * generate an initial solution randomly. Although LNS obtained all the best-known values, it required more time than ParalCH. ACO required a similar time to ours but could not guarantee the best values each time. HybrVNS has lower performance in both solution quality and time cost.
Set 3 consists of instances that are much more difficult to solve. This is because: (i) a larger number of cars are to be scheduled, (ii) the average option utilization rates exceed 0.9 or even equal 1, and (iii) most instances cannot be solved without violation. For each instance, the solution quality and the time cost of ParalCH are compared with ACO+LS[13], ILPGAncpx[23], SB&B[2] and IBS[17], whose results, broadly discussed in the recent literature, are available for comparison.
Table 4 presents the comparisons. The first two columns list the instance names and the best-known objective values. For each method, columns 3 to 14 show the average objective value and the average time. The best values are highlighted by an asterisk *. The last four columns show our results, using SSU (Sum of Static Utilization Rates) and DSU (Sum of Dynamic Utilization Rates) respectively.
Table 4.  Objective value and time cost (seconds) of different methods for Set 3

No.      Best  ACO+LS[13]     SB&B[2]      ILPGAncpx[23]  IBS[17]      ParalCH(SSU)  ParalCH(DSU)
               Val     Time   Val   Time   Val     Time   Val   Time   Val   Time    Val   Time
200_01   0     1.00    33.42  5     >600   0.00*   1147   1     >600   6     0.51    2     12.99
200_02   2     2.41    33.44  7     >600   2.20    1287   3     >600   2*    13.3    2*    13.05
200_03   3     6.04    34.45  14    >600   5.30    1841   8     >600   18    11.2    16    11.72
200_04   7     7.57    34.66  16    >600   7.00*   1344   8     >600   7*    13.2    7*    13.01
200_05   6     6.40    32.01  14    >600   6.00*   1092   8     >600   10    10.59   10    10.71
200_06   6     6.00*   32.52  8     >600   6.00*   1773   7     >600   8     11.94   8     12.06
200_07   0     0.00*   28.65  1     >600   0.00*   413    0*    0.58   3     11.39   0*    2.19
200_08   8     8.00*   31.10  11    >600   8.00*   1605   9     >600   11    8.65    10    8.44
200_09   10    10.00*  34.00  15    >600   10.00*  1568   10*   >600   16    11.13   16    10.97
200_10   19    19.09   32.68  27    >600   19.00*  1422   20    >600   28    7.58    28    7.57
300_01   0     2.15    65.29  7     >600   0.80    1326   0*    6.05   0*    1.81    0*    1.82
300_02   12    12.02   66.38  24    >600   12.00*  1461   12*   >600   14    21.46   13    21.16
300_03   13    13.06   67.32  25    >600   13.00*  1555   14    >600   14    23.42   17    24.68
300_04   7     8.16    65.99  21    >600   7.50    1621   10    >600   18    19.41   16    20.15
300_05   27    32.28   66.09  52    >600   33.00   2602   32    >600   54    11.78   57    11.88
300_06   2     4.38    65.92  9     >600   2.60    1598   6     >600   15    18.59   13    20.48
300_07   0     0.59    62.64  14    >600   0.00*   426    0*    5.34   3     21.62   0*    1.78
300_08   8     8.00*   63.21  17    >600   8.00*   1680   9     >600   10    18.60   11    19.38
300_09   7     7.46    63.47  16    >600   7.20    1450   7*    >600   10    15.44   11    11.52
300_10   21    22.60   63.91  56    >600   22.4    2339   25    >600   38    12.15   48    12.44
400_01   1     2.52    96.84  9     >600   1.90    1782   3     >600   1*    33.77   1*    33.44
400_02   15    17.37   96.70  35    >600   18.30   2081   19    >600   37    21.82   36    21.29
400_03   9     9.91    94.87  19    >600   9.90    2339   12    >600   20    23.06   20    22.94
400_04   19    19.01   98.98  29    >600   19.00*  2355   20    >600   24    28.37   26    21.33
400_05   0     0.01    76.85  23    >600   0.00*   20     0*    0.38   4     21.64   0*    1.12
400_06   0     0.33    87.56  2     >600   0.00*   605    0*    7.28   1     27.66   1     28.17
400_07   4     5.44    93.29  15    >600   4.70    1480   4     >600   21    23.64   7     24.91
400_08   4     5.30    93.39  28    >600   4.20    1203   10    >600   15    21.38   10    20.13
400_09   5     7.63    97.75  29    >600   7.20    2137   11    >600   28    27.29   16    28.89
400_10   0     0.95    93.89  23    >600   0.00*   1891   0*    3.13   0*    8.15    0*    4.83

In our results, ParalCH(DSU) yields better performance than ParalCH(SSU), which agrees with the findings of GOTTLIEB[18]. The former acquired 8 of the best-known values, as against 5 by the latter. Additionally, 14 instances got better values using the former than the latter, whereas 6 instances got better values using the latter rather than the former. The operation time of the two variants shows no significant difference. Hence, SSU may be used to supplement DSU: in some instances the heuristic using SSU can get better solutions than the heuristic using DSU, as shown by instance 16-81 of Set 2.
ParalCH(DSU) outperforms the other methods on eight instances, not only in the success rate but also in the time cost. In Table 4, eight instances (200-02/04/07, 300-01/07, 400-01/05/10) got their best-known results while saving at least 80% of the time. Compared to our method, SB&B can be omitted, because it neither finds any of the best-known results nor saves processing time. Apart from these eight instances, ACO+LS outperforms ParalCH(DSU) in the average solution quality; however, because it uses iterative improvement, it requires more time than ParalCH(DSU). For the method IBS, 9 instances got
the best-known values in each run, versus 8 with ParalCH(DSU), and 21 instances obtained better values than ParalCH(DSU). Though IBS can obtain better solutions than ParalCH(DSU), the gap is not large, see Fig. 2; the gap does, however, widen as the number of violations increases. ILPGAncpx obtained 16 best-known results in each run. However, although instances such as 200-02/300-01/400-01 obtain the best-known solution easily with ParalCH(DSU), it is difficult for ILPGAncpx to do so within 1000 seconds, because it converges too slowly. Therefore, once again, ParalCH(DSU) demonstrates its superiority in obtaining good solutions quickly and deterministically under the parallel framework.
Fig. 2.  Gap of the objective values between IBS and ParalCH
Following is a comparison between the improved objective function and the traditional one; ParalCH(DSU) was chosen for the comparison, so the two objective functions are compared using the same proposed heuristic. In addition to the difference in their economic meaning, the two objective functions frequently give different objective values for the same instance. However, the two values can coincide when there is no violation or a very low number of violations, as in Set 1 and Set 2. Therefore, four representative instances from Set 3 are used to show the relationship between the two objective function values, as illustrated in Table 5.
Table 5.  Comparison of two objective values

Instance   Values of instances (upper: traditional; lower: improved)
200-07     0    2
           0    2
200-01     11   16
           9    13
300-08     3    11   18
           2    10   11
200-03     4    11   4
           2    8    2
The first column shows the names of the four instances, and the remaining columns show, for each instance, the two objective values of its solutions: the upper values are the optimum values acquired with the traditional objective function, and the lower values are the equivalent values acquired with the improved objective function. The first instance is 200-07, for which the two functions give the same results. The second instance, 200-01, shows different values with the traditional function but only one optimal value with the new function. In contrast, the third instance, 300-08, shows different values with the improved objective function but only one optimal value with the conventional one. The last instance is 200-03, for which the optimal value under one objective function is not the optimal value under the other. As a whole, the improved objective function gives values lower than those of the traditional one. Also note that the gap becomes wider as the number of violations increases.
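This widening gap is exactly the double counting that the improved objective function removes: under the sliding-window formulation, one excess car is penalized by every overlapping window that covers it. A minimal stand-alone illustration (a simplified formulation, not the paper's code):

```python
def sliding_window_violations(has_option, p, q):
    """Sum of per-window excesses over all q-length windows.
    has_option[i] is 1 if the car at position i requires the option."""
    return sum(max(0, sum(has_option[i:i + q]) - p)
               for i in range(len(has_option) - q + 1))

# Three consecutive option-cars under a 1/3 ratio: at most two of the six
# positions can carry the option without violation, so one car is truly in
# excess, yet the overlapping windows report three violations.
print(sliding_window_violations([1, 1, 1, 0, 0, 0], p=1, q=3))  # → 3

# A feasible spacing of the same two allowed option-cars reports zero.
print(sliding_window_violations([1, 0, 0, 1, 0, 0], p=1, q=3))  # → 0
```

Counting violations per position and subtracting those already counted in the same subsequence, as the improved objective does, keeps the value proportional to the real workload instead of to the number of overlapping windows.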
Therefore, the proposed parallel construction heuristic is appropriate for later improvement, and as a reconstruction method, because of its high quality solutions, low time cost and robustness. In the tests on the 3 sets, ParalCH obtains 85 best-known solutions out of 109; the solutions are deterministic in each run, and only a few seconds are needed to obtain the best solutions.
6 Conclusions

(1) For the optimized version of the CS problem, the drawbacks of the “sliding windows” technique, which cause double and bias counting of violations in the traditional objective function, are overcome by an improved objective function. The proposed objective function counts violations of positions rather than of subsequences, and it dynamically subtracts the violations already counted in the same subsequence. It not only eliminates the flaws but also gives the violations a strong economic meaning: the number of utility workers needed to tackle the violations.
(2) High quality initial solutions are obtained by using the parallel construction heuristic combined with constraint propagation. Based on constraint propagation, three filtering rules intersecting with three greedy functions tackle the constraints effectively and assign an adequate car to the current decision variable when a violation is inevitable. Moreover, the parallel construction framework increases the robustness of the solution quality, as different types are purposely arranged in the first position to promote diverse searching. As the experiments revealed, 85 best-known results of 109 are acquired in the initial solutions in just a few seconds.
(3) The proposed heuristic can be used in later improvement of the solution, and the proposed objective function can act as a real expression in future research.

References
[1] PARRELLO B D, KABAT W C, WOS L. Job-shop scheduling using automated reasoning: A case study of the car-sequencing problem[J]. Journal of Automated Reasoning, 1986, 2(1): 1–42.
[2] FLIEDNER M, BOYSEN N. Solving the car sequencing problem via Branch & Bound[J]. European Journal of Operational Research, 2008, 191(3): 1023–1042.
[3] KIS T. On the complexity of the car sequencing problem[J]. Operations Research Letters, 2004, 32(4): 331–335.
[4] DREXL A, KIMMS A. Sequencing JIT mixed-model assembly lines under station-load and part-usage constraints[J]. Management Science, 2001, 47(3): 480–491.
[5] GENT I P, WALSH T. CSPLib: a benchmark library for constraints[C]//Principles and Practice of Constraint Programming-CP’99, Alexandria, VA, USA, October 11–14, 1999: 480–481.
[6] SOLNON C, CUNG V D, NGUYEN A, et al. The car sequencing problem: Overview of state-of-the-art methods and industrial case-study of the ROADEF’2005 challenge problem[J]. European Journal of Operational Research, 2008, 191(3): 912–927.
[7] DREXL A, KIMMS A, MATTHIEßEN L. Algorithms for the car sequencing and the level scheduling problem[J]. Journal of Scheduling, 2006, 9(2): 153–176.
[8] BOYSEN N, ZENKER M. A decomposition approach for the car resequencing problem with selectivity banks[J]. Computers & Operations Research, 2013, 40(1): 98–108.
[9] ZUFFEREY N. Tabu search approaches for two car sequencing problems with smoothing constraints[C]//Metaheuristics for Production Systems. Cham: Springer International Publishing, 2016: 167–190.
[10] TANG Q H, LI Z X, ZHANG L P, et al. Effective hybrid teaching-learning-based optimization algorithm for balancing two-sided assembly lines with multiple constraints[J]. Chinese Journal of Mechanical Engineering, 2015, 28(5): 1067–1079.
[11] TANG D B, DAI M. Energy-efficient approach to minimizing the energy consumption in an extended job-shop scheduling problem[J]. Chinese Journal of Mechanical Engineering, 2015, 28(5): 1048–1055.
[12] BRAILSFORD S C, POTTS C N, SMITH B M. Constraint satisfaction problems: Algorithms and applications[J]. European Journal of Operational Research, 1999, 119(3): 557–581.
[13] GRAVEL M, GAGNÉ C, PRICE W. Review and comparison of three methods for the solution of the car sequencing problem[J]. Journal of the Operational Research Society, 2005, 56(11): 1287–1295.
[14] RÉGIN J C, PUGET J F. A filtering algorithm for global sequencing constraints[C]//Principles and Practice of Constraint Programming-CP’97, Linz, Austria, October 29–November 1, 1997: 32–46.
[15] BUTARU M, HABBAS Z. Solving the car-sequencing problem as a non-binary CSP[C]//Principles and Practice of Constraint Programming-CP’2005, Sitges, Spain, October 1–5, 2005: 840–840.
[16] BAUTISTA J, PEREIRA J, ADENSO-DÍAZ B. A beam search approach for the optimization version of the car sequencing problem[J]. Annals of Operations Research, 2007, 159(1): 233–244.
[17] GOLLE U, ROTHLAUF F, BOYSEN N. Iterative beam search for car sequencing[J]. Annals of Operations Research, 2015, 226(1): 239–254.
[18] GOTTLIEB J, PUCHTA M, SOLNON C. A study of greedy, local search, and ant colony optimization approaches for car sequencing problems[C]//Applications of Evolutionary Computing: EvoWorkshops 2003, Essex, UK, April 14–16, 2003: 246–257.
[19] GAGNÉ C, GRAVEL M, PRICE W L. Solving real car sequencing problems with ant colony optimization[J]. European Journal of Operational Research, 2006, 174(3): 1427–1448.
[20] SOLNON C. Combining two pheromone structures for solving the car sequencing problem with Ant Colony Optimization[J]. European Journal of Operational Research, 2008, 191(3): 1043–1055.
[21] BAUTISTA J, PEREIRA J, ADENSO-DÍAZ B. A GRASP approach for the extended car sequencing problem[J]. Journal of Scheduling, 2007, 11(1): 3–16.
[22] PERRON L, SHAW P. Combining forces to solve the car sequencing problem[C]//Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems: CPAIOR 2004, Nice, France, April 20–22, 2004: 225–239.
[23] ZINFLOU A, GAGNÉ C, GRAVEL M. Genetic algorithm with hybrid integer linear programming crossover operators for the car-sequencing problem[J]. INFOR: Information Systems and Operational Research, 2010, 48(1): 23–37.
[24] ESTELLON B, GARDI F, NOUIOUA K. Large neighborhood improvements for solving car sequencing problems[J]. RAIRO-Operations Research, 2006, 40(4): 355–379.
[25] ESTELLON B, GARDI F, NOUIOUA K. Two local search approaches for solving real-life car sequencing problems[J]. European Journal of Operational Research, 2008, 191(3): 928–944.
[26] PRANDTSTETTER M, RAIDL G R. An integer linear programming approach and a hybrid variable neighborhood search for the car sequencing problem[J]. European Journal of Operational Research, 2008, 191(3): 1004–1022.
[27] RIBEIRO C C, ALOISE D, NORONHA T F, et al. An efficient implementation of a VNS/ILS heuristic for a real-life car sequencing problem[J]. European Journal of Operational Research, 2008, 191(3): 596–611.
[28] THIRUVADY D R, MEYER B, ERNST A. Car sequencing with constraint-based ACO[C]//Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, GECCO’11, Dublin, Ireland, July 12–16, 2011: 163–170.
[29] THIRUVADY D, ERNST A, WALLACE M. A Lagrangian-ACO matheuristic for car sequencing[J]. EURO Journal on Computational Optimization, 2014, 2(4): 279–296.
[30] ARTIGUES C, HEBRARD E, MAYER-EICHBERGER V, et al. SAT and hybrid models of the car sequencing problem[C]//Integration of AI and OR Techniques in Constraint Programming: 11th International Conference, CPAIOR 2014, Cork, Ireland, May 19–23, 2014: 268–283.
[31] SIALA M, HEBRARD E, HUGUET M J. A study of constraint programming heuristics for the car-sequencing problem[J]. Engineering Applications of Artificial Intelligence, 2015, 38: 34–44.
[32] BOYSEN N, FLIEDNER M. Comments on “Solving real car sequencing problems with ant colony optimization”[J]. European Journal of Operational Research, 2007, 182(1): 466–468.
Biographical notes
ZHANG Xiangyang, born in 1974, is currently an assistant professor in the Faculty of Maritime and Transportation, Ningbo University, and is also a PhD candidate at the State Key Laboratory of Digital Manufacturing Equipment & Technology, Huazhong University of Science & Technology, China. He received his master's degree from Huazhong University of Science & Technology in 2004. His research interests include intelligent algorithms and logistics operation.
E-mail: [email protected]
GAO Liang, born in 1974, is a professor and a PhD supervisor at the State Key Laboratory of Digital Manufacturing Equipment & Technology, Huazhong University of Science & Technology, China. His research interests include operations research and optimization, scheduling, etc. He has published more than eighty journal papers, in journals including Computers & Operations Research, Expert Systems with Applications, and Computers & Industrial Engineering.
E-mail: [email protected]
WEN Long, born in 1988, is currently a postdoctoral researcher at the State Key Laboratory of Digital Manufacturing Equipment & Technology, Huazhong University of Science & Technology, China. He received his bachelor's and doctoral degrees from Huazhong University of Science & Technology in 2010 and 2014, respectively. His research interests include machine learning and intelligent optimization.
E-mail: [email protected]
HUANG, Zhaodong, born in 1984, currently is an assistant
professor in the Faculty of Maritime and Transportation, Ningbo
University. He received his Ph.D degree from New Jersey Institute
of Technology in 2012. His research interests include transit
network optimization and intelligence transportation system.
E-mail: [email protected]