A Hybrid ACO Algorithm for the Next Release Problem

He Jiang
School of Software, Dalian University of Technology, Dalian 116621, China
[email protected]

Jingyuan Zhang
School of Software, Dalian University of Technology, Dalian 116621, China
[email protected]

Jifeng Xuan
School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
[email protected]

Zhilei Ren
School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
[email protected]

Yan Hu
School of Software, Dalian University of Technology, Dalian 116621, China
[email protected]
Abstract—In this paper, we propose a Hybrid Ant Colony Optimization algorithm (HACO) for the Next Release Problem (NRP). NRP, an NP-hard problem in requirements engineering, is to balance customer requests, resource constraints, and requirement dependencies by requirement selection. Inspired by the success of Ant Colony Optimization (ACO) algorithms in solving NP-hard problems, we design HACO to approximately solve NRP. As in traditional ACO algorithms, multiple artificial ants are employed to construct new solutions. During the solution construction phase, both pheromone trails and neighborhood information are taken into account to determine the choice of every ant. In addition, a local search (first found hill climbing) is incorporated into HACO to improve solution quality. Extensive experiments on typical NRP test instances show that HACO outperforms the existing algorithms (GRASP and simulated annealing) in terms of both solution quality and running time.
Keywords—next release problem (NRP); ant colony optimization; local search; requirements engineering

I. INTRODUCTION

As a well-known problem arising in software requirements engineering, the Next Release Problem (NRP) seeks to maximize the customer benefits from a set of interdependent requirements, under the constraint of a predefined budget bound. NRP was first formulated as an optimization problem in software evolution [1] by Bagnall et al. in 2001 [2]. To optimize software requirements, NRP and its variants have attracted much attention from the research community in recent years [3-5], such as Component Selection and Prioritization [6], Fairness Analysis [7], the Multi-Objective Next Release Problem (MONRP) [7, 8], and Release Planning [1, 9].

Since NRP has been proved NP-hard, no exact algorithm can find optimal solutions in polynomial time unless P=NP [10]. Therefore, many heuristic algorithms have been proposed for NRP to obtain near-optimal solutions in reasonable time, including greedy algorithms, hill climbing, simulated annealing (SA) [2, 6], and genetic algorithms [3, 8]. Among these algorithms, LMSA (a Simulated Annealing algorithm by Lundy and Mees) works efficiently on some problem instances [2].

Ant Colony Optimization (ACO) has been one of the new technologies for approximately solving NP-hard problems since 1991 [11]. With a colony of artificial ants, ACO can achieve good solutions for numerous hard discrete optimization problems. To improve the performance of ACO, hybrid ACO algorithms (i.e., ACO with local search) have also been proposed for classical optimization problems, including the Traveling Salesman Problem (TSP) [11], the Quadratic Assignment Problem (QAP) [12], and the 3-Dimensional Assignment Problem (AP3) [13].

Motivated by the great success of ACO in tackling NP-hard problems, we propose a new algorithm named Hybrid ACO (HACO) for solving large NRP instances in this paper. HACO updates the solutions under the guidance of artificial ants and pheromone trails. At every iteration, HACO constructs solutions and updates the pheromones for the next iteration. In contrast to traditional ACO algorithms, a first found hill climbing (FHC) operator is incorporated into HACO to improve solution quality. This operator updates the solution as soon as the first better solution is found. Since it is a challenging and time-consuming task to tune appropriate parameters for ACO algorithms, we employ the CALIBRA procedure (an automatic tuning procedure by Adenso-Diaz and Laguna in 2006) [14] to effectively determine those interrelated parameter settings. Experimental results on typical NRP test instances show that our new algorithm outperforms the existing algorithms in both solution quality and running time.

This work is partially supported by the Natural Science Foundation of China under Grant No. 60805024 and the National Research Foundation for the Doctoral Program of Higher Education of China under Grant No. 20070141020.

The remainder of this paper is organized as follows. Section II gives the related definitions of NRP. Section III introduces ACO and HACO for NRP. Section IV describes the experimental results on NRP instances. Section V discusses some related work and Section VI concludes this paper.

II. PRELIMINARIES

This section gives the definition of NRP and some notations used in the remainder of this paper.

In a software system, let R={r1, r2, …, rn} denote all the possible software requirements. Each requirement ri ∈ R is associated with a cost ci ∈ Z+. A directed acyclic graph G=(R,E) denotes the dependencies among requirements, where (r, r') ∈ E indicates that r must be satisfied before r'. The set parent(r) contains all requirements which must be satisfied before r.

Let S={1, 2, …, m} denote all the customers related to the requirements. Each customer i is satisfied if and only if a set of requirements Ri ⊆ R is satisfied. A profit wi ∈ Z+ denotes the priority of customer i. To satisfy a customer, both the requirements of Ri and those that must be satisfied before them, parent(Ri) = ∪_{r∈Ri} parent(r), must be satisfied. Let Ri' = Ri ∪ parent(Ri) denote all the requirements which have to be developed to satisfy customer i, and let cost(Ri') = Σ_{rj∈Ri'} cj be the cost of satisfying customer i. The cost of a subset S' ⊆ S is defined as cost(S') = cost(∪_{i∈S'} Ri') and the overall profit obtained is defined as ω(S') = Σ_{i∈S'} wi.

Based on the above definitions, the goal of NRP is to find a subset S' ⊆ S that maximizes ω(S'), subject to cost(S') ≤ B, where B is the predefined development budget bound [2]. Given an NRP instance (denoted as NRP(S,R,W)), a feasible solution is a subset S' ⊆ S with cost(S') ≤ B.

III. A HYBRID ACO ALGORITHM

In this section, we present the HACO algorithm for solving NRP in detail. In contrast to traditional ACO algorithms, a local search operator called FHC is incorporated into HACO to improve the quality of solutions. We present the framework of HACO in Part A and the local search operator in Part B, respectively.

A. HACO

Ant Colony Optimization (ACO) is a meta-heuristic based on the communication of a colony of simple agents (artificial ants), which are directed by artificial pheromone trails. The ants use pheromone trails and heuristic information to probabilistically construct solutions, and they update these pheromone trails during the algorithm's execution to reflect their search experience [11, 12].

In this subsection, we propose the HACO algorithm for NRP. The main framework of HACO is presented in Algorithm 1. After the initialization, a series of iterations is conducted to find solutions. At every iteration, HACO employs the ants to construct solutions by a probabilistic action choice rule and incorporates a local search operator L to further improve the constructed solutions (see Step (2.1)). Then, HACO updates the pheromone trails for the next iteration, including pheromone evaporation (see Step (2.2)) and deposition (see Step (2.3)). During the iterations, whenever a better solution is found, the best solution is updated. Finally, the best solution is returned.

Algorithm 1: HACO
Input: NRP(S,R,W), local search operator L, iteration times t, ant number h
Output: Solution S*
Begin
  (1) S* = ∅, ω(S*) = 0;
  (2) for i=1 to t do
    (2.1) for k=1 to h do
            // solution construction phase
            ant k constructs solution Sk;
            // local search phase
            call L to optimize Sk;
          // pheromone update phase
    (2.2) evaporate pheromone on all customers, by (2);
    (2.3) for k=1 to h do
            deposit pheromone on customers in Sk, by (4);
            if ω(S*) < ω(Sk) then S* = Sk;
  (3) return S*;
End

More details of HACO are discussed as follows:

1) Pheromone trail and heuristic information: The pheromone trail τi for NRP refers to the desirability of adding customer i to the current partial solution. We define τi = θwi, where θ is a parameter associated with the customer profits wi which ensures that the initial pheromone values vary between 0 and 1. We also define the heuristic information ηi = wi/cost(Ri'). Obviously, the higher the profit and the lower the requirements cost of a customer, the higher ηi is.

2) Solution construction: Initially, an ant's solution contains no customers. At each construction step, ant k applies a probabilistic action choice rule (the random proportional rule [11]) to decide which customer to add next. In particular, the probability with which ant k chooses customer i is given by:

  pik = [τi]^α [ηi]^β / Σ_{l∈Nk} [τl]^α [ηl]^β , if i ∈ Nk,    (1)

where α and β are two parameters which determine the relative influence of the pheromone trail and the heuristic information, and Nk is the set of customers that ant k has not chosen yet.

The pseudo-code for the ant solution construction is given in Algorithm 2. This procedure works as follows. Firstly, there is no customer in ant k's solution (see Step (1)). Secondly, the next customer is chosen probabilistically by the roulette wheel selection procedure of evolutionary computation, proposed by Goldberg in 1989 [15]. The roulette wheel is divided into slices proportional to the weights of the customers that ant k has not chosen yet (see Step (2.3.2)). Finally, the wheel is rotated and the next customer i chosen by ant k is the one the marker points to (see Step (2.3.3)).
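As an illustration, the random proportional rule (1) combined with roulette wheel selection can be sketched in Python as follows. The instance data are invented for illustration, cost[i] stands for the closure cost cost(Ri'), and, as a simplification of the termination test in Algorithm 2, this sketch only considers customers that still fit the budget.

```python
# Sketch of the random proportional rule (1) with roulette wheel
# selection. Instance data below are illustrative, not from the paper.
import random

def construct_solution(profit, cost, tau, budget, alpha=1.1, beta=1.5):
    """Probabilistic construction: repeatedly pick a customer i with
    probability proportional to tau[i]^alpha * eta[i]^beta (rule (1))."""
    n = len(profit)
    eta = [profit[i] / cost[i] for i in range(n)]   # heuristic information
    chosen, total_cost = set(), 0
    candidates = set(range(n))
    while candidates:
        # only customers that still fit the budget (simplification)
        feasible = [i for i in candidates if total_cost + cost[i] <= budget]
        if not feasible:
            break
        weights = [(tau[i] ** alpha) * (eta[i] ** beta) for i in feasible]
        # roulette wheel: cumulative weights against a random marker
        r = random.uniform(0, sum(weights))
        acc = 0.0
        for i, w in zip(feasible, weights):
            acc += w
            if acc >= r:
                chosen.add(i)
                total_cost += cost[i]
                candidates.remove(i)
                break
    return chosen, total_cost

random.seed(0)
profit = [10, 25, 7, 18]
cost = [4, 9, 3, 6]                  # stands for cost(Ri'), illustrative
tau = [0.10, 0.25, 0.07, 0.18]       # tau_i = theta * w_i with theta = 0.01
sol, c = construct_solution(profit, cost, tau, budget=15)
```

Customers with high pheromone and high profit-to-cost ratio occupy larger slices of the wheel, so they are chosen more often, while low-quality customers still keep a nonzero chance of selection.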
Algorithm 2: SolutionConstruction
Input: NRP(S,R,W), budget B, ant k
Output: Solution S*
Begin
  (1) S* = ∅, ω(S*) = 0;
  (2) while (cost(S*) ≤ B)
    (2.1) sumProb = 0;
    (2.2) let r be a value randomly generated in [0, 1];
    (2.3) for each customer i do
      (2.3.1) if (i has been added to S*)
                pik = 0.0;
      (2.3.2) else
                pik is calculated by (1);
                sumProb += pik;
      (2.3.3) if (sumProb ≥ r)
                add customer i to S*, break;
  (3) return S*;
End

3) Update of pheromone trails: The pheromone trails are updated after all the ants have constructed their solutions. Firstly, we decrease the pheromone value of every customer by a constant factor, and then add pheromone to those customers that the ants have chosen in their solutions. Pheromone evaporation is implemented by

  τi ← (1 − ρ)τi,    (2)

where 0 < ρ ≤ 1 is the pheromone evaporation rate, which enables the algorithm to forget bad decisions previously taken and gives it more opportunities to choose other customers. After evaporation, all ants deposit pheromone on the customers which they have chosen in their solutions:

  τi ← τi + Σ_{k=1}^{h} Δτik,    (3)

where Δτik is the amount of pheromone that ant k deposits on its chosen customers. It is defined as follows:

  Δτik = γWk if i is selected, and Δτik = 0 otherwise,    (4)

where Wk, the quality of the solution Sk built by ant k, is computed as the sum of the customer profits in Sk. The parameter γ is used to tune Δτik.

B. FHC

To enhance the performance of HACO, we incorporate a local search operator L named FHC into HACO. In this section, we introduce the framework of FHC (see Algorithm 3). The hill climbing algorithm is a classic local search technique, which finds locally optimal solutions in the solution neighborhood. FHC always updates the solution as soon as the first better solution is found.

Algorithm 3: FHC
Input: NRP(S,R,W), budget B, iteration times t
Output: Solution S*
Begin
  (1) S* = ∅, ω(S*) = 0;
  (2) for i=1 to t do
    (2.1) let S0 be a random feasible solution;
    (2.2) flag = true;
    (2.3) while (flag = true) do
      (2.3.1) randomly choose j ∈ S \ S0;
      (2.3.2) let S1 = S0 ∪ {j};
      (2.3.3) if cost(S1) < B then S0 = S1;  // S1 is feasible
              else  // swap a customer in S0 with j
        (2.3.3.1) flag = false;
        (2.3.3.2) S1 = S0;
        (2.3.3.3) for every customer l ∈ S0 do
                    if cost(S0 ∪ {j} \ {l}) < B and ω(S0 ∪ {j} \ {l}) > ω(S1)
                    then S1 = S0 ∪ {j} \ {l}, flag = true;
        (2.3.3.4) S0 = S1;
    (2.4) if ω(S0) > ω(S*) then S* = S0;
  (3) return S*;
End

FHC mainly consists of a series of iterations. At every iteration, a feasible solution is randomly generated as the current solution (see Step (2.1)). After that, the current solution S0 is further improved by local search as follows (see Step (2.3)). Firstly, an unselected customer j ∈ S \ S0 is chosen arbitrarily (see Step (2.3.1)). A new solution is obtained by adding customer j to S0 (see Step (2.3.2)). If this yields a feasible solution, the current solution S0 is updated with the newly generated solution and the local search continues to improve it (see Step (2.3.3)). Otherwise, we try to replace a customer l ∈ S0 with j such that the resulting solution is feasible and its profit is maximized. If this succeeds, the current solution is updated with the resulting solution and the local search continues to improve it (see Steps (2.3.3.1)-(2.3.3.4)). After every iteration, whenever a better solution is obtained, the best solution is updated (see Step (2.4)). Finally, the best solution is returned (see Step (3)).
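A minimal Python sketch of the add-or-swap loop at the core of FHC (Steps (2.3.1)-(2.3.3.4) of Algorithm 3) may make the operator concrete. It assumes each customer's closure cost cost(Ri') is precomputed in cost[i]; the instance data are illustrative.

```python
# Sketch of the FHC inner loop: add a random unselected customer while
# the budget allows, otherwise look for a profit-improving swap, and
# stop when neither move succeeds. Data are illustrative.
import random

def fhc_improve(profit, cost, budget, start):
    s0 = set(start)
    w = lambda s: sum(profit[i] for i in s)   # solution profit
    c = lambda s: sum(cost[i] for i in s)     # solution cost (simplified)
    flag = True
    while flag:
        outside = [i for i in range(len(profit)) if i not in s0]
        if not outside:
            break
        j = random.choice(outside)            # Step (2.3.1)
        if c(s0 | {j}) <= budget:             # adding j stays feasible
            s0 = s0 | {j}
            continue
        flag = False                          # stop unless a swap improves s0
        s1 = set(s0)
        for l in s0:                          # best feasible swap l -> j
            cand = (s0 - {l}) | {j}
            if c(cand) <= budget and w(cand) > w(s1):
                s1, flag = cand, True
        s0 = s1
    return s0

random.seed(1)
profit = [10, 25, 7, 18, 12]
cost = [4, 9, 3, 6, 5]
s = fhc_improve(profit, cost, budget=14, start={0, 2})
```

Every accepted move strictly increases the solution profit, so the loop terminates, and the returned solution is never worse than the starting one.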
TABLE I. THE DETAILS OF INSTANCE GENERATION

|                        | NRP-1        | NRP-2                 | NRP-3        | NRP-4                 | NRP-5       |
| Requirements per level | 20/40/80     | 20/40/80/160/320      | 250/500/750  | 250/500/750/1000/750  | 500/500/500 |
| Cost                   | 1~5/2~8/5~10 | 1~5/2~7/3~9/4~10/5~15 | 1~5/2~8/5~10 | 1~5/2~7/3~9/4~10/5~15 | 1~3/2/3~5   |
| Max. dependencies      | 8/2/0        | 8/6/4/2/0             | 8/2/0        | 8/6/4/2/0             | 4/4/0       |
| Customers              | 100          | 500                   | 500          | 750                   | 1000        |
| Requests per customer  | 1~5          | 1~5                   | 1~5          | 1~5                   | 1           |
| Profit                 | 1~30         | 1~30                  | 1~30         | 1~30                  | 1~30        |
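As an illustration, the multi-level generation scheme summarized in Table I can be sketched in Python as follows, using the NRP-1 parameters. The direction of the dependencies (toward the next level) and the sampling details are our assumptions, not fully specified in Table I.

```python
# Illustrative sketch of the Table I generation scheme for NRP-1:
# three requirement levels with random costs and random dependencies
# on the next level, plus customers holding random requests and profits.
# Dependency direction and sampling details are assumptions.
import random

def generate_nrp1(seed=0):
    rng = random.Random(seed)
    levels = [20, 40, 80]                  # requirements per level
    cost_ranges = [(1, 5), (2, 8), (5, 10)]
    max_deps = [8, 2, 0]                   # max dependencies per requirement
    requirements = []                      # list of (cost, dependencies)
    offset = 0
    for lvl, (n, (lo, hi), d) in enumerate(zip(levels, cost_ranges, max_deps)):
        next_offset = offset + n
        for _ in range(n):
            deps = []
            if d > 0:                      # dependencies drawn from next level
                k = rng.randint(0, d)
                deps = rng.sample(
                    range(next_offset, next_offset + levels[lvl + 1]), k)
            requirements.append((rng.randint(lo, hi), deps))
        offset = next_offset
    customers = [(rng.randint(1, 30),      # profit in 1~30
                  rng.sample(range(len(requirements)),
                             rng.randint(1, 5)))   # 1~5 requested requirements
                 for _ in range(100)]      # 100 customers
    return requirements, customers

reqs, custs = generate_nrp1()
```

The other four instance families differ only in the parameter lists (levels, cost ranges, dependency bounds, and customer counts).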
IV. EXPERIMENTAL RESULTS

In this section, we describe the experimental results to evaluate the performance of HACO against existing algorithms.

A. Instance generation

Since customer requirements are usually private data of a company, no public NRP instance is available. We generate NRP test instances by following the methods in the classic paper on NRP [2]. These instances consist of five randomly generated problems, each of which contains multi-level requirements. Each requirement in a level is subject to constraints on number, dependency, and cost. For each problem, the predefined development budget bound is defined as 30%, 50%, and 70% of the total cost of requirements. Since the ranges of customer profits have not been specified before, we assume each customer profit is selected randomly from 1 to 30. Table I gives the details of instance generation. The rows of the table show the instance names, the number of requirements in each level, the cost of requirements, the maximum number of requirement dependencies, the number of customers, the requests of each customer, and the customer profits.

B. Parameters for HACO

The performance of ACO algorithms usually depends on the configuration of five parameters. In this section, we use the CALIBRA procedure [14], proposed by Adenso-Diaz and Laguna in 2006, to determine interrelated HACO parameter settings that improve on the algorithm defaults.

CALIBRA is an automated tool for finding performance-optimizing parameter settings, which liberates algorithm designers from the tedious task of manually searching the parameter space. It uses Taguchi's fractional factorial experimental designs coupled with local search to finely tune algorithm parameters with a wide range of possible values. The benefit of using CALIBRA is evident in situations where the algorithm being fine-tuned has parameters whose values have a significant impact on performance.

Using the current online version of CALIBRA¹, 5 instances are selected randomly for tuning, one from each group, to ensure proper representation of problem instances of all sizes. We pick lower and upper bounds for the parameters and discretize the intervals uniformly. For example, α in the range 0.1 to 4.0 with an accuracy of one decimal place is internally handled as an integer variable with a lower bound of 1 and an upper bound of 40. The other four parameters are handled in the same way. Table II shows the parameters, the original ranges considered before discretization, and the values found by CALIBRA. With these parameter values, HACO achieves good solutions on all problem instances.

TABLE II. HACO PARAMETER RANGES CONSIDERED FOR TUNING AND RESULTS OF RUNNING CALIBRA

| Parameter | Range          | Value by CALIBRA |
| α         | [0.1, 4.0]     | 1.1              |
| β         | [0.1, 6.0]     | 1.5              |
| γ         | [0.010, 0.050] | 0.020            |
| ρ         | [0.01, 0.99]   | 0.13             |
| h         | [1, 30]        | 10               |

¹ http://or.pubs.informs.org/org/Pages.collect.html

C. Results and analysis

In this section, we show the experimental results of the heuristic algorithms, including GRASP, SA [2], FHC, ACO, and HACO, where ACO refers to the HACO version using no local search. All the algorithms are implemented in C++ and run on a personal computer with Microsoft Windows XP, an Intel Core 2.53GHz CPU, and 4GB memory. For these heuristic algorithms, we set the parameters as follows. In both GRASP and FHC, each solution quality is the best found after 100 restarts. For GRASP, the construction phase is implemented according to [2] and the length of the Restricted Candidate List (RCL) is set to 10; the local search phase is similar to FHC. For the SA algorithm, the Lundy and Mees cooling schedule in [2] is applied with the temperature control parameter set to 10^-8. In ACO and HACO, we set the five parameters to 1.1, 1.5, 0.020, 0.13, and 10 as suggested by CALIBRA. For HACO, 10 iterations are executed and 10 ants apply local search at each iteration, for comparison with FHC within a similar running time.

Table III illustrates that ACO and HACO outperform the other algorithms in terms of solution quality and running time. The first column shows the instance names, and the following 5 columns show the 5 algorithms in the experiments. The 2 sub-columns of each algorithm present the solution quality and running time, respectively.
TABLE III. EXPERIMENTAL RESULTS OF FIVE ALGORITHMS ON NRP INSTANCES

| Instance  | GRASP sol. | time(s) | SA sol. | time(s) | FHC sol. | time(s) | ACO sol. | time(s) | HACO sol. | time(s) |
| NRP-1-0.3 | 810   | 1.406   | 827   | 14.453  | 817   | 0.968   | 829   | 1.703   | 835   | 0.734   |
| NRP-1-0.5 | 1364  | 1.281   | 1397  | 27.312  | 1353  | 1.281   | 1416  | 1.671   | 1418  | 0.89    |
| NRP-1-0.7 | 1945  | 1.359   | 1960  | 32.921  | 1939  | 0.812   | 1963  | 1.656   | 1968  | 0.859   |
| NRP-2-0.3 | 4084  | 144.328 | 3902  | 401.031 | 3802  | 334.062 | 4139  | 42.593  | 4262  | 187.218 |
| NRP-2-0.5 | 6616  | 333.187 | 6629  | 655.687 | 6352  | 401.64  | 6712  | 43.125  | 6786  | 337.484 |
| NRP-2-0.7 | 9284  | 351.437 | 9602  | 801.859 | 9436  | 349.937 | 9628  | 42.5    | 9732  | 274.593 |
| NRP-3-0.3 | 6061  | 140.25  | 5556  | 820.812 | 5423  | 678.39  | 6063  | 48.984  | 6067  | 397.531 |
| NRP-3-0.5 | 9131  | 300.14  | 8836  | 940.687 | 8647  | 666.921 | 9136  | 48.812  | 9196  | 464.312 |
| NRP-3-0.7 | 11723 | 102.078 | 11648 | 352.312 | 11572 | 191.25  | 11726 | 47.187  | 11728 | 79.812  |
| NRP-4-0.3 | 9161  | 1329.59 | 8498  | 5763.55 | 7935  | 4729.84 | 9165  | 137.156 | 9183  | 2810.8  |
| NRP-4-0.5 | 13731 | 1220.11 | 13279 | 8827.31 | 12822 | 12208.4 | 13751 | 136.968 | 13810 | 3059.88 |
| NRP-4-0.7 | 17999 | 273.312 | 17932 | 1809.71 | 17771 | 1433.06 | 18022 | 131.156 | 18029 | 424.234 |
| NRP-5-0.3 | 14940 | 465.14  | 14808 | 2535.55 | 13322 | 506.296 | 15592 | 158.515 | 15616 | 282.76  |
| NRP-5-0.5 | 20030 | 686.718 | 19991 | 2682.19 | 19608 | 1461.38 | 20784 | 156.531 | 20946 | 1036.47 |
| NRP-5-0.7 | 24508 | 138.203 | 24439 | 375.656 | 24410 | 245.781 | 24568 | 155.578 | 24570 | 163.312 |
On small instances such as NRP-1, the running time of ACO is nearly the same as that of the existing algorithms, but on large NRP instances, ACO obtains better solutions in much less computing time than the other algorithms. Moreover, HACO achieves better solutions than ACO and needs less running time than FHC to obtain a locally optimal solution.

Fig. 1 shows the solution quality comparison of the FHC, ACO, and HACO algorithms in a more intuitive way. ACO and HACO prove to be much more effective than FHC on all instances, especially on large ones like NRP-4 and NRP-5. HACO obtains the best solution quality among these 3 algorithms on all instances and can serve as a general approach for NRP.

Figure 1. Solution qualities obtained by FHC, ACO and HACO algorithms on all instances.

Fig. 2 illustrates the running time of the FHC, ACO, and HACO algorithms. On almost all the NRP instances, ACO uses the shortest computing time. The running time of FHC on larger instances like NRP-4 is much longer than that of HACO. This implies that when we incorporate the local search operator FHC into HACO, the high-quality solutions constructed by ACO can reduce the time of searching for local optima to some extent.

Figure 2. An illustration of running time by FHC, ACO and HACO algorithms on all instances.

V. RELATED WORK

Several heuristic approaches already existed for NRP. Among those algorithms, hill climbing (HC) and simulated annealing (SA) were implemented in [2, 6]. When using the greedy algorithm in [2] as the construction phase, GRASP finds better solutions than SA on some larger instances within only modest amounts of computing time (as shown in Table III in Section IV). Recently, an approximate backbone based multilevel algorithm (ABMA) was proposed in [17] to solve large scale NRP instances.

ACO is a kind of meta-heuristic approach, and Ant System (AS) [18] was the first ACO algorithm, using the Traveling Salesman Problem (TSP) as an example application. A number of direct extensions of AS were then proposed [19-21], differing in the way the pheromone is updated, as well as in the management details of the pheromone trails.
When using ACO algorithms, automatic tuning procedures can effectively find better parameter settings. Several approaches for automatic algorithm parameter tuning already existed. Boyan and Moore [22] proposed a tuning algorithm based on machine learning techniques, but there was no empirical analysis of this algorithm when applied to parameters with a wide range of possible values. Audet and Orban [23] introduced a mesh adaptive direct search that used surrogate models for algorithmic tuning; nevertheless, this approach has never been used for tuning local search algorithms. Adenso-Diaz and Laguna [14] designed an algorithm called CALIBRA for fine-tuning algorithms. It can narrow down the choices for parameter values when facing a wide range of possible values and search for the best possible parameter settings.

VI. CONCLUSION AND FUTURE WORK

In this paper, we propose HACO for NRP, which incorporates FHC into ACO to solve NRP. We compare the performance of standard ACO and HACO with existing heuristic algorithms on some typical NRP instances. The experimental results illustrate that ACO outperforms the existing algorithms in solution quality and running time. When coupled with FHC, HACO achieves even better solution quality than ACO.

In future work, we will investigate how to further improve the performance of HACO and extend HACO to the multi-objective NRP.

ACKNOWLEDGMENT

We would like to thank Prof. Belarmino Adenso-Diaz and Prof. Manuel Laguna for providing the latest online version of CALIBRA. Prof. Adenso-Diaz is with the Computer Engineering School and Industrial Engineering School, University of Oviedo, and Prof. Laguna is with the Leeds School of Business, University of Colorado.

REFERENCES

[1] K. H. Bennett and V. T. Rajlich, "Software maintenance and evolution: a roadmap," Proceedings of the 2000 Conference on The Future of Software Engineering (FOSE'00), pp. 73–87, June 2000.
[2] A. J. Bagnall, V. J. Rayward-Smith, and I. M. Whittley, "The next release problem," Information and Software Technology, vol. 43, pp. 883–890, November 2001.
[3] G. Ruhe and M. O. Saliu, "The art and science of software release planning," IEEE Software, vol. 22, pp. 47–53, November 2005.
[4] M. Harman and B. F. Jones, "Search based software engineering," Information and Software Technology, vol. 43, pp. 833–839, November 2001.
[5] M. Harman, "The current state and future of search based software engineering," Proceedings of the 2007 Conference on The Future of Software Engineering (FOSE'07), pp. 342–357, May 2007.
[6] P. Baker, M. Harman, K. Steinhofel, and A. Skaliotis, "Search based approaches to component selection and prioritization for the next release problem," Proceedings of the 22nd International Conference on Software Maintenance (ICSM'06), pp. 176–185, September 2006.
[7] A. Finkelstein, M. Harman, S. A. Mansouri, J. Ren, and Y. Zhang, "Fairness analysis in requirements assignment," Proceedings of the 16th IEEE International Requirements Engineering Conference (RE'08), pp. 115–124, September 2008.
[8] Y. Zhang, M. Harman, and S. A. Mansouri, "The multi-objective next release problem," Proceedings of the 9th Genetic and Evolutionary Computation Conference (GECCO'07), pp. 1129–1136, July 2007.
[9] D. Greer and G. Ruhe, "Software release planning: an evolutionary and iterative approach," Information and Software Technology, vol. 46, pp. 243–253, April 2004.
[10] M. R. Garey and D. S. Johnson, Computers and Intractability. W. H. Freeman and Company, San Francisco, 1979.
[11] M. Dorigo and T. Stützle, Ant Colony Optimization. The MIT Press, London, 2004.
[12] M. Dorigo and C. Blum, "Ant colony optimization theory: a survey," Theoretical Computer Science, vol. 344, pp. 243–278, November 2005.
[13] H. Jiang, Z. Ren, and Y. Hu, "A sampling based FANT for the 3-dimensional assignment problem," Proceedings of the IEEE Congress on Evolutionary Computation 2008 (CEC'08), pp. 4118–4124, June 2008.
[14] B. Adenso-Diaz and M. Laguna, "Fine-tuning of algorithms using fractional experimental design and local search," Operations Research, vol. 54, pp. 99–114, January 2006.
[15] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA, Addison-Wesley, 1989.
[16] R. E. Bixby, M. Fenelon, X. Z. Gu, and R. Rothberg, "MIP: theory and practice closing the gap," Proceedings of the 19th IFIP TC7 Conference on System Modelling and Optimization Methods, Theory and Applications, pp. 19–49, July 2000.
[17] H. Jiang, J. Xuan, and Z. Ren, "Approximate backbone based multilevel algorithm for next release problem," Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'10), in press.
[18] M. Dorigo, "Optimization, learning and natural algorithms," PhD thesis, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1992.
[19] M. Dorigo, V. Maniezzo, and A. Colorni, "The Ant System: optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics–Part B, vol. 26, pp. 29–41, January 1996.
[20] B. Bullnheimer, R. F. Hartl, and C. Strauss, "A new rank-based version of the Ant System: a computational study," Central European Journal for Operations Research and Economics, vol. 7, pp. 25–38, May 1999.
[21] T. Stützle and H. H. Hoos, "The MAX-MIN Ant System and local search for the traveling salesman problem," Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (ICEC'97), pp. 309–314, April 1997.
[22] J. Boyan and A. Moore, "Using prediction to improve combinatorial optimization search," Sixth International Workshop on Artificial Intelligence and Statistics, January 1997.
[23] C. Audet and D. Orban, "Finding optimal algorithmic parameters using the mesh adaptive direct search algorithm," SIAM Journal on Optimization, vol. 17, pp. 642–664, December 2006.