The ACO Metaheuristic
ACO 2.1 - 2.3
January 2008
C. Colson
Ants in Mythology
 Zeus turned the hardworking ants of the uninhabited island of
Aegina into the subjects of Aeacus. The people were called the
Myrmidons.
 Pēleús (son of Aeacus) was the father of Achilles
 Although Achilles was from Thessaly, he led his distant
kinsmen, the Myrmidons, in the Trojan War as told by Homer in
the Iliad.
 Of the Myrmidons: “Industry, thrift, endurance; they are eager
for gain, and never easily relinquish what they have won!”
Metaheuristic?
 Refers to a master strategy that modifies sub-heuristics.
 A general purpose heuristic designed to guide an underlying
problem-specific heuristic towards promising solutions.
 NP-hard problems are believed to be unsolvable in
polynomial time (the dividing line between tractable and intractable problems).
 As a result, heuristic methods are applied to get near-optimal results in a reasonable time.
 Metaheuristics have entered the picture to guide
“heuristic methods” applicable to widely varying
problems.
 ACO is a metaheuristic framework:
applicable to many different problems.
Combinatorial Optimization
 Involve finding values for discrete variables such that an optimal solution
with respect to a given objective function is found.
– Traveling salesman
– Supply-chain logistics planning
– Asset allocation
 Maximize or Minimize applications: Π = (S, f, Ω)
 Problem: general question to be answered
 Instance: a case of specified values for a problem
 S: set of candidate solutions
 f(s): objective function (s ∈ S)
 Ω: set of constraints
 Feasible solutions: Ŝ, the subset of S that satisfies Ω
 Globally optimal solution: s* ∈ Ŝ
 Minimization example: f(s*) ≤ f(s) for all s ∈ Ŝ
 Maximization example: f(s*) ≥ f(s) for all s ∈ Ŝ
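As a toy illustration of the (S, f, Ω) triple (not an example from the text), a minimal Python sketch of a four-job ordering instance; the names `objective` and `is_feasible` are hypothetical:

```python
from itertools import permutations

# Toy minimization instance of (S, f, Omega):
#   S     = all orderings of four jobs
#   f(s)  = a made-up weighted completion cost for an ordering
#   Omega = job "A" must be scheduled before job "D"
jobs = ["A", "B", "C", "D"]
cost = {"A": 3, "B": 1, "C": 4, "D": 2}

def objective(s):
    # f(s): position index (1-based) times the job's cost, summed
    return sum((i + 1) * cost[j] for i, j in enumerate(s))

def is_feasible(s):
    # Omega: "A" must appear before "D"
    return s.index("A") < s.index("D")

feasible = [s for s in permutations(jobs) if is_feasible(s)]  # S-hat
s_star = min(feasible, key=objective)   # f(s*) <= f(s) for all s in S-hat
print(s_star, objective(s_star))
```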
Computational Complexity
 Straightforward approach to a solution: exhaustive search!
– The number of possible solutions grows exponentially with instance size (n).
 Time complexity: the maximum time an algorithm needs to find a
solution to an instance of size n (aka worst-case time complexity).
 “Big-O” formal notation: O(·)
 A function g(n) is said to be O(h(n)) if two positive constants
A and n0 exist such that g(n) ≤ A·h(n) for all n ≥ n0. This is the
asymptotic upper bound.
 Polynomial time complexity: the running time is O(g(n)) where g(n) is a
polynomial. If k is the largest exponent of the polynomial
g(n), then the problem is said to be solvable in O(n^k) time.
 Exponential time complexity: the running time cannot be bounded by
a polynomial.
 Intractable = not polynomial time complex.
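To make the exhaustive-search point concrete, a small sketch (an assumed example, not from the text) comparing the (n−1)!/2 distinct tours a brute-force symmetric TSP search must examine against a polynomial bound such as n^3:

```python
from math import factorial

def tours_to_check(n):
    # distinct Hamiltonian circuits in a symmetric TSP on n cities:
    # fix the starting city, (n-1)! orderings remain, halved for direction
    return factorial(n - 1) // 2

for n in (5, 10, 15, 20):
    # exponential-style growth of the search space vs. the polynomial n^3
    print(n, tours_to_check(n), n ** 3)
```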
NP-Completeness
(Bottom of page 28)
 P-class: problems for which an algorithm exists that outputs the correct
answer (“yes” or “no”) in poly-time.
 NP-class (stands for Non-deterministic Polynomial): problems for which every
“yes” instance can be verified as such in poly-time.
 P is a subset of NP
[Figure: P problems and NP-complete problems lie inside the NP problems; NP-hard problems extend beyond NP; whether P = NP (and hence P = NP-complete) is open]
 Poly-time reductions: the transformation of one problem into
another one by a poly-time algorithm.
 Key point: if the resultant problem is solvable in poly-time,
then the original problem is likewise solvable in poly-time!
Two more classifications of algorithms:
 Exact:
– guaranteed to find the optimal solution
– prove optimality for every (finite-size) instance
– run time is instance-dependent (worst-case scenario:
exponential time)
 Approximate (heuristic) methods trade optimality for speed/efficiency;
they seek near-optimal solutions but cannot guarantee
optimality.
– Further classified into constructive and local search methods (a sketch
of both follows after this list)
– Constructive: build a solution incrementally by adding components to an
initially empty solution, without backtracking (see NNH for the TSP
problem on pg. 30)
– Local search: start from a complete solution and iteratively try to
improve it with local changes (see the best-improvement rule for the
neighborhood examination scheme on pg. 31)
– Neighborhood structure: the set of possible solutions that the
algorithm can “move” to from the current solution.
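A rough Python sketch of the two approximate strategies just listed, using a hypothetical random distance matrix: nearest-neighbor construction (NNH) followed by a best-improvement 2-opt local search. It follows the spirit of the heuristics cited on pgs. 30–31, not their exact pseudocode:

```python
import random

# Hypothetical symmetric distance matrix for a small 6-city TSP instance.
random.seed(0)
n = 6
dist = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        dist[i][j] = dist[j][i] = random.randint(1, 20)

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def nearest_neighbor(start=0):
    # Constructive heuristic: grow the tour one city at a time,
    # always moving to the closest unvisited city, with no backtracking.
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: dist[last][c])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def best_improvement_2opt(tour):
    # Local search: scan the whole 2-opt neighborhood, apply the single
    # most improving move, and repeat until no improving move exists.
    improved = True
    while improved:
        improved, best_delta, best_move = False, 0, None
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # would only reverse the whole tour
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                if delta < best_delta:
                    best_delta, best_move = delta, (i, j)
        if best_move:
            i, j = best_move
            tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
            improved = True
    return tour

tour = nearest_neighbor()
print("constructive (NNH):", tour, tour_length(tour))
tour = best_improvement_2opt(tour)
print("after local search:", tour, tour_length(tour))
```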
The ACO Metaheuristic
 A colony of artificial ants cooperate to find good
solutions to difficult discrete optimization problems.
 Good solutions are an emergent property of
cooperation!
 Static problems: all characteristics of the problem
are defined once and do not change.
 Dynamic problems: characteristics vary according to
underlying functions and the optimization must
adapt to the changing environment.
 See problem definition on pgs. 34 & 35.
ACO (continued)…
 Ants build solutions by performing stochastic moves
on the construction graph, which is made up of
components (nodes) and connections (edges).
 Sometimes the ants find feasible solutions,
sometimes not (and that’s ok).
 The pheromone trail acts as a coded, long-term memory of the search.
 Heuristic information: additional information that
the ants have, a priori, from a source other than
the environment (example: estimated path cost)
 Although ants act concurrently, independently, and
often with little individual intelligence, good-quality solutions arise
from the collective interaction of the ants.
ACO Components
 ConstructAntsSolutions:
– manages ants concurrently
– stochastic engine resides here
– evaluation function for ant performance resides here
 UpdatePheromones:
– deposits or evaporates pheromone trails.
 DaemonActions:
– centralized actions not resident in individual ants
– optional functionality
– example: a pheromone bonus for the shortest path found so far
 See page 38 for pseudocode.
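A minimal skeleton (an interpretation, not the book's pseudocode on page 38) showing how the three procedures might be scheduled; the `problem` object and its methods (`construct_solution`, `evaporate`, `deposit`, …) are assumed interface names, not part of the text:

```python
def aco_metaheuristic(problem, n_ants, n_iterations):
    # Skeleton of the ACO metaheuristic: this sketch simply runs the
    # three procedures in sequence each iteration.
    pheromones = problem.initial_pheromones()
    best = None
    for _ in range(n_iterations):
        # ConstructAntsSolutions: each ant builds a solution stochastically,
        # biased by pheromone values and heuristic information.
        solutions = [problem.construct_solution(pheromones) for _ in range(n_ants)]

        # DaemonActions (optional): centralized steps no single ant could do,
        # e.g. tracking the best-so-far solution so it can later get a bonus.
        for s in solutions:
            if problem.is_feasible(s) and (best is None or problem.cost(s) < problem.cost(best)):
                best = s

        # UpdatePheromones: evaporate everywhere, then deposit pheromone on
        # the components used by the constructed solutions.
        pheromones = problem.evaporate(pheromones)
        for s in solutions:
            pheromones = problem.deposit(pheromones, s)
    return best
```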
ACO Applications
 Hamiltonian circuit: a trip solution on an “undirected” graph
which visits each city (node) exactly once and returns to
the starting city.
 Symmetric TSP:
– only paths have “cost”
– shortest Hamiltonian circuit trip length
– “cost” from node i to j is identical to j to i.
 Asymmetric traveling salesman problem
– “cost” from node i to j is not identical to j to i.
 Sequential Ordering Problem: similar to TSP
– an asymmetric traveling salesman problem with additional
constraints
– doesn’t need to return to starting city
– a precedence constraint must be considered
– the precedence constraint requires that some node i has to be
visited before some other node j
 Note: pheromones play roughly the same role in these cases.
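For the TSP-like problems above, one commonly used construction step is the random-proportional choice rule, sketched below; the exponents `alpha` and `beta` and the matrices `tau` (pheromone) and `dist` (distances) are assumptions for illustration:

```python
import random

def choose_next_city(current, unvisited, tau, dist, alpha=1.0, beta=2.0):
    # A city's attractiveness combines the pheromone on the connection
    # (tau) with heuristic information (eta = 1/distance), weighted by
    # the exponents alpha and beta; the next city is sampled in
    # proportion to that attractiveness.
    weights = [(j, (tau[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta))
               for j in unvisited]
    total = sum(w for _, w in weights)
    r = random.uniform(0, total)
    acc = 0.0
    for j, w in weights:
        acc += w
        if acc >= r:
            return j
    return weights[-1][0]
```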
ACO Applications (continued…)
 Generalized Assignment Problem:
– a number of agents and a number of tasks
– any agent can be assigned to perform any task, but incurs some
cost/profit that varies with the assignment
– each agent has a budget, and the sum of the costs of the tasks
assigned to it cannot exceed that budget
– a feasible solution is an assignment in which no agent exceeds its
budget
– good solution minimizes cost or maximizes “profit”
– pheromones associated with either the next task to choose or which
agent to assign the task to.
 Multiple Knapsack Problem:
– goal is to maximize the total value of the items that fit into one bag to be
carried on a trip
– given a set of items, each with a cost and a value, determine the
number of each item to include in a collection so that the total cost is
less than a given limit and the total value is as large as possible
– pheromones are associated only with the desirability of adding a
particular item to a solution
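A hedged sketch of how item-level pheromones could drive knapsack construction: each item i carries a single desirability value tau[i], and an ant adds items that still fit with probability proportional to that value (the exact bias used in published ACO variants may differ):

```python
import random

def construct_knapsack_solution(values, costs, capacity, tau):
    # tau[i] is the pheromone (learned desirability) of adding item i.
    # Items are added one at a time, chosen among those that still fit,
    # with probability proportional to tau[i] * (value/cost ratio).
    chosen, remaining = [], capacity
    candidates = set(range(len(values)))
    while True:
        fits = [i for i in candidates if costs[i] <= remaining]
        if not fits:
            break
        weights = [tau[i] * (values[i] / costs[i]) for i in fits]
        i = random.choices(fits, weights=weights, k=1)[0]
        chosen.append(i)
        candidates.remove(i)
        remaining -= costs[i]
    return chosen
```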
ACO Applications (continued…)
 Network Routing Problem:
– minimum path costs between pairs of nodes in
the network
– each “connection” (link) carries multiple
pheromone trails, one for each different
destination node.
 Dynamic TSP:
– time is a factor
– cities can be added or removed from the graph
– pheromones act similarly to the standard TSP
problem.
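One possible (assumed) data layout for the routing case: each node keeps a separate pheromone value per (destination, neighbor) pair, so the same link can look attractive for one destination and unattractive for another:

```python
# Hypothetical per-destination pheromone table for one node "A".
pheromone_table = {
    # node: {destination: {neighbor: pheromone}}
    "A": {"D": {"B": 0.7, "C": 0.3},
          "E": {"B": 0.2, "C": 0.8}},
}

def next_hop(node, destination):
    # Deterministic variant for illustration: forward toward the neighbor
    # with the most pheromone for this destination (an ant would sample
    # probabilistically instead).
    trails = pheromone_table[node][destination]
    return max(trails, key=trails.get)

print(next_hop("A", "D"))  # -> "B"
print(next_hop("A", "E"))  # -> "C"
```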