
Computational Optimization and Applications, 9, 211–228 (1998)
© 1998 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
Branch-and-Price Algorithms for the
One-Dimensional Cutting Stock Problem
PAMELA H. VANCE
[email protected]
Department of Industrial and Systems Engineering
Auburn University
Auburn, Alabama 36849-5346
Received November 21, 1995; Revised July 11, 1996
Abstract. We compare two branch-and-price approaches for the cutting stock problem. Each algorithm is based on
a different integer programming formulation of the column generation master problem. One formulation results in
a master problem with 0–1 integer variables while the other has general integer variables. Both algorithms employ
column generation for solving LP relaxations at each node of a branch-and-bound tree to obtain optimal integer
solutions. These different formulations yield the same column generation subproblem, but require different
branch-and-bound approaches. Computational results for both real and randomly generated test problems are
presented.
Keywords: column generation, cutting stock problem, branch-and-bound
1. Introduction
The successful solution of large-scale mixed integer programming (MIP) problems requires
formulations whose linear programming (LP) relaxations give a good approximation to the
convex hull of feasible solutions. Often these strong formulations have an exponential
number of rows or columns or both. Branch-and-price is a general approach for solving
strong IP formulations with a huge number of variables. In branch-and-price, sets of
columns are left out of the LP relaxation because there are too many columns to handle
efficiently and most of them will have their associated variable equal to zero in an optimal
solution anyway. Then to check the optimality of an LP solution, a subproblem, called the
pricing problem is solved to try to identify columns to enter the basis. If such columns are
found, they are added to the original problem, referred to as the master problem, and the
LP is reoptimized. Branching occurs when no columns price out to enter the basis and the
LP solution does not satisfy the integrality conditions. Branch-and-price [Barnhart et al.
1995], which is a generalization of branch-and-bound with LP relaxations, allows column
generation to be applied throughout the branch-and-bound tree.
At first glance, it may seem that branch-and-price involves nothing more than combining well-known ideas for solving linear programs by column generation with branch-and-bound. However, as Appelgren [1969] observed 25 years ago, it is not that straightforward.
There are fundamental difficulties in applying column generation techniques for linear programming in integer programming solution methods [Johnson 1989]. The major difficulty
in applying branch-and-bound with column generation is that conventional integer programming branching on variables can destroy the structure of the pricing problem. Thus,
customized branching rules have to be devised that allow efficient solution of the pricing
problem at each node of the branch-and-bound tree.
Computational experience reported to date for branch-and-price algorithms (for example,
Desrochers et al. [1989], Mehrotra and Trick [1994], Parker and Ryan [1995], Savelsbergh
[1993], and Vance et al. [1994]) has been for cases where the master problem is a set partitioning problem. This set partitioning structure greatly simplifies the customized branching
schemes needed to solve the integer programming problem. In this work we experiment
with two branching schemes, one suggested in Barnhart et al. [1995] and Johnson [1989]
and the other in Vance et al. [1994] and Vanderbeck and Wolsey [1994], for master problems where the column coefficients and row right hand sides are general integers rather
than 0–1. We use the classical cutting stock problem as a case study to test the efficiency
of these branch-and-price algorithms for general integer master problems.
In the next section we describe the cutting stock problem and present two possible column
generation formulations for solving it. In Section 3 we present branching rules for the
two formulations. Section 4 discusses details of the implementation of branch-and-price
algorithms for each formulation. In Section 5 we present computational experience for both
implementations and discuss our results. Finally in Section 6, we discuss our conclusions.
2. The Cutting Stock Problem
The objective of the cutting stock problem (CSP) is to find the minimum number of stock
rolls of length L necessary to meet exactly the demands di for rolls of shorter lengths ai for
i = 1, . . . , n, where di and ai, i = 1, . . . , n, are integers. These shorter lengths are referred
to as items. The problem solution gives the minimum number of rolls necessary to meet
the demand and specifies the cutting patterns used to cut the stock rolls into the items.
CSP can be formulated as the IP:

$$
\begin{aligned}
z = \min \ & \sum_{k=1}^{K} y_k \\
\text{s.t. } & \sum_{k=1}^{K} x_{ik} \ge d_i && i = 1, \ldots, n \\
& \sum_{i=1}^{n} a_i x_{ik} \le L y_k && k = 1, \ldots, K \qquad (1) \\
& y_k \in \{0,1\} && k = 1, \ldots, K \\
& x_{ik} \ge 0 \text{ and integer} && i = 1, \ldots, n; \; k = 1, \ldots, K
\end{aligned}
$$
where yk = 1 if roll k is used and 0 otherwise, xik is the number of times item i is cut
from roll k, and K is an upper bound on the number of stock rolls needed. The first set of
constraints require that the demand for each item is met, while the second set requires that
the total length of the items cut from each roll cannot exceed L. It is easy to show that this
formulation gives the trivial LP bound $\sum_{i=1}^{n} d_i a_i / L$, which is poor for instances where
there is a large amount of waste [Vance et al. 1994].
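As a small numerical illustration (a hypothetical instance, not one of the test problems in Section 5): with $L = 10$ and a single item of length $a_1 = 6$ and demand $d_1 = 2$, the trivial bound is $d_1 a_1 / L = 1.2$, yet each roll can hold only one copy of the item, so two rolls are needed; the gap between this bound and the optimal value grows with the waste per roll.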
2.1. Two Column Generation Formulations
To get a stronger LP relaxation, Dantzig-Wolfe decomposition can be applied to the block
angular formulation (1). In the decomposition, the knapsack constraint for roll k is replaced
by a convex combination of the extreme points $(x^j_{1k}, x^j_{2k}, \ldots, x^j_{nk})^T$, $j = 1, \ldots, P$, of

$$\mathrm{conv}\Bigl\{x_k : \sum_{i=1}^{n} a_i x_{ik} \le L, \; x_{ik} \ge 0 \text{ and integer for } i = 1, \ldots, n\Bigr\}.$$
The resulting formulation is

$$
\begin{aligned}
z = \min \ & \sum_{k=1}^{K} \sum_{j=1}^{P} \lambda_{jk} \\
\text{s.t. } & \sum_{k=1}^{K} \sum_{j=1}^{P} x^j_{ik} \lambda_{jk} \ge d_i && i = 1, \ldots, n \qquad (2) \\
& \sum_{j=1}^{P} \lambda_{jk} \le 1 && k = 1, \ldots, K \\
& \sum_{j=1}^{P} x^j_{ik} \lambda_{jk} \text{ integer} && i = 1, \ldots, n, \; k = 1, \ldots, K \\
& \lambda_{jk} \ge 0 && j = 1, \ldots, P, \; k = 1, \ldots, K.
\end{aligned}
$$
The first set of constraints require that the demand for each item is met by the patterns that
are chosen. The convexity constraints for each roll k ensure that the pattern specified for
roll k is in the convex hull of integer solutions to the knapsack constraint. The integrality
restrictions on each component of the resulting patterns require that the convex combination
must result in an integral cutting pattern. This formulation gives a stronger LP relaxation
than (1) because it restricts the patterns in any LP solution to be in the convex hull of integer
solutions to the knapsack problem. Thus, we eliminate any LP solutions arising from
fractional extreme points of the knapsack polytope which are feasible to the LP relaxation
of (1).
The above formulation can be simplified. Note that the set of feasible patterns for cutting
each stock roll is identical since all the rolls have identical length. The convexity constraints
and integrality restrictions in (2) require that the cutting pattern for each roll is a feasible
integer solution to the knapsack constraint for that roll. Let the columns $(x^j_{1k}, x^j_{2k}, \ldots, x^j_{nk})^T$, $j = 1, \ldots, P'$, represent feasible solutions to

$$\sum_{i=1}^{n} a_i x_{ik} \le L, \qquad x_{ik} \ge 0 \text{ and integer for } i = 1, \ldots, n,$$
rather than extreme points of the convex hull. If we require λjk ∈ {0, 1}, we can drop
the convexity rows and the integrality requirements on the components of each pattern
without eliminating any feasible integer solutions. The columns $x^j_k$ can be denoted as $x^j$ for $j = 1, \ldots, P'$ and the 0–1 variables $\lambda_{jk}$ can be replaced by integer variables $\mu_j$ that denote the number of rolls cut using pattern $j$. In this manner we obtain the classic
formulation presented by Gilmore and Gomory [1961]
$$
\begin{aligned}
z = \min \ & \sum_{j=1}^{P'} \mu_j \\
\text{s.t. } & \sum_{j=1}^{P'} x^j_i \mu_j \ge d_i && i = 1, \ldots, n \qquad (3) \\
& \mu_j \ge 0 \text{ and integer} && j = 1, \ldots, P'.
\end{aligned}
$$
The constraints require that the selected patterns must fill the demand for each item.
2.2. Solving the Master Problems
One can solve the LP relaxations of both (2) and (3) using a column generation approach
by first solving the LP relaxation over a subset of the possible columns. We refer to a
master problem with only a subset of the possible columns as a restricted master problem.
Additional columns can be generated as needed for the restricted master problem by solving
the knapsack problem
$$
\begin{aligned}
\max \ & \sum_{i=1}^{n} \pi_i x_i \\
\text{s.t. } & \sum_{i=1}^{n} a_i x_i \le L \qquad (4) \\
& x_i \ge 0 \text{ and integer}
\end{aligned}
$$
where the π's are the optimal dual prices on the demand constraints from the solution of the LP relaxation of the restricted master problem (2) or (3), and $x_i$ is the number of times item $i$ is cut in the new pattern. A column prices out favorably to enter the basis if its reduced cost ($1 - \pi x + \sigma_k$ in (2), where $\sigma_k$ is the dual variable on the convexity row for roll $k$; $1 - \pi x$ in (3)) is negative. When the optimal solution to the knapsack problem does not identify any
such column, the current solution to the LP relaxation of the restricted master problem is
optimal for the unrestricted master problem.
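The loop just described can be sketched compactly. The following is a minimal illustration of column generation for the LP relaxation of (3); it assumes SciPy's HiGHS-based `linprog` as the LP solver (the actual implementation used CPLEX through MINTO), a dynamic-programming knapsack in place of the Horowitz-Sahni code of Section 4.2, and illustrative function names.

```python
import numpy as np
from scipy.optimize import linprog

def solve_restricted_master(patterns, d):
    """LP relaxation of (3) over the current columns; returns value, mu, duals pi."""
    X = np.array(patterns, dtype=float).T          # demand rows: X mu >= d
    c = np.ones(X.shape[1])                        # minimize the number of rolls
    res = linprog(c, A_ub=-X, b_ub=-np.asarray(d, dtype=float),
                  bounds=[(0, None)] * X.shape[1], method="highs")
    return res.fun, res.x, -res.ineqlin.marginals  # duals of the >= rows (nonnegative)

def price_pattern(pi, a, L):
    """Integer knapsack (4) by dynamic programming; returns a pattern and its reduced cost."""
    best = [0.0] * (L + 1)                         # best profit with capacity at most w
    choice = [None] * (L + 1)
    for w in range(1, L + 1):
        for i, ai in enumerate(a):
            if ai <= w and best[w - ai] + pi[i] > best[w]:
                best[w], choice[w] = best[w - ai] + pi[i], i
    x, w = [0] * len(a), L
    while choice[w] is not None:                   # recover the optimal pattern
        x[choice[w]] += 1
        w -= a[choice[w]]
    return x, 1.0 - best[L]

def column_generation(a, d, L, tol=1e-9):
    # one single-item pattern per item as the starting set of columns (cf. Section 4.1)
    patterns = [[L // a[i] if j == i else 0 for j in range(len(a))] for i in range(len(a))]
    while True:
        z, mu, pi = solve_restricted_master(patterns, d)
        pattern, reduced_cost = price_pattern(pi, a, L)
        if reduced_cost >= -tol:                   # no column prices out: LP optimal
            return z, patterns, mu
        patterns.append(pattern)
```

For a toy instance, `column_generation([6, 4, 3], [20, 15, 10], 10)` returns the LP value of (3) together with the generated patterns and their fractional usage.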
Desrosiers et al. [1996] state that branch-and-price solution approaches for cutting stock
problems may not produce optimal solutions since a column in the optimal solution may
never be generated in solving the pricing problem. Their point is that the pricing problem
will always generate an extreme point (pattern) of the convex hull of feasible solutions to the
knapsack problem, but the optimal solution may contain a pattern which is not an extreme
point. In formulation (2) observe that the convexity constraints enable us to represent
any feasible integer solution to the knapsack problem as a convex combination of extreme
points. Therefore, a nonextremal pattern need not be generated in order to be part of the
solution. Their observation does however have implications for solving this formulation by
branch-and-bound. We will address this in more detail when we develop a branching rule
for this formulation. Notice that formulation (3) includes all integer patterns as possible
columns, not just extremal patterns. When we develop a branching rule for (3), we will
observe that each branching decision adds cuts to the knapsack subproblem that enable
nonextremal columns to be generated.
Theorem 1 The LP relaxations of (2) and (3) provide the same bound on the value of
the optimal IP solution.
Proof: To prove this result we must show that for any LP solution to (2) there is a solution
to (3) that gives the same LP bound and vice versa. In (2), the columns generated are
extreme points of the convex hull of integer solutions to the knapsack constraint while the
columns of (3) represent any integer solution to the knapsack constraint. Thus, any pattern
represented by a column in (2) is also represented as a column of (3) but the reverse is not
necessarily true. In general, $P' \ge P$. Assume that the first P columns of (3) are extreme
integer solutions and other integer solutions are represented by columns with j > P . Further
assume that the numbering of the first P patterns in each formulation is identical.
Given an LP solution $\lambda^*$ to (2), an LP solution $\mu^*$ for (3) is constructed as follows:

$$\mu^{j*} = \sum_{k=1}^{K} \lambda^{j*}_k \quad j = 1, \ldots, P, \qquad \mu^{j*} = 0 \quad j > P.$$

Since

$$\sum_{k=1}^{K} \sum_{j=1}^{P} \lambda^{j*}_k = \sum_{j=1}^{P'} \mu^{j*},$$

the two LP solutions give the same objective function value.
Given an LP solution $\mu^*$ to (3), let $N_0 = 0$, $N_j = \sum_{p=1}^{j} \lceil \mu^{p*} \rceil$ for $j = 1, \ldots, P'$, and let $K \ge N_{P'}$. We can think of $N_j$ as the cumulative number of rolls needed to cut patterns $1, \ldots, j$ if we were to round up each fractional pattern in the LP solution $\mu^*$ to obtain an integer solution to the problem. We assume that pattern $k$ is used to cut rolls $N_{k-1} + 1, \ldots, N_k$. We will specify the cutting patterns for each of the $N_{P'}$ rolls in terms of the variables $\lambda$ from (2) to show the equivalence of the two formulations. If pattern $k$ is used a fractional number of times, the last roll cut with it, $N_k$, is assigned the corresponding fractional values of $\lambda_{jk}$. We construct $\lambda^*$ for the first $P$ patterns as follows:

$$
\begin{aligned}
\lambda^{j*}_k &= 1 && k = N_{j-1} + 1, \ldots, N_j - 1, \; j = 1, \ldots, P \\
\lambda^{j*}_{N_j} &= \mu^{j*} - \lfloor \mu^{j*} \rfloor && j = 1, \ldots, P \\
\lambda^{l*}_k &= 0 && k = N_{j-1} + 1, \ldots, N_j - 1, \; j = 1, \ldots, P, \; l \ne j.
\end{aligned}
$$

Each of the remaining patterns, $P + 1, \ldots, P'$, can be expressed as a convex combination of the first $P$ patterns. Thus, each pattern $x^j$ with $P + 1 \le j \le P'$ can be written as $\sum_{l=1}^{P} x^l \nu_{jl}$ where $\sum_{l=1}^{P} \nu_{jl} = 1$ and $\nu_{jl} \ge 0$. So $\lambda^*$ for the remaining patterns can be determined as
follows:

$$
\begin{aligned}
\lambda^{l*}_k &= \nu_{jl} && l = 1, \ldots, P, \; k = N_{j-1} + 1, \ldots, N_j - 1, \; j = P + 1, \ldots, P' \\
\lambda^{l*}_k &= \nu_{jl} (\mu^{j*} - \lfloor \mu^{j*} \rfloor) && l = 1, \ldots, P, \; k = N_j, \; j = P + 1, \ldots, P'.
\end{aligned}
$$
It is easy to see from the definition of $\lambda^*$ that

$$\sum_{k=1}^{N_P} \sum_{j=1}^{P} \lambda^{j*}_k = \sum_{j=1}^{P} \mu^{j*}.$$

The fact that

$$\sum_{k=N_P + 1}^{N_{P'}} \sum_{j=1}^{P} \lambda^{j*}_k = \sum_{j=P+1}^{P'} \mu^{j*}$$

follows from the requirement that $\sum_{l=1}^{P} \nu_{jl} = 1$. Thus, the two LP solutions give the same objective function value. This mapping from a solution of the LP relaxation of (3) to an LP solution for (2) can be extended for the case where $K$ is greater than or equal to the current LP bound but less than $N_{P'}$.
Since (2) and (3) provide the same LP bound and (3) is more compact, it might seem
that we should limit our study to this second formulation. However, (2) allows a simpler
branching logic to be applied, so it is not immediately apparent which formulation will
prove to be more efficient. Also, formulation (2) is applicable in cases where the stock rolls
do not all have identical length while (3) is not.
3. Branching Rules
Marcotte [1985] showed that the optimal value of the IP is frequently equal to the round-up of the optimal LP solution value. However, the optimal solution to the LP relaxation
may be highly fractional. Furthermore, when column generation is used to solve the LP,
there is no guarantee that a good integer solution exists that uses only the subset of the
columns generated to optimize the LP. To find an optimal integer solution we must branch
to eliminate the current fractional solution and reoptimize the LP relaxation at each node
in the branch-and-bound tree. Thus, we must be able to price new columns efficiently after
branching. To enable efficient pricing, we need to be careful in our selection of a branching
rule.
To see why the choice of a branching rule is critical, consider the resulting algorithm if we
used the conventional rule of branching on a fractional variable given a fractional solution
to (2). We would choose a fractional variable $\lambda^{j*}_k$ and create two new subproblems (nodes in our branch-and-bound tree), one where we restrict $\lambda_{jk} \le \lfloor \lambda^{j*}_k \rfloor$ and the other where we require $\lambda_{jk} \ge \lceil \lambda^{j*}_k \rceil$. Now suppose we choose to optimize the LP relaxation for the first node. It is possible (and quite likely) that the optimal solution to the pricing problem (4) will be the pattern represented by $\lambda^{j*}_k$. Then, in order to find a column with negative reduced
cost that satisfies the branching decisions in force at this node, or prove that no such column
exists, we must identify the solution to the knapsack problem with the 2nd highest objective
value. At depth n in the branch-and-bound tree, to generate columns we may need to find
the nth best solution to a knapsack problem. Thus we see that if we wish to allow for
efficient column generation at each node of the branch-and-bound tree, we must choose
a branching rule that is compatible with the pricing problem. By compatible, we mean
that we must be able to modify the pricing problem at each node so that columns that are
infeasible because of the branching decisions in force at that node are not generated and the
pricing problem remains tractable.
The challenge in formulating a branching rule is finding a rule that excludes the current
fractional solution, validly partitions the solution space of the problem, and provides a
pricing problem that is tractable. We need a guarantee that a feasible integer solution will
be found (or infeasibility proved) after a finite number of branches and we need to be able to
encode the branching information into the pricing problem without destroying the problem
structure.
3.1. Branching in Formulation (2)
A branching strategy for general mixed integer master problems with convexity rows can
be derived directly from (2) as follows [Johnson 1989]. From the integrality restrictions in
(2) we see that the optimal solution to the LP relaxation, $\lambda^*$, is infeasible to the IP if and
only if
$$\sum_{j=1}^{P} x^j_{lk} \lambda^{j*}_k = \alpha$$

is fractional for some $l$ and $k$. This suggests the following branching rule: on one branch we require

$$\sum_{j=1}^{P} x^j_{lk} \lambda_{jk} \le \lfloor \alpha \rfloor$$

and on the other branch we require

$$\sum_{j=1}^{P} x^j_{lk} \lambda_{jk} \ge \lceil \alpha \rceil.$$
This branching decision can be enforced in the restricted master problem by deleting any columns for roll $k$ that violate the upper bound $\lfloor \alpha \rfloor$ on component $l$ on the first branch or the lower bound of $\lceil \alpha \rceil$ on the second branch. When a new pattern is generated for roll $k$, an upper bound of $\lfloor \alpha \rfloor$ on component $l$ is added to the pricing problem on the first branch and a lower bound of $\lceil \alpha \rceil$ on the second branch. Note that each pricing problem differs only in
the lower and upper bounds on the components. Thus, the resulting pricing problem at each
node is a knapsack problem with upper and lower bounds on some of the variable values
and is no more difficult to solve than the pricing problem at the root node.
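Because every node's pricing problem is a knapsack with simple bounds on the pattern components, it can still be solved with standard techniques. The sketch below is illustrative only, with hypothetical names; the paper uses a modified Horowitz-Sahni code, whereas this version is a plain recursion with memoization that prices a column under the lower and upper bounds accumulated from the branching decisions.

```python
from functools import lru_cache

def price_with_bounds(pi, a, L, lower, upper):
    """Bounded integer knapsack: maximize pi . x subject to a . x <= L and
    lower[i] <= x[i] <= upper[i].  Branching in formulation (2) only tightens
    these bounds, so the pricing problem keeps this shape at every node."""
    n = len(a)

    @lru_cache(maxsize=None)
    def best(i, cap):
        if i == n:
            return 0.0, ()
        hi = min(upper[i], cap // a[i])
        if lower[i] > hi:
            return float("-inf"), ()          # branching bounds cannot be satisfied
        value, plan = float("-inf"), ()
        for copies in range(lower[i], hi + 1):
            sub_value, sub_plan = best(i + 1, cap - copies * a[i])
            if pi[i] * copies + sub_value > value:
                value, plan = pi[i] * copies + sub_value, (copies,) + sub_plan
        return value, plan

    value, pattern = best(0, L)
    return list(pattern), 1.0 - value         # reduced cost before adding the convexity dual
```

At the root node, `lower` is all zeros and `upper[i] = L // a[i]`; the branch enforcing a component upper bound of $\lfloor \alpha \rfloor$ simply lowers `upper[l]`, and the other branch raises `lower[l]`.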
Before proceeding, we demonstrate that for (2) any pattern (extremal or nonextremal)
needed for the optimal IP solution can be represented using the columns that can be generated
in the branch-and-bound tree. Consider the first branch for roll k. We now require that
the pattern used to cut roll k is in the convex hull of integer solutions to the knapsack
subproblem and has $x^j_{lk} \le \lfloor \alpha \rfloor$. As part of the implementation of this branching rule, we have deleted from the master all columns with $x^j_{lk}$ greater than $\lfloor \alpha \rfloor$. Thus, we can no longer guarantee that any feasible knapsack solution with $x^j_{lk} \le \lfloor \alpha \rfloor$ can be represented as a convex combination of the remaining extreme points. However, when we solve the pricing subproblem we now generate extreme points of

$$\mathrm{conv}\Bigl\{x : \sum_{i=1}^{n} a_i x_i \le L, \; x_l \le \lfloor \alpha \rfloor, \; x_i \ge 0 \text{ and integer}\Bigr\}. \qquad (5)$$
We have added a cut to the knapsack subproblem that will allow us to generate the
columns needed to represent any pattern which satisfies the branching constraints in force
at this node.
3.2. Branching in Formulation (3)
For (3), formulating a branching rule is more complex. If the solution $\mu^*$ to (3) is fractional, we can identify a set of rows $S$ and integers $\{\alpha_l : l \in S\}$ such that

$$\sum_{j : x^j_l \ge \alpha_l \,\forall l \in S} \mu^{j*} = \beta_S$$

is fractional. We call the set $S$ a branching set. We can then branch on the constraints

$$\sum_{j : x^j_l \ge \alpha_l \,\forall l \in S} \mu_j \le \lfloor \beta_S \rfloor \qquad \text{and} \qquad \sum_{j : x^j_l \ge \alpha_l \,\forall l \in S} \mu_j \ge \lceil \beta_S \rceil.$$

These constraints place upper or lower bounds on the number of columns with $x^j_l \ge \alpha_l$ for
all l ∈ S that can be present in the solution. Note that there is no guarantee about the size
of |S|. For this case branching is not as easy as bounding a single variable in the pricing
problem.
Given a fractional solution to the LP relaxation of the restricted master problem, it is not immediately clear how to select a branching set $S$ that will result in a fractional sum $\beta_S$. Vanderbeck and Wolsey [1994] showed that given a fractional solution $\mu^*$ to the LP relaxation of (3), a branching set can be chosen by selecting a fractional variable $\mu^{\hat{\jmath}*}$ whose associated column is maximal with respect to the other fractional columns and letting $S = \{l : x^{\hat{\jmath}}_l > 0\}$ and $\alpha_l = x^{\hat{\jmath}}_l$ for all $l \in S$. This result is important in that it guarantees
that given a fractional solution, we can identify a branching rule that will eliminate that
fractional solution. However, the result does not guarantee that the branching decision will
be easy to enforce in the pricing problem.
Unlike the branching rule for (2) which could be enforced by deleting columns from the
restricted master problem, this rule must be enforced by explicitly adding a new constraint
to the master problem at each node created by the branching decision. Thus, each branching
decision adds a new dual variable that must be incorporated into the pricing problem in
order to price columns correctly. At depth T in the branch-and-bound tree there will be T
additional dual variables. We can formulate the pricing problem at this node as a knapsack
problem with additional constraints and variables. Let $S_t$ be the branching set associated with branching constraint $t$. Each branching constraint adds $|S_t| + 1$ 0–1 variables and $3|S_t| + 1$ constraints to the knapsack pricing problem. Thus, the pricing problem will be easier to solve if the number of branching constraints $T$ and $|S_t|$ for $t = 1, \ldots, T$ are small.
However, in general this problem will become prohibitively difficult for relatively small
values of T .
We now address the issue of generating nonextremal patterns needed in the optimal
solution of (3). Each branching decision can be implemented by adding variables and
constraints to the knapsack subproblem. These additional constraints change the feasible
region of the subproblem so that nonextremal solutions of the original subproblem may
become extremal solutions. Thus, necessary nonextremal patterns may be generated.
It is possible to avoid these difficult pricing problems if only maximal cutting patterns
are allowed as columns in the master problem formulation (3). By maximal, we mean that
the waste left after cutting this pattern is shorter than the length of the smallest item. In
this special case, any fractional column defines a potential branching set. Furthermore, for
any branching set S derived in this manner, the set of all columns with xi ≥ αi for all
i ∈ S contains only one member, the maximal pattern used to define that set. Thus, the
branching decisions can be enforced by changing the upper or lower bounds on a single
variable in the master problem. The pricing problem is only affected when an upper bound
has been placed on the value of the variable associated with a pattern. In that case, we
must avoid regenerating that maximal pattern. It seems we have encountered exactly the
situation we wished to avoid: having a set of forbidden cutting patterns. However, the fact
that these forbidden patterns are all maximal makes eliminating them easier than it would be
for general patterns. The details of how these additional constraints on the pricing problem
are handled computationally are given in the next section.
4. Implementation Details

4.1. Initial Solution and Solving the Root Node LP
The master problem was initialized with a cutting pattern for each item. In each of these
initial patterns, as many copies as possible of a single item are cut from the roll. If the
pattern is not maximal, as many copies as possible of the smallest item are added to make
it maximal. In formulation (2) each of these patterns is a candidate for cutting each of the
rolls. Thus, there are nK columns in the initial formulation. In formulation (3), there is a
single copy of each pattern, that is, there are n columns initially.
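A minimal sketch of this initialization (illustrative names; the construction follows the description above):

```python
def initial_patterns(a, L):
    """One starting column per item: as many copies of that item as fit, topped
    up with copies of the smallest item so the pattern is maximal."""
    n = len(a)
    smallest = min(range(n), key=lambda i: a[i])
    patterns = []
    for i in range(n):
        x = [0] * n
        x[i] = L // a[i]                          # as many copies of item i as possible
        leftover = L - x[i] * a[i]
        x[smallest] += leftover // a[smallest]    # fill the remaining length
        patterns.append(x)
    return patterns
```

For formulation (2) each of these n patterns is replicated as a candidate for every roll, giving the nK initial columns mentioned above.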
Once the LP relaxation is solved over the initial set of columns, the pricing problem is
invoked to identify patterns with negative reduced cost. For formulation (2) at the root
node, the problem for each roll is identical, so one knapsack problem is solved and the new
pattern is added as a candidate for each roll. For formulation (3), there is only one pricing
problem, and one column is added after each knapsack problem solution.
In general, it is not necessary to solve the root node master LP to optimality to obtain a
lower bound on the optimal IP solution value, zIP . In particular, let zmin be the optimal
value of the LP relaxation of the root node master problem, zB be the optimal value of the
root node restricted master problem over the current subset of columns, and c̄min be the
reduced cost of the column given by an optimal solution to the pricing problem.
Theorem 2 If $\lceil z_B \rceil = \lceil z_B / (1 - \bar{c}_{\min}) \rceil$, then $z_{IP} \ge \lceil z_B \rceil$.

A proof of this result is provided in Vance et al. [1994]; it is a direct consequence of an LP bound presented in Farley [1990]. As a consequence of Theorem 2, solution of the LP at the root node is terminated when $\lceil z_B \rceil = \lceil z_B / (1 - \bar{c}_{\min}) \rceil$.
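As a hypothetical numerical illustration of this test: if the restricted master has value $z_B = 42.6$ and the best column found has reduced cost $\bar{c}_{\min} = -0.008$, then $z_B / (1 - \bar{c}_{\min}) \approx 42.26$, and both quantities round up to 43, so column generation can stop with the valid bound $z_{IP} \ge 43$.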
Let $z_{LB} = \lceil z_B \rceil$ where $z_B$ is the value of the final optimal solution to the restricted
master problem at the root node. At subsequent nodes in the branch-and-bound tree, once
the value of the LP drops below zLB it pays to branch immediately since nothing is gained
by solving the LP to optimality. Clearly, it will be necessary to branch since the value of
the relaxation is already less than the known IP bound.
4.2. Solving the Knapsack Subproblems
The Horowitz-Sahni (HS) algorithm [Horowitz and Sahni 1974] is used to solve the knapsack problems. The algorithm is a specialized branch-and-bound approach. Since this algorithm solves 0–1 knapsack problems, a logarithmic transformation is used to transform the integer knapsack problems into 0–1 problems. That is, $\lfloor \log_2(L/a_i) \rfloor + 1$ items with weights $a_i, 2a_i, 4a_i, \ldots$ are used for each of the original items. We present the details of HS here so that the reader may understand the modified HS algorithm we will describe later.
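A minimal sketch of this transformation (illustrative only; it takes the roll length L as the bound on how many copies of an item can appear in a pattern):

```python
def binary_expand(a, pi, L):
    """Replace each integer item i by floor(log2(L / a_i)) + 1 binary copies with
    weights a_i, 2a_i, 4a_i, ... and profits scaled the same way, so that a 0-1
    knapsack code such as Horowitz-Sahni can be applied to the pricing problem."""
    weights, profits, origin = [], [], []
    for i, (ai, p) in enumerate(zip(a, pi)):
        copies = 1
        while copies * ai <= L:                  # powers of two that still fit in a roll
            weights.append(copies * ai)
            profits.append(copies * p)
            origin.append(i)                     # which original item this 0-1 variable copies
            copies *= 2
    return weights, profits, origin
```

Any 0–1 solution of the expanded problem maps back to an integer pattern by summing, for each original item, the selected copies recorded in `origin`.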
The items are ordered in nonincreasing order of the ratio of their objective coefficient
divided by their weight. We refer to this ordering as the greedy ordering. The algorithm is
initialized with s = 0 and x̂j = 0 for j = 1, . . . , n. A forward move consists of inserting
the largest possible set of new consecutive items, those items with indices s + 1, . . . , r − 1,
into the current solution. Whenever a forward move is exhausted, the linear programming
upper bound corresponding to the current solution $\hat{x}$, given by

$$u = \hat{z} + \sum_{j=s+1}^{r-1} \pi_j + \Bigl(L - \hat{a} - \sum_{j=s+1}^{r-1} a_j\Bigr) \pi_r / a_r,$$

where

$$\hat{z} = \sum_{j=1}^{s-1} \pi_j \hat{x}_j \qquad \text{and} \qquad \hat{a} = \sum_{j=1}^{s-1} a_j \hat{x}_j,$$
is computed. After a forward move, we update the current solution by setting x̂j = 1
for j = s + 1, . . . , r − 1 and s = r. If the bound u is greater than the value of the
best known solution, a new forward move is performed, otherwise a backtracking move
follows. When the last item has been considered for addition by a forward move, the
current solution is complete and possible updating of the best known solution takes place.
A backtracking move consists of removing the last inserted item from the current solution.
After a backtracking move, the current solution is updated, and the linear programming
upper bound, u, is computed. If the upper bound is greater than the value of the best known
solution, a forward move follows, otherwise the algorithm performs another backtracking
move. The algorithm stops when no further backtracking can be performed.
The algorithm is modified slightly so that the knapsack solutions generated always represent maximal cutting patterns. This is accomplished by including items whose associated
profit (dual variable) is zero in the pricing problem. Thus, we continue to add these items
to the knapsack until it is full although they do not change the reduced cost of the resulting
pattern. Since all the dual variables are nonnegative, this ensures that the patterns generated
are maximal.
For formulation (3) when one or more maximal patterns are forbidden, a modified version
of the algorithm is used. For each forbidden pattern $t$, the item with $\alpha^t_i > 0$ that appears
last in the greedy ordering is marked. Actually, since the general integer knapsack problem
has been transformed into a 0–1 knapsack problem, there may be several 0–1 variables
corresponding to item i. Each of these items is marked. Each time the HS algorithm
attempts to add a marked item in a forward step, the algorithm checks to see if adding this
item will result in a forbidden pattern. If so, the item is not added. Since the HS algorithm
always adds items according to the greedy ordering, it is sufficient to check for a forbidden
pattern only when the last pattern item in the greedy ordering is considered for addition to
the knapsack.
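The test performed when a marked item is about to be added can be sketched as follows (a simplified illustration in terms of copy counts; the actual code works on the 0–1 expanded items and only fires for the marked item of each forbidden pattern):

```python
def completes_forbidden(current, item, forbidden):
    """Would adding one more copy of `item` to the partial pattern `current`
    reproduce one of the forbidden maximal patterns?  If so, the forward move
    skips the item."""
    candidate = list(current)
    candidate[item] += 1
    return any(candidate == list(pattern) for pattern in forbidden)
```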
Note that with these additional constraints on the knapsack problem we are no longer
guaranteed to generate a pattern that is maximal. The pattern will however be maximal
with respect to the additional constraints on the problem at this node in the branch-and-bound tree. That is, it will either be maximal, or it will be a maximal proper subset of one
of the forbidden patterns.
4.3. Branching Choice and Tree Search
Once column generation has ceased at a node in the branch-and-bound tree, if the solution
is fractional and the round up of the LP bound at the node is less than the best known
integer solution, branching takes place. Generally, for a given fractional LP solution, there
are several valid branching choices that can be made. Here we discuss how the branching
choice was made for each formulation.
For formulation (2), we must identify a roll $k$ and item $i$ such that $\sum_{j=1}^{P} x^j_{ik} \lambda^{j*}_k$ is fractional. This is done by sequentially checking each roll $1, \ldots, K$ and each item $1, \ldots, n$.
The first roll/item combination that defines a valid branching rule is used.
For formulation (3) a maximal pattern whose variable is fractional in the current LP
relaxation is chosen to define the branching decision. Later, on the branch where this
pattern cannot be regenerated, we must be able to quickly check whether adding an item
will result in this pattern. This check is simplified if the pattern contains a small number
of items. Therefore, the candidate pattern with the smallest number of nonzeroes is chosen
for branching.
The search order of the nodes may also have a dramatic effect on the efficiency of the
algorithm. For formulation (2), depth-first search is used, with the exception that any node
whose LP bound exceeds the round-up of the root LP is not searched unless there are no
nodes whose LP bound does not exceed this value. Since cutting-stock problems generally
have solutions that attain this tight bound, we first concentrate on finding such a solution.
For formulation (3), the same depth-first strategy is used with one additional condition.
We always search the branch where a lower bound has been imposed on the value of the
branching variable first. The search is performed in this order to minimize the number of
forbidden patterns that will have to be considered in the knapsack problem.
4.4. Overview of Algorithm
A flowchart of the algorithm is presented in Figure 1. The upper and lower bounds used
by the algorithm are updated as shown on the flowchart. A lower bound on the optimal IP
solution value is used to prevent unnecessary column generation in the branch-and-bound
tree. After column generation has ceased at the root node the lower bound is set equal to
the round up of the LP solution value. It may be necessary to update the lower bound later
in the course of the algorithm if all the active nodes have larger LP objective values. If this
is the case, the lower bound is set equal to the round up of the minimum value of the LP
bound over all the active nodes. The value of the best known integer solution is used as
an upper bound for fathoming nodes. Whenever a new integer solution is found, the upper
bound is updated accordingly.
Several of the steps shown in the algorithm are optional. The “generate columns” decision
involves comparing the current LP bound to the IP lower bound. The “cut off column
generation” decision is an opportunity to use Theorem 2 to stop column generation early.
Omitting these steps would cause the algorithm to optimize the unrestricted master LP at
each node.
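Putting the pieces together, the overall search can be summarized by the following skeleton (an illustrative sketch with hypothetical callables, not the MINTO-based implementation): `solve_lp_with_pricing` performs column generation at a node, possibly cut off early via Theorem 2, and `branch` creates the two children described in Section 3.

```python
import math

def branch_and_price(root, solve_lp_with_pricing, branch):
    """Depth-first branch-and-price skeleton corresponding to Figure 1."""
    best_value, best_solution = math.inf, None
    stack = [root]                                 # depth-first search, as in Section 4.3
    while stack:
        node = stack.pop()
        lp_value, solution, is_integral = solve_lp_with_pricing(node)
        if math.ceil(lp_value) >= best_value:
            continue                               # fathom: node bound cannot beat incumbent
        if is_integral:
            best_value, best_solution = lp_value, solution   # new incumbent (upper bound)
            continue
        stack.extend(branch(node, solution))       # two new subproblems
    return best_value, best_solution
```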
5. Computational Results
The two branch-and-price algorithms described above were implemented using MINTO
2.1 [Nemhauser et al. 1994] with CPLEX 3.0 [CPLEX Optimization, Inc. 1990] as the LP
solver on an IBM RS6000/590. Both real industrial instances and random instances were
solved using each of the two approaches.
Table 1 summarizes the characteristics of the real cutting stock instances solved. The
column “prob” gives the problem number, “nitems” gives the number of unique lengths
with positive demand, “max demand” gives the maximum demand for any single length,
“min demand” gives the minimum demand for any length, “average demand” gives the
average demand per length, “max fraction” gives the value of the maximum item length as

Figure 1. Column Generation/Branch-and-Bound Algorithm

a fraction of the stock roll size, “min fraction” gives the minimum item length as a fraction
of the roll size, and “avg fraction” gives the demand-weighted average item length. The 28
problems ranged in size from 2 lengths to 18. The demand per length ranged from fewer
than 10 to over 700.
The computational results for these instances are shown in Table 2. The column “prob”
gives the instance number. Under “convexity formulation” results for formulation (2) are
presented. The column “depth” gives the depth of the branch-and-bound tree, “nodes”
gives the number of nodes explored, “LPs” gives the total number of LPs solved, and
“time” gives the CPU time in seconds. The same statistics are reported for formulation (3)
under “compact formulation”.
The compact formulation found an optimal solution more quickly for 21 of the 28 instances. It solved all but four of the instances in under 1 second and generally outperformed
the convexity formulation. However, for instance 4, it took more than 20 times longer than
the convexity formulation and 2000 times longer than required for any of the other instances
using the compact formulation. It seems that while the compact formulation is generally
the better approach for these problems, in a small number of instances it will perform far
worse than expected.
Table 1. Real Problem Characteristics

prob   nitems   max demand   min demand   avg demand   max fraction   min fraction   avg fraction
   1        2          110           21           65          0.175          0.118          0.166
   2        4           50           18           38          0.164          0.064          0.110
   3        7          140           12           61          0.138          0.074          0.101
   4       11          318           25          123          0.165          0.052          0.074
   5        3           95           10           47          0.158          0.138          0.152
   6        6           40            6           16          0.166          0.053          0.081
   7        2           78           50           64          0.081          0.053          0.070
   8        3          222           23          101          0.149          0.108          0.114
   9        2          751          220          485          0.120          0.108          0.117
  10        3           26            2           13          0.325          0.052          0.074
  11        5           49           10           29          0.167          0.083          0.140
  12       14          174            2           41          0.214          0.052          0.097
  13        5          112            9           33          0.167          0.052          0.080
  14        7           80           27           56          0.260          0.125          0.177
  15        9           60           12           26          0.194          0.139          0.167
  16        8          315            5           74          0.176          0.121          0.144
  17        7          125           27           65          0.260          0.126          0.188
  18        7          337            5           89          0.176          0.121          0.145
  19       12           31            4           14          0.264          0.080          0.135
  20        6           18            5           12          0.231          0.114          0.178
  21       12           66            5           25          0.177          0.091          0.136
  22       18           58            1           14          0.278          0.076          0.127
  23        4            9            1            4          0.242          0.170          0.221
  24       12          252            4           40          0.179          0.088          0.121
  25       14           58            2           18          0.278          0.076          0.127
  26        5           61            6           34          0.186          0.119          0.161
  27       11           80            1           21          0.236          0.116          0.156
  28        2           68            6           37          0.119          0.083          0.086
Computational trials were also performed on randomly generated problem instances. We
attempted to evaluate the effect of the number of items, their average demand, and their
size relative to the size of the stock rolls on algorithmic performance. Two values for the number of items, 10 and 50, were used. Average demand levels $\bar{d}$ of 10 and 50 were considered. The individual demands were determined by generating $m$ uniform random numbers $R_1, R_2, \ldots, R_m$, where $m$ is the number of items, and letting

$$d_i = \Bigl\lfloor \Bigl(R_i \Big/ \sum_{j=1}^{m} R_j \Bigr) m \bar{d} \Bigr\rfloor \quad \text{for } i = 1, \ldots, m-1$$

and

$$d_m = m \bar{d} - \sum_{i=1}^{m-1} d_i.$$
Item lengths were taken from three different uniform distributions U(1,2500), U(1,5000),
and U(1,7500). The length of the stock rolls was fixed at 10000 for all problem instances.
This approach for generating random cutting stock instances was also used in Wascher and
Gau [1994].
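The generation scheme can be reproduced with a short script such as the one below (a sketch with illustrative names; it assumes integer item lengths drawn uniformly from the stated ranges and the fixed roll length of 10000):

```python
import random

def random_instance(m, dbar, length_upper, L=10000, seed=None):
    """One random cutting stock instance following the scheme described above."""
    rng = random.Random(seed)
    a = [rng.randint(1, length_upper) for _ in range(m)]      # item lengths
    R = [rng.random() for _ in range(m)]
    total = sum(R)
    d = [int(R[i] / total * m * dbar) for i in range(m - 1)]  # floor of the proportional share
    d.append(m * dbar - sum(d))                               # last demand takes the remainder
    return a, d, L

# e.g. a, d, L = random_instance(m=10, dbar=50, length_upper=7500, seed=0)
```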
Table 2. Computational Results for Real Problems

               convexity formulation                     compact formulation
prob    depth   nodes    LPs   time (sec)       depth   nodes    LPs   time (sec)
   1        0       1      2         0.10           0       1      1         0.07
   2       28      29     34         0.28          32      52     84         0.41
   3       33      34     42         1.45          13      19     42         0.25
   4      413     417    447       360.16         615    1213   3753      8377.18
   5       25      26     27         0.19           2       3      4         0.11
   6        8       9     16         0.21          14      23     50         0.27
   7        0       1      1         0.09           2       3      3         0.07
   8       56      57     61         0.90           3       4      7         0.09
   9       85      86     87         2.97           2       3      4         0.11
  10        3       4      9         0.12           1       2      3         0.08
  11       20      21     22         0.20          40      71    143         0.89
  12      144     145    169        72.29          60      99    341         3.45
  13       16      17     21         0.19          21      36     70         0.34
  14      122     123    135        20.76          41      66    118         0.71
  15       50      51     65         9.88           6       7     24         0.23
  16      221     225    255       230.33           3       4     18         0.16
  17      142     143    157        39.22          33      49    108         0.55
  18      192     193    215       136.23          71      89    135         0.83
  19       75      76    104         6.24          26      39    150         1.00
  20       12      13     18         0.19          29      55    136         0.74
  21      165     187    239        52.71          17      25     91         0.57
  22      146     162    280       171.11          47      74    224         1.83
  23        4       5     10         0.14           9      16     20         0.14
  24      149     150    172        53.60          26      38    135         0.84
  25      107     109    144        17.68          22      31    100         0.66
  26       36      37     46         1.38          10      14     21         0.15
  27       47      49     79        12.86           4       9     41         0.36
  28        9      11     12         0.13           2       3      4         0.08
Table 3 reports the computational results for the randomly generated problems. The column “dist.” gives the distribution of lengths for the problem, 1 for U(1,7500), 2 for U(1,5000), and 3 for U(1,2500). “n” gives the number of lengths and “d̄” the average demand per length. “convexity formulation” gives results for formulation (2). The column “depth” gives the depth of the branch-and-bound tree, “nodes” gives the number of nodes searched, “LPs” gives the number of LPs solved, and “time” gives the CPU time in seconds.
The values reported are averages over 20 random instances except for cases where a number
appears in parentheses after the time. In those cases, the value in parentheses gives the
number of instances that could be solved in less than 600 CPU seconds. If no values appear
for a problem type and formulation, none of the instances could be solved in under 600
CPU seconds. Finally, under “median time” the median solution time over the instances
solved is reported. The same quantities are reported for formulation (3) under the heading
“compact formulation”.
For formulation (2) it was impossible to solve any of the instances with 50 items in less
than 600 CPU seconds. In fact, in many of the trials, the algorithm was still attempting to
optimize the root node LP when the time limit was reached. It is important to note that the
size of this formulation grows much more quickly with an increase in the number of items
since more items imply more possible columns and more rolls K. Formulation (2) has K
times the number of possible columns as formulation (3). However, this formulation was
relatively insensitive to the distribution that the item lengths were generated from. In fact,
the instances for the U(1,2500) distribution seem to be a little easier for this formulation
than the others which was not at all the case for formulation (3). This may be due to the
fact that smaller items imply a smaller K value thus keeping the formulation size smaller.
In general formulation (3) outperformed formulation (2) on all problem classes. However,
the formulation was more sensitive to the distribution of the item lengths. The smaller the
item lengths relative to the roll length, the more difficult the problems were to solve. Also,
we observed the same behavior we saw with the real instances in that in many cases there
were one or two problems of each class that were much harder to solve than the others.
This is illustrated for the instances where the lengths were U(1,7500) and there were 50
items. In each case, the median computing time is significantly less than the average and
one of the 20 instances could not be solved within the time limit even though the average
computing time was far less than the limit.
We did not perform computational trials for any competing exact methods. One other
possible exact method is to solve (1) by general integer programming methods. This was
attempted in Vance et al. [1994] for cutting stock problems where the demands were
restricted to be exactly one. Because of the weak LP bound provided by the LP relaxation
of (1) it was impossible to solve problems of any size in less than an hour of CPU time.
The only other exact algorithm we found in the literature is in Goulimis [1990]. Optimal
solutions were found by generating all possible columns before entering branch-and-bound,
but this can only be done for relatively small problems.
Table 3. Computational Results for Random Problems

                         convexity formulation                          compact formulation
dist.    n    d̄    depth   nodes   LPs   time (sec)   median time    depth   nodes    LPs   time (sec)    median time
  1     10   10       24      25    35         2.9           2.8        6      10     21          0.2            0.1
  1     10   50       94      95   105       166.5         129.8       12      19     33          0.3            0.1
  1     50   10                                                         31      47    161     2.7 (19)           1.5
  1     50   50                                                         55     129    403    32.1 (19)           2.7
  2     10   10       25      27    44         2.0           2.0       21      37     83          1.7            0.2
  2     10   50      157     158   175       185.9         166.6       28      38     57          0.4            0.3
  2     50   10                                                         53      68    445         15.1           13.4
  2     50   50                                                        121     147    549    27.0 (18)          27.5
  3     10   10       32      34    58         1.4           1.0       67     128    356         34.2            0.7
  3     10   50      166     168   192        79.9          66.5       31      48    143     1.2 (19)            0.7
  3     50   10                                                         99     155   1524    101.0 (9)          87.8
  3     50   50                                                        157     227   1950   307.7 (13)         287.1
6. Conclusions
In this paper we presented two exact algorithms for one-dimensional cutting stock problems
using a branch-and-price solution approach. These algorithms were both successful at
solving real-world instances on a workstation-class computer. In general, the more compact
column generation formulation performed better due in part to the smaller size of the LP
relaxations that needed to be solved. However, the more compact formulation was less
consistent in its performance over the entire set of instances solved which was especially
apparent in the results for randomly generated instances.
One purpose of this study was to examine the efficacy of branch-and-price for general
integer master programs. Use of a compact formulation seems to be preferable for solving
larger problems, since the growth in the number of columns is much faster for formulations
that include convexity rows. These less compact formulations also have a greater degree of
problem symmetry, which is detrimental to the performance of branch-and-bound. However,
because of the more complex branching rules required by compact formulations, they may
not be practical for some applications. Furthermore, they cannot be used at all for cases
like cutting stock problems with unequal stock roll lengths where the simplifications used
to derive the compact formulation do not apply.
References
1. C. Barnhart, E.L. Johnson, G.L. Nemhauser, M.W.P. Savelsbergh, and P.H. Vance, “Branch-and-Price:
Column Generation for Solving Huge Integer Programs”, Operations Research, to appear.
2. M. Desrochers, and F. Soumis, “A Column Generation Approach to the Urban Transit Crew Scheduling
Problem”, Transportation Science, vol. 23, pp. 1-13, 1989.
3. J. Desrosiers, P. Hansen, B. Jaumard, F. Soumis, and D. Villeneuve, Dantzig-Wolfe Decomposition and
Column Generation for Integer and Non-convex Programming, Working Paper, Ecole des Hautes Etudes
Commerciales, Montreal, 1996.
4. A.A. Farley, “A Note on Bounding a Class of Linear Programming Problems Including Cutting Stock
Problems”, Operations Research, vol. 38, pp. 922-923, 1990.
5. P.C. Gilmore and R.E. Gomory, “A Linear Programming Approach to the Cutting-Stock Problem”, Operations Research, vol. 9, pp. 849-859, 1961.
6. C. Goulimis, “Optimal Solutions for the Cutting Stock Problem,” European Journal of Operational Research,
vol. 44, pp. 197-208, 1990.
7. E. Horowitz and S. Sahni, “Computing Partitions with Applications to the Knapsack Problem,” Journal of
ACM, vol. 21, pp. 277-292, 1974.
8. E.L. Johnson, “Modeling and Strong Linear Programs for Mixed Integer Programming”, in S.W.Wallace
(ed.), Algorithms and Model Formulations in Mathematical Programming , Berlin: Springer-Verlag, pp.
1-43, 1989.
9. O. Marcotte, “The Cutting Stock Problem and Integer Rounding”, Mathematical Programming, vol. 33, pp.
82-92, 1985.
10. M. Parker and J. Ryan, ”A column generation algorithm for bandwidth packing,” Telecommunications
Systems, to appear.
11. D.M. Ryan, and B.A. Foster, An integer programming approach to scheduling, in A. Wren (ed.), Computer
Scheduling of Public Transport Urban Passenger Vehicle and Crew Scheduling, Amsterdam: North-Holland,
pp. 269-280, 1981.
12. M.W. P. Savelsbergh, G.C. Sigismondi, and G.L. Nemhauser, “A Functional Description of MINTO, A
Mixed INTeger Optimizer”, Technical Report, Computational Optimization Center, Georgia Institute of Technology,
Atlanta, Georgia, 1992.
13. M.W.P. Savelsbergh, ”A Branch-and-Price Algorithm for the Generalized Assignment Problem,” Operations
Research, to appear.
14. P.H. Vance, C. Barnhart, E.L. Johnson, and G.L. Nemhauser, “Solving Binary Cutting Stock Problems
by Column Generation and Branch-and-Bound” Computational Optimization and Applications, vol. 3, pp.
111-130, 1994.
15. F. Vanderbeck and L.A. Wolsey, ”An Exact Algorithm for IP Column Generation,” Technical Report, CORE,
1994.
16. G. Wascher and T. Gau, ”Generating Almost Optimal Solutions for the Integer One-dimensional Cutting
Stock Problem,” Arbeitsbericht-Nr. 94/06, Technische Universität Braunschweig, Braunschweig, Germany,
1994.