RC22720 (W0302-024) February 5, 2003
Mathematics
IBM Research Report
Implementation of an Exact Algorithm for a Cutting-stock
Problem Using Components of COIN-OR
Laszlo Ladanyi, Jon Lee, Robin Lougee-Heimer
IBM Research Division
Thomas J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY 10598
IMPLEMENTATION OF AN EXACT ALGORITHM
FOR A CUTTING-STOCK PROBLEM
USING COMPONENTS OF COIN-OR
Laszlo LADANYI1, Jon LEE1 & Robin LOUGEE-HEIMER1
January 2003
Abstract
The rate at which research ideas can be prototyped is significantly increased when reusable software components are employed. A mission of the Computational Infrastructure for Operations Research (COIN-OR) initiative is to promote the development and use of reusable open-source tools for operations research professionals. In this paper, we introduce the COIN-OR initiative and survey recent progress in integer programming that utilizes COIN-OR components. In particular, we present an implementation of an algorithm for finding integer-optimal solutions to a cutting-stock problem.
1 Department of Mathematical Sciences, IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, N.Y. 10598, U.S.A. {ladanyi,jonlee,robinlh}@us.ibm.com.
Introduction
Software is integral to much of the research conducted in the field of operations research (OR). For instance, during 2001 approximately seventy-five
percent of the articles in Operations Research, the flagship journal of the
Institute for Operations Research and the Management Sciences, contained
computational results. The prevalence of software in OR research indicates
that the way OR research software is developed, managed, and distributed
can have a potentially significant impact on the field [34].
Today, research software is typically developed in a more proprietary
manner than the theory that it supports. Research software is commonly
written for the author’s sole use, and it is not intended for public distribution. In contrast, the theory is peer reviewed, archived, and publicly
disseminated. A variety of adverse effects arguably results from such software practices: Wheels are re-invented, models and implementations are
lost, comparisons are unfair, knowledge transfer is limited, collaboration is
inhibited by lack of standards, and evolution of the field is stifled. (For a thorough discussion of these points, see [34], which is the basis of this section.)
For an individual researcher, perhaps the most noticeable adverse consequence of current OR research-software practices is the need to “reinvent” pre-existing software. New algorithmic ideas are frequently tested
by computationally benchmarking them against published techniques. To
make a comparison meaningful, the competing implementations need to be
run in the same computing environment over the same test sets. If the
original software is unavailable, then there is little choice but to attempt to
recreate it. The effort required to re-implement an algorithm from its published description is nontrivial due to the well-acknowledged gap between
the abstract theoretical explanation of an algorithm (usually expressed in
infinite precision and without details such as specific data structures employed), and a software implementation of the algorithm.
Moreover, the performance of the re-implementation may not match the reported results.
Computational performance is a product of both the strength of the algorithmic theory and the efficiency of the implementation. When implementation details are not disclosed, it can be difficult to reproduce published
computational results. (To cite a well-known example, had the implementation used by Karmarkar been made publicly available, his computational
studies could have been quickly repeated, and the confusion surrounding
his contributions would have been greatly diminished [40, 21].) If a
reference implementation of existing techniques were published along with
the theory and computational results, the time required to conduct a comparative study would be significantly reduced, and the quality of such a
study would be greatly enhanced.
Considering the often incremental nature of scientific progress, the time
and effort spent reinventing seems especially wasteful. Not atypically, a
researcher reads a published paper with computational results. Then, (s)he
invents a way to extend the state of the art using an idea which at its heart
is a novel twist on an existing algorithm or a new extension to a related
problem. Ideally, the researcher would like to make a modification to the
existing code from a published study, but that source code is usually not
available, and the researcher must start from scratch.
Re-implementing an algorithm involves to varying degrees re-designing,
re-coding, re-testing, and re-debugging. The perceived effort involved can
be a barrier to the adoption of promising computational ideas (a barrier that is especially high for graduate students and others working under time limits). Under
current software practices, there is often a lengthy time lag between when a
promising computational idea appears in the refereed literature and when
it migrates into commercial codes. In MIP: Theory and practice – closing
the gap [17], Bixby et al. state that “...until recently little of that [theoretical] work has made it into the codes used by practitioners”. Publishing
the source code (and parameter settings) along with the theory and computational results could accelerate the adoption process by reducing the
effort required of others to establish the value of the theory in their own
endeavors.
Given the continuum between OR theory and practice, the differentiation between research software and non-research software may be vague.
By research software, we mean software that is developed to produce the
computational results that are reported in the open literature. Clearly,
under traditional software business models, freely disseminating software
with a significant potential for profit from licensing fees is ill-advised (and
one may wonder whether it would not be prudent to protect the underlying
theory as well). The majority of the research software written today, however, does not travel down the path to market, but the realization that this
is an available path encourages a proprietary attitude towards software.
One way researchers seemingly profit from not disseminating code is by
raising the barrier to entry for would-be competitors mining the same
research vein. “Giving away” research software by publishing it along with
the theory may seem like giving away a lead in the race to extend the state
of the art. Rather than foregoing the benefits of publishing code, what is
needed is a suitable definition of “giving it away” that makes disseminating
source code an optimal strategy for the author. One such definition is the
open-source definition.
Open source is a phenomenon in computer science that is increasingly
receiving attention in the popular press. The underlying philosophy of
open source is to promote software reliability and quality by supporting independent peer-review and rapid evolution of source code. This philosophy
is pragmatically advanced by using copyright law in a nontraditional way.
Technically, the term “open source” refers to a specific type of software
license that is certification marked by the nonprofit Open Source Initiative ([6]). All open-source licenses certified by the Open Source Initiative
adhere to principles set forth in the Open Source Definition. The Open
Source Definition version 1.9 states criteria on nine fundamental issues,
including access to source code, free distribution, and prohibition of discrimination. A common misconception is that open source is synonymous
with “public domain” software. But unlike software in the public domain,
open-source software is clearly copyrighted. Open source is also distinct
from freeware and shareware software, which are usually available only in binary executable format. And unlike “free-for-academic-use-only” licenses, open-source licenses do not discriminate, so that diversity and participation are maximized. There are dozens of certified open-source licenses. While all certified open-source licenses conform to the principles of the Open Source Definition, they can have significant differences. For example, all open-source licenses must permit unrestricted redistribution of source code (e.g., the Berkeley Software Distribution License), but some licenses require it (e.g., the GNU General Public License).
The open-source philosophy has spawned a unique paradigm for developing complex software that is high-performance, robust,
and secure. In a typical, successful open-source software project (e.g., the
Linux operating system), a virtual community of volunteer developers spontaneously arises from among users. Users download the source code from
the Web, and may use it as is or modify it for their own purposes. Access to the actual source code, as opposed to executables, gives the users
power. Users may find and fix bugs, extend functionality, and port to new
platforms. They then typically (and, depending on the license, may be
required to) contribute their modifications to be incorporated into the base
code distribution. Because of the relatively large number of developers
working simultaneously, the code evolves rapidly.
Distributed software development is made possible through concurrent
editing tools, such as the Concurrent Versions System (CVS, [2]), which
like many of the tools used in the open-source community is itself an open-source project. The virtual community typically has a pyramid structure
and shares a set of values. Rank in the hierarchy is based on the value of
one’s contributions. Open-source communities are sometimes characterized
as a “brutal meritocracy”; they run on ego and reputation. The code has
maintainers, not owners. Bugs are publicized, not hidden. Open exchange
is valued and information hoarding is abhorred.
The benefits of the open source paradigm have been proven by many
successful projects. (Much of the Internet is run on open-source tools.
Roughly 60 percent of Web sites run on the open-source Apache HTTP
server, vs. 25 percent for Microsoft’s IIS [5]). These benefits address many
of the adverse consequences in the development and distribution of OR
research software. By opening source code under an open-source license,
computational results can be reproduced, fair comparisons of algorithm
performance can be made, the best implementations can be archived and
built on, code reinvention can be minimized, implementation innovation
knowledge can be transferred, and collaboration and software standards
can be fostered.
While open source presents an attractive alternative for the OR community, it is by no means a panacea. Open source gains advantage from a
large community of volunteer developers. Operations research is a comparatively specialized area, and the number of developers is correspondingly
smaller. Open source projects that have succeeded have been fairly low on
the software stack [e.g., operating systems (Linux), Web servers (Apache),
scripting (Perl), Internet naming services (bind), Internet mail (sendmail)].
OR software belongs in the mid- to high-level application area of the software stack. Writing software for peer review (let alone peer extension and peer maintenance) can require substantially more effort than writing software for one's own use. The benefit to a researcher's institution is not universally
recognized, let alone rewarded, by tenure and promotion processes.
The interplay between practice and research has long been held as a
key to the health and vitality of operations research [24]. In open source
development (and associated business models, see [44]) interaction between
the research and practice communities is promoted.
Open source can be thought of as a means for “publishing” software in
a way that is analogous (but very different) to how theoretical results are
published today. To explore the viability of open source and its ability to
accelerate progress in the field, “COIN-OR” was conceived.
In Section 1, we introduce the Computational Infrastructure for Operations Research (COIN-OR) initiative, created to provide open-source software for the operations research community, and we describe the currently available software components for integer programming. In Section 2, we
detail an ongoing project that uses COIN-OR software to explore exact solution approaches to the classic Cutting Stock Problem (CSP). This project
serves as a case study of the ability of open source to accelerate the
rate at which research ideas can be prototyped. In Section 3, we briefly
survey other noteworthy integer programming research and practice that
employs COIN-OR software. This survey does not include the other nonlinear, meta-heuristic, and pure linear-programming projects using COIN-OR
software which fall outside of the integer-programming domain.
1 COIN-OR
COIN-OR is an initiative to promote open source software resources for
operations research professionals [33, 35, 45, 34]. The idea for the initiative
was conceived by IBM Research. The goal was to create a community-owned, community-operated repository of open-source software to meet the needs of OR professionals. More broadly, the long-term goal is to create a venue for “publishing” software, analogous (but different) to the open literature for theory. The COIN-OR web site debuted in August 2000, in conjunction with the 17th triennial International Symposium on Mathematical Programming in Atlanta, Georgia. IBM Research made a three-year commitment to host the on-line resources. COIN-OR started with software
contributions from IBM Research and an invitation to the community to
become active partners as contributors, users, thought-provokers, to the
point of taking ownership and leadership of the initiative. Success was
defined as having COIN-OR become community-owned and community-operated.
Originally, COIN-OR stood for the “Common Optimization Interface
for Operations Research”. It was pointed out that the word “optimization”
in the name misrepresented the broad ambitions of the project to have simulation software, visualization tools, models, data sets, and other resources
for the specialized needs of OR professionals that fall outside of the optimization area. Consequently, in November 2002 the initiative’s name was
changed to “Computational Infrastructure for Operations Research.”
A significant milestone towards the project’s goal of community-ownership was reached in November 2002, when the INFORMS board unanimously voted to become the new host of COIN-OR.
Under the COIN-OR umbrella, a variety of software tools are under development by a heterogeneous group of volunteers from industry,
academia, and government. The code offerings currently available in the
source code repository at www.coin-or.org include modules for integer programming, nonlinear programming, subgradient optimization, and tabu
search. Examples, documentation, and data sets are also available. COIN-OR includes independent (stand-alone) projects as well as inter-dependent
projects. In this paper, we will focus on three integrated tools for mixed-integer programming: a generic framework for branch-cut-price techniques
(BCP), a common interface to linear-programming solvers (OSI), and a
cutting plane library (CGL). Information on the other tools is available
on the COIN-OR project web site at http://coin-or.org. In particular:
the Volume Algorithm (a subgradient method), Derivative Free Optimization, IPOPT (an interior-point method for nonlinear optimization), CLP
(an LP solver), SBB (a branch-and-bound/branch-and-cut code), and the
Open Tabu Search framework.
1.1 Branch-Cut-Price Framework (BCP)
BCP is a development framework for parallel implementation of problem-class-specific branch, cut, and price algorithms that use LP relaxations
for lower bounding. Parallelism is achieved by processing the subproblems
simultaneously. A master-workers model is employed, where the master
oversees the distribution of the subproblems and the workers process them.
Communication between master and workers uses a message-passing module that allows execution either sequentially or in a distributed parallel
environment.
While a worker processes a search-tree node, BCP allows the user to
generate new constraints (cutting) to strengthen the LP relaxation and
variables (pricing) to extend the feasible region. Since the set of constraints
and variables in the formulation can change, the user needs to be able to
create a realization of a constraint (resp., variable) with respect to the
current set of variables (resp., constraints). However, this is all the user
needs to worry about, as BCP takes care of the necessary book-keeping to
correctly create the current formulation at a search-tree node.
BCP handles the general tasks that are common to (parallel) branch,
cut, and price algorithms, while the developer provides only the problem-class-specific routines through C++ virtual classes. User routines may include input/output management, preprocessing, heuristic upper bounding,
variable and constraint generation, feasibility testing, and branching rules.
Interfacing with the framework for a specific problem class is facilitated
in two ways. First, the design allows for easy incorporation of an existing
code base. Second, for most methods that the user can define, there is
an extensive collection of default methods, such as testing feasibility (the
default is to test integrality), selecting branching objects (the default is to
select a variable that should be integer but is not), and communicating an
LP solution to the cut generator.
BCP contains an extensive list of state-of-the-art features such as reduced-cost fixing, strong branching, an extended notion of branching objects, multi-way branching, and warm starting. To save memory, the search
tree is stored in an incremental fashion; that is, instead of explicitly storing
the active search tree nodes, the differences between parent and child are
stored. The modular design of BCP allows for the use of various message-passing protocols and LP solvers.
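To make the incremental storage idea concrete, here is a minimal, purely illustrative C++ sketch (the types and names are hypothetical, not BCP's actual classes): each node records only its differences from its parent, and a full node description is rebuilt by replaying the path from the root.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Hypothetical diff-based storage of a search tree: a node stores only what
// changed relative to its parent.
struct NodeDiff {
    int parent;                                            // -1 for the root
    std::vector<std::string> addedCuts;                    // cuts introduced at this node
    std::map<int, std::pair<double, double>> boundChanges; // column index -> (lb, ub)
};

struct FullNode {
    std::vector<std::string> cuts;
    std::map<int, std::pair<double, double>> bounds;
};

// Rebuild the complete description of a node by replaying diffs root-first.
FullNode reconstruct(int node, const std::vector<NodeDiff>& tree) {
    std::vector<int> path;                                 // node -> ... -> root
    for (int v = node; v != -1; v = tree[v].parent) path.push_back(v);
    FullNode full;
    for (auto it = path.rbegin(); it != path.rend(); ++it) {
        const NodeDiff& d = tree[*it];
        full.cuts.insert(full.cuts.end(), d.addedCuts.begin(), d.addedCuts.end());
        for (const auto& b : d.boundChanges) full.bounds[b.first] = b.second; // child overrides parent
    }
    return full;
}

int main() {
    // Root node, then a child that adds one cut and fixes column 3 to zero.
    std::vector<NodeDiff> tree(2);
    tree[0] = {-1, {"root cut"}, {}};
    tree[1] = {0, {"cover cut"}, {{3, {0.0, 0.0}}}};
    FullNode n = reconstruct(1, tree);
    std::cout << n.cuts.size() << " cuts, " << n.bounds.size() << " bound change(s)\n";
    return 0;
}
```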
1.2 Open Solver Interface (OSI)
The goal of the OSI is to extend the usability of optimization software by
providing a uniform interface to various linear-programming (LP) solvers,
both commercial and open source, exact and approximate.
The creation of the OSI was motivated by the lack of standards for
the application programming interface (API) of LP solvers. Even though
various LP solvers provide essentially the same functionality, they store
problems in various formats and implement functions with different parameters and arguments. The different conventions among the APIs make
sharing implementations and reproducing results difficult for researchers
with different solvers. Commercial entities can also be adversely affected
by the proliferation of APIs; they are often locked into using one particular
solver if they do not have the experience or resources to switch to another
solver later on. The OSI alleviates this problem by defining a generic (i.e.,
solver independent) LP solver API. Function calls from applications are
converted by translation libraries to the appropriate function calls for supported solvers. Translation libraries have been contributed for all of the
best-known LP solvers, attesting to the demand for a standardized API. The list of supported LP solvers includes commercial products (CPLEX, OSL, and XPRESS-MP), open-source codes (CLP, DyLP, GLPK, and the Volume Algorithm), and free-for-academics-only codes (SoPlex).
The challenge in creating the OSI is to develop a good design, i.e., to
select the most general and logical representation while keeping the translation libraries easy to implement. Since LP solvers are used by almost
everyone in the optimization community, there was great interest in this
project from early on.
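As a hedged illustration of what solver independence buys, the following sketch writes a small routine against the abstract OsiSolverInterface base class and instantiates it with the Clp interface. The class and method names are taken from the publicly documented OSI and may differ in detail across versions; the MPS file name is hypothetical.

```cpp
#include <iostream>
#include "OsiSolverInterface.hpp"
#include "OsiClpSolverInterface.hpp"

// Solve the LP held by any OSI-conforming solver and report the result.
// Only the abstract interface is used, so the routine is solver independent.
void solveAndReport(OsiSolverInterface& si) {
    si.initialSolve();                       // solve the LP relaxation from scratch
    if (si.isProvenOptimal()) {
        std::cout << "optimal value: " << si.getObjValue() << "\n";
        const double* x = si.getColSolution();
        for (int j = 0; j < si.getNumCols(); ++j)
            std::cout << "  x[" << j << "] = " << x[j] << "\n";
    } else {
        std::cout << "LP not solved to optimality\n";
    }
}

int main() {
    OsiClpSolverInterface clp;               // swap in any other OsiXxxSolverInterface here
    clp.readMps("example");                  // reads example.mps (hypothetical instance)
    solveAndReport(clp);
    return 0;
}
```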
1.3 Cut Generation Library (CGL)
Cutting planes are well known for their utility in attacking large mixed-integer programs [42]. Research in cutting planes has been underway for
decades, and a plethora of general and problem-specific cutting planes exist.
The Cut Generation Library (CGL) module of the COIN-OR repository
contains a collection of cutting plane implementations written in C++. As
of this writing, implementations for the following cuts are available in the
CGL.
• Knapsack cover cuts. The CglKnapsackCover code contains alternate
methods for finding covers (e.g., greedy, heuristic, most violated), and
alternate methods for lifting (sequential, simultaneous).
• Odd-hole cuts.
• Mixed-integer Gomory cuts.
• Probing cuts. Probing (deducing implications of changing a bound
on an integer variable [32, 26]) is normally thought of as a preprocessing technique, and not a cut-generation method. However, if in
the course of probing, a continuous variable goes to a bound, then
it may be possible to construct a disaggregation cut. Moreover, if a
constraint goes slack when a variable is fixed, it may be possible to
strengthen a coefficient in that constraint. The CglProbing code can
perform variable fixing, coefficient strengthening, and disaggregation.
For coefficient strengthening, it produces a new cut with the strengthened coefficient (as opposed to actually changing the coefficient in the
matrix).
• Simple rounding cuts. The CglSimpleRounding code was designed as an example and is likely to always be dominated by other cuts.
• Lift-and-project (Norm 1) cuts. There is a whole family of lift-and-project cuts using different norms. The CglLiftAndProject code is an implementation of the most basic one, using what is called “Norm 1” in the literature [10].
There would be tremendous value in the contribution of new cut generators,
as well as improvements to the existing set.
The design of the CGL is very modular, and it is integrated with the
OSI. The cut generator instances are derived from a C++ abstract base
class. There is one main method in the class, namely a generateCuts
method. This method extracts problem information by querying the OSI
(rather than the specific solver directly), and populates a set of cuts. If, in the
course of its logic, the cut generator detects that the underlying problem
is infeasible, it communicates that information to the OSI by adding an
infeasible cut (an inequality that cannot be satisfied, e.g., 0x ≤ −1).
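A minimal sketch of this interaction, assuming the documented OSI and CGL class names (the instance file is hypothetical), might look as follows: the generators query the solver only through the OSI and deposit any violated inequalities into an OsiCuts pool, which is then applied back to the formulation.

```cpp
#include <iostream>
#include "OsiClpSolverInterface.hpp"
#include "CglGomory.hpp"
#include "CglKnapsackCover.hpp"
#include "OsiCuts.hpp"

int main() {
    OsiClpSolverInterface si;
    si.readMps("model");                 // hypothetical instance, model.mps
    si.initialSolve();                   // solve the LP relaxation

    // Each generator sees the solver only through the OSI and appends
    // whatever violated inequalities it finds to the shared cut pool.
    OsiCuts cuts;
    CglGomory gomory;
    CglKnapsackCover knapsack;
    gomory.generateCuts(si, cuts);
    knapsack.generateCuts(si, cuts);

    std::cout << cuts.sizeRowCuts() << " cuts generated\n";
    si.applyCuts(cuts);                  // add them to the formulation
    si.resolve();                        // warm-started re-solve
    std::cout << "strengthened LP bound: " << si.getObjValue() << "\n";
    return 0;
}
```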
2 Cutting Stock Problem
In this section, we describe an ongoing effort to develop software to solve
instances of a cutting stock problem using integer-programming components of COIN-OR. We chose the cutting stock problem because (i) it is
well known in the OR community, and (ii) our approach to it makes use of
multiple components of COIN-OR.
In 2.1, we describe the classical Gilmore-Gomory approach to the (one-dimensional) cutting stock problem. In 2.2, we describe a more
general model that is better-equipped to handle real-world instances. In 2.3,
we describe a branch-and-price method to attack the model. This method
is a variation on an approach due to Vanderbeck [47]; also see Vance [46].
In 2.4, we describe our implementation using COIN-OR components. In particular, we make use of BCP, the OSI, and the CGL. In 2.5, we describe
our computational results.
2.1 The Gilmore-Gomory formulation
We consider the usual Gilmore-Gomory [25] column formulation of the
cutting-stock problem:
\[
\min \sum_j x_j , \qquad (1)
\]
subject to
\[
\sum_j a_{ij} x_j \ge d_i , \quad \text{for } i = 1, \ldots, m ; \qquad (2)
\]
\[
x_j \ge 0, \text{ integer}, \qquad (3)
\]
where the columns
\[
A_j = (a_{1j}, a_{2j}, \ldots, a_{mj})^t
\]
denote patterns. That is, the $A_j$ are vectors satisfying
\[
\sum_{i=1}^m w_i a_{ij} \le W ; \qquad a_{ij} \ge 0 \text{ integer}, \quad \text{for } i = 1, \ldots, m, \ \forall j .
\]
The goal is to minimize the number of stock rolls (of width W ) used, while
satisfying the demands di for rolls of width wi .
We adopt the Gilmore-Gomory column-generation strategy for implicitly considering all possible patterns. In this approach, the usual goal is to
just solve the linear programming relaxation. Then the LP solution can be
rounded up to find a solution having objective value within m − 1 of the
optimal objective value.
In this usual approach, we associate dual variables $\pi \in \mathbb{R}^m_+$ with the demand constraints, and we arrive at the column generation subproblem
\[
\max \sum_{i=1}^m \pi_i a_i , \qquad (4)
\]
subject to
\[
\sum_{i=1}^m w_i a_i \le W ; \qquad (5)
\]
\[
a_i \ge 0, \text{ integer}. \qquad (6)
\]
A column is generated for explicit inclusion in (1–3) when the optimal
objective value in the subproblem (4–6) exceeds one.
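The subproblem (4–6) is an unbounded integer knapsack problem. As a hedged illustration, one standard way to solve it (not necessarily the method used in the implementation described later) is dynamic programming over the stock width; the following sketch returns a most profitable pattern, which is worth adding to the master problem when the returned value exceeds one. The instance data in main are hypothetical.

```cpp
#include <iostream>
#include <vector>

// Price out a cutting pattern: maximize sum_i pi[i]*a[i] subject to
// sum_i w[i]*a[i] <= W, a[i] >= 0 integer (unbounded knapsack).
// Assumes all widths w[i] are positive.
double solvePricingKnapsack(int W, const std::vector<int>& w,
                            const std::vector<double>& pi,
                            std::vector<int>& pattern) {
    const int m = static_cast<int>(w.size());
    std::vector<double> best(W + 1, 0.0); // best[c]: max profit with width budget c
    std::vector<int> take(W + 1, -1);     // item added at budget c, or -1 for "none"
    for (int c = 1; c <= W; ++c) {
        best[c] = best[c - 1];            // using less than the full budget is allowed
        for (int i = 0; i < m; ++i) {
            if (w[i] <= c && best[c - w[i]] + pi[i] > best[c]) {
                best[c] = best[c - w[i]] + pi[i];
                take[c] = i;
            }
        }
    }
    pattern.assign(m, 0);                 // reconstruct an optimal pattern
    for (int c = W; c > 0; ) {
        if (take[c] < 0) { --c; }
        else { ++pattern[take[c]]; c -= w[take[c]]; }
    }
    return best[W];
}

int main() {
    // Tiny hypothetical instance: stock width 500, three item widths.
    std::vector<int> w = {137, 95, 212};
    std::vector<double> pi = {0.30, 0.21, 0.48}; // dual prices from the master LP
    std::vector<int> a;
    double value = solvePricingKnapsack(500, w, pi, a);
    std::cout << "pattern value " << value << ":";
    for (int ai : a) std::cout << ' ' << ai;
    std::cout << "\n";                    // the pattern prices out if value > 1
    return 0;
}
```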
2.2 The real problem
In actual instances of the cutting stock problem, the cutting machines have
a limited number, say †, of knives. So we have the added restriction
\[
\sum_{i=1}^m a_i \le \dagger + \frac{\sum_{i=1}^m w_i a_i}{W} ,
\]
or, equivalently,
\[
\sum_{i=1}^m (W - w_i)\, a_i \le \dagger W ,
\]
since we want to allow † + 1 pieces to be cut when there is no scrap for a
pattern.
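The equivalence of the two forms is immediate: multiplying the first inequality through by W and collecting terms gives
\[
W \sum_{i=1}^m a_i \;\le\; \dagger W + \sum_{i=1}^m w_i a_i
\quad\Longleftrightarrow\quad
\sum_{i=1}^m (W - w_i)\, a_i \;\le\; \dagger W .
\]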
Also, we allow for the possibility of not requiring demand to be exactly
satisfied. This is quite useful and practical in real applications in the paper
industry. We assume that for some small nonnegative integers qi and pi ,
the customer will accept delivery of between di −qi and di +pi rolls of width
wi, for i = 1, . . . , m. We allow for overproduction, beyond di + pi rolls of width wi, but we treat that as scrap.
We have the integer programming formulation:
\[
\min \; W \sum_j x_j - \sum_{i=1}^m w_i (d_i + s_i) , \qquad (7)
\]
subject to
\[
\sum_j a_{ij} x_j - s_i - t_i = d_i , \quad \text{for } i = 1, \ldots, m ; \qquad (8)
\]
\[
-q_i \le s_i \le p_i , \quad \text{for } i = 1, \ldots, m ; \qquad (9)
\]
\[
t_i \ge 0 , \quad \text{for } i = 1, \ldots, m ; \qquad (10)
\]
\[
x_j \ge 0 \text{ integer}, \ \forall j , \qquad (11)
\]
where the patterns $A_j$ are vectors satisfying
\[
\sum_{i=1}^m w_i a_{ij} \le W ; \qquad
\sum_{i=1}^m (W - w_i)\, a_{ij} \le \dagger W ; \qquad
a_{ij} \ge 0 \text{ integer}, \quad \text{for } i = 1, \ldots, m .
\]
We explicitly include the variables si and ti, but we generate the columns Aj and associated variables xj as needed. In this regard, we associate dual variables $\pi \in \mathbb{R}^m$ with the demand constraints, and our column-generation subproblem is
\[
\max \sum_{i=1}^m \pi_i a_i , \qquad (12)
\]
subject to
\[
\sum_{i=1}^m w_i a_i \le W ; \qquad (13)
\]
\[
\sum_{i=1}^m (W - w_i)\, a_i \le \dagger W ; \qquad (14)
\]
\[
a_i \ge 0 \text{ integer}, \quad \text{for } i = 1, \ldots, m . \qquad (15)
\]
A column is generated for explicit inclusion in (7–11) when the optimal
objective value in the subproblem (12–15) exceeds W .
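For the record, the threshold W is just the usual reduced-cost test: with dual prices π on the demand rows (8), a column Aj has reduced cost
\[
\bar{c}_j \;=\; W - \sum_{i=1}^m \pi_i a_{ij}
\]
in the minimization (7), so adding Aj can improve the LP relaxation exactly when this quantity is negative, i.e., when the subproblem objective (12) exceeds W.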
2.3 Solving the integer program
Let x̃ be a continuous solution of the linear-programming relaxation of
(7–11). We adopt the usual branch-and-bound philosophy of choosing a
fractional x̃k and forming two subproblems: Problem U consists of (7–11)
together with the constraint xk ≥ ⌈x̃k⌉, and problem L consists of (7–11) together with the constraint xk ≤ ⌊x̃k⌋. Next, we describe how we
incorporate these branching constraints into (7–11).
For the subproblem U , we simply adjust the right-hand side of (8) to
\[
d - \lceil \tilde{x}_k \rceil A_k ,
\]
and add the constant ⌈x̃k⌉ to the objective function (7). This has the effect of forcing us to use the pattern Ak at least ⌈x̃k⌉ times. Of course, if the adjusted right-hand side is not nonnegative, then we fathom the subproblem
by infeasibility.
The subproblem L is more difficult to handle. What we do is leave
the column Ak in (7–11) and never remove it. Also, we explicitly take the
simple upper-bound constraint xk ≤ ⌊x̃k⌋ and append it to the formulation
(7–11). Next, we need to make sure that the column-generation subproblem
does not try to introduce a new copy of Ak (which it very well might try
to do at this point!). What we do is write the column variables ai in
binary (following the approach in [47]). This will give us enough control
over the columns to prevent the re-generation of Ak . Toward this end,
we associate ai with the bi 0/1 variables ail, l = 0, 1, . . . , bi − 1, where $b_i = \lceil \log_2 (1 + \lfloor W/w_i \rfloor) \rceil$ (i.e., the maximum number of bits needed to write ai in binary). So, we substitute $\sum_{l=0}^{b_i - 1} 2^l a_{il}$ for ai in (12–15), and we arrive at
\[
\max \sum_{i=1}^m \sum_{l=0}^{b_i - 1} \pi_i 2^l a_{il} , \qquad (16)
\]
subject to
\[
\sum_{i=1}^m \sum_{l=0}^{b_i - 1} w_i 2^l a_{il} \le W , \qquad (17)
\]
\[
\sum_{i=1}^m \sum_{l=0}^{b_i - 1} (W - w_i) 2^l a_{il} \le \dagger W , \qquad (18)
\]
\[
a_{il} \in \{0, 1\} . \qquad (19)
\]
The advantage of (16–19) over (12–15) is that we can explicitly stop Ak from being generated by the subproblem. Toward this end, we write each entry ai of Ak in binary, as $\sum_{l=0}^{b_i - 1} 2^l \alpha_{il}$, and we write $\mathcal{A}_k$ for the set of pairs (i, l) for which αil = 1 in the binary encodings. Then, we can exclude Ak (and only Ak) with the constraint
\[
\sum_{(i,l) \in \mathcal{A}_k} a_{il} \;-\; \sum_{(i,l) \notin \mathcal{A}_k} a_{il} \;\le\; |\mathcal{A}_k| - 1 . \qquad (20)
\]
We propose solving the new column-generation subproblem (16–20) by 0/1
linear programming methods. As we get deeper in the tree, there may be
many constraints of the type (20); in this case, depending on the specific
columns that are being excluded, we may be able to combine and tighten
(via nonnegative combination and rounding) sets of these constraints (20)
(see [31]).
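As a hedged sketch of the bookkeeping involved (function and variable names are hypothetical, and this is not the authors' code), the following computes, for a given pattern Ak, the bit counts bi and the ±1 coefficients and right-hand side of the exclusion constraint (20).

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// For pattern Ak (entries a_i), build the coefficients of the binary
// variables a_{il} in the exclusion constraint (20):
//   +1 if bit l of a_i is set, -1 otherwise, with right-hand side |A_k| - 1.
// b_i = ceil(log2(1 + floor(W/w_i))) bits suffice for each a_i.
struct ExclusionCut {
    std::vector<std::vector<int>> coef; // coef[i][l] in {+1, -1}
    int rhs;
};

ExclusionCut buildExclusionCut(int W, const std::vector<int>& w,
                               const std::vector<int>& ak) {
    const int m = static_cast<int>(w.size());
    ExclusionCut cut;
    cut.coef.resize(m);
    int onesInAk = 0;
    for (int i = 0; i < m; ++i) {
        const int maxCopies = W / w[i];                   // floor(W / w_i)
        const int bi = static_cast<int>(std::ceil(std::log2(1.0 + maxCopies)));
        cut.coef[i].assign(std::max(bi, 1), -1);
        for (int l = 0; l < static_cast<int>(cut.coef[i].size()); ++l) {
            if ((ak[i] >> l) & 1) { cut.coef[i][l] = +1; ++onesInAk; }
        }
    }
    cut.rhs = onesInAk - 1;
    return cut;
}

int main() {
    // Hypothetical data: W = 500, two widths, pattern Ak = (2, 2).
    std::vector<int> w = {137, 95};
    std::vector<int> ak = {2, 2};
    ExclusionCut cut = buildExclusionCut(500, w, ak);
    for (size_t i = 0; i < cut.coef.size(); ++i) {
        std::cout << "item " << i << ":";
        for (int c : cut.coef[i]) std::cout << ' ' << (c > 0 ? "+1" : "-1");
        std::cout << "\n";
    }
    std::cout << "rhs = " << cut.rhs << "\n";
    return 0;
}
```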
2.4 The implementation
We implemented our algorithm using the BCP, OSI, and CGL components of COIN-OR. This allowed us to rapidly realize a reliable implementation. We applied branch-and-bound to the subproblem integer programs and branch-and-cut to the master-problem integer program (where we could afford a bit of inaccuracy). At the root node (both for the subproblem and the master problem), we used a variety of techniques to tighten the linear-programming relaxations. In particular, we fixed variables based on their reduced costs, and we generated cutting planes (e.g., mixed-integer Gomory cuts, lifted knapsack cover cuts, and odd-hole cuts).
2.4.1 Subproblem Techniques
• Integer knapsack relaxation: For instances with no knife constraint,
we solved the integer knapsack-problem relaxation (i.e., omitting the
pattern-exclusion side constraints) using the Horowitz-Sahni (backtracking) algorithm. We then checked whether the pattern-exclusion
constraints were satisfied — they were in the vast majority of cases.
For instances where this approach was unsuccessful, we applied various cutting planes (see below) to these “difficult subproblem instances”. Using these cuts, the total time spent on these difficult
subproblem instances was less than five percent of the total running
time.
• CG inequalities: We generated Chvátal-Gomory cuts by adding pairs
of exclusion constraints, dividing by two, and rounding. This is effective in strengthening the relaxation when the binary encodings of
the pair of patterns differ in one bit (see [31]).
• Tightening the pattern-exclusion constraints: For $(i, l) \notin \mathcal{A}_k$, the coefficient of ail in (20) is −1. As long as
\[
w_i 2^l > W - \sum_{r=1}^m w_r a_{rk}
\quad\text{or}\quad
(W - w_i) 2^l > \dagger W - \sum_{r=1}^m (W - w_r)\, a_{rk} ,
\]
we can strengthen (20) by changing the coefficient of ail in (20) from −1 to 0.
• MIR inequalities: We generated a number of MIR inequalities from the knapsack constraint (17) and from the knife constraint (18). Following [37], we take any constraint of the form
\[
\sum_{j \in S} \alpha_j \eta_j \le \beta , \qquad (21)
\]
where the αj and β are real numbers, and the ηj are nonnegative integer variables. Then, we derive the valid MIR (mixed-integer rounding) inequality
\[
\sum_{j \in S} \left( \lfloor \alpha_j \rfloor + \frac{(f_j - f)^+}{1 - f} \right) \eta_j \le \lfloor \beta \rfloor , \qquad (22)
\]
where f := β − ⌊β⌋ (which is assumed to be positive), fj := αj − ⌊αj⌋ (for j ∈ S), and (·)+ := max{·, 0}.
We divided the constraint (17) (respectively, (18)) by various positive integers that do not divide the right-hand side of (17) (respectively, (18)). In this way, we obtained various base inequalities (21) from which we derived MIR inequalities (22); a sketch of the coefficient computation appears after this list.
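The sketch below (hypothetical names; not the authors' code) computes the MIR coefficients (22) from a base inequality (21); the base inequality in main is a made-up example of the kind obtained by dividing a knapsack row by an integer that does not divide its right-hand side.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Given a base inequality  sum_j alpha[j]*eta[j] <= beta  in nonnegative
// integer variables, compute the MIR inequality (22):
//   sum_j ( floor(alpha_j) + (f_j - f)^+ / (1 - f) ) eta_j <= floor(beta),
// with f = beta - floor(beta) and f_j = alpha_j - floor(alpha_j).
bool mirCoefficients(const std::vector<double>& alpha, double beta,
                     std::vector<double>& mirCoef, double& mirRhs) {
    const double f = beta - std::floor(beta);
    if (f <= 1e-9) return false;          // (22) requires f > 0
    mirCoef.resize(alpha.size());
    for (size_t j = 0; j < alpha.size(); ++j) {
        const double fj = alpha[j] - std::floor(alpha[j]);
        mirCoef[j] = std::floor(alpha[j]) + std::max(fj - f, 0.0) / (1.0 - f);
    }
    mirRhs = std::floor(beta);
    return true;
}

int main() {
    // Hypothetical base inequality obtained by dividing a knapsack row by 3:
    //   (7/3) a0 + (5/3) a1 + (2/3) a2 <= 11/3.
    std::vector<double> alpha = {7.0 / 3, 5.0 / 3, 2.0 / 3};
    std::vector<double> coef;
    double rhs;
    if (mirCoefficients(alpha, 11.0 / 3, coef, rhs)) {
        for (double c : coef) std::cout << c << ' ';
        std::cout << "<= " << rhs << "\n";
    }
    return 0;
}
```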
2.4.2 Master Problem Techniques
• Solve the restricted master problem to integer optimality: Based on
currently available columns, we solved the restricted master problem
to integer optimality every now and then. This only seemed to help
when it found the overall optimal solution.
• Branching rules: We preferred to branch on variables that were (i) most fractional (i.e., nearest 1/2), (ii) near the middle of their range, and (iii) had a large objective coefficient. Specifically, for each variable, let sol be its value in the LP solution, let frac be sol minus its round-down, let −obj be its coefficient in the master problem's (minimization) objective, and let ub and lb be its upper and lower bounds. Then we chose the variable to branch on that had the largest value of
\[
obj \cdot \frac{(ub - sol)(sol - lb)}{(ub - lb)^2} \cdot (1 - frac)\, frac \; ;
\]
a sketch of this score computation appears after this list.
• Perturbing the subproblem objective function: The dual optimal solution of the restricted master problem is used to create the subproblem objective function. With the goal of generating many reasonable
columns, we repeatedly solved the subproblem and perturbed the
subproblem objective function. We did perhaps 25 or so iterations
of this, starting from each optimal solution to a restricted master
problem. Specifically, we performed the perturbation in an iteration,
by discounting each subproblem objective coefficient (initially πi 2^l)
by 5 percent when the previous subproblem solution had ail = 1; we
left subproblem objective coefficients unchanged when the previous
subproblem solution had value ail = 0.
• Farley's (intermediate) bound: Ordinarily, to get a valid lower bound at a node of the branch-and-bound tree, we must be at a point where the subproblem has no improving columns to generate. Farley [20] provided a bound which does not require this condition to hold. We implemented this, but we did not find it to be very effective on the instances tested.
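The following sketch scores branching candidates with the rule above; it assumes (as we read the description) that obj enters as a nonnegative magnitude and that sol denotes the variable's LP value. The candidate data are hypothetical.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// Score a fractional candidate variable with the branching rule described
// above: obj * (ub - sol)(sol - lb)/(ub - lb)^2 * (1 - frac) * frac,
// where frac = sol - floor(sol).
double branchingScore(double sol, double lb, double ub, double obj) {
    const double frac = sol - std::floor(sol);
    const double range2 = (ub - lb) * (ub - lb);
    if (range2 <= 0.0) return 0.0;                   // fixed variable: never branch
    return obj * ((ub - sol) * (sol - lb) / range2) * (1.0 - frac) * frac;
}

int main() {
    // Pick the best among a few hypothetical fractional candidates.
    struct Cand { double sol, lb, ub, obj; };
    std::vector<Cand> cands = {{2.4, 0, 10, 500}, {5.5, 0, 10, 500}, {1.1, 0, 3, 500}};
    int best = -1;
    double bestScore = -1.0;
    for (size_t k = 0; k < cands.size(); ++k) {
        double s = branchingScore(cands[k].sol, cands[k].lb, cands[k].ub, cands[k].obj);
        if (s > bestScore) { bestScore = s; best = static_cast<int>(k); }
    }
    std::cout << "branch on candidate " << best << " (score " << bestScore << ")\n";
    return 0;
}
```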
In keeping with the goals of COIN-OR, this implementation is available
for future researchers on the COIN-OR website at http://coin-or.org.
2.5 Computational results
We aspired to test our implementation using the same data sets as in [46] and [47]. We contacted the authors, but unfortunately, these data sets were not available to us. Consequently, we used the CUTGEN1 code of Gau and Wäscher [23] to generate 10 types of data sets similar to the randomly generated cutting-stock instances used in [47]. In each type, the length of the stock roll is 500 and there are 20 items with an average demand of 50. The item weights were uniformly generated over an interval; the intervals for the different types are given in Table 1. In keeping with the objectives of COIN-OR, the test sets used are available for other researchers at http://www.coin-or.org.

Testset   Min      Max        Testset   Min      Max
LLL1      0.0048   0.7096     LLL6      0.0025   0.2389
LLL2      0.0323   0.7490     LLL7      0.0529   0.4556
LLL3      0.0152   0.4856     LLL8      0.0576   0.4444
LLL4      0.0031   0.4641     LLL9      0.0573   0.2336
LLL5      0.0539   0.2476     LLL10     0.0520   0.2462

Table 1: Data properties
For each type, we generated 10 instances and attempted to solve them within a 300-second time limit on an RS6000 43P machine with a 375 MHz processor. The results reported in Table 2 are: the number of instances solved for each type; how many of these were solved in the root node; the average time in seconds to solve the problems that were actually solved; and the average depth of the tree for the problems that were solved, but not in the root node.
Testset   Solved   In Root   Time      Depth
LLL1      9        9         1.302     —
LLL2      9        7         0.919     8.00
LLL3      10       4         94.227    7.83
LLL4      9        5         14.779    3.50
LLL5      5        0         64.664    6.00
LLL6      2        0         70.645    11.00
LLL7      9        7         10.178    6.50
LLL8      10       7         8.519     2.00
LLL9      5        0         109.072   10.20
LLL10     4        0         82.278    7.50

Table 2: No perturbation

Table 3 contains the same information when the subproblem objective was perturbed. We experimented with perturbing the objective 10, 20, 30, 40, and 50 times in a row (thus generating that many columns every time an LP relaxation was solved). We used a perturbation factor of 0.90, 0.95, or 0.97 (i.e., the cost of the binary variables active in the optimal solution of a subproblem was discounted by 10%, 5%, or 3%, respectively). These experiments showed that a moderate approach works best, and thus Table 3 contains the results for generating 20 columns with a 5% discount.

Testset   Solved   In Root   Time      Depth
LLL1      10       8         4.406     2.00
LLL2      10       8         7.403     13.00
LLL3      10       3         29.185    3.86
LLL4      10       7         19.548    7.33
LLL5      8        1         46.190    4.57
LLL6      8        0         143.283   10.62
LLL7      10       7         11.087    2.33
LLL8      10       6         11.076    2.75
LLL9      9        2         116.943   10.43
LLL10     7        1         77.794    9.50

Table 3: With perturbation

Note that we were able to solve significantly more problems when perturbation was allowed, and also that, with the exception of nb18, perturbation sped up the computation and the column generation process converged faster. (The average running time shown is slower, but more problems were solved, and only the solved problems are included in the average running time.)
3 Other Integer-Programming Related Projects Utilizing COIN-OR Software
In this section, we briefly catalogue other known integer-programming research and industry applications using COIN-OR components.
3.1 Stochastic branch-price approaches
Lulli and Sen of the Department of Systems and Industrial Engineering
at the University of Arizona developed a branch-and-price methodology
for solving specially structured multi-stage stochastic integer programming
problems. To test their approach computationally, they implemented their
algorithm using the BCP framework and considered a stochastic version of
the well-known batch-sizing problem [36].
3.2 Branch and price methods for combinatorial auctions
Eso, Jensen and Ladanyi [] have used BCP to create software that will be
used in the Federal Communications Commission's (FCC) upcoming Auction #31, where the FCC plans to auction off wireless spectrum licenses.
In this auction, package bidding is allowed, that is, bidders can place bids
on combinations of licenses. The auction is iterative: after bidders place
their bids the optimal solution is computed together with price estimates
for the licenses and then the bidders bid again. The authors had to solve
the bid evaluation problem after each round. This problem can be very
naturally formulated as a set packing problem with side constraints (round
constraints) arising from the auction rules. The authors have devised a method to apply Dantzig-Wolfe decomposition in this integer-programming setting. Their algorithm successfully combines the column generation inherent to Dantzig-Wolfe-style methods with the cut generation used in branch-and-cut methods, leading to a true branch, cut, and price type algorithm.
Besides BCP, the implementation uses the CGL to find violated cuts as
well. Without cut generation the lower bounds at the search tree nodes
were not strong enough for fathoming.
This problem also had a (randomly generated) secondary objective
function used to choose one solution from among all the solutions which
were optimal with respect to the primary objective. Traditionally this is
achieved by
1. solving the problem with respect to the primary objective function
(stage I),
2. adding a constraint specifying that the primary objective must be
equal to the primary optimum and then
3. optimizing for the secondary objective (stage II).
This method has several drawbacks. The most important one is that it introduces a fully dense row into the formulation, which was especially problematic in their situation, since the rest of the coefficients were all −1, 0, and 1, while both objective functions contained numbers on the order of 10^6 to 10^8, leading to serious numerical instabilities. Eso et al. have shown how to apply linear complementarity conditions for integer programming after solving the stage I problem. Their method avoids having the extra constraint in stage II, albeit at the expense of having to solve multiple problems in that stage. In the authors' experience, this tradeoff was beneficial [19].
BCP was also used by Eso, Ghosh, Kalagnanam and Ladanyi [18] for
finding good solutions for procurement auctions with piece-wise linear supply curves. Such situations commonly arise when suppliers give volume
discounts for large orders. The proposed branch-and-price based approach
appeared to scale well for large problems and found very good solutions
fast. More importantly, it is a methodology that can be easily extended to
accommodate various side constraints, such as lower and upper limits on
the amount of goods purchased from each supplier, for each commodity as
well as overall.
3.3 Research and applications using the Volume Algorithm
Another library available in COIN-OR is an implementation of the Volume
Algorithm [13]. The Volume Algorithm is an extension of the well-known
subgradient algorithm. Besides providing an approximate dual optimal solution, the Volume Algorithm delivers a primal solution as well. The primal
solution is near optimal, but is slightly primal infeasible. It converges to
the center of the primal (LP relaxation) optimal face. This is advantageous because primal heuristic solutions (e.g., through rounding) can be
derived from the approximate solution as in the Uncapacitated Facility Location Example of COIN-OR, or cutting planes can be derived from the
approximate solutions as in the MAXCUT Example of COIN-OR. (Here,
an infeasible point is separated, but hopefully the cut separates some feasible fractional point as well. This is likely as the infeasible point is near the
center of the optimal face of the LP relaxation.) As in the subgradient algorithm, the longer the Volume Algorithm runs, the more precise the dual and primal solutions become and the less infeasible the primal solution becomes.
Barahona and Ladanyi [16] have embedded the Volume Algorithm into
BCP replacing the traditionally used Dual Simplex Algorithm. They have
found that for the MaxCut Problem this approach shrinks the integrality
gap much faster than the traditional approach. Solving the individual LP
relaxations is faster and the cuts generated using the primal solutions provided by the Volume Algorithm are stronger than those generated using
the primal solutions provided by the Dual Simplex Algorithm. However,
since the Volume Algorithm produces only approximate dual solutions, the
resulting lower bound was usually not sufficient to completely close the integrality gap and the Dual Simplex algorithm was needed to fathom search
tree nodes. With this hybrid approach, Barahona and Ladanyi were able
to solve Ising spin glass problem (see [39]) instances of size 80 × 80 in three hours. Up until then, only problems of size 60 × 60 were solvable within this
time frame.
Barahona and Chudak [15] have used the Volume Algorithm to find
near optimal solutions for the Uncapacitated Facility Location Problem
significantly faster than was theretofore possible. They have derived feasible solutions from the primal solution delivered by the Volume Algorithm
and lower bounds from the dual feasible solution produced by the Volume
Algorithm. This source code is available in COIN-OR as an example of
using the Volume Algorithm directly (i.e., not through the OSI).
Barahona and Dash are currently working on replacing the Dual Simplex Algorithm used in the Combinatorial Optimization and Networked
Combinatorial Optimization Research and Development Environment (CONCORDE, [1]) with the Volume Algorithm. CONCORDE is a computer
code for the Travelling Salesman Problem and related network optimization problems developed by Applegate, Bixby, Chvátal and Cook.
The Volume Algorithm has been used in an engagement involving
a limousine service company. The company wants to reduce labor and
mileage costs without degrading service by using up-to-the-second conditions to schedule and route the drivers and limousines. Examples of
changing conditions include customers calling to request new rides and
cancelling planned rides, customers requesting additional stops while rides
are in progress, variability in road traffic conditions, and variability in airline arrival times. Attractive schedules ensure VIP customers get the best
drivers, and avoid arriving late. In this work, a mixed-integer linear programming formulation was adopted where variables represent the drivers’
schedules. A column generation technique was developed to obtain new
schedules via an iterative price-and-fix approach. The Volume Algorithm
was exploited in the generation of new columns. Initial results using static
data produced schedules within 1.5% of optimality in 5 minutes. Testing
with dynamic data is underway.
3.4 Industrial applications using branch, cut, and price approaches
Anbil, Forrest, Ladanyi, Rushmeier and Snowdon have used BCP in a column generation scheme for a fleet scheduling problem [8]. The problem can
be stated as: given a set of flight legs and various side constraints (such as the number of gates available), determine the minimum-cost schedule for the fleet. Although a solution of acceptable quality was attained in the root node (thus no branching was ever required), BCP was used since both cut generation and column generation were performed. The gate constraints (i.e., at any time the number of aircraft at an airport cannot exceed the number of gates the airline has) were added dynamically to the formulation. The problem instances treated involved several thousand flights and hundreds of aircraft.
Using a branch-and-price approach, Forrest, Kalagnanam and Ladanyi
[29] were the first to solve to optimality (and that in a mere four seconds)
the mkc problem in MIPLIB 3.0 [3]. The mkc problem is an instance of the
multiple knapsack problem with color constraints, a problem which arose
in an IBM consulting engagement with a leading steel mill [28]. Forrest
et al. decreased the integrality gap from 5% to 0.5% on mkc7, a much larger problem from the same family. The mkc7 problem is available in the problem section of the IBM Research Deep Computing website [7]. The BCP application is available on COIN-OR as an example of using BCP for a pure branch-and-price approach.
Galati used COIN/BCP in his work with Galexis Pharmaceutical Logistics, the largest pharmaceutical distributor in Switzerland, while at EPFL in Lausanne, Switzerland. The problem was a tightly time-constrained routing problem, part of which was modeled as a classical vehicle routing problem with time windows and solved with BCP [22].
3.5 Polyhedral approaches to network optimization using branch-and-cut techniques
McCormick, Mahjoub, and Pesneau [43, p. 21] have used BCP to
experiment with various separation routines for valid inequalities they have
derived for the 2-edge connected subgraph problem with bounded rings.
3.6 Industrial and research efforts involving the OSI
The OSI is probably the most actively discussed and developed component
of COIN-OR. When COIN-OR started, the OSI was interfaced only to OSL,
XPRESS-MP and the Volume Algorithm. Since then the list of interfaces
has more than doubled. First Tobias Achterberg provided an interface
to CPLEX, then Lou Hafer did the same for DYLP. Subsequently, Brady
Hunsaker and Vivian De Smedt submitted an OSI layer for GLPK. When
John Forrest contributed the COIN-OR LP Solver (CLP), the OSI interface
was provided as well. Significant additions to the OSI that will expand its
power are in the queue.
Forrest, Fasano and Lee (private communication) used OSI and probing
cuts to solve a very large mixed-integer programming model for managing
IBM spare-parts inventory levels and transportation logistics (see [30]). The
model that they worked with has an enormous number of variables, owing
to the large number of different part types, the geographic dispersion and
magnitude of the machine installs, and the stringent time-based service-level agreements. Optimal solutions to instances (which are solved on a
weekly basis) keep most part types at very low inventory levels (due to
high reliability), in close proximity to many customers.
Eckstein and Nediak [41] designed a heuristic method for general 0-1
mixed integer programming that needed pivot level control in an LP solver.
They have extended the interface of the OSI to provide this and other
simplex-specific functionality, such as basis and tableau operations. Their changes are expected to be incorporated into the OSI in the future.
While each individual solver has more or less the same set of parameters that
the user can set to modify the behavior of the solver, the interpretation
of these parameters does vary from solver to solver. Bjarni Kristjansson,
author of the MPL [4] modeling language, has offered for incorporation
MPL’s design of handling consistent setting of parameters across various
underlying LP solvers.
3.7 Common Linear Program Solver (CLP) and Simple Branch-and-Bound (SBB)
Two new packages have recently been contributed to COIN-OR. Although
they are too new to have been used in research or applications, they deserve
mention.
The lowest layer in COIN-OR that is related to integer programming is
the COIN-OR Linear Programming Solver (CLP), a simplex-based, open-source LP solver that has an interface to the OSI. The initial CLP contribution was authored by John Forrest and contributed to COIN-OR by IBM Research. While other simplex-based solvers with OSI interfaces are licensed under the GNU General Public License (GLPK and DyLP) or under free-for-academic-use-only licenses (SoPlex), CLP is placed under the Common Public License (CPL). (All of the donations from IBM Research to date are under the same license, which enables the community to
intermingle code fragments from among these projects.)
The design goal for CLP was to balance performance with maintainability and extensibility of the code. (World-class performance often leads
to unwieldy code.) The performance target for CLP was 80% of the performance of similar commercial codes.
CLP contains both a primal and a dual simplex solver, with most state-of-the-art features already implemented. These include super-sparse matrix
and vector handling, scaling, primal and dual steepest-edge pivot selection
rules, and preprocessing. In addition to the built-in functionality, CLP’s
modular design allows the user to easily replace selected subroutines, such
as pivot selection and matrix factorization, if so desired.
The Simple-Branch-and-Bound (SBB) package, as the name suggests,
began as a simple branch-and-bound code. However, it has quickly grown
into a not-so-simplistic branch-and-cut code. Its design goal is to be simpler than BCP and to work on many problems without tuning. The initial implementation was authored by John Forrest and contributed to COIN-OR by IBM Research.
SBB allows users to: add their own cuts or use those in the cut generation library; add their own heuristics or use those provided; add their own
methods for selecting the branching node and variable; and use any solver
that has an OSI interface.
4 Summary
The mission of COIN-OR is to promote open-source software for the OR
community. The initiative has been very successful to date, based on the
number of software contributions, the activity of the mailing lists, and
reported experiences with the software.
In this paper, we have argued that open source may be a successful means of addressing the adverse consequences of current OR research-software practice. We introduced COIN-OR as an initiative to explore the
viability of open source for OR. As a case in point, we prototyped ideas
for solving the classic CSP using COIN-OR components. Thanks to the
re-usable components available under COIN-OR, we were able to quickly
prototype and test our ideas. Our implementation, along with the data sets
used in our computational study, has been archived for others to build on in the COIN-OR source code repository. Comments, code improvements, and data-set contributions from the community are welcome.
We surveyed other known integer-programming related research efforts
using software components available under COIN-OR. Due to the available-on-demand nature of the COIN-OR software, there is no system to track if
and how the COIN-OR source code is being used. We encourage users of
COIN-OR software to provide feedback to improve the software and to cite
the use of the free software in their work. We encourage all OR software
developers to participate by using COIN-OR software, subscribing to the
mailing lists, and contributing software. For more ideas on how to become
involved visit http://coin-or.org/how-to-help.html.
References
[1] CONCORDE. http://www.math.princeton.edu/tsp/concorde.html.
[2] Concurrent Versions System. http://www.cvshome.org.
[3] MIPLIB 3.0. http://www.caam.rice.edu/~bixby/miplib/miplib3.html.
[4] MPL. http://www.maximal-usa.com.
[5] Netcraft. http://www.netcraft.com, July 2001.
[6] Open Source Initiative. http://opensource.org.
[7] Problems at DCI. http://www.research.ibm.com/dci/problems.shtml.
[8] Ranga Anbil, Francisco Barahona, Laszlo Ladanyi, Russell Rushmeier,
and Jane Snowdon. IBM makes advances in airline optimization.
OR/MS Today, 26:26–29, December 1999.
[9] Laura Bahiense, Francisco Barahona, and Oscar Porto. Solving Steiner
tree problems in graphs with Lagrangian relaxation. IBM Research
Report, RC21847, 2000.
[10] Egon Balas, Sebastián Ceria, and Gérard Cornuéjols. A lift-and-project cutting plane algorithm for mixed 0-1 programs. Math. Programming, 58(3, Ser. A):295–324, 1993.
[11] Egon Balas, Sebastián Ceria, and Gérard Cornuéjols. Solving mixed 0-1 programs by a lift-and-project method. In Proceedings of the Fourth
Annual ACM-SIAM Symposium on Discrete Algorithms (Austin, TX,
1993), pages 232–242, New York, 1993. ACM.
[12] Francisco Barahona and Ranga Anbil. Solving large scale uncapacitated facility location problems. IBM Research Report, RC21515, 1999.
[13] Francisco Barahona and Ranga Anbil. The volume algorithm: Producing primal solutions with a subgradient method. Math. Program.,
87:385–399, 2000.
[14] Francisco Barahona and Ranga Anbil. On some difficult linear programs coming from set partitioning. Discrete Appl. Math., 118(1-2):3–
11, 2002. Third ALIO-EURO Meeting on Applied Combinatorial Optimization (Erice, 1999).
[15] Francisco Barahona and Fabian Chudak. Near-optimal solutions to
large scale facility location problems. IBM Research Report, RC21606,
1999.
[16] Francisco Barahona and Laszlo Ladanyi. Branch and cut based on the
volume algorithm: Steiner trees in graphs and max-cut. IBM Research
Report, RC2222, 2001.
[17] Robert E. Bixby, Mary Fenelon, Zonghao Gu, Ed Rothberg, and Roland Wunderling. MIP: theory and practice—closing the gap. In System Modelling and Optimization (Cambridge, 1999), pages 19–49. Kluwer Acad. Publ., Boston, MA, 2000.
[18] Marta Eso, Soumyadip Ghosh, Laszlo Ladanyi, and Jayant
Kalagnanam. Bid evaluation in procurement auctions with piece-wise
linear supply curves. IBM Research Report, RC22219, 2001.
[19] Marta Eso, Laszlo Ladanyi, and David L. Jensen. Solving lexicographic
multiobjective MIP with column generation. INFORMS Annual Meeting, San Jose, 2002; abstract, p. 88.
[20] Alan Farley. A note on bounding a class of linear programming problems, including cutting stock problems. Operations Research, 38:922–
923, 1990.
[21] John Forrest and John Tomlin. Implementing interior point linear
programming methods in the Optimization Subroutine Library. IBM Systems Journal, 31(1):183–209, 1987.
[22] Matthew Galati. Private communication, 2002.
[23] T. Gau and G. Wäscher. CUTGEN1: A problem generator for the
standard one-dimensional cutting stock problem. Eur. Jour. of Oper.
Res., 84:572–579, 1995.
[24] Arthur Geoffrion. The shared destiny initiative, see http://www.anderson.ucla.edu/informs/sdi.htm.
[25] Paul C. Gilmore and Ralph E. Gomory. A linear programming approach to the cutting-stock problem. Oper. Res., 9:849–859, 1961.
[26] Monique Guignard and Kurt Spielberg. Logical reduction method
in zero-one programming (minimal preferred variables). Oper. Res.,
29(1):49–74, 1981.
[27] Oktay Günlük and Yves Pochet. Mixing mixed-integer inequalities.
Math. Programming, 90:429–457, 2001.
[28] Jayant Kalagnanam, Milind Dawande, Mark Trumbo, and Ho Soo
Lee. The surplus inventory matching problem in the process industry.
Operations Research, 48(4):505–516, 2000.
[29] Laszlo Ladanyi, John Forrest, and Jayant Kalagnanam. Column generation approach to the multiple knapsack problem with color constraints. IBM Research Report, RC22013, 2001.
[30] Jon Lee. Service-parts logistics. Tutorial on Supply Chain and Logistics Optimization, Institute for Mathematics and its Applications,
University of Minnesota, September 2002.
[31] Jon Lee. Cropped cubes. IBM Research Report RC21830, 2000. To
appear in: Journal of Combinatorial Optimization.
[32] Carlton E. Lemke and Kurt Spielberg. Direct search algorithms for
zero-one and mixed-integer programming. Oper. Res., 15:892–914,
1967.
[33] Robin Lougee-Heimer. The COIN-OR initiative: open-source software for optimization. In Proceedings of the Third International Workshop on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems, pages 307–319, 2001.
[34] Robin Lougee-Heimer. The common optimization interface for operations research: Promoting open-source software in the operations research community. IBM Journal of Research and Development, 47(1),
2002.
[35] Robin Lougee-Heimer, J.P. Fasano, Francisco Barahona, Brenda
Dietrich, John Forrest, Robert Harder, Laszlo Ladanyi, Tobias
Pfender, Theodore Ralphs, Matthew Saltzman, and Katya Scheinberg.
The COIN-OR initiative: Open source accelerates operations research progress. OR/MS Today, 28(4):20–22, 2001.
[36] Guglielmo Lulli and Suvrajeet Sen. A branch-and-price algorithm
for multi-stage stochastic integer programming with application to
stochastic batch-sizing problem. submitted for publication, 2001.
[37] Hugues Marchand and Laurence A. Wolsey. Aggregation and mixed
integer rounding to solve MIPs. Oper. Res., 49(3):363–371, 2001.
[38] Silvano Martello and Paolo Toth. Knapsack problems. Wiley-Interscience Series in Discrete Mathematics and Optimization. John
Wiley & Sons Ltd., Chichester, 1990. Algorithms and computer implementations.
[39] John E. Mitchell. An interior point cutting plane algorithm for Ising
spin glass problems. In Operations Research Proceedings 1997 (Jena),
pages 114–119, Berlin, 1998. Springer.
[40] Walter Murray, Philip Gill, Michael Saunders, and John Tomlin. On projected Newton barrier methods for linear programming and an
equivalence to Karmarkar’s projective method. Mathematical Programming, 36:183–209, 1986.
[41] Mikhail Nediak and Jonathan Eckstein. Pivot, cut, and dive: A heuristic for 0-1 mixed integer programming. RUTCOR Research Report,
RRR 53-2001, 2001.
[42] George Nemhauser and Laurence Wolsey. Integer and combinatorial
optimization. John Wiley & Sons Inc., New York, 1999. Reprint of the
1988 original, A Wiley-Interscience Publication.
[43] Pierre Pesneau, Ali Ridha Mahjoub, and S. Thomas McCormick. The
2-edge connected subgraph problem with bounded rings. IFORS, 2002.
[44] Eric S. Raymond. The Magic Cauldron. June 1999, see http://www.tuxedo.org/~esr/writings/magic-cauldron/.
[45] Matthew J. Saltzman. COIN-OR: An open-source library for optimization. In S. Nielsen, editor, Software for Computational Economics and Finance Optimization. Kluwer, 2002.
[46] Pamela H. Vance. Branch-and-price algorithms for the one-dimensional cutting stock problem. Comput. Optim. Appl., 9(3):211–
228, 1998.
[47] François Vanderbeck. Computational study of a column generation
algorithm for bin packing and cutting stock problems. Math. Program.,
86(3, Ser. A):565–594, 1999.