Operations Research
1. Show that the intersection of any finite number of convex sets is a convex
set.
In mathematics, the convex hull or convex envelope for a set of points X in a real vector
space V (for example, usual 2- or 3-dimensional space) is the minimal convex set containing X.
When the set X is a finite subset of the plane, we may imagine stretching a rubber band so that it
surrounds the entire set X and then releasing it, allowing it to contract; when it becomes taut, it
encloses the convex hull of X.[1] [2]
The convex hull also has a linear-algebraic characterization: the convex hull of X is the set of
all convex combinations of points in X.
In computational geometry, a basic problem is finding the convex hull for a given finite set of
points in the plane.
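As a concrete illustration of that computational problem, the following sketch implements Andrew's monotone chain algorithm for the planar convex hull; the point set used at the end is a hypothetical example, not data from the text.

```python
# A minimal sketch of Andrew's monotone chain algorithm for the planar convex
# hull; the input points below are a hypothetical example.
def cross(o, a, b):
    """Cross product of vectors OA and OB; > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return the hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # concatenate, dropping duplicate endpoints

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```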
To show that the convex hull of a set X in a real vector space V exists, notice that X is contained
in at least one convex set (the whole space V, for example), and any intersection of convex sets
containing X is also a convex set containing X: if x and y lie in every set of the family, then, each
set being convex, the whole segment joining x and y lies in every set and hence in the intersection.
It is then clear that the convex hull is the intersection of all convex sets containing X. This can be
used as an alternative definition of the convex hull.
The convex-hull operator Conv() has the characteristic properties of a hull operator:
- extensive: S ⊆ Conv(S),
- non-decreasing: S ⊆ T implies that Conv(S) ⊆ Conv(T), and
- idempotent: Conv(Conv(S)) = Conv(S).
Thus, the convex-hull operator is a proper "hull" operator.
Algebraic characterization
Algebraically, the convex hull of X can be characterized as the set of all convex
combinations of finite subsets of points from X: that is, the set of points of the form
∑_{j=1}^{n} t_j x_j, where n is an arbitrary natural number, the numbers t_j are non-negative and
sum to 1, and the points x_j are in X. It is simple to check that this set satisfies either of the two
definitions above. So the convex hull H_convex(X) of the set X is:
H_convex(X) = { ∑_{j=1}^{n} t_j x_j : x_j ∈ X, t_j ≥ 0, ∑_{j=1}^{n} t_j = 1, n = 1, 2, … }
In fact, according to Carathéodory's theorem, if X is a subset of an N-dimensional vector
space, convex combinations of at most N + 1 points are sufficient in the definition above.
This is equivalent to saying that the convex hull of X is the union of all simplices with at
most N+1 vertices from X.
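This characterization can also be checked computationally: a point p lies in Conv(X) exactly when the linear system t_j ≥ 0, ∑ t_j = 1, ∑ t_j x_j = p has a solution. The sketch below tests that feasibility with scipy.optimize.linprog (zero objective); the point set X and the query point p are hypothetical examples.

```python
# Feasibility test for p in Conv(X), phrased as a linear program with a zero
# objective; the point set X and query point p are hypothetical examples.
import numpy as np
from scipy.optimize import linprog

X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])   # points x_j (one per row)
p = np.array([0.5, 0.5])                              # query point

n = len(X)
A_eq = np.vstack([X.T, np.ones(n)])    # coordinate equations, then sum(t) = 1
b_eq = np.append(p, 1.0)

res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
print("p in Conv(X):", res.success)    # True for this hypothetical example
```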
The convex hull is defined for any kind of object made up of points in a vector space,
which may have any number of dimensions, including infinite-dimensional vector spaces.
The convex hulls of finite sets of points and other geometrical objects in a two-dimensional plane or three-dimensional space are special cases of practical importance.
2. Give the mathematical formulation of an assignment problem.
A number of variations on this objective, or on the corresponding constraints, are
possible. Here are just a few possibilities that some models may
include (although not all at once!):
Minimize the total time it takes to perform a set of jobs.
Maximize the total satisfaction of the group.
Maximize the performance of the group (based on skill levels).
Subject to:
- Each person is assigned no more than five jobs.
- Each person may complete as many jobs as possible in eight hours.
- Each job requires three people working on it.
Now, let's model a specific example. Imagine that we have a set of household jobs to
be completed, and a group of available people to complete them. In order to create a
mathematical version of this model, we need to define a few variables:
assign(Sally, dusting) = "amount" of Sally assigned to dusting (0 or 1)
...
assign(Sally, washing) = "amount" of Sally assigned to washing (0 or 1)
...
assign(Joe, dusting) = "amount" of Joe assigned to dusting (0 or 1)
...
assign(Joe, washing) = "amount" of Joe assigned to washing (0 or 1)
Note: Each of these variables can take on the value of 0 (if the
assignment is not made) or 1 (if the assignment is made).
For the general problem formulation we need the following parameters:
cost(Sally, dusting) = amount to pay Sally to dust the furniture
...
cost(Joe, washing) = amount to pay Joe to wash the dishes
max_jobs(Sally) = maximum number of jobs Sally may perform
...
max_jobs(Joe) = maximum number of jobs Joe may perform
Basic Formulation
Based on the notation from above, the model can be rewritten as follows:
Minimize:
cost(Sally, dusting) * assign(Sally, dusting) + ... + cost(Joe, washing) * assign(Joe, washing)
Subject to:
No more than the maximum number of jobs per person:
assign(Sally, dusting) + ... + assign(Sally, washing) <= max_jobs(Sally)
...
assign(Joe, dusting) + ... + assign(Joe, washing) <= max_jobs(Joe)
No more than one person assigned to each job:
assign(Sally, dusting) + ... + assign(Joe, dusting) <= 1
...
assign(Sally, washing) + ... + assign(Joe, washing) <= 1
Each variable assign(i, j) is 0 or 1.
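In practice, the special case of this 0/1 model in which each person gets exactly one job and each job gets exactly one person (i.e. without the max_jobs relaxation above) is solved with the Hungarian method. A minimal sketch using scipy.optimize.linear_sum_assignment, with hypothetical pay amounts:

```python
# Minimal assignment-problem sketch: each person gets exactly one job and each
# job gets exactly one person.  The pay amounts below are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

people = ["Sally", "Joe"]
jobs = ["dusting", "washing"]
cost = np.array([[4.0, 2.0],       # cost(Sally, dusting), cost(Sally, washing)
                 [3.0, 7.0]])      # cost(Joe, dusting),   cost(Joe, washing)

rows, cols = linear_sum_assignment(cost)      # Hungarian method
for i, j in zip(rows, cols):
    print(f"assign({people[i]}, {jobs[j]}) = 1   at cost {cost[i, j]}")
print("total cost:", cost[rows, cols].sum())  # 5.0 for this example
```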
3. Explain the types of transportation problem.
In mathematics and economics, transportation theory is a name given to the study of optimal
transportation and allocation of resources.
The problem was formalized by the French mathematician Gaspard Monge in 1781.[1]
In the 1920s A.N. Tolstoi was one of the first to study the transportation problem mathematically.
In 1930, in the collection Transportation Planning Volume I for the National Commissariat of
Transportation of the Soviet Union, he published a paper "Methods of Finding the Minimal
Kilometrage in Cargo-transportation in space".[2][3]
Major advances were made in the field during World War II by
the Soviet/Russian mathematician and economist Leonid Kantorovich.[4] Consequently, the
problem as it is stated is sometimes known as the Monge–Kantorovich transportation
problem.
Suppose that we have a collection of n mines mining iron ore, and a collection of n factories
which consume the iron ore that the mines produce. Suppose for the sake of argument that these
mines and factories form two disjoint subsets M and F of the Euclidean plane R2. Suppose also
that we have a cost function c : R2 × R2 → [0, ∞), so that c(x, y) is the cost of transporting one
shipment of iron from x to y. For simplicity, we ignore the time taken to do the transporting. We
also assume that each mine can supply only one factory (no splitting of shipments) and that each
factory requires precisely one shipment to be in operation (factories cannot work at half- or
double-capacity). Having made the above assumptions, a transport plan is a bijection T : M → F,
i.e. an arrangement whereby each mine m ∈ M supplies precisely one factory T(m) ∈ F. We wish
to find the optimal transport plan, the plan T whose total cost
c(T) := ∑_{m ∈ M} c(m, T(m))
is the least of all possible transport plans from M to F. This motivating special case of the
transportation problem is in fact an instance of the assignment problem.
Moving books: the importance of the cost function
The following simple example illustrates the importance of the cost function in determining
the optimal transport plan. Suppose that we have n books of equal width on a shelf (the real
line), arranged in a single contiguous block. We wish to rearrange them into another
contiguous block, but shifted one book-width to the right. Two obvious candidates for the
optimal transport plan present themselves:
1. move all n books one book-width to the right; ("many small moves")
2. move the left-most book n book-widths to the right and leave all other books fixed.
("one big move")
If the cost function is proportional to Euclidean distance (c(x, y) = α |x − y|), then these two
candidates are both optimal. If, on the other hand, we choose the strictly convex cost function
proportional to the square of Euclidean distance (c(x, y) = α |x − y|^2), then the
"many small moves" option becomes the unique minimizer.
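A quick numerical check of this comparison, assuming a book width of 1 and books initially at positions 1, …, n (both assumptions of this sketch):

```python
# Book-shifting example: compare the two transport plans under c = |x - y|
# and c = |x - y|**2.  Book width 1 and positions 1..n are assumptions.
n = 5
source = list(range(1, n + 1))          # books at 1, 2, ..., n

# Plan 1 ("many small moves"): every book moves one width to the right.
plan1 = [(x, x + 1) for x in source]
# Plan 2 ("one big move"): the left-most book jumps n widths, the rest stay put.
plan2 = [(1, 1 + n)] + [(x, x) for x in source[1:]]

for name, plan in [("many small moves", plan1), ("one big move", plan2)]:
    linear = sum(abs(x - y) for x, y in plan)
    quadratic = sum((x - y) ** 2 for x, y in plan)
    print(f"{name}: linear cost = {linear}, quadratic cost = {quadratic}")
# Linear costs tie at n; the quadratic cost singles out "many small moves" (n vs n**2).
```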
Interestingly, while mathematicians prefer to work with convex cost functions, economists
prefer concave ones. The intuitive justification for this is that once goods have been loaded
on to, say, a goods train, transporting the goods 200 kilometres costs much less than twice
what it would cost to transport them 100 kilometres. Concave cost functions represent
this economy of scale.
Abstract formulation of the problem
Monge and Kantorovich formulations
The transportation problem as it is stated in modern or more technical literature looks
somewhat different because of the development of Riemannian geometry and measure
theory. The mines-factories example, simple as it is, is a useful reference point when thinking
of the abstract case. In this setting, we allow the possibility that we may not wish to keep all
mines and factories open for business, and allow mines to supply more than one factory, and
factories to accept iron from more than one mine.
Let X and Y be two separable metric spaces such that any probability measure on X (or Y) is
a Radon measure (i.e. they are Radon spaces). Let c : X × Y → [0, ∞] be a
Borel-measurable function. Given probability measures μ on X and ν on Y, Monge's
formulation of the optimal transportation problem is to find a transport map T : X → Y
that realizes the infimum
inf { ∫_X c(x, T(x)) dμ(x) : T_*(μ) = ν },
where T_*(μ) denotes the push forward of μ by T. A map T that attains this infimum (i.e.
makes it a minimum instead of an infimum) is called an "optimal transport map".
Monge's formulation of the optimal transportation problem can be ill-posed, because
sometimes there is no T satisfying T_*(μ) = ν: this happens, for example, when μ is
a Dirac measure but ν is not.
We can improve on this by adopting Kantorovich's formulation of the optimal
transportation problem, which is to find a probability measure γ on X × Y that attains
the infimum
inf { ∫_{X × Y} c(x, y) dγ(x, y) : γ ∈ Γ(μ, ν) },
where Γ(μ, ν) denotes the collection of all probability measures on X × Y
with marginals μ on X and ν on Y. It can be shown[5] that a minimizer for this
problem always exists when the cost function c is lower semi-continuous
and Γ(μ, ν) is a tight collection of measures (which is guaranteed for Radon
spaces X and Y). (Compare this formulation with the definition of the Wasserstein
metric W1 on the space of probability measures.)
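For discrete measures the Kantorovich problem is simply a transportation linear program in the coupling matrix γ. The sketch below solves a small instance with scipy.optimize.linprog; the marginals and the cost matrix are hypothetical.

```python
# Discrete Kantorovich problem: minimise <C, G> over couplings G with given
# marginals mu and nu.  The marginals and cost matrix are hypothetical examples.
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.5, 0.5])              # source marginal on 2 points
nu = np.array([0.25, 0.25, 0.5])       # target marginal on 3 points
C = np.array([[1.0, 2.0, 3.0],         # C[i, j] = cost of moving mass from i to j
              [4.0, 1.0, 2.0]])

m, n = C.shape
# Row-sum constraints: sum_j G[i, j] = mu[i]; column-sum constraints: sum_i G[i, j] = nu[j].
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):
    A_eq[m + j, j::n] = 1.0
b_eq = np.concatenate([mu, nu])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (m * n))
print("optimal coupling:\n", res.x.reshape(m, n))
print("optimal transport cost:", res.fun)
```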
Duality formula
The minimum of the Kantorovich problem is equal to
sup ( ∫_X φ(x) dμ(x) + ∫_Y ψ(y) dν(y) ),
where the supremum runs over all pairs of bounded and continuous
functions φ : X → R and ψ : Y → R such that φ(x) + ψ(y) ≤ c(x, y).
Solution of the problem
Optimal transportation on the real line
For p ≥ 1, let P_p(R) denote the collection of probability measures on R that have finite
pth moment. Let μ, ν ∈ P_p(R) and let c(x, y) = h(x − y), where h : R → [0, ∞) is a convex function.
1. If μ has no atom, i.e., if the cumulative distribution function F_μ : R → [0, 1] of μ is a
continuous function, then T := F_ν^{-1} ∘ F_μ : R → R is an optimal transport map. It is the
unique optimal transport map if h is strictly convex.
2. We have
min_{γ ∈ Γ(μ, ν)} ∫_{R × R} c(x, y) dγ(x, y) = ∫_0^1 c( F_μ^{-1}(s), F_ν^{-1}(s) ) ds.
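For empirical measures with equally weighted samples, the monotone map above amounts to sorting both samples and pairing them in order. A sketch, with hypothetical sample values and a hypothetical choice p = 2:

```python
# 1-D optimal transport between two equally weighted empirical measures:
# sort both samples and pair them in order (the monotone map F_nu^{-1} o F_mu).
# The sample values and the choice p = 2 are hypothetical.
import numpy as np

x = np.array([0.1, 0.9, 0.4, 0.7])     # samples of mu
y = np.array([1.0, 1.3, 1.8, 1.1])     # samples of nu
p = 2

xs, ys = np.sort(x), np.sort(y)        # quantile functions on a uniform grid
cost = np.mean(np.abs(xs - ys) ** p)   # approximates the minimal cost W_p^p
print("pairing:", list(zip(xs, ys)))
print("optimal cost (W_p^p):", cost)
```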
Separable Hilbert spaces
Let X be a separable Hilbert space. Let P_p(X) denote the collection of probability measures
on X that have finite pth moment; let P_p^r(X) denote those elements μ ∈ P_p(X) that are
Gaussian regular: if g is any strictly positive Gaussian measure on X and g(N) = 0, then
μ(N) = 0 also.
Let μ ∈ P_p^r(X), ν ∈ P_p(X), and c(x, y) = |x − y|^p / p for p ∈ (1, ∞), p^{-1} + q^{-1} = 1. Then the
Kantorovich problem has a unique solution κ, and this solution is induced by an optimal
transport map: i.e., there exists a Borel map r ∈ L^p(X, μ; X) such that κ = (id_X × r)_*(μ).
4. Describe the matrix form of transportation problem. Illustrate with 2
origins and 3 destinations.
A scooter production company produces scooters at units
situated at various places (called origins) and supplies them to
the places where the depots (called destinations) are situated.
Here the availability as well as the requirements of the various
depots are finite and constitute the limited resources.
This type of problem is known as distribution or transportation
problem in which the key idea is to minimize the cost or the
time of transportation.
In previous lessons we have considered a number of specific
linear programming problems. Transportation problems are also
linear programming problems and can be solved by the simplex
method, but because of their practical significance transportation
problems are of special interest, and it is tedious to solve them
through the simplex method. Is there any alternative method to solve
such problems?
63.2 OBJECTIVES
After completion of this lesson you will be able to:
- solve the transportation problems by
  (i) North-West corner rule;
  (ii) Lowest cost entry method; and
  (iii) Vogel's approximation method.
- test the optimality of the solution.
63.3 MATHEMATICAL FORMULATION OF TRANSPORTATION PROBLEM
Let there be three units producing scooters, say, A1, A2 and A3, from where the scooters
are to be supplied to four depots, say B1, B2, B3 and B4.
Let the number of scooters produced at A1, A2 and A3 be a1, a2 and a3 respectively, and
the demands at the depots be b1, b2, b3 and b4 respectively.
We assume the condition
a1 + a2 + a3 = b1 + b2 + b3 + b4,
i.e., all scooters produced are supplied to the different depots.
Let the cost of transportation of one scooter from A1 to B1 be c11. Similarly, the costs of
transportation in the other cases are shown in the figure and in Table 63.1. The following
terms are to be defined with reference to transportation problems:
(A) Feasible Solution (F.S.)
A set of non-negative allocations x_ij ≥ 0 which satisfies the row and column restrictions
is known as a feasible solution.
(B) Basic Feasible Solution (B.F.S.)
A feasible solution to an m-origin and n-destination problem is said to be a basic feasible
solution if the number of positive allocations is (m + n − 1).
If the number of allocations in a basic feasible solution is less than (m + n − 1), it is called
a degenerate basic feasible solution (DBFS) (otherwise non-degenerate).
(C) Optimal Solution
A feasible solution (not necessarily basic) is said to be optimal if it minimizes the total
transportation cost.
63.4 SOLUTION OF THE TRANSPORTATION PROBLEM
Let us consider the numerical version of the problem stated
in the introduction and its matrix (tableau) form.
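To illustrate the matrix (tableau) form with 2 origins and 3 destinations, the sketch below writes supplies a1, a2, demands b1, b2, b3 and unit costs c_ij as arrays (all numbers hypothetical, chosen so that total supply equals total demand) and computes an initial basic feasible solution by the North-West corner rule, the first of the methods listed in the objectives above.

```python
# Matrix form of a balanced transportation problem with 2 origins (A1, A2) and
# 3 destinations (B1, B2, B3), plus the North-West corner rule for an initial
# basic feasible solution.  Supplies, demands and costs are hypothetical.
import numpy as np

supply = [30, 45]                       # a1, a2
demand = [20, 25, 30]                   # b1, b2, b3  (balanced: 75 = 75)
cost = np.array([[4, 6, 8],             # c_ij = cost per scooter from A_i to B_j
                 [5, 3, 7]])

def north_west_corner(supply, demand):
    """Fill the tableau from the top-left cell, exhausting rows/columns in turn."""
    s, d = supply[:], demand[:]
    alloc = np.zeros((len(s), len(d)))
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])             # ship as much as possible into cell (i, j)
        alloc[i, j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:
            i += 1                      # origin i exhausted: move down
        else:
            j += 1                      # destination j satisfied: move right
    return alloc

x = north_west_corner(supply, demand)
print("initial allocation:\n", x)       # m + n - 1 = 4 basic cells here
print("initial cost:", (cost * x).sum())
```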
5. What is an optimal shipping schedule?
Optimal control deals with the problem of finding a control law for a given system such
that a certain optimality criterion is achieved. A control problem includes a cost
functional that is a function of state and control variables. An optimal control is a set of
differential equations describing the paths of the control variables that minimize the cost
functional. The optimal control can be derived using Pontryagin's maximum
principle (a necessary condition, also known as Pontryagin's minimum principle or simply
Pontryagin's Principle[2]), or by solving the Hamilton-Jacobi-Bellman
equation (a sufficient condition).
We begin with a simple example. Consider a car traveling on a straight line through a
hilly road. The question is, how should the driver press the accelerator pedal in order
to minimize the total traveling time? Clearly in this example, the term control law refers
specifically to the way in which the driver presses the accelerator and shifts the gears.
The "system" consists of both the car and the road, and the optimality criterion is the
minimization of the total traveling time. Control problems usually include
ancillary constraints. For example the amount of available fuel might be limited, the
accelerator pedal cannot be pushed through the floor of the car, speed limits, etc.
A proper cost functional is a mathematical expression giving the traveling time as a
function of the speed, geometrical considerations, and initial conditions of the system. It
is often the case that the constraints are interchangeable with the cost functional.
Another optimal control problem is to find the way to drive the car so as to minimize its
fuel consumption, given that it must complete a given course in a time not exceeding
some amount. Yet another control problem is to minimize the total monetary cost of
completing the trip, given assumed monetary prices for time and fuel.
A more abstract framework goes as follows. Minimize the continuous-time cost
functional
J = Φ[ x(t0), t0, x(tf), tf ] + ∫_{t0}^{tf} L[ x(t), u(t), t ] dt
subject to the first-order dynamic constraints
dx/dt = a[ x(t), u(t), t ],
the algebraic path constraints
b[ x(t), u(t), t ] ≤ 0,
and the boundary conditions
φ[ x(t0), t0, x(tf), tf ] = 0,
where x(t) is the state, u(t) is the control, t is the independent variable (generally
speaking, time), t0 is the initial time, and tf is the terminal time. The terms Φ and L are
called the endpoint cost and Lagrangian, respectively. Furthermore, it is noted that the
path constraints are in general inequality constraints and thus may not be active (i.e.,
equal to zero) at the optimal solution. It is also noted that the optimal control problem as
stated above may have multiple solutions (i.e., the solution may not be unique). Thus, it is
most often the case that any solution [x*(t), u*(t)] to the optimal control
problem is locally minimizing.
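As a purely illustrative numerical sketch of this framework, the code below discretises a toy instance with assumed dynamics dx/dt = u, Lagrangian L = u^2, x(t0) = 0 and x(tf) = 1 (all data and the grid size are assumptions of the sketch, not from the text) and minimises the discretised cost with scipy.optimize.minimize.

```python
# Crude direct-transcription sketch of the abstract problem above, for a
# hypothetical toy instance: dx/dt = u, L = u^2, x(t0) = 0, x(tf) = 1.
import numpy as np
from scipy.optimize import minimize

N = 50                                  # number of control intervals (assumption)
t0, tf = 0.0, 1.0
dt = (tf - t0) / N

def cost(u):
    # Discretized cost functional: integral of u^2 dt (no endpoint cost here)
    return np.sum(u ** 2) * dt

def terminal_state(u):
    # Forward-Euler integration of dx/dt = u starting from x(t0) = 0
    return np.sum(u) * dt

# Boundary condition x(tf) = 1 imposed as an equality constraint
constraint = {"type": "eq", "fun": lambda u: terminal_state(u) - 1.0}

res = minimize(cost, x0=np.zeros(N), constraints=[constraint])
print("first few control values:", np.round(res.x[:5], 3))   # approximately constant u = 1
print("discretized cost:", round(res.fun, 4))                # analytic optimum is 1.0
```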
Linear quadratic control
A special case of the general nonlinear optimal control problem given in the previous
section is the linear quadratic (LQ) optimal control problem. The LQ problem is stated
as follows. Minimize the quadratic continuous-time cost functional
J = (1/2) x^T(tf) S_f x(tf) + (1/2) ∫_{t0}^{tf} [ x^T(t) Q(t) x(t) + u^T(t) R(t) u(t) ] dt
subject to the linear first-order dynamic constraints
dx/dt = A(t) x(t) + B(t) u(t)
and the initial condition
x(t0) = x0.
A particular form of the LQ problem that arises in many control system problems is that
of the linear quadratic regulator (LQR), where all of the matrices (i.e., A, B, Q and R)
are constant, the initial time is arbitrarily set to zero, and the terminal time is
taken in the limit tf → ∞ (this last assumption is what is known as infinite horizon).
The LQR problem is stated as follows. Minimize the infinite-horizon quadratic
continuous-time cost functional
J = (1/2) ∫_0^∞ [ x^T(t) Q x(t) + u^T(t) R u(t) ] dt
subject to the linear time-invariant first-order dynamic constraints
dx/dt = A x(t) + B u(t)
and the initial condition
x(t0) = x0.
In the finite-horizon case the matrices are restricted in that Q and R are positive semi-definite
and positive definite, respectively. In the infinite-horizon case, however,
the matrices Q and R are not only positive semi-definite and positive definite,
respectively, but are also constant. These additional restrictions on Q and R in the
infinite-horizon case are enforced to ensure that the cost functional remains positive.
Furthermore, in order to ensure that the cost functional is bounded, the additional
restriction is imposed that the pair (A, B) is controllable. Note that the LQ or LQR cost
functional can be thought of physically as attempting to minimize the control
energy (measured as a quadratic form).
The infinite horizon problem (i.e., LQR) may seem overly restrictive and essentially
useless because it assumes that the operator is driving the system to zero-state and hence
driving the output of the system to zero. This is indeed correct. However the problem of
driving the output to a desired nonzero level can be solved after the zero output one is. In
fact, it can be proved that this secondary LQR problem can be solved in a very
straightforward manner. It has been shown in classical optimal control theory that the LQ
(or LQR) optimal control has the feedback form
u(t) = −K(t) x(t),
where K(t) is a properly dimensioned matrix, given as
K(t) = R^{-1} B^T S(t),
and S(t) is the solution of the differential Riccati equation. The differential Riccati
equation is given as
dS/dt = −S(t) A − A^T S(t) + S(t) B R^{-1} B^T S(t) − Q.
For the finite-horizon LQ problem, the Riccati equation is integrated backward in time
using the terminal boundary condition
S(tf) = S_f.
For the infinite-horizon LQR problem, the differential Riccati equation is replaced with
the algebraic Riccati equation (ARE), given as
0 = −S A − A^T S + S B R^{-1} B^T S − Q.
Understanding that the ARE arises from the infinite-horizon problem, the matrices
A, B, Q and R are all constant. It is noted that there are in general multiple solutions to the
algebraic Riccati equation, and the positive definite (or positive semi-definite) solution is
the one that is used to compute the feedback gain. The LQ (LQR) problem was elegantly
solved by Rudolf Kalman.[3]
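A minimal numerical sketch of the infinite-horizon case: scipy.linalg.solve_continuous_are returns the stabilising ARE solution, from which the gain K = R^{-1} B^T S is formed. The system matrices below are a hypothetical double-integrator example.

```python
# Infinite-horizon LQR sketch: solve the algebraic Riccati equation and form
# the feedback gain K = R^{-1} B^T S.  The A, B, Q, R below are a hypothetical
# double-integrator example.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])             # double integrator: position, velocity
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                          # state weight (positive semi-definite)
R = np.array([[1.0]])                  # control weight (positive definite)

S = solve_continuous_are(A, B, Q, R)   # stabilising ARE solution
K = np.linalg.solve(R, B.T @ S)        # feedback gain, u = -K x
print("S =\n", S)
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))  # negative real parts
```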
Numerical methods for optimal control
Optimal control problems are generally nonlinear and therefore, generally do not have
analytic solutions (e.g., like the linear-quadratic optimal control problem). As a result, it
is necessary to employ numerical methods to solve optimal control problems. In the early
years of optimal control (circa 1950s to 1980s) the favored approach for solving optimal
control problems was that of indirect methods. In an indirect method, the calculus of
variations is employed to obtain the first-order optimality conditions. These conditions
result in a two-point (or, in the case of a complex problem, a multi-point) boundary-value
problem. This boundary-value problem actually has a special structure because it arises
from taking the derivative of a Hamiltonian. Thus, the resulting dynamical system is
a Hamiltonian system of the form
dx/dt = ∂H/∂λ,   dλ/dt = −∂H/∂x,
where λ is the costate (adjoint) variable and H is the augmented Hamiltonian of the problem.
6. Define a two person zero sum game and give an example of the same.
7. Explain the graphical method of solving (2 × n) and (m × 2) games.