
A MULTI-OBJECTIVE, STOCHASTIC PROGRAMMING
MODEL IN WATERSHED MANAGEMENT
by
Ambrose Goicoechea
A Dissertation Submitted to the Faculty of the
DEPARTMENT OF SYSTEMS AND INDUSTRIAL ENGINEERING
In Partial Fulfillment of the Requirements
For the Degree of
DOCTOR OF PHILOSOPHY
WITH A MAJOR IN SYSTEMS ENGINEERING
In the Graduate College
THE UNIVERSITY OF ARIZONA
1977
THE UNIVERSITY OF ARIZONA
GRADUATE COLLEGE
I hereby recommend that this dissertation prepared under my
direction by Ambrose Goicoechea
entitled A MULTI-OBJECTIVE, STOCHASTIC PROGRAMMING MODEL IN
WATERSHED MANAGEMENT
be accepted as fulfilling the dissertation requirement for the
degree of
Doctor of Philosophy
Dissertation Director                                        Date
As members of the Final Examination Committee, we certify
that we have read this dissertation and agree that it may be
presented for final defense.
Final approval and acceptance of this dissertation is contingent
on the candidate's adequate performance and defense thereof at the
final oral examination.
STATEMENT BY AUTHOR
This dissertation has been submitted in partial fulfillment of
requirements for an advanced degree at The University of Arizona and is
deposited in the University Library to be made available to borrowers
under rules of the Library.
Brief quotations from this dissertation are allowable without
special permission, provided that accurate acknowledgment of source is
made. Requests for permission for extended quotation from or reproduction
of this manuscript in whole or in part may be granted by the head
of the major department or the Dean of the Graduate College when in his
judgment the proposed use of the material is in the interests of
scholarship. In all other instances, however, permission must be
obtained from the author.
SIGNED:
ACKNOWLEDGMENTS
This report constitutes the doctoral dissertation of the same
title completed by the author in May, 1977 and accepted by the Faculty
of the Department of Systems and Industrial Engineering.
The investigation described in this report was conducted under
the direction of Lucien Duckstein, Professor of Systems and Industrial
Engineering.
The research effort was supported in part by funds provided by
the Office of Water Resources Research and Technology under Projects
B-043 AZ, "Practical Use of Decision Theory to Assess Uncertainties
about Actions Affecting the Environment" and B-055 AZ, "Hydrologic
Considerations and Decision Analysis for Reclaiming Strip-mined Lands
in the Southwest."
This report series constitutes an effort to communicate to
practitioners and researchers the complete research results, including
computer programs and more detailed theoretical developments, that cannot be reproduced in professional journals. These reports are not intended to serve as a substitute for the review and referee process
exerted by the scientific and professional community in their journals.
I wish to express my appreciation to Professors Lucien Duckstein, A. Wayne Wymore, Duane L. Dietrich, and Robert L. Bulfin of the
Department of Systems and Industrial Engineering, The University of Arizona, for their continued encouragement and/or relentless harassment
during the preparation of this manuscript. In particular, I would like
to thank Professor Duckstein, my dissertation advisor, who suggested the
topic of this research, spent countless hours guiding my effort, and who
never tired of asking for more work. My special thanks to Professor
Wymore whose professional competence was inspiring throughout my program
of studies, and whose personal generosity was without limit or preset
conditions. Gratitude is also due to Professor Ferenc Szidarovszky who
reviewed the manuscript in the later stages and contributed his expertise to the final product.
The author is also indebted to Professors Martin M. Fogel, John
L. Thames, and Tika R. Verma of the Watershed Management Department,
who provided the motivation for this work and the scenario in which it
is applied and tested.
Thanks also to Professor Donald G. Schultz, Department Chairman,
who looked after my financial well-being and allowed me the opportunity
to teach courses in the department. Finally, I would like to thank
Mrs. Paula Tripp for her meticulous typing of this dissertation.
To my wife, Nancy, and sons, Miguel and Carlos, without whose
support this dissertation could not have been completed. Some thoughts,
deeply felt, might best be expressed in two of the languages of my
native Spain, Spanish and Basque. My thanks to Jose Mari Ormazabal who
collaborated with me in the Basque portion.
NANCY
Multitud de caras,
selva de ladrillo y cemento,
verdes corredores de pasto y arbol,
mediodia de un viernes,
preludio al encuentro.
Ojos que buscan,
palabras que invitan,
corazon que consiente.
Libros,
examenes,
juegos de futbol,
y balles de victoria.
Muchedumbre jovial,
sonrisas contagiosas,
paso apresurado,
libros en el brazo,
suerios en la mente,
dia y tarde,
tarde y noche.
Cada viernes anhelo,
cada sabado frenesf,
cada domingo dolor.
La semana entera a esperar
una vez mas.
Mentes que repasan en detalle
las promesas de ayer,
cuerpos que anticipan
las delicias de ma5ana.
Valles y montarias
de aventura
con un sol de primavera
yo te prometi,
y tu me dijiste que si;
los rios de agua clara
y castillos de mi Espdria
yo te promet(,
y tu me dijiste que si;
parras de uva dulce,
de vino rojo y embriagador
yo te prometi,
y tu me dijiste que si;
ese cielo azul
y GRAN CAgON DE ARIZONA
yo te prometi,
y tu me dijiste que si.
Te he dado
y me has dado,
en cantidad y variedad
hasta saciar los sentidos.
Cada poro y grieta
de tu cuerpo
sabe de mi,
de ansiedad,
sudor,
y delirio.
No podrla ser menos.
Pienso de nuestra substancia,
transitoria como lo es.
Hacia atra's
una eternidad sin ser,
ahora, pero por un momento
nada ILA,
hacia delante,
una eternidad sin ser,
y nunca
Quiero que este momento
sea una eternidad.
Una eternidad
para pensar,
sentir,
vivir,
dejar vivir,
estudiar y rumiar
esta experiencia.
Quiero saber
por mi mismo,
acceptar nada sin prueba,
evitar respuestas baratas
y fraudulentas.
Quiero saber
y compartir contigo
esta experiencia,
este momento de luz y vida
Ostiral eguardi
Millaka aurpegi
arri ta bustin baso
zugaitz ta belar asko
alkarren aur-esku.
Begiak gose-egarri
itz-xamur ugari
biotz erantzunki.
Liburu,maixu,ostikalari
irrintzizko dantz ugari.
Gaztedi alai,
Irripar ezti,
Anka arm,
Liburuak eskuetan
burua ametsetan,
Goiz eta arratsalde,
Eta gau illuna,alkate.
Ostiraletan gose-egarri
Larunbatetan ezin etsi,
Igandeetan biotz-erre.
Aste bakoitza itxaropen berri.
Biotz-buruak u itz-emanaren" azterketan,
Gorputza,berriz,biarko gozamenetan.
Udaberriko eguzki xamurrez
Esnatutako mendi-aran guziak
Agindu nizkitzun;
Eta,Zuk,baiezkoa eman zidazun.
Ur garbizko ibai guziak
eta gaztelu ederrenak
agindu nizkitzun;
Eta,Zuk : BAI
Erantzun zidazun.
Ardo gorrizoko ale goxoak
Moskorgarrizko maats-ondoak
Agindu nizkitzun
Eta,Zuk,BAI
Eraatzun zidazun.
ARIZONA'ko zeru-urdinak
Eta mendi-sakon guziak
Agindu nizkitzun,
Eta,dudarik gabe,Zuk
BAI,erantzun zidazun
Neurri gabeko ugaritasunean
Eman eta artu,artu ta eman
Amets guziak osatzeraino
ALKARTASUNEAN.
Zure gorputzaren atal bakoitzak
Nere izardiaren,
gose ta egarriaren,
Itz batean:IZATEAREN,dauzka berriak.
EZ DA GUTXIAGOETARAKO,izan ere.
Gure ilkor izakeran dut pentsatzen,
Eta,ez da errex,asmatzen.
Atzera:ixtant bateko betikotasuna
Aurrera,berriz: betiko osasuna
Gaur eta orain,eta biar
Gure izatearen zear.
Gu,GARU geradena
betikotasunez jantzia
Da,amesten detana.
Pentsatu ta sentitzeko
ulertu ta aztartzeko
bizi eta biziarazteko.
Jakin,ere,nai dut ,neronez
Arrazoi gabekeririk,ez
Gezurkeririk,beinere,ez
Egin ere,egizko jakinez.
MIGUEL
Claro y fresco
es el amanecer,
calido y polvoroso
el atardecer
Noches secas,
techo estrellado,
murmullos del desierto.
TUCSON es la ciudad.
La ciudad se ensancha
y el desierto cede.
Dias de actividad,
de accidn,
friccidn,
desarrollo,
y despojo.
La noche viene,
la ambicidn adormece.
El dia despierta
y parte de su rostro
el desierto cede.
Inevitable es el cambio
en nuestras cualidades,
colinas,
rios,
y ciudades.
Se reduce el espacio
para nacer,
crecer,
perecer.
Recuerdo bien, Miguel,
el verte crecer.
Placer, anhelo y esperanza.
Aquel primer paso,
innocente sonrisa,
dolor de dientes,
juego sin prisa.
Los ojos de tu padre
la gente dice,
la sonrisa.de tu madre
la gente insiste.
Cuantas veces he pensado
con ansiedad,
inseguridad,
y encanto,
de las mil preguntas
que un dia tu tendra's,
de las respuestas
que podre'darte,
de las muchas mas
que he de negarte
en ignorancia frustrante.
Cinco alios tienes,
mil pecas en la cara.
Pelo rizado y complicado,
como tus preguntas.
Es el mundo redondo, papa(
iY porque?
redondo y no nos caemos,
no te entiendo, papa%
los dinosauros,
todos han desaparecido?
porque?
Dinosauros, monstruos,
y tiburones.
Cordones de zapatos, ranas,
y libro de colores.
Preguntas y respuestas,
curiosidad ins aciable.
Nitos y leyendas,
afliccidn indomable.
Saber por saber,
dudar por no mentir.
Una vida a vivir
persiguiendo el intelecto,
manteniendo el cuerpo.
Ldgica que se aplica
cual espada de mil fibs,
para seguir nuevos caninos,
para matar una mentira.
Papa', quiero ser astronauta.
No, mejor un pez,
o un ledn,
que puedo ser?
Todo y nada.
Nada si no decides,
todo si decides y persigues.
Persigue con el intelecto
para que tu esfuerzo brille,
que quemen las entrailas
para dare forma,
aplica el corazdn
para dare vida,
entregale el alma
para que tenga tu sabor.
Infierno y cielo,
mitos y leyendas,
sombras, penumbras
de un lejano pasado.
Desde el verde mar,
a traves de los tiempos,
un milldn de formas,
aurora del hombre
en su capsula terrenal
a traves del espacio.
Rios de vida,
un mar de hamanidad.
Despacio, papa',
despacio.
Infierno y cielo,
aurora del hombre,
hablas de dios, papa?
Nada se de ello, Miguel,
•
Idios
creo'al hombre,
o el.hombre creo'a dios?
Nada se.
Se que muchos van a extremos
para afirmar, o negar,
para dar refugio,
alimento y esperanza.
Que hay otros
que esgrimen su dios
cual hoz fulminante,
cortando la vida,
hiriendo con sazd'il,
vert iendo sangre,
sofocando la razoil.
Que veo esto
y ayudarte quisiera,
librarte de la crueldad,
intransigencia,
envidia,
y maldad.
Que veo esto
y sugerir atentase
ver la parte buena
en cada persona.
Ojos claros,
dientes de marfil,
acaricio con esmero
tus mejillas de terciopelo.
Esos hombros anchos, Miguel,
hablan de tu gente,
de azaddn y martillo,
de sudor de mente.
Esa frente amplia, Miguel,
habla de tu gente,
de estudio y libro,
de fervor diligente.
CARLOS
Ansiedad que se desborda
y corre,
frenes1 incontenible en tarde
y noche.
Cadencias de mil colores
en un abrazo de pasidn,
manjar de mil sabores.
Sienes al rojo vivo,
corazdn que golpea el pecho
y castiga el respiro.
Brota el volcA
en torrentes de lujuria,
abrasando las entragas,
buscando el mar,
y en su agua fria
la vida encontrar.
Ese mar te vid, Carlos,
extender la mano,
dar tus primeros pasos.
Cutis sonrosado
y febril,
mirada intensa
y gentil.
Miel es el color
de tu cabello,
roble es la fibra
de tu empego.
Asomado a las ventanas
de tus ojos
busco el porqué de tu ser,
la razdn de tus modos.
TABLE OF CONTENTS
LIST OF ILLUSTRATIONS
LIST OF TABLES
ABSTRACT

CHAPTER

1  INTRODUCTION
      Purpose of the Study
      Organization of the Study

2  CONCEPTS IN MULTI-OBJECTIVE DECISION MAKING
      Problem Formulation
      Set of Nondominated Solutions
      The Role of the Decision Maker
      Conflict
      Goals
      Stochastic Elements
      Summary

3  STATEMENT OF THE PROBLEM

4  LITERATURE REVIEW
      Multi-objective Programming Methods
         Weighting Method
         e-constraint Method
         Adaptive Search
         Multi-criteria Simplex Methods
         Goal Programming
         Utility Function Assessment
         Electre Method
         Surrogate Worth Tradeoff Method
         Step Method
         Sequential Multi-objective Problem Solving (SEMOPS)
         Tradeoff Development Method (TRADE)
      Probabilistic Programming Methods
         Chance-constrained Programming (CCP)
         Two-stage Programming (TSP)
         Stochastic Linear Programming (SLP)
         Transition Probability Programming (TPP)

5  DEVELOPMENT OF THE PROTRADE ALGORITHM
      Development
      Numerical Example
         Step 1 -- Problem Definition
         Step 2 -- Range of Objective Functions
         Step 3 -- Initial Surrogate Objective Function F(X)
         Step 4 -- Initial Solution
         Step 5 -- Generate a Multidimensional Utility Function u(G)
         Step 6 -- Define a New Surrogate Objective Function
         Step 7 -- Generation of Alternative Solution
         Step 8 -- Generate Vector V1
         Step 9 -- Assume u2 is Not a Satisfactum
         Step 10 -- Select a Pair (Gk(x2), 1 - ak)
         Step 11 -- Define Constraint Space D2
         Step 12 -- Generate the New SOF S2(X)

6  DETERMINISTIC EQUIVALENTS IN STOCHASTIC PROGRAMMING
      General Method
         Definition 1
         Definition 2
      A Deterministic Transformation
      An Illustrative Example
      Discussion

7  A MULTI-OBJECTIVE PROBLEM IN WATERSHED MANAGEMENT
      The Black Mesa Region in Northern Arizona
      Formulation of Objective Functions
      List of Assumptions
      Set of Constraints
         Livestock Production
         Water Runoff Augmentation
         Farming of Selected Crops
         Control of Sedimentation Rates
         Fish Pond Harvesting
      Implementation of the PROTRADE Algorithm
         Steps of the Method
         Present Value of the Objective Functions
      Discussion

8  CONCLUSIONS, DISCUSSION, AND EXTENSIONS
      Topics for Future Research

APPENDIX A: A CUTTING-PLANE TECHNIQUE
APPENDIX B: MULTI-OBJECTIVE COMPUTER PROGRAM (PROGRAM SEARCH)
APPENDIX C: COMPUTER RESULTS
APPENDIX D: ALTERNATIVE DEVELOPMENT
LIST OF REFERENCES
LIST OF ILLUSTRATIONS
Figure

 1.  Feasible region and set of nondominated solutions
 2.  Feasible region and nondominated set in objective space
 3.  Constraint set D
 4.  Initial solution vector X1
 5.  Space D2 and solution vector X3
 6.  The set A in the c1-c2 plane
 7.  The set B in the y1-y2 plane
 8.  The set A in the c1-c2 plane, beta distribution
 9.  The set B in the y1-y2 plane, beta distribution
10.  Cumulative distribution of the objective function z
11.  The Black Mesa Region in northern Arizona
12.  Land allocation alternatives
13.  Stochastic livestock production model
14.  Land treated for runoff vs. water available for crops (mean values)
15.  Single-attribute utility functions
LIST OF TABLES
Table

1.  Payoff table for the step method
2.  Computer program results
3.  Livestock production parameters
4.  Soil treatments for water runoff
5.  Water runoff parameters
6.  Crop model parameters
7.  Sediment parameters
8.  Fish harvesting parameters
ABSTRACT
This research develops an interactive algorithm for solving a
class of multi-objective decision problems. These problems are characterized by a set of objective functions to be satisfied subject to a
set of nonlinear constraints with continuous policy variables and
stochastic parameters.
The existence of a decision situation is postulated in which
there are N resources to be allocated so that P satisfactory objective
levels may be attained. A probabilistic tradeoff development algorithm,
labeled PROTRADE, is developed to provide a framework in which the
decision maker can articulate his preferences, generate alternative
solutions, develop tradeoffs among these, and eventually arrive at a
satisfactory solution provided it exists. As the decision maker arrives at a vector-valued solution, with a value for each objective function, he also generates the probabilities of achieving such values.
Then, as his preferences are articulated, he is able to trade-off objective function values against one another, and directly against their
probabilities of achievement. A central assumption of this research
is that there is not an "optimal" solution to the problem, but only
"satisfactory" solutions. The reason for this is that the decision maker
is allowed to have a dynamic preference structure that changes as the
various tradeoffs are generated and new information is made available to
him.
The algorithm is developed in the context of parameters normally
distributed. Several theorems are presented which extend the applicability of the algorithm to nonnormal random variables, specifically exponential, uniform, and beta random variables.
A case study of the Black Mesa region in northern Arizona is
provided to demonstrate the feasibility of the algorithm. This region
is being strip-mined for coal and the managing agency must decide on the
extent of several management practices. The practices or objective
functions considered in the study are: (1) livestock production, (2)
augmentation of water runoff, (3) farming of selected crops, (4) control
of sedimentation rates, and (5) fish pond-harvesting.
Finally, conclusions are presented and areas for future investigation are suggested.
CHAPTER 1
INTRODUCTION
The purpose of this investigation is to develop a methodology
for solving a class of multi-objective decision problems within the
framework of stochastic, nonlinear mathematical programming. This methodology is then cast into the form of an interactive algorithm, and
demonstrated with a case study in water resources management.
Traditionally water resource development was considered to be a
single objective problem, that of maximizing net income benefits (Maass
et al. 1962; Loucks 1975). But in fact, there are always numerous possible goals or objectives that are relevant -- many of which are often
conflicting -- and the importance of each objective function is rarely
well articulated in advance of the time decisions are made. The explicit tradeoffs between each of these partially complementary objective functions are usually vague and, frequently, the selection of an
alternative solution or plan fails to meet the expectations of the decision maker (DM) to the extent originally envisioned.
Water resources planners today are giving increasing attention
to the multi-objective nature of planning. The formulation of a set of
objectives, the procurement of relevant data often stochastic in nature,
the generation of alternative solutions satisfying a set of physical
constraints and, in some sense, in agreement with the preferences of the
DM will be required. While economic considerations have sufficed in the
past it now appears that environmental, sociological, and political concerns must be incorporated into the planning process. It remains to be
seen whether these requirements and concerns can be combined with appropriate analytical tools into a manageable interdisciplinary enterprise.
Several problems arise in considering the multi-objective nature
of planning: (1) quantification of objectives, (2) incommensurate units
of measurement, (3) the relative importance of the objectives, and (4)
the stochastic nature of parameters. Many important objectives in water
resources development defy quantification -- recreational potential of a
river basin, wildlife management in a watershed, and aesthetic attainment in national parks, to mention a few -- and these, along with quantifiable objectives, must be considered in the analysis (Cohon 1972;
Monarchi, Kisiel and Duckstein 1973; Goicoechea, Duckstein and Fogel
1976a).
The dimensions or units of the objective functions need not be
directly comparable. Accounting of wildlife levels in a watershed, for
instance, should specify variety of animal species, variations in population size, relative dominance of species, and so forth, and little is
gained, if anything, by reducing these natural units to monetary values.
A major shortcoming of most traditional methods of analysis is that
these reduce the units of the various objective functions involved to
monetary values to generate tradeoffs. It is important that these
tradeoffs among noncommensurable objectives be made clear to the decision maker and that meaningful units be retained.
The relative importance of each objective function to the decision maker must be specified, eventually. If the DM is to decide on the
level of achievement of each objective function he must also quantify,
somehow, this relative worth.
The formulation and selection of alternatives are complicated by
uncertainty outside the control of the planner or decision maker. Many
of the parameters in the objective functions and set of constraints may
be random variables, rather than fixed quantities. Again, traditional
methods generally consider the expected values of these random variables
and proceed to generate an "optimal solution." It would be highly desirable to be able to continue operating with these random variables and
present the DM with the distribution of each objective function, so that
he is able to specify not just a level of achievement but a measure of
achievement, such as the probability of achievement.
While many mathematical models are restricted to those aspects
of the evaluation process that are quantifiable, the information derived
from them may significantly assist the DM in the selection of his final
choice. Two distinct philosophies of solution exist. There is the
"direct" approach which attempts to find the optimal solution by defining a scalar objective function through quantification of the decision
maker's preferences or utility function. This method has been advanced
by decision theorists. The other approach that appears in the literature is the "curve-generating" approach. The various approaches which
fall into this category generate the solution set, i.e., the nondominated
set. They do not attempt to find the optimal solution but rather an
element of the nondominated set. Approaches of this type have appeared
in the literature of control theory and public investment.
Purpose of the Study
This research addresses a multiple goal decision problem within
the framework of stochastic, nonlinear mathematical programming. It
focuses upon the development of alternative solutions, the cumulative
distributions of these, the structuring of the preferences of the decision maker into a form amenable to the analytical tool selected, and the
generation of tradeoffs between the level of achievement of each objective function and associated probability of achievement. This is accomplished with the decision maker playing a dynamic role in the search
for a "best possible alternative," or satisfactum.
In the process, this research extended the applicability of
stochastic programming by finding a large class of deterministic equivalents -- previously restricted to functions of normal random variables -- and a deterministic transformation of the original stochastic problem,
which can then be solved to generate the cumulative distribution of the
objective function in question. To complete the introduction a brief
summary of subsequent chapters is presented.
Organization of the Study
Chapter 2 develops some concepts in multi-objective decision
making. The concepts of nondominated solution, goal satisfaction, and
utility function are formally defined for later use and reference. The
role of the decision maker is also examined.
Chapter 3 presents a formal statement of this multiple objective
decision problem.
Chapter 4 contains a literature review of multiple objective
models, followed by a review of linear, stochastic models. The latter
are classified into three broad types: (1) m-stage programming under
uncertainty, (2) Monte-Carlo distribution problems, and (3) chance-constraints and deterministic equivalents.
A probabilistic tradeoff development method, labeled PROTRADE,
is developed in Chapter 5. This method involves the formulation of an
initial surrogate objective function, the estimation of a multi-attribute
utility function to reflect the preference of the decision maker, and
the tradeoff of levels and probabilities of achievement for each objective function. An example problem is presented and solved to illustrate this method.
In Chapter 6 the subject of deterministic equivalents is extended to generate new forms for nonnormal random variables. The
literature on the subject has been restricted to normal random variables, until now. Also, the general stochastic programming problem is
transformed into a deterministic, nonlinear, equivalent problem. The
solution of the equivalent problem represents, then, an analytic,
closed-form solution to the problem whose objective function value, up
until now, had been solved via a Monte-Carlo simulation.
In Chapter 7 a multi-objective problem in watershed management
is presented in detail. This case study addresses the issue of reclamation and management of strip-mined land in northern Arizona. Five objective functions are considered: (1) livestock production, (2) water
runoff, (3) farming of selected crops, (4) control of sedimentation
rates, and (5) fish pond-harvesting. This problem is solved to demonstrate the practicability of the method in a real-world situation.
The final chapter, Chapter 8, contains a discussion of the research limitations and certain conclusions are drawn.
CHAPTER 2
CONCEPTS IN MULTI-OBJECTIVE DECISION MAKING
This chapter reviews some of the concepts associated with the
general multi-objective decision making problem within the framework
of mathematical programming. The terminology and notation presented in
this section will be observed throughout the remainder of this work to
allow a concise discussion of concepts such as optimal solution, noninferior set, utility function, and goal satisfactum, among others.
Problem Formulation
Multi-objective problems arise in the design, modeling, and
planning of many complex water resource systems. Traditional procedures
for building models essentially consist of the following steps (Vemuri
1974): (1) choosing a collection of goals and defining the corresponding
objective functions, (2) gathering relevant information, (3) building a
model, (4) validating and operating the model, (5) determining a feasible
control policy, (6) applying the policy to the system, and (7) reaching
the stated goals. Implementation of these steps is generally a straightforward exercise when there is a single objective function to optimize.
However, a large class of problems necessitates a collection of objective
functions to formulate realistically the situation at hand. In regional
planning of water and related land resources the simultaneous consideration of more than one project is often essential owing to the interactions and coupling that exist among them (Maass et al. 1962; Haimes and
Nainis 1973; Cohon and Marks 1975). The operation of a multi-purpose
reservoir may call for delivering irrigation water and supplying electric
power to the nearby community, while still trying to maintain certain
minimum water levels in the reservoir itself and downstream to accommodate environmental and recreational interests. These goals, in turn,
can be conflicting in nature and in trying to satisfy them simultaneously
it is no longer clear what is meant by an "optimal solution." The inclusion of a vector of objective functions, therefore, introduces new
dimensions in the areas of modeling and mathematical programming and
the notion of an optimal solution is no longer applicable. Instead, the
concept of a set of "nondominated" solutions (efficient set, admissible
set, Pareto-optimal set) is introduced and discussed (e.g., Geoffrion
1968; Cohon 1972; Haimes and Hall 1974; Zeleny 1974).
The single objective constrained optimization problem may be
defined as
     maximize     Z(x)                                               (2.1)

     subject to   g_i(x) ≤ 0,     i = 1, 2, . . ., n                 (2.2)

                  x_j ≥ 0,        j = 1, 2, . . ., m                 (2.3)

where the objective function Z(x) is a scalar-valued function defined on
an m-dimensional Euclidean vector space of decision variables, x ∈ R^m,
e.g., Z ∈ F(R^m, R). The region defined by constraint sets (2.2) and (2.3),

     X = {x : x ∈ R^m, g_i(x) ≤ 0 for all i, x_j ≥ 0 for all j}      (2.4)

will be referred to as the feasible region in decision space.
On the other hand, the multi-objective programming problem is
defined by a vector of objective functions
     Z(x) = (Z_1(x), Z_2(x), . . ., Z_p(x))                          (2.5)

and the constraint sets (2.2) and (2.3) as given above.
Now, since one cannot in general optimize a vector of objective
functions, one proceeds to find those values of the p-dimensional vector
which satisfy the constraints, e.g., the p-dimensional objective function
maps the feasible region in decision space x into the feasible region in
objective space Z(x), defined on the p-dimensional Euclidean vector space.
The word "optimization" has been purposely kept out of the definition of
the multi-objective programming problem, as this one entails a mapping
from one vector space to another. Later on, in the remaining chapters,
the preferences of the decision maker and decision making aspects will
be structured into a criterion to optimize.
The definition and concepts given above are easily grasped with
the aid of a simple example. Consider the following two-objective, two-
decision-variable linear problem:

     max.   Z_1(x) = x_1 - 3 x_2

     max.   Z_2(x) = -4 x_1 + x_2

subject to

     g_1(x) = x_1 - 4 ≤ 0,
     g_2(x) = -x_1 + x_2 - 7/2 ≤ 0,
     g_3(x) = x_1 + x_2 - 11/2 ≤ 0,
     g_4(x) = 2 x_1 + x_2 - 9 ≤ 0,
     g_5(x) = -x_1 ≤ 0,
     g_6(x) = -x_2 ≤ 0.
The feasible region in decision space x is shown in Figure 1.

[Figure 1. Feasible region and set of nondominated solutions. The feasible
region in decision space has extreme points x_1 = (0, 0), x_2 = (4, 0),
x_3 = (4, 1), x_4 = (7/2, 2), x_5 = (1, 9/2), and x_6 = (0, 7/2); the set of
nondominated solutions lies along the boundary segments joining x_2 to x_1
and x_1 to x_6.]

The feasible region in objective space Z(x) was found by the enumeration of all
the extreme points and the computation of the values of each objective
function at each of these corner solutions, and shown in Figure 2. The
concept of a set of nondominated solutions is now introduced.
Set of Nondominated Solutions
Given a set of feasible solutions X, the set of nondominated
solutions is denoted X* and defined (Cohon and Marks 1975) as follows:

     X* = {x : x ∈ X, and there exists no other x' ∈ X such that
              Z_q(x') > Z_q(x) for some q ∈ {1, 2, . . ., p}, and
              Z_k(x') ≥ Z_k(x) for all k ≠ q} .
The main property of the set of nondominated solutions is, therefore,
that as one moves from one nondominated solution to another and one
objective function improves, then one or more of the other objective func-
tions decrease in value. For our illustrative example the set of non-
dominated solutions is also shown in Figure 1. Each nondominated solution
x ∈ X* implies values for each of the p objective functions Z(x). The
collection of all the Z(x) for x ∈ X* yields the nondominated set Z(X*),
and is shown in Figure 2.

[Figure 2. Feasible region and nondominated set in objective space. The
image of the feasible region is plotted in the Z_1-Z_2 plane; the non-
dominated set runs from Z(x_2) = (4, -16) through Z(x_1) = (0, 0) to
Z(x_6) = (-21/2, 7/2).]

The concept of nondominated solutions, also known as Pareto optimum,
efficient solution, etc., is basic in economics (Koopmans 1951). Kuhn and
Tucker (1950) extended the theory of nonlinear programming for one objective
function to a vector minimization problem and introduced necessary and
sufficient conditions for a "proper" solution. Similar concepts have
recently appeared in the literature. Zeleny (1973 and 1974) has introduced
a concept of the compromise set and developed the Method of the Displaced
Ideal. Such sequential
displacements of the ideal solution form also a basis for the Evolutive
Target Procedure of Roy (1975).
There are several techniques available to generate the set of
nondominated solutions and the nondominated set. These "curve-generating"
techniques include the weighting method, constraint method, adaptive
search, multi-criteria simplex methods, and will be discussed in some
detail in Chapter 4.
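For a problem this small the nondominated corner points can be checked by brute force. The fragment below is an illustrative sketch only (modern Python, not the dissertation's Appendix B program): it evaluates both objectives at the six corner points read from Figure 1 and discards any corner point whose objective vector is dominated by that of another.

```python
from fractions import Fraction as F

# Corner points of the feasible region of the two-objective example (Figure 1).
vertices = {
    "x1": (F(0), F(0)),    "x2": (F(4), F(0)),    "x3": (F(4), F(1)),
    "x4": (F(7, 2), F(2)), "x5": (F(1), F(9, 2)), "x6": (F(0), F(7, 2)),
}

def Z(x):
    """Objective vector (Z1, Z2) = (x1 - 3*x2, -4*x1 + x2); both are maximized."""
    x1, x2 = x
    return (x1 - 3 * x2, -4 * x1 + x2)

def dominates(za, zb):
    """za dominates zb: at least as good in every objective, better in at least one."""
    return all(a >= b for a, b in zip(za, zb)) and any(a > b for a, b in zip(za, zb))

images = {name: Z(x) for name, x in vertices.items()}
nondominated = {name: z for name, z in images.items()
                if not any(dominates(other, z)
                           for o, other in images.items() if o != name)}
print(nondominated)   # x4 and x5 drop out, dominated by x1 and x6 respectively
```

A corner-point filter is only a first cut: x_3 = (4, 1) survives the pairwise test above, yet it is dominated by the boundary point (1, 0), so the full set of nondominated solutions must be read from the boundary segments identified in Figures 1 and 2.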
The Role of the Decision Maker
In the preceding section it is noted that the set of nondominated
solutions is a subset of the initial set of feasible solutions and that
it was reached without having to consider the preferences of the decision
maker. The associated nondominated set will generally represent a collection of incomparable solutions since the objective functions may be
incommensurate to begin with. Such incomplete orderings are characteristic of, but not restricted to, multi-objective planning problems,
and imply the need for introducing value judgments into the solution
process. At this point in the analysis the decision maker is asked to
articulate his "value structure" to order the alternative solutions in
the nondominated set. These value considerations have prompted a—variety
of techniques falling into two classes (Cohon and Marks 1975): (1)
techniques which rely on prior articulation of preferences, and (2)
techniques which rely on progressive articulation of preferences, to be
discussed in some detail in Chapter 4.
If the value structure of the decision maker is to be brought
into the analysis, how this is to be done is not clear. In the Theory
of the Displaced Ideal, Zeleny (1975, p. 157) points out:
If one obtains an accurate measurement of the net attractiveness (or utility) of each available alternative, one can predict with reasonable accuracy that a person will choose the
alternative which is "most attractive." So, the problem of
prediction of choice becomes the technical problem of measurement and mechanical search. Furthermore, if the alternatives
are complex and multi-attributed, then the measurement of utility
could be too difficult to be practical. The real question concerns the process by which the decision maker structures the
problem, creates and evaluates the alternatives, identifies
relevant criteria, adjusts their priorities and processes information. . . It is important to realize that whenever we face
a single attribute, an objective function, an utility function,
or any other single aggregate measure, there is no decision-making involved. The decision is implicit in the measurement
and it is made by the search . . . It is only when facing
multiple attributes, objectives, criteria, functions, etc., that
we can talk about decision making and its theory.
To define decision making is not simple. It is a process rather than an
act. Although it involves a choice on the set of feasible alternatives,
it is also concerned with the generation of alternatives. Decision
making is a dynamic process with all its components changing and evolving
during its course: alternatives are added and removed, the criteria for
their evaluation as well as the relative importance of the criteria are
in a dynamic flux, the interpretation of outcomes varies, human values
and preferences are reassessed.
Conflict
Inherent in multi-objective problems is the element of conflict.
Conflict provides the decision-motivating tension, a period of frustration and dissatisfaction with the status quo of a current situation.
We define "conflict" as a property of a situation in which the
simultaneous attainment of all the objective functions, at desired
levels, is not possible. In reference to this property, Von Neumann
and Morgenstern (1953, p. 10) point out:

     . . . this multiple objective situation is certainly no maximum
problem, but a peculiar and disconcerting mixture of several
conflicting maximum problems . . . This kind of problem is nowhere
dealt with in classical mathematics . . . It arises in full clarity
even in the most 'elementary' situations, e.g., when all variables
can assume only a finite number of values.
Conflict can be resolved in two ways (Monarchi 1972): innovation and adaptation. Innovation refers to development of previously
unknown alternatives so that the original goals can be attained. Information plays an important role here because it can suggest new avenues
to search. Adaptation refers to changes in the current value structure
of the individual so that he becomes content with one of the available
alternatives. In reality, conflict is often resolved by both methods
simultaneously. The decision maker often chooses to attain that which
he can attain while still striving to broaden the range of the attainable. This problem of conflict resolution is also dealt with by Zeleny
(1975) as he recognizes a pre-decision situation where the component
values of the ideal alternative become clearly perceived and an effort
for conflict resolution is replaced by an attempt for conflict reduction. Also, a post-decision situation may exist where the attractiveness of chosen alternatives is enhanced, while that of the rejected alternatives is reduced.
Goals
Johnsen (1968, p. 152) defines a goal as:
• • . an operational expression of some desire or desires of an
identifiable individual or individuals as one or several elements,
each of which may be a subset, and which may be ranked consciously by the goal-setting unit, . . . and goal attainment
being specified on the scale or scales and having the property
that in the case of several scales these may be related.
Berelson and Steiner (1964, pp. 239-240) define a goal as the " . .
objective, condition, or activity toward which the motive (all those
inner conditions variously described as wishes, desires, needs, drives
and the like) is directed; in short, that which will satisfy or reduce
the striving."
This study addresses those two main properties: a goal should
depict something desired, and it should be operational. In the usual
optimization problem, the search is directed towards an optimal policy
vector which possesses well defined mathematical properties (see, for
example, Luenberger 1973; Hillier and Lieberman 1967). In this research,
however, we are dealing with a collection of objective functions and an
undefined preference function which, somehow must be articulated to arrive at a "solution." If the concept of an "optimal solution" is no
longer applicable, the concept of a "satisfactory solution," termed a
"satisfactum" must be examined.
Simon (1953, p. 141) states: "Most human decision-making,
whether individual or organizational, is concerned with the discovery
and selection of satisfactory alternatives; only in exceptional cases
is it concerned with the discovery and selection of optimal alternatives."
Acceptability is a value judgment derived from the individual's preference function. A satisfactum then is any value within an interval of
acceptability on the range of an objective function. A multiple goal
satisfactum implies acceptable values of all objective functions. While
searching for that satisfactum this study places emphasis on the method
used to generate those tradeoffs, the articulation and incorporation of
the DM's preference function in the analysis, and the handling of uncertainty of the various parameters in the objective functions and set
of constraints.
Stochastic Elements
So far in our discussion we have talked about objective functions,
set of constraints, and goal functions, all of these involving fixed
quantities for parameters. This is frequently not a realistic assumption in multi-objective problems where many of the parameters involved
are random variables, rather than fixed quantities. Some work has been
done and published in the literature dealing with stochastic elements
within the framework of linear, single-objective, mathematical programming. The author is not aware, however, of any work published on the
subject of multi-objective, stochastic programming.
Stochastic programming deals with situations where some or all
of the parameters of the general programming problem (with a single objective function) are described by random variables, rather than by
fixed quantities. Several models have been proposed to handle the
stochastic problem. Charnes and Cooper (1961) have proposed a general
class of linear decision rules under the following three classes of objectives: (1) maximum expected value (E-model), (2) minimum variance
(V-model), and (3) maximum probability (P-model). The E-model, for
instance, is presented in the form:
     max.  E[CX]

subject to

     Prob[AX ≤ b] ≥ β

where C is an n-dimensional row vector, X is an n-dimensional column
vector, A is an m×n matrix, and b and β are m-dimensional column vectors.
Now, some or all of the elements in C, A, and b may be random variables
with known probability density functions. The vector β contains a pre-
scribed set of constants that are probability measures of the extent to
which constraint violations are admitted.
     The P-model, on the other hand, is presented in the form:

     max.  Prob[CX ≥ C°X°]

subject to

     Prob[AX ≤ b] ≥ β

     X = Db

and the objective is to maximize the probability of achieving a specified
value, C°X°. This time, the matrix A is assumed constant, and D is an
n×m matrix. Charnes and Cooper (1961) then proceed to find deterministic
equivalents to the probability statements above when these contain
random variables normally distributed.
A deterministic equivalent is obtained for illustrative purposes.
Consider the probability statement:
     Prob[Z(x) ≥ d] ≥ 1 - α

where

     Z(x) = c_1 x_1 + c_2 x_2 + . . . + c_n x_n ,

α ∈ R[0,1], d ∈ R, and the c's are normal random variables, i.e.,
c_i ~ N[E(c_i), VAR(c_i)]. Then, because of the properties of the normal
random variable,

     Prob[ (Z(x) - Σ_{i=1}^n E(c_i) x_i) / (x^T B x)^{1/2}
              ≥  (d - Σ_{i=1}^n E(c_i) x_i) / (x^T B x)^{1/2} ]  ≥  1 - α

where B is a symmetric variance-covariance matrix. The above statement
is realized if and only if

     (d - Σ_{i=1}^n E(c_i) x_i) / (x^T B x)^{1/2}  ≤  K_α

where K_α is a standard normal value such that Φ(K_α) = α, and Φ represents
the cumulative distribution function for the standard normal. The inequality
above, then, represents a deterministic equivalent,

     Σ_{i=1}^n E(c_i) x_i + K_α (x^T B x)^{1/2}  ≥  d ,

which can now replace the original stochastic constraint, and the programming
problem can now be solved.
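The equivalence is easy to verify numerically. The fragment below is an illustrative sketch only; the two-variable data are hypothetical and not taken from the text. It computes K_α with scipy and compares the deterministic inequality against a Monte Carlo estimate of the original chance constraint.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical two-variable illustration of the deterministic equivalent of
# Prob[Z(x) >= d] >= 1 - alpha, with Z(x) = c1*x1 + c2*x2 and normal c's.
E_c = np.array([5.0, 3.0])                    # E(c_i)
B = np.array([[4.0, 1.0], [1.0, 2.0]])        # variance-covariance matrix of c
x = np.array([2.0, 1.0])                      # a candidate decision vector
d, alpha = 8.0, 0.10

K_alpha = norm.ppf(alpha)                     # Phi(K_alpha) = alpha (negative here)
lhs = E_c @ x + K_alpha * np.sqrt(x @ B @ x)  # left side of the deterministic equivalent
print("deterministic equivalent satisfied:", lhs >= d)

# Monte Carlo check of the original probability statement.
rng = np.random.default_rng(0)
c = rng.multivariate_normal(E_c, B, size=200_000)
print("estimated Prob[Z(x) >= d]:", np.mean(c @ x >= d))
```

Both checks agree: the estimated probability falls short of 1 - α exactly when the deterministic inequality fails.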
In Chapter 6, the above work will be extended to include nonnormal random variables. In the process we will start with a linear
stochastic programming problem and end with a nonlinear, deterministic,
equivalent programming problem. And, again, the issue of nonlinearity
will add complexity to the method of solution. To deal with random
variables in the objective function, a transformation is developed in
Chapter 6 where the random variables in the original problem act as
mathematical variables in the equivalent problem, increasing its dimensionality.
Summary
This chapter has provided the concepts and framework within which
this research will be developed. A few points should be outlined before
formally stating the research problems:
1. A large class of problems needs a collection of objective func-
tions to formulate realistically the situation at hand.
2. The nondominated set needs to be identified to make the trade-
offs explicit to the DM.
3. The DM can play an important role in ordering the alternative
solutions in the nondominated set.
4. As the DM searches for a satisfactum his own aspirations change
as the "realities of the problem" and results of tradeoffs are
made available to him.
5. The functional form of the DM's preference function should re-
quire assessments which are reasonable to consider, and should
lend itself to mathematical manipulation.
6. Multi-objective analysis should account for the possibility of
both fixed and random parameters.
7. As the DM trades levels of achievement among the various objective functions, he should be able to learn about the probability of achieving those levels.
CHAPTER 3
STATEMENT OF THE PROBLEM
To state the research problem formally, we postulate the existence of a decision situation in which there are N resources to be allocated so that P satisfactory goal levels may be attained. Our main
objective is to develop an algorithm which will permit the decision
maker (DM) to make a choice among alternative solutions. Specifically,
the following elements are considered:
1. A set of constraints defining the feasible region X ⊂ R^m and
   characterized by:

   E equality constraints, g(x) = (g_1(x), g_2(x), . . ., g_E(x)) = 0,
   where g_i(x) is differentiable, and either linear or nonlinear;

   I inequality constraints, h(x) = (h_1(x), h_2(x), . . ., h_I(x)) ≤ 0,
   where h_i(x) is differentiable, and either linear or nonlinear;

   Q probability constraints of the form

        Prob[r_i(x, a_1, . . ., a_n) ≤ b_i]  ≥  1 - α_i ,     i = 1, 2, . . ., Q

   where α_i ∈ R[0,1], b_i ∈ R+, and r_i(x, a_1, . . ., a_n) is
   differentiable and either linear or nonlinear. The parameters
   a_1, a_2, . . ., a_n are random variables, each with a given
   probability density function. The functions g, h, and r are defined
   on the set X with values on an arbitrary set S.
2. A vector of objective functions
        Z(x) = (Z_1(x), Z_2(x), . . ., Z_P(x)),     x ∈ X .
Each objective function is defined on a set of resource allocations (the domain) and has values in the set of real numbers
(the range). These functions may be either linear or nonlinear,
and may contain random parameters. It is required that each
objective function be differentiable.
3. A vector of goal functions denoted G and defined as follows:
        G(x) = (G_1(x), G_2(x), . . ., G_P(x))

   where

        G_i(x) = (Z_i(x) - Z_i^min) / (Z_i(x_i*) - Z_i^min) ,

        Z_i(x_i*) = max {Z_i(x) : x ∈ X} ,
        Z_i^min   = min {Z_i(x) : x ∈ X} ,        i = 1, 2, . . ., P.

   Each function G_i is defined on the feasible region X and has values
   in the interval R[0,1]. (A small numerical sketch of this normalization
   is given at the end of this list.)
4. A preference function u to articulate the "value structure" of
the DM. This preference function is defined on the set of ranges
of the goal functions with values in the interval R [0,1].
Furthermore, the evaluation of this function is subjective in
nature to accommodate the DM's relative worth of the values attained by the various goal functions, and provides a complete
ordering of the nondominated set.
5. An aspiration level for each goal function. This aspiration
level is the degree of goal attainment the DM strives to attain.
This set of aspiration levels will attempt to identify a subset
of the nondominated set. Each element in this subset is such
that it: (a) satisfies the physical realities of the problem,
and (b) satisfies the aspirations of the DM. In general, an
aspiration level does not remain fixed in time and space but
changes as the alternative solutions are made available to the
DM and such a subset may or may not be empty.
6.
Stochastic parameters in both the objective functions (and
hence the goal functions) and constraints. We have already
presented the general form of the probability constraints.
Whenever random variables appear in a given objective function,
the objective function itself becomes a random variable and
the complexity associated with the task of determining the nondominated set is increased considerably. The specification of
a value for an objective function is no longer sufficient and,
instead one must talk about an achieved value and the probability of achieving that value.
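As noted at the end of item 3, the goal-function normalization can be illustrated on the two-objective example of Chapter 2. The fragment below is an illustrative sketch only (the evaluation point is arbitrary); it uses the fact that a linear objective over a bounded polyhedral region attains its extrema at the extreme points.

```python
from fractions import Fraction as F

# Extreme points of the Chapter 2 example and its two linear objectives.
vertices = [(F(0), F(0)), (F(4), F(0)), (F(4), F(1)),
            (F(7, 2), F(2)), (F(1), F(9, 2)), (F(0), F(7, 2))]
objectives = [lambda x: x[0] - 3 * x[1],      # Z1
              lambda x: -4 * x[0] + x[1]]     # Z2

# Linear objectives on a bounded polyhedron attain max and min at extreme points.
ranges = [(min(Z(v) for v in vertices), max(Z(v) for v in vertices))
          for Z in objectives]

def goal(i, x):
    """G_i(x) = (Z_i(x) - Z_i_min) / (Z_i(x_i*) - Z_i_min), valued in R[0,1]."""
    z_min, z_max = ranges[i]
    return (objectives[i](x) - z_min) / (z_max - z_min)

x = (F(2), F(1))                              # an arbitrary feasible point
print([float(goal(i, x)) for i in range(2)])  # each value lies between 0 and 1
```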
An algorithm is sought, then, to perform a number of sequential
tasks and, in the process, take into account the elements presented
above. This algorithm should, somehow, identify the nondominated set,
order the elements in it according to the DM's preference function, and
identify a subset of the nondominated set which satisfies his aspiration
levels. Then, as the DM learns, reassesses his preference function, and
updates his aspiration levels, the algorithm should be able to redefine
the above subset and for each element in it provide probabilities of
goal level achievement.
In Chapter 4, the literature on multi-objective programming
methods is reviewed to ascertain what aspects of the problem stated
above have been dealt with already, and which ones are available for
further development in this research study.
CHAPTER 4
LITERATURE REVIEW
Multi-objective programming methods are less numerous than their
single objective counterparts, but the literature on the subject is
growing rapidly. At least 20 techniques have been formulated in the
last 10 years, and most of these in the last 6 years. This impressive
proliferation of methods in a relatively short period has left little
time for evaluation of their main features. This chapter presents a
classification and review of current multi-objective programming methods
and developments in a chronological order. An effort is made to identify
the strengths and weaknesses of each method as they relate to the problem
formulated in Chapter 3. Since none of these current methods deals explicitly with uncertainty considerations, this chapter also reviews
available probabilistic programming methods. These methods have evolved
within the framework of single objective optimization and are reviewed
in the hope that they will point out the difficulties to be encountered
in the task of incorporating uncertainty-handling capability into our
central multi-objective programming problem. Also, in the process, some
possible extensions of current theory on stochastic models are mentioned
for further discussion and development in Chapters 5 and 6.
Multi-objective Programming Methods
A prerequisite for the assessment of the various solution tech-
niques is the development of evaluation criteria. Loucks (1975)
evaluated multi-objective solution techniques in terms of four criteria,
based on his belief that a technique should: (1) simulate the bargaining process, (2) recognize uncertainty in tradeoffs and preferences of
the decision maker, (3) be sufficiently general in scope, and (4) be
able to predict the outcome of the decision-making process, reflecting
both the physical realities of the problem and the preferences of the
decision maker. The criteria which are proposed here relate mainly to
computational and decision making considerations. Specifically, the
technique (1) must be computationally feasible, (2) must present explicit
level tradeoffs between the objective functions, (3) must account for
the preferences of the decision maker, and (4) it needs to consider uncertainty in the parameters and provide the decision maker with information on the probability of level of achievement for the various objective functions.
The classification of multi-objective solution techniques that
follows has been cited by Cohon and Marks (1975) and has been modified
only slightly to include two recent techniques:
Curve-generating techniques:
• weighting method,
• e-constraint method,
• adaptive search,
• multi-criteria simplex method;
Techniques which rely on prior articulation of preferences:
• goal programming,
• utility function assessment,
• electre method,
• surrogate worth tradeoff;
Techniques which rely on progressive articulation of
preferences:
• step method,
• sequential multi-objective problem solving (SEMOPS),
• tradeoff development method (TRADE).
The various techniques in the "curve-generating" approach generate the
set of nondominated solutions and the nondominated set. They do not
attempt to find a satisfactum (satisfactory) solution, and only claim
that it is an element of the nondominated set. These techniques which
have appeared most frequently in public investment problems (Marglin
1967; MacCrimmon 1969; Major 1969) are now reviewed in some detail.
Weighting Method
The general weighting method proceeds by trading off the objectives to one another, i.e., by assigning weights to each objective.
The multi-objective programming problem (2.5) can be replaced by the
following formulation:
     max  Σ_{i=1}^{p} w_i Z_i(x)                                     (4.1)

subject to the constraints

     x ∈ X

where

     X = {x : x ∈ (R+)^m, g_k(x) ≤ 0, k = 1, 2, . . ., K} ,

     Σ_{i=1}^{p} w_i = 1,   and   w_i ≥ 0 .
Clearly, a subjective determination of the levels of the weighting coefficients w_i is necessary. Successive solutions of this problem with
parametrically varied values of the weights trace out the nondominated
set. Variants of this technique have been presented extensively in the
literature (e.g., Zadeh 1963; Geoffrion 1967a; Major 1969; Reid and
Vemuri 1971; Haimes 1973).
An important drawback to this method is that it fails when the
nondominated set is nonconvex (Cohon 1972; Zeleny 1974). Another reason
why this approach to the problem is not advocated is that it places a
great burden on the decision maker to decide what the optimal weights
should be.
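For the two-objective linear example of Chapter 2 the weight sweep can be carried out directly. The fragment below is an illustrative sketch (it uses scipy.optimize.linprog and is not the dissertation's code): each weighted problem is solved as a single-objective linear program, and the distinct optimal solutions are collected.

```python
import numpy as np
from scipy.optimize import linprog

# Chapter 2 example: max Z1 = x1 - 3*x2 and max Z2 = -4*x1 + x2.
C = np.array([[1.0, -3.0], [-4.0, 1.0]])      # rows are the two objective vectors
A_ub = np.array([[1.0, 0.0], [-1.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
b_ub = np.array([4.0, 3.5, 5.5, 9.0])

solutions = set()
for w1 in np.linspace(0.01, 0.99, 25):        # sweep the weights, w1 + w2 = 1
    w = np.array([w1, 1.0 - w1])
    res = linprog(c=-(w @ C), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    solutions.add(tuple(np.round(res.x, 6)))

# The sweep recovers the nondominated extreme points (0, 0), (4, 0), (0, 3.5).
print(sorted(solutions))
```

Only extreme points are recovered in this way and, as noted above, no choice of weights can reach nondominated solutions lying in a nonconvex portion of the nondominated set.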
e-constraint Method
This method replaces (p-1) objective functions with (p-1) constraints as given below:
     max  Z_i(x)                                                     (4.2)

subject to

     g_k(x) ≤ 0,        k = 1, 2, . . ., K

     Z_j(x) ≥ e_j ,     j = 1, 2, . . ., p;   j ≠ i

where the e_j are minimum acceptable levels.
     The satisfactory levels e_j can be varied parametrically to evaluate
the impact on the single objective function Z_i(x). Then, the i-th objective
function Z_i(x) is replaced by the j-th objective function Z_j(x), the
constraint set is modified accordingly, and the solution procedure is
repeated. This method generates the nondominated set by varying the levels
of e_j, j = 1, 2, . . ., p. Haimes, Lasdon and Wismer
(1971) and Haimes and Nainis (1973) show that this approach does generate the set of nondominated solutions for the two-objective cases.
Pasternak and Passy (1974) used a combination of the weighting and e-constraint methods to find a solution to a nonconvex, two-objective,
0-1 integer programming problem.
     The e-constraint method would appear to be a strong technique
in generating the nondominated set. However, this technique becomes
computationally burdensome when the number of objectives increases beyond three. Recall that a constraint must be added in (4.2) for all
but one of the objective functions. Also, this approach does not provide a method to handle uncertain parameters.
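Continuing the same illustrative example (again a sketch with scipy, not the dissertation's code), the e-constraint form below keeps Z_1 as the objective, moves Z_2 into the constraint set, and sweeps the minimum acceptable level e_2 over the attainable range of Z_2.

```python
import numpy as np
from scipy.optimize import linprog

# e-constraint method on the Chapter 2 example:
#   max Z1 = x1 - 3*x2  subject to the original constraints and Z2 >= e2,
#   written as 4*x1 - x2 <= -e2 in less-than-or-equal form.
A_base = [[1, 0], [-1, 1], [1, 1], [2, 1]]
b_base = [4, 3.5, 5.5, 9]

for e2 in np.linspace(-16, 3.5, 9):           # Z2 ranges over [-16, 7/2]
    res = linprog(c=[-1, 3],                  # minimize -Z1
                  A_ub=A_base + [[4, -1]], b_ub=b_base + [-e2],
                  bounds=[(0, None)] * 2)
    if res.success:
        x1, x2 = res.x
        print(f"e2 = {e2:6.2f}   x = ({x1:.2f}, {x2:.2f})   "
              f"Z1 = {x1 - 3*x2:6.2f}   Z2 = {-4*x1 + x2:6.2f}")
```

Unlike the weight sweep, varying e_2 also recovers nondominated points lying between extreme points, at the price of one added constraint, and one parametric sweep, per objective beyond the first.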
Adaptive Search
Another approach suggested by Beeson and Meisel (1971) is an
adaptive search procedure which alleviates the problem of finding repetitive solutions as in the weighting and e-constraing methods. The
developers found this technique to be particularly useful in problems
with a small number of variables but a large number of objective functions, as is the case in some control problems. The search proceeds
from one nondominant solution x_i, and the value of the i-th objective
function is found by solving:
     max  Z_i(x)                                                     (4.3)

subject to

     x ∈ X .
Then new solutions are generated with the recursive formula,
     x_{i+1} = x_i - a_i J^T(x_i) w_i + c_i

where a_i controls the step size, J is the Jacobian matrix of partial
derivatives of the objective functions with respect to the decision
variables, w_i controls the direction of the search, and c_i controls the
feasibility of the solution. Each new solution is then checked for nondominance: if any two of the gradients are of opposite sign, or the
point is on the boundary of X, then it is a candidate for nondominance.
The search is restarted P times to insure good coverage of the set of
nondominant solutions.
The drawbacks to this approach are that the computational requirements can become immense for problems with a large number of decision variables, as is typical in water resources planning problpms.
This is so because the analysis takes place in decision space and not
in objective space, and it is the dimensionality of the former that
dictates computational needs.
Multi-criteria Simplex Methods
Zeleny (1974) has extended the theory on parametric linear
programming for bicriterion cases (Geoffrion 1967b and 1968) to multi-criterion cases, especially in the direction of algorithmic developments.
Zeleny considers the multi-objective programming problem of (4.1) and
proceeds to find nondominated extreme points (basic, feasible solutions)
of X, the feasible region. In the process, he develops two algorithmic
strategies, discusses the problem of traversing from one nondominated
extreme point to another in an efficient manner and, finally, presents
an algorithm to calculate the entire set of nondominated solutions X*
from the previously calculated set of nondominated extreme points.
This work considers linear structures only, and these were found
to be sufficiently complex to merit concentrated attention. The insight
gained can now be advantageously applied to nonlinear problems, and
similar algorithms remain to be developed.
Attention is now focused on a class of noninteractive techniques
which rely on prior articulation of preferences. These techniques are
based on the premise that if a complete or partial ordering of the set
of nondominated solutions is possible, then the computational burden can
perhaps be alleviated as some of these nondominated solutions are eliminated and the selection process continues. The criteria for this ordering result from the articulation of the preferences of the decision
maker prior to the solution of the multi-objective problem.
Goal Programming
The goal programming method requires the decision maker to set
goals that he would like to achieve for each objective function. A pre-
ferred solution is then defined as the one which minimizes the deviations
from the set goals according to a specified criterion. Denote the vector
of goals set by the decision maker for the objectives by G, G ∈ R^p; then
the mathematical formulation of the problem is

    min ‖Z(x) - G‖   (4.4)
    s.t. x ∈ X,
where ‖·‖ denotes any norm. The general use of norms is discussed by
Yu and Leitmann (1974). Charnes and Cooper (1961) are credited with
the development of goal programming in the context of linear program-
ming. A difficulty associated with some of the norms is that the
objective function in (4.4) may not be continuous in the feasible region and, then, the need arises to find a continuous, either linear or
nonlinear, equivalent formulation. Using the sum of the absolute
values of the deviations as the norm, for instance, an equivalent linear
formulation (Wagner 1969) is given by
    min Σ_{i=1}^{p} (y_i^+ + y_i^-)   (4.5)
    subject to
        Z_i(x) - g_i = y_i^+ - y_i^-,
        y_i^+, y_i^- ≥ 0,   i = 1, 2, . . ., p,
        x ∈ X,

where y_i^+ and y_i^- are the positive and negative deviations, respectively,
from the i-th target g_i for the i-th objective.
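As a concrete illustration of the linear form (4.5), the sketch below sets up a small goal-programming LP with two decision variables and two goals; the coefficient matrix, goal levels, and feasible region used here are hypothetical and serve only to show how the deviation variables enter the formulation.

```python
# A minimal sketch of the linear goal-programming form (4.5), assuming a
# hypothetical problem with two decision variables, two linear goal
# functions Z_i(x) = sum_j C[i, j] * x_j, and goal levels g_i.
import numpy as np
from scipy.optimize import linprog

C = np.array([[3.0, 1.0],      # coefficients of Z_1
              [1.0, 2.0]])     # coefficients of Z_2
g = np.array([12.0, 10.0])     # hypothetical goal levels g_i
A_x = np.array([[1.0, 1.0]])   # feasible region X: x1 + x2 <= 8
b_x = np.array([8.0])

# Decision vector: [x1, x2, y1+, y2+, y1-, y2-]
n, p = 2, 2
cost = np.concatenate([np.zeros(n), np.ones(2 * p)])   # minimize sum(y+ + y-)
A_eq = np.hstack([C, -np.eye(p), np.eye(p)])           # C x - y+ + y- = g
A_ub = np.hstack([A_x, np.zeros((A_x.shape[0], 2 * p))])

res = linprog(cost, A_ub=A_ub, b_ub=b_x, A_eq=A_eq, b_eq=g,
              bounds=[(0, None)] * (n + 2 * p), method="highs")
print("x =", res.x[:n], " total deviation =", res.fun)
```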
Goal programming is computationally effective in relation to the
"curve-generating" techniques and has been shown to be a very useful
tool for multi-objective decision making in private sector problems
(Lee 1972; Lee, Clayton and Moore 1975). A feature which detracts from
the applicability of this approach to water resource problems is that
it places an equal importance on each objective. If weighting factors
are introduced to counteract this, then the problem of determining the
weights for any real problem arises, and the value judgments that it
elicits may be the wrong ones since they are requested from the decision
maker without his prior knowledge of the objective tradeoffs in the
problem at hand.
Utility Function Assessment
Utility functions have been used extensively in consumer demand
theory and economics, and they have been applied to private and public
decision making problems. Aumann (1964) appears to be the first to have
considered utility functions for multi-objective problems. His basic
consideration was the impact of the partial order associated with multiple criterion functions on the completeness axiom of utility theory.
Raiffa (1969) and Keeney (1969) investigated the development and application of multi-attribute utility functions. Geoffrion (1968) developed a
method for proceeding from a specification of a utility function to a
"best-compromise" solution, bypassing the generation of the entire nondominated set in most cases. More recently, Fishburn (1970) and Keeney
(1973), have contributed to the theory itself and application of it to
realistic multi-objective problems (i.e., river basin problems, city
airport development, etc.).
Rendering decision analysis operational for multi-objective
problems entails assessing the decision maker's utility function
u(G) ≡ u(G_1, G_2, . . ., G_p), where G_i = G_i(x) is the i-th goal function.
It is called a multi-attribute utility function (MUF) because the argument of the utility function is a vector indicating levels of the several
attributes (in our case, goal functions). This function has two properties which make it useful in addressing the issue of tradeoffs between
goals:
1. U(G') > U(G'') if and only if G' is preferred to G'', and
2. in situations with uncertainty, the expected value of U is
the appropriate guide to make decisions, i.e., the
alternative with the highest expected value is the most
preferred.
This second property follows directly from the axioms of utility theory
postulated first by von Neumann and Morgenstern (1947).
Much of multi-attribute utility theory is developed as follows.
Assumptions about the DM's preferences are postulated, and the restrictions these assumptions place on the functional form of the multiattribute utility function (MUF) are derived. Then, for any specific
problem, the appropriateness of the assumption for a particular MUF
should be verified with the decision maker and checked for internal
consistency. Ideally, the functional form of the MUF would have the
following properties (Keeney and Sicherman 1975):
1. be general enough to allow application to many real problems,
2. require a minimal number of assessment questions to be asked
of the decision maker,
3. require assessments which are reasonable for a decision maker
to consider,
4. be easy to use in evaluating alternatives and conducting
sensitivity analysis.
Fishburn and Keeney (1975) discuss the basic assumptions of preferential
independence and utility independence. The pair of attributes (i.e.,
values of our goal functions) {G_1, G_2} is preferentially independent of
the other attributes {G_3, . . ., G_p} if preferences among {G_1, G_2} pairs,
given that {G_3, . . ., G_p} are held fixed, do not depend on the level
at which {G_3, . . ., G_p} are fixed. The attribute (e.g., objective function, goal level) G_1 is utility independent of the other attributes
{G_2, G_3, . . ., G_p} if preferences among lotteries over G_1, given that
G_2, G_3, . . ., G_p are fixed, do not depend on the level at which those attri-
butes are fixed. Keeney and Sicherman (1975) state the following: for
p ≥ 3, if for some G_i, {G_i, G_j} is preferentially independent of the
other attributes for all j ≠ i, and G_i is utility independent of all the
other attributes, then either

    u(G) = Σ_{i=1}^{p} k_i u_i(G_i),   (4.6)

or

    1 + k u(G) = Π_{i=1}^{p} [1 + k k_i u_i(G_i)],   (4.7)

where u and the u_i are utility functions scaled from zero to one, the
k_i are scaling constants with 0 < k_i < 1, and k > -1 is a scaling
constant satisfying the equation

    1 + k = Π_{i=1}^{p} [1 + k k_i].   (4.8)
The functional form (4.6) is referred to as the additive form, and
(4.7) is the multiplicative form.
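Once the individual scaling constants k_i have been assessed, the master constant k in the multiplicative form (4.7) follows from the root of (4.8). A minimal sketch of that computation is given below; the k_i values used are hypothetical, and the nontrivial root of (4.8) is bracketed under the assumption that Σ k_i < 1 (which forces k > 0).

```python
# A minimal sketch: given hypothetical scaling constants k_i with sum < 1,
# solve equation (4.8), 1 + k = prod(1 + k*k_i), for the nontrivial root
# k > 0 and evaluate the multiplicative form (4.7).
import numpy as np
from scipy.optimize import brentq

k_i = np.array([0.4, 0.3, 0.2])          # hypothetical scaling constants

def resid(k):                            # (4.8) written as a root-finding problem
    return np.prod(1.0 + k * k_i) - (1.0 + k)

# k = 0 is always a trivial root; bracket the nontrivial one away from zero.
k = brentq(resid, 1e-6, 100.0)

def u(levels):                           # u_i(G_i) assumed already scaled to [0, 1]
    return (np.prod(1.0 + k * k_i * np.asarray(levels)) - 1.0) / k

print("k =", round(k, 4), " u([0.8, 0.5, 0.9]) =", round(u([0.8, 0.5, 0.9]), 4))
```

By construction u([1, 1, 1]) = 1 and u([0, 0, 0]) = 0, which is a convenient check on the computed k.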
Techniques for assessing single-attribute utility functions have
become fairly standard (Raiffa 1968; Schlaifer 1969), and computer programs have been developed for fitting single-attribute utility functions (Schlaifer 1971 and Sicherman 1975).
Some of the shortcomings of existing procedures for assessing
and using multi-attribute utility functions include:
1. the necessity to ask "extreme value" questions to keep the
computational requirements at a manageable level,
2. the lengthy and somewhat frustrating task of assessing the
individual utility functions, u_i(G_i), and determining the
scaling constants, k_i,
3. the absence of immediate feedback to the decision maker as to
the effect of his assessments on the attribute tradeoffs,
4. the violations of the utility independence assumptions as a
functional form is "fitted" to a particular problem, and
5. the absence of a procedure to "update" the preferences of the
decision maker as alternative solutions and tradeoffs are
generated.
In the case of utility independence violations, the particular
problem may be far more sensitive to the scaling constants and/or
tradeoffs among the attributes than to variations in the conditional
single-attribute utility functions. Thus, in these cases the additive or multiplicative form may provide an adequate framework to reflect the value
structure of the decision maker. These two forms have been applied to
numerous situations such as city airport development (Keeney 1973),
river basin development (Duckstein 1975), and water pricing utility
(Duckstein and Kisiel 1971).
Electre Method
The Electre method described by Roy (1971) attempts to structure
a partial ordering of alternatives which is stronger than the incomplete
ordering implied by the nondominance condition. The method is based on
what Roy calls an "outranking relationship" R. The relationship is
analogous to a preference ordering of alternatives. Transitivity,
however, is not required. The statement x_1 R x_2 means x_1 is preferred
to x_2. However, x_1 R x_2 and x_2 R x_3 do not necessarily imply that
x_1 R x_3.
Much of the method is concerned with building an outranking
relationship from value judgments supplied by the decision maker.
These value judgments take the form of weights on the objectives, a
"concordance condition," and a "discordance condition," to quantify the
decision maker's range of comparability. Once the outranking relationship is formulated it is used to construct a graph H in which each node
represents a nondominated solution. The arcs of the graph are drawn
such that an arc directed from node x_1 to node x_2 implies x_1 R x_2. The
next step is to find the kernel H* of the graph. The nodes contained in
the kernel H* represent those alternative solutions which are preferred
on the basis of R.
The Electre method is suitable for problems with discrete alternatives and constitutes a searching procedure rather than a multiobjective one.
Surrogate Worth Tradeoff Method
The motivation for this method presented by Haimes, Hall and
Freedman (1975) is that the choice of optimal weights should be made
with the knowledge that tradeoffs are a function of the levels of the
various objectives. This method begins by finding the maximum value of
each objective function in (2.5). The next step is to formulate the
multi-objective problem in the e-constraint form (4.2). It is assumed
that the objective functions are differentiable functions of the
right-hand side levels of the ε_j constraints. The tradeoff function between the i-th and j-th objective functions is denoted T_ij(x) and defined
as

    T_ij(x) = ∂Z_i(x) / ∂Z_j(x),   (4.9)

where

    dZ_i(x) = Σ_{k=1}^{m} [∂Z_i(x)/∂x_k] dx_k,   (4.10)

with the properties

    T_ij(x) = 1, i = j,   (4.11)
    T_ij(x) = 1 / T_ji(x), for all i, j.
Of primary importance in the method is the derivation and determination
of the functions T_ij(x); however, the direct utilization of (4.10) is
clearly impractical for a reasonably large number of objective functions
and decision variables. Instead, the problem is formulated as follows:

    max Z_1(x)   (4.12)
    subject to
        x ∈ X,
        Z_j(x) ≥ ε_j,   j = 2, 3, . . ., p,

where

    ε_j = Ẑ_j + e_j,   e_j ≥ 0,   j = 2, 3, . . ., p,

Ẑ_j is the global maximum of the j-th objective function, and e_j will
be varied parametrically in the process of constructing the tradeoff
function. Now, from the generalized Lagrangian L to the system (4.12),
we have
    L = Z_1(x) + Σ_{k=1}^{m} μ_k g_k(x) + Σ_{j=2}^{p} λ_{1j} [Z_j(x) - ε_j],   (4.13)

where the μ_k and λ_{1j} are the Lagrangian multipliers. Denote by X_T the
set of all x in (4.12) that satisfy the Kuhn-Tucker (Kuhn and Tucker
1950) conditions, and by Ω the set of all Lagrangian multipliers that
satisfy those conditions as well; also denote the set of active constraints
associated with ε_j by A(ε_j),

    A(ε_j) = {j: x ∈ X_T, Z_j(x) - ε_j = 0, j = 2, 3, . . ., p}.

It follows that λ_{1j} = -∂L/∂ε_j and Z_1(x) = L for x ∈ X_T and λ_{1j}, μ_k ∈ Ω,
and thus λ_{1j} = -∂Z_1(x)/∂ε_j. Also, for the active constraints ε_j = Z_j(x),
and

    λ_{1j} = -∂Z_1(x)/∂Z_j(x).

This relationship can be generalized to yield

    λ_{ij} = -∂Z_i(x)/∂Z_j(x),   i ≠ j,   i, j = 1, 2, . . ., p,   (4.14)
which is the relationship sought. The authors then suggest a computational procedure for evaluating the tradeoff function T_ij(x) as a function of Z_j(x), T_ij[Z_j(x)], using regression analysis. This operation
is repeated for all j ≠ i. The result is a set of functions which relate the weights to the levels of the objectives and which can be displayed graphically. The number of tradeoff functions is in general p²,
but because of the property (4.11) the number of relevant tradeoff functions is reduced to

    (p choose 2) = p! / [(p-2)! 2!].   (4.15)

The tradeoff functions give the analyst the required information
to extract "surrogate worth functions" W_ij from the decision maker.
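The relation (4.14) can also be approached numerically: solving the ε-constraint problem (4.12) for two nearby values of ε_j and differencing the optimal Z_1 values approximates the tradeoff λ_1j. The sketch below does this for a hypothetical two-objective LP; the objective coefficients and feasible region are assumptions chosen only for illustration.

```python
# A minimal numerical sketch of the tradeoff relation (4.14), assuming a
# hypothetical two-objective LP: Z1 = 3*x1 + x2 and Z2 = x1 + 2*x2 over
# X = {x >= 0, x1 + x2 <= 8}.  Z1 is maximized subject to Z2 >= eps, as in
# (4.12), and lambda_12 = -dZ1/d(eps) is approximated by finite differences.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([3.0, 1.0])
c2 = np.array([1.0, 2.0])

def max_Z1_given_eps(eps):
    # linprog minimizes, so minimize -Z1; Z2 >= eps is written as -Z2 <= -eps.
    A_ub = np.vstack([[1.0, 1.0], -c2])
    b_ub = np.array([8.0, -eps])
    res = linprog(-c1, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
                  method="highs")
    return -res.fun

eps, h = 10.0, 0.1
lam_12 = -(max_Z1_given_eps(eps + h) - max_Z1_given_eps(eps)) / h
print("approximate tradeoff lambda_12 =", lam_12)   # about 2 for this data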
There is one surrogate worth function for every tradeoff function, with
a range varying between -10 and +10, with some arbitrary but predetermined value which indicates an acceptable tradeoff. Thus, the intent of
constructing the W ij is to attach preference values to the tradeoffs and
eventually identify a satisfactory solution.
For decision makers who experience difficulty in evaluating
tradeoffs the surrogate worth tradeoff method (SWT) can be a useful
tool, as it leads decision makers through a systematic comparison of
objectives, two at a time. Unfortunately, the computational requirements of the method increase rapidly with the number of objectives and
the regression analysis may consume substantial amounts of time. Again,
no provisions are made in the method to handle stochastic parameters.
Many recent techniques in multi-objective programming attempt to
incorporate the capability of progressive articulation of preferences
and use these in the final selection of a solution. The methods which
fall into this class can be characterized by a general algorithmic approach: (1) find a nondominated solution, (2) obtain the decision
maker's reaction as to whether it is a satisfactory solution, modifying
the problem accordingly if it is not, and (3) repeat steps 1 and 2 until a satisfactory
solution is obtained, or until some termination rule is applicable.
Step Method
The step method or Stem was proposed by Benayoun et al. (1971)
and is quite representative of this class of techniques. It begins with
the construction of a "payoff table" which is found by solving
    max Z_k(x) = Σ_{j=1}^{n} c_{kj} x_j   (4.16)
    subject to
        x ∈ X,

for k = 1, 2, . . ., p. The solution to this problem, x^k, gives by
definition the maximum value of the k-th objective, Z_k(x^k). The values
of the other p-1 objective functions implied by x^k are Z_j(x^k). These
values are then used to construct the payoff table, shown in Table 1.
We note that the maximum along each column is attained on the principal
diagonal. The system of weights is selected as follows. First, all the
values in Table 1 are converted from absolute to relative ones. For
this it is necessary to determine the minimal values of the objectives,
i.e., to carry out successively the minimization with respect to each
objective. Denote by M_k and m_k the maximum and minimum, respectively,
for the k-th objective function. A linear transformation is then performed on the values of the k-th objective, taking M_k equal to 1 and m_k
equal to 0, after which all values in Table 1 will be transformed into
relative ones. Now, based on the information contained in Table 1, let
p_k be the minimum value in the k-th column and define α_k = 1 - p_k. Compute the weights from the conditions
    π_k = α_k / Σ_{i=1}^{p} α_i,   Σ_{k=1}^{p} π_k = 1,   (4.17)
and proceed to find the largest values of the objective functions in
the domain of feasible solutions, X. The newly formulated problem is
Table 1. Payoff table for the step method.

            Z_1         Z_2         . . .    Z_p
    x^1     Z_1(x^1)    Z_2(x^1)    . . .    Z_p(x^1)
    x^2     Z_1(x^2)    Z_2(x^2)    . . .    Z_p(x^2)
    .       .           .                    .
    x^k     Z_1(x^k)    Z_2(x^k)    . . .    Z_p(x^k)
    .       .           .                    .
    x^p     Z_1(x^p)    Z_2(x^p)    . . .    Z_p(x^p)
    max Σ_{i=1}^{p} π_i Z_i(x)   (4.18)
    subject to
        x ∈ X.
The solution obtained, x^1, is then used to generate the vector
Z^1(x^1) = (Z_1(x^1), Z_2(x^1), . . ., Z_p(x^1)). The following question is
now posed to the decision maker: "Do all the objective functions
represent satisfactory values?" In the case of an affirmative answer
the problem is solved and the vector Z^1(x^1) represents the desired result.
In the case of a negative answer, the decision maker selects the objective function Z_k(x) which has the least satisfactory value and a value
K_k to redefine a new domain: x ∈ X, and Z_k(x) ≥ K_k. Again, a solution
x^2 and vector Z^2(x^2) are obtained. The procedure continues until an
acceptable lower bound K_k is found for each objective function.
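The payoff table and the weights of (4.17) are easy to mechanize; a minimal sketch follows. The three-objective LP used here (its coefficient rows and single constraint) is hypothetical and chosen only so that the column maxima fall on the principal diagonal as described above.

```python
# A minimal sketch of the Stem payoff table and the weights of (4.17),
# assuming a hypothetical LP with three linear objectives Z_k = C[k] . x
# over X = {x >= 0, x1 + x2 <= 10}.
import numpy as np
from scipy.optimize import linprog

C = np.array([[4.0, 1.0],
              [1.0, 3.0],
              [2.0, 2.0]])                 # rows are objective coefficient vectors
A_ub, b_ub = np.array([[1.0, 1.0]]), np.array([10.0])
bounds = [(0, None)] * 2

def optimize(c, sense):                    # sense = +1 maximize, -1 minimize
    res = linprog(-sense * c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x, sense * -res.fun

p = C.shape[0]
payoff = np.zeros((p, p))
for k in range(p):                         # row k: all objectives evaluated at x^k
    xk, _ = optimize(C[k], +1)
    payoff[k] = C @ xk

M = payoff.max(axis=0)                                       # column maxima
m = np.array([optimize(C[k], -1)[1] for k in range(p)])      # minima over X
relative = (payoff - m) / (M - m)                            # rescale to [0, 1]
alpha = 1.0 - relative.min(axis=0)                           # alpha_k = 1 - p_k
weights = alpha / alpha.sum()                                # equation (4.17)
print(payoff, weights, sep="\n")
```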
Sequential Multi-objective Problem Solving (SEMOPS)
In this method, Monarchi et al. (1973) formulate the preferences
of the decision maker into "aspiration levels" to strengthen the decision
maker-analyst dialogue. The problem is defined as follows:
    min Σ_{i=1}^{p} d_i   (4.19)
    subject to
        x ∈ X,

where d_i = A_i / y_i is a dimensionless indicator,

    y_i = [Z_i(x) - Z_i,min] / [Z_i,max - Z_i,min] + ε_i(x),

    ε_i(x) = 0, if Z_i(x) > Z_i,min,
           = 10^{-300}, if Z_i(x) = Z_i,min,

    Z_i,max = max_{x∈X} Z_i(x),   Z_i,min = min_{x∈X} Z_i(x),

    A_i = [AL_i - Z_i,lower] / [Z_i,upper - Z_i,lower],

with Z_i,lower and Z_i,upper denoting lower and upper bounds on Z_i(x) over X,
and AL_i ∈ R is the aspiration level of the decision maker for the i-th
objective. The method now proceeds in a manner similar to the Step
method with the decision maker changing the values of his aspiration
levels, AL., as elements of the nondominated set are generated. The
authors present several functional forms of the indicator d i to suit
different problems. The procedure terminates whenever the decision
maker feels that a particular set of values is "close enough" to his
prescribed set of aspiration levels.
Tradeoff Development Method (TRADE)
In this method (Goicoechea, Duckstein and Bulfin 1976; Goicoechea,
Duckstein and Fogel 1976a), the DM proceeds from one nondominated solution
to another, evaluating tradeoffs between individual objective functions.
This technique involves the formulation of a surrogate objective function and has been applied to decision making in a case study of the
Charleston watershed in southern Arizona. No provisions are made in
this method to handle stochastic parameters.
Probabilistic Programming Methods
The development of probabilistic programming methods has resulted from the need to deal with at least three sources of error: (1)
the errors and variations in parameters, which sometimes can be associated with probability measures, (2) the presence of risk, which sometimes allows a meaningful numerical representation of the decision
maker's utility function, and finally (3) the requirement of developing
optimal decision rules. These sources of error exhibit themselves in
both single-objective and multi-objective programming problems but have been dealt with exclusively in the first class of problems.
Three main approaches to probabilistic programming are recognized in the literature:
1. probabilistic sensitivity analysis,
2. decision-theoretic programming models, and
3. risk programming in linear programming (LP) models which include:
• chance-constrained programming (CCP)
• two-stage programming (TSP)
• stochastic linear programming (SLP)
• transition probability programming (TPP).
A typical problem in probabilistic sensitivity analysis is the specific
characterization of the way the errors enter randomly into the model.
For instance, if the errors are small and they preserve the indices of
the optimal basic activities for all admissible perturbations, then
Prekopa (1966) has shown that the objective function can be expanded
into a Taylor's series around the expected-value solution, and that
this series has an asymptotic normal distribution. The idea that the
choice between alternative risky solutions may be specified through
utility functions, where the utility function has arguments in the form
of expected profit, its variance, upper and lower bounds, etc., is of
considerable importance and has been treated by Freund (1956) and
Markowitz (1959) in portfolio investment selection.
Problems in the second approach involve specific features of
sequential decision-making, e.g., prior and posterior distributions of
parameters, penalty functions for nonfeasibility, introduction of aspiration levels through fractile criteria, etc. Over and above the
requirements of specification of probability variations, it is necessary to introduce rules of adjustment when expectations are not realized
for any part of the model. The deterministic approach to probabilistic
programming through fractile criteria (Geoffrion 1967b; Roy 1952), for
instance, assumes the vector of net prices in the objective function has
a multi-normal distribution and proceeds to relate aspiration levels to
fractiles of the distribution. Empirical applications of this approach
to input-output agricultural studies have been made by Sengupta and
Tintner (1964) to determine the sensitivity of optimal profits to the decision
variables. The basic difficulty with most of these economic models is
variables. The basic difficulty with most of these economic models is
that the vector of net prices is not distributed like a multivariate
normal vector, and if random variables with non-negative ranges are used
the resulting total distribution is extremely difficult to deal with in
the context of nonlinear programming.
Methods of risk programming essentially convert a probabilistic
LP model into a nonlinear deterministic equivalent form. Of the various
methods in risk programming chance constrained programming alone will
be discussed in some detail.
Chance-constrained Programming (CCP)
The simplest case of chance-constrained programming, developed by
Charnes and Cooper (1961), is the "expected value" E-model presented in
Chapter 2.
Some remarks about the CCP may be in order. First, the assumption of normality needs to be relaxed since negative values of ranges
are meaningless for economic elements such as price and agricultural
yield. Second, the development of deterministic equivalents which are
distribution-free would be an important generalization. And third, the
expected value criterion may be unsuitable for a gambling decision
maker and, conversely, one interested in protection against extreme
events.
Two-stage Programming (TSP)
This method is generally attributed to Dantzig and Madansky
(1961) and considers the general LP problem where the element b_i of the
resource vector b is a random variable with a known or estimable distribution function. The random parameter space of each b_i is divided into
two disjoint classes, one satisfying the constraints and the other not
satisfying the constraints. If the latter class is nonempty, then
there is a finite probability of the i-th constraint being violated,
and if a finite penalty cost f_i per unit can be associated with the
i-th violated constraint then the total penalty cost can be formulated.
Dantzig (1963) has considered cases where the b i are uniformly distrib-
uted.
The TSP approach has not yet been successful in developing
optimal decision rules in a sequential fashion, when the distribution
of b is other than uniform, or the resource allocation coefficients
a are also random variables.
Stochastic Linear Programming (SLP)
Methods of this type can be divided into two main approaches,
namely, the passive and the active approach. In the passive approach the
distribution of the optimal solution vector and the optimal objective
function is derived under the simplifying assumptions that the random
errors around the optimal basis are specially structured. This is the
approach developed by Babbar (1955) and more recently extended by
Prekopa (1966). In the active approach each resource vector is decomposed in terms of additional decision variables, and the distribution
of the optimal solution vector and objective function is truncated or
conditioned by a specific choice of the additional variables. Tintner
(1955) originated this approach, which has since been extended by Sengupta,
Tintner and Millham (1963).
Transition Probability Programming (TPP)
System analysis through transition probabilities has been found
useful in formulating sequential decisions over time, when the stochastic parameter can be viewed as a Markov process (Blackwell and Girshick
1954; Bellman 1957; Howard 1960). Such situations may be found in
queuing models, machine replacement problems, and inventory problems,
among many others. This approach proceeds to obtain an equivalent deterministic programming problem with some important results (Ghellinck
and Eppen 1967).
The above review of multi-objective programming methods points
to a variety of approaches and operational frameworks in which the DM
can articulate his preferences. None of them, however, deals directly
with uncertainty, either in the objective functions or the set of constraints. On the other hand, the probabilistic programming methods reviewed address this element of uncertainty but do so within the framework of single-objective, noninteracting programming. All of these
methods, with one exception, present approximating techniques to deal
with functions of random variables. Chance-constrained programming is
the exception, and is able to work with exact forms, i.e., deterministic
equivalents, of functions of normal random variables because of the
unique properties of this type of random variable.
In Chapter 5, a multi-objective, stochastic programming algorithm is developed with uncertainty-handling capability. Chapter 6
will extend this capability to a variety of random variables.
CHAPTER 5
DEVELOPMENT OF THE PROTRADE ALGORITHM
In preceding chapters general concepts were presented, a formal
statement of the research problem was made, and the literature on multiobjective and stochastic programming methods was reviewed. In response
to that problem statement, this chapter develops a multiobjective algorithm for decision making within the framework of stochastic programming to allow the decision maker to search for alternative solutions.
Development
This probabilistic trade-off development method, labeled PROTRADE,
involves the formulation of an initial surrogate objective function (SOF),
the estimation of a multiattribute utility function reflecting the DM's
preferences, the redefinition of the SOF, and the use of a cutting-plane
technique to solve the general nonlinear problem. In the algorithm itself reference is made to normal random variables. It is realized, however, that this is done for reasons of convenience primarily, as we are
also able to work with the other continuous random variables to be discussed in Chapter 6. Also, the form of the multiattribute utility function chosen is arbitrary and intended for illustrative purposes. Once a
particular problem is considered, its nature should dictate the form of
the utility function.
In the multiobjective methods reviewed in Chapter 4 the DM was
able to trade the values or levels of the objective functions against
one another. In PROTRADE, as we will see next, the DM is able to trade
the levels of the objective functions and their respective probabilities
of achievement against one another. A numerical example is presented.
There are twelve steps in the PROTRADE method:
1. Problem definition -- A vector of objective functions Z(x) and a
domain D_1 ⊂ R^m of admissible solutions are given,

    Z(x) = (Z_1(x), Z_2(x), . . ., Z_q(x)),   (5.1)
    D_1 = {x: x ∈ R^m, g_p(x) ≤ 0, x ≥ 0},
    Z'_i(x) = Σ_j c_ij x_j,   Z_i(x) = E[Z'_i(x)],   (5.2)
    c_ij ~ NORMAL[E(c_ij), VAR(c_ij)],

and the functions g_p(x) are differentiable and convex.
2. Range of objective functions -- Let x*_i be such that

    Z_i(x*_i) = max_{x∈D_1} Z_i(x),   i ∈ I[1,q],   (5.3)

and define the following,

    U_1 = (Z_1(x*_1), Z_2(x*_2), . . ., Z_q(x*_q)),   (5.4)
    R = {x*_i, i ∈ I[1,q]},   (5.5)
    Z_i,min = min_{x∈D_1} Z_i(x).   (5.6)
3. Initial surrogate objective function -- In order that all functions
G_i(x) be in [0,1], let

    F(x) = Σ_{i=1}^{q} G_i(x),   (5.7)

where

    G_i(x) = [Z_i(x) - Z_i,min] / [Z_i(x*_i) - Z_i,min].   (5.8)
4. Initial solution -- Maximize F(x), x ∈ D_1. The resulting solution x_1
is then used to generate an initial, nondominated goal vector G_1,

    G_1 = G(x_1).   (5.9)
5. Utility function choice -- A multidimensional utility function u(G)
is selected to reflect the DM's goal utility assessment. The multiplicative form (Fishburn 1970; Keeney 1973 and 1974),

    1 + k u(G) = Π_{i=1}^{q} [1 + k k_i u_i(G_i)],   (5.10)

is considered for illustrative purposes. The procedure to determine
the parameters k and k_i, which is presented in those references, will
be applied here.
6. Redefinition of the surrogate objective function -- A new SOF is defined using the results of Steps 3 and 5 as follows,

    S_1(x) = Σ_{i=1}^{q} w_i G_i(x),   (5.11)

where

    w_i = 1 + r [∂u(G)/∂G_i]_{G=G_1} / G_i(x_1),   (5.12)

and r is the step size required to yield a new goal vector in the
direction of a desired increment Δu(G). Accordingly,
(a) Compute u(G_1).
(b) Decide on a value for 0 < Δu(G) < 1.
(c) Solve for the step size r in

    Δu(G) = u[G_1 + r ∇u(G_1)] - u(G_1).   (5.13)
7. Generation of alternative solution -- Maximize S_1(x), x ∈ D_1. The resulting solution x_2 is then used to generate the vectors G_2 and U_2,

    G_2 = G(x_2),   U_2 = Z(x_2).   (5.14)

8. Generate the vector V_1, which expresses the trade-off between each goal value and
its probability of achievement,

    V_1 = ( (G_1(x_2), 1 - α_1), (G_2(x_2), 1 - α_2), . . ., (G_q(x_2), 1 - α_q) ),   (5.15)

where the element 1 - α_i is such that

    Prob [Z_i(x) ≥ Z_i(x_2)] ≥ 1 - α_i,   (5.16)

or its deterministic equivalent (Charnes and Cooper 1961)
    Σ_{j=1}^{n} E(c_ij) x_j + K_{α_i} [x^T Λ_i x]^{1/2} ≥ Z_i(x_2).   (5.17)

In (5.17), K_{α_i} is a standard normal value such that Φ(K_{α_i}) = α_i,
and Φ represents the cumulative distribution function. The variance-covariance matrix Λ_i is symmetric and positive-definite, and the
quadratic form x^T Λ_i x is then positive-definite. Accordingly,
1 - α_i ≤ 0.5, so far.
9. The DM now poses the following question: "Are all the Z_i(x_2) values
satisfactory?" In the affirmative case U_2 represents a desired
solution. Otherwise, continue.
10. Select the objective function Z_k(x) with the least satisfactory pair
(G_k(x_2), 1 - α_k) and specify e_k and α°_k ∈ R[0,1] such that

    Prob [Z_k(x) ≥ e_k] ≥ 1 - α°_k.   (5.18)

The DM will specify the above if he is not satisfied with either the
value achieved for the k-th goal, G_k(x_2), or the probability of achieving that value, 1 - α_k, or both.
11. Redefine the solution space -- Define the new x-space D_2 as follows:

    g_p(x) ≤ 0,   p ∈ I[1,P],
    Σ_{j=1}^{n} E(c_kj) x_j + K_{α°_k} [x^T Λ_k x]^{1/2} ≥ e_k,   (5.19)
    x ≥ 0.

From (5.19) it is seen that the DM is now able to trade directly the
value of the k-th goal against the probability of achieving such value,
as long as the inequality is satisfied.
12. Generate the new surrogate objective function S_2(x),

    S_2(x) = Σ_{i≠k} w_i G_i(x),   (5.20)

and go back to Step 7 to maximize S_2(x) under D_2. S_2(x) will contain one term less, since the k-th objective function now forms part
of D_2. Repeat this sequence until a satisfactory vector V_2, below,
is achieved,

    V_2 = ( (e_1, 1 - α°_1), (e_2, 1 - α°_2), . . ., (e_q, 1 - α°_q) ).   (5.21)

By now, the DM has gained considerable information on the trade-off
between various goal values and the effect of the physical limitations of the problem. The DM is now in a position to reassess his
utility function, if he decides to, and go back to Step 6 to continue his sequential search for a satisfactum.
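The probability-of-achievement element used in Steps 8 and 10 is easy to compute for a normal objective. The sketch below evaluates, for a hypothetical candidate solution x and hypothetical means and covariance matrix, the largest 1 - α for which the chance statement (5.16) holds, which is exactly what the deterministic equivalent (5.17) encodes.

```python
# A minimal sketch of the probability-of-achievement computation behind
# (5.16)-(5.17), assuming a single normal objective Z'(x) = c1*x1 + c2*x2
# with hypothetical means, a hypothetical (diagonal) covariance matrix,
# and a hypothetical candidate solution x.
import numpy as np
from scipy.stats import norm

E_c = np.array([2.0, 3.0])            # E(c_j)
Lam = np.diag([16.0, 4.0])            # variance-covariance matrix of the c_j
x = np.array([3.0, 2.5])              # candidate solution (hypothetical)

mean = E_c @ x                        # E[Z'(x)]
std = np.sqrt(x @ Lam @ x)            # (x^T Lam x)^(1/2)
target = mean                         # ask to achieve at least the expected level

# Prob[Z'(x) >= target] = 1 - Phi((target - mean)/std)
one_minus_alpha = 1.0 - norm.cdf((target - mean) / std)
print("Prob[Z'(x) >= %.2f] = %.3f" % (target, one_minus_alpha))   # 0.500 here
```

When the target equals the expected level, the probability is 0.5, which is the "1 - α_i ≤ 0.5, so far" remark following (5.17).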
Numerical Example
The problem considered for this example is the following:
Let

    f_1(X) = c_11 X_1 + c_12 X_2,
    f_2(X) = c_21 X_1² + c_22 X_2²,
    f_3(X) = c_31 X_1 + c_32 X_2,

where c_ij ~ NORMAL[E(c_ij), VAR(c_ij)] and

    E[c_11] = 2,   VAR[c_11] = 16,
    E[c_12] = 3,   VAR[c_12] = 4,
    E[c_21] = 1,   VAR[c_21] = 1,
    E[c_22] = 1,   VAR[c_22] = 1,
    E[c_31] = 4,   VAR[c_31] = 9,
    E[c_32] = 1,   VAR[c_32] = 1;

also, let a feasible region be defined by

    X_1² + X_2² ≤ 25,
    3X_1 + X_2 ≤ 12,
    X_2 ≥ 1.
Step 1 -- Problem Definition

Let Z_1(X) = f_1(X), Z_2(X) = -f_2(X), Z_3(X) = f_3(X),

    Z(X) = (Z_1(X), Z_2(X), Z_3(X)),   X ∈ D_1,

where

    D_1 = {X: X_1² + X_2² ≤ 25, 3X_1 + X_2 ≤ 12, X_2 ≥ 1}.

Constraint set D_1 is shown in Figure 3.
Step 2 -- Range of Objective Functions

    U_1 = (17.94, -1.00, 15.64),
    R = {(2.58, 4.26), (0, 1.00), (3.66, 1.00)},
    Z_1,min = 3.00, at (0,1),
    Z_2,min = -25.00, at (0,5),
    Z_3,min = 1.00, at (0,1).
Figure 3. Constraint set D_1.
Step 3 -- Initial Surrogate Objective Function F(X)

    G_1(X) = [Z_1(X) - 3.0] / [17.94 - 3.0] = 0.133 X_1 + 0.200 X_2 - 0.20,
    G_2(X) = [Z_2(X) + 25.0] / [-1.0 + 25.0] = -0.041 X_1² - 0.041 X_2² + 1.04,
    G_3(X) = [Z_3(X) - 1.0] / [15.64 - 1.0] = 0.273 X_1 + 0.068 X_2 - 0.068,
    F(X) = 0.406 X_1 - 0.041 X_1² + 0.268 X_2 - 0.041 X_2² + 0.78.
Step 4 -- Initial Solution

    max F(X) = 0.406 X_1 - 0.041 X_1² + 0.268 X_2 - 0.041 X_2² + 0.78
    s.t.
        X_1² + X_2² ≤ 25,
        3X_1 + X_2 ≤ 12,
        X_2 ≥ 1.

The resulting solution X_1 = (3.09, 2.73), a boundary point, is then used
to generate an initial goal vector G_1 (see Figure 4),

    G_1 = (0.756, 0.344, 0.968).
Step 5 -- Generate a Multidimensional Utility Function u(G)
For purposes of illustrating the algorithm, use the form

    k u(G) = Π_{i=1}^{3} [1 + k k_i c_i (1 - e^{b_i G_i})] - 1.0,

and assume the following parameters:
Figure 4. Initial solution vector X_1.
    i      k_i       c_i       b_i
    1      0.40      1.156     -2.00
    2      0.60      1.018     -4.00
    3      0.15      1.030     -3.50

and k = -0.40.
Step 6 -- Define a New Surrogate Objective Function

(a) Compute u(G_1):

    k k_1 u_1(G_1) = -0.144,
    k k_2 u_2(G_2) = -0.182,
    k k_3 u_3(G_3) = -0.059,
    u(G_1) = 0.852.

(b) Decide on a utility increment 0 < Δu(G) < 1. Let Δu(G) = 0.10.

(c) Solve for the step size r,

    G_1 + r ∇u(G_1) = (0.756, 0.344, 0.968) + r (0.160, 0.520, 0.010).

Now, solving for r in the equation

    u[G_1 + r ∇u(G_1)] - 0.952 = 0

yields r = 0.50. Hence

    w_1 = 1.0 + (0.50)(0.160)/(0.756) = 1.105,
    w_2 = 1.0 + (0.50)(0.520)/(0.344) = 1.755,
    w_3 = 1.0 + (0.50)(0.010)/(0.968) = 1.005,

and the new SOF becomes

    S_1(X) = 1.105 G_1(X) + 1.755 G_2(X) + 1.005 G_3(X)
           = 0.421 X_1 - 0.072 X_1² + 0.289 X_2 - 0.072 X_2² + 1.544.
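The Step 6 arithmetic can be checked with a few lines of code. The sketch below evaluates u(G) in the assumed exponential form, solves (5.13) for r by root finding, and forms the weights of (5.12); it uses a numerical gradient, so the printed values agree with the hand-rounded numbers above only approximately (the gradient quoted above was rounded to three figures).

```python
# A minimal sketch reproducing the Step 6 arithmetic: evaluate u(G) for the
# assumed utility form, solve u[G1 + r*grad_u(G1)] = u(G1) + 0.10 for r,
# and form the weights w_i of (5.12).  Numerical gradients are used, so the
# results match the hand computation only to within rounding.
import numpy as np
from scipy.optimize import brentq

k = -0.40
ki = np.array([0.40, 0.60, 0.15])
ci = np.array([1.156, 1.018, 1.030])
bi = np.array([-2.00, -4.00, -3.50])

def u(G):
    return (np.prod(1.0 + k * ki * ci * (1.0 - np.exp(bi * np.asarray(G)))) - 1.0) / k

def grad_u(G, h=1e-6):
    G = np.asarray(G, dtype=float)
    return np.array([(u(G + h * e) - u(G - h * e)) / (2 * h) for e in np.eye(3)])

G1 = np.array([0.756, 0.344, 0.968])
du = 0.10
g = grad_u(G1)
r = brentq(lambda r: u(G1 + r * g) - (u(G1) + du), 0.0, 5.0)
w = 1.0 + r * g / G1                     # equation (5.12)
print("u(G1) =", round(u(G1), 3), " r =", round(r, 2), " w =", np.round(w, 3))
```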
Step 7 -- Generation of Alternative Solution

Maximize S_1(X) subject to X ∈ D_1 to yield X_2 = (2.92, 2.00), an
interior point of D_1; also,

    U_2 = (11.84, -12.52, 13.68),
    G_2 = (0.588, 0.527, 0.873).
Step 8 -- Generate Vector V_1

    V_1 = ( (0.588, 0.500), (0.527, 0.500), (0.873, 0.500) ),

and the probability of achieving the level G_i(X_2) is 0.500, or better.
Step 9 -- Assume U2 is Not a Satisfactum
Step 10 -- Select a Pair (G_k(X_2), 1 - α_k)

The DM is not satisfied with, say, what he has obtained for G_2,
e.g., (0.527, 0.500), and would like to specify that

    Prob [Z_2(X) ≥ -12.52] ≥ 0.70.
Step 11 -- Define Constraint Space D_2

    X_1² + X_2² ≤ 25,
    3X_1 + X_2 ≤ 12,
    X_2 ≥ 1,

and

    -X_1² - X_2² - 0.53 [X_1⁴ + X_2⁴]^{1/2} ≥ -12.52,

where Φ(-0.53) = 0.30. Space D_2 is shown graphically in Figure 5.
Step 12 -- Generate the New SOF S_2(X)

    S_2(X) = w_1 G_1(X) + w_3 G_3(X)
           = 0.421 X_1 + 0.289 X_2 - 0.281,

and maximize it subject to X ∈ D_2, to yield X_3 = (2.28, 2.00), as shown
in Figure 5. Also,

    G_1(X_3) = 0.503,   G_2(X_3) = 0.663,   G_3(X_3) = 0.698.

Now, to determine 1 - α_i, for i = 1,

    2(2.28) + 3(2.00) + K_{α_1} [16(2.28)² + 4(2.00)²]^{1/2} ≥ e_1,
    10.56 + K_{α_1}(9.95) ≥ e_1.

i) For e_1 = 11.84 (from U_2),

    K_{α_1} ≥ 0.128,   Φ(0.128) = 0.550,

that is, 1 - α_1 = 0.450. Also, for e_1 = 17.94 (the "best" G_1 can do, from U_1),

    K_{α_1} ≥ 0.741,   Φ(0.741) = 0.770,   1 - α_1 = 0.230.

That is to say, the DM can achieve

    Z_1 = 17.94 with probability 0.230, or better, or
    Z_1 = 11.84 with probability 0.450, or better,
Figure 5. Space D_2 and solution vector X_3.
and still maintain Z 2 = -12.52 with probability 0.700, or better.
ii) For i = 2: for Z_2 = -12.52, 1 - α_2 = 0.700, as just shown above.
For Z_2 = e_2 = -1.0 (from U_1),

    -(2.28)² - (2.00)² + K_{α_2} [(2.28)⁴ + (2.00)⁴]^{1/2} ≥ -1.0,
    -9.198 + K_{α_2}(6.55) ≥ -1.0,
    K_{α_2} ≥ 1.25,   Φ(1.25) = 0.894,   1 - α_2 = 0.106.
iii) Similarly, for i = 3,

    4(2.28) + (2.00) + K_{α_3} [9(2.28)² + (2.00)²]^{1/2} ≥ e_3,
    11.12 + K_{α_3}(7.12) ≥ e_3,

and for e_3 = 13.68 (from U_2),

    K_{α_3} ≥ 0.359,   Φ(0.359) = 0.639,   1 - α_3 = 0.361;

for e_3 = 15.62 (from U_1),

    K_{α_3} ≥ 0.632,   Φ(0.632) = 0.735,   1 - α_3 = 0.265.

Thus, the DM can choose V_2 to be

    V_2 = ( (0.588, 0.450), (0.527, 0.700), (0.873, 0.361) ).
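The 1 - α values in V_2 can be verified directly from the normal deterministic equivalent at X_3 = (2.28, 2.00), using the problem data given at the beginning of this example; no new assumptions are needed beyond those data. A short check:

```python
# A minimal sketch checking the 1 - alpha values computed above at
# X3 = (2.28, 2.00): for each objective, 1 - alpha = 1 - Phi((e - mean)/std).
import numpy as np
from scipy.stats import norm

x1, x2 = 2.28, 2.00

def prob_at_least(mean, var, e):
    return 1.0 - norm.cdf((e - mean) / np.sqrt(var))

# Z1 = c11*X1 + c12*X2, E = (2, 3), VAR = (16, 4)
print(round(prob_at_least(2*x1 + 3*x2, 16*x1**2 + 4*x2**2, 11.84), 3))   # ~0.45
# Z2 = -(c21*X1^2 + c22*X2^2), E = (1, 1), VAR = (1, 1)
print(round(prob_at_least(-(x1**2 + x2**2), x1**4 + x2**4, -12.52), 3))  # ~0.70
# Z3 = c31*X1 + c32*X2, E = (4, 1), VAR = (9, 1)
print(round(prob_at_least(4*x1 + x2, 9*x1**2 + x2**2, 13.68), 3))        # ~0.36
```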
There are several ways to structure the preferences of the DM
and bring them into the analysis. The utility function approach suggested here is only one of them and was intended, primarily, to stress
the need to reconcile the expectations of the DM with the physical
realities of the problem at hand. The nature of the problem, also, will
dictate the type of utility function to use. The above analysis illustrates how the DM can trade the level of achievement for each objective function directly against the probability of achieving that
level. Then, as the various trade-offs are developed and new insight
into the problem is gained, the DM can reformulate his preferences if
he so desires and initiate a second cycle in the analysis and search
for new alternative solutions.
The random variables presented in the algorithm itself are normal
random variables. The algorithm, however, will accommodate any type of
random variable, of the continuous or discrete type. In the next chapter
some theorems are developed to obtain deterministic equivalents for
functions of exponential, uniform, and beta random variables.
CHAPTER 6
DETERMINISTIC EQUIVALENTS IN STOCHASTIC PROGRAMMING
This chapter addresses the subject of continuous random variables in the set of constraints of a stochastic programming problem, to
be satisfied under specified probability limits, and presents a large
class of deterministic equivalents. These deterministic equivalents no
longer contain any of the initial random variables.
In the algorithm of the preceding chapter a deterministic equivalent was introduced for the case of a function of normal random variables. In this chapter the existence of deterministic equivalents is
established for functions of continuous random variables with any distribution function. When the random variables appear in the objective
function, the original stochastic problem can be transformed into an
equivalent deterministic problem. A numerical example is presented to
illustrate the above.
General Method
The mathematical approach discussed here makes use of the "change
of variable technique" (Hogg and Craig 1972; Lindgren 1968) to obtain
the distribution of a function of several random variables with given
distributions. Functions of exponential, uniform, and beta random variables will be considered.
Definition 1
Consider the inequality

    Σ_{i=1}^{n} c_i x_i ≤ b,   (6.1)

where x_i is a mathematical variable and some or all of the other
variables, b and c_i, are random variables with known distributions.
Then, the probability statement

    Prob [ Σ_{i=1}^{n} c_i x_i ≤ b ] ≥ 1 - α   (6.2)

is denoted a chance-constrained inequality and is realized with a minimum probability of 1 - α, α ∈ R[0,1].
Definition 2
Let a new random variable y be such that

    y = Σ_{i=1}^{n} c_i x_i,

with cumulative distribution function F(·). Then the probability
statement

    Prob [ Σ_{i=1}^{n} c_i x_i ≤ b ] ≥ 1 - α

is realized if and only if

    F(b) ≥ 1 - α,   (6.3)

and is termed a deterministic equivalent of (6.2).
Theorem 1. Let c_1 and c_2 be mutually stochastically independent
random variables having exponential distributions with parameters λ_1
and λ_2, respectively. Then, the random variable y = c_1 x_1 + c_2 x_2,
where x_1, x_2 ∈ R^{++}, is distributed as follows,

    g(y) = [λ_1 λ_2 / (λ_1 x_2 - λ_2 x_1)] [ e^{-(λ_2/x_2) y} - e^{-(λ_1/x_1) y} ],   0 < y < ∞,   (6.4)
         = 0, otherwise.
Proof: The joint distribution of c_1 and c_2 is given by

    f(c_1, c_2) = f(c_1) f(c_2) = λ_1 λ_2 e^{-λ_1 c_1 - λ_2 c_2},   (c_1, c_2) ∈ A,
                = 0, otherwise,

where A = {(c_1, c_2): 0 < c_1 < ∞, 0 < c_2 < ∞}. Consider the transformation

    y_1 = c_1 x_1 + c_2 x_2,   y_2 = c_2,

with inverse

    c_1 = w_1(y_1, y_2) = (y_1 - x_2 y_2)/x_1,   c_2 = w_2(y_1, y_2) = y_2,

and note that the set A maps into the set B,

    B = {(y_1, y_2): 0 < y_1 < ∞, 0 < y_2 < y_1/x_2};

also,

    g(y_1, y_2) = f[w_1(y_1, y_2), w_2(y_1, y_2)] |J|,

where J is the Jacobian

    J = det [ ∂c_1/∂y_1  ∂c_1/∂y_2 ; ∂c_2/∂y_1  ∂c_2/∂y_2 ] = 1/x_1,

so that

    g(y_1, y_2) = (λ_1 λ_2 / x_1) e^{-(λ_1/x_1) y_1} e^{(λ_1 x_2/x_1 - λ_2) y_2},   (y_1, y_2) ∈ B,
                = 0, otherwise.

Finally, the marginal probability density function (pdf) g(y_1) is given by

    g(y_1) = ∫_0^{y_1/x_2} g(y_1, y_2) dy_2
           = (λ_1 λ_2 / x_1) e^{-(λ_1/x_1) y_1} ∫_0^{y_1/x_2} e^{(λ_1 x_2/x_1 - λ_2) y_2} dy_2
           = [λ_1 λ_2 / (λ_1 x_2 - λ_2 x_1)] [ e^{-(λ_2/x_2) y_1} - e^{-(λ_1/x_1) y_1} ],   0 < y_1 < ∞,
           = 0, otherwise.
Theorem 2. Let c_1 and c_2 be mutually stochastically independent
random variables having exponential distributions with parameters λ_1 and
λ_2, respectively. Also, consider the random variable c_1 x_1 + c_2 x_2,
where x_1, x_2 ∈ R^{++}, and the constant α ∈ R[0,1]. Then, a deterministic
equivalent of the probability statement

    Prob [c_1 x_1 + c_2 x_2 ≤ b] ≥ 1 - α,

where b ∈ range (c_1 x_1 + c_2 x_2), is given by the nonlinear inequality

    λ_1 x_2 e^{-(λ_2/x_2) b} - λ_2 x_1 e^{-(λ_1/x_1) b} - α λ_1 x_2 + α λ_2 x_1 ≤ 0.   (6.5)
Proof: From Theorem 1, the random variable y = c_1 x_1 + c_2 x_2 has the
distribution

    g(y) = [λ_1 λ_2 / (λ_1 x_2 - λ_2 x_1)] [ e^{-(λ_2/x_2) y} - e^{-(λ_1/x_1) y} ],   0 < y < ∞,
         = 0, otherwise.

Its cumulative distribution G(y) is given by

    G(y) = ∫_0^{y} g(t) dt
         = [λ_1 λ_2 / (λ_1 x_2 - λ_2 x_1)] [ (x_2/λ_2)(1 - e^{-(λ_2/x_2) y}) - (x_1/λ_1)(1 - e^{-(λ_1/x_1) y}) ],   (6.6)

for 0 < y < ∞. Then, the probability statement

    Prob [c_1 x_1 + c_2 x_2 ≤ b] ≥ 1 - α

is realized if and only if G(b) ≥ 1 - α, that is,

    [λ_1 λ_2 / (λ_1 x_2 - λ_2 x_1)] [ (x_2/λ_2)(1 - e^{-(λ_2/x_2) b}) - (x_1/λ_1)(1 - e^{-(λ_1/x_1) b}) ] ≥ 1 - α,

or

    λ_1 x_2 e^{-(λ_2/x_2) b} - λ_2 x_1 e^{-(λ_1/x_1) b} - α λ_1 x_2 + α λ_2 x_1 ≤ 0,

as was to be proven. An alternative approach to the results shown above
has been suggested by Szidarovszky (1976) and is presented in
Appendix D.
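The closed form (6.6) and the deterministic equivalent (6.5) can be checked quickly by simulation. The sketch below compares the analytic G(b) with an empirical estimate for arbitrary, hypothetical choices of λ_1, λ_2, x_1, x_2, and b, and evaluates the left-hand side of (6.5) at α = 1 - G(b), where it should vanish.

```python
# A minimal Monte Carlo check of (6.5)-(6.6) for hypothetical parameter values.
import numpy as np

rng = np.random.default_rng(0)
lam1, lam2, x1, x2 = 0.1, 0.2, 3.0, 4.0
b = 60.0

# Analytic CDF of y = c1*x1 + c2*x2, equation (6.6)
G = (lam1 * lam2 / (lam1 * x2 - lam2 * x1)) * (
    (x2 / lam2) * (1 - np.exp(-lam2 * b / x2))
    - (x1 / lam1) * (1 - np.exp(-lam1 * b / x1)))

# Empirical CDF from simulated exponential coefficients (scale = 1/rate)
c1 = rng.exponential(1 / lam1, 200_000)
c2 = rng.exponential(1 / lam2, 200_000)
G_mc = np.mean(c1 * x1 + c2 * x2 <= b)

# Left-hand side of (6.5); it equals zero when alpha = 1 - G(b)
alpha = 1 - G
lhs = (lam1 * x2 * np.exp(-lam2 * b / x2) - lam2 * x1 * np.exp(-lam1 * b / x1)
       - alpha * lam1 * x2 + alpha * lam2 * x1)
print(round(G, 4), round(G_mc, 4), round(lhs, 8))   # G ~ G_mc, lhs ~ 0
```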
Theorem 3. Let c_1 and c_2 be mutually stochastically independent
random variables having uniform distributions with parameters (0, b_1) and
(0, b_2), respectively. Also let x_1, x_2 ∈ R^{++}. Then the new random variable y = c_1 x_1 + c_2 x_2 is distributed as follows,

    g(y) = y / (b_1 b_2 x_1 x_2),   0 ≤ y ≤ b_2 x_2,   (6.7)

         = 1 / (b_1 x_1),   b_2 x_2 ≤ y ≤ b_1 x_1,   (6.8)

         = (b_1 x_1 + b_2 x_2 - y) / (b_1 b_2 x_1 x_2),   b_1 x_1 ≤ y ≤ b_1 x_1 + b_2 x_2,   (6.9)

         = 0, otherwise.
Proof: The joint distribution of c_1 and c_2 is given by

    f(c_1, c_2) = (1/b_1)(1/b_2),   for (c_1, c_2) ∈ A = {(c_1, c_2): 0 < c_i < b_i, i = 1, 2},
                = 0, otherwise;

now, the transformation y_1 = c_1 x_1 + c_2 x_2, y_2 = c_2 maps the set A into
the set B defined as follows,

    B = {(y_1, y_2): 0 < y_2 < b_2, 0 < y_1 - x_2 y_2 < b_1 x_1},

as shown in Figures 6 and 7.

Figure 6. The set A in the c_1-c_2 plane.

Figure 7. The set B in the y_1-y_2 plane.

The joint distribution of y_1 and y_2 is then given by

    g(y_1, y_2) = 1 / (b_1 b_2 x_1),   for (y_1, y_2) ∈ B,
                = 0, otherwise,

and, finally, the pdf g(y_1),

    g(y_1) = ∫_0^{y_1/x_2} g(y_1, y_2) dy_2 = y_1 / (b_1 b_2 x_1 x_2),   0 ≤ y_1 ≤ b_2 x_2,

           = ∫_0^{b_2} g(y_1, y_2) dy_2 = 1 / (b_1 x_1),   b_2 x_2 ≤ y_1 ≤ b_1 x_1,

           = ∫_{(y_1 - b_1 x_1)/x_2}^{b_2} g(y_1, y_2) dy_2 = (b_1 x_1 + b_2 x_2 - y_1) / (b_1 b_2 x_1 x_2),
                b_1 x_1 ≤ y_1 ≤ b_1 x_1 + b_2 x_2,

           = 0, otherwise;

whenever b_1 x_1 ≤ b_2 x_2, the definition of g(y_1) above can be modified
accordingly.
Theorem 4. Let c_1 and c_2 be mutually stochastically independent
random variables having uniform distributions with parameters (0, b_1) and
(0, b_2), respectively. Also let x_1, x_2 ∈ R^{++}, α ∈ R[0,1], and
b_2 x_2 ≤ b_1 x_1. Then, a deterministic equivalent of the probability statement

    Prob [c_1 x_1 + c_2 x_2 ≤ d] ≥ 1 - α,

where d ∈ range (c_1 x_1 + c_2 x_2), is given by the nonlinear inequalities

    2(1 - α) b_1 b_2 x_1 x_2 ≤ d²,   0 ≤ d ≤ b_2 x_2,   (6.10)

    2(1 - α) b_1 x_1 ≤ 2d - b_2 x_2,   b_2 x_2 ≤ d ≤ b_1 x_1,   (6.11)

    2(1 - α) b_1 b_2 x_1 x_2 - 2d(b_1 x_1 + b_2 x_2) + (b_1 x_1)² + (b_2 x_2)² ≤ -d²,
        b_1 x_1 ≤ d ≤ b_1 x_1 + b_2 x_2.   (6.12)
Proof: From Theorem 3, the distribution of the random variable y =
c_1 x_1 + c_2 x_2 is given by (6.7), (6.8), and (6.9). The cumulative distribution G(y) is then given as follows,

    G(y) = y² / (2 b_1 b_2 x_1 x_2),   0 ≤ y ≤ b_2 x_2,

         = (2y - b_2 x_2) / (2 b_1 x_1),   b_2 x_2 ≤ y ≤ b_1 x_1,

         = [ 2y(b_1 x_1 + b_2 x_2) - y² - (b_1 x_1)² - (b_2 x_2)² ] / (2 b_1 b_2 x_1 x_2),
              b_1 x_1 ≤ y ≤ b_1 x_1 + b_2 x_2,

         = 1,   b_1 x_1 + b_2 x_2 ≤ y;

then, the probability statement

    Prob [c_1 x_1 + c_2 x_2 ≤ d] ≥ 1 - α

is realized if and only if G(d) ≥ 1 - α, which, on each of the three
ranges of d, is inequality (6.10), (6.11), or (6.12), respectively.
Theorem 5. Let c_1 and c_2 be mutually stochastically independent
random variables having beta distributions with parameters (1,2) and
(2,3), respectively. Also, let x_1, x_2 ∈ R^{++} with x_2 ≤ x_1. Then the
random variable y = c_1 x_1 + c_2 x_2 is distributed as follows:

    g(y) = (k_1 k_2 / x_1) [ y²/(2x_2²) - 2y³/(3x_2³) + y⁴/(4x_2⁴)
              - y³/(6x_1x_2²) + y⁴/(6x_1x_2³) - y⁵/(20x_1x_2⁴) ],   0 ≤ y ≤ x_2,   (6.13)

         = (k_1 k_2 / x_1) [ 1/12 - y/(12x_1) + x_2/(30x_1) ],   x_2 ≤ y ≤ x_1,   (6.14)

         = (k_1 k_2 / x_1) [ (1 - y/x_1)(1/12 - v²/2 + 2v³/3 - v⁴/4)
              + (x_2/x_1)(1/30 - v³/3 + v⁴/2 - v⁵/5) ],
              v = (y - x_1)/x_2,   x_1 ≤ y ≤ x_1 + x_2,   (6.15)

         = 0, otherwise,

where k_1 = Γ(1+2)/[Γ(1)Γ(2)] = 2 and k_2 = Γ(2+3)/[Γ(2)Γ(3)] = 12.
Proof: The joint distribution of c_1 and c_2 is given by

    f(c_1, c_2) = f(c_1) f(c_2)
                = k_1 c_1^{α_1 - 1} (1 - c_1)^{β_1 - 1} k_2 c_2^{α_2 - 1} (1 - c_2)^{β_2 - 1},   (c_1, c_2) ∈ A,
                = 0, otherwise,

where k_i = Γ(α_i + β_i)/[Γ(α_i)Γ(β_i)] for i = 1, 2, the parameters are
α_1 = 1, β_1 = 2, α_2 = 2, β_2 = 3, and A = {(c_1, c_2): 0 < c_1 < 1, 0 < c_2 < 1}.
Now, let the transformation y_1 = c_1 x_1 + c_2 x_2, y_2 = c_2 map the set A into
the set B defined as follows,

    B = {(y_1, y_2): 0 < y_2 < 1, 0 < y_1 - x_2 y_2 < x_1},

as shown in Figures 8 and 9.

Figure 8. The set A in the c_1-c_2 plane, beta distribution.

Figure 9. The set B in the y_1-y_2 plane, beta distribution.

The joint distribution of y_1 and y_2 is then given by

    g(y_1, y_2) = (k_1 k_2 / x_1) [1 - (y_1 - x_2 y_2)/x_1] y_2 (1 - y_2)²,   (y_1, y_2) ∈ B,
                = 0, otherwise.

The values α_1 = 1, β_1 = 2, α_2 = 2, β_2 = 3 have been chosen to facilitate
the integration task but, obviously, the integration is still manageable
for a large combination of values. The marginal pdf g(y_1) is obtained by
integrating out y_2 over the section of B at y_1:

    g(y_1) = ∫_0^{y_1/x_2} g(y_1, y_2) dy_2,   0 ≤ y_1 ≤ x_2,

           = ∫_0^{1} g(y_1, y_2) dy_2,   x_2 ≤ y_1 ≤ x_1,

           = ∫_{(y_1 - x_1)/x_2}^{1} g(y_1, y_2) dy_2,   x_1 ≤ y_1 ≤ x_1 + x_2,

and carrying out these integrations yields (6.13), (6.14), and (6.15),
as was to be shown.
Theorem 6. Let c_1 and c_2 be mutually stochastically independent
random variables having beta distributions with parameters (1,2) and
(2,3), respectively. Also, let x_1, x_2 ∈ R^{++} with x_2 ≤ x_1, and
α ∈ R[0,1]. Then a deterministic equivalent of the probability statement

    Prob [c_1 x_1 + c_2 x_2 ≤ d] ≥ 1 - α,

where d ∈ range [c_1 x_1 + c_2 x_2], is given by the inequalities

    [120(1-α)/(k_1 k_2)] x_1² x_2⁴ - 20 d³ x_1 x_2² + 20 d⁴ x_1 x_2 + 5 d⁴ x_2²
        - 6 d⁵ x_1 - 4 d⁵ x_2 ≤ -d⁶,   0 ≤ d ≤ x_2,   (6.16)

    [120(1-α)/(k_1 k_2)] x_1² + 4 x_1 x_2 + x_2² - 10 d x_1 - 4 d x_2 ≤ -5 d²,
        x_2 ≤ d ≤ x_1,   (6.17)

    4v - 5v² + 5v⁴ - 4v⁵ + v⁶ ≥ 1 - [120 α/(k_1 k_2)] (x_1²/x_2²),   v = (d - x_1)/x_2,
        x_1 ≤ d ≤ x_1 + x_2,   (6.18)

where k_1 = Γ(1+2)/[Γ(1)Γ(2)] and k_2 = Γ(2+3)/[Γ(2)Γ(3)].
Proof: From Theorem 5, the distribution of the random variable
y = c_1 x_1 + c_2 x_2 is given by (6.13), (6.14), and (6.15). The cumulative
distribution G(y) = ∫_0^{y} g(t) dt is then given as follows:

    G(y) = (k_1 k_2 / x_1) [ y³/(6x_2²) - y⁴/(24x_1x_2²) - y⁴/(6x_2³) + y⁵/(20x_2⁴)
              + y⁵/(30x_1x_2³) - y⁶/(120x_1x_2⁴) ],   0 ≤ y ≤ x_2,

         = (k_1 k_2 / x_1) [ x_2/20 - x_2²/(60x_1) + (y - x_2)/12 + x_2(y - x_2)/(30x_1)
              - (y² - x_2²)/(24x_1) ],   x_2 ≤ y ≤ x_1,

         = 1 - x_2²/(5x_1²) + (k_1 k_2 x_2²/x_1²) [ v/30 - v²/24 + v⁴/24 - v⁵/30 + v⁶/120 ],
              v = (y - x_1)/x_2,   x_1 ≤ y ≤ x_1 + x_2.

Then, the probability statement

    Prob [c_1 x_1 + c_2 x_2 ≤ d] ≥ 1 - α

is realized if and only if G(d) ≥ 1 - α; writing this condition out on
each of the three ranges of d and clearing denominators yields the
inequalities (6.16), (6.17), and (6.18).
We note that the above results can now be generalized for a function of
any number of random variables of the continuous or discrete type, and
linear or nonlinear in the x coefficients, although the development may
be difficult, particularly for discrete random variables.
A Deterministic Transformation
This next section addresses the case where some or all of the
parameters in the objective function are random variables with known
distributions.
Consider the following problem. Let z(c,x) be a function of
c = (c_1, c_2, . . ., c_n), where c_i is a random variable with known distribution,
and of x = (x_1, x_2, . . ., x_n), n ≥ 2, such that x_i ∈ R^{++} and z
is linear in c. Also, let g(x) = (g_1(x), g_2(x), . . ., g_m(x)) represent an
m-column vector of functions of x. Now, suppose the solution to the problem

    max E[z(c,x)]   (6.19)
    s.t. g(x) ≤ 0

yields the solution x*. Then, the following theorem holds.
Theorem 7. Given the problem

    max z(c,x)   (6.20)
    s.t.
        g(x) ≤ 0,
        F[z(c,x)] = α,

where z(c,x) ∈ range [z(c,x)] and F[·] is the cumulative distribution
of z(c,x), there is a unique α ∈ R[0,1] for which (c*,x*) is a solution,
and z(c*,x*) = E[z(c,x*)].

Proof -- The solution x* of problem (6.19) implies that

    Prob {z(c,x*) ≤ E[z(c,x*)]} = F[E[z(c,x*)]]
                                = F[z(c*,x*)], if z(c*,x*) = E[z(c,x*)],
                                = α,

since this constraint is satisfied as an equality in (6.20). Also, we
note that z(c*,x*) = E[z(c,x*)] for an infinite number of combinations of
the c's, e.g.,

    {c*: z(c*,x*) = E[z(c,x*)]},

and this set will generally contain more than one element.
An important realization in Theorem 7 above is that when random
variables are present in the objective function the problem at hand is
one in the realm of distribution theory. The problem, then, becomes
that of finding the distribution of the objective function, itself a
random variable, in the (c,x)-space. The problem formulation in (6.20)
is termed the α-model, for reference convenience; it can now be
solved for different values of α ∈ R[0,1].
In obtaining the α-model above, we are going from a stochastic,
either linear or nonlinear, n-dimensional model to a deterministic, nonlinear, m-dimensional model where m > n. We note that whereas in the
stochastic formulation the c's are random variables, in the α-model they
become additional mathematical variables constrained by their distribution function.
An Illustrative Example
Consider c_1 and c_2 to be exponentially distributed with parameters λ_1 = 1/10, λ_2 = 1/5. We would like to know how the new random
variable z = c_1 x_1 + c_2 x_2 is distributed subject to the constraints

    x_1² + x_2² ≤ 25,   (6.21)
    3x_1 + x_2 ≤ 12,
    x_1, x_2 ≥ 0,

and, for a given value α ∈ R[0,1], determine a value ẑ ∈ range(z) such
that

    Prob [z ≤ ẑ] = α.

Our α-model can now be formulated with the aid of Theorem 7 previously stated as follows,
    max c_1 x_1 + c_2 x_2   (6.22)
    s.t.
        x_1² + x_2² ≤ 25,
        3x_1 + x_2 ≤ 12,
        [λ_1 λ_2 / (λ_1 x_2 - λ_2 x_1)] [ (x_2/λ_2)(1 - e^{-(λ_2/x_2)(c_1 x_1 + c_2 x_2)})
            - (x_1/λ_1)(1 - e^{-(λ_1/x_1)(c_1 x_1 + c_2 x_2)}) ] = α,
        c_1, c_2 ≥ 0,
        x_1, x_2 ≥ 0.
The problem above is nonlinear and nonconvex, as can be verified
by generating the Hessian matrix of the objective function. Obtaining
a solution to this nonconvex problem is not a trivial matter, as none of
the classical techniques can be applied directly here. An approximation
technique, however, known as the cutting-plane method (Monarchi 1972; Goicoechea,
Duckstein and Fogel 1976a,b) was successfully implemented in computer
program SEARCH (Appendices A and B) to solve for the vector (c_1, c_2,
x_1, x_2) as α was varied parametrically. The following computer results
were obtained, and are shown in Table 2 and Figure 10.
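A present-day reader could reproduce the α-model (6.22) with a general nonlinear solver in place of the cutting-plane program SEARCH. The sketch below uses scipy's SLSQP with the same data (λ_1 = 1/10, λ_2 = 1/5) and treats (c_1, c_2, x_1, x_2) as decision variables; the starting point and bounds are assumptions, and because the problem is nonconvex the solver may return only a local optimum, so the output need not coincide with Table 2.

```python
# A minimal sketch of the alpha-model (6.22) solved with a general NLP solver
# (SLSQP) rather than the original cutting-plane program SEARCH.  The problem
# is nonconvex, so this only illustrates the formulation, not a guaranteed
# global optimum; the CDF expression assumes lam1*x2 != lam2*x1.
import numpy as np
from scipy.optimize import minimize

lam1, lam2, alpha = 0.1, 0.2, 0.4

def cdf(v):                           # G(c1*x1 + c2*x2) from (6.6)
    c1, c2, x1, x2 = v
    y = c1 * x1 + c2 * x2
    return (lam1 * lam2 / (lam1 * x2 - lam2 * x1)) * (
        (x2 / lam2) * (1 - np.exp(-lam2 * y / x2))
        - (x1 / lam1) * (1 - np.exp(-lam1 * y / x1)))

objective = lambda v: -(v[0] * v[2] + v[1] * v[3])          # maximize c1*x1 + c2*x2
cons = [
    {"type": "ineq", "fun": lambda v: 25 - v[2] ** 2 - v[3] ** 2},
    {"type": "ineq", "fun": lambda v: 12 - 3 * v[2] - v[3]},
    {"type": "eq",   "fun": lambda v: cdf(v) - alpha},
]
v0 = np.array([5.0, 5.0, 1.0, 4.0])                         # hypothetical start
res = minimize(objective, v0, method="SLSQP", constraints=cons,
               bounds=[(0, None), (0, None), (1e-3, None), (1e-3, None)])
print(res.x, -res.fun)
```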
Now, if the optimization problem is solved using the expected
value of the objective function (E-model),

    max E(c_1) x_1 + E(c_2) x_2 = 10 x_1 + 5 x_2   (6.23)
    s.t.
        x_1² + x_2² ≤ 25,
        3x_1 + x_2 ≤ 12,
        x_1, x_2 ≥ 0,

we obtain the solution x_1* = 2.570, x_2* = 4.289, with an objective function value z = 47.145; to obtain the α' value associated with this
solution we evaluate F(z), yielding
Table 2. Computer program results.

    α      x_1      x_2      Range of c_1 and c_2 such that c_1 x_1 + c_2 x_2 = z      Objective function z
    0.2    2.234    4.474    c_1 ∈ [0, 11.126],  c_2 ∈ [0, 5.555]                       24.857
    0.4    2.570    4.289    c_1 ∈ [0, 14.833],  c_2 ∈ [0, 8.888]                       38.122
    0.6    2.570    4.289    c_1 ∈ [0, 20.084],  c_2 ∈ [0, 12.034]                      51.616
    0.8    2.570    4.289    c_1 ∈ [0, 28.478],  c_2 ∈ [0, 17.064]                      73.190
Figure 10. Cumulative distribution of the objective function z = c_1 x_1 + c_2 x_2.
F(z) = F(47.145) = α' = 0.586.
Referring back to Figure 10, we notice that the point (47.145, 0.586)
is precisely a single point on the curve obtained by solving the α-model
problem. The cutting-plane technique used in the solution involves a
variable "step" of size 0.4 which controls the accuracy and rate of convergence; a smaller value of the step should produce further accuracy in
the results.
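The E-model solution and the value of F at that solution can be reproduced directly; the sketch below solves (6.23) with SLSQP and evaluates the analytic CDF (6.6) at the resulting objective value. Because of rounding and the finite cutting-plane step just noted, the printed CDF value (about 0.59) agrees only approximately with the reported α' = 0.586.

```python
# A minimal sketch: solve the E-model (6.23) with a general NLP solver and
# evaluate the analytic CDF (6.6) at the resulting objective value.
import numpy as np
from scipy.optimize import minimize

lam1, lam2 = 0.1, 0.2

res = minimize(lambda x: -(10 * x[0] + 5 * x[1]), x0=[1.0, 1.0], method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda x: 25 - x[0]**2 - x[1]**2},
                            {"type": "ineq", "fun": lambda x: 12 - 3*x[0] - x[1]}],
               bounds=[(0, None)] * 2)
x1, x2 = res.x
z = 10 * x1 + 5 * x2                                       # about 47.1

G = (lam1 * lam2 / (lam1 * x2 - lam2 * x1)) * (
    (x2 / lam2) * (1 - np.exp(-lam2 * z / x2))
    - (x1 / lam1) * (1 - np.exp(-lam1 * z / x1)))
print(round(x1, 3), round(x2, 3), round(z, 3), round(G, 3))
```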
Discussion
It has been shown that it is possible to deal effectively with
random variables in the set of constraints and objective function of a
stochastic programming problem. When the random variables appear in the
set of constraints, deterministic equivalents can now be derived to replace the original chance-constrained inequality. Thus, the applicability of the multi-objective algorithm PROTRADE, developed in Chapter 5,
is enhanced and extended to a larger, more realistic, variety of problems in water resources management.
When the random variables appear in the objective function the
a-model has been structured so that the range of values of the random
variable involved is taken into account in the constraint set by means
of the cumulative distribution function for those variables, evaluated
at a value of a E R[0,1]. The same concept can be used to deal with
probability statements and random variables in the constraint set.
It is important to realize that solution of the stochastic
programming problem via an a-model provides all the information that
the decision maker would want to extract from such problems, e.g., it
completely specifies the magnitude of the objective function as it
varies with a, the probability of achievement. In the process it is
seen that such relationship, or "curve," contains the solutions given
by both the E-model and P-model of Charnes and Cooper (1961).
Furthermore, from reference to Table 2, it is noticed that the
vector (x_1*, x_2*) remains unchanged beyond some value of α between 0.2 and 0.4.
This means that the decision to implement activities 1 and 2, to the extent indicated by the vector (x_1*, x_2*), is a valid one for values of
α ≥ 0.4. The solution to the α-model represents, then, an analytic,
closed-form solution to the stochastic problem whose objective function,
up until now, had been solved for via a Monte Carlo simulation. Finally, the a-model represents an alternative approach to the problem
formulated in Chapter 3, and can be used to develop another algorithm,
if desired, similar to PROTRADE. This new algorithm would be able to
consider, to begin with, values other than expected ones in the objective function.
In the following chapter, PROTRADE is applied to a multiobjective decision problem in watershed management to demonstrate its
feasibility.
CHAPTER 7
A MULTI-OBJECTIVE PROBLEM IN
WATERSHED MANAGEMENT
With the advent of ever-increasing energy needs, large scale surface mining has gained new impetus and there is much concern about reclaiming the mine spoils to bring about positive land uses. In the
Black Mesa region of northern Arizona, on the lands of the Navajo
Nation, an area of some 5,700 ha. will eventually be turned upside-down
to strip-mine for coal over the next 30 years.
This chapter considers a multiple-use approach to the reclama-
tion and management of the Black Mesa region and, towards that end,
applies PROTRADE, the multi-objective algorithm developed in the preceding chapter.
The Black Mesa Region in Northern Arizona
This semiarid area, shown in Figure 11, has been and still is
being used as a rangeland, a practice which has been abused and has resulted in heavy overgrazing with detrimental consequences (Verma and
Thames 1975). Considering the poor range conditions of the Black Mesa
region, surface mining and subsequent reclamation programs offer to the
appropriate managing agency the opportunity to design and implement
multiple land uses, once the decision to mine for coal has been made.
Current coal mining activities in the area are being conducted by the
Peabody Coal Company.
Five objectives are considered: (1) livestock production, (2)
augmentation of water runoff, (3) farming of selected crops, (4) control
of sedimentation rates, and (5) fish pond-harvesting. Verma and Thames
(1975) and Brinck, Fogel and Duckstein (1976) have reported preliminary
findings to the effect that reclaimed watersheds in the area have a potential for use as rangeland, in harmony with the preferences of the
Navajo Nation. Opportunities for water yield augmentation through vegetation and soil treatments exist and results in experimental watersheds
have been reported by Cluff et al. (1971). Some of these treatments
also have the potential to curve down sedimentation rates. Current research on fish pond-harvesting by Kynard and Tash (1975) is also used
in this study to ascertain the feasibility of fish production and the
extent of it. Also competing for the use of water made available through
runoff practices and rainfall will be the farming of selected crops in
the area and, again, this is an activity in harmony with the preferences
of the Navajo Nation.
Formulation of Objective Functions
In reference to our study area, the managing agency must decide on the extent of present management practices, which essentially do not call for a reclamation program, and the extent of new practices which would contribute to achieving the five objectives outlined above.
The practices considered in this study are represented by the following decision variables:
xl = hectares of mined land with current management practices (no land
reclamation program),
x2 = hectares of mined land with contoured-furrowing and good range conditions,
x3 = hectares of mined land with contoured-furrowing and poor range conditions,
x4 = hectares of mined land with compacted earth (CE) treatment to increase water runoff,
x5 = hectares of mined land with compaction and salt treatment to increase water runoff,
x6 = hectares of mined land treated with plastic cover and gravel to increase water runoff,
x7 = hectares of mined land farmed for wheat production,
x8 = hectares of mined land farmed for corn production,
x9 = hectares of mined land farmed for alfalfa production,
x10 = hectares of mined land farmed for barley production,
x11 = hectares of mined land farmed for sorghum production,
x12 = hectares of mined land to be allocated for fish pond construction.
With these practices in mind, the five objectives have been cast into
the form of linear functions of such practices as follows:
livestock production,

    f_1(x) = Σ_{i=1}^{12} ℓ_i x_i ,    animal units      (7.1)

water runoff,

    f_2(x) = Σ_{i=1}^{12} r_i x_i ,    cubic meters      (7.2)

selected crops,

    f_3(x) = Σ_{i=1}^{12} c_i x_i ,    kgms.             (7.3)

sediment,

    f_4(x) = Σ_{i=1}^{12} s_i x_i ,    cubic meters      (7.4)

fish yield,

    f_5(x) = Σ_{i=1}^{12} f_i x_i ,    kgms.             (7.5)
In the above functions, ℓ_i represents the number of livestock heads (e.g., animal units (AU) per hectare of land) with practice or treatment i, r_i is the water runoff yield in cubic meters/ha., c_i is the crop yield in kgs./ha., s_i is the sediment yield in cubic meters/ha., and f_i is the fish yield in kgs./ha. Of these non-commensurate objective functions the one corresponding to sediment, (7.4), is to be minimized and the others are to be maximized subject to land, water, and capital constraints yet to be specified.
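For readers who wish to experiment with the formulation, the minimal sketch below evaluates the five linear objective functions for a trial land-allocation vector. It is only an illustration: the coefficient vectors are populated with a handful of the expected values listed later in Tables 3 through 8 (indices are zero-based, so x1 corresponds to position 0), and entries not needed for the example are simply left at zero.

```python
import numpy as np

# Expected-value coefficients for the linear objectives (7.1)-(7.5); only a few
# entries are filled in here, taken from the expected values in Tables 3-8.
ell = np.zeros(12); ell[1], ell[2] = 0.1375, 0.0365          # livestock, AU/ha
r   = np.zeros(12); r[1], r[3], r[4] = 98.0, 990.0, 1410.0    # runoff, m3/ha
c   = np.zeros(12); c[8] = 7392.0                             # crops (alfalfa), kg/ha
s   = np.array([19.86, 0.11, 0.15, 0.24, 0.16, 0,
                0.24, 0.33, 0.11, 0.21, 0.35, 0])             # sediment, m3/ha
f   = np.zeros(12); f[11] = 3741.0                            # fish, kg/ha

def objectives(x):
    """Return (f1, ..., f5) for a land-allocation vector x (hectares, length 12)."""
    x = np.asarray(x, dtype=float)
    return (ell @ x, r @ x, c @ x, s @ x, f @ x)

# Example: the allocation x_4 reported later in this chapter.
print(objectives([0, 48.10, 0, 168.0, 122.30, 0, 0, 0, 20.66, 0, 0, 0.66]))
```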
A time horizon of 30 years was chosen for this case study and
it is intended to correspond with the effective life of the mechanical
soil treatments and seeding to be implemented (Cluff et al. 1971; Bartlett 1974), and the mining program presently envisioned. The division
of this 30 year period into subperiods was deemed necessary because
water runoff rates, sedimentation rates, operating costs, and so forth,
will not remain constant over the entire period. Figure 12 presents a schematic spatial allocation of the land available in one subperiod with reference to our decision variables.
Figure 12. Land allocation alternatives. (Schematic of the total area scheduled to be strip-mined; the strip-mined area available for multiple use at the beginning of a subperiod is partitioned among the decision variables x1 through x12.)
List of Assumptions
The following list of assumptions is believed to contain the most important ones, and no sensitivity analysis has been performed to support any ranking:
1. A land area of 5700 ha. is to be strip-mined and reclaimed,
over a 30 year period.
2. A subtotal of 5700/15 = 380 ha. is to be strip-mined every
two years. The economic analysis, then, encompasses 15 2-
year subperiods. During the first year of each subperiod,
the land is prepared for production, e.g., construction of
irrigation channels, contour furrowing, ponds, etc.
The second year, and remaining ones in the 30-year period,
are then used to produce the different goods, e.g., crops,
livestock, runoff, etc.
3. During strip-mining operations, unaffected land is being used
to carry livestock under poor range conditions, 240 acres/AU.
4. The set of five objective functions is to be "satisfied"
over that second year in a subperiod, subject to land, water, and
capital constraints. This calls for implementation of the
PROTRADE algorithm.
5. Discount rates are introduced to account for the time distribution of the subperiods. In general, each objective function may
call for a different discount rate.
6. The coefficients in the set of objective functions are random
variables normally distributed. Given a random variable, the
parameters of the distribution are simulated in the analysis via
a Monte-Carlo computer sequence, using physical characteristics
and mathematical models.
7. Several levels of optimization are required in the analysis, to
observe the constraints on land, water and capital. For instance, when optimizing the objective function for selected
crops, it is done for a particular allocation of water. Each
time, the optimal solution is represented by the amount of land
to be farmed and the amount of each crop. The remaining land,
if it is to be treated to produce water runoff, must be large
enough to produce that water allocation, at least.
8. A maximum allocation of $35,000 per subperiod is considered.
9. Reclamation costs include grading costs. Costs not included in
the reclamation program and paid for from the $35,000 allocation
include contour-furrowing @ $50/ha. and with a life span of 10
years, reseeding @ $25/ha. and 10-year lifespan, reservoir
(pond) construction with a set-up cost of $1,000/unit + $1.50/m 3 ,
and good range maintenance @ $25/ha.-year.
10. The effect of evaporation on water resources is considered
negligible to keep the physical modeling aspects of the problem
within manageable bounds. This may not be a reasonable assumption in water problems, in general.
Set of Constraints
For each 2-year subperiod the five objective functions are satisfied subject to specified constraints on land, capital, and water, e.g.:

land,        x_1 + x_2 + . . . + x_12 = b_l                                (7.6)

capital,     q_1 x_1 + q_2 x_2 + . . . + q_12 x_12 = b_c                   (7.7)

water,       w_1 x_1 + w_2 x_2 + . . . + w_12 x_12 = b_w                   (7.8)

where the parameter q_i represents the cost of implementing the i-th practice (e.g., treatment), w_i the water consumption of the i-th practice, b_l = 380 ha., b_c = $35,000, and b_w is the water available for that 2-year subperiod through runoff practices and rainfall.
Livestock Production
The livestock production model used in this case study is the
one previously developed by Brinck et al. (1976). It is an event-based
model which accounts for precipitation, infiltration, runoff, and sedimentation to describe discrete storm events and their effects. The
storm events occur in a sequence throughout the years of program lifetime separated by random time intervals. For each storm event a pair
of dependent drawings of rainfall depth and event duration are made
from their joint distribution. The runoff and peak flow, if any, are computed with the Soil Conservation Service (SCS) formulas for each event, for furrowed and for unfurrowed slopes separately. Also, first-year water storage capacity for the furrows was modeled as a random variable. The year was divided into three seasons to allow for a dry spell with very few events. Figure 13 is a schematic of the livestock production model.
Computational tasks carried out by this model include watershed geometry, range carrying capacity, feasible stocking level (determined by forage and water availability), and expected profit over the program
Figure 13. Stochastic livestock production model. (Block diagram: random time intervals between storms, rainfall depths, storm durations, furrow water storage capacities, and other natural climatic uncertainties feed the livestock production model; a search over the decision vector a = (a_1, a_2, a_3) seeks the combination that maximizes production.)
lifetime as a function of average watershed slope a_1, the fraction of watershed subject to contour-furrowing a_2, and range conditions a_3. A Monte-Carlo computer simulation was then executed for various reclamation schemes, e.g., for various values of the vector a = (a_1, a_2, a_3).
Some of the results of that simulation are shown in Table 3 and are used
in our case study.
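A minimal sketch of the kind of event-based Monte-Carlo simulation described above is given below. It is not the Brinck et al. (1976) model: interarrival times and rainfall depths are drawn from assumed exponential distributions and runoff is taken as a fixed fraction of rainfall, rather than computed with the SCS formulas; the coefficients are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1977)

def simulate_runoff(n_years=30, mean_gap_days=12.0, mean_depth_mm=10.0,
                    runoff_coeff=0.30, area_ha=380.0):
    """Crude event-based sketch: draw storm interarrival times and depths,
    accumulate yearly runoff (m3), and return its sample mean and s.d."""
    totals = []
    for _ in range(n_years):
        t, runoff = 0.0, 0.0
        while True:
            t += rng.exponential(mean_gap_days)      # random time between storms
            if t >= 365.0:
                break
            depth = rng.exponential(mean_depth_mm)   # random rainfall depth
            runoff += runoff_coeff * depth * 10.0 * area_ha   # 1 mm over 1 ha = 10 m3
        totals.append(runoff)
    totals = np.array(totals)
    return totals.mean(), totals.std(ddof=1)

mean_runoff, sd_runoff = simulate_runoff()
print(f"expected yearly runoff {mean_runoff:10.0f} m3  (s.d. {sd_runoff:8.0f} m3)")
```

Repeating such a simulation for each reclamation scheme is how expected values and standard deviations of the yield parameters can be generated for the tables that follow.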
Water Runoff Augmentation
Existing water supplies for livestock production and irrigation
in the Black Mesa area are limited. In this, and many other semi-arid
lands where groundwater is used for irrigation the water table is falling.
The use of runoff-farming techniques offers an economic alternative for these lands which, otherwise, might revert back to desert when the groundwater supplies are exhausted or no longer economically feasible. Opportunities for water yield augmentation through mechanical treatment of the mine spoils are considered, and a program of soil treatments and maintenance requirements is suggested. Performance and cost parameters for this program have been made available through the water harvesting studies at The University of Arizona (Cluff et al. 1971), which were initiated in 1963. Table 4 lists the three catchment methods used in the analysis.
Compacted earth (CE) catchments are shaped with a road grader and compacted either with a drum roller or a pneumatic tire roller. The length and degree of slope are controlled to minimize erosion. Results from this research at The University of Arizona indicate that small amounts of sodium chloride, when applied to the surface of desert
Table 3. Livestock production parameters.

    Parameter        Expected Value     Standard Deviation(a)     Units
    ℓ2               0.1375             0.0550                    AU/ha.
    ℓ3               0.0365             0.0130                    AU/ha.
    ℓi, i ≠ 2,3      0                  0                         AU/ha.
    q2               0.1000 x 10^3      NR                        $/ha.
    q3               0.0750 x 10^3      NR                        $/ha.
    w2               0.0019 x 10^3      NR                        m3/ha.
    w3               0.0005 x 10^3      NR                        m3/ha.

    a. NR, not required in the analysis.
Table 4. Soil treatments for water runoff.

    Catchment Method                 Approximate       Efficiency      Estimated
                                     cost per ha.      in percent      life
    Compacted Earth(a)               $  50.60          30-60           indefinite
    Compacted Earth,(a)
      sodium treated                    85.20          40-70           indefinite
    Graveled Plastic(b)                191.60          60-80           20-25 years

    a. Prices and efficiency are dependent on soil type and the cost of clearing
       and shaping. Maintenance consists of weed removal and recompaction as needed.
    b. Price of catchment is primarily dependent on the cost of the gravel and to a
       lesser extent on the cost of clearing and shaping. 10 mil black polyethylene
       is used.
soils with little or no vegetation, will cause a dramatic reduction in
infiltration water losses. This effect, however, is temporary and
sodium chloride must be reapplied periodically. Experimental graveled
plastic catchments, of about half-ha. in size, have been reported in
that study with efficiencies between 60 and 80%. A ten mil. polyethylene plastic liner is installed using a truck-mounted chute and
covered with gravel derived from the soil. Catchment construction costs
are primarily dependent on the cost of the gravel and to a lesser extent on the cost of clearing and shaping.
Efficiency estimates of these catchment methods, as well as
those for contoured-furrowing, were then used in a Monte-Carlo simulation to generate the expected values and standard deviations shown in
Table 5 for the runoff yield parameters.
Farming of Selected Crops
Another activity which is in harmony with the preferences of
the Navajo Nation is the farming of crops that require relatively small
amounts of water. To compete for the water made available through rainfall and runoff augmentation practices the farming of some selected
crops is suggested, such as wheat, corn, alfalfa, barley, and sorghum.
This choice of crops is arbitrary and any other number could have been
considered in the analysis.
Fortunately, agricultural statistics in Arizona are readily
available, including yield, water consumption, and production costs.
These parameters, with reference to Cochise County are presented in
Table 6 (Arizona Crop and Livestock Reporting Service 1975). Consumptive
Table 5. Water runoff parameters.

    Parameter            Expected Value     Standard Deviation     Units
    r1                   0.428 x 10^3       0.223 x 10^3 (a)       m3/ha.
    r2                   0.098 x 10^3       0.152 x 10^3           m3/ha.
    r3                   0.079 x 10^3       0.089 x 10^3           m3/ha.
    r4                   0.990 x 10^3       0.223 x 10^3 (a)       m3/ha.
    r5                   1.410 x 10^3       0.223 x 10^3 (a)       m3/ha.
    r6                   1.980 x 10^3       0.223 x 10^3 (a)       m3/ha.
    ri, i not in [1,6]   0                  0                      m3/ha.
    q1                   0                  0                      $/ha.
    q2                   0.100 x 10^3       NR(b)                  $/ha.
    q3                   0.075 x 10^3       NR(b)                  $/ha.
    q4                   0.056 x 10^3       NR(b)                  $/ha.
    q5                   0.085 x 10^3       NR(b)                  $/ha.
    q6                   0.191 x 10^3       NR(b)                  $/ha.

    a. assumed value
    b. NR, not required in the analysis
Table 6. Crop model parameters.

    Parameter     Expected Value     Standard Deviation(a)     Units
    c7            3.024 x 10^3       0.505 x 10^3              kgs./ha.
    c8            1.568 x 10^3       0.249 x 10^3              kgs./ha.
    c9            7.392 x 10^3       1.037 x 10^3              kgs./ha.
    c10           3.169 x 10^3       0.102 x 10^3              kgs./ha.
    c11           2.576 x 10^3       0.249 x 10^3              kgs./ha.
    q7            0.232 x 10^3       NR                        $/ha.
    q8            0.323 x 10^3       NR                        $/ha.
    q9            0.262 x 10^3       NR                        $/ha.
    q10           0.242 x 10^3       NR                        $/ha.
    q11           0.278 x 10^3       NR                        $/ha.
    w7            5.830 x 10^3       NR                        m3/ha.
    w8            5.010 x 10^3       NR                        m3/ha.
    w9            18.850 x 10^3      NR                        m3/ha.
    w10           6.390 x 10^3       NR                        m3/ha.
    w11           13.700 x 10^3      NR                        m3/ha.

    a. NR, not required in the analysis
use of water by crops is affected by many factors (Agricultural Experiment Station 1968). Some are man-made, others are natural. The
more important natural factors are climate, soils, and topography. The
man-made factors include water supply, water quality, date of planting,
crop varieties, plant spacing, water management, and chemical sprays.
All of the above factors may influence plant growth and, thereby, the
consumptive use. In this analysis the expected value and standard
deviation of yield, and expected values for production costs and water
consumption over the entire growing season were used. No effort was
made to spatially allocate water requirements. If desired, however,
production yield sensitivities to water management and other man-made
factors can easily be incorporated in the analysis.
Control of Sedimentation Rates
An undesirable by-product of the runoff augmentation treatments
suggested to increase water availability is the production of large
amounts of sediment. Runoff from rangelands and strip-mined lands,
particularly, is the primary force in initiating soil movement and
transporting sediments to nearby reservoirs and rivers. This sediment
adversely affects water quality and operational costs, as the stock
water reservoirs (also used for fish harvesting) need to be dredged
from time to time.
Sedimentation is estimated with the modified soil-loss equation
(Smith, Fogel and Duckstein 1974):
    S = 11.78 (V q_p)^{0.56} K C P (LS)                        (7.9)

where:  S   = sediment yield in metric tons per storm event,
        V   = total runoff volume in cu. m.,
        q_p = peak flow in cu. m./sec.,
        K   = soil erodibility factor,
        C   = cropping management factor,
        P   = erosion control factor,
        LS  = slope length and gradient factor.
Field values for the factors K, C, p, and LS are not available for the
Black Mesa region, and those presented in Table 7 represent estimates
only. The total runoff volume and peak flow were simulated with the
stochastic model of Brinck et al. (1976) in a Monte-Carlo fashion, to
yield expected value and standard deviation for sediment production with
the eleven land treatments considered.
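The modified soil-loss equation (7.9) is simple to evaluate once the storm characteristics are known. The short sketch below codes it directly; the storm values of V and q_p used in the example are made-up numbers rather than output of the Brinck et al. model, and the factors are those of treatment 1 in Table 7.

```python
def sediment_yield(V, q_p, K, C, P, LS):
    """Modified soil-loss equation (7.9): sediment yield in metric tons per
    storm event, given runoff volume V (m3) and peak flow q_p (m3/s)."""
    return 11.78 * (V * q_p) ** 0.56 * K * C * P * LS

# Illustrative storm on untreated land (treatment 1 factors from Table 7).
print(sediment_yield(V=500.0, q_p=0.8, K=0.40, C=1.00, P=1.00, LS=4.50))
```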
Fish Pond Harvesting
Fish production in reclaimed spoils catchments would provide a
protein source, job opportunities, and would help develop the recreational potential of the area. However, for fish to survive and grow
normally in these catchments the physico-chemical conditions necessary
must be present. To determine the feasibility of fish production, a
cooperative research effort was conducted during the summer of 1975 in
the Black Mesa region by The University of Arizona School of Natural
Resources and the Peabody Coal Company (Kynard and Tash 1975).
A small, shallow reclaimed spoils catchment at 1,952 m. elevation was the site of the experiment. The pond was uniformly 1.22 m. deep and covered 0.20 ha. The pond's water level was dependent on
Table 7. Sediment parameters.

    Treatment i     K(a)     C(b)     P(c)     LS(d)     E(si)       VAR(si)
                                                         m3/ha.      (m3/ha.)^2
    1               0.40     1.00     1.00     4.50      19.86       20.63
    2               0.40     0.10     0.50     0.50       0.11        0.12
    3               0.40     0.15     0.50     0.40       0.15        0.17
    4               0.30     0.25     0.70     0.40       0.24        0.36
    5               0.25     0.20     0.70     0.40       0.16        0.37
    6               0        0        0        0          0           0
    7               0.40     0.20     0.60     0.40       0.24        0.40
    8               0.40     0.30     0.60     0.40       0.33        0.33
    9               0.40     0.10     0.60     0.40       0.11        0.12
    10              0.40     0.20     0.60     0.40       0.21        0.29
    11              0.40     0.30     0.60     0.40       0.35        0.44
    12              --       --       --       --         --          --

    a. K = soil erodibility factor
    b. C = cropping management factor
    c. P = erosion control factor
    d. LS = slope length and gradient factor
runoff from a reclaimed watershed of approximately 12 ha. On July 1, 110 young Tilapia zillii (a warm-water fish from Africa) were introduced into a 1 x 1 x 1 m cage used for rearing fish. These fish, about one month old, averaged a weight of 14 grms. and a length of 2 cms., and
were fed twice daily with an excess of food pellets. On September 16,
78 days later, the fish were removed from the cage, measured and weighed.
Eighty-eight percent survived the experiment with a thirty-fold increase
in weight. The experiment was considered a success. It showed that
(1) the condition of the water quality was amenable to high fish survival over an extended period of time, and (2) that fish could grow
normally with supplementary feeding.
Channel catfish, ictalurus lacustris, would be a more profitable
species and is considered in this analysis. Water temperatures of 18°-20°C during the summer nights increase dramatically during the days. During the winter months, on the other hand, these shallow ponds would
freeze thus limiting fisheries work to the months of May through
November, approximately. In order to circumvent this water temperature
problem the following schedule of events is suggested:
• buy fish stock during the first week of June,
• feed and grow stock in reclaimed ponds in the Black Mesa
region during the months of June through October,
• on the first week of November, remove fish cages from the
Black Mesa ponds, transport to and relocate in Morgan Lake,
New Mexico, near power plant water outflow into the lake
(see map in Figure 11), a distance of about 166 km.,
• on the first week of May move cages back to the Black Mesa
ponds, and
• on the first week of November market the fish at an average
weight of 0.5 kg. per unit.
Estimates of the cost and yield parameters used in the analysis are
shown in Table 8.
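The per-hectare cost and yield figures in Table 8 are simple products of the unit estimates listed there. The sketch below reproduces that arithmetic; all of the input numbers are the Table 8 estimates themselves.

```python
cages_per_ha   = 30
units_per_cage = 500          # fish stocked per cage
survival       = 0.50         # expected survival over the term (50% mortality)
market_kg      = 0.5          # market weight per fish, kg

cage_cost  = cages_per_ha * 50.00                                    # $50/cage
feed_cost  = cages_per_ha * units_per_cage * survival * 2.0 * 0.374  # 2 kg pellets per surviving fish
stock_cost = cages_per_ha * units_per_cage * 0.05                    # $0.05/fish unit
transport  = 10.15 * 100                                             # $10.15/man-hr x 100 man-hr/ha

total_cost     = cage_cost + feed_cost + stock_cost + transport
expected_yield = cages_per_ha * units_per_cage * survival * market_kg

print(f"E(q12) = ${total_cost:,.0f}/ha,  E(f12) = {expected_yield:,.0f} kg/ha")
# -> about $8,875/ha and 3,750 kg/ha; Table 8 quotes the yield as 3,741 kg/ha.
```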
Implementation of the PROTRADE Algorithm
Now that the physical characteristics of the Black Mesa region
have been discussed in some detail, the objective functions formulated,
and the model parameters defined, we are ready to begin the implementation of the PROTRADE method. As the various steps of the method are
executed and the alternative solutions are made available, the decision
maker (DM) is asked to decide on a course of action in the analysis.
In this case study, Thames (1977) was asked to assume the role of the
DM. Several meetings were held over a 3-week period to arrive at the
utility function reflecting the DM's preferences. Each meeting lasted
2 hours, approximately. The DM's response to a series of questions was
consistent for the most part. Whenever an inconsistent response was
elicited, the entire question-answer exercise was repeated. Computer
program Search (Appendices A and B) with its cutting-plane technique was
used to solve the nonlinear programming problems.
Steps of the Method
Step 1. The objective functions, (7.1) through (7.5), were
formulated in the preceding section. Here, they are slightly rearranged
so that the optimization process will involve maximization, only. Let
Table 8. Fish harvesting parameters.

Costs and yield estimates:
    • $50/cage,
    • 30 cages/ha. of pond,
    • pond depth is 1.5 meters,
    • 0.5 kg. of fish require about 2.0 kg. of food pellets in a 1-year term,
    • 500 fish units/cage; assume an expected mortality rate of 50% over a 1-year term,
    • $0.374/kg. of food pellets,
    • $0.05/fish unit, initial cost,
    • 0.5 kg./unit, market weight.

    Cages, (30 cages/ha.)($50/cage)                                         $ 1,500/ha.
    Food pellets, (30 cages/ha.)(250 units/cage)(2.0 kg./unit)($0.374/kg.)  $ 5,610/ha.
    Initial cost of stock, (30 cages/ha.)(500 units/cage)($0.05/unit)       $   750/ha.
    Digging of pond (reclamation program)                                        --
    Transportation to and from power plant,
        ($10.15/man-hour)(100 man-hours/ha.)                                $ 1,015/ha.

    E(q12), total                                                           $ 8,875/ha.

    E(f12), expected yield, (30 cages/ha.)(250 units/cage)(0.5 kg./unit)      3,741 kg/ha.
    VAR(f12), yield variance (assumed)                                      (2,000 kg/ha.)^2
    z_i(x) = f_i(x)   for i = 1, 2, 3, 5,
    z_4(x) = -f_4(x).
Step 2. Maximization of each individual objective function,
subject to constraints (7.6), (7.7) and (7.8) (constraint set D 1 ) yields
the vector of individual maxima,

    (  52.00 x 10^0    AU,        livestock,
      541.91 x 10^3    cu. m.,    runoff,
      331.97 x 10^3    kgs.,      crops,
      -58.14 x 10^0    cu. m.,    sediment,
       14.78 x 10^3    kgs.,      fish ).

Similarly, minimization of each objective function yields the vector of individual minima,

    (  0.00,  30.00 x 10^3,  0.00,  -7549.08,  0.00 ).
0.00
Crop yield is very much dependent on water obtained through runoff practices, rainfall and land allocations. To show this dependency,
some additional computer runs were made and are shown in Appendix C. With reference to Figure 14, land treated for runoff vs. water available for crops, computer run 1 represents the maximum runoff possible of 541.91 x 10^3 cu. meters, which would occur through treatment of the entire 380 ha. of land, with no land left for farming of crops. In
runs 2 and 3 amounts of land are allocated for both runoff treatment and
farming; the runoff made available turns out to be much more than what
is needed for farming of the crops. Through runs 4 and 5 the search
Figure 14. Land treated for runoff vs. water available for crops (mean values). (Water available, in 10^3 cu. m., is plotted against land treated for runoff, in ha.; computer runs 1 through 5 and the maximum land available are indicated.)
for the optimal point continues to match maximum farmland and water
availability. This point was found to correspond to a land allocation
of about 280 has. for runoff treatment, and the balance of 100 ha. for
farmland. This optimal point, however, is for these two objective
functions alone, and will vary significantly as the remaining objective
functions are brought into the analysis.
Step 3. An initial surrogate objective function (SOF) is
formulated as follows,
    F(x) = Σ_{i=1}^{5} G_i(x) = Σ_{i=1}^{5} [Z_i(x) - Z_i^min] / [Z_i(x_i*) - Z_i^min] .
Step 4. Maximization of F(x) subject to x ∈ D_1 yields an initial solution x_1 and goal vector G_1,

    x_1 = (157.37, 69.06, .00, 69.06, 69.06, .00, .00, .00, 13.75, .00, .00, 1.70) ha.,

    G_1 = (0.183, 0.401, 0.306, 0.585, 0.431).

From G_1 we observe that the level of achievement of livestock production, for instance, is only 18.3% of the maximum possible value, and that the other levels fall quite short also. These levels, however, conform with the physical realities of the problem. We would like, at this point, to have the DM express his preference structure or set of "worth values" to search for another solution, if necessary.
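The goal vector is simply the range-normalized value of each objective, using the individual maxima and minima of step 2. A short sketch of that normalization is given below; applying it to the objective values reported with the step-7 solution reproduces the goal vector found there.

```python
import numpy as np

z_max = np.array([52.00, 541.91e3, 331.97e3,   -58.14, 14.78e3])   # individual maxima (step 2)
z_min = np.array([ 0.00,  30.00e3,     0.00, -7549.08,     0.00])   # individual minima (step 2)

def goal_levels(z):
    """Normalized goal achievement G_i = (z_i - z_i_min) / (z_i_max - z_i_min)."""
    return (np.asarray(z, dtype=float) - z_min) / (z_max - z_min)

# Objective values U_2 of step 7 -> goal vector G_2.
print(np.round(goal_levels([19.25, 275.75e3, 117.71e3, -63.73, 0.24e3]), 3))
# -> approximately [0.370, 0.480, 0.355, 0.999, 0.016]
```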
Step 5. To assist the DM in articulating his preferences for
this particular problem the following multi-attribute utility function
(Fishburn 1970; Keeney 1974) is suggested:
    1 + k u(G) = Π_{i=1}^{5} [1 + k k_i u_i(G_i)] .
To evaluate the parameters k and k_i and the form of the single-attribute utility functions u_i(G_i), a series of questions was asked of the DM:
(a) The DM was asked to rank the five objectives in order of
importance or worth to him, and the following ranking was
arrived at:
    G_3, crops > G_1, livestock > G_4, sediment > G_2, runoff > G_5, fish,
e.g., G_3 is more important than G_1, and so on;
(b) Considering two objectives at a time, the DM was asked to assign a relative worth (in "units") to each objective -- crops, livestock, sediment, runoff, and fish; e.g., the DM is indifferent between 60% of the crops and 100% of the maximum livestock possible;
(c) With regard to the vector (G_1, G_2, . . ., G_5), the DM was asked to decide on a probability p for the lottery below,

    (0,1,0,0,0) for certain    vs.    (0,1,0,1,0) with probability p,  (0,0,0,0,0) with probability 1-p;

e.g., the DM is indifferent between the vector (0,1,0,0,0), which can be obtained with a probability of 1.0, and the vectors (0,1,0,1,0) and (0,0,0,0,0) with probabilities of p and 1-p, respectively; initially a value of p = 0.8 was voiced but this proved to be inconsistent with the relative worth assignments in (b) and was later changed to 0.6;
(d) The DM was asked to select a value 0 < G_3 < 1 to correspond with the lottery below,

    (0,0,G_3,0,0) for certain    vs.    (0,0,1,0,0) with probability 0.5,  (0,0,0,0,0) with probability 0.5,

and a value G_3 = 0.3 was elicited. These assessments were then used to evaluate the constants k and k_i in the utility function u(G). We define G_i^0 and G_i^1 to be the least and most desirable amounts of the i-th goal function G_i = G_i(x), and u(G_i) = u(G_1^0, . . ., G_{i-1}^0, G_i, G_{i+1}^0, . . ., G_5^0); also, we define u(G_i) = k_i u_i(G_i), where u_i(G_i) is the individual utility function for the i-th goal, such that u_i(G_i^0) = 0 and u_i(G_i^1) = 1. The following nonlinear system of equations was then arrived at:
    u_3(0.6) k_3 - k_1 = 0,                                    (7.9)

    u_1(0.8) k_1 - k_4 = 0,                                    (7.10)

    u_4(0.9) k_4 - k_2 = 0,                                    (7.11)

    u_2(0.5) k_2 - k_5 = 0,                                    (7.12)

    k_1 - 0.6 (k_1 + k_2 + k k_1 k_2) = 0,                     (7.13)

    1.0 + k - Π_{i=1}^{5} (1.0 + k k_i) = 0.                   (7.14)
To solve for k and the k_i in the system above it is necessary, first, to define the form of the individual utility functions, u_i(G_i). From (d), u_3(0.3) = 0.5 u_3(1.0) + 0.5 u_3(0.0) = 0.5; this defines the point (0.3, 0.5). Furthermore, the DM was willing to adopt a function of the form u_3(G_3) = c(1.0 - exp(b G_3)) for the crops objective, and the resulting utility function is as shown in Figure 15(a). Similar considerations led to the utility functions in Figures 15(b) through (e) for the other objectives.
Equations (7.9) through (7.14) were then solved simultaneously
to yield:
    k_3 = 0.519,    k_1 = 0.260,    k_4 = 0.223,    k_2 = 0.201,    k_5 = 0.081,    k = -.534.
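The scaling constant k in (7.14) must satisfy 1 + k = Π(1 + k k_i) once the k_i are known. The sketch below solves that equation by a simple bisection; it is only a check, not part of the original solution procedure.

```python
# Solve 1 + k = prod_i (1 + k*k_i) for the scaling constant k, given the
# elicited k_i.  Because the k_i sum to more than one, k lies in (-1, 0).
k_i = [0.260, 0.201, 0.519, 0.223, 0.081]   # k_1, k_2, k_3, k_4, k_5

def residual(k):
    prod = 1.0
    for ki in k_i:
        prod *= 1.0 + k * ki
    return prod - (1.0 + k)

lo, hi = -0.999, -1e-6            # residual(lo) > 0, residual(hi) < 0
for _ in range(60):               # bisection
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
print(round(0.5 * (lo + hi), 3))
# -> about -0.53, in agreement with k = -.534 above (the small gap reflects
#    rounding of the reported k_i values).
```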
Step 6. Redefine the surrogate objective function and use
results from steps 4 and 5 as follows:
Figure 15. Single-attribute utility functions. (Panels (a) through (e), one per objective; for example, u_2(G_2) = 0.819(e^{0.805 G_2} - 1) and u_5(G_5) = 0.431(e^{1.210 G_5} - 1).)
    S_1(x) = Σ_{i=1}^{5} w_i G_i(x)

where

    w_i = 1.0 + r [∂u(G_1)/∂G_i] / G_i(x_1) .
Accordingly,
(a)
Compute u(Gi ),
u1 (G1 ) = u1 (0.183) = 0.249,
u2 (G2 ) = u2 (0.401) = 0.312,
u3 (G3 ) = u3 (0.306) = 0.507,
u4 (G4 ) = u4 (0.585) = 0.585,
u5 (G5 ) = u5 (0.431) = 0.295,
and u(G_1) = 0.493;
(b) Ask the DM to decide on an incremental utility, 0 < Δu(G) < 1; a value Δu(G) = 0.20 was suggested;
(c) Solve for the factor r in

    u(G_1 + r ∇u(G_1)) = 0.493 + 0.200,

where

    G_1 + r ∇u(G_1) = (0.183, 0.401, 0.306, 0.585, 0.431) + r (0.250, 0.139, 0.553, 0.176, 0.053);

u(G_1 + r ∇u(G_1)) = 0.693 is satisfied for r = 0.588. Finally, the elements w_i are evaluated as follows,
    w_1 = 1.807,    w_2 = 1.205,    w_3 = 2.067,    w_4 = 1.178,    w_5 = 1.073.
Each w_i value can be thought of as the relative "weight" that the DM places on the i-th objective. For instance, G_3 is weighted more heavily than G_1, G_1 is weighted more heavily than G_4, and so forth. We notice that whereas in step 5(a) the DM ranked the objectives in order of importance, here the DM has actually quantified the relative worth of the objectives.
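A small check of these weights, under the weighting rule stated in step 6, is sketched below; the gradient, goal vector, and step factor are those reported above, and the computed weights agree with the ones used in step 7 up to rounding.

```python
G1   = [0.183, 0.401, 0.306, 0.585, 0.431]    # goal vector of step 4
grad = [0.250, 0.139, 0.553, 0.176, 0.053]    # gradient of u(.) at G_1 (step 6c)
r    = 0.588                                  # step factor found in step 6c

# Weighting rule of step 6: w_i = 1 + r*(du/dG_i)/G_i(x_1).
w = [1.0 + r * g / gi for g, gi in zip(grad, G1)]
print([round(v, 3) for v in w])
# -> approximately [1.803, 1.204, 2.063, 1.177, 1.072], i.e. the weights
#    1.807, 1.205, 2.067, 1.178, 1.073 of step 7 up to rounding.
```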
Step 7. Generate an alternative solution, this time reflecting
the preferences of the DM,
    max S_1(x) = 1.807 G_1(x) + 1.205 G_2(x) + 2.067 G_3(x) + 1.178 G_4(x) + 1.073 G_5(x)

subject to x ∈ D_1. The optimal solution x_2 is given below and was used to generate vectors G_2 and U_2,

    x_2 = (.00, 140.00, .00, 100.73, 118.05, .00, 8.82, .00, 12.32, .00, .00, .06) ha.,

    G_2 = (0.370, 0.480, 0.354, 0.998, 0.016),

    U_2 = (19.25, 275.75 x 10^3, 117.71 x 10^3, -63.73, .24 x 10^3)   (livestock, runoff, crops, sediment, fish).
Several iterations were required to match the amount of water used (mainly for crops ) and the amount of runoff (plus rainfall) needed, 300 x 10 3
cu. meters, approximately.
Step 8. Goal values and their respective probabilities of
achievement are then given by vector V1,
    V_1 = [(0.370, 0.500), (0.480, 0.500), (0.354, 0.500), (0.998, 0.500), (0.016, 0.500)],

e.g., Prob [G_1 ≥ 0.370] ≥ 0.500.
At this point in the analysis the DM has been able to state his preferences for the various goals (which may generally be in conflict with
the "realities" of the problem, e.g., the constraints of the problem are
not satisfied), and steps 7 and 8 have now reconciled these preferences
and realities.
Step 9. The vector U 2 is not satisfactory to the DM. Continue.
Step 10. The DM is asked to select the objective function
z_k(x) with the least satisfactory pair (G_k(x_2), 1 - a_k). The DM specifies that he would like to have G_3, crops, increase from 0.354 to 0.450 and with a probability of 65% or better,

    Prob [Z_3(x) ≥ (0.450)(331.97 x 10^3)] ≥ 0.650.
Step 11. A new solution space, D 2 , is now defined to include
the DM's requirement in step 10,
    D_2:   x_1 + x_2 + . . . + x_12 = 380,                                              (land)

           q_1 x_1 + q_2 x_2 + . . . + q_12 x_12 ≤ 35,                                  (capital)

           w_1 x_1 + w_2 x_2 + . . . + w_12 x_12 ≤ W,                                   (water)

           Σ_{j=1}^{12} E(c_j)x_j + k_{a3} [x^T A x]^{1/2} ≥ (0.450)(331.97 x 10^3),    (crops)
where the variance-covariance matrix A is given by
    A = [ VAR(c_1)         . . .   COV(c_1, c_12)
             .                         .
          COV(c_12, c_1)   . . .   VAR(c_12)      ] .

For our problem, it is reasonable to assume mutual stochastic independence between the random variables c_j, such that cov(c_i, c_j) = 0 for i ≠ j. Estimates of these variances are presented in Table 6. Also, k_{a3} is such that

    Φ(k_{a3}) = 1 - 0.650

and from standard normal tables we find k_{a3} = -0.385. The last constraint then becomes,

    Σ_{j=1}^{12} E(c_j)x_j - 0.385 [Σ_{j=1}^{12} VAR(c_j)x_j^2]^{1/2} ≥ (0.450)(331.97 x 10^3).

W, in the water constraint, is varied parametrically to match the amount of water made available through rainfall and runoff.
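The conversion of the probabilistic crop requirement into its deterministic equivalent can be checked numerically. The sketch below does so with the Table 6 coefficients and the step-12 solution x_3; the normal-quantile call simply reproduces the k_{a3} = -0.385 used above.

```python
import numpy as np
from scipy.stats import norm

# Prob[ sum_j c_j x_j >= 0.450 * 331.97e3 ] >= 0.65, with independent normal c_j,
# becomes  sum_j E(c_j) x_j + k_a * sqrt( sum_j VAR(c_j) x_j^2 ) >= target.
E_c  = np.zeros(12); E_c[6:11]  = [3.024e3, 1.568e3, 7.392e3, 3.169e3, 2.576e3]   # wheat..sorghum
SD_c = np.zeros(12); SD_c[6:11] = [0.505e3, 0.249e3, 1.037e3, 0.102e3, 0.249e3]

k_a    = norm.ppf(1.0 - 0.65)        # -> about -0.385
target = 0.450 * 331.97e3

def crop_constraint(x):
    x = np.asarray(x, dtype=float)
    lhs = E_c @ x + k_a * np.sqrt((SD_c ** 2) @ (x ** 2))
    return lhs, lhs >= target

# The vector x_3 of step 12 satisfies the deterministic equivalent:
print(crop_constraint([0, 120.10, 0, 120.10, 96.21, 0, 22.76, 0, 13.10, 0, 0, 0]))
```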
Step 12. Maximize S_1(x) subject to x ∈ D_2. This optimization yields vector x_3,

    x_3 = (0.00, 120.10, .00, 120.10, 96.21, .00, 22.76, .00, 13.10, .00, .00, .00) ha.
With this new vector, x_3, we are able to achieve G_3 = 0.450 with a minimum probability of 0.650. We would like to know, now, how the other goal values and their probabilities trade off against this request by the DM.

To determine 1 - a_1, for i = 1, LIVESTOCK,

    Σ_{j=1}^{12} E(ℓ_j)x_j + k_{a1} [Σ_{j=1}^{12} VAR(ℓ_j)x_j^2]^{1/2} ≥ 52.0 G_1

and for the variances given,

    0.1375 x_2 + 0.0365 x_3 + k_{a1} [(.0550)^2 x_2^2 + (.0130)^2 x_3^2]^{1/2} ≥ 52.0 G_1

and so, at x_3, we have the following relationship,

    (.1375)(120.1) + k_{a1} [(.0550)^2 (120.1)^2]^{1/2} ≥ 52.0 G_1

    16.51 + 6.60 k_{a1} ≥ 52.0 G_1

and for G_1 = 0.370 (from V_1) we have

    k_{a1} ≥ 0.413,

and from normal tables Φ(0.413) = 0.660 = a_1, or 1 - a_1 = .340. That is, we are able to retain the level G_1 = 0.370, but with a minimum probability of achievement of 0.340, as opposed to 0.500 as was the case before the DM made his request of a higher crop level in step 10.
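The probability-of-achievement computations that follow all have the same form: given the mean and standard deviation of an objective at the current solution, the probability of reaching a stated goal level is a normal tail probability. A minimal sketch:

```python
from scipy.stats import norm

def achievement_probability(mean, sd, target):
    """Prob[ Z >= target ] for Z ~ Normal(mean, sd^2): the probability with
    which a stated goal level is achieved."""
    return norm.cdf((mean - target) / sd)

# Livestock at x_3: mean = 0.1375*120.1 = 16.51 AU, s.d. = 0.0550*120.1 = 6.60 AU,
# goal level G_1 = 0.370, i.e. a target of 0.370*52.0 = 19.24 AU.
print(round(achievement_probability(16.51, 6.60, 0.370 * 52.0), 3))   # -> about 0.34
```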
For i = 2, RUNOFF,

    Σ_{j=1}^{12} E(r_j)x_j + k_{a2} [Σ_{j=1}^{12} VAR(r_j)x_j^2]^{1/2} ≥ 511.91 G_2 + 30.00

and for x_3 and the variances,

    (.098)(120.1) + (.990)(120.1) + (1.410)(96.21)
        + k_{a2} [(.152)^2 (120.1)^2 + (.223)^2 (120.1)^2 + (.223)^2 (96.21)^2]^{1/2} ≥ 511.91 G_2 + 30.00

and for G_2 = .480 (from V_1) we have

    k_{a2} ≥ 0.231

and from normal tables Φ(.231) = 0.591 = a_2, or 1 - a_2 = .409.
For i = 4, SEDIMENT,

    -Σ_{j=1}^{12} E(s_j)x_j + k_{a4} [Σ_{j=1}^{12} VAR(s_j)x_j^2]^{1/2} ≥ 7490.94 G_4 - 7549.08

where the E(s_j) and VAR(s_j) are as given in Table 7, and for G_4 = 0.998 (from V_1) we have

    k_{a4} ≥ -0.065

and from normal tables Φ(-0.065) = 0.475 = a_4, or 1 - a_4 = 0.525.

For i = 5, FISH, we notice that x_12 = 0 and, consequently, 1 - a_5 = 0. Vector V_2 can now be submitted to the DM for his consideration,

    V_2 = [(0.370, 0.340), (0.480, 0.409), (0.450, 0.650), (0.998, 0.525), (0., 0.)].
Step 13. The DM has gained knowledge, from V_2, about how the various goals trade off, and is willing to accept the levels and probabilities of achievement for the first four goals. However, he finds fish to have an unacceptable level, G_5 = 0. He would like to have G_5 = .10 with a minimum probability of achievement of 0.70. Now, the optimization problem becomes

    max S_1(x) = Σ_{i=1}^{5} w_i G_i(x)
subject to:

    x_1 + x_2 + . . . + x_12 = 380,

    q_1 x_1 + q_2 x_2 + . . . + q_12 x_12 ≤ 35,

    w_1 x_1 + w_2 x_2 + . . . + w_12 x_12 ≤ W,

    Σ_{j=1}^{12} E(c_j)x_j - 0.385 [Σ_{j=1}^{12} VAR(c_j)x_j^2]^{1/2} ≥ (0.450)(331.97 x 10^3),

    E(f_12)x_12 - 0.525 [VAR(f_12)x_12^2]^{1/2} ≥ (0.100)(14.78 x 10^3),

where Φ(-0.525) = a_5 = 1 - 0.70.
Step 14. The above optimization yields vector x_4,

    x_4 = (0.00, 48.10, .00, 168.00, 122.30, .00, .00, .00, 20.66, .00, .00, .66) ha.
And, again, to determine 1 - a_1, for LIVESTOCK,

    Σ_{j=1}^{12} E(ℓ_j)x_j + k_{a1} [Σ_{j=1}^{12} VAR(ℓ_j)x_j^2]^{1/2} ≥ 52.0 G_1

where x_2 = 48.10, x_3 = 0:

    (0.1375)(48.10) + k_{a1} [(.055)^2 (48.10)^2]^{1/2} ≥ 52.0 G_1

and for G_1 = 0.370 (from V_1) we have

    k_{a1} ≥ 4.777

and from normal tables Φ(4.777) = a_1 = .999, or 1 - a_1 = .000. The DM may then want to know what value of G_1 corresponds to 1 - a_1 = 0.5; setting k_{a1} = 0.000 gives

    G_1 ≤ 0.127.

That is, he will have G_1 = 0.127 with a probability of .5 or better.
To determine 1 - a_2, for RUNOFF,

    Σ_{j=1}^{12} E(r_j)x_j + k_{a2} [Σ_{j=1}^{12} VAR(r_j)x_j^2]^{1/2} ≥ 511.91 G_2 + 30.00

where x_2 = 48.10, x_4 = 168.0, x_5 = 122.30, x_1 = x_3 = x_6 = 0:

    (.098)(48.10) + (.990)(168.0) + (1.410)(122.30)
        + k_{a2} [(.152)^2 (48.1)^2 + (.223)^2 (168.0)^2 + (.223)^2 (122.30)^2]^{1/2} ≥ 511.91 G_2 + 30.00

and for G_2 = .480 (from V_1) we have

    k_{a2} = -68.19

and Φ(-68.19) = a_2 = 0, or 1 - a_2 = 1.0.

For i = 3, CROPS, the solution vector x_4 allows G_3 = 0.450 with a probability of .65, or better.
To determine 1 - a_4, for SEDIMENT,

    -Σ_{j=1}^{12} E(s_j)x_j + k_{a4} [Σ_{j=1}^{12} VAR(s_j)x_j^2]^{1/2} ≥ 7490.94 G_4 - 7549.08

    -(.11)(48.10) - (.24)(168.0) - (.16)(122.3) - (.11)(20.66)
        + k_{a4} [(.12)(48.10)^2 + (.36)(168.0)^2 + (.37)(122.3)^2 + (.12)(20.66)^2]^{1/2}
        ≥ 7490.94 G_4 - 7549.08

and for G_4 = 0.998 (from V_1) we have

    k_{a4} ≥ -0.028

and Φ(-.028) = a_4 = 0.489, or 1 - a_4 = 0.511.

Finally, for i = 5, FISH, our solution vector x_4 again allows G_5 = 0.100 with a probability of 0.70 or better, as previously requested by the DM.
A new alternative solution, vector V 3 , can now be submitted to
the DM for his consideration,
    V_3 = [(0.127, 0.500), (0.480, 1.000), (0.450, 0.650), (0.998, 0.511), (0.100, 0.700)].

It is important to realize that, in order to provide additional water for fish harvesting, G_5 = 0.100 with a probability of 0.700 or better, the last optimization allocated additional runoff (with a total runoff plus rainfall of 400 x 10^3 cu. meters, approximately). This is the reason why G_2 = 0.480 with a probability of 1.0.
Present Value of the Objective Functions
The values of the objective functions considered by the DM, so
far, are values for the first 2-year period in our 30-year time horizon.
If the analysis is to consider this 30-year time horizon, discount rates
can be used to take into account the time value of each objective function. Also, each objective function, z_i(x), can have a different discount rate, r_i. For returns occurring periodically every E years, the present value factor of those returns is given by
    D = d_1 + d_2 + . . . + d_n = (1+r)^{-E} + (1+r)^{-2E} + . . . + (1+r)^{-(n-1)E} + (1+r)^{-nE}

where:  D   = present value factor for a constant term recurring with E-year period,
        E   = period in years,
        n   = number of periods,
        d_i = discount factor for the i-th period.
Let z(x) be any one of our five objective functions. Now, z(x) itself is a random variable (RV) since it has been defined as a linear function of other random variables,

    z(x) = c_1 x_1 + c_2 x_2 + . . . + c_12 x_12

where c_j ~ normal [E(c_j), var(c_j)]. Now let z^i(x) be the objective function value for the i-th period; then

    Z(x) = d_1 z^1(x) + d_2 z^2(x) + . . . + d_n z^n(x)

represents the present value of the objective function over the n-period horizon. Z(x) is also a normal random variable,

    Z(x) ~ NORMAL [E(Z(x)), VAR(Z(x))],

where

    E(Z(x)) = Σ_{i=1}^{n} d_i Σ_{j=1}^{12} E(c_j) x_j ,

    VAR(Z(x)) = Σ_{i=1}^{n} d_i^2 Σ_{j=1}^{12} x_j^2 VAR(c_j) .
A new random variable, y, can now be formed such that

    y = [Z(x) - E(Z(x))] / [VAR(Z(x))]^{1/2}

and y ~ normal (0,1). Now we can state

    Prob [Z(x) ≥ Ẑ(x)] ≥ 1 - β,

or the equivalent deterministic statement

    [Ẑ(x) - E(Z(x))] / [VAR(Z(x))]^{1/2} ≤ K_β

where Φ(K_β) = β, and Φ(·) is the C.D.F. of y; that is,

    E(Z(x)) + K_β [VAR(Z(x))]^{1/2} ≥ Ẑ(x).
And, in terms of our discount factors, for the first 15 periods,

    Σ_{i=1}^{15} Σ_{j=1}^{12} d_i E(c_j) x_j + K_β [Σ_{i=1}^{15} Σ_{j=1}^{12} d_i^2 x_j^2 VAR(c_j)]^{1/2} ≥ Ẑ(x).
To decide on the value of Ẑ(x), commensurate with the values in the vector V_3, we note that

    Ẑ(x) = Σ_{i=1}^{15} d_i [G(x)·(z_max - z_min) + z_min];

substituting this value of Ẑ(x) in the above inequality, one can solve for K_β and for 1 - β, the probability of achievement for this objective function over the entire 30-year period, based on the decision vector x_4.
Assume r_1 = .08, the discount rate for LIVESTOCK; then

    d_i = (1 + r)^{-2i},

    Σ_{i=1}^{15} d_i = [1 - (1+r)^{-30}] / [(1+r)^2 - 1] = (1 - 0.099)/(1.166 - 1) = 5.427,

    Ẑ_1(x_4) = (5.427)[(.127)(52.0)] = 35.839 AU's

and the inequality above becomes

    (5.427)(.1375)(48.1) + K_{β1} [(2.750)(.055)^2 (48.1)^2]^{1/2} ≥ 35.839

    K_{β1} ≥ 0.004

and Φ(0.004) ≈ .5 = β_1, or 1 - β_1 = .500. And, this value happens to be identical to that for 1 - a_1 = .500 in V_3. The above shows that the introduction of a discount rate affects the present value of this objective function over the entire time horizon, but the probability of achievement for that first period is also the probability of achievement for the objective function over the 30-year horizon.
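The discounting arithmetic used in these present-value checks is easy to reproduce. The sketch below computes the two discount-factor sums for the livestock case and the resulting 30-year probability of achievement; the closed-form values quoted in the text (5.427 and 2.750) differ from the machine results only through rounding inside the closed-form expressions.

```python
from scipy.stats import norm

def discount_sums(r, period=2, n=15):
    """Return (sum of d_i, sum of d_i^2) with d_i = (1+r)^(-period*i)."""
    d = [(1.0 + r) ** (-period * i) for i in range(1, n + 1)]
    return sum(d), sum(di * di for di in d)

D, D2 = discount_sums(0.08)          # livestock, r_1 = 0.08
print(round(D, 3), round(D2, 3))     # -> about 5.41 and 2.75

# 30-year livestock check at x_4:
mean   = D * 0.1375 * 48.1                   # discounted expected value
sd     = (D2 ** 0.5) * 0.055 * 48.1          # discounted standard deviation
target = D * (0.127 * 52.0)                  # Z-hat_1(x_4)
print(round(norm.cdf((mean - target) / sd), 3))   # stays essentially 0.50
```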
Assume r_2 = .07, the discount rate for RUNOFF; then

    d_i = (1 + .07)^{-2i},

    Σ_{i=1}^{15} d_i = [1 - (1.07)^{-30}] / [(1.07)^2 - 1] = 6.034,

    Σ_{i=1}^{15} d_i^2 = 3.163,

    Ẑ_2(x_4) = 6.034 [(.480)(541.91 - 30.0) + 30] = 1663.67 x 10^3 cu. m.

and the inequality to satisfy is,

    (6.034)[(.098)(48.10) + (.990)(168.0) + (1.41)(122.30)]
        + K_{β2} {(3.163)[(.152)^2 (48.1)^2 + (.223)^2 (168.0)^2 + (.223)^2 (122.3)^2]}^{1/2} ≥ 1663.67,

    K_{β2} ≥ -4.90

and Φ(-4.90) = β_2 = 0., and 1 - β_2 = 1.0.
Assume r_3 = .07, the discount rate for CROPS; then

    Ẑ_3(x_4) = (6.034)[(.450)(331.97)] = 901.39 x 10^3 kgs.,

and the inequality to satisfy is,

    (6.034)(7.392)(20.66) + K_{β3} [(3.170)(1.037)^2 (20.66)^2]^{1/2} ≥ 901.39,

    K_{β3} ≥ -0.527

and Φ(-0.527) = β_3 = 0.299, and 1 - β_3 = 0.701, which is larger than 0.650, the probability of achievement over the first 2-year period.
Assume r_4 = .06, the discount rate for SEDIMENT; then

    d_i = (1 + .06)^{-2i},

    Σ_{i=1}^{15} d_i = [1 - (1.06)^{-30}] / [(1.06)^2 - 1] = 6.682,

    Σ_{i=1}^{15} d_i^2 = [1 - (1.06)^{-60}] / [(1.06)^4 - 1] = 3.701,

    Ẑ_4(x_4) = (6.682)[(.998)(-58.14 + 7549.08) - 7549.08] = -488.60 cu. m.

and the inequality to satisfy is

    -(6.682)[(.11)(48.10) + (.24)(168.0) + (.16)(122.30) + (.11)(20.66)]
        + K_{β4} {(3.701)[(.12)(48.10)^2 + (.36)(168.0)^2 + (.37)(122.30)^2 + (.12)(20.66)^2]}^{1/2}
        ≥ -488.60,

    K_{β4} ≥ -0.168,

and Φ(-.168) = β_4 = 0.434, 1 - β_4 = 0.566 which, again, is greater than 1 - a_4 = 0.500 for the first 2-year period.
Assume r_5 = .05, the discount rate for FISH; then

    d_i = (1 + .05)^{-2i},

    Σ_{i=1}^{15} d_i = [1 - (1.05)^{-30}] / [(1.05)^2 - 1] = 7.499,

    Σ_{i=1}^{15} d_i^2 = 4.392,

    Ẑ_5(x_4) = (7.499)[(.100)(14.78)] = 11.08 x 10^3 kgs.

and the inequality to satisfy is,

    (7.499)(3.741)(.66) + K_{β5} [(4.392)(2.0)^2 (.66)^2]^{1/2} ≥ 11.08,

    K_{β5} ≥ -2.686,

and Φ(-2.686) = β_5 = .003, and 1 - β_5 = 0.996, again a value much higher than 1 - a_5 = 0.7 as had been requested for G_5 = 0.1 during the first 2-year period.
Accordingly, based on decision vector x_4, the following vectors apply to the entire 30-year period:

    (35.83 AU's, 1663.67 x 10^3 cu. m., 901.39 x 10^3 kgs., -488.60 cu. m., 11.08 x 10^3 kgs.)
    (livestock, runoff, crops, sediment, fish),

and,

    [(0.127, 0.500), (0.480, 1.000), (0.450, 0.701), (0.998, 0.566), (0.100, 0.996)].
Discussion
The preceding example has attempted to demonstrate the applicability of PROTRADE, a multi-objective algorithm, to realistic problems.
The following observations are made:
1. The algorithm allows for a dynamic weighting of the objectives
as the preferences of the DM are articulated. Also, once the
tradeoffs among the objectives are quantified the DM is able to
"change his mind" if he so desires to accomodate his new
expectations.
2. Analysis results for each subperiod and the entire 30-year
period reflect that particular value structure exhibited by
our DM. We can imagine that replacing the single DM with a
group of decision makers able to reflect the needs of each
constituent, in some manner, and able to cast these into a
single vote would alter the choice of an acceptable policy.
3. The randomness of some of the parameters was effectively handled
in the analysis, thus making the algorithm applicable to real
world situations. The random variables considered were assumed
to be normally distributed, for convenience. The mathematical
developments of Chapter 6, however, now make it possible to
consider random variables with any type of distribution, as
the problem in question may dictate.
4. With this uncertainty-handling capability the DM is no longer
limited to considering expected values alone as he trades off
the various objectives against one another. The DM can now
demand probabilities of goal achievement higher (or lower)
than .50 and thus provide for project success and safeguard
more effectively his personal reputation, if he so desires;
5. Computational requirements are kept to a minimum since there
is no need to resort to Monte-Carlo simulations to arrive at or
maintain a given probability of achievement. This is possible
through the deterministic equivalents developed in Chapter 6.
6. Once an acceptable policy is identified for a 2-year subperiod,
present values for the various objective functions over the
entire 30-year period can be obtained readily by considering
the sum of discount rates operating on the appropriate objective, itself a random variable.
7. The use of the cutting-plane technique to solve this nonlinear
problem has been demonstrated to be an effective tool.
8. Now that the DM has been able to see the results above, which
adhere to the physical constraints of the problem and reflect
his own preferences, he is still able to initiate a second
iteration, with new expectations, if he chooses to do so.
CHAPTER 8
CONCLUSIONS, DISCUSSION, AND EXTENSIONS
This chapter contains brief concluding statements about the research objectives outlined at the outset, followed by supportive discussions. These concluding statements will attempt to ascertain what
research objectives were accomplished, in particular whether the problem
stated in Chapter 3 was effectively resolved, and the advantages and
limitations of the approach demonstrated in Chapters 5, 6, and 7. Some
opportunities for further research, as suggested by this work, are also
presented.
1. In response to the question of whether the problem stated
in Chapter 3 was solved, the answer is an affirmative but
cautious "yes."
The decision-making environment postulated was to include a set
of objective functions to be satisfied, a way or procedure for articulating the value structure of the decision maker, a mathematical formulation that would account for the randomness of the various parameters in
the objective functions and set of constraints, a means of generating
tradeoffs among the objective functions and associated probabilities,
and an iterative scheme for updating alternative solutions to conform
with the new expectations of the decision maker. This was accomplished
through the development of the algorithm PROTRADE. Certainly, this algorithm represents but one approach to the problem, and this approach
was pursued within the framework of stochastic mathematical programming
(Goicoechea, Duckstein and Fogel 1976b).
2. PROTRADE represents an interactive algorithm which allows
the DM to assume a dynamic preference structure as he
searches for alternative solutions.
As the algorithm develops a set of alternative solutions and
the DM gains insight into the problem, he has the opportunity to "change
his mind," reevaluate his preferences, and generate another set of alternative solutions. The use of a static goal weighting scheme, as
featured by some of the multi-objective methods reviewed, requires the
DM to develop a complete ordering of the goals before receiving any
information concerning available alternatives and tradeoffs.
3. The utility function approach to the structuring of the
preferences of the DM effectively reconciles his expectations with the physical realities of the problem at hand.
There are several ways to articulate the preferences of the DM
and bring them into the analysis. The utility function approach suggested here is only one of them. An alternative to the exact nature
of the parameters in a utility function, or some other representation,
is that presented by the theory of Fuzzy Subsets (Bellman and Zadeh
1970; Zeleny 1975). When the preference structure of the DM cannot be
sharply defined, the concept of fuzziness and membership may accommodate
the situation at hand more realistically.
4. Objective function values and probabilities of achievement
can be obtained readily and traded against one another.
The problem stated in Chapter 3 introduced another dimension
into the task of multi-objective analysis, that of determining the
probability of achievement associated with each objective function value.
This new dimension added complexity to the solution of the problem, in
both computational requirements and decision making requirements. It is
difficult enough for the decision maker to decide on what objective
values he wants, let alone his having to decide on their probabilities
of achievement as well. The dimensions of the problem are doubled, essentially. This problem, however, is a very realistic one, and every
day problems of this type are dealt with in a variety of economic,
political and social situations, in one way or another.
5. The deterministic equivalents presented in Chapter 6 represent exact developments.
Other approaches in the literature (Sengupta 1969; Allen,
Braswell and Rao 1974; Lingaraj 1974; Lingaraj and Wolfe 1974) are
methods for approximating chance constraints and the resulting nonlinear
programs are generally different, i.e., not equivalent. The change of
variable technique used in Chapter 6 allows the formulation of exact
deterministic equivalents for random variables with any type of probability density function (Goicoechea 1977) thus enhancing the applicability of PROTRADE.
6. It was shown in Chapter 6 that it is possible to deal effectively with random variables in the objective function by transforming the original stochastic programming problem into an equivalent deterministic problem.
This new problem formulation, labeled an a-model, allows an
analytic, closed-form solution to the stochastic problem whose objective
function had been solved previously via Monte-Carlo simulation. The
price paid in the process is that the dimensionality of the problem is
increased (generally doubled), and one may go from a linear problem to
a nonlinear, nonconvex problem.
7. A case study of the Black Mesa region in northern Arizona
provided a scenario in which to implement and test the
algorithm, and showed it to be effective in this case.
The algorithm was effective in providing a framework within
which the elements of randomness, preference, and choice can be related
and manipulated. Analysis results for each subperiod and the entire 30-
year period reflect that particular value structure exhibited by our DM.
This analysis also demonstrated that once an acceptable policy for a 2-
year subperiod is identified, present values for the various objective
functions over the entire 30-year period can be obtained readily by
considering the sum of discount rates operating on the appropriate objective function, itself a random variable, provided the DM's preference
structure is time-invariant. The use of the cutting-plane technique in
solving the many nonlinear problems in the analysis proved this technique to be highly effective with favorble convergence characteristics
and reasonably short computer time requirements. A typical computer run
in the CDC 6400 system at The University of Arizona required some 30
seconds, approximately.
Topics for Future Research
Various topics for future research can be considered as the ideas
and concepts put forth in this research are extended, as follows:
So far in the analysis, we have dealt with mathematical variables of the continuous type. There are many situations of mathematical
interest where these variables are of the discrete type. The approach
developed in Chapter 6 can be extended to functions of discrete random
variables and, this time, instead of using the integration operator, the
summation operator would be used. The convergence characteristics of a
series will be a determining factor in the analysis. The applications
of this analysis to integer programming will be immediate.
There are situations of interest where the coefficients in the
objective function or constraint inequalities are neither fixed quantities nor random variables with a known distribution. Instead, a sample
of measurements is available from which these parameters can be estimated.
When this is the case, regression analysis can be performed to arrive
at those parameters. With the assumption of normality (in the error
made as a measurement is taken) regression analysis shows that those
parameters take the form of random variables normally distributed. The
next step would be to obtain an exact deterministic equivalent as shown
in Chapter 6. With the approach followed in Chapter 6, however, it is
also possible to consider samples where the error is nonnormal and
arrive at the distributions (whatever they may be) of the regression
parameters.
The concept of nondominance, as presented in Chapter 2, makes
reference to the magnitudes of the objective functions as these respond
to changes in a policy in the decision space. Now that we have been
able to relate these magnitudes to their probabilities of achievement
we may want to extend the concept of nondominance to include these
probabilities. Many directions are available and these will dictate
the content of the definitions, concepts, and properties to follow.
Many control problems can be modeled more realistically in a
stochastic environment, rather than a deterministic one. In terms of
control theory two different means of representation are of interest:
input-output, and state variable representation. Whenever an n-th order
differential equation of form:
    d^n y(t)/dt^n + a_n d^{n-1} y(t)/dt^{n-1} + . . . + a_1 y(t)
        = c_{m+1} d^m u(t)/dt^m + c_m d^{m-1} u(t)/dt^{m-1} + . . . + c_2 du(t)/dt + c_1 u(t)
is used to describe an n-th order plant, say, the a i 's and c i 's can take
the form of random variables rather than fixed quantities. The solution
y(t) to this equation will then be in terms of those random variables.
The analysis can then be extended to find the distribution of y(t) in
two dimensions: itself and time t. Whenever random variables appear
in the state variable representation, this can be related to the inputoutput representation through the transfer function of the plant in the
s-domain.
In many problems it is customary to treat imprecision by the
use of probability theory, just as we have done here. It is becoming
increasingly apparent, however, that in the case of many real world problems involving large scale systems (i.e., mass service systems, economic
and social systems, portfolio selection, health problems, . . .) a major
source of imprecision should more properly be labeled "fuzziness" rather
than randomness. By fuzziness we mean a type of imprecision which is
associated with fuzzy sets as put forward by Bellman and Zadeh (1970).
Fuzzy statements such as "x is larger than y," "the stock market has
suffered a sharp decline," or "symptom a is exhibited more strongly by
disease b" convey information despite the imprecision of such statements.
A fuzzy set is a class of objects in which there is no sharp boundary
between those objects that belong to the class and those that do not.
Specifically, let X = {x} denote a collection of objects denoted by x. Then a fuzzy set A in X is a set of ordered pairs,

    A = {(x, μ_A(x)): x ∈ X},

where μ_A(x) is termed the grade of membership of x in A, and μ_A is a function defined on X with values in a space M called the membership space. Generally, M is assumed such that M = [0,1], with 0 and 1 representing the lowest and highest grades of membership, respectively. Fuzzy objectives and fuzzy constraints can be defined precisely as fuzzy sets in the decision space X. A fuzzy decision, then, may be viewed as an intersection of the given objectives and constraints. In our own work the set

    {(z(x), F(z(x))): x ∈ X},

where z(x) is the value of the objective function at x and F(·) is its cumulative distribution, is, in fact, a fuzzy set, and the membership function μ_A was given directly by the cumulative distribution of z(x).
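A tiny illustration of this reading is sketched below: the empirical cumulative distribution of simulated objective-function outcomes serves as the membership function of the fuzzy set. The sample values are made up for illustration.

```python
def membership_from_cdf(samples):
    """Empirical membership function mu(z) = F(z), built from simulated values
    of an objective function z(x), following the fuzzy-set reading above."""
    ordered = sorted(samples)
    n = len(ordered)
    def mu(z):
        # fraction of simulated outcomes not exceeding z
        return sum(1 for v in ordered if v <= z) / n
    return mu

mu = membership_from_cdf([12.0, 15.5, 18.0, 19.2, 22.7])   # illustrative outcomes
print(mu(18.0))   # grade of membership of the outcome 18.0 -> 0.6
```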
APPENDIX A
A CUTTING-PLANE TECHNIQUE
As shown in Chapters 5 and 6 the reduction of a stochastic problem to a deterministic equivalent results in added complexity and,
generally, the problem cannot be solved by applying the simplex or
dual simplex methods (Dantzig 1963; Luenberger 1973) alone. A nonlinear
technique due to Kelly (1960) known as a cutting-plane method was used
to effectively solve the nonlinear problem.
At least two computer routines have been developed (Griffith and Stewart 1960; Monarchi et al. 1973), with some experience detailed. More recent activity (Goicoechea, Duckstein and Fogel 1976a, b) presents a modified version of the method to avoid cycling and speed up convergence
to a solution. Essentially, this method replaces the original constraint
set by a set of half spaces. These half spaces are updated progressively to "cut away" portions of the new constraint set. In the process,
linear programming is applied to iteratively arrive at a solution in
the original feasible region.
Mathematically the nonlinear problem may be stated as follows:
    max G = Σ_{j=1}^{n} c_j x_j + h(y_1, y_2, . . ., y_k)

subject to

    Σ_{j=1}^{n} a_{ij} x_j + g_i(y_1, y_2, . . ., y_k) ≤ b_i ,     i = 1, 2, . . ., m,

    L_p ≤ y_p ≤ U_p ,     p = 1, 2, . . ., k,

    x_j ≥ 0 ,     j = 1, 2, . . ., n.
The problem may now be linearized in the region about the point (x_j^0, y_i^0) by expansion as a Taylor's series, ignoring terms of order higher than one. The linearized form is,

Maximize (or minimize):

    G' = Σ_{j=1}^{n} c_j x_j + h(y_1^0, y_2^0, . . ., y_k^0) + Σ_{r=1}^{k} [∂h(y_1^0, . . ., y_k^0)/∂y_r] (y_r - y_r^0)

subject to:

    Σ_{j=1}^{n} a_{ij} x_j + g_i(y_1^0, . . ., y_k^0) + Σ_{r=1}^{k} [∂g_i(y_1^0, . . ., y_k^0)/∂y_r] (y_r - y_r^0) ≤ b_i ,     i = 1, 2, . . ., m,

    L_p - y_p^0 ≤ y_p - y_p^0 ≤ U_p - y_p^0 ,     p = 1, 2, . . ., k,

and x_j ≥ 0, j = 1, 2, . . ., n. Also, the restriction

    |Δy_i| ≤ m_i ,     with Δy_i = y_i - y_i^0 ,

is added to control the convergence rate with step size m_i.
The linearization procedure and optimization algorithm are illustrated in Figures A-1 and A-2, respectively. A numerical example is
also presented for illustrative purposes.
Figure A-1. Cutting-plane method.
Figure A-2. Program SEARCH, the optimization algorithm. (Flow chart: the nonlinear programming problem is linearized about the current point, the standard matrix form of the LP is set up, and either the simplex or the dual-simplex algorithm is applied depending on whether the basic solution is feasible; a new LP problem is then formed about the updated point until the procedure converges to the solution of the nonlinear problem, x.)
Numerical Example

The problem considered for this example is the following:

maximize:

    f(x) = 3 x_1 + x_2

subject to:

    g_1(x) = x_1^2 + x_2^2 - 25 ≤ 0,

    g_2(x) = x_1^2 + (x_2 - 5)^2 - 25 ≤ 0,

    x ≥ 0.

The application of the Taylor's expansion and the addition of arbitrary bounds on (x_1 - x_1^k) and (x_2 - x_2^k) yields the following:

    f(x) = 3(x_1 - x_1^k) + (x_2 - x_2^k) + 3 x_1^k + x_2^k,

    g_1(x) = 2 x_1^k (x_1 - x_1^k) + 2 x_2^k (x_2 - x_2^k) - 25 + (x_1^k)^2 + (x_2^k)^2 ≤ 0,

    g_2(x) = 2 x_1^k (x_1 - x_1^k) + 2 (x_2^k - 5)(x_2 - x_2^k) - 25 + (x_1^k)^2 + (x_2^k - 5)^2 ≤ 0,

    -2 ≤ x_1 - x_1^k ≤ 2,
    -2 ≤ x_2 - x_2^k ≤ 2.

Repeated iterations yield the sequence

    {x^k} = {(1, 4), (3, 3), (4.67, 2.50), . . ., (4.33, 2.50)}

as illustrated in Figures A-3 and A-4.
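A compact sketch of the successive-linearization iteration for this example is given below. It is not the SEARCH program itself: each linearized subproblem is handed to a general-purpose LP solver rather than to the simplex/dual-simplex logic of Figure A-2, and the exact sequence of iterates may therefore differ from the one listed above, although the procedure converges to the same point (4.33, 2.50).

```python
import numpy as np
from scipy.optimize import linprog

# max 3*x1 + x2  s.t.  x1^2 + x2^2 <= 25,  x1^2 + (x2-5)^2 <= 25,  x >= 0,
# solved by repeated linearization about x^k with the +/-2 step bounds above.

def g(x):
    return np.array([x[0]**2 + x[1]**2 - 25.0,
                     x[0]**2 + (x[1] - 5.0)**2 - 25.0])

def grad_g(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [2.0 * x[0], 2.0 * (x[1] - 5.0)]])

x = np.array([1.0, 4.0])                       # starting point used in the text
for _ in range(20):
    A = grad_g(x)                              # linearized: g(x^k) + A (x - x^k) <= 0
    b = A @ x - g(x)
    bounds = [(max(0.0, xi - 2.0), xi + 2.0) for xi in x]   # step-size restriction
    res = linprog(c=[-3.0, -1.0], A_ub=A, b_ub=b, bounds=bounds, method="highs")
    step = np.linalg.norm(res.x - x)
    x = res.x
    if step < 1e-4:
        break
print(np.round(x, 2))    # -> approximately [4.33, 2.5]
```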
Figure A-3. First iteration, cutting-plane method.
Figure A-4. Second iteration, cutting-plane method.
147
[Pages 148-152: computer printout accompanying the numerical example; the scanned output is not legible in this copy.]
APPENDIX B
MULTI-OBJECTIVE COMPUTER PROGRAM (PROGRAM SEARCH)
153
154
*NON-LINEAR PROGRAMMING MODEL*
THIS NON-LINEAR PROGRAMMING ALGORITHM IS BASED ON THE
CUTTING-PLANE METHOD AS MODIFIED BY R.E. GRIFFITH AND
R.A. STEWART. BRIEFLY, IT IS BASED ON THE IDEA THAT THE
CONSTRAINT SET CAN BE REPRESENTED AS THE INTERSECTION OF
A SUFFICIENT NUMBER OF HALF-SPACES THAT CONTAIN IT. THE
MAIN TOOL OF THE PROCEDURE IS THE REPRESENTATION OF THE
OBJECTIVE FUNCTION AND CONSTRAINTS BY FIRST-ORDER APPROXIMATIONS (USING TAYLORS EXPANSION). THE RESULTING HYPERPLANES CONSTITUTE CUTS, CUTTING OFF PIECES OF THE
POLYHEDRAL CONSTRAINT SET CONTAINING THE ORIGINAL ONE.
LINEAR PROGRAMMING IS THEN APPLIED TO ITERATIVELY ARRIVE
AT A SOLUTION IN THE FEASIBLE REGION. USE OF THE DUAL
SIMPLEX METHOD IS MADE AVAILABLE WHENEVER A PREVIOUS LP
SOLUTION IS INFEASIBLE, OR A NEW CONSTRAINT(S) IS ADDED
TO THE PROBLEM.
DIMENSION Y(33),W(33),D(11,3),C(3),YMAX(33)
1 ,WN(33),SN(33),AN(33),RN(33),CN(33)
COMMON/BLOCK3/XSTORE(100,2),DIFF(2)
COMMON/BLOCK5/NCYCLE,J8
COMMON SM(60,130),NBASIC(60),YVALUE(130)
COMMON/BLOCK1/DELX(33),YIN(33),ZVALUE
NAMELIST/DATA1/NR,NC,Y,NV,J9,J8,STEP,NCON,W,DPC
1 ,YMAX,S
2 ,WN,SN,AN,RN,CN,DELW,DELS,DELA,DELR,DELC
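*AS AN ILLUSTRATION ONLY, A NAMELIST INPUT CARD FOR THE READ AT
*STATEMENT 400 WOULD TAKE THE GENERAL FORM SHOWN BELOW. THE VALUES
*ARE HYPOTHETICAL AND INDICATE ONLY THE FORMAT, NOT THE DATA USED
*IN THE STUDY.
*    $DATA1  NR=30, NC=130, NV=11, NCON=8, J8=2, J9=2, STEP=10.,
*            Y=11*0., W=11*1., YMAX=11*500.  $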
400 READ(5,DATA1)
IF(EOF(5))999,420
*DISPLAY ONE CASE OF INPUT DATA*
420 WRITE(6,DATA1)
*SAMPLE PROBLEM FOR PROGRAM CHECK-OUT - INITIAL LP SOLUTION*
*SET-UP INITIAL LP TABLEAU, SM(5,16). SLACK VARIABLE COEFFICIENTS ARE ALSO INTRODUCED IN PREPARATION FOR EITHER THE
SIMPLEX OR DUAL-SIMPLEX ALGORITHM.
DO 5 I=1,NV
YIN(I)= Y(I)
5 CONTINUE
NCYCLE= 0
421 NCYCLE= NCYCLE +1
STEP1= STEP
LPP= 0
DO 423 I=1,100
DO 422 J=1,2
155
XSTORE(I,J)= 0.
422 CONTINUE
423 CONTINUE
100 LPP= LPP+ 1
DO 31 I=1,NR
DO 30 J=1,NC
SM(I,J)= 0.
30 CONTINUE
31 CONTINUE
DO 35 J=2,NR
JJ=J+2*NN
SM(J,JJ)= 1.
35 CONTINUE
NTABLE= 1
I=0
NN=NCON+2
DO 36 J=NN,27
I=I+2
JJ=J-NCON
IF(YIN(JJ-1) .NE. YMAX(JJ-1))
1SIZE1= STEP1/(YMAX(JJ-1) -YIN(JJ-1))
IF(YIN(JJ-1) .NE. 0.)
1SIZE2=STEP1/ABS(YIN(JJ-1))
IF(YIN(JJ-1) .EQ. YMAX(JJ-1))SIZE1=1.E+20
IF(YIN(JJ-1) .EQ. 0.)SIZE2=1.E+20
SM(J,I)= AMAX1(1.,SIZE1)
SM(J,I+1)=AMAX1(1.,SIZE2)
JK=JJ-1
IF(YIN(JK) .LT. 1.E-05 .AND. JK .LE. 10)YIN(JK)=.00001
36 CONTINUE
NNR=2*NV
DO 37 J=2,NNR,2
JR=J/2
SM(1,J)=-W(JR)*S
SM(1,J+1)= W(JR)*S
37 CONTINUE
SM(2,2)= 1.
SM(2,3)=-1.
SM(2,4)= 1.
SM(2,5)=-1.
SM(2,6)= 1.
SM(2,7)= -1.
SM(3,8)= 1.
SM(3,9)=-1.
SM(3,10)= 1.
SM(3,11)= -1.
SM(3,12)= 1.
SM(3,13)= -1.
SM(3,14)= 1.
SM(3,15)= -1.
156
IF(YIN(9) .GT. BL)EE2=CL
SM(9,NC)=C(3) -(D(2,3)*EE1*YIN(3)
1 +D(4,3)*YIN(6)
1 +D(5,3)*EE2*YIN(9)
2 +D(7,3)*YIN(12)
2 +D(9,3)*YIN(15) +D(11,3)*YIN(18))
NNR=2*NV
DO 18 J=2,NNR,2
JR=J/2
SM(28,J)=SN(JR)
SM(28,J+1)=-SN(JR)
SM(29,J)=-RN(JR)
SM(29,J+1)= RN(JR)
SM(30,J)=AN(JR)
SM(30,J+1)=-AN(JR)
18 CONTINUE
SUMC1=0.
SUMC2=0.
SUMC3=0.
DO 17 I=1,NV
SUMC1= SUMC1+SN(I)*YIN(I)
SUMC2= SUMC2+RN(I)*YIN(I)
SUMC3= SUMC3+AN(I)*YIN(I)
17 CONTINUE
SM(28,NC)=932.86 -SUMC1
SM(29,NC)=718.65 +SUMC2
SM(30,NC)= 497228. -SUMC3
DO 19 I=2,NR
IF(SM(I,NC) .GT. -1.E-08 .AND. SM(I,NC) .LT. 1.E-08)
1 SM(I,NC)=0.
19 CONTINUE
DO 20 I=10,27
SM(I,NC)= STEP1
20 CONTINUE
WRITE(6,7)NTABLE,LPP
7 FORMAT(1H1,10X,14HLP TABLEAU NO.,I2,/
1 11X,14HLP PROBLEM NO.,I2)
IF(NCYCLE .LT. J9)CALL OUTPUT1(NR,NC,NTABLE,LPP)
IF(LPP .EQ. 1)
1CALL OUTPUT4(NR)
200 NTABLE= NTABLE+ 1
*SELECT SIMPLEX OR DUAL-SIMPLEX ALGORITHM,
KFLAG1= 1, SIMPLEX,
= 2, DUAL-SIMPLEX.
KFLAG1= 1
DO 40 L=2,NR
IF(SM(L,NC) .LT. -1.E-08)KFLAG1=2
40 CONTINUE
IF(KFLAG1 .EQ. 1)CALL SIMPLEX(NR,NC,K)
157
IF(KFLAG1 .EQ. 2)CALL DUAL(NR,NC,K)
IF( K .EQ. 0)GO TO 999
*WRITE LP TABLEAU, LIST OF VARIABLE VALUES AND OBJECTIVE
FUNCTION VALUE.
NCC=NC-1
DO 68 I4=2,NR
DO 67 J4=2,NCC
IF(SM(I4,J4) .LT. .999 .OR. SM(I4,J4) .GT. 1.001)GO TO 67
JJ4= J4
SUM= 0.
DO 66 I=1,NR
SUM=SUM+ABS(SM(I,JJ4))
66 CONTINUE
TOT=ABS(1.-SUM)
IF(TOT .LT. .0001)NBASIC(I4-1)=JJ4-1
IF(TOT .LT. .0001)GO TO 68
67 CONTINUE
68 CONTINUE
WRITE(6,7)NTABLE,LPP
IF(NCYCLE .LT. J9)CALL OUTPUT1(NR,NC,NTABLE,LPP)
NC1= NC-2
DO 70 KK1=1,NC1
YVALUE(KK1)= 0.
70 CONTINUE
NR1=NR-1
DO 72 KK2=1,NR1
N= NBASIC(KK2)
IF(N .EQ. 0)GO TO 72
YVALUE(N)= SM(KK2+1,NC)
72 CONTINUE
*CHECK FOR OPTIMAL, FEASIBLE SOLUTION,
KFLAG2= 1, EITHER NOT OPTIMAL OR NOT FEASIBLE,
= 2, OPTIMAL AND FEASIBLE.
KFLAG2= 2
DO 60 K2=2,NR
IF(SM(K2,NC) .LT. 0.)KFLAG2=1
60 CONTINUE
IF(KFLAG2 .EQ. 1)GO TO 200
NCC= NC-1
DO 65 K3= 2,NCC
IF(SM(1,K3).LT. 0.)KFLAG2= 1
65 CONTINUE
IF(KFLAG2 .EQ. 1)GO TO 200
*(AT THIS POINT THE LP SOLUTION IS OPTIMAL AND FEASIBLE)
158
DO 80 K1=1,NV
K11= 2*K1-1
K12= 2*K1
DELX(K1)= YVALUE(K11)- YVALUE(K12)
80 CONTINUE
SUM= 0.
DO 39 I=1,NV
SUM= SUM+ W(I)*YIN(I)
39 CONTINUE
ZVALUE= SUM
DO 85 I=1,NV
YIN(I)= DELX(I)+YIN(I)
85 CONTINUE
XSTORE(LPP,1)=YIN(4)
XSTORE(LPP,2)=YIN(5)
*PRINT OUT LP SOLUTION VECTOR.
CALL OUTPUT2(NV)
CALL OUTPUT3(NCYCLE,NV)
*CHECK FOR CONVERGENCE TO THE SOLUTION OF THE NON-LINEAR
PROBLEM. THE ALGORITHM ACHIEVES CONVERGENCE TO THE SOLUTION
WHENEVER (D+XI - D-XI)= 0.0 FOR ALL I (I= 1,..., NO. OF VARIABLES).
KFLAG3= 1, CONTINUE APPLYING THE CUTTING-PLANE METHOD,
= 2, CONVERGENCE TO A SOLUTION HAS BEEN ACHIEVED.
KFLAG3= 2
DO 90 K3=1,NV
IF(ABS(DELX(K3)).GT. 0.50)KFLAG3= 1
90 CONTINUE
IF(LPP .LT. 3)GO TO 94
DIFF(1)=XSTORE(LPP,1) -XSTORE(LPP-2,1)
DIFF(2)=XSTORE(LPP,2)- XSTORE(LPP-2,2)
IF(DIFF(1) .EQ. 0. .AND. DIFF(2) .EQ. 0.)STEP1= STEP1/2.
94 IF(KFLAG3 .EQ. 1)GO TO 100
IF(KFLAG3 .EQ. 2)WRITE(6,95)
95 FORMAT(1H ,10X,53HCONVERGENCE TO THE SOLUTION OF THE NON-LINEAR PROBLEM,
1 /11X,17HHAS BEEN ACHIEVED)
*AT THIS POINT THE PROCEDURE HAS CONVERGED TO THE SOLUTION
OF THE NON-LINEAR PROBLEM. EVALUATE THE ACHIEVED ASPIRATION
LEVEL VECTOR ZA=(Z1,Z2,Z3), PREPARE THE NEW ASPIRATION LEVEL
VECTOR G=(G1,G2,G3), RESET THE INITIAL ALLOCATION VECTOR
X=(X1,X2), AND BEGIN THE NEXT CYCLE.
159
*CHECK FOR CONVERGENCE OF THE ASPIRATION LEVEL VECTOR ZA=(Z1,
Z2,Z3) TO STEADY-STATE VALUES. CONVERGENCE IS ACHIEVED WHENEVER
/G(I)-GNEW(I)/ .LE. EPSI1 FOR ALL I (I= 1,2,3).
KFLAG4= 1, CONTINUE CYCLING,
= 2, CONVERGENCE HAS BEEN ACHIEVED
2 11X,35HALTERNATIVES TO THE DECISION-MAKER./)
GO TO 400
999 STOP
END
SUBROUTINE SIMPLEX(NR,NC,K)
*SIMPLEX ALGORITHM. THE SIMPLEX ALGORITHM STARTS
WITH A SYSTEM IN BASIC, FEASIBLE FORM.
NR= NO. OF ROWS IN SYSTEM MATRIX SM(5,16),
NC= NO. OF COLUMNS IN SYSTEM MATRIX,
K= BOUNDEDNESS INDICATOR,
= 0, SOLUTION IS UNBOUNDED,
= 1, BASIC, FEASIBLE SOLUTION IS AVAILABLE.
COMMON SM(60,130),NBASIC(60),YVALUE(130)
COMMON/BLOCK5/NCYCLE,J8
WRITE(6,5)
5 FORMAT(1H ,20X,20H*SUBROUTINE SIMPLEX*,//)
*FIND CMOST, MOST NEGATIVE COST COEFFICIENT SM(1,J).
J1=1
K=0
CMOST=0.
NCC= NC-1
DO 10 J=2,NCC
IF(SM(1,J).GE.CMOST)GO TO 10
J1= J
CMOST= SM(1,J1)
10 CONTINUE
IF(CMOST.EQ.0.)RETURN
*(MAXIMIZATION PROCESS).
FIND R= LEAST POSITIVE VALUE OF RATIO SM(I,16)/SM(I,J1)
OVER ALL I, WHERE THE SM(I,16) ARE NON-NEGATIVE ELEMENTS
AND THE SM(I,J1) ARE POSITIVE ELEMENTS.
I1= 1
R= 1.E+20
DO 20 I2= 2,NR
IF(SM(I2,NC) .LE. 0.)GO TO 20
IF(SM(I2,J1).LE. 0.)GO TO 20
IF(SM(I2,NC) .LT. 1.E-08)GO TO 20
RR= SM(I2,NC)/SM(I2,J1)
160
IF(NCYCLE .LT. J8)
1WRITE(6,15)RR
15 FORMAT(1H ,35X,4HR = ,E11.3)
IF(RR .LT. R)I1= I2
IF(RR .LT. R)R= RR
K=1
20 CONTINUE
*PIVOT ON ROW I1 AND COLUMN J1 OF SYSTEM MATRIX SM(5,16)
WRITE(6,25)I1,J1
25 FORMAT(1H ,25X,14HPIVOT POINT  (,I2,1H,,I2,1H),/)
ZTEST= 1.E-06
IF(SM(I1,J1) .LT. ZTEST) WRITE(6,30)
30 FORMAT(1H ,10X,24HNO PIVOT POINT AVAILABLE)
IF(SM(I1,J1) .LT. ZTEST)K=0
IF(SM(I1,J1) .LT. ZTEST)RETURN
CALL PIVOT(NR,NC,I1,J1)
RETURN
END
SUBROUTINE DUAL(NR,NC,K)
*DUAL SIMPLEX ALGORITHM. THE DUAL SIMPLEX ALGORITHM
IS USED FOR BASIC BUT INFEASIBLE SYSTEMS, EITHER
OPTIMAL OR NON-OPTIMAL, TO REMOVE THE INFEASIBILITY.
NR= NO. OF ROWS IN SYSTEM MATRIX SM(5,16),
NC= NO. OF COLUMNS IN SYSTEM MATRIX,
K= BOUNDEDNESS INDICATOR,
= 0, SOLUTION IS UNBOUNDED, PIVOT ELEMENT IS
NOT AVAILABLE,
= 1, PIVOT ELEMENT IS AVAILABLE.
COMMON/BLOCK5/NCYCLE,J8
COMMON SM(60,130),NBASIC(60),YVALUE(130)
WRITE(6,5)
5 FORMAT(1H ,30X,17H*SUBROUTINE DUAL*,//)
*FIND BMOST, THE MOST NEGATIVE RIGHT-HAND SIDE ELEMENT
SM(I,16) OF THE SYSTEM MATRIX.
K= 0
BMOST=0.
DO 10 I=2,NR
IF(SM(I,NC) .GE. BMOST)GO TO 10
I1= I
BMOST= SM(I1,NC)
10 CONTINUE
IF(BMOST .EQ. 0.)RETURN
L=NC-1
*(MAXIMIZATION PROCESS).
FIND R= LARGEST RATIO OF SM(1,L)/SM(I1,L) OVER ALL L WHERE THE
SM(1,L) ARE POSITIVE COST COEFFICIENTS AND THE SM(I1,L) ARE
NEGATIVE ELEMENTS.
161
J1=1
R= -1.E+20
DO 20 I2=2,L
IF(SM(1,I2).LE. 0.)GO TO 20
IF(SM(I1,I2).GE. 0.)GO TO 20
RR= SM(1,I2)/SM(I1,I2)
IF(RR .GT. R)J1= I2
IF(RR .GT. R)R= RR
K=1
IF(NCYCLE .LT. J8)
1WRITE(6,15)I2,RR
15 FORMAT(1H ,35X,I2,5X,4HR = ,E11.3)
20 CONTINUE
WRITE(6,25)I1,J1
25 FORMAT(1H ,35X,14HPIVOT POINT  (,I2,1H,,I2,1H),/)
ZTEST= 1.E-06
IF(ABS(SM(I1,J1)) .LT. ZTEST)WRITE(6,30)
30 FORMAT(1H ,10X,24HNO PIVOT POINT AVAILABLE)
IF(ABS(SM(I1,J1)) .LT. ZTEST)RETURN
CALL PIVOT(NR,NC,I1,J1)
RETURN
END
SUBROUTINE PIVOT(NR,NC,I,J)
*THIS SUBROUTINE PIVOTS ON ELEMENT (I,J) OF THE SYSTEM MATRIX
SM(5,16) TO UPDATE IT.
COMMON/BLOCK5/NCYCLE,J8
COMMON SM(60,130),NBASIC(60),YVALUE(130)
IF(NCYCLE .LT. J8)
1WRITE(6,5)
5 FORMAT(1H ,40X,18H*SUBROUTINE PIVOT*,//)
ZTEST= 1.E-06
VALUE= 1./SM(I,J)
DO 10 N= 1,NC
SM(I,N)= VALUE*SM(I,N)
10 CONTINUE
DO 30 M=1,NR
IF(M.EQ.I)GO TO 30
IF(SM(M,J) .EQ. 0.)GO TO 30
Q= SM(M,J)
DO 20 NN=1,NC
SS= SM(M,NN)-SM(I,NN)*Q
IF(ABS(SS).LT.ZTEST)SS= 0.
SM(M,NN)= SS
20 CONTINUE
30 CONTINUE
RETURN
END
162
SUBROUTINE OUTPUT1(NR,NC,N,L)
*THIS SUBROUTINE PRINTS OUT THE LP TABLEAU,
MATRIX SM(5,16).
COMMON SM(60,130),NBASIC(60),YVALUE(130)
WRITE(6,5)N,L
5 FORMAT(1H1,10X,14HLP TABLEAU NO.,I2,/
1 11X,14HLP PROBLEM NO.,I2)
WRITE(6,10)
10 FORMAT(1H ,2X,24(5H*****))
WRITE(6,12)
12 FORMAT(1H ,2X,12(10H*         ),1H*)
WRITE(6,14)
14 FORMAT(1H+,4X,5HBASIC,17X,2HY1,8X,2HY2,8X,2HY3,8X,2HY4,8X,2HY5,
1 8X,2HY6,8X,2HY7,8X,2HY8,8X,2HY9,8X,3HY10)
WRITE(6,12)
WRITE(6,16)
16 FORMAT(1H+,3X,8HVARIABLE,5X,1HZ)
WRITE(6,18)
18 FORMAT(1H ,2X,1H*,9X,22(5H     ))
WRITE(6,12)
WRITE(6,12)
WRITE(6,20)
20 FORMAT(1H+,2X,6H*  YI ,8X,1HZ,8X,4HD+X1,6X,4HD-X1,6X,4HD+X2,6X,
1 4HD-X2,6X,4HD+X3,6X,4HD-X3,6X,4HD+X4,6X,4HD-X4,6X,4HD+X5,6X,
2 4HD-X5)
WRITE(6,22)
22 FORMAT(1H ,2X,24(5H*****))
WRITE(6,12)
WRITE(6,12)
WRITE(6,24)(SM(1,J),J=1,11)
24 FORMAT(1H+,13X,11(E9.3,1X))
WRITE(6,22)
NRR= NR-1
DO 50 I=1,NRR
WRITE(6,12)
WRITE(6,12)
WRITE(6,26) NBASIC(I),(SM(I+1,J),J=1,11)
26 FORMAT(1H+,6X,I2,5X,11(E9.3,1X))
WRITE(6,22)
50 CONTINUE
WRITE(6,62)
62 FORMAT(1H1,2X,24(5H*****))
WRITE(6,64)
64 FORMAT(1H ,2X,12(10H*         ),1H*)
WRITE(6,64)
WRITE(6,66)
66 FORMAT(1H+,6X,*Y11*,7X,*Y12*,7X,*Y13*,7X,*Y14*,
1 7X,*Y15*,7X,*Y16*,7X,*Y17*,7X,*Y18*,7X,*Y19*,7X,
2 *Y20*,7X,*Y21*,7X,*RHS*)
WRITE(6,68)
68 FORMAT(1H ,2X,24(5H     ))
WRITE(6,64)
WRITE(6,64)
WRITE(6,68)
DO 80 I=1,NR
WRITE(6,64)
WRITE(6,64)
WRITE(6,70)(SM(I,J),J=12,NC)
70 FORMAT(1H+,3X,12(E9.3,1X))
WRITE(6,68)
80 CONTINUE
RETURN
END
SUBROUTINE OUTPUT2(NV)
*THIS SUBROUTINE PRINTS OUT THE LP SOLUTION VECTOR
AND STATUS OF /D+XI - D-XI/ FOR ALL THE I-CONSTRAINTS.
COMMON/BLOCK1/DELX(33),YIN(33),ZVALUE
WRITE(6,20)
20 FORMAT(1H1,21X,*DELX VECTOR*)
DO 30 I=1,NV
WRITE(6,25)I,DELX(I)
25 FORMAT(1H ,25X,I2,E15.3)
30 CONTINUE
RETURN
END
SUBROUTINE OUTPUT4(NR)
DIMENSION RSM(10,52)
COMMON SM(60,130),NBASIC(60),YVALUE(130)
DO 25 N=10,130,10
IC1= N-9
IC2=0
DO 10 IC=IC1,N
IC2=IC2+1
DO 5 IR=1,NR
RSM(IC2,IR)=SM(IR,IC)
5 CONTINUE
10 CONTINUE
WRITE(6,15)N
15 FORMAT(1H1,I3)
WRITE(6,20)((RSM(I,J),I=1,10),J=1,52)
20 FORMAT(1H ,10E10.3)
25 CONTINUE
RETURN
END
164
SUBROUTINE OUTPUT3(NCYCLE,NV)
*THIS SUBROUTINE PRINTS OUT THE ASPIRATION LEVEL VECTOR
Z, THE RESOURCE ALLOCATION VECTOR X, AND THE ACHIEVED
ASPIRATION LEVEL VECTOR ZA FOR EACH CYCLE.
COMMON/BLOCK1/DELX(33),YIN(33),ZVALUE
WRITE(6,5)
5 FORMAT(1H1,21X,*RESOURCE ALLOCATION VECTOR, X(I)*)
DO 30 I=1,NV
WRITE(6,25)I,YIN(I)
25 FORMAT(1H ,25X,I2,F15.3)
30 CONTINUE
WRITE(6,35)ZVALUE
35 FORMAT(1H ,40X,*ZVALUE =*,E15.6)
RETURN
END
$DATA1
APPENDIX C
COMPUTER RESULTS
Run #1.
Maximize water runoff (10^3 cu. meters)
Resource Allocation Vector, X(I)
 1      .000
 2     0.000
 3     -.000
 4      .000
 5   355.338
 6    24.662
 7      .000
 8      .000
 9      .000
10     -.000
11     -.000
Z value = .541912E+03
Convergence of the aspiration level vector ZA to steady-state values has
been achieved. Submit list of alternatives to the decision-maker.
Run #2. Maximize selected crops, Iteration 1
Resource Allocation Vector, X(I)
 1      .000
 2      .000
 3      .000
 4      .000
 5      .000
 6      .000
 7    16.295
 8      .000
 9     0.000
10     0.000
11     0.000
Z value = .108525E+03
165
166
Run #3.
Maximize selected crops
Resource Allocation Vector, X(I)
 1      .000
 2      .000
 3      .000
 4      .000
 5      .000
 6      .000
 7    51.458
 8      .000
 9      .000
10      .000
11     -.000
Z value = .342710E+03

Run #4.
Runoff availability (10^3 cu. meters)
Resource Allocation Vector, X(I)
 1      .000
 2      .000
 3      .000
 4      .000
 5   263.500
 6    65.500
 7      .000
 8      .000
 9      .000
10     -.000
11     -.000
Z value = .493728E+03

Run #5.
Runoff availability (10^3 cu. meters)
Resource Allocation Vector, X(I)
 1      .000
 2      .000
 3      .000
 4      .000
 5   229.286
 6    80.714
 7      .000
 8      .000
 9      .000
10     -.000
11     -.000
Z value = .475777E+03
167
Run #6.
Maximize livestock (AU)
Resource Allocation Vector, X(I)
 1      .000
 2   350.000
 3     -.000
 4      .000
 5      .000
 6      .000
 7      .000
 8      .000
 9      .000
10     -.000
11     -.000
Z value = .481250E+02
APPENDIX D
ALTERNATIVE DEVELOPMENT
An alternative development leading to some of the results presented in Chapter 6 has been suggested by Ferenc Szidarovszky (1976)
and it is outlined here.
The following two statements are well known and are offered
without proof.
Lemma 1. If $\xi$ is a random variable with exponential probability
density function (PDF) having parameter $\lambda > 0$, then $a\xi$ has an
exponential PDF with parameter $\lambda/a$, where $a > 0$.

Lemma 2. If $\xi$ and $\eta$ are independent random variables with
PDF's $f$ and $g$, respectively, then the PDF of $\xi + \eta$ is given by

$$\int f(x - t)\, g(t)\, dt.$$
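Since the random variables involved here are nonnegative, the convolution in Lemma 2 reduces to

$$h(x) = \int_0^{x} f(x - t)\, g(t)\, dt, \qquad x > 0,$$

which is the form used below.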
Now, from Lemma 1 and with reference to the material presented in
Theorems 1 and 3 in Chapter 6, $z = c_1 x_1$ has the distribution

$$f(z) = \frac{\lambda_1}{x_1}\, e^{-\frac{\lambda_1}{x_1} z}, \qquad \text{for } z > 0.$$

From Lemma 2 above, $y = c_1 x_1 + c_2 x_2$ has the distribution
$$h(y) = \int_0^{y} f(y - t)\, g(t)\, dt
       = \int_0^{y} \frac{\lambda_1}{x_1}\, e^{-\frac{\lambda_1}{x_1}(y - t)}\;
         \frac{\lambda_2}{x_2}\, e^{-\frac{\lambda_2}{x_2} t}\, dt$$

$$= \frac{\lambda_1 \lambda_2}{x_1 x_2}\, e^{-\frac{\lambda_1}{x_1} y}
   \int_0^{y} e^{-t\left(\frac{\lambda_2}{x_2} - \frac{\lambda_1}{x_1}\right)} dt
 = \frac{\lambda_1 \lambda_2}{x_1 x_2}\, e^{-\frac{\lambda_1}{x_1} y}\;
   \frac{1 - e^{-y\left(\frac{\lambda_2}{x_2} - \frac{\lambda_1}{x_1}\right)}}
        {\frac{\lambda_2}{x_2} - \frac{\lambda_1}{x_1}}$$

so that

$$h(y) = \frac{\lambda_1 \lambda_2}{\lambda_1 x_2 - \lambda_2 x_1}
         \left[e^{-\frac{\lambda_2}{x_2} y} - e^{-\frac{\lambda_1}{x_1} y}\right],
         \qquad y > 0,$$

$$h(y) = 0, \qquad \text{otherwise},$$
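As a check on this expression, when the two rates coincide, $\lambda_1/x_1 = \lambda_2/x_2 = a$, the formula becomes indeterminate; taking the limit gives

$$h(y) = a^2\, y\, e^{-a y}, \qquad y > 0,$$

the familiar density of the sum of two independent, identically distributed exponential variables.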
as previously shown in Theorem 1. Theorem 3 in Chapter 6 can also be
proven in a similar manner.
LIST OF REFERENCES
Allen, F. M., R. Braswell and P. Rao. 1974. "Distribution-Free
Approximations for Chance-Constraints," Oper. Res. 22, pp.
610-612.
Agricultural Experiment Station. 1968. "Consumptive Use of Water
by Crops in Arizona," Technical Bulletin 169, College of Agriculture, The University of Arizona, Tucson.
Arizona Crop and Livestock Reporting Service. 1975. Arizona Agricultural Statistics, Bulletin S-10, Phoenix, Arizona.
Aumann, R. J. 1964. "Subjective Programming," in Human Judgments and
Optimality, edited by M. W. Shelby II and G. L. Bryan, Chap. 12,
John Wiley, New York, pp. 217-242.
Babbar, M. M. 1955. "Distribution of Solutions of a Set of Linear
Equations with Applications to Linear Programming," Journal of
American Statistical Associations, Vol. 50, pp. 155-164.
Bartlett, E. T. 1974. "A Decision-Aiding Model for Planning Optimal
Resource Allocation of Water Basins," Ph. D. Dissertation,
The University of Arizona, Tucson.
Beeson, R. M. and W. S. Meisel. 1971. "The Optimization of Complex
Systems with Respect to Multiple Criteria," in Proc., Systems,
Man and Cybernetics Conference, Inst. of Electronics and
Electrical Engineers, Anaheim, California.
Bellman, R. 1957. Dynamic Programming, Princeton University Press,
Princeton, N. J.
Bellman, R. and L. A. Zadeh. 1970. "Decision Making in a Fuzzy Environment," Management Science, Vol. 17, No. 4, pp. B141-B164.
Benayoun, R., J. de Montgolfier, J. Tergny and O. Laritchev. 1971.
"Linear Programming with Multiple Objective Functions: Step
Method (Stem)," Math. Progr., 1(3), pp. 366-375.
Berelson, B. and G. A. Steiner. 1964. Human Behavior: An Inventory
of Scientific Findings, Harcourt, Brace and World, Inc., New
York.
Blackwell, D. H. and M. A. Girshick. 1954. Theory of Games and
Statistical Decisions, John Wiley, New York.
170
171
Brinck, F., M. M. Fogel and L. Duckstein. 1976. "Optimal Livestock
Production of Rehabilitated Mine Lands," Proceedings, Sixth
Annual Joint Meeting of AWRA and Arizona Academy of Sciences,
Tucson, Arizona, pp. 277-284.
Charnes, A. and W. W. Cooper. 1961. "Deterministic Equivalents for
Optimizing and Satisficing Under Chance Constraints," Oper.
Res., 11(1), pp. 18-39.
Cluff, C. B., G. R. Dutt, P. R. Ogden, and J. L. Stroehlein. 1971.
"Development of Economic Water Harvest Systems for Increasing
Water Supply, Project Completion Report, OWRR Project No.
B-205-Ariz., September.
Cohon, J. L. 1972. "Multiple Objective Screening of Water Resource
Investment Alternatives," M. S. Thesis, Dept. of Civil
Engineering, Massachusetts Institute of Technology, Cambridge.
Cohon, J. L. and D. H. Marks. 1975. "A Review and Evaluation of
Multi-objective Programming Techniques," Water Resour. Res.,
10(2), pp. 208-220.
Dantzig, G. B. 1963. Linear Programming and Extensions, Princeton
University Press, Princeton, N. J.
Dantzig, G. B. and A. Madansky. 1961. "On the Solution of Two-Stage
Linear Programs Under Uncertainty," Proc. Fourth Berkeley
Symposium on Mathematical Statistics and Probability, Vol. I,
University of California Press, Berkeley.
Duckstein, L. 1975. "Decision Making and Planning for River Basin
Development," Proc., U. N., Interregional Seminar on River
Basin Development, Budapest, September.
Duckstein, L. and C. Kisiel. 1971. "Collection Utility: A Systems
Approach to Water Pricing Utility," Proc., Intl. Symp. on
Mathematical Models in Hydrology, Warsaw, Poland, pp. 881-888.
Fishburn, P. C. 1970. Utility Theory for Decision Making, Wiley,
New York.
Fishburn, P. C. and R. L. Keeney. 1975. "Generalized Utility Independence and Some Applications," Oper. Res., Vol. 23, No. 5.
Freund, R. L. 1956. "The Introduction of Risk into a Programming
Model," Econometrica, Vol. 24, pp. 253-263.
Geoffrion, A. M. 1967a. "Solving Bicriterion Mathematical Programs,"
Oper. Res., 15(1), pp. 39-54.
172
Geoffrion, A. M. 1967b. "Stochastic Programming with Aspiration or
Fractile Criteria," Management Science, Vol. 13, pp. 672-679.
Geoffrion, A. M. 1968. "Proper Efficiency and the Theory of Vector
Maximization," J. Math. Anal. Appl., 22, pp. 618-630.
Ghellinck, G. T. and G. D. Eppen. 1967. "Linear Programming Solutions
for Separable Markovian Decision Problems," Management Science,
Vol. 13, No. 5.
Goicoechea, A. 1977. "Non-normal Deterministic Equivalents and a
Transformation in Stochastic Programming," paper presented to
the George E. Nicholson Student Paper Competition, Joint
ORSA/TIMS National Meeting in San Francisco, May 9-11.
Goicoechea, A., L. Duckstein and R. L. Bulfin. 1976. "Multi-objective
Stochastic Programming: The PROTRADE Method," Paper presented
to the ORSA/TIMS Miami Meeting, Nov. 2-5.
Goicoechea, A., L. Duckstein and M. M. Fogel. 1976a. "A Multi-objective
Approach to Managing a Southern Arizona Watershed," Proc. 1976
Meetings of the Arizona Section-American Water Resources Assoc.,
and the Hydrology Sec.-Arizona Academy of Science, April 29 - May 1, Tucson, Arizona.
Goicoechea, A., L. Duckstein, and M. M. Fogel. 1976b. "Multi-objective
Programming in Watershed Management: A Case Study of the
Charleston Watershed," Water Resources Res., 12(6), pp. 1085-1092.
Griffith, R. E. and R. A. Stewart. 1960. "A Nonlinear Programming
Technique for the Optimization of Continuous Processing Systems,"
Management Science, Vol. 7, No. 4, pp. 379-392.
Haimes, Y. Y. 1973. "Hierarchical Modeling for the Planning and
Management of a Total Regional Water Resources System," paper
presented at the Symposium on Control of Water Resources
Systems, Inst. Fed. of Automat. Contr., Haifa, Israel, Sept.
17-21.
Haimes, Y. Y. and W. A. Hall. 1974. "Multi-objectives in Water Resource Systems Analysis: The Surrogate Worth Tradeoff Method,"
Water Resour. Res., 10(4), pp. 615-624.
Haimes, Y. Y., W. A. Hall and H. T. Freedman. 1975. Multi-objective
Optimization in Water Resources Systems, Vol. 3 of Developments
in Water Science, Elsevier, New York.
Haimes, Y. Y., L. S. Lasdon and D. A. Wismer. 1971. "On the Bicriterion
Formulation of the Integrated System Identification and Systems
Optimization," IEEE Trans. Syst. Man. Cybern., 1, pp. 296-297.
173
Haimes, Y. Y. and W. S. Nainis. 1973. "Multi-objective and Dynamic
Benefit -- Cost Analysis in Water Resources Planning -- A
Hierarchical Coordination," paper presented at First World
Congress on Water Resources, Inst. Water Resources Assn.,
Chicago, Ill., pp. 24-28.
Hillier, F. S. and G. J. Lieberman. 1967. Introduction to Operations
Research, Holden-Day, Inc., California.
Hogg, R. V. and A. T. Craig. 1972. Introduction to Mathematical
Statistics, McMillan Co., London.
Howard, R. A. 1960. Dynamic Programming and Markov Processes, MIT
Press, Cambridge, Mass.
Johnsen, E. 1968. Studies in Multi-objective Decision Models, Monograph No. 1, Economic Research Center in Lund, Lund, Sweden.
Keeney, R. L. 1969. "Multidimensional Utility Functions: Theory,
Assessment and Application," Tech. Rep. 43, Oper. Res. Center,
MIT, Cambridge, Mass.
Keeney, R. L. 1973. "Concepts of Independence in Multi-attribute
Utility Theory," Multiple Criteria Decision Making, J. L.
Cochrane and M. Zaleny (eds.), Univ. of South Carolina Press,
Columbia, S. C., pp. 62-71.
Keeney, R. L. 1974. "Multiplicative Utility Functions," Oper. Res.,
22, pp. 22-34.
Keeney, R. L. and A. Sicherman. 1975. "An Interactive Computer
Program for Assessing and Analyzing Preferences Concerning
Multiple Objectives," International Inst. for Applied Systems
Analysis, RM-75-12, Laxenburg, Austria.
Kelly, J. E. 1960. "The Cutting-Plane Method for Solving Convex
Problems," J. Soc. Industr. and Applied Math., 8(4), pp. 703-712.
Koopmans, T. C. 1951. "Analysis of Production as an Efficient Combination of Activities," Activity Analysis of Production, Monogr.
13, pp. 33-97, John Wiley, New York.
Kuhn, H. W. and A. W. Tucker. 1950. "Nonlinear Programming," Proc.
Second Berkeley Symposium on Mathematical Statistics and
Probability, Univ. of Cal. Press, Berkeley, pp. 481-492.
Kynard, B. and J. Tash. 1975. "Rearing of Tilapia Zillii in Reclaimed
Ponds in the Coal Mining Area of Northern Arizona," Farm Pond
Harvest.
174
Lee, S. M. 1972. Goal Programming for Decision Analysis, Auerbach Pub.
Inc., Philadelphia, Penn.
Lee, S. M., E. Clayton and L. Moore. 1975. "New Developments in Goal
Programming," paper presented at the TIMS/ORSA Meeting,
November.
Lindgren, B. W. 1968. Statistical Theory, McMillan Co., New York.
Lingaraj, B. P. 1974. "Certainty Equivalent of a Chance Constraint if
the Random Variable is Uniformly Distributed," Comm. Statist.
Vol. 3, pp. 949-951.
Lingaraj, B. P. and H. Wolfe. 1974. "Certainty Equivalent of a Chance
Constraint if the Random Variable Follows a Gamma Distribution,"
Sandhya Ser. 36, pp. 204-208.
Loucks, D. P. 1975. "Conflict and Choice: Planning for Multiple Objectives," In Economy Wide Models and Development Planning,
C. Blitzer, P. Clark and L. Taylor (eds.), Oxford Univ. Press,
New York.
Luenberger, D. G. 1973. Introduction to Linear and Nonlinear Programming, Addison-Wesley, California.
Maass, A. M., M. Hufschmidt, R. Dorfman, H. A. Thomas, S. Marglin and
G. Fair. 1962. Design of Water Resource Systems, Harvard
University Press, Cambridge, Mass.
MacCrimmon, K. R. 1969. "Improving the System Design and Evaluation
Process by the Use of Tradeoff Information: An Application to
Northeast Corridor Planning," RAND Memorandum, EN-5877-DOT,
April.
Major, D. C. 1969. "Benefit-Cost Ratios for Projects in Multiple
Objective Investment Programs," Water Resour. Res., 5(6),
pp. 1174-1178.
Marglin, S. A. 1967. Public Investment Criteria, MIT Press, Cambridge,
Mass.
Markowitz, H. 1959. Portfolio Selection, John Wiley, New York.
Monarchi, D. E. 1972. "Interactive Algorithm for Multiple Objective
Decision Making," Tech. Report 6, Hydrology and Water Resources
Dept., The University of Arizona, Tucson.
Monarchi, D. E., C. C. Kisiel and L. Duckstein. 1973. "Interactive
Multiobjective Programming in Water Resources: A Case Study,"
Water Resources Res., 9(4), pp. 837-850.
175
Pasternak, H. and U. Passy. 1974. "Finding Global Optimum of Bicriterion Mathematical Programs," Cahiers du Centre d'Etudes de
Recherche Operationelle, 16(1).
Prekopa, A. 1966. "On the Probability Distribution of the Optimum of
a Random Linear Program," SIAM J. on Control, 4(1).
Raiffa, H. 1968. Decision Analysis, Addison-Wesley, Boston, Mass.
Raiffa, H. 1969. "Preferences for Multi-Attributed Alternatives,"
Memo. Rm-5868-DOT RC, Rand Corp., Santa Monica, Calif.
Reid, R. W. and V. Vemuri. 1971. "On the Noninferior Index Approach
to Large-Scale Multi-Criteria Systems," J. Franklin Inst.,
291(4), pp. 241-254.
Roy, A. D. 1952. "Safety First and the Holding of Assets," Econometrica, Vol. 20, pp. 431-449.
Roy, B. 1971. "Problems and Methods with Multiple Objective Functions,"
Math. Progr., 1(2), pp. 239-266.
Roy, B. 1975. "Interactions et Compromis: La Procedure du Point de
Mire," Cahiers Belges de Recherche Operationelle, forthcoming.
Schleifer, R. O. 1969. Analysis of Decisions Under Uncertainty,
McGraw-Hill, New York.
Schlaifer, R. O. 1971. Computer Programs for Elementary Decision
Analysis, Div. of Research, Harvard Business School, Boston,
Mass.
Sengupta, J. K. 1969. "Safety-first Rules Under Chance-constrained
Linear Programming," Oper. Res. 17(1), pp. 112-132.
Sengupta, J. K. and G. Tintner. 1964. "An Approach to a Stochastic
Theory of Economic Development with Applications," In Problems
of Economic Dynamics and Planning: Essays in Honor of M. Kalechi,
PWN Polish Sci. Pub., Warsaw.
Sengupta, J. K., G. Tintner and C. Millham. 1963. "On Some Theorems of
Stochastic Linear Programming with Applications," Management
Science, Vol. 10, pp. 143-159.
Sicherman, A. 1975. "An Interactive Computer Program for Quantifying
and Analyzing Preferences Concerning Multiple Objectives,
M. S. Thesis, Dept. of Systems, MIT, Cambridge, Mass.
Simon, H. A. 1953. Administrative Behavior, McMillan, New York.
176
Smith, J., M. M. Fogel and L. Duckstein. 1974. "Uncertainty in Sediment
Yield from a Semi-arid Watershed," Proc., 18th Annual Meeting,
Arizona Academy of Sciences, Flagstaff, Ariz., pp. 258-268.
Szidarovszky, Ferenc. 1976. Visiting Professor, Systems and Industrial
Engineering Dept., The University of Arizona, Tucson, personal
communication.
Thames, John L. 1977. Professor, School of Renewable Natural Resources,
The University of Arizona, Tucson, personal communication.
Tintner, G. 1955. "Stochastic Linear Programming with Applications to
Agricultural Economics," Proc. of Second Symp. on Linear
Programming, Natl. Bureau of Standards, Washington, D. C.
Vemuri, V. 1974. "Multiple-objective Optimization in Water Resource
Systems," Water Resour. Res., 10(1), pp. 44-48.
Verma, T. R. and J. L. Thames. 1975. "Rehabilitation of Land Disturbed
by Surface Mining Coal in Arizona," J. Soil and Water Cons.,
Vol. 30, pp. 129-131.
Von Neumann, J. and O. Morgenstern. 1947. Theory of Games and Economic
Behavior, 2nd Ed., Princeton Univ. Press, Princeton, N. J.
Von Neumann, J. and O. Morgenstern. 1953. Theory of Games and Economic
Behavior, 3rd Ed., Princeton University Press, Princeton, N. J.
Wagner, H. M. 1969. Principles of Operations Research, Prentice-Hall,
Englewood Cliffs, N. J.
Yu, P. L. and G. Lietman. 1974. "Nondominated Decisions and Cone
Convexity in Dynamic Multicriteria Decision Problems," J. of
Optimization Theory and Applications, 14(5), pp. 573-584.
Zadeh, L. A. 1963. "Optimality and Non-scalar-valued Performance
Criteria," IEEE Trans. Automat. Contr., AC-8(1), pp. 59-60.
Zeleny, M. 1973. "Compromise Programming," In Multiple Criteria
Decision Making, J. L. Cochrane and M. Zeleny (eds.), Univ. of
South Carolina Press, Columbia, S. C., pp. 262-301.
Zeleny, M. 1974. Linear Multi-objective Programming, Springer-Verlag,
Berlin, Heidelberg, New York.
Zeleny, M. 1975. "The Theory of the Displaced Ideal," Multiple
Criteria Decision Making, Kyoto - 1975, Springer-Verlag,
Berlin, pp. 153-206.