To cite this article: Zhangyou Chen & Xiaoqi Yang (2015) On global quadratic growth condition for min-max optimization problems with quadratic functions, Applicable Analysis: An International Journal, 94:1, 144–152, DOI: 10.1080/00036811.2014.908286. Published online: 25 Apr 2014.
Applicable Analysis, 2015
Vol. 94, No. 1, 144–152, http://dx.doi.org/10.1080/00036811.2014.908286
On global quadratic growth condition for min-max optimization
problems with quadratic functions
Zhangyou Chen∗ and Xiaoqi Yang
Department of Applied Mathematics, The Hong Kong Polytechnic University, Kowloon, Hong Kong
Communicated by J.-C. Yao
(Received 4 December 2013; accepted 22 March 2014)
The second-order sufficient condition and the quadratic growth condition play important roles both in sensitivity and stability analysis and in numerical analysis for optimization problems. In this article, we concentrate on the global quadratic growth condition and study its relations with global second-order sufficient conditions for min-max optimization problems with quadratic functions. In general, the global second-order sufficient condition implies the global quadratic growth condition. In the case of two quadratic functions involved, the two conditions are equivalent.
Keywords: mathematical programming; global second-order sufficient
condition; quadratic growth condition; min-max problems with quadratic
functions
AMS Subject Classifications: 90C30; 90C46; 90C47
1. Introduction
Second-order sufficient conditions for optimization problems [1–3] are important in sensitivity analysis and in numerical algorithms, for example in studying the continuity and/or differentiability properties of solution sets and value functions and the convergence rates of algorithms; see, e.g. [2,4–7].
The standard second-order sufficient conditions imply that the optimization problem
has a unique solution or a finite set of isolated solutions. When the optimization problem
has nonisolated solutions, different devices were introduced to deal with it, see, e.g. [7–12].
As can be seen, what is needed and efficient is the following quadratic growth condition:
\[ f(x) \ge c + \alpha \operatorname{dist}^2(x, S) \quad \text{for all } x \text{ near } S, \]
where f is the objective function, S is a set on which f has constant value c, and α is a positive parameter.
In the case where S is a singleton, the standard second-order sufficient conditions are
equivalent to the quadratic growth condition in the presence of the Mangasarian–Fromovitz
constraint qualification, see Robinson [13] and Alt [14].
∗ Corresponding author. Email: [email protected]
© 2014 Taylor & Francis
Bonnans and Ioffe [9] studied the relationship between the general second-order sufficient condition and the quadratic growth condition for the unconstrained optimization
of a simple composite function (maximum of a finite collection of smooth functions) and
derived some sufficient conditions for the quadratic growth condition.
In this article, we concentrate on the study of global second-order optimality conditions
for optimization problems. There has not been much work on global conditions for general
optimization problems, except for problems with special structures. For quadratic problems,
second-order optimality conditions for global solutions are obtained for some problems
with specific structures. For example, Gay [15] and Sorensen [16] characterized the global solution of trust region subproblems; Moré [17] studied quadratic problems with one quadratic constraint and obtained necessary and sufficient optimality conditions for the
global solution. For quadratic problems with a two-sided constraint, Stern and Wolkowicz
[18] obtained optimality conditions for the global solution. For all above three special cases
of quadratic problems, there is no gap between the necessary and sufficient optimality
conditions for the global solution. However, for quadratic problems with two quadratic
constraints, Peng and Yuan [19] showed that the Hessian of the Lagrangian has at most one
negative eigenvalue at the global solution. For a convex composite optimization problem,
Yang [20] proposed second-order sufficient conditions for global solutions by introducing a generalized representation condition, which is satisfied by quadratic functions and linear fractional functions.
In this article, we will give some characterizations of the global quadratic growth
condition and the global general second-order sufficient condition for the min-max problem
with quadratic functions. Following the scheme proposed by Bonnans and Ioffe [9], we will
define the global quadratic growth condition and the global general second-order sufficient
condition for the problem and then study the relationship between them. The obtained results
differ when the number of functions is different.
2. The global quadratic growth and global second-order conditions
Consider the problem of the following min-max form:
\[ \min_{x \in \mathbb{R}^n} f(x) := \max_{1 \le i \le m} f_i(x), \tag{P} \]
where f_i : ℝ^n → ℝ, i = 1, …, m, are quadratic functions, that is, f_i(x) = x^T Q_i x + 2q_i^T x + b_i for some q_i ∈ ℝ^n, b_i ∈ ℝ and Q_i a symmetric real matrix. In the case of q_i = 0 and b_i = 0, f_i is referred to as a quadratic form.
Following [9], the index set I(x) := {i : 1 ≤ i ≤ m, f_i(x) = f(x)} denotes the set of active indices of f at x. The function
\[ L(\lambda, x) := \sum_{i=1}^m \lambda_i f_i(x) \tag{1} \]
is defined as the Lagrangian of f. The set
\[ S^m := \Big\{ \lambda \in \mathbb{R}^m : \lambda \ge 0,\ \sum_{i=1}^m \lambda_i = 1 \Big\} \tag{2} \]
denotes the standard simplex of ℝ^m. The set
\[ \Lambda(x) := \Big\{ \lambda \in S^m : \lambda_i = 0 \text{ if } i \notin I(x);\ \sum_{i=1}^m \lambda_i \nabla f_i(x) = 0 \Big\} \tag{3} \]
is the set of Lagrange multipliers of f at x, and
\[ \Lambda_\delta(x) := \Big\{ \lambda \in S^m : \lambda_i = 0 \text{ if } i \notin I(x);\ \Big\| \sum_{i=1}^m \lambda_i \nabla f_i(x) \Big\| \le \delta \Big\} \tag{4} \]
is the set of Lagrange δ-multipliers. Denote a positive semi-definite (resp., positive definite) matrix Q by Q ⪰ 0 (resp., Q ≻ 0). Given a set X ⊂ ℝ^n, the distance function is defined by dist(x, X) = inf_{y∈X} ‖x − y‖, where the norm is the Euclidean norm, and the contingent cone to X at a point x ∈ X is defined by T_X(x) := limsup_{t↓0} (X − x)/t.
Throughout this article, we assume that f (x) is a constant c0 on the set S, which is
usually assumed to be the global solution set of problem (P).
Definition 2.1 [9] A mapping π from a neighborhood U of S onto S is called a regular projection onto S if there exists ε > 0 such that
\[ \varepsilon \| x - \pi(x) \| \le \operatorname{dist}(x, S) \quad \text{for all } x \in U. \tag{5} \]
Definition 2.2 [9]
(i) We say f satisfies the quadratic growth condition (QGC) with respect to S if there exist β > 0 and a neighborhood U of S such that
\[ f(x) \ge c_0 + \beta \operatorname{dist}^2(x, S) \quad \text{for all } x \in U. \tag{6} \]
(ii) We say that f satisfies the global QGC with respect to S if inequality (6) holds for all x ∈ ℝ^n.
Definition 2.3 [9]
(i) We say f satisfies the general second-order sufficient condition (GSO) with respect to S if for any δ > 0 there exist a neighborhood U of S, a regular projection π : U → S and α > 0 such that, for all x ∈ U\S,
\[ \max_{\lambda \in \Lambda_\delta(\pi(x))} \Big[ L_x(\lambda, \pi(x)) h + \tfrac{1}{2} L_{xx}(\lambda, \pi(x))(h, h) \Big] \ge \alpha \| h \|^2, \tag{7} \]
where h = x − π(x).
(ii) We say that f satisfies the global GSO with respect to S if U = ℝ^n.
Note that both the global QGC and the global GSO imply that the set S is the global solution set of problem (P).
Definition 2.4 [9] Let C and D be sets of ℝ^n and x ∈ C ∩ D. We say that C and D are nontangent at x if
\[ T_C(x) \cap T_D(x) = \{0\}. \]
Definition 2.5 [9] We say that f satisfies the tangency condition (TC) on D ⊂ ℝ^n if for any x ∈ D and i ∈ I(x), one of the following statements holds:
(a) i ∈ I(y) for all y ∈ D sufficiently close to x;
(b) D and {y : f_i(y) = f_i(x) = c_0} are nontangent at x.
First we consider the case of m = 1, that is, problem (P) is of the form
\[ \min_{x \in \mathbb{R}^n} \; x^T Q x + 2 q^T x. \]
The second-order necessary and sufficient optimality conditions for x to be a global solution of (P) are
\[ Qx + q = 0, \quad Q \succeq 0. \]
Thus, the solution set S = {x | Qx + q = 0}. We have Λ(x) = Λ_δ(x) = {1} and
\[ L_x(\lambda, \pi(x)) h + \tfrac{1}{2} L_{xx}(\lambda, \pi(x))(h, h) = 2 h^T (Q \pi(x) + q) + h^T Q h, \quad h = x - \pi(x). \]
If either the global QGC or the global GSO holds on S, then S is the optimal solution set.
Proposition 2.6 Let m = 1. If S is the optimal solution set, then the global QGC with respect to S holds.

Proof Let the rank of Q be r. The Takagi factorization of Q is Q = PΛP^T, where P is orthonormal and Λ is diagonal with the first r elements being the positive eigenvalues λ_1, …, λ_r of Q; see Horn and Johnson [21, Corollary 4.4.4]. Under the necessary conditions and the transformation y = P^T x,
\[ f(x) := x^T Q x + 2 q^T x = \sum_{i=1}^r \big( \lambda_i y_i^2 + 2 \beta_i y_i \big), \tag{8} \]
where q^T P = (β_1, …, β_r, 0, …, 0). The solution set S is the affine space P S̃, where
\[ \tilde{S} := \{ y^0 + (0, \dots, 0, s_{r+1}, \dots, s_n)^T \mid y^0 = (-\beta_1/\lambda_1, \dots, -\beta_r/\lambda_r, 0, \dots, 0)^T,\ s_i \in \mathbb{R},\ i = r+1, \dots, n \}. \]
From (8), for any x_0 ∈ S (so that y^0 = P^T x_0 ∈ S̃), it is easy to see that
\[ f(x) = f(x_0) + \sum_{i=1}^r \lambda_i (y_i - y_i^0)^2 \ge f(x_0) + \min_i \lambda_i \, \operatorname{dist}^2(y, \tilde{S}). \]
Note that
\[ \operatorname{dist}(x, S) = \operatorname{dist}(y, P^T S) = \operatorname{dist}(y, \tilde{S}). \]
Then the global QGC holds:
\[ f(x) \ge f(x_0) + \min_i \lambda_i \, \operatorname{dist}^2(x, S) \quad \text{for all } x \in \mathbb{R}^n. \]
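The growth constant min_i λ_i in the proof can be illustrated numerically. The following Python sketch uses randomly generated data of our own choosing (a rank-2 matrix Q with eigenvalues 2, 1, 0 and a q in the range of Q — none of it from the article) and checks the global QGC on random sample points:

```python
# Illustrative check of Proposition 2.6 with randomly generated data:
# for singular Q >= 0 and q in range(Q), f(x) = x^T Q x + 2 q^T x
# satisfies f(x) >= f(x0) + (min_i lambda_i) * dist(x, S)^2 globally.
import numpy as np

rng = np.random.default_rng(0)

# Q = P diag(2, 1, 0) P^T: rank r = 2, positive eigenvalues (2, 1)
P, _ = np.linalg.qr(rng.standard_normal((3, 3)))
lam = np.array([2.0, 1.0, 0.0])
Q = P @ np.diag(lam) @ P.T

x_star = rng.standard_normal(3)
q = -Q @ x_star                  # ensures S = {x : Qx + q = 0} is nonempty

def f(x):
    return x @ Q @ x + 2 * q @ x

U = P[:, :2]                     # orthonormal basis of range(Q)

def dist_S(x):
    # S = x_star + null(Q): the distance is the range(Q)-component of x - x_star
    return np.linalg.norm(U.T @ (x - x_star))

alpha = 1.0                      # min_i lambda_i over the positive eigenvalues
for _ in range(1000):
    x = 10 * rng.standard_normal(3)
    assert f(x) >= f(x_star) + alpha * dist_S(x) ** 2 - 1e-8
print("global QGC holds on all sampled points")
```

The flatness of f along null(Q) is what forces the distance in the bound to be measured only in range(Q).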
Theorem 2.7 Let m = 1. Then the global QGC is equivalent to the global GSO.
Proof First, we show that the global QGC implies the global GSO. Suppose that the QGC holds globally w.r.t. S, i.e. there exist c_0 ∈ ℝ and β > 0 such that for all x ∈ ℝ^n,
\[ f(x) \ge c_0 + \beta \operatorname{dist}^2(x, S). \]
Let π be the projection mapping onto S. Then for any x ∈ ℝ^n\S,
\[ 2 h^T (Q \pi(x) + q) + h^T Q h = (x - \pi(x))^T Q (x - \pi(x)) = f(x) - f(\pi(x)) \ge \beta \| x - \pi(x) \|^2. \]
Second, we show that the global GSO implies the global QGC. Suppose that the global GSO holds, i.e. for any given δ > 0, there are a regular projection π : ℝ^n → S and α > 0 such that for all x ∈ ℝ^n\S,
\[ 2 h^T (Q \pi(x) + q) + h^T Q h \ge \alpha \| h \|^2, \]
where h = x − π(x). Then for any x ∈ ℝ^n\S, letting x_0 = π(x),
\[ f(x) = f(x_0) + 2 (x - x_0)^T (Q x_0 + q) + (x - x_0)^T Q (x - x_0) \ge c_0 + \alpha \| x - x_0 \|^2 \ge c_0 + \alpha \operatorname{dist}^2(x, S). \]
In the case of m ≥ 2, when the local GSO condition holds w.r.t. a set S and S is the global solution set of the quadratic problem (P), the global QGC may not hold. The following is a simple counterexample.

Example 2.8 For f(x) = max{x, −x}, x = 0 is the global solution of problem (P) and inequality (7) holds for any |x| ≤ 1, i.e. the local GSO w.r.t. S = {0} holds for f and, by Theorem 1 of [9], so does the local QGC. If the global QGC held, that is, |x| ≥ Cx² for some positive constant C and all x, we would have |x| ≤ C⁻¹ for all x, which is impossible. It follows that the global QGC does not hold.
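Example 2.8 can be replayed numerically; the constant C = 0.5 and the sample grid below are arbitrary illustrative choices:

```python
# Example 2.8 numerically: f(x) = |x| has local quadratic growth near
# S = {0} but fails any global quadratic growth bound.
f = abs

# local QGC: |x| >= x^2 on [-1, 1]
xs = [i / 100 for i in range(-100, 101)]
assert all(f(x) >= x * x for x in xs)

# global QGC fails: for any C > 0, |x| < C * x^2 as soon as |x| > 1 / C
C = 0.5
x = 2 / C + 1.0                  # any point with |x| > 1 / C works
assert f(x) < C * x * x
print("local QGC holds on [-1, 1]; global bound with C =", C, "fails at x =", x)
```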
However, under the global GSO, we have the following proposition.

Proposition 2.9 Let f(x) = max_{1≤i≤m} f_i(x) with f_i, i = 1, …, m, quadratic functions, and let S be the solution set of problem (P). Then the global GSO with respect to S implies the global QGC with respect to S.

Proof If the global GSO holds, then for any δ > 0 there exist a regular projection π : ℝ^n → S and α > 0 such that, for all x ∈ ℝ^n\S,
\[ \max_{\lambda \in \Lambda_\delta(\pi(x))} \Big[ L_x(\lambda, \pi(x)) h + \tfrac{1}{2} L_{xx}(\lambda, \pi(x))(h, h) \Big] \ge \alpha \| h \|^2, \]
where h = x − π(x). Then, for any x ∈ ℝ^n,
\begin{align*}
f(x) - c_0 &= f(\pi(x) + h) - f(\pi(x)) \\
&\ge \max_{\lambda \in \Lambda_\delta(\pi(x))} \big[ L(\lambda, \pi(x) + h) - L(\lambda, \pi(x)) \big] \\
&= \max_{\lambda \in \Lambda_\delta(\pi(x))} \Big[ L_x(\lambda, \pi(x)) h + \tfrac{1}{2} L_{xx}(\lambda, \pi(x))(h, h) \Big] \\
&\ge \alpha \| h \|^2 = \alpha \operatorname{dist}^2(x, S).
\end{align*}
Then the global QGC holds.
Bonnans and Ioffe [9] showed that the QGC and the TC imply the GSO. We want to know whether the global QGC and the TC imply the global GSO or not.

Example 2.10 Let f(x) = max{f_1(x), f_2(x), f_3(x)} with f_1(x) = −x² + 2x, f_2(x) = −x² − 2x and f_3(x) = 2x² − 1. We have that x = 0 is the global solution and f(x) ≥ x², that is to say, the global QGC holds. Since the optimal solution set is a singleton, the TC holds trivially. However, the global GSO does not hold, since the related Hessian is negative definite. In fact, for any x ∈ ℝ, π(x) = 0 and
\[ \max_{\lambda \in \Lambda_\delta(\pi(x))} \Big[ L_x(\lambda, \pi(x)) h + \tfrac{1}{2} L_{xx}(\lambda, \pi(x))(h, h) \Big] = \max_{\lambda \in \Lambda_\delta(0)} \big\{ (4\lambda_1 - 2)x - x^2 \big\}, \]
with Λ_δ(0) = S³ if δ ≥ 2 and Λ_δ(0) = {λ ∈ S³ : λ_1 ≥ 1 − δ/2, λ_3 = 0} if δ < 2. Thus, the global GSO cannot hold.
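Both claims of Example 2.10 can be checked numerically. In the sketch below the multiplier maximization is taken over the whole interval λ₁ ∈ [0, 1] on a grid (an assumption corresponding to δ large enough that the full multiplier range is available, with λ₃ = 0); the three quadratics are exactly those of the example:

```python
# Example 2.10 numerically: f(x) = max{-x^2+2x, -x^2-2x, 2x^2-1}
# satisfies f(x) >= x^2 everywhere, yet the GSO quantity
# max over lambda of (4*l1 - 2)*x - x^2 is negative for |x| > 2.
def f(x):
    return max(-x * x + 2 * x, -x * x - 2 * x, 2 * x * x - 1)

xs = [i / 50 for i in range(-500, 501)]
assert all(f(x) >= x * x - 1e-12 for x in xs)   # global QGC with beta = 1

def gso_value(x, grid=1000):
    # maximize (4*l1 - 2)*x - x^2 over l1 in [0, 1] (l2 = 1 - l1, l3 = 0)
    return max((4 * (k / grid) - 2) * x - x * x for k in range(grid + 1))

assert gso_value(3.0) < 0        # 2*3 - 9 = -3 < 0: no global GSO constant
print("QGC holds on the grid; GSO expression at x = 3 equals", gso_value(3.0))
```

Far from the origin the quadratic term −x² dominates every available linear term, which is the numerical face of the "negative definite Hessian" obstruction.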
Now we consider the case of m = 2 where the global solution set is a singleton. First, we review a result by Moré [17].

Lemma 2.11 Consider the following problem
\[ \min \{ f_0(x) : f_1(x) \le 0 \}, \]
where f_0, f_1 : ℝ^n → ℝ are quadratic functions. Assume that min{f_1(x) : x ∈ ℝ^n} < 0 and ∇²f_1 ≠ 0. A vector x* is a global solution of the problem if and only if there is a λ ≥ 0 such that
\[ f_1(x^*) \le 0, \quad \lambda f_1(x^*) = 0, \quad \nabla f_0(x^*) + \lambda \nabla f_1(x^*) = 0, \quad \nabla^2 f_0(x^*) + \lambda \nabla^2 f_1(x^*) \succeq 0. \]
Applying this Lemma, we may obtain the following result.
Proposition 2.12 Assume that max{f_0(x), f_1(x)} ≥ c + α‖x − x_0‖² for all x ∈ ℝ^n, where f_i(x) = x^T Q_i x + 2q_i^T x + c_i, i = 0, 1, and α is a positive constant. Then the global GSO holds.

Proof It suffices to prove that there are multipliers λ_0 ≥ 0, λ_1 ≥ 0 such that λ_0 + λ_1 = 1, λ_0∇f_0(x_0) + λ_1∇f_1(x_0) = 0 and λ_0 Q_0 + λ_1 Q_1 ≻ 0. To avoid the trivial case, we may assume that f_0(x_0) = f_1(x_0) = c; otherwise, at least one of the functions is strictly convex. Note that
\[ \max\{ f_0(x), f_1(x) \} \ge c + \alpha \| x - x_0 \|^2 \iff f_0(x) + \max\{ 0, f_1(x) - f_0(x) \} \ge c + \alpha \| x - x_0 \|^2. \]
Then (x_0, 0) ∈ ℝ^n × ℝ is the global solution of the problem
\[ \min f(x, y) \quad \text{subject to } (x, y) \in \mathbb{R}^n \times \mathbb{R},\ g(x, y) \le 0, \]
where f(x, y) = f_0(x) + y² − α‖x − x_0‖² and g(x, y) = f_1(x) − f_0(x) − y². It is obvious that g(x, y) satisfies the assumptions in Lemma 2.11. Hence, there is a λ ≥ 0 such that
\[ \nabla f_0(x_0) + \lambda \big( \nabla f_1(x_0) - \nabla f_0(x_0) \big) = 0, \]
\[ \begin{pmatrix} 2Q_0 - 2\alpha I & 0 \\ 0 & 2 \end{pmatrix} + \lambda \begin{pmatrix} 2Q_1 - 2Q_0 & 0 \\ 0 & -2 \end{pmatrix} \succeq 0. \]
Similarly, max{f_0(x), f_1(x)} ≥ c + α‖x − x_0‖² ⟺ f_1(x) + max{0, f_0(x) − f_1(x)} ≥ c + α‖x − x_0‖². Then there is a λ′ ≥ 0 such that
\[ \nabla f_1(x_0) + \lambda' \big( \nabla f_0(x_0) - \nabla f_1(x_0) \big) = 0, \]
\[ \begin{pmatrix} 2Q_1 - 2\alpha I & 0 \\ 0 & 2 \end{pmatrix} + \lambda' \begin{pmatrix} 2Q_0 - 2Q_1 & 0 \\ 0 & -2 \end{pmatrix} \succeq 0. \]
When λ > 0 and λ′ > 0, we have λ′∇f_0(x_0) + λ∇f_1(x_0) = 0 and λ′∇²f_0(x_0) + λ∇²f_1(x_0) ⪰ 2α(λ + λ′)I. Therefore, there are λ_0, λ_1 such that 0 ≤ λ_0, λ_1 ≤ 1, λ_0 + λ_1 = 1, λ_0∇f_0(x_0) + λ_1∇f_1(x_0) = 0 and λ_0 Q_0 + λ_1 Q_1 ⪰ αI ≻ 0. This completes the proof.
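The multipliers promised by Proposition 2.12 can be exhibited numerically on a toy instance. The matrices Q₀ = diag(2, −1) and Q₁ = diag(−1, 2) below (with q_i = 0, c = 0, x₀ = 0) are our own illustrative choice, not data from the article:

```python
import numpy as np

# hypothetical instance: neither Q0 nor Q1 is PSD, but their max grows
Q0 = np.diag([2.0, -1.0])
Q1 = np.diag([-1.0, 2.0])

# max of the two forms dominates (1/2)||x||^2, since their average is
# (1/2) x^T (Q0 + Q1) x = (1/2)||x||^2 and the max dominates the average
rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.standard_normal(2)
    assert max(x @ Q0 @ x, x @ Q1 @ x) >= 0.5 * (x @ x) - 1e-9

# grid search over the simplex for multipliers with l0*Q0 + l1*Q1 >= alpha*I
alpha = 0.5
best = max(
    (float(np.linalg.eigvalsh(l0 * Q0 + (1 - l0) * Q1)[0]), l0)
    for l0 in np.linspace(0.0, 1.0, 101)
)
assert best[0] >= alpha - 1e-9
print(f"l0 = {best[1]:.2f} yields smallest eigenvalue {best[0]:.2f}")
```

Here the convex combination with λ₀ = λ₁ = 1/2 gives the identity matrix scaled by 1/2, i.e. exactly the positive definite combination the proposition guarantees.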
As a special case, we consider the following problem with quadratic forms:
\[ \min_{x \in \mathbb{R}^n} f(x) = \max_{1 \le i \le m} x^T Q_i x. \tag{P1} \]
Proposition 2.13 Assume that problem (P1) is bounded from below and the solution set S is bounded. Then S = {0} and both the global QGC and the global GSO hold.

Proof The optimal value is 0; otherwise, if there exists an x ∈ ℝ^n such that f(x) < 0, then by the homogeneity of order 2 of f, f is unbounded from below. Thus, x = 0 is a global solution of (P1), and it is the only one: if not, some x ≠ 0 is also a solution, and then the whole line {λx | λ ∈ ℝ} belongs to the solution set, which contradicts the assumption that S is bounded.

Next, we claim that f satisfies the global QGC, that is, there is an α > 0 such that
\[ f(x) \ge \alpha \operatorname{dist}^2(x, S) = \alpha \| x \|^2 \quad \text{for all } x \in \mathbb{R}^n. \tag{9} \]
Otherwise, there exists a sequence (x_n)_{n=1}^∞ such that f(x_n) < (1/n)‖x_n‖². Letting y_n = x_n/‖x_n‖ and taking a subsequence if necessary, we have f(y_n) < 1/n and y_n → y_0. Then f(y_0) ≤ 0 and ‖y_0‖ = 1, contradicting that x = 0 is the unique solution.

Finally, we show that the global GSO for f still holds. Since S = {0}, we have π(x) = 0 for any x ∈ ℝ^n, Λ_δ(π(x)) = Λ_δ(0) = S^m, L_x(λ, π(x)) = 0, and (1/2)L_xx(λ, π(x))(h, h) = Σ_{i=1}^m λ_i h^T Q_i h, so that max_{λ∈S^m} Σ_{i=1}^m λ_i h^T Q_i h = max_{1≤i≤m} h^T Q_i h with h = x. Therefore
\[ \max_{\lambda \in \Lambda_\delta(\pi(x))} \Big[ L_x(\lambda, \pi(x)) h + \tfrac{1}{2} L_{xx}(\lambda, \pi(x))(h, h) \Big] = \max_{1 \le i \le m} h^T Q_i h \ge \alpha \| h \|^2, \]
where the last inequality follows from (9). This completes the proof.
Remark A result from Yuan [22] is as follows:
\[ \max\{ x^T Q_1 x,\ x^T Q_2 x \} \ge 0 \quad \forall x \in \mathbb{R}^n \iff \exists \lambda \in [0, 1] \text{ s.t. } \lambda Q_1 + (1 - \lambda) Q_2 \succeq 0. \]
This result is no longer true if more than two quadratic forms are considered. This can be seen from the following example from Martinez-Legaz and Seeger [23].

Example 2.14 Let f(x) = max{x_1² + 4x_1x_2 − 3x_2², x_1² − 8x_1x_2 − 3x_2², −5x_1² + 4x_1x_2 + 3x_2²}. Then f(x) ≥ 0 for all x ∈ ℝ², but for any λ = (λ_1, λ_2, λ_3) in the standard simplex, the smallest eigenvalue of λ_1Q_1 + λ_2Q_2 + λ_3Q_3 is no larger than −1.
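Example 2.14 can be verified numerically. The sketch below samples the unit circle (which suffices, by homogeneity) for nonnegativity of f, and a grid over the simplex for the eigenvalue bound:

```python
# Example 2.14 numerically (three quadratic forms of Martinez-Legaz and
# Seeger): max of the forms is nonnegative on R^2, yet every convex
# combination of Q1, Q2, Q3 has smallest eigenvalue <= -1.
import numpy as np

Q1 = np.array([[1.0, 2.0], [2.0, -3.0]])
Q2 = np.array([[1.0, -4.0], [-4.0, -3.0]])
Q3 = np.array([[-5.0, 2.0], [2.0, 3.0]])

# f >= 0 on the unit circle (hence everywhere, by 2-homogeneity)
for t in np.linspace(0, 2 * np.pi, 2000):
    x = np.array([np.cos(t), np.sin(t)])
    assert max(x @ Q @ x for Q in (Q1, Q2, Q3)) >= -1e-9

# largest-over-the-simplex value of the smallest eigenvalue
worst = -np.inf
for l1 in np.linspace(0.0, 1.0, 51):
    for l2 in np.linspace(0.0, 1.0 - l1, 51):
        l3 = 1.0 - l1 - l2
        m = np.linalg.eigvalsh(l1 * Q1 + l2 * Q2 + l3 * Q3)[0]
        worst = max(worst, m)
assert worst <= -1 + 1e-9
print("max over simplex of smallest eigenvalue:", worst)
```

Since every convex combination here has trace −2, its smallest eigenvalue is at most −1, with equality only at the barycenter (1/3, 1/3, 1/3), which is why the grid search never climbs above −1.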
Funding
This work is partially supported by the Research Grants Council of Hong Kong (PolyU 529213).
References
[1] Ben-Tal A, Zowe J. A unified theory of first and second order conditions for extremum problems in topological vector spaces. Math. Program. Stud. 1982;19:39–77.
[2] Fiacco AV, McCormick GP. Nonlinear programming: sequential unconstrained minimization
techniques. New York (NY): Wiley; 1968.
[3] Ioffe A. Necessary and sufficient conditions for a local minimum. SIAM J. Control. Optim.
1979;17:245–288.
[4] Gollan B. Perturbation theory for abstract optimization problems. J. Optim. Theory Appl.
1981;35:417–441.
[5] Gollan B. On the marginal function in nonlinear programming. Math. Oper. Res. 1984;9:208–
221.
[6] Jittorntrum K. Solution point differentiability without strict complementarity in nonlinear
programming. Math. Program. 1984;21:127–138.
[7] Shapiro A. Perturbation theory of nonlinear programs when the set of optimal solutions is not a
singleton. Appl. Math. Optim. 1988;18:215–229.
[8] Bonnans JF, Ioffe AD. Quadratic growth and stability in convex programming problems with
multiple solutions. J. Convex Anal. 1995;2:41–57.
[9] Bonnans JF, Ioffe AD. Second-order sufficiency and quadratic growth for non-isolated minima.
Math. Oper. Res. 1995;20:801–817.
[10] Ioffe AD. On sensitivity analysis of nonlinear programs in Banach spaces: the approach via
composite unconstrained optimization. SIAM J. Optim. 1994;4:1–43.
[11] Shapiro A. Perturbation analysis of optimization problems in Banach spaces. Numer. Funct.
Anal. Optim. 1992;13:97–116.
[12] Studniarski M, Ward DE. Weak sharp minima: characterizations and sufficient conditions. SIAM J. Control Optim. 1999;38:219–236.
[13] Robinson SM. Generalized equations and their solutions, part II: applications to nonlinear programming. Math. Program. Stud. 1982;19:200–221.
[14] Alt W. Stability of solutions for a class of nonlinear cone constrained optimization problems part
I: basic theory. Numer. Funct. Anal. Optim. 1989;10:1053–1064.
[15] Gay DM. Computing optimal locally constrained steps. SIAM J. Sci. Stat. Comput. 1981;2:186–
197.
[16] Sorensen DC. Newton’s method with a model trust region modification. SIAM J. Numer. Anal.
1982;19:409–426.
[17] Moré JJ. Generalizations of the trust region problem. Optim. Methods Softw. 1993;2:189–209.
[18] Stern RJ, Wolkowicz H. Indefinite trust region subproblems and nonsymmetric eigenvalue
perturbations. SIAM J. Optim. 1995;5:286–313.
[19] Peng J-M, Yuan Y-X. Optimality conditions for the minimization of a quadratic with two
quadratic constraints. SIAM J. Optim. 1997;7:579–594.
[20] Yang XQ. Second-order global optimality conditions for convex composite optimization. Math.
Program. 1998;81:327–347.
[21] Horn RA, Johnson CR. Matrix analysis, corrected reprint of the 1985 original. Cambridge:
Cambridge University Press; 1990.
[22] Yuan Y. On a subproblem of trust region algorithms for constrained optimization. Math. Program.
1990;47:53–63.
[23] Martinez-Legaz JE, Seeger A. Yuan’s alternative theorem and the maximization of the minimum
eigenvalue function. J. Optim. Theory Appl. 1994;82:159–167.