CORE DISCUSSION PAPER 2013/21

On implicit functions in nonsmooth analysis

Dominik DORSCH 1, Hubertus Th. JONGEN 2, Jan.-J. RÜCKMANN 3 and Vladimir SHIKHMAN 4

April 2013

Center for Operations Research and Econometrics (CORE), Université catholique de Louvain
Voie du Roman Pays, 34, B-1348 Louvain-la-Neuve, Belgium
http://www.uclouvain.be/core
Abstract

We study systems of equations, F(x) = 0, given by piecewise differentiable functions F : Rn → Rk, k ≤ n. The focus is on the representability of the solution set locally as an (n − k)-dimensional Lipschitz manifold. For that, nonsmooth versions of inverse function theorems are applied. It turns out that their applicability depends on the choice of a particular basis. To overcome this obstacle we introduce a strong full-rank assumption (SFRA) in terms of Clarke's generalized Jacobians. The SFRA claims the existence of a basis in which Clarke's inverse function theorem can be applied. Aiming at a characterization of SFRA, we also consider a full-rank assumption (FRA). The FRA ensures the full rank of all matrices from Clarke's generalized Jacobian. The article is devoted to the conjectured equivalence of SFRA and FRA. For min-type functions, we give reformulations of SFRA and FRA using orthogonal projections, basis enlargements, cross products, dual variables, as well as via exponentially many convex cones. The equivalence of SFRA and FRA is shown to be true for min-type functions in the new case k = 3.

Keywords: Clarke's inverse function theorem, strong full-rank assumption, full-rank assumption, full-rank conjecture, Lipschitz manifold.
1 Department of Mathematics – C, RWTH Aachen University, D-52056 Aachen, Germany. E-mail: [email protected]
2 Professor Emeritus, Department of Mathematics, RWTH Aachen University, D-52056 Aachen, Germany. E-mail: [email protected]
3 School of Mathematics, The University of Birmingham, U.K. E-mail: [email protected]
4 Université catholique de Louvain, CORE, B-1348 Louvain-la-Neuve, Belgium. E-mail: [email protected]
This paper presents research results of the Belgian Program on Interuniversity Poles of Attraction initiated by the Belgian
State, Prime Minister's Office, Science Policy Programming. The scientific responsibility is assumed by the author.
1 Introduction
We consider the following underdetermined system of equations

F(x) = 0,    (1)
where F : Rn −→ Rk , k ≤ n, is a PC1 -function. We briefly recall the latter
notion.
Definition 1 (PC1-function, [6, 15]) A continuous function F : Rn −→ Rk is called PC1 if for every x̄ ∈ Rn there exist an open neighborhood U of x̄ and continuously differentiable functions F^1, . . . , F^m : U −→ Rk such that

F(x) ∈ {F^1(x), . . . , F^m(x)} for all x ∈ U.

We say then that F^1, . . . , F^m are selection functions for F at x̄.
Let x̄ ∈ Rn be a solution of (1). A classical issue of analysis is to address the structure of the solution set of (1), or, more precisely, the question: is F −1 (0) an (n − k)-dimensional Lipschitz manifold (cf. [13] for a definition of a Lipschitz manifold)?
Clarke’s inverse function theorem is used to describe the local structure of
F −1 (0). For its formulation we need Clarke’s generalized Jacobian of a
locally Lipschitz function G : Rn −→ Rk at ȳ ∈ Rn , i.e.
∂G(ȳ) := conv{ lim DG(yj) | yj −→ ȳ, yj ∉ ΩG },
where ΩG ⊂ Rn denotes the set of points at which G fails to be differentiable
(see [1]). It is well-known that a PC1 -function F : Rn −→ Rk is locally Lipschitz
continuous with
∂F(x̄) = conv{ DF^i(x̄) | i ∈ Î(x̄) },    (2)

where Î(x̄) = { i | x̄ ∈ cl(int({x ∈ U | F(x) = F^i(x)})) } is the set of essentially active selection functions of F at x̄ (see [15]).
Theorem 1 (Clarke's inverse function theorem, [1]) Let G : Rn −→ Rn be Lipschitz continuous near x̄. If all matrices in ∂G(x̄) are nonsingular, then G has a locally unique Lipschitz continuous inverse G −1 around x̄.
Now, we apply Theorem 1 to show that F −1 (0) is an (n − k)-dimensional Lipschitz manifold locally around x̄. The so-called maximal rank condition becomes crucial (see [1]). Namely, for x̄ = (ȳ, z̄) ∈ Rk × Rn−k we assume

πz ∂F(ȳ, z̄) := { M ∈ Rk×k | there exists N ∈ Rk×(n−k) with [M, N] ∈ ∂F(ȳ, z̄) }

to be of maximal rank (i.e., it contains only nonsingular matrices). Then, we consider the mapping G(y, z) := (F(y, z), z) from Rn to Rn. It holds

∂G(ȳ, z̄) = { [ M N ; O I ] | [M, N] ∈ ∂F(ȳ, z̄) },

where [ M N ; O I ] denotes the (n, n) block matrix with rows (M, N) and (O, I), O being the ((n − k), k) zero matrix and I the identity matrix of order n − k.
In view of the maximal rank condition, all matrices in ∂G(ȳ, z̄) are nonsingular.
The application of Theorem 1 yields the existence of an Rk -neighborhood Y of ȳ,
an Rn−k -neighborhood Z of z̄, and a Lipschitz continuous function ζ : Z −→ Y
such that ζ(z̄) = ȳ and for every (y, z) ∈ Y × Z it holds
F (y, z) = 0 if and only if y = ζ(z).
Consequently, F −1 (0) is locally the graph of the Lipschitz continuous function
ζ(·), and hence, F −1 (0) is a Lipschitz manifold near x̄.
It is worth mentioning that the argumentation above depends on the basis decomposition of Rn. In fact, it may happen that the maximal rank condition is not satisfied w.r.t. the splitting of coordinates in the standard basis. The next example illustrates this issue.
Example 1 (IFT is not applicable in standard basis, [7]) We consider the PC1-function

F : R3 −→ R2,  (x, y, z) ↦ ( min{x, y}, min{−y + z, z} ).
Note that F −1 (0) is a 1-dimensional Lipschitz manifold (see Figure 1). Nevertheless, F −1 (0) cannot be parameterized by means of one standard coordinate
x, y or z. Hence, the maximal rank condition is violated w.r.t. any splitting of
standard coordinates.
Figure 1: Illustration of Example 1 (the sets min{x, y} = 0 and min{−y + z, z} = 0).
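This failure can be checked numerically. The sketch below is ours and not part of the paper: it samples Clarke's generalized Jacobian of F at the origin (the rows are convex combinations of the two selection gradients per component, cf. (2)) and verifies that, for every splitting into two "basic" coordinates, the projected family contains a singular matrix.

```python
import itertools

import numpy as np

# Selection gradients of F(x, y, z) = (min{x, y}, min{-y + z, z}) at the origin:
# row 1 of a selection Jacobian comes from x or y, row 2 from -y + z or z.
ROW1 = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
ROW2 = [np.array([0.0, -1.0, 1.0]), np.array([0.0, 0.0, 1.0])]

def clarke_sample(lam, mu):
    """One matrix of Clarke's generalized Jacobian (rows are convex combinations)."""
    return np.vstack([(1 - lam) * ROW1[0] + lam * ROW1[1],
                      (1 - mu) * ROW2[0] + mu * ROW2[1]])

# The maximal rank condition w.r.t. a splitting requires ALL projected 2x2
# submatrices to be nonsingular; here every splitting contains a singular one.
grid = np.linspace(0.0, 1.0, 11)
for cols in itertools.combinations(range(3), 2):
    singular = any(abs(np.linalg.det(clarke_sample(l, m)[:, cols])) < 1e-12
                   for l in grid for m in grid)
    print("coordinates", cols, "-> singular matrix found:", singular)
```

All three splittings report a singular matrix, in line with the claim that the maximal rank condition is violated w.r.t. any splitting of standard coordinates.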
Example 1 suggests that Theorem 1 may eventually have to be applied w.r.t. suitably chosen coordinates. The next definition modifies the maximal rank condition accordingly.
Definition 2 (Strong full-rank assumption, [3, 7]) Let F : Rn −→ Rk be a PC1-function as in Definition 1, F(x̄) = 0. The strong full-rank assumption (SFRA) is said to hold at x̄ if there exists a k-dimensional linear subspace E of Rn such that every matrix A ∈ ∂F(x̄) = conv{ DF^i(x̄) | i ∈ Î(x̄) } satisfies

ker A ∩ E = {0}.    (3)

Here, F^i, i ∈ Î(x̄), are the essentially active selection functions of F at x̄, and ker A denotes the kernel of the (k, n)-matrix A.
Under SFRA, it is straightforward to show that F −1 (0) is locally an (n − k)-dimensional Lipschitz manifold. For that, let V be an (n, n)-matrix whose first k columns v1, . . . , vk form a basis of E and whose last n − k columns form a basis of the orthogonal complement E⊥. We perform a coordinate transformation of Rn given by

u := V −1 x.

Due to the chain rule from [1], we obtain at ū := V −1 x̄:

∂u (F ◦ V)(ū) = ∂x F(x̄) · V.

We claim that F ◦ V(·) fulfills the maximal rank condition at ū w.r.t. the new coordinates u. Indeed, let us assume the opposite. Then, there is a matrix A ∈ ∂x F(x̄) such that

A · V = [A1, A2] with singular (k, k)-submatrix A1.

Taking a non-vanishing vector ξ ∈ Rk with A1 ξ = 0, we get

A (ξ1 v1 + · · · + ξk vk) = A · V (ξ, 0n−k)^T = A1 ξ = 0.

Hence, ξ1 v1 + · · · + ξk vk ∈ ker A ∩ E and ξ1 v1 + · · · + ξk vk ≠ 0, a contradiction to (3). Further, we use the arguments above to conclude that (F ◦ V)−1 (0) is locally an (n − k)-dimensional Lipschitz manifold. Because of that,

F −1 (0) = V [ (F ◦ V)−1 (0) ]

is also an (n − k)-dimensional Lipschitz manifold around x̄.
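For Example 1 such a subspace E can be exhibited explicitly. In the sketch below (ours; the particular choice E = span{(1, 1, 0), (0, 0, 1)} is not taken from the paper), the matrix A·V is nonsingular for every sampled A ∈ ∂F(0), so ker A ∩ E = {0} and SFRA holds at the origin.

```python
import numpy as np

# Clarke Jacobian of Example 1 at the origin: rows are
# r1(lam) = (1-lam)*(1,0,0) + lam*(0,1,0),  r2(mu) = (1-mu)*(0,-1,1) + mu*(0,0,1)
def A(lam, mu):
    return np.array([[1.0 - lam, lam, 0.0],
                     [0.0, -(1.0 - mu), 1.0]])

# Candidate subspace E = span{(1,1,0), (0,0,1)} as columns of V.
# SFRA asks ker A ∩ E = {0}, i.e. the 2x2 matrix A @ V is nonsingular
# for every A in the generalized Jacobian.
V = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

dets = [np.linalg.det(A(lam, mu) @ V)
        for lam in np.linspace(0.0, 1.0, 21)
        for mu in np.linspace(0.0, 1.0, 21)]
print(min(abs(d) for d in dets))   # bounded away from zero
```

In fact a direct computation gives A(λ, μ)·V = [[1, 0], [−(1 − μ), 1]], whose determinant is identically 1, so the check succeeds for every sample.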
It is important to note that SFRA is an existence statement. For the sake of its applicability, a characterization of SFRA purely in terms of ∂F(x̄) is needed. The following condition then becomes crucial.
Definition 3 (Full-rank assumption, [12]) Let F : Rn −→ Rk be a PC1-function as in Definition 1, F(x̄) = 0. The full-rank assumption (FRA) is said to hold at x̄ if every matrix from ∂F(x̄) = conv{ DF^i(x̄) | i ∈ Î(x̄) } has full row rank k. Here, F^i, i ∈ Î(x̄), are the essentially active selection functions of F at x̄.
The main goal of this article is the discussion and examination of the following
conjecture.
Full-rank conjecture: Let F : Rn −→ Rk be a PC1 -function as in Definition
1, F (x̄) = 0. Then, SFRA and FRA for F at x̄ are equivalent.
The full-rank conjecture has been proposed in [7] in the context of min-type functions. In [3] the same question has been raised for the more general case of locally Lipschitz functions. In fact, Definitions 2 and 3 of SFRA and FRA, respectively, do not necessarily rely on a particular description of ∂F(x̄) and, hence, can easily be generalized to locally Lipschitz continuous functions. Nevertheless, without assuming any structural properties of F (e.g., piecewise differentiability) we do not have an explicit description of the convex compact set ∂F(x̄). As mentioned in [3], the characterization of those convex compact sets of matrices which are Clarke's generalized Jacobians is an open question. For that reason, in [3] SFRA and FRA are generalized in an algebraic form, i.e. stated for arbitrary convex compact sets of (k, n)-matrices instead of Clarke's generalized Jacobians of locally Lipschitz continuous functions from Rn to Rk. The following Example 2 from [3] shows that the full-rank conjecture does not hold in this algebraic form.
Example 2 (Full-rank conjecture fails in the algebraic form, [3])
Let n = 3, k = 2 and
A = conv {A1 , A2 , A3 } ,
where
A1 = (  1    0   −0.2
        0    1    0   ),

A2 = (  0.7  0    0.8
       −0.8  0   −0.5 ),

A3 = ( −0.7  0.2  −0.6
        0.1  0.4   0.6 ).
It is shown in [3] that all matrices from A have rank 2. However, there is no
2-dimensional linear subspace E of R3 satisfying the condition ker A ∩ E = {0}
for all A ∈ A.
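With the entries of A1, A2, A3 read off as above (our reconstruction of the garbled layout), the full-rank part of the statement can be sampled numerically:

```python
import numpy as np

A1 = np.array([[ 1.0, 0.0, -0.2],
               [ 0.0, 1.0,  0.0]])
A2 = np.array([[ 0.7, 0.0,  0.8],
               [-0.8, 0.0, -0.5]])
A3 = np.array([[-0.7, 0.2, -0.6],
               [ 0.1, 0.4,  0.6]])

# Sample conv{A1, A2, A3} on a barycentric grid and record the ranks.
ranks = set()
N = 20
for i in range(N + 1):
    for j in range(N + 1 - i):
        t1, t2 = i / N, j / N
        t3 = 1.0 - t1 - t2
        ranks.add(int(np.linalg.matrix_rank(t1 * A1 + t2 * A2 + t3 * A3)))
print(ranks)   # every sampled matrix has full row rank 2
```

The sampled ranks are all 2, consistent with the claim from [3]; the nonexistence of a suitable 2-dimensional subspace E is the part that fails and is not checked here.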
From this example it is clear that the algebraic form of the conjectured equivalence of SFRA and FRA is not true. Nevertheless, we emphasize that the full-rank conjecture involves rather special convex compact sets, namely Clarke's generalized Jacobians of PC1-functions. To illustrate this fact, we show that the set A from Example 2 cannot be realized as Clarke's generalized Jacobian of a PC1-function with three selection functions.
Remark 1 Let n = 3, k = 2 and A be from Example 2. We show that there does not exist a PC1-function F : R3 −→ R2 with three selection functions such that ∂F(x̄) = A at some point x̄ ∈ R3. Indeed, let us assume the opposite. According to Definition 1, there are functions F^i : U −→ R2, continuously differentiable on an open ball U with x̄ ∈ U, such that

DF^i(x̄) = Ai for i ∈ Î(x̄) = {1, 2, 3}.

Note that rank(Ai − Aj) = 2 for i < j. Consequently, in virtue of the standard implicit function theorem, the sets

Li,j := { x ∈ U | F^i(x) = F^j(x) }

are 1-dimensional curves passing through x̄ for all i, j ∈ Î(x̄), i < j. (Here and below, we shrink the neighborhood U if needed.) Analogously,

{ x ∈ U | F^1(x) = F^2(x) = F^3(x) } = {x̄}.

Hence, the set

V := ∩_{i,j ∈ Î(x̄), i<j} (U \ Li,j)

is path-connected, and there exists a unique index l ∈ Î(x̄) with F(x) = F^l(x) for all x ∈ V. Since the sets Li,j are 1-dimensional, we have Î(x̄) = {l}, a contradiction.
Remark 1 suggests studying the full-rank conjecture for specific classes of nonsmooth functions. In this situation, the formulas for Clarke's generalized Jacobians can be used explicitly. Moreover, the discussion of the full-rank conjecture then focuses on the combinatorial structure of the particular type of nonsmoothness. This approach points towards the development of many branches of nonsmooth analysis, each of them maintaining its own type of nonsmoothness.

In this article, we carry out the discussion of the full-rank conjecture for min-type functions. Min-type functions form the class of functions with the simplest structure of nonsmoothness. Their study is also motivated by so-called mathematical programs with equilibrium constraints (MPECs). We note that the feasible set of an MPEC can be written as the solution set of an appropriate min-type function (see [10]). Section 2 is devoted to the full-rank conjecture for min-type functions. In particular, the full-rank conjecture is proven for the new case k = 3.
2 Full-rank conjecture for min-type functions
We consider the following min-type function:

F : Rn −→ Rk,  x ↦ ( min{F1,j(x), F2,j(x)} )_{j=1,...,k},    (4)

where the defining functions F1,j, F2,j, j = 1, . . . , k, are continuously differentiable. Obviously, F is a PC1-function with 2^k selection functions

F^i(x) := (F_{i1,1}(x), . . . , F_{ik,k}(x)), where i = (i1, . . . , ik) ∈ {1, 2}^k.

Let x̄ ∈ Rn be a zero of F. We define the set of active indices at x̄ as

I(x̄) := { i ∈ {1, 2}^k | F^i(x̄) = 0 }.
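The 2^k selection functions and their Jacobians can be enumerated mechanically. The sketch below is ours and uses the data of Example 1 (k = 2, n = 3), with the gradients taken at the origin:

```python
import itertools

import numpy as np

# Example 1 as a min-type function (k = 2, n = 3):
# F(x) = (min{F11, F21}, min{F12, F22}); gradients (D^T F_{1,j}, D^T F_{2,j}) at 0:
DEFS = [
    (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),    # x, y
    (np.array([0.0, -1.0, 1.0]), np.array([0.0, 0.0, 1.0])),   # -y + z, z
]

# The selection function F^i, i in {1,2}^k, has the Jacobian whose
# j-th row is the gradient of the chosen branch F_{i_j, j}.
selection_jacobians = {
    i: np.vstack([DEFS[j][i[j] - 1] for j in range(len(DEFS))])
    for i in itertools.product((1, 2), repeat=len(DEFS))
}
for i, J in selection_jacobians.items():
    print(i, J.tolist())
print(len(selection_jacobians))   # 2^k = 4
```

At the origin all 2^k index vectors are active, so I(x̄) here is the whole index set {1, 2}^k.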
Without loss of generality, we assume that

I(x̄) = {1, 2}^k,

i.e. F1,j(x̄) = F2,j(x̄) = 0 for all j = 1, . . . , k. Note that in general Î(x̄) ≠ I(x̄). Hence, the description of Clarke's generalized Jacobian ∂F(x̄) may involve the computation of the set of essentially active indices (cf. (2)). To avoid this, we assume throughout this section that the vectors

DT F1,j(x̄) − DT F2,j(x̄), j = 1, . . . , k,    (5)

are linearly independent. The latter means that the manifolds

{ x | F1,j(x) = F2,j(x) }, j = 1, . . . , k,

intersect transversally at x̄ (see [4]). Moreover, the linear independence of the vectors in (5) guarantees that

Î(x̄) = I(x̄),

and, therefore:

∂F(x̄) = conv{ DF^i(x̄) | i ∈ I(x̄) }.

After all, SFRA at x̄ reads as follows: there exists a k-dimensional linear subspace E of Rn such that every matrix A ∈ conv{ DF^i(x̄) | i ∈ {1, 2}^k } satisfies

ker A ∩ E = {0}.

FRA at x̄ says that every matrix A ∈ conv{ DF^i(x̄) | i ∈ {1, 2}^k } has full row rank k.
2.1 Equivalent reformulations of FRA and SFRA
Lemma 1 links FRA and SFRA as given above with their original definitions from [7].

Lemma 1 (FRA and SFRA via orthogonal projection)

(i) FRA holds at x̄ if and only if any k vectors

(w1, . . . , wk) ∈ ×_{j=1}^{k} conv{ DF1,j(x̄), DF2,j(x̄) }

are linearly independent.

(ii) SFRA holds at x̄ if and only if there exists a k-dimensional linear subspace E of Rn such that any k vectors

(u1, . . . , uk) ∈ ×_{j=1}^{k} PE( conv{ DF1,j(x̄), DF2,j(x̄) } )

are linearly independent. Here, PE : Rn −→ E denotes the orthogonal projection onto E.
Proof. (i) follows directly from the equality

conv{ DF^i(x̄) | i ∈ {1, 2}^k } = ×_{j=1}^{k} conv{ DF1,j(x̄), DF2,j(x̄) }.    (6)

For (ii), we set V as an (n, k)-matrix whose columns v1, . . . , vk form an orthogonal basis of E. Let A ∈ conv{ DF^i(x̄) | i ∈ {1, 2}^k }. Due to (6), the rows of A are

aj ∈ conv{ DF1,j(x̄), DF2,j(x̄) }, j = 1, . . . , k.

We obtain

A · V = ( ⟨ai, vl⟩ )_{i,l=1,...,k}.

Note that the rows of A · V represent the coordinates of PE(aj), j = 1, . . . , k, w.r.t. the basis V. Let ker A ∩ E = {0}. We assume that the vectors PE(aj), j = 1, . . . , k, are linearly dependent. Then, the matrix A · V is singular. Hence, there exists a non-zero vector λ ∈ Rk with A · V λ = 0. We get V λ ∈ ker A ∩ E and V λ ≠ 0, a contradiction. Now, let PE(aj), j = 1, . . . , k, be linearly independent. For ξ ∈ ker A ∩ E we write ξ = V λ. We obtain

A · V λ = Aξ = 0.

Since the matrix A · V is nonsingular, λ = 0 and, thus, ξ = 0. □

Next, Lemma 2 reflects the differences between FRA and SFRA expressed via basis enlargements.
Lemma 2 (FRA and SFRA via basis enlargement)

(i) FRA holds at x̄ if and only if for any

wj ∈ conv{ DT F1,j(x̄), DT F2,j(x̄) }, j = 1, . . . , k,

there exist ξ1, . . . , ξn−k ∈ Rn such that the vectors

w1, . . . , wk, ξ1, . . . , ξn−k

are linearly independent.

(ii) SFRA holds at x̄ if and only if there exist ξ1, . . . , ξn−k ∈ Rn such that for any

wj ∈ conv{ DT F1,j(x̄), DT F2,j(x̄) }, j = 1, . . . , k,

the vectors

w1, . . . , wk, ξ1, . . . , ξn−k

are linearly independent.
Proof. (i) follows immediately from the definition of linear independence. To prove (ii), we apply Lemma 1. If SFRA holds, we choose ξ1, . . . , ξn−k as a basis of E⊥. Conversely, we set E := (span{ξ1, . . . , ξn−k})⊥ in SFRA. □

For s ∈ {−1, 1}^k we set

Ks(x̄) := cone{ sj DF1,j(x̄), sj DF2,j(x̄) | j = 1, . . . , k }.
Lemma 3 (FRA and SFRA via convex cones)

(i) FRA holds at x̄ if and only if all convex cones Ks(x̄), s ∈ {−1, 1}^k, are pointed; that is, if x1 + · · · + xp = 0 with xi ∈ Ks(x̄), then xi = 0 for all i = 1, . . . , p.

(ii) SFRA holds at x̄ if and only if FRA holds at x̄ and there exist n − k linearly independent vectors ξ1, . . . , ξn−k ∈ Rn such that

span{ ξj, j = 1, . . . , n − k } ∩ ∪_{s ∈ {−1,1}^k} Ks(x̄) = {0}.

Proof. (i) follows from Lemma 1 (i). For (ii), we just apply Lemma 2 (ii). □

Now, we turn our attention to the case k = n − 1. The cross product of the vectors v1, . . . , vn−1 ∈ Rn is defined as the formal determinant

∧(v1, . . . , vn−1) := det ( v1^1    · · ·  v1^n
                            ⋮               ⋮
                            vn−1^1  · · ·  vn−1^n
                            e1      · · ·  en  ).

The latter determinant is written in coordinate form using the standard basis vectors e1, . . . , en.
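A numerical sketch of this formal determinant (ours, not from the paper): expanding along the last row yields the m-th coordinate as a signed (n − 1) × (n − 1) minor, and the resulting vector is orthogonal to v1, . . . , vn−1.

```python
import numpy as np

def cross(vectors):
    """Generalized cross product of n-1 vectors in R^n via cofactor
    expansion of the formal determinant along its last row (e_1 ... e_n)."""
    V = np.asarray(vectors, dtype=float)     # shape (n-1, n)
    n = V.shape[1]
    assert V.shape[0] == n - 1
    # The cofactor of e_m sits in row n, column m+1 (1-indexed) of the
    # formal matrix, hence the sign (-1)**((n-1) + m) for 0-indexed m.
    return np.array([(-1) ** ((n - 1) + m) * np.linalg.det(np.delete(V, m, axis=1))
                     for m in range(n)])

v = cross([[1, 0, 0], [0, 1, 0]])   # the usual e1 x e2 = e3 in R^3
print(v)

V = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [2.0, 0.0, 0.0, 1.0]])
w = cross(V)                        # n = 4: "cross product" of three vectors
print(np.allclose(w @ V.T, 0.0))    # orthogonal to all three inputs
```

Orthogonality holds because replacing the last row of the formal determinant by any vj produces a determinant with a repeated row.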
Lemma 4 expresses SFRA in terms of the Mangasarian-Fromovitz constraint qualification ([11]). It becomes clear that the full-rank conjecture is connected with a common orientation of exponentially many vectors.

Lemma 4 (FRA and SFRA via cross product)
Let k = n − 1.

(i) FRA holds at x̄ if and only if

∧(w1, . . . , wn−1) ≠ 0

for all wj ∈ conv{ DT F1,j(x̄), DT F2,j(x̄) }, j = 1, . . . , n − 1.

(ii) SFRA holds at x̄ if and only if there exists ξ ∈ Rn such that

∧( DT Fi1,1(x̄), . . . , DT Fin−1,n−1(x̄) ) · ξ > 0

for all (i1, . . . , in−1) ∈ {1, 2}^{n−1}.
Proof. (i) follows from the fact that the vectors w1, . . . , wn−1 ∈ Rn are linearly independent if and only if

∧(w1, . . . , wn−1) ≠ 0.

Then, we apply Lemma 1 (i). For (ii), let first SFRA hold. Due to Lemma 2 (ii), there exists ξ ∈ Rn such that w1, . . . , wn−1, ξ are linearly independent for all wj ∈ conv{ DT F1,j(x̄), DT F2,j(x̄) }, j = 1, . . . , n − 1. Equivalently,

∧(w1, . . . , wn−1) · ξ ≠ 0.    (7)

We assume that there exist (i1, . . . , in−1), (k1, . . . , kn−1) ∈ {1, 2}^{n−1} such that

∧( DT Fi1,1(x̄), . . . , DT Fin−1,n−1(x̄) ) · ξ > 0,
∧( DT Fk1,1(x̄), . . . , DT Fkn−1,n−1(x̄) ) · ξ < 0.

For t ∈ [0, 1] we define the vectors

wj(t) := (1 − t) DT Fij,j(x̄) + t DT Fkj,j(x̄), j = 1, . . . , n − 1.

Then, the function

f(t) := ∧( w1(t), . . . , wn−1(t) ) · ξ

is continuous on [0, 1] with f(0) > 0 and f(1) < 0. The application of the intermediate value theorem for f delivers a contradiction to (7). Overall, we showed that the numbers

∧( DT Fi1,1(x̄), . . . , DT Fin−1,n−1(x̄) ) · ξ

have the same sign for all (i1, . . . , in−1) ∈ {1, 2}^{n−1}. Taking the negative of ξ if needed, we get the assertion. Vice versa, let a vector ξ ∈ Rn satisfy

∧( DT Fi1,1(x̄), . . . , DT Fin−1,n−1(x̄) ) · ξ > 0

for all (i1, . . . , in−1) ∈ {1, 2}^{n−1}. Since the cross product is a multilinear form, we obtain for any wj ∈ conv{ DT F1,j(x̄), DT F2,j(x̄) }, j = 1, . . . , n − 1:

∧(w1, . . . , wn−1) = Σ_{(i1,...,in−1) ∈ {1,2}^{n−1}} γ(i1,...,in−1) ∧( DT Fi1,1(x̄), . . . , DT Fin−1,n−1(x̄) ).

Here, the nonnegative and not all vanishing coefficients γ(i1,...,in−1) depend on the wj's. It follows that ∧(w1, . . . , wn−1) · ξ > 0 and, hence, w1, . . . , wn−1, ξ are linearly independent, and SFRA holds due to Lemma 2 (ii). □

Finally, we describe FRA and SFRA via dual variables. This characterization from Lemma 5 will be crucial for the proof of the new case k = 3 below. We write a vector α ∈ R^{2k} as

α = (α1,j, α2,j, j = 1, . . . , k).
The set of dual variables at x̄ is given by

K(x̄) := { α ∈ R^{2k} | Σ_{j=1}^{k} ( α1,j DT F1,j(x̄) + α2,j DT F2,j(x̄) ) = 0 }.

For l ∈ N we define

G2l := { β ∈ R^{2l} | there exists j ∈ {1, . . . , l} such that β1,j · β2,j < 0 }.

Note that each β ∈ G2l has at least one pair of components β1,j, β2,j with opposite (negative and positive) signs.
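Both sets can be computed for concrete data. The sketch below is ours and uses the gradients of Example 1 (k = 2, n = 3, so dim K(x̄) = 2k − n = 1): it finds K(x̄) = span{(0, 1, 1, −1)} and checks that ±(0, 1, 1, −1) lie in G4; since membership in G4 is invariant under positive scaling, every nonzero α ∈ K(x̄) then lies in G4.

```python
import numpy as np

# Gradients D^T F_{i,j}(0) of Example 1 as columns of a 3x4 matrix,
# ordered (alpha_{1,1}, alpha_{2,1}, alpha_{1,2}, alpha_{2,2}):
G = np.array([[1.0, 0.0,  0.0, 0.0],
              [0.0, 1.0, -1.0, 0.0],
              [0.0, 0.0,  1.0, 1.0]])

# K(x̄) is the null space of G, obtained from the SVD.
_, s, Vt = np.linalg.svd(G)
null = Vt[np.sum(s > 1e-12):]       # rows spanning the null space
alpha = null[0]
alpha /= alpha[1]                    # normalize to alpha = (0, 1, 1, -1)
print(np.round(alpha, 6))

def in_G(beta):
    """Membership in G_{2l}: some pair (beta_{1,j}, beta_{2,j}) has product < 0."""
    pairs = beta.reshape(-1, 2)
    return bool((pairs[:, 0] * pairs[:, 1] < 0).any())

print(in_G(alpha), in_G(-alpha))     # both generators of K(x̄)\{0} lie in G_4
```

Here the sign pattern comes from the pair j = 2: the components α1,2 = 1 and α2,2 = −1 have product −1 < 0.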
Lemma 5 (FRA and SFRA via dual variables)
Let k = n − 1.

(i) FRA holds at x̄ if and only if K(x̄)\{0} ⊂ G2k.

(ii) SFRA holds at x̄ if and only if FRA holds at x̄ and either (A) or (B) holds:

(A) span{ DT F1,j(x̄), DT F2,j(x̄), j = 1, . . . , n − 1 } ≠ Rn,
(B) there exists β̄ ∈ R^{2k} such that β̄ + K(x̄) ⊂ G2k.
Proof. To show (i), let first K(x̄)\{0} ⊂ G2k and assume that FRA does not hold. Then, we have a non-trivial vanishing linear combination

0 = Σ_{j=1}^{k} γj wj = Σ_{j=1}^{k} ( γj λj DT F1,j(x̄) + γj (1 − λj) DT F2,j(x̄) ),

where wj ∈ conv{ DT F1,j(x̄), DT F2,j(x̄) }, γj ∈ R, λj ∈ [0, 1], j = 1, . . . , k. Hence, (γj λj, γj (1 − λj), j = 1, . . . , k) ∈ (K(x̄)\{0})\G2k, a contradiction. Vice versa, if there exists a non-zero vector α ∈ K(x̄)\G2k, we can always solve the equations

α1,j = γj λj and α2,j = γj (1 − λj), j = 1, . . . , k,

w.r.t. γj ∈ R, λj ∈ [0, 1], j = 1, . . . , k. Hence, FRA does not hold, and we are done. For (ii), we notice that in case (A) there exists a vector

ξ ∉ span{ DT F1,j(x̄), DT F2,j(x̄), j = 1, . . . , n − 1 }.

Then, SFRA holds trivially due to Lemma 2 (ii). In case (B), we define

ξ := Σ_{j=1}^{k} β̄1,j DT F1,j(x̄) + β̄2,j DT F2,j(x̄).    (8)

For SFRA, we prove that for any wj ∈ conv{ DT F1,j(x̄), DT F2,j(x̄) }, j = 1, . . . , n − 1, the vectors w1, . . . , wn−1, ξ are linearly independent. Assuming the opposite, we may write

ξ = Σ_{j=1}^{k} β1,j DT F1,j(x̄) + β2,j DT F2,j(x̄)    (9)

with β := (β1,j, β2,j, j = 1, . . . , n − 1) ∉ G2k. Subtracting (8) from (9), we obtain

α := β − β̄ ∈ K(x̄).

Hence, β̄ + α = β ∉ G2k, a contradiction to (B). The reverse direction can be
shown analogously. □

2.2 Proven cases
In this subsection we concentrate on the cases where we can prove the full-rank conjecture for min-type functions. The following Theorem 2 from [7] gives a sufficient condition for the validity of the full-rank conjecture. We briefly recapitulate its proof for the sake of completeness.
Theorem 2 (Full-rank conjecture under linear independence, [7])
Let the vectors

DT F1,j(x̄), DT F2,j(x̄), j = 1, . . . , k,

be linearly independent. Then, the full-rank conjecture for min-type functions (4) is true.
Proof. W.l.o.g., we may assume that n = 2k. We define a linear coordinate transformation L : Rn −→ Rn via

L(DT F1,j(x̄)) = e2j−1 + e2j, L(DT F2,j(x̄)) = e2j−1, j = 1, . . . , k,

where em denotes the m-th standard basis vector. For j = 1, . . . , k, it holds:

L( conv{ DT F1,j(x̄), DT F2,j(x̄) } ) = { e2j−1 + λj e2j | λj ∈ [0, 1] }.

Let the vectors ζ1, . . . , ζn−k form a basis of span{ e2j−1, j = 1, . . . , k }⊥. Note that for any vj ∈ L( conv{ DT F1,j(x̄), DT F2,j(x̄) } ), j = 1, . . . , k, the vectors v1, . . . , vk, ζ1, . . . , ζn−k are linearly independent. Setting

ξj := L−1(ζj), j = 1, . . . , n − k,

in Lemma 2 (ii), we conclude that SFRA is fulfilled at x̄. □

Remark 2 (Full-rank conjecture holds generically) We point out that the full-rank conjecture for min-type functions (4) holds on a dense and open (w.r.t. the strong (or Whitney) C1-topology, cf. [4]) subset of defining functions. In fact, we consider the well-known (e.g., [14]) condition at x̄ ∈ F −1 (0):

the vectors DT Fi,j(x̄) with Fi,j(x̄) = 0, i = 1, 2, j = 1, . . . , k, are linearly independent.    (10)

As mentioned in [14], condition (10) is fulfilled on a dense and open subset of defining functions (Fi,j, i = 1, 2, j = 1, . . . , k). Due to a slight modification of Theorem 2, condition (10) implies that the full-rank conjecture holds. The derivation of the analogous result for PC1-functions is an issue of current research.
The following Remark 3 describes the non-trivial cases for dealing with the
full-rank conjecture.
Remark 3 (Full-rank conjecture, non-trivial cases) For proving the full-rank conjecture for min-type functions (4), we may assume w.l.o.g. that

(a) k < n < 2k, and
(b) span{ DT F1,j(x̄), DT F2,j(x̄), j = 1, . . . , k } = Rn.

In fact, let FRA hold at x̄ and denote

p := dim span{ DT F1,j(x̄), DT F2,j(x̄), j = 1, . . . , k }.

We have: p ≤ n and k ≤ p ≤ 2k. Due to Lemma 2, SFRA holds at x̄ if we find n − k linearly independent vectors which constitute a basis of Rn together with any k vectors from conv{ DT F1,j(x̄), DT F2,j(x̄) }, j = 1, . . . , k. There exist n − p of those vectors due to dimensional arguments. Hence, we still need to find

(n − k) − (n − p) = p − k

such vectors. In particular, if p = k we are done. If p = 2k, we apply Theorem 2 and, hence, SFRA holds at x̄ too. In case of p < n, we restrict our considerations to the linear subspace span{ DT F1,j(x̄), DT F2,j(x̄), j = 1, . . . , k }.
The full-rank conjecture was shown in [7] for the case k = 2. The proof is mainly
based on an argument with separating hyperplanes.
Theorem 3 (Full-rank conjecture for k = 2, [7])
The full-rank conjecture for min-type functions (4) is true for k = 2.
Now, we turn our attention to the new proven case k = 3. The key idea of
the proof is based on the description of FRA and SFRA via dual variables (see
Lemma 5). We shall use the following lemmas.
Lemma 6 Let span{ DT F1,j(x̄), DT F2,j(x̄), j = 1, . . . , k } = Rn and let ξ ∈ Rn. The following conditions (i) and (ii) are equivalent:

(i) for any wj ∈ conv{ DT F1,j(x̄), DT F2,j(x̄) }, j = 1, . . . , k, the vectors

w1, . . . , wk, ξ

are linearly independent;

(ii) ξ = Σ_{j=1}^{k} β̄1,j DT F1,j(x̄) + β̄2,j DT F2,j(x̄) with β̄ ∈ R^{2k} fulfilling

β̄ + K(x̄) ⊂ G2k.

Proof. The assertion follows in a straightforward manner, see also the second part of the proof of Lemma 5. □
For α ∈ R^{2k} and a nonempty subset of pairs P ⊂ {1, . . . , k} we define

α(P) := (α1,j, α2,j, j ∈ P) ∈ R^{2|P|}.

Note that α(P) consists only of those pairs of components of α which correspond to P.

Lemma 7 Let P ⊂ {1, . . . , k} be a nonempty and proper subset of pairs such that

α(P) ∈ G2|P| for all α ∈ K(x̄)\{0}.

Then, there exists β̄ ∈ R^{2k} such that β̄ + K(x̄) ⊂ G2k.

Proof. We just take β̄ ∈ R^{2k} with

β̄1,j = β̄2,j := 0 for all j ∈ P

and

β̄1,m β̄2,m < 0 for some m ∈ {1, . . . , k}\P. □
Theorem 4 (Full-rank conjecture for k = 3)
The full-rank conjecture for min-type functions (4) is true for k = 3.
Proof. The nontrivial part is to show that FRA implies SFRA. Hence, throughout the proof we assume that K(x̄)\{0} ⊂ G2k (see Lemma 5). Due to Remark 3, we may assume that n = 4 or n = 5, and

span{ DT F1,j(x̄), DT F2,j(x̄), j = 1, 2, 3 } = Rn.
Case I: n = 4

Due to Lemma 2, we need to prove the existence of a vector ξ ∈ R4 such that for any wj ∈ conv{ DT F1,j(x̄), DT F2,j(x̄) }, j = 1, 2, 3, the vectors w1, w2, w3, ξ are linearly independent.

Case I.1: there exists P ⊂ {1, 2, 3}, |P| = 2, such that

dim span{ DT F1,j(x̄), DT F2,j(x̄), j ∈ P } < 4.

W.l.o.g., let the above condition be fulfilled for P = {1, 2}.

Case I.1.1: dim span{ DT F1,j(x̄), DT F2,j(x̄), j ∈ P } = 2
Note that dim K(x̄) = 2k − n = 2. Let K(x̄) = span{α^1, α^2}. Note: DT F1,3(x̄) and DT F2,3(x̄) are linearly independent. Then,

α^1_{1,3} = α^1_{2,3} = α^2_{1,3} = α^2_{2,3} = 0.

Having K(x̄)\{0} ⊂ G6 from Lemma 5, we obtain

α(P) ∈ G2|P| for all α ∈ K(x̄)\{0}.

Lemmas 6 and 7 complete the proof for this case.
Case I.1.2: dim span{ DT F1,j(x̄), DT F2,j(x̄), j ∈ P } = 3

Appropriate renumbering and a coordinate transformation allow us to assume w.l.o.g.:

DT F1,1(x̄) = e1, DT F2,1(x̄) = e2, DT F1,2(x̄) = e3, DT F1,3(x̄) = e4.

To simplify the notation, we put

a := DT F2,2(x̄), b := DT F2,3(x̄).

Setting

α^1 := (a1, a2, a3, −1, 0, 0), α^2 := (b1, b2, b3, 0, b4, −1),

we obtain K(x̄) = span{α^1, α^2}. In fact, note that a4 = 0 in Case I.1.2. Since α^1 ∈ G6 from Lemma 5, we may assume a1 > 0, a2 < 0 (the proof runs the same way for a1 < 0, a2 > 0 or a3 > 0).

Case I.1.2-a: b4 ≤ 0
Then, α(P) ∈ G2|P| for all α ∈ K(x̄)\{0}.

Case I.1.2-b: b4 > 0
Then, for S := {1, 3} it holds: α(S) ∈ G2|S| for all α ∈ K(x̄)\{0}.

In both Cases I.1.2-a and I.1.2-b, Lemmas 6 and 7 complete the proof.
Case I.2: the condition in Case I.1 is not fulfilled

Case I.2.1: there exists a nonempty and proper subset P ⊂ {1, 2, 3} such that α(P) ∈ G2|P| for all α ∈ K(x̄)\{0}.
Then, due to Lemmas 6 and 7 we are done.

Case I.2.2: the condition in Case I.2.1 is not fulfilled

For v ∈ R4 we denote

sign(DF · v) := ( sign(DF1,j · v), sign(DF2,j · v), j = 1, 2, 3 ),

where sign(r) ∈ {1, 0, −1} is the sign of r ∈ R. In what follows, we show that there exist vectors v^1, v^2, v^3, v^4 ∈ R4 such that

sign(DF · v^1) = ( 1, 1,  1,  1,  1,  1 ),
sign(DF · v^2) = ( 1, 1,  1,  1, −1, −1 ),
sign(DF · v^3) = ( 1, 1, −1, −1,  1,  1 ),
sign(DF · v^4) = ( 1, 1, −1, −1, −1, −1 )    (11)

and

dim span{ v^1, v^2, v^3, v^4 } ≤ 3.    (12)

Then, by choosing ξ ∈ span{ v^1, v^2, v^3, v^4 }⊥ \{0} we are done. Indeed, let us assume that

ξ = Σ_{j=1}^{3} α1,j DF1,j + α2,j DF2,j,    (13)

where α1,j α2,j ≥ 0, j = 1, 2, 3. According to the signs of α1,j, α2,j, j = 1, 2, 3 (replacing α by −α if needed, we may assume α1,1, α2,1 ≥ 0), we multiply (13) by one of the vectors v^1, v^2, v^3, v^4. Since ξ · v^i = 0, i = 1, 2, 3, 4, we obtain a contradiction.
To find the vectors v^1, v^2, v^3, v^4 as above, we first use the fact that the condition in Case I.2.1 is not fulfilled for P = {1, 2}. Hence, there exists ᾱ ∈ K(x̄)\{0} with ᾱ1,1 ᾱ2,1 ≥ 0, ᾱ1,2 ᾱ2,2 ≥ 0, ᾱ1,3 ᾱ2,3 < 0. Note that K(x̄)\{0} ⊂ G6. By taking the opposite of gradients for some pairs if needed, we have w.l.o.g.:

ᾱ1,1 ≥ 0, ᾱ2,1 ≥ 0, ᾱ1,2 ≥ 0, ᾱ2,2 ≥ 0, ᾱ1,3 > 0, ᾱ2,3 < 0.

Note that at least one of the numbers in each of the pairs (ᾱ1,1, ᾱ2,1) and (ᾱ1,2, ᾱ2,2) does not vanish. Otherwise, Case I.1 would occur. Further, let us assume that there exists α̃ ∈ K(x̄) with

α̃1,1 ≥ 0, α̃2,1 ≥ 0, α̃1,2 ≤ 0, α̃2,2 ≤ 0, α̃1,3 > 0, α̃2,3 < 0.    (14)
Again, at least one of the numbers in each of the pairs (α̃1,1, α̃2,1) and (α̃1,2, α̃2,2) does not vanish. We come to a contradiction as follows. It holds: K(x̄) = span{ᾱ, α̃}, since ᾱ, α̃ are linearly independent and dim K(x̄) = 2. Hence, every α ∈ K(x̄)\{0} can be written as a linear combination

α = α(t1, t2) := t1 ᾱ + t2 α̃ with t1, t2 ∈ R and (t1, t2) ≠ (0, 0).

We get:

for t1, t2 ≥ 0:      α1,1 ≥ 0, α2,1 ≥ 0, α1,3 > 0, α2,3 < 0,
for t1 ≥ 0, t2 ≤ 0:  α1,2 ≥ 0, α2,2 ≥ 0,
for t1 ≤ 0, t2 ≥ 0:  α1,2 ≤ 0, α2,2 ≤ 0,
for t1, t2 ≤ 0:      α1,1 ≤ 0, α2,1 ≤ 0, α1,3 < 0, α2,3 > 0.

In the two mixed cases, α1,2 α2,2 ≥ 0, and hence, α ∈ G6 yields α1,1 α2,1 < 0 or α1,3 α2,3 < 0. Overall, for P = {1, 3} the condition in Case I.2.1 is fulfilled, a contradiction.
By the same arguments, there does not exist α̃ ∈ K(x̄) with

α̃1,1 ≥ 0, α̃2,1 ≥ 0, α̃1,2 ≤ 0, α̃2,2 ≤ 0, α̃1,3 < 0, α̃2,3 > 0.    (15)

Now, we claim that there exists a vector u^1 ∈ R4 such that

sign(DF · u^1) = (1, 1, −1, −1, 0, 0).

In fact, otherwise the following system of linear (in)equalities

DF1,1 · v > 0, DF2,1 · v > 0,
DF1,2 · v < 0, DF2,2 · v < 0,
DF1,3 · v = 0, DF2,3 · v = 0
is not solvable w.r.t. v ∈ R4. Applying the standard theorem on alternatives (e.g., [5]), we get

Σ_{j=1}^{3} α̃1,j DF1,j + α̃2,j DF2,j = 0

with a non-vanishing α̃ ∈ R6 satisfying

α̃1,1 ≥ 0, α̃2,1 ≥ 0, α̃1,2 ≤ 0, α̃2,2 ≤ 0.

Moreover, α̃ ∈ K(x̄) and, therefore, α̃ ∈ G6. Hence, we have α̃1,3 α̃2,3 < 0. Overall, we obtain a contradiction to the fact that there does not exist any α̃ ∈ K(x̄) with (14) or (15).
Further, we proceed by using the fact that the condition in Case I.2.1 is not fulfilled for P = {1, 3}. Then, there exists ᾱ ∈ K(x̄)\{0} with

ᾱ1,1 ᾱ2,1 ≥ 0, ᾱ1,2 ᾱ2,2 < 0, ᾱ1,3 ᾱ2,3 ≥ 0.    (16)

Analogously as above, we obtain a vector u^2 ∈ R4 such that

sign(DF · u^2) = (  1,  1, 0, 0,  1,  1 ), or
sign(DF · u^2) = (  1,  1, 0, 0, −1, −1 ), or
sign(DF · u^2) = ( −1, −1, 0, 0, −1, −1 ), or
sign(DF · u^2) = ( −1, −1, 0, 0,  1,  1 ).    (17)
These four cases correspond to the signs of ᾱ1,j, ᾱ2,j, j ∈ {1, 3}, from (16):

ᾱ1,1 ≥ 0, ᾱ2,1 ≥ 0, ᾱ1,3 ≤ 0, ᾱ2,3 ≤ 0, or
ᾱ1,1 ≥ 0, ᾱ2,1 ≥ 0, ᾱ1,3 ≥ 0, ᾱ2,3 ≥ 0, or
ᾱ1,1 ≤ 0, ᾱ2,1 ≤ 0, ᾱ1,3 ≥ 0, ᾱ2,3 ≥ 0, or
ᾱ1,1 ≤ 0, ᾱ2,1 ≤ 0, ᾱ1,3 ≤ 0, ᾱ2,3 ≤ 0.
Case I.2.2-a: sign(DF · u^2) = (1, 1, 0, 0, 1, 1)

We obtain for sufficiently small ε1, ε2 > 0:

sign(DF · (u^2 − ε1 u^1)) = ( 1, 1,  1,  1,  1,  1 ),
sign(DF · (u^1 + u^2))    = ( 1, 1, −1, −1,  1,  1 ),
sign(DF · (u^1 − ε2 u^2)) = ( 1, 1, −1, −1, −1, −1 ).

Moreover, there exists v^2 ∈ R4 with

sign(DF · v^2) = (1, 1, 1, 1, −1, −1),

by an analogous application of the theorem on alternatives as before. Setting

v^1 := u^2 − ε1 u^1, v^3 := u^1 + u^2, v^4 := u^1 − ε2 u^2,

it holds

dim span{ v^1, v^2, v^3, v^4 } ≤ 3.

Hence, the construction of v^1, v^2, v^3, v^4 with (11) and (12) is accomplished.
Case I.2.2-b: sign(DF · u^2) = (1, 1, 0, 0, −1, −1)
For sufficiently small ε_1, ε_2 > 0 we obtain

    sign(DF · (u^2 − ε_1 u^1)) = ( 1, 1,  1,  1, −1, −1 ),
    sign(DF · (u^1 − ε_2 u^2)) = ( 1, 1, −1, −1,  1,  1 ),
    sign(DF · (u^1 + u^2))     = ( 1, 1, −1, −1, −1, −1 ).

Moreover, again by an analogous application of the theorem on alternatives,
there exists v^1 ∈ R^4 with

    sign(DF · v^1) = ( 1, 1, 1, 1, 1, 1 ).

Setting

    v^2 := u^2 − ε_1 u^1,  v^3 := u^1 − ε_2 u^2,  v^4 := u^1 + u^2,

we have again

    dim span{v^1, v^2, v^3, v^4} ≤ 3.

Hence, the construction of v^1, v^2, v^3, v^4 with (11) and (12) is accomplished.
In the remaining cases in (17) we proceed analogously, taking −u^2 instead of u^2.
Case II: n = 5
Due to Lemma 2, we need to prove the existence of vectors ξ_1, ξ_2 ∈ R^5 such
that for any w_j ∈ conv{D^T F_{1,j}(x̄), D^T F_{2,j}(x̄)}, j = 1, . . . , 3, the
vectors w_1, w_2, w_3, ξ_1, ξ_2 are linearly independent.
First, we claim the existence of a vector ξ_1 ∈ R^5 such that for any
w_j ∈ conv{D^T F_{1,j}(x̄), D^T F_{2,j}(x̄)}, j = 1, . . . , 3, the vectors
w_1, w_2, w_3, ξ_1 are linearly independent. In fact, dim K(x̄) = 2k − n = 1,
and we set K(x̄) = span{ᾱ}. Due to Lemma 5, ᾱ ∈ G_6. W.l.o.g., ᾱ_{1,1} < 0 and
ᾱ_{2,1} > 0. Then, for the subset P = {1} we get

    α(P) ∈ G_{2|P|} for all α ∈ K(x̄).

Lemmas 6 and 7 provide the vector ξ_1 ∈ R^5 as needed.
Further, we construct the second vector ξ_2 ∈ R^5. For that, let M be a
(4, 5)-matrix whose rows span the linear subspace {ξ_1}^⊥. Note that the null
space of the matrix M^T M is span{ξ_1}, and the dimension of its range is 4.
We consider the vectors

    M^T M D^T F_{1,j},  M^T M D^T F_{2,j},  j = 1, 2, 3.          (18)
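The properties of M^T M used here can be verified numerically: with the rows of M chosen as an orthonormal basis of {ξ_1}^⊥, the matrix M^T M is the orthogonal projector onto {ξ_1}^⊥. The vector ξ_1 below is an arbitrary illustrative choice, not the one constructed in the proof.

```python
import numpy as np

# Sample xi1 in R^5 (illustrative); build a (4,5)-matrix M whose rows
# form an orthonormal basis of {xi1}^perp, as in the construction above.
xi1 = np.array([1.0, 2.0, -1.0, 0.5, 3.0])

# Complete xi1 to an orthonormal basis of R^5 via QR; the first column
# of Q is parallel to xi1, the remaining four columns span {xi1}^perp.
Q, _ = np.linalg.qr(np.column_stack([xi1, np.eye(5)[:, :4]]))
M = Q[:, 1:].T                        # (4,5)-matrix, rows orthonormal

P = M.T @ M                           # orthogonal projector onto {xi1}^perp
assert np.allclose(M @ xi1, 0.0)      # rows of M are orthogonal to xi1
assert np.allclose(P @ xi1, 0.0)      # null space of M^T M contains span{xi1}
assert np.linalg.matrix_rank(P) == 4  # the range of M^T M has dimension 4
assert np.allclose(P @ P, P)          # P is idempotent, i.e. a projector
print("null(M^T M) = span{xi1}, dim range(M^T M) = 4")
```

These are exactly the two facts used below: applying M^T M annihilates the ξ_1-component of a linear combination while acting injectively on {ξ_1}^⊥.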
Let us assume that there exists a nontrivial α ∉ G_6 such that

    Σ_{j=1}^{3} α_{1,j} M^T M D^T F_{1,j} + α_{2,j} M^T M D^T F_{2,j} = 0.

Then,

    M^T M ( Σ_{j=1}^{3} α_{1,j} D^T F_{1,j} + α_{2,j} D^T F_{2,j} ) = 0.

Hence, Σ_{j=1}^{3} α_{1,j} D^T F_{1,j} + α_{2,j} D^T F_{2,j} ∈ span{ξ_1}, a
contradiction to the choice of ξ_1. Now, we may apply Case I on the
4-dimensional range of M^T M for the vectors in (18). We get a vector
ξ_2 ∈ R^5 such that for any

    v_j ∈ conv{M^T M D^T F_{1,j}, M^T M D^T F_{2,j}},  j = 1, . . . , 3,

the vectors v_1, v_2, v_3, M^T M ξ_2 are linearly independent.
Finally, we prove that the vectors ξ_1, ξ_2 do the job. In fact, let us
consider a nontrivial linear combination

    Σ_{j=1}^{3} ᾱ_{1,j} D^T F_{1,j} + ᾱ_{2,j} D^T F_{2,j} + λ̄_1 ξ_1 + λ̄_2 ξ_2 = 0,          (19)

where ᾱ ∉ G_6 and λ̄_1, λ̄_2 ∈ R. Applying M^T M to (19) and having in mind
that M^T M ξ_1 = 0, we get

    Σ_{j=1}^{3} ᾱ_{1,j} M^T M D^T F_{1,j} + ᾱ_{2,j} M^T M D^T F_{2,j} + λ̄_2 M^T M ξ_2 = 0.

Thus, due to the choice of ξ_2, we obtain ᾱ = 0 and λ̄_2 = 0. Substituting into
(19), we get λ̄_1 ξ_1 = 0 and, since ξ_1 ≠ 0, also λ̄_1 = 0, a contradiction to
the nontriviality of the linear combination.
We close with a remark on the application of Kummer's inverse function
theorem (see [8, 9]).
Remark 4 (On Kummer's inverse function theorem) It has been shown in [9]
that the condition in Clarke's inverse function theorem (see Theorem 1) is not
necessary for the Lipschitzian invertibility of a mapping. In [9], a necessary
and sufficient condition for Lipschitzian invertibility was established by
using so-called Thibault derivatives (see also [17, 18]). The latter result is
known as Kummer's inverse function theorem. It turns out that for min-type
functions SFRA holds if and only if Kummer's implicit function theorem can be
applied w.r.t. some basis decomposition of R^n (see [16]). Thus, the remaining
difficulty concerning the topological structure of the solution sets of
min-type functions lies in the full-rank conjecture rather than in the
application of different inverse function theorems. Note that the situation
may be different for general PC^1-functions: here, the applicability of
Kummer's inverse function theorem for PC^1-functions from R^n to R^k can be
addressed, which is a matter of future research. For other versions of inverse
function theorems in the nonsmooth setting we refer to [2].
Acknowledgement
The authors would like to thank Aram V. Arutyunov, Harald Günzel and Bernd
Kummer for valuable discussions.
References
[1] F. Clarke, Optimization and Nonsmooth Analysis, Wiley, New York, 1983.
[2] A. L. Dontchev, R. T. Rockafellar, Implicit Functions and Solution Mappings, Monographs in Mathematics, Springer, New York, 2009.
[3] A. F. Izmailov, On the existence problem of a nondegenerate subspace for a convex compact family of epimorphisms, Theoretical and Practical Problems of Nonlinear Analysis, Moscow VZ RAN, pp. 34-49, 2010 (in Russian).
[4] H. Th. Jongen, P. Jonker, F. Twilt, Nonlinear Optimization in Finite Dimensions, Kluwer Academic Publishers, Dordrecht, 2000.
[5] H. Th. Jongen, K. Meer, E. Triesch, Optimization Theory, Kluwer Academic Publishers, Dordrecht, 2004.
[6] H. Th. Jongen, D. Pallaschke, On linearization and continuous selections of functions, Optimization, Vol. 19, pp. 343-353, 1988.
[7] H. Th. Jongen, J.-J. Rückmann, V. Shikhman, On stability of the MPCC feasible set, SIAM Journal on Optimization, Vol. 20, No. 3, pp. 1171-1184, 2009.
[8] D. Klatte, B. Kummer, Nonsmooth Equations in Optimization: Regularity, Calculus, Methods and Applications, Nonconvex Optimization and Its Applications, Kluwer, Dordrecht, 2002.
[9] B. Kummer, Lipschitzian inverse functions, directional derivatives and applications in C^{1,1} optimization, Journal of Optimization Theory and Applications, Vol. 70, pp. 559-580, 1991.
[10] Z. Q. Luo, J. S. Pang, D. Ralph, Mathematical Programs with Equilibrium Constraints, Cambridge University Press, Cambridge, 1996.
[11] O. L. Mangasarian, S. Fromovitz, The Fritz John necessary optimality conditions in the presence of equality and inequality constraints, Journal of Mathematical Analysis and Applications, Vol. 17, pp. 37-47, 1967.
[12] B. H. Pourciau, Analysis and optimization of Lipschitz continuous mappings, Journal of Optimization Theory and Applications, Vol. 22, No. 3, pp. 311-351, 1977.
[13] R. T. Rockafellar, Maximal monotone relations and the second derivatives of nonsmooth functions, Annales de l'Institut Henri Poincaré, Analyse Non Linéaire, Vol. 2, pp. 167-184, 1985.
[14] H. Scheel, S. Scholtes, Mathematical programs with complementarity constraints: stationarity, optimality, and sensitivity, Mathematics of Operations Research, Vol. 25, No. 1, pp. 1-22, 2000.
[15] S. Scholtes, Introduction to Piecewise Differentiable Equations, SpringerBriefs in Optimization, Springer, New York, 2012.
[16] V. Shikhman, Topological Aspects of Nonsmooth Optimization, Nonconvex Optimization and Its Applications, Springer, New York, 2012.
[17] L. Thibault, Subdifferentials of compactly Lipschitz vector-valued functions, Annali di Matematica Pura ed Applicata, Vol. 4, pp. 151-192, 1980.
[18] L. Thibault, On generalized differentials and subdifferentials of Lipschitz vector-valued functions, Nonlinear Analysis: Theory, Methods & Applications, Vol. 6, pp. 1037-1053, 1982.