Online Appendix:
Proofs for lemmas used in Digraphs Method for Screening and the computer program

Here we give the proofs for all lemmas used in the present paper by slightly reformulating (closer to 2016 terms) our previous discussion paper All solution graphs in multidimensional screening [https://mpra.ub.uni-muenchen.de/30025/1/MPRA_paper_30025.pdf], published in Russian as Kokovin et al. (2011). The screening model studied is the same. The equation numbering/references in this Appendix remain as in the main paper Digraphs Method for Screening, i.e., equations (1)-(3) mean those in the main text.
1 Solution structures: all essential envy-graphs are rivers and vice versa
In this section we show which types of solution structures are possible and which are not. This helps in characterizing
the solutions later on.
Preorders. The rivers can be classified through preorders, i.e., complete transitive binary relations among the nodes according to some principle. Any preorder allows for the equivalence (denoted as i ∼ j within a couple) but does not allow for non-comparable nodes. In other words, a preorder is a partitioning of all nodes into some ordered classes, called here layers.

Most useful for our purpose is the longest-path ordering principle ∨. It is defined only for acyclic digraphs, based on the length d∨(i, #0) of the longest available path from each node i to the root (number of arcs). A node having a longer maximal path is considered higher in preorder ∨, the nodes with equal lengths are equivalent. Namely, the first layer L∨_1 contains all nodes connected to node #0 by no more than one arc; similarly, layer L∨_2 contains all nodes connected to #0 by no more than two arcs, and so on.

Another useful preorder is based on the shortest-path ordering principle ∧. Namely, the first layer L∧_1 contains all nodes connected to node #0 whose shortest way to #0 includes one arc; similarly, layer L∧_2 contains all nodes connected to #0 by the shortest path of two arcs, and so on.

We shall use also the preorder of net tariffs τ_i = t_i − c(x_i), which are the per-consumer profit contributions. In this preorder, a type i is higher than j when τ_i > τ_j, whereas similarly profitable bundles belong to the same layer. We shall compare this preorder with the partial (incomplete) ordering → among successors and predecessors in the digraph, where equality means belonging to a cycle, whereas some nodes can be not connected (by a directed path), and therefore incomparable.
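As an illustration, both path-based preorders can be computed mechanically for any small acyclic digraph. The Python sketch below is our added illustration (not part of the paper's program): it partitions the nodes of a hypothetical 3+0-river into layers by longest-path and by shortest-path distance to the root #0, showing that the two preorders may differ.

```python
from collections import defaultdict

def layers(arcs, nodes, longest=True):
    """Partition nodes of an acyclic in-rooted digraph into layers
    L_1, L_2, ... by the longest (preorder 'v') or the shortest
    (preorder '^') directed path from each node down to the root 0."""
    succ = defaultdict(list)
    for i, j in arcs:
        succ[i].append(j)
    agg = max if longest else min

    def dist(i):  # path length (in arcs) from node i to the root 0
        if i == 0:
            return 0
        return agg(1 + dist(j) for j in succ[i])

    out = defaultdict(set)
    for i in nodes:
        out[dist(i)].add(i)
    return dict(out)

# hypothetical 3+0-river: chain 3 -> 2 -> 1 -> 0 plus the bypass 3 -> 1
arcs = [(3, 2), (2, 1), (3, 1), (1, 0)]
print(layers(arcs, [1, 2, 3], longest=True))   # {1: {1}, 2: {2}, 3: {3}}
print(layers(arcs, [1, 2, 3], longest=False))  # {1: {1}, 2: {2, 3}}
```

Node 3 sits in the third longest-path layer but only in the second shortest-path layer, because the bypass 3 → 1 shortens its way to the root without changing its longest path.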
1.1 All envy-graphs are rivers and vice versa

To prepare our propositions, a lemma below states the most general property of the solution structures, a property guaranteed solely by quasi-linearity of utilities for both the non-relaxed (ρ = 0) and the relaxed screening.¹

Lemma 1 (in-rooted envy-graph). For any ρ-solution (x̄, t̄) its envy-graph Ḡ(x̄, t̄) is in-rooted, i.e., each node i is connected to the root (#0) by a directed path i → ... → 0. Thus, Ḡ(x̄, t̄) contains a spanning-tree.
Proof: We proceed by induction. In Ḡ(x̄, t̄), suppose the root (#0) is free of envy, i.e., no constraint of the type (i → 0) is active. The specific form of the constraints (1)-(3) shows that in this case, with quantities x̄ remaining unchanged, all tariffs (t̄_1, ..., t̄_n) could be increased simultaneously by some (same) amount without violating any constraint, because the variables t_i enter all incentive-compatibility inequalities at both sides. This contradicts profit maximization at (x̄, t̄). So, the set L∧_1 = {i | (i, 0) ∈ Ḡ(x̄, t̄)} of agent types adjacently connected to the root is non-empty: L∧_1 ≠ ∅. If the complement-set I \ L∧_1 is empty, then the lemma is proved. Otherwise, suppose that the complement-set is not connected by outgoing arcs either to #0 or to L∧_1. Then, for the same reason, it again contradicts the optimality of (x̄, t̄), so the second-high layer L∧_2 = {i | (i, j) ∈ Ḡ(x̄, t̄) : j ∈ L∧_1} is also nonempty, thereby connected indirectly to layer #0. We repeat this logic for all layers, and by induction, the in-rooted property is proved.
Now, under additional assumptions, we can ensure the acyclic property of in-rooted envy-graphs under relaxation ρ > 0. It is done using Lemma 2. Its idea and the simple version of statement (i) originate in Guesnerie and Seade (1982), being also used in several papers. We extend it for the case ρ > 0 and enforce it. In our terms, the lemma states that the preorder of profit-contributions cannot contradict the partial order of the envy-graph.²

Lemma 2 (profits order). Take any ρ-solution (x̄, t̄) under quasi-separable costs C(m, x) = f_0 + Σ_{i=1}^{n} m_i c(x_i) (f_0 ≥ 0), then: (i) the profit contribution τ_i = t_i − c(x_i) from any agent is not lower than the contribution from any of her successors in the envy-graph, i.e., i → ··· → j ⇒ τ̄_i ≥ τ̄_j; (ii) under ρ > 0 this inequality is strict: i → ··· → j ⇒ τ̄_i > τ̄_j, and for the adjacent couples i → j it takes the particular form τ̄_i ≥ τ̄_j + ρ. Under relaxation ρ > 0, bunching among predecessors and successors (i.e., x_i = x_j, i → ··· → j) is excluded, as well as other dicycles.

¹ Previous somewhat similar statements that we know are Lemmas 1 and 2 in Rochet and Stole (2003), and Proposition 1 in Andersson (2005). Our version distinguishes between binding and active constraints, considers quite general valuations and ρ-relaxation. It is possible to further generalize such a lemma onto non-quasi-linear utility functions u(x, t); one should assume them continuous, strictly decreasing in t, and u(x, ∞) = −∞.
Proof. Using the separable-cost assumption, we can argue in terms of per-package profit contributions τ_i = t_i − c(x_i). Let us prove claim (ii) for the case ρ > 0. Assume that there could be a couple of adjacent agents (i → j) with the reverse order of profit contributions: τ_i ≤ τ_j. Then we could increase the objective function for the amount m_i ρ > 0 by replacing the envier's package (x̄_i, τ̄_i) by the envied package (x̄_j, τ̄_j), i.e., by assigning a new package (x̃_i, τ̃_i) := (x̄_j, τ̄_j + ρ). This new menu ((x̄_1, τ̄_1), ..., (x̃_i, τ̃_i), ..., (x̄_n, τ̄_n)) remains incentive-compatible because no new quantity-tariff packages arise, and the only affected agent i is indifferent between her old and new packages, by the assumption i → j, which means ṽ_i(x̄_i) − τ̄_i = ṽ_i(x̄_j) − τ̄_j − ρ. Besides, with a bigger net tariff (τ̃_i = τ̄_j + ρ > τ̄_i) now the new menu brings a larger profit compared to the initial one. This contradicts the optimality of the initial (x̄, τ̄) menu. Thus, i → j ⇒ τ̄_i ≥ τ̄_j + ρ. By transitivity and using chains like τ̄_i > τ̄_k > ... > τ̄_j, such an ordering conclusion follows also for non-adjacent successive agents i → ... → j. The same logic proves the case (i) with ρ = 0, only the inequality proved is not strict: τ̄_i ≥ τ̄_k ≥ ... ≥ τ̄_j.³
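The package swap used in this proof can be checked numerically. The Python sketch below is our added illustration with hypothetical two-type data (masses, valuations and quantities are assumed numbers, not from the paper): it builds a ρ-IC menu in which the adjacent envier of the constraint 1 → 2 earns a smaller net tariff, performs the replacement (x̃_1, τ̃_1) := (x̄_2, τ̄_2 + ρ), and verifies that incentive compatibility survives while profit rises by at least m_1 ρ.

```python
# hypothetical 2-type illustration of the package swap in Lemma 2's proof
RHO = 0.1
M = {1: 2.0, 2: 1.0}                      # masses m_i (assumed numbers)
V = {1: lambda x: 1.0 * x - 0.5 * x**2,   # net valuations v_i (assumed)
     2: lambda x: 1.2 * x - 0.5 * x**2}

def is_ic(menu, rho=RHO, eps=1e-9):
    """rho-relaxed incentive compatibility of a menu {i: (x_i, tau_i)};
    eps is a numerical tolerance for binding constraints."""
    return all(V[i](menu[i][0]) - menu[i][1]
               >= V[i](menu[j][0]) - menu[j][1] - rho - eps
               for i in menu for j in menu if i != j)

def profit(menu):
    return sum(M[i] * t for i, (x, t) in menu.items())

# a menu where the constraint 1 -> 2 is exactly binding yet tau_1 < tau_2,
# i.e., the profit ordering of Lemma 2 is violated
x1, x2, tau2 = 0.4, 0.8, 0.2
tau1 = V[1](x1) - V[1](x2) + tau2 + RHO   # makes 1 -> 2 exactly binding
menu = {1: (x1, tau1), 2: (x2, tau2)}
assert is_ic(menu) and tau1 < tau2

# the swap from the proof: agent 1 receives the envied package at tau_2 + rho
better = {1: (x2, tau2 + RHO), 2: (x2, tau2)}
assert is_ic(better)
assert profit(better) - profit(menu) >= M[1] * RHO
print(round(profit(better) - profit(menu), 3))  # profit gain of the swap
```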
The above two lemmas imply the following claim on acyclic solution structures.

Lemma 3 (all envy-graphs are rivers).⁴ For any ρ-solution (x̄, t̄) to a screening problem with quasi-separable costs and relaxation ρ > 0, its envy-graph Ḡ(x̄, t̄) is a river.

Proof. The rooted (namely, in-rooted) property of the digraph Ḡ(x̄, t̄) follows from Lemma 1. As to dicycles, by Lemma 2, under relaxation ρ > 0, profit contributions τ̄ are well coordinated with the partial order of the A-graph Ḡ, i.e., predecessors always bring strictly higher profit contributions τ̄_i than their successors. Therefore any (bunched or not) dicycle i → j → ··· → i in Ḡ would mean a cycled order τ̄_i > τ̄_j > ... > τ̄_i among real numbers τ̄_i ∈ R, which is impossible.

By this lemma, bunching (x_i = x_j) among predecessors and successors (essential bunching) is excluded; coincidence remains possible only for packages coinciding accidentally. Unlike usual bunching, the accidental bunching can be ignored because it has no impact on characterizing solutions.
Speaking methodologically, we have obtained here restrictions on the endogenously emerging structure of the solutions, in contrast with the exogenous structure (a unique path) imposed by SCC in many papers.
1.2 All rivers can be envy-graphs

We call an n+0-river any river with n nodes plus the root. Let us show that each n+0-river can be a solution graph for some screening problem where n agent types are served. First we enumerate all n+0-rivers (the enumeration idea being explained by Fig. 1).
Lemma 4 (number of rivers).⁵ Consider all labelled rivers having n non-root nodes {1, ..., n} and the root (sink) with label #0. The number r_0(n) of such rivers can be found recursively as⁶

r_0(n) := Σ_{k=1}^{n} 2^{n−k} a_{n,k},  where  a_{n,k} := Σ_{m=1}^{n−k} (2^k − 1)^m 2^{k(n−m−k)} (n k) a_{n−k,m},  a_{j,j} = 1,

and (n k) denotes the number of all k-element subsets of {1, 2, ..., n}, the first elements of the sequence being r_0(1) = 1, r_0(2) = 5, r_0(3) = 79, r_0(4) = 2865, r_0(5) = 254111.

² Other versions of claim (i), in Brito et al. (1990) and Andersson (2005), are proved under stronger restrictions on costs and valuations than here, and consider only adjacent agents, without using graph notions. An extension of such a claim onto non-separable concave cost functions can be found in Kokovin et al. (2010).
³ Moreover, to discuss non-optimal plans, observe that the above proof simplifies a given plan through unifying some agents, i.e., deleting a package from the menu and endowing its owner with another package already existing in this menu. Then it becomes obvious that regarding any ρ-feasible plan we can also state that it is either acyclic or can be simplified in this way into a weakly more profitable ρ-feasible plan satisfying the natural profit ordering and thereby acyclic.
⁴ Reducibility of cycles in the A-graph of the main problem (with more restrictions on v_i, C than here) was proven in Guesnerie and Seade (1982) through the same simple Lemma 2, and is repeated in subsequent papers.
⁵ In Appendix 2 one can find also the enumeration of all labelled trees with root #0, in particular T_1 = 1, T_2 = 3, T_3 = 16, T_4 = 125, T_5 = 1296.
⁶ The sign := here means assigning a value to a symbol.
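The first values can be cross-checked independently of the recursion by exhaustive enumeration, which is feasible for small n. The Python sketch below is our added illustration (not part of the paper's program): it counts all labelled n+0-rivers directly as acyclic digraphs in which every node reaches the root, and reproduces r_0(1) = 1, r_0(2) = 5, r_0(3) = 79.

```python
from itertools import product

def count_rivers(n):
    """Brute-force count of labelled n+0-rivers: acyclic digraphs on
    non-root nodes {1,...,n} plus the root #0 (which has no outgoing
    arcs), in which every node reaches #0 by a directed path."""
    nodes = list(range(1, n + 1))
    cand = [(i, j) for i in nodes for j in [0] + nodes if j != i]
    count = 0
    for bits in product([0, 1], repeat=len(cand)):
        arcs = [a for a, b in zip(cand, bits) if b]
        succ = {i: [j for (a, j) in arcs if a == i] for i in nodes}

        def reach(i):
            # set of nodes strictly reachable from i by directed paths
            seen, stack = set(), list(succ[i])
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack.extend(succ.get(v, []))
            return seen

        sets = {i: reach(i) for i in nodes}
        acyclic = all(i not in sets[i] for i in nodes)
        rooted = all(0 in sets[i] for i in nodes)
        count += acyclic and rooted
    return count

print([count_rivers(n) for n in (1, 2, 3)])  # [1, 5, 79]
```

Already for n = 4 the brute force must scan 2^16 arc subsets, so the recursive formula of the lemma is the practical tool.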
Proof: For proving this, we should first know the number of all in-rooted directed trees having n non-root nodes labelled 1, 2, ..., n and a root (sink) #0. We derive it from the following known lemma about non-directed labelled trees.

Lemma C [Cayley (1889)].⁷ The number C_n of labelled trees of order n (with n nodes) is equal to C_n = n^{n−2}, the first numbers of this sequence being C_1 = 1, C_2 = 1, C_3 = 3, C_4 = 16, C_5 = 125, ...

Corollary. The number T_n of labelled in-rooted trees with the sink labelled #0 and n non-root nodes 1, 2, ..., n is equal to T_n = C_{n+1} = (n + 1)^{n−1}, the first members of this sequence being T_1 = 1, T_2 = 3, T_3 = 16, T_4 = 125, T_5 = 1296, T_6 = 16807, T_7 = 262144, ...

Proof of Corollary. Directing a non-directed tree G (i.e., making an in-rooted tree from G) amounts to appointing any node for the role of the root; this choice then determines the direction of all arcs towards it. Similarly, when labelling an unlabelled tree we can first assign any node to become the lowest number out of all labels used (#0 in our case) and then assign somehow the other labels. Any non-directed labelled n-tree has a unique counterpart among all labelled n-trees directed towards the lowest number (this number is #0 in our paper because the direction is towards #0). These classes are equivalent. Hence, our problem of counting in-trees of n nodes with labels {1, ..., n} plus the root #0 is equivalent to Cayley's problem for non-directed labelled trees of the order n + 1.
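The corollary T_n = (n + 1)^{n−1} is easy to confirm by brute force for small n: in an in-rooted tree every non-root node has exactly one outgoing arc, so it suffices to enumerate all such "parent" assignments and keep the acyclic ones. The sketch below is our added illustration, not part of the paper's program.

```python
from itertools import product

def count_in_trees(n):
    """Brute-force count of labelled in-rooted trees with root #0 and
    non-root nodes {1,...,n}: each node chooses one out-neighbour (its
    unique arc towards the root) and every node must reach #0."""
    nodes = list(range(1, n + 1))
    count = 0
    for parents in product(range(0, n + 1), repeat=n):
        if any(parents[i - 1] == i for i in nodes):
            continue  # no self-loops
        ok = True
        for i in nodes:
            # follow the unique out-arcs; must hit 0 without revisiting
            seen, j = set(), i
            while j != 0 and j not in seen:
                seen.add(j)
                j = parents[j - 1]
            ok = ok and (j == 0)
        count += ok
    return count

for n in (1, 2, 3, 4):
    assert count_in_trees(n) == (n + 1) ** (n - 1)  # T_n = (n+1)^(n-1)
print([count_in_trees(n) for n in (1, 2, 3, 4)])  # [1, 3, 16, 125]
```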
Now we can use similar ideas for counting rivers. We need to count all labelled rivers with n non-root nodes #1, #2, ..., #n plus the root #0. This class, called n+0-rivers, includes both tree-rivers and rivers with bypasses. Adding various bypasses to a tree is a good method to build all rivers with the same spanning-tree, maintaining the longest-path preorder unchanged (see Fig. 1). We use this idea to derive what we need from Robinson's lemma below. His lemma considers all acyclic digraphs, which is a broader class than rivers because the sink in acyclic digraphs need not be unique.

Denote by a_{n,k} the number of labelled acyclic digraphs with n nodes, exactly k ≤ n out of these n lacking any arcs to them (these k nodes being sources or disconnected). Denote standardly by (n k) the number of all k-element subsets of the labels set {1, 2, ..., n}.

Lemma R [Robinson (1970)].⁸ Any number a_{n,k} can be found recursively from the lower-order elements of the matrix a as

a_{n,k} = Σ_{m=1}^{n−k} (2^k − 1)^m 2^{k(n−m−k)} (n k) a_{n−k,m}.

Using the matrix a, the total number ρ_ac(n) of acyclic labelled digraphs of order n is found as ρ_ac(n) = Σ_{k=1}^{n} a_{n,k}, the first 7 numbers of this sequence being ρ_ac(·) = 1, 3, 25, 479, 22511, 2349987, 569684123.

We use Robinson's lemma and details of its proof to derive our Lemma 4, which considers a different class of graphs but gives a similar formula, except for a multiplier 2^{n−k}. To prove our Lemma 4 (number of rivers), we need to apply Robinson's lemma and prove that any magnitude r_0(n) can be found recursively from Robinson's matrix a as

r_0(n) := Σ_{k=1}^{n} 2^{n−k} a_{n,k}.   (9)
To this end, note that our task of enumerating rivers is rather similar to Robinson's enumeration of all acyclic digraphs, because any acyclic (connected or disconnected) digraph G of n nodes i = 1, ..., n can be extended by adding there a root #0. Namely, we take each node i not connected to anything (i.e., we take all sinks and disconnected nodes) and connect it to #0 by an arc from i, thereby extending the graph G downstream (see Harary and Palmer, 1973, Section 1.6, about extensions). Obviously this extension of the graph G always makes a river from the initial acyclic graph G. Moreover, all rivers constructed in this way are different, because their basic graphs were different and each was supplemented by #0 in a unique way. The class of acyclic digraphs supplemented by a root in this minimal way constitutes a family of rivers called further R_1 = R_1(n). This class is equivalent to the class A_n of all acyclic digraphs, and its number of elements is |A_n| = ρ_ac(n) by Robinson's lemma.

However, to construct the complete family of rivers called R_all(n) from the smaller family R_1, we should supplement R_1 by some other rivers. To comprehend this fact, consider a river named r_kj ∈ R_1 with k ≤ n nodes adjacent to the root and the j-th label among such a sub-class. Robinson's proof of his lemma tells us that there are a_{n,k} such rivers: j = 1, ..., a_{n,k}. (A more correct reference should mention that we apply the notation a_{n,k} to k sinks of an acyclic digraph with n nodes, instead of Robinson's k sources. But changing + for − in all directions of all graphs in a family makes no difference for their number, so we use constructions from Section 1.6 in Harary and Palmer (1973).) When this river r_kj has high (different from L∨_1) layers, they can be supplemented by some bypasses from these L∨_2, L∨_3, ... directly to the root, such bypasses being absent in R_1 by construction (see Fig. 1 for examples of extending a graph with bypasses). Therefore, enumerating all such modifications of any river r_kj ∈ R_1 amounts to 2^{n−k} combinations of Yes or No possible for all (n − k) nodes non-adjacent to the root; Yes means that the node will become adjacent to the root during the modification, No means the opposite. Thus, each initial river r_kj becomes included into a richer family constructed from r_kj through adding bypasses like i → 0. Rivers r_kj with k = n are stars (see Fig. 1); they lack high layers L∨_2, L∨_3, ..., and a star is the only member in its family: 2^{n−n} = 1. More generally, the possibility of adding to r_kj bypasses towards #0 increases the family of rivers similar to the initial r_kj by 2^{n−k} times. This multiplier combined with as many as a_{n,k} initial rivers yields exactly the formula (9) that we are proving.

⁷ See Harary and Palmer (1973), Section 1.7.
⁸ See Harary and Palmer (1973), Section 1.6.
However, to use this formula without doubt, we must be sure that when we add one or several bypasses i → 0, such an extension of a river r_kj ∈ R_1 always generates a new river, different from any river generated from some other element r_k̂i of the class R_1. When k̂ ≠ k the difference is obvious, because the longest-path preorder ∨ of a graph is not changed by any additional bypasses to #0, whereas such r_k̂i and r_kj belong to different classes, their first layers being different: |L∨_1(r_k̂i)| = k̂ ≠ k = |L∨_1(r_kj)|. More generally, it can be k̂ = k. Suppose that the same river r_ks ∈ R_all resulted from adding bypasses l → 0 to two different initial rivers r_ki ∈ R_1 and r_kj ∈ R_1. However, consider the reverse operation: take all bypasses l → 0 originating from high layers l ∈ L∨_2, L∨_3, ... and remove these bypasses from the constructed river r_ks. It is an operation with a unique outcome. Therefore, two constructed rivers could coincide only when their initial rivers did coincide: r_ki = r_kj ∈ R_1. Thus, we never arrive at the same river when extending different initial rivers from R_1 by the bypasses to #0. In other words, no duplicates arise in R_all during our extension of R_1, and the lemma is proved.
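The counting construction behind formula (9), obligatory arcs from sinks to #0 plus optional bypasses, can itself be verified mechanically for n = 3. The Python sketch below is our added illustrative check (not from the paper): it builds every labelled acyclic digraph on {1, 2, 3}, extends each by the arcs sink → 0 and by all 2^{n−k} bypass combinations, and confirms that exactly 79 = r_0(3) distinct rivers arise, i.e., no duplicates.

```python
from itertools import product

NODES = (1, 2, 3)
CAND = [(i, j) for i in NODES for j in NODES if i != j]

def reach(arcs, i):
    """Set of nodes strictly reachable from i along the arcs."""
    seen, stack = set(), [i]
    while stack:
        v = stack.pop()
        for (a, b) in arcs:
            if a == v and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

# all labelled acyclic digraphs on the non-root nodes {1,2,3}
dags = []
for bits in product([0, 1], repeat=len(CAND)):
    arcs = frozenset(a for a, b in zip(CAND, bits) if b)
    if all(i not in reach(arcs, i) for i in NODES):
        dags.append(arcs)
print(len(dags))  # 25 acyclic digraphs of order 3

# extend every DAG by the obligatory arcs sink -> 0 plus optional bypasses
rivers = set()
for g in dags:
    sinks = [i for i in NODES if not any(a == i for (a, b) in g)]
    others = [i for i in NODES if i not in sinks]
    base = g | {(i, 0) for i in sinks}
    for bits in product([0, 1], repeat=len(others)):
        byp = {(i, 0) for i, b in zip(others, bits) if b}
        rivers.add(frozenset(base | byp))
print(len(rivers))  # 79 distinct rivers: no duplicates arise
```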
One can object to Lemma 4 that so far we have got only the number r_0(n) of possible graphs for n agent types who are really served. Additionally, we could find the number of possible envy-graphs for any N-agents screening problem where some N − n agents may remain unserved, which potentially could change the graph and the number of outcomes. However, in the main screening setting 0 ∈ X is a common outside option for everybody, and this bundle (0, 0) is just assigned to all non-served types, so formally everybody is always served: n = N. If zero and non-zero quantities need not be distinguished in enumeration, then the number of possible outcomes (rivers) remains unaffected by non-serving.
The lemma below expresses all tariffs through quantities, thus reducing half of the variables, to easily optimize them by differentiation (this standard technique becomes cumbersome without SCC). We recall that non-participation is denoted as node #0 (the root of the A-graph Ḡ), and introduce the following notations for connections among the n nodes of any digraph G:

S_k(G) the set of all successors of node k;
S_k^ad(G) the set of adjacent successors of k;
s_k^1(G) the unique adjacent successor of k (when S_k^ad(G) = {s_k^1(G)});
P_k(G) ⊂ {0, 1, ..., n} the set of all predecessors of node k;
P_k^ad(G) ⊂ P_k(G) the set of adjacent predecessors of k;
M_k^{PG} := Σ_{j ∈ P_k(G) ∪ {k}} m_j the mass of all predecessors and the node itself (the sign := from now on means assigning the value to the left-hand side).

Lemma 6 (spanning-tree characterization). Let (x̄, t̄) ∈ X^n × R^n be a ρ-solution (ρ ≥ 0) to the screening program (1)-(3) with n consumers served. Then:

(i) There exists a spanning-tree G_T ⊆ Ḡ(x̄, t̄) within the rooted envy-graph such that the assignment (x̄, t̄) is also a solution to the following tree-specific optimization program (10)-(13) solved w.r.t. variables x:

max_{x ∈ X^n} π̃(x, G_T) := Σ_{i=1}^{n} m_i θ_i(x, G_T) − C(m, x)  s.t.   (10)

V_i(x_i) − θ_i(x, G_T) ≥ V_i(x_j) − θ_j(x, G_T) − ρ  ∀(i, j) ∉ G_T,   (11)

V_i(x_i) − θ_i(x, G_T) ≥ 0  ∀i,   (12)

x_0 := 0, the functions θ_i and tariffs t_i being determined as

t_k = θ_k(x, G_T) := Σ_{i ∈ S_k(G_T) ∪ {k}} [V_i(x_i) − V_i(x_{s_i^1(G_T)}) + ρ]  ∀k ≥ 1,  θ_0(.) ≡ 0.   (13)

(ii) For any spanning-tree G_T ⊆ Ḡ(x̄, t̄) resulting from a solution, all solutions to its G_T-specific program (10)-(13) are also solutions to the initial program (1)-(3).

(iii) Under quasi-separable cost, the tree-specific objective function becomes separable w.r.t. x_i, namely:

π̃(x, G_T) = Σ_{i=1}^{n} [m_i v_i(x_i) + Σ_{j ∈ P_i^ad(G)} M_j^{P G_T} · (v_i(x_i) − v_j(x_i))].   (14)

Proof.
(i) Based on Lemma 1, a (non-unique) spanning-tree can always be chosen from the A-envy-graph Ḡ(x̄, t̄) of the solution (x̄, t̄) to the initial problem. Take any such tree G_T ⊆ Ḡ(x̄, t̄). To comprehend why the new optimization program has the same maxima as the initial one, compare the optimization formulations and observe that the new problem (10)-(13) differs from the initial problem (1)-(3) only in one additional requirement: equation (13) requires that those constraints (i, j) ∈ G_T which were active (equalities) at the point (x̄, t̄) would remain equalities at all feasible points; thereby the tariffs are calculated specifically as t̄_k = T_k(x_i) := Σ_{i ∈ S_k(G_T) ∪ {k}} [V_i(x_i) − V_i(x_{s_i^1(G_T)})] (this means using the named equalities recursively). This new requirement makes the set of constraints more restrictive, though the initially-optimal point (x̄, t̄) still satisfies it. Moreover, at (x̄, t̄) the new objective function takes the same optimal value π̃(x̄, G_T) as the old function: π(x̄, t̄) = π̃(x̄, G_T). Hence in this new G_T-specific problem never an admissible plan (x̃, t̃) can be better than the initial plan (x̄, t̄), which therefore remains optimal for the new tree-specific problem (10)-(13).

(ii) Any other optimal G_T-tree-specific solution (x̃, t̃) must give the same value to the new and old objective functions π̃ and π as the initial optimal plan (x̄, t̄). Besides, (x̃, t̃) must satisfy all constraints (10)-(13) that are not weaker than (1)-(3). Therefore, such a plan (x̃, t̃) is also a solution to the initial program (1)-(3).

(iii) Under separable costs, the derivation of our expression π̃, M_k^{PG} in (14) from the initial formulae (10)-(13) is performed directly by recursively substituting tariffs along the tree.
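The recursive tariff formula (13) telescopes along the unique path from each node down to the root. A minimal Python sketch (our added illustration: a toy tree 3 → 2 → 1 → 0 and quadratic valuations with assumed coefficients echoing the A, B arrays of the program below) computes t_k = θ_k from the binding IC equalities.

```python
# assumed toy setup: 3 types, tree 3 -> 2 -> 1 -> 0, quadratic valuations
RHO = 0.0
S1 = {1: 0, 2: 1, 3: 2}        # unique adjacent successor s1_i in the tree G_T
V = {i: (lambda x, a=a, b=b: a * x - b * x**2 / 2)
     for i, (a, b) in {1: (1.0, 1.6), 2: (0.8, 2.4), 3: (1.2, 1.4)}.items()}

def theta(k, x, s1=S1, rho=RHO):
    """Tariff t_k from equation (13) in recursive form:
    t_k = t_{s1_k} + V_k(x_k) - V_k(x_{s1_k}) + rho, with t_0 = 0, x_0 = 0."""
    if k == 0:
        return 0.0
    j = s1[k]
    return theta(j, x) + V[k](x[k]) - V[k](x[j]) + rho

x = {0: 0.0, 1: 0.3, 2: 0.25, 3: 0.5}  # assumed quantities, x_0 = 0
for k in (1, 2, 3):
    print(k, round(theta(k, x), 5))
```

Each tariff equals the sum of incremental rents V_i(x_i) − V_i(x_{s1_i}) + ρ collected over the node itself and all its successors, exactly as (13) states.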
Lemma 7 below guarantees the existence of Lagrange multipliers only under positive relaxation ρ > 0. The Lagrange multipliers below are denoted as λ_{ij} for each constraint i → j (j = 0, 1, ..., n), the Lagrangian function itself is denoted as L(.). We keep the same notations as in Lemma 6 and denote gradients as ∇_z v_i(z) ≡ (v'_{i z_1}, ..., v'_{i z_l}) ∈ R^l.

Lemma 7 (FOC characterization). Assume quasiseparable costs, real quality space X = R^l and continuously differentiable net valuations v_i. Then: (i) For any ρ-solution (x̄, τ̄) to the normalized problem (4)-(6) with ρ > 0, there exists some LA-graph G_+^λ generating some Lagrange multipliers λ = (λ_{1,0}, λ_{1,2}, ..., λ_{n,n−2}, λ_{n,n−1}) ∈ R_+^{n·n} that satisfy the following first-order conditions and the supplementary inequalities:

∂L(x̄, τ̄, λ)/∂t_i = m_i − Σ_{j ∈ S_i^ad(G_+^λ)} λ_{ij} + Σ_{j ∈ P_i^ad(G_+^λ)} λ_{ji} = 0  ∀i > 0,   (15)

∇_{x_i} L(x̄, τ̄, λ) = ∇_{x_i} v_i(x̄_i) Σ_{j ∈ S_i^ad(G_+^λ)} λ_{ij} − Σ_{j ∈ P_i^ad(G_+^λ)} λ_{ji} ∇_{x_i} v_j(x̄_i) = 0  ∀i > 0,  x̄_i ∈ X,   (16)

0 = v_i(x̄_i) − τ̄_i − v_i(x̄_j) + τ̄_j + ρ  ∀(i, j) ∈ G_+^λ,   (17)

0 = v_i(x̄_i) − τ̄_i  ∀(i, 0) ∈ G_+^λ,   (18)

0 ≤ v_i(x̄_i) − τ̄_i − v_i(x̄_j) + τ̄_j + ρ  ∀(i, j) ∉ G_+^λ,   (19)

0 ≤ v_i(x̄_i) − τ̄_i  ∀(i, 0) ∉ G_+^λ,   (20)

where G_+^λ = {(ij) | λ_{ij} > 0}.   (21)

(ii) The Lagrange multipliers of the constraints successive to any i are bounded as

Σ_{j ∈ S_i^ad(G_+^λ)} λ_{ij} ≤ M_i^{P G_+^λ} := Σ_{j ∈ P(i, G_+^λ) ∪ {i}} m_j  ∀i;   (22)

moreover, when the river G_+^λ is a tree, the positive multiplier for the unique successor s_i^1 of i is found as the mass of all predecessors of s_i^1: λ_{i s_i^1(G_+^λ)} = M_i^{P G_+^λ}.
Proof. When the Lagrange multipliers exist, the FOC (15)-(19) to be proved follow straightforwardly from differentiating the Lagrangian L(x, τ, λ) of our relaxed problem (4)-(5). We have only reformulated the usual FOC in graph terms, with summation over ∀j ∈ P_i^ad(G_+^λ), ∀j ∈ S_i^ad(G_+^λ) among predecessors and successors instead of summation over the simpler equivalent index-set (∀j ≠ i). (This representation of predecessors and successors is helpful for practically finding local maxima from hypotheses about graphs G_+^λ.)

A more challenging question is the applicability of the Kuhn-Tucker theorem about the existence of the Lagrange multipliers λ for a solution (x̄, τ̄). The usual Slater constraints-regularity condition is not applicable, and a direct check of linear independence of the constraints is not helpful. Only the assumption ρ > 0 helps, because in this case Lemmas 1-3 ensure that there are no cycles, and Lemma 2 provides the strict order among net tariffs. To characterize our non-convex optimization program, we exploit a non-convex version of the Kuhn-Tucker theorem with the Mangasarian-Fromovitz constraint-qualification, namely, Theorem 4.2 in Rockafellar (1993), reformulated as the following Lemma M.

Lemma M. Take an optimization program {max_{z ∈ R^n̄} u(z); g_k(z) ≥ 0, k = 1, ..., m} with continuously differentiable functions, and a locally optimal point z̄. Assume the cone of admissible directions at z̄ is solid, i.e., there exists a direction-vector w ∈ R^n̄ with a positive scalar product with the gradient of each active inequality-constraint, so that w∇g_k(z̄) > 0 ∀k : g_k(z̄) = 0. Then there exists a dual vector λ ∈ R_+^m satisfying the first-order conditions with this optimum z̄.

To apply this lemma, we reformulate its terms in our notation for the case of one-dimensional quality and zero relaxation in the absence of cycles: z̄ = (x̄, τ̄), n̄ = 2n, m = n × n, (k) = (i, j), g_{ij}(x, τ) := v_i(x_i) − v_i(x_j) − τ_i + τ_j ≥ 0 (for broader conditions l > 1, ρ > 0 the proof is similar, only the notation would expand). In this case the solid-cone restriction on the active constraints means

∃w ∈ R^{2n}: w∇g_{ij}(x̄, τ̄) > 0  ∀(i, j) ∈ Ḡ(x̄, τ̄), where   (23)

Ḡ(x̄, τ̄) := {(i, j) : g_{ij}(x̄, τ̄) := v_i(x̄_i) − v_i(x̄_j) − τ̄_i + τ̄_j = 0}.   (24)

So, to apply Lemma M, we should build a vector w satisfying condition (23). In other words, for all sufficiently small ε > 0, the direction w would lie in the admissible cone's interior, so that (x̄, τ̄) + εw would be a strictly admissible plan (a plan involving only strict inequalities).

First, we renumber the agents in the order of their profit-contributions (net tariffs), so that τ̄_(1) < τ̄_(2) < ... < τ̄_(n) (all these numbers differ from each other because of Lemma 2). Second, we construct the needed specific vector w̄ from zeros in its x half and from the net tariffs otherwise:

w̄ = (0_n, w̄_{n+1}, w̄_{n+2}, ..., w̄_{2n}) = (0, ..., 0, −τ̄_(1), −τ̄_(2), −τ̄_(3), ..., −τ̄_(n)) ∈ R^{2n}.   (25)

Using the acyclic property of Ḡ and the ordered net tariffs, one can easily check that such a proportional decrease in all tariffs makes a feasible point (x̄, τ̄) + εw̄ strictly incentive compatible. In other words, our w̄ satisfies the regularity condition (23):

w̄∇g_{ij}(x̄, τ̄) = 0 · v̇_i(x̄_i) − 0 · v̇_i(x̄_j) − 1 · (−τ̄_i) + 1 · (−τ̄_j) > 0.   (26)

Hence, Lemma M is applicable and the Lagrange multipliers do exist.
(ii) To estimate the multipliers in the case of a tree, we can recursively find all λ_{ij}, substituting them according to our graph G_T into formula (15) of the river-specific program. When G_T is a tree, simple substitution is possible because each node has a unique successor: S_i^ad(G) = {s_i^1(G)}. Starting from all sources (top nodes without predecessors), we find their multiplier magnitude λ_{i s_i^1} = m_i. Then we exclude these sources from the graph, similarly find λ_{j s_j} for the emerging new sources, and continue in this way recursively to find the Lagrange multiplier λ_{i s_i^1} = M_i^{P G_T} for each i and the unique IC constraint leading from i downwards, as required in (22).

For the case of a river which is not a tree, we see from (15) that the sum of the succeeding multipliers λ_{ij} exceeds the sum of the preceding ones, differing exactly by the magnitude m_i:

m_i + Σ_{j ∈ P_i^ad(G_+^λ)} λ_{ji} = Σ_{j ∈ S_i^ad(G_+^λ)} λ_{ij}.

If we search for max_{λ≥0} {Σ_{j ∈ S_i^ad(G_+^λ)} λ_{ij}} among non-negative λ_{ij} satisfying all these equations, we find the needed upper bound (22).
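The recursive peeling described in part (ii) is easy to mechanize. The Python sketch below is our added illustration with a toy tree 3 → 2 → 1 → 0 and assumed masses (echoing the Mm array of the program below): it reproduces λ_{i s_i^1} = M_i^{P G_T}, the mass of i together with all its predecessors, and checks the balance (15).

```python
# toy tree 3 -> 2 -> 1 -> 0 with hypothetical masses m_i
S1 = {1: 0, 2: 1, 3: 2}         # unique successor of each node in the tree
M = {1: 5.0, 2: 0.5, 3: 2.0}    # masses m_i of the agent types (assumed)

# peel off sources recursively: lambda_{i,s1_i} = m_i plus the multipliers
# entering i, which telescopes to the mass M_i^P of i and all predecessors
lam = {}
remaining = set(S1)
while remaining:
    sources = [i for i in remaining
               if not any(S1[j] == i for j in remaining if j != i)]
    for i in sources:
        lam[(i, S1[i])] = M[i] + sum(lam[(j, i)] for j in S1 if S1[j] == i)
        remaining.discard(i)

print(lam)  # {(3, 2): 2.0, (2, 1): 2.5, (1, 0): 7.5}
```

Each node's balance (15), m_i − λ_{i s_i^1} + Σ_j λ_{ji} = 0, holds by construction, and λ_{1,0} = 7.5 equals the total mass of all types, as the bound (22) predicts for the root constraint.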
2 Computer program for screening
Here we present the text of our computer program optimizing a screening menu for 3 agent types with quadratic net valuations. The program is written in the Wolfram Mathematica language by Vladimir Nakoryakov together with Sergey Kokovin in 2011, and seriously revised by Sergey Kokovin in 2016. The authors have tested it on a moderate number of examples (about 2500); there can be bugs and incorrect work under some parameters.

For the idea and scheme of the calculations, see our paper Kokovin and Nahata (2016) and the comments within the program.
(* Wolfram Mathematica 6 program (version with given rivers-list, March 2016) for screening optimization with quadratic valuations *)
(* Hint for Mathematica-5.2: First open BeginPackage["Temporary`"]; << DiscreteMath`Combinatorica`; then,
in next Run of Proga, remove both.., then you work always in one block with Context "Temporary`" *)
ClearAll[Partns,Layers,LayMat,AdjMat,i,j,k,n, AllAdMatrs, Nskipped, lenAllPreodTrrivs, Usedgraphs, BadTrees,
niterRand, nAllRandskipped];
BeginPackage["Temporary`"]; <<Combinatorica`;
(* This program (version with given rivers-list, March 2016) for screening optimization with quadratic valuations finds solution quantities X1,X2,X3 and tariffs under any valuations, searching among various possible graphs of constraints. (It is written in a form allowing to expand the program to the case of 4 or more agent types, but the expansion is not accomplished; somewhere dimensionality 3 remains fixed in the program instead of flexible n.) It first performs the Layers-method for building all graph-rivers from all trees and enumerates them. Column #1 in the adjacency-matrix is the root (agent-node 0 in theory); nodes #2, #3, #4 (nodes 1, 2, 3 in theory) are agents. Construction of graphs works like this: from labelled nodes {1,2,...,n} of a would-be digraph, connecting the nodes, we build all possible trees one-by-one, classifying them into Families according to longest-path layers (see paper ScreeningStructures.tex). After building a tree, we enrich each tree with various bypasses to get Relatives of this tree, i.e., to build its river-family. But afterwards we should exclude the intersections generated. Namely, from each tree, we try ALL bypasses downstream, throwing away rivers with the same unique INDEX as existed previously! The preorder-vector makes the INDEX; it is a vector characterizing a river, generated from the arcs-vector like {{1,0},{2,1},{3,1},{3,2},{4,2}}. INDEX can look like {1,1,1,1} or {1,2,2,1} or bigger. Each attempt ("try") to build a river adds +1 to the lowest layer of the index. When a layer in the index becomes full we make it empty and the next layer gets +1. INDEX looks like decimal numbers, but is n-imal instead of decimal, and rightmost=biggest. I have an example with almost-equal profit on different trees: A={4,5,6}; B={1.2,2,2.1}; dd={0,0,0}; Mm={1,1,2}; A={1.0,0.8,1.2}; B={1.2,1.0,1.7}; dd={0,0,0}; Mm={1,1,2}; - example when funkc profit concave? *)
ClearAll[a,b,c,f,niterRand]; Print["Start preparing data:"];
ClearAll[Partns,Layers,LayMat,AdjMat,i,j,k,n,AllAdMatrs,Nskipped, lenAllPreodTrrivs,Usedgraphs]; Pechat=False;
Pech=False
(* False True printing main details of calculations *);
Pech11=False (* False True - shows AdjMatrix*);
Pech22=False (* False True -shows adding elements of AdjMatrix*)
Pech33=False (* False True -shows other details *)
(* three regimes of Printing certain intermediate results of optimization. List of Rivers explored: *);
RiversList={{{2,1},{3,1},{4,1}}, {{2,1},{3,2},{4,2}}, {{2,1},{3,1},
{3,2}, {4, 2}}, {{2,1},{3,2},{4,1},{4,2}}, {{2,1},{3,1},{3,2},{4,1},{4,2}}, {{3, 1},{4,1},{2,3}}, {{3,1},{4,1},{2,4}},
{{3,1},{4,1},{2,1},{2,3}}, {{3, 1},{4,1},{2,1},{2,4}}, {{3,1},{4,1},{2,3},{2,4}},{{3,1},{4,1}, {2, 1},{2,3},{2,4}},
{{2,1},{3,1},{4,2}},{{2,1},{3,1},{4,3}}, {{2,1},{3, 1},{4,1},{4,2}},
{{2,1},{3,1},{4,1},{4,3}}, {{2,1},{3,1},{4,2},{4, 3}},{{2,1},{3,1},{4,1},{4,2},{4,3}}, {{4,1},{2,4},{3,4}},
{{4,1},{2, 1},{2,4},{3,4}}, {{4,1},{2,4},{3,1},{3,4}}, {{4,1},{2,1},{2,4},{3, 1},{3,4}}, {{2,1},{4,1},{3,2}},
{{2,1},{4,1},{3,4}}, {{2,1},{4,1},{3, 1},{3,2}}, {{2,1},{4,1},{3,1},{3,4}},
{{2,1},{4,1},{3,2},{3,4}}, {{2, 1},{4,1},{3,1},{3,2},{3,4}},
{{3,1},{2,3},{4,3}}, {{3,1},{2,1},{2, 3},{4,3}},{{3,1},{2,3},{4,1},{4,3}},
{{3,1},{2,1},{2,3},{4,1},{4, 3}},{{2,1},{3,2},{4,3}}, {{2,1},{3,1},{3,2},{4,3}},
{{2,1},{3,2},{4, 1},{4,3}}, {{2,1},{3,1},{3,2},{4,1},{4,3}},
{{2,1},{3,2},{4,2},{4, 3}},
{{2,1},{3,1},{3,2},{4,2},{4,3}},
7
{{2,1},{3,2},{4,1},{4,2},{4, 3}}, {{2,1},{3,1},{3,2},{4,1},{4,2},{4,3}},
{{2,1},{4,2},{3,4}},
{{2, 1},{4,1},{4,2},{3,4}},
{{2,1},{4,2},{3,1},{3,4}}, {{2,1},{4,1},{4, 2},{3,1},{3,4}},{{2,1},{4,2},{3,2},{3,4}},
{{2,1},{4,1},{4,2},{3, 2},{3,4}}, {{2,1},{4,2},{3,1},{3,2},{3,4}},
{{2,1},{4,1},{4,2},{3, 1},{3,2},{3,4}},
{{3,1},{2,3},{4,2}}, {{3,1},{2,1},{2,3},{4,2}},
{{3, 1},{2,3},{4,1},{4,2}},
{{3,1},{2,1},{2,3},{4,1},{4,2}},
{{3,1},{2, 3},{4,3},{4,2}}, {{3,1},{2,1},{2,3},{4,3},{4,2}},{{3,1},{2,3},{4, 1},{4,3},{4,2}},
{{3,1},{2,1},{2,3},{4,1},{4,3},{4,2}},
{{3,1},{4, 3},{2,4}}, {{3,1},{4,1},{4,3},{2,4}},
{{3,1},{4,3},{2,1},{2,4}},{{3, 1}, {4,1},{4,3},{2,1},{2,4}},{{3,1},{4,3},{2,3},{2,4}},
{{3,1},{4, 1},{4,3},{2,3},{2,4}}, {{3,1},{4,3},{2,1},{2,3},{2,4}}, {{3,1},{4, 1},{4,3},{2,1},{2,3},{2,4}},
{{4,1},{2,4},{3,2}},
{{4,1},{2,1},{2, 4},{3,2}}, {{4,1},{2,4},{3,1},{3,2}},
{{4,1},{2,1},{2,4},{3,1},{3, 2}},
{{4,1},{2,4},{3,4},{3,2}},
{{4,1},{2,1},{2,4},{3,4},{3,2}}, {{4, 1},{2,4},{3,1},{3,4},{3,2}},
{{4,1},{2,1},{2,4},{3,1},{3,4},{3, 2}},
{{4,1},{3,4},{2,3}},
{{4,1},{3,1},{3,4},{2,3}},
{{4,1},{3,4},{2, 1},{2,3}},
{{4,1},{3,1},{3,4},{2,1},{2,3}},
{{4,1},{3,4},{2,4},{2, 3}},
{{4,1},{3,1},{3,4},{2,4},{2,3}},
{{4,1},{3,4},{2,1},{2,4},{2, 3}},
{{4,1},{3,1},{3,4},{2,1},{2,4},{2,3}}};
lenRivList=Length[RiversList];
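Each river above is a list of arcs {i,j} on nodes 1..n+1, with node 1 playing the root #0. The main loop further below converts such a list into an (n+1)x(n+1) adjacency matrix whose nonzero entry stores the arc's tail label. An illustrative Python sketch of that conversion (not part of the program itself):

```python
def edges_to_matrix(edges, n):
    """Build the (n+1)x(n+1) adjacency matrix of a river: entry
    [i-1][j-1] holds the tail label i of arc i -> j, 0 otherwise
    (mirroring AdjMat[[i,j]] = rivTree[[k,1]] in the listing)."""
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for i, j in edges:
        m[i - 1][j - 1] = i
    return m

# the first river of RiversList: every agent tied directly to the root
M = edges_to_matrix([(2, 1), (3, 1), (4, 1)], 3)
print(M[1][0], M[2][0], M[3][0])  # 2 3 4
```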
(* Program for solving Tree-optimization through differentiating the function Pi *)
n=3; (* 3 agent-types (I have not expanded the program for 4!), data for quadratic valuations like d+ax-bx^2 are
defined and modified by the following coefficients: *)
niterRa=800; nAllRandskipped=0; ntimesstartoptAll=0;
For[itrand=1,itrand<= niterRa,itrand++,
(ClearAll[A,B,dd,Vo,Votx,Vi,Vv,Mm, Xcurr, niterRand, x1,x2,x3,t1,t2,t3, VmatrotX, TT, ndmax1];
Print[itrand,"=itrand ********"];
A={1.0,0.8,1.2};
(* linear coefficient of V *)
B={1.6,2.4,1.4}; (* quadratic coefficient of V *)
dd={0,0,0}; (* zero value of V at zero *)
Mm={5,0.5,2 (* ,4 ,5 *)};
Mm[[1]]=0.1+Random[]+0.1; Mm[[2]]=0.1+Random[]+0.1; Mm[[3]]=0.1+Random[]+0.1;
A[[1]]=0.1+Random[]+0.1; A[[2]]=0.1+Random[]+0.1; A[[3]]=0.1+Random[]+0.1;
B[[1]]=0.1+Random[]+0.1; B[[2]]=0.1+Random[]+0.1; B[[3]]=0.1+Random[]+0.1;
Uopt=-100; (*starting value of lower bound for naive solution *) rho=-0.00001; (*relaxation parameter *)
ntimesstartopt=0; (* Vector Vi of functions (#1 means "argument 1") - 3 quadratic valuations for 3 agents is
defined as: *)
Vi=Function[{dd[[1]]+A[[1]] #1[[1]]-B[[1]]#1[[1]]^2, dd[[2]]+A[[2]] #1[[2]]-B[[2]]#1[[2]]^2, dd[[3]]+A[[3]] #1[[3]]-B[[3]]#1[[3]]^2 (*,5 #1[[4]]-#1[[4]]^2, 6 #1[[5]]-#1[[5]]^2*) }]; Print["Vi=",Vi[{x,y,z}]]; Xall={x1,x2,x3 (* ,1.5
,2 *)};
XTopt={{0,0,0},{0,0,0}}; T={t1,t2,t3}; T2T={t1,t2,t3};
Vv[x_]=Vi[Table[x,{i,n}]]; (* Vv[x_]=Table[dd[[i]]+ A[[i]]x-B[[i]]x^2/2,{i,n}];*)
Usedgraphs={}; Skippedgraphs={}; Lowprotvalidgraphs={}; Print["Vv=",Vv[x]];
(* From the above valuations, this module makes the matrix of constraints\[GreaterEqual]0 to be satisfied by
any solution x: *)
VmatrotX[Vv_,Xcurr_,n_]:= Module[{Vooo,x}, PechMatr=True; Nage=n;
If[(Pechat), Print["Preliminary 2: *********** start making Vls from Vi *****
Xcurr=" ,Xcurr]; ] ; (*Vooo[x_]=Vv[Table[x,{i,Nage}]];*)
Vooo=Function[Vi[Xcurr]]; If[(Pechat),
Print[" Vooo[[1]]= ", Vi[{x1,x1,x1}][[2]] ]; ];
Votx=Table[0,{i,Nage},{j,Nage}];
For[i=1,i\[LessEqual]Nage,i++,(
For[j=1,j\[LessEqual]Nage,j++, (If[(i\[NotEqual] j),
(Votx[[i,j]]= Vi[{Xcurr[[i]],Xcurr[[i]],Xcurr[[i]]}] [[i]]- Vi[{Xcurr[[j]],Xcurr[[j]],Xcurr[[j]]}][[i]];
(*Print["Vooo[Xcurr[[j]]] [[i]]=", Vooo[Xcurr[[j]]] [[i]]];*)
If[(False),
Print[i,j,"Votx[[i,j]]=",Votx[[i,j]] , ",
Vooo[Xcurr [[j]]]=", Vo[Xcurr [[j]]] ]]; ),((*else*)Votx[[i,j]]= Vooo[Xcurr[[i]]][[i]]-0; If[(False),
Print[i,j,"diagVotx[[i,j]]=", Votx[[i,j]] ]]; )];)];)];
If[(Pech),Print["Votx=",MatrixForm[Votx]]];
Return[Votx]; ];
Print[" [Vv,#1 ,n]=",VmatrotX[Vv,Xall,n]];
(* Now the next module generates the constraints to be used later on, in two forms: *)
Constraints={}; Constraints00ij=Table[0,{i,n},{j,n}];
For[i=1, i\[LessEqual]n,i++,( For[j=1,j\[LessEqual]n,j++,(
  If[(i\[Equal]j),
    ( Constraints= Append[Constraints, Votx[[i,i]]-T[[i]]\[GreaterEqual] 0.001*rho];
      Constraints00ij[[i,j]]= Votx[[i,i]]-T[[i]]-0.001*rho),
    ( Constraints= Append[Constraints, Votx[[i,j]]-T[[i]]+T[[j]]\[GreaterEqual]rho];
      Constraints00ij[[i,j]]= Votx[[i,j]]-T[[i]]+T[[j]]-rho)]; )])];
If[(Pech),Print["Constraints=", MatrixForm[Constraints]];
  Print["Constraints00ij=", MatrixForm[Constraints00ij]]];
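The constraints assembled here are the participation (IR, diagonal) and incentive-compatibility (IC, off-diagonal) inequalities. A self-contained Python sketch under the same quadratic-valuation assumption v_i(x) = d_i + a_i x - b_i x^2; the numbers below are hypothetical, chosen only to yield an admissible plan:

```python
def constraint_values(a, b, d, x, t):
    """c[i][i] = v_i(x_i) - t_i              (IR for agent i)
       c[i][j] = v_i(x_i)-v_i(x_j)-t_i+t_j   (IC: i does not envy j)"""
    n = len(a)
    v = lambda i, y: d[i] + a[i] * y - b[i] * y * y
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                c[i][j] = v(i, x[i]) - t[i]
            else:
                c[i][j] = v(i, x[i]) - v(i, x[j]) - t[i] + t[j]
    return c

# a plan is admissible when every entry is (weakly) nonnegative:
c = constraint_values([1.0, 0.8], [1.6, 2.4], [0.0, 0.0],
                      x=[0.3125, 0.166667], t=[0.09, 0.066])
ok = all(cij >= -1e-9 for row in c for cij in row)
print(ok)  # True
```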
(*-*)
(* Preliminary 3: Naive Floating Algorithm to make the starting Plan and the starting profit LowerBound *) (* Finding
the lower bound on Profit through serving single consumers:
Prof0=0;... deleted*) Vvnew={};
Plan={{0,0,0},{0,0,0}}; Vyp={}; H={};X={}; yakor={{0,0}};
For[i=1, i\[LessEqual]n, i++,( X=Append[X,A[[i]]/(2B[[i]])];
(* abscissas of the peaks of the initial parabola-valuations *) H=Append[H,Vv[A[[i]]/(2B[[i]])][[i]]];
(* heights of the initial parabola-valuations *)
Vvnew=Append[Vvnew,Vv[x][[i]]-H[[i]]]; (* reduced initial parabola-valuations *)
Vyp=Join[Vyp,{{H[[i]],X[[i]],Vvnew[[i]],i}}])];
Vyp=Vyp[[Ordering[Vyp[[All,{1}]]]]]; (* sorted list of heights, Xs, Valuations *) Vyp[[1,3]]=Vyp[[1,3]]+Vyp[[1,1]];
(* raised the first parabola *) Plan[[1,Vyp[[1,4]]]]=Vyp[[1,2]]; (* assigned X for the first parabola *) Plan[[2,Vyp[[1,4]]]]=Vyp[[1,1]];
(* assigned T for the first parabola *) yakor=Append[yakor,{Vyp[[1,2]],Vyp[[1,1]]}]; Rasst={};
For[i=2,i\[LessEqual]n,i++,( Rasst={};
For[j=1,j\[LessEqual]Length[yakor], j++, ( Rasstij=Abs[yakor[[j,2]]-Vyp[[i,3]]/.x-> yakor[[j,1]]];
Rasst=Join[Rasst,{Rasstij}]; )];
Vyp[[i,3]]=Vyp[[i,3]]+Min[Rasst];
yakor=Append[ yakor,{Vyp[[i,2]],
ReplaceAll[Vyp[[i,3]],{x-> Vyp[[i,2]]}]}]; Plan[[1,Vyp[[i,4]]]]=yakor[[i+1,1]];
Plan[[2,Vyp[[i,4]]]]=yakor[[i+1,2]]; )]; Vyp=Vyp[[Ordering[Vyp[[All,{4}]]]]]; heightMax=Max[Plan[[2]]]; xpeakMax=Max[Plan[[1]]];
planTransp=Transpose[Plan];
Xnaive=Plan[[1]]; Tnaive=Plan[[2]];
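The naive floating algorithm above can be paraphrased as: drop every parabola to peak height zero, then, in increasing order of peak height, float each one up by the smallest vertical gap to an already-placed anchor point and serve that agent at its own peak. A hedged Python paraphrase (an illustrative re-implementation assuming d[i]=0, as in these runs, not the program itself):

```python
def naive_plan(a, b, d):
    """Naive floating start: serve each agent at the peak of its own
    parabola v_i(x) = d[i] + a[i]x - b[i]x^2, floated up just enough
    to touch an existing anchor (assumes d[i] = 0)."""
    v = lambda i, y: d[i] + a[i] * y - b[i] * y * y
    order = sorted(range(len(a)), key=lambda i: v(i, a[i] / (2 * b[i])))
    anchors = [(0.0, 0.0)]                 # the origin is always available
    plan = {}
    for i in order:                        # from lowest peak to highest
        xi = a[i] / (2 * b[i])             # peak abscissa
        hi = v(i, xi)                      # peak height
        # reduced parabola v_i - h_i, floated by the smallest vertical
        # gap to an already-placed anchor (xj, tj):
        lift = min(abs(tj - (v(i, xj) - hi)) for xj, tj in anchors)
        plan[i] = (xi, lift)  # reduced value at its own peak is 0
        anchors.append(plan[i])
    return plan

plan = naive_plan([1.0, 0.8], [1.6, 2.4], [0.0, 0.0])
print(plan)
```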
If[(Pechat),
Print["Start finding a local optimum close to the Naive Plan:"]];
ndmax1= FindMaximum[{(Mm[[1]]*t1+Mm[[2]]*t2+Mm[[3]]*t3), Constraints}, {{x1,Xnaive[[1]]},{x2,Xnaive[[2]]},
{x3, Xnaive[[3]]}, {t1,Tnaive[[1]]},{t2,Tnaive[[2]]},{t3, Tnaive[[3]]}}];
AdjMatrStart=Table[0,{i,n},{j,n}];
AdjMatrStart44=Table[0,{i,n+1},{j,n+1}];
Xoptstart={x1,x2,x3}/.ndmax1[[2]];
Toptstart={t1,t2,t3}/.ndmax1[[2]]; Uoptstart=ndmax1[[1]];
XTopt={Xoptstart,Toptstart};
bestRiverStart={};
For[i=1,i\[LessEqual]n,i++, (For[j=1,j\[LessEqual]n,j++, (If[(0\[Equal] Chop[(Constraints00ij[[i,j]]/.{x1-> Xoptstart[[1]], x2-> Xoptstart[[2]], x3-> Xoptstart[[3]], t1-> Toptstart[[1]],t2-> Toptstart[[2]], t3-> Toptstart[[3]]} ),
0.000001] ),
(AdjMatrStart[[i,j]]=1;
If[(i\[Equal]j), ( AdjMatrStart44[[i+1,1]]=i+1; bestRiverStart= Append[bestRiverStart,{i+1,1}];),
( (*else*) AdjMatrStart44[[i+1,j+1]]=i+1;
bestRiverStart= Append[bestRiverStart,{i+1,j+1}];)]) ]; )])];
If[(Pechat), Print[MatrixForm[ Chop[(Constraints00ij/.{x1-> Xoptstart[[1]], x2-> Xoptstart[[2]],x3-> Xoptstart[[3]], t1-> Toptstart[[1]],t2-> Toptstart[[2]], t3-> Toptstart[[3]]} )]],"= Constraints00ij"];
Print[ " bestRiverStart=",bestRiverStart];
Print[MatrixForm[AdjMatrStart],"= AdjMatrStart"]; Print[MatrixForm[AdjMatrStart44],"= AdjMatrStart44"]
];
If[(Pechat), Print[Xoptstart, "= Xoptstart, is a local optimum close to Naive Plan, Toptstart=", Toptstart]];
NaivProt=0;
For[i=1,i\[LessEqual]n,i++,
NaivProt=NaivProt+Mm[[i]]*Plan[[2,i]]];
points0=
Graphics[ {Circle[{Plan[[1,1]],Plan[[2,1]]},0.015],
Circle[{Plan[[1,2]],Plan[[2,2]]},0.015],
Circle[{Plan[[1,3]],Plan[[2,3]]},0.015]}];
points11= Graphics[ {Disk[{Xoptstart[[1]],Toptstart[[1]]},0.015],
Disk[{Xoptstart[[2]],Toptstart[[2]]},0.015],
Disk[{Xoptstart[[3]],Toptstart[[3]]},0.015]}];
Show[{Plot[{Vv[x][[1]],Vv[x][[2]],Vv[x][[3]]},{x,0,3},
PlotStyle-> {Red,Green,Blue}, PlotRange->{0,1.5},
PlotLabel-> {"3 Initial Valuations and NaivePlan "},
AxesLabel-> {x,v}] (*,points0,points11*)}]
(* MatrcesAdmissibUsed={};
MatrcesAdmissibUsed=Append[MatrcesAdmissibUsed, AdjMatrStart44];*)
Usedgraphs=Append[Usedgraphs,AdjMatrStart44]; OptMat=AdjMatrStart44;
Uopt= Uoptstart-0.000001;
(* -100 or better take Uoptstart-0.000001*)
(* approx starting value of Lower-Bound Uopt for main algorithm *)
n=3; (* LayMat[[1]]=Partns[[ipart,1]];
If[(Length[Partns[[ipart ]]]>1),
LayMat[[2]]=Partns[[ipart,2]]];
If[(Length[Partns[[ipart ]]]>2),
LayMat[[3]]=Partns[[ipart,3]]] ;*)
Lbls=Table[i+1,{i,n }]; Partns=SetPartitions[Lbls];
PermuParts={};
AllAdMatrs={}; AllTrrivs={};
(*PermPrts=Permutations[Partns];*)
(*If[(Pechat),
Print["- - - - Preliminary 4: now construct non-labelled partitions of n agents' types "];
Print["Partns=",Partns]];
Print["Length[SetPartitions[Lbls]]=",
LenPart=Length[SetPartitions[Lbls]]];
Print["PermPrts=",PermPrts];
Print[ "..PermPrts, Length[PermPrts]=",Length[PermPrts]];*)
LayMat=Table[".",{i,1,n},{j,1,n}];
NulMat=AdjMat=Table[0,{i,1,n+1},{j,1,n+1}];
BadTrees={}; IndxVersn =Table[1,{i,n+2}];
try=1;itr=1;
IndxVersn[[1]]=0;IndxVersn[[n+2]]=0;
ListAllTargsBel=Table[{0},{i,n+2}];
(* <-for each agent, his possible groups-sons will be here. How many versions-groups= *)
LisLenAllVersBel=Table[0,{i,n+2}];
Nskipped=0; Nonskipped=0; iadjmatr=1;
nsolvedrivers=0; nsolvedtrees=0; nadmissgraphs=0;
If[(Pechat),
Print["Start main iterations checking
Rivers one-by-one:"]];
(*-Start main iterations checking
Rivers one-by-one, taking from RiversList-*)
For[itriv=1,itriv\[LessEqual]
lenRivList,itriv++, ( rivTree=RiversList[[itriv]];
If[(Pechat),Print[itriv,"=itriv, rivTree=",rivTree]];
AdjMat=NulMat; LerivTree= Length[rivTree];
(*start making an Adjacency-Matrix from Tree*)
For[i=1,i\[LessEqual]LerivTree,i++,
(iojo=rivTree[[i]]; AdjMat[[iojo[[1]],iojo[[2]]]]=rivTree[[i,1]];
If[(Pech22), Print[i, "=i element of matrix, rivTree[[i]]=",rivTree[[i]] , ",
AdjMat[[iojo[[1]] ]]= ",AdjMat[[iojo[[1]] ]]];]; )];
curLenAlAdMat=Length[AllAdMatrs];
If[(Pech11),
Print[iadjmatr,"=iadjmatr, previous AllAdMatrs[[curLenAlAdMat]]=",
MatrixForm[AllAdMatrs[[curLenAlAdMat]]],", new AdjMat= ",
MatrixForm[AdjMat]];]; iadjmatr=iadjmatr+1;
If[(MemberQ[AllAdMatrs,AdjMat]),
If[(Pech11),(Print[ "==Yes. It is among previous." ];
nsymmetrii=nsymmetrii+1;
nAllsymmetrii=nAllsymmetrii+1;)],(*else*)(AllAdMatrs=
Append[AllAdMatrs,AdjMat];
AllTrrivs=Append[AllTrrivs,rivTree];(*
APreodTrrivs=Append[APreodTrrivs,rivTree];*) )];
(*thus we have excluded repetitions, if any, and supplemented the list
AllTrrivs of rivers with a new one, a new AdjacencyMatrix to be studied
before building the next one. *)
(* Now try to solve optimal Xi for a tree or river, when functions are
quadratic and system linear: *)
Lamb=Table[0,{i,n},{j,n}];
DLt=Table[0,{i,n}]; (* derivatives of Lagrangian in T*)
DLx=Table[0,{i,n}]; (* derivatives of Lagrangian in X*) T={t1,t2,t3}; (*
vector of variables T*) Equations={}; (* = the name of the equations
array to be formed *) Constr={}; (* = the name of the graph-specific
inequalities array to be formed *)
solXTreal={}; LamNeNul={}; (* = the array of Lagrange multipliers *)
ProtReal={}; (* = attempted profit values *)
Prot1={}; (* = attempted profit values *)
Constraints1={}; (* = the name of the inequalities array to be formed *)
LamSuc=Table[0,{i,n}];
LamPre=Table[0,{i,n}]; LamPreVprim=Table[0,{i,n}];
Msum=Table[Mm[[i]],{i,n}]; (* sum of masses *) Protil=0;
Rivergood=True;
(**)
If[(Sum[AdjMat[[i,j]],{i,2,n+1},{j,1,n+1}]\[Equal] Sum[ki,{ki,2,n+1}]),
If[(Pechat), Print["GRAPH-Tree (necessarily new). We start solving it as
tree:"]];
(* because the number of arcs is n. Then simpler sequential solution
method: *)
(nsolvedtrees=nsolvedtrees+1;
For[j0=1,j0\[LessEqual] n,j0++,( j=n+2-j0;
(*=columns of the adjacency matrix, to find full weights:*) For[i=2,i\[LessEqual]
n+1,i++,
If[(AdjMat[[i,j]] \[NotEqual] 0),(Msum[[j-1]]=
Msum[[j-1]]+Msum[[(AdjMat[[i,j]]-1)]]; )]; ]; )];
(* start making the composite Profit-function[x], plugging in t:*)
Prof=Sum[Votx[[j,j]]*Mm[[j]],{j,1,n}];
For[j0=1,j0\[LessEqual] n,j0++,( j=n+2-j0;(*=rows of the adjacency matrix,
to find full weights:*)
For[i=2,i\[LessEqual] n+1,i++, If[(AdjMat[[i,j]] \[NotEqual] 0),
(Prof= Prof+(Votx[[j-1,j-1]]- ReplaceAll[ Votx[[i-1, i-1]],{Xall[[i-1]]->
Xall[[j-1]]}])*Msum[[(i-1)]])]; ]; )];
solX=Solve[{D[Prof,x1]\[Equal]0,D[Prof,x2]\[Equal]0,
D[Prof,x3]\[Equal]0},Xall];
If[(Pechat),Print[ "solX=",solX, ", Prof=", Prof]];
listOfReadyT={};
For[is=2,is\[LessEqual]n+1, is++,(If[(AdjMat[[is,1]]\[NotEqual]0),
T[[is-1]]=Votx[[is-1,is-1]];
listOfReadyT=Append[listOfReadyT,is];])]; (* = assigned T via X, recursively,
for those tied to the root:*)
For[nprosmotra=1,nprosmotra\[LessEqual]n,nprosmotra++,
For[j=2,j\[LessEqual]n+1, j++,(For[i=2,i\[LessEqual]n+1,
i++,(If[((AdjMat[[i,j]]\[NotEqual]0)&&(MemberQ[ listOfReadyT,
j])&&(!MemberQ[listOfReadyT,i])), T[[i-1]]= T[[j-1]]-ReplaceAll[Votx[[i-1,i-1]], Xall[[i-1]]-> Xall[[j-1]]]+ Votx[[i-1,i-1]];
listOfReadyT=Append[listOfReadyT,i];];)];)];];
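The back-substitution above exploits that on a tree of binding constraints the tariffs follow from the allocations alone: a child of the root has binding IR, t_i = v_i(x_i); any other node i with parent j has binding IC, t_i = t_j - v_i(x_j) + v_i(x_i). An illustrative Python sketch (the chain example and the valuations v_i(y) = i·y are hypothetical):

```python
def tree_tariffs(parent, v, x):
    """parent[i] = j encodes arc i -> j; j == 0 is the root #0.
    v(i, y): agent i's valuation, x[i]: agent i's allocation."""
    t = {}
    def solve(i):
        if i not in t:
            j = parent[i]
            if j == 0:
                t[i] = v(i, x[i])                          # binding IR
            else:
                t[i] = solve(j) - v(i, x[j]) + v(i, x[i])  # binding IC
        return t[i]
    for i in parent:
        solve(i)
    return t

# a chain 1 <- 2 <- 3 hanging off the root, with v_i(y) = i*y:
t = tree_tariffs({1: 0, 2: 1, 3: 2}, lambda i, y: i * y,
                 {1: 1.0, 2: 2.0, 3: 3.0})
print(t)  # {1: 1.0, 2: 3.0, 3: 6.0}
```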
If[(Pechat),Print["listOfReadyT=",listOfReadyT,", T=",T]];
TTn=Chop[T/.solX[[1]]]; XXn=Chop[Xall/.solX[[1]]];
If[(Pechat), Print[{T,Xall},"=T,Xall, along the tree TTn= ", TTn,", XXn= ",
XXn]]; Constraints1=Constraints;
For[i=1, i\[LessEqual] n, i++, ( (*Constraints1=
Constraints1//.{T2T[[i]]-> TTn[[i]], Xall[[i]]-> XXn[[i]]};*)
Constraints1=Constraints1/.T2T[[i]]-> T[[i]]; Chop[Constraints1]; )];
(*Print["Constraints:", Constraints1];*)
Constraints1=Constraints1/.solX[[1]];
(*Print["Constraints:", Constraints1];*)
validsolution=True;
For[i=1, i\[LessEqual] Length[Constraints1],i++,
If[(Constraints1[[i]]\[Equal] False), validsolution=False; ]];
Protil=Prof/.solX ;
If[(Pechat), Print[validsolution, "= validsolution,
currentNonadmissibleProfit= ", Protil, ", was admissible payoff Uopt =",
Uopt]]; If[(validsolution),( (*then*)
If[(Pechat),Print["Yes, Tree "]]; (* remember this graph not to study its
relatives:*)
Usedgraphs=Append[Usedgraphs,AdjMat];
lenUsedgraphs=Length[Usedgraphs]; nadmissgraphs=nadmissgraphs+1;
If[(Protil[[1]]>Uopt),
( If[(validsolution),( Uopt= Protil[[1]]; XTopt={XXn,TTn};
OptMat=AdjMat;
If[(Pechat),Print[" validgoodsolution "]; Print["used or low-profit ",
MatrixForm[AdjMat]]];),
(*else*)Print["tree-SOLUTIONS ABSENT!??"]];),
( (*else: low profit, and ..:*) Lowprotvalidgraphs=
Append[Lowprotvalidgraphs,AdjMat]; )]), ((*if nonvalidsolution*)
If[(Pech11), Print["tree-SOLUTION nonvalid!"]];)]; )
(*-else-graph not a tree:*) ,
( If[(Pechat), Print["GRAPH-nontreeRiver, start solving it through
quadratic simultaneous equations:"]];
Rivergood=True; (* start checking whether the river is new and promising, or
to-be-rejected because it is old or a descendant-relative of an old admissible
graph with low profit: *)
(*duplicated in Rivers, so one cycle switched off:
For[i=1, i\[LessEqual]Length[BadTrees], i++,
If[(Min[AdjMat-BadTrees[[i]]]\[GreaterEqual]0),
(* Rivergood=False; *) Nskipped=Nskipped+1;]]; *) innonskip=True ;
If[(Rivergood),
( For[i=1, i\[LessEqual]Length[Usedgraphs], i++,
If[(Min[AdjMat-Usedgraphs[[i]]]\[GreaterEqual]0), ( innonskip=False;
(* Rivergood=False; = temporarily switched-off economizing command,
because something goes wrong with it. In principle, it should economize
computer efforts. We calculate how many rivers we should (but don't so
far) pass without solving: *) i=i)]];
If[(innonskip),(Nonskipped=Nonskipped+1;),(*else*)
Nskipped=Nskipped+1;
Skippedgraphs=Append[Skippedgraphs,AdjMat]; If[(Pechat),
Print[itriv,"=itriv, added Nskipped=",Nskipped, ",
Length[Usedgraphs]=",Length[Usedgraphs]]];];
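The skip test above compares adjacency matrices entrywise: if AdjMat - Usedgraphs[[i]] has no negative entry, the candidate's arc set contains an already-solved graph and could be passed over. A Python sketch of that entrywise dominance check (the 2x2 matrices are hypothetical toys):

```python
def dominates(cand, used):
    """True when cand >= used entrywise, i.e. the candidate graph
    contains every arc of the already-solved graph."""
    return all(c >= u
               for crow, urow in zip(cand, used)
               for c, u in zip(crow, urow))

used = [[0, 0], [1, 0]]       # an already-solved graph with one arc
cand = [[0, 1], [1, 0]]       # candidate adds one more arc on top
print(dominates(cand, used), dominates(used, cand))  # True False
```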
Eqntop={}; jNonenvied={}; (*list of iNonenvied agents: *) (* start
forming the list of nonzero Lagrange lambdas and graph-specific
constraint-equalities for each (i,j) of the graph, and the list of iNonenvied agents: *)
If[(Rivergood && innonskip),(nsolvedrivers= nsolvedrivers+1;
For[j0=1,j0\[LessEqual] n,j0++,( j=n+2-j0;(*=columns of the adjacency
matrix, to find full weights:*) If[(Sum[ AdjMat[[i,j]],{i,2,
n+1}]\[Equal]0),
(jNonenvied= Append[jNonenvied,j-1]; Eqntop= Append[Eqntop,
D[Vv[Xall[[j-1]]][[j-1]], Xall[[j-1]]]\[Equal]0 ]; If[(Pechat), Print[j, "=j
top-tree agent non-envied Eqntop=", Eqntop]])]; For[i=2,i\[LessEqual]
n+1,i++,
If[(AdjMat[[i,j]] \[NotEqual] 0),( Lamb[[i-1,j-1]]=
StringExpression["la",ToString[i-1], ToString[j-1]]; LamNeNul=
Append[LamNeNul, ToExpression[Lamb[[i-1,j-1]]]]; Constr= Append[
Constr,(Votx[[i-1,j-1]]-T[[i-1]]+ T[[j-1]]\[Equal]0)]; )]; ]; )]; (*start
forming the excess equation system, with the top xi determined just now:*)
Equations=Eqntop;
For[i=2,i\[LessEqual] n+1, i++,(*=making names of diagonal elements
lambda:*)
If[(AdjMat[[i,1]] \[NotEqual] 0),( Lamb[[i-1,i-1]]=
StringExpression["la",ToString[i-1], ToString[0]]; LamSuc[[i-1]]=
LamSuc[[i-1]]+ToExpression[Lamb[[i-1,i-1]]]; LamNeNul=
Append[LamNeNul, ToExpression[Lamb[[i-1,i-1]]]]; Constr= Append[
Constr,(Votx[[i-1,i-1]]- T[[i-1]]\[Equal]0)];)];]; If[(False),Print[i, "=i,
LamSuc[[i-1]]=",LamSuc ]];
For[i=2,i\[LessEqual] n+1, i++,( (*= making Lambda Successors and
predecessors:*)
(*=rows of the adjacency matrix, to find lambda weights:*)
For[ j=1,j\[LessEqual] n+1,j++, If[(AdjMat[[i,j]] \[NotEqual] 0 ),
(LamSuc[[i-1]]=LamSuc[[i-1]]+
If[(j\[NotEqual] 1 ),(ToExpression[ Lamb[[i-1,j-1]]]),(0)]; )];
If[(AdjMat[[j,i]] \[NotEqual] 0 ),(LamPre[[i-1]]=LamPre[[i-1]]+
If[(j\[NotEqual] 1 ),(ToExpression[ Lamb[[j-1,i-1]]]),(0)];
LamPreVprim[[i-1]]=LamPreVprim[[i-1]]+ If[(j\[NotEqual] 1 ),
(ToExpression[Lamb[[j-1,i-1]]]* D[ReplaceAll[Votx[[j-1,j-1]],
Xall[[j-1]]-> Xall[[i-1]]], Xall[[i-1]]]),(0)]); ]; ];
(* differentiating the Lagrangian in ti we find the lambda-equations:*)
DLt[[i-1]]=Mm[[i-1]]-LamSuc[[i-1]]+LamPre[[i-1]];
(* differentiating the Lagrangian in xi we find the x-equations:*)
DLx[[i-1]]=D[Votx[[i-1,i-1]], Xall[[i-1]]]*LamSuc[[i-1]]-LamPreVprim[[i-1]]; (* take together all equations:*)
Equations=Append[Equations,DLt[[i-1]]\[Equal]0];
Equations=Append[Equations,DLx[[i-1]]\[Equal]0]; )]; (* join the
equations-of-differentiation with the constraint-equalities:*)
Equations=Join[Equations,Constr];
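Reading off the two assignments DLt and DLx above, the stationarity conditions of the Lagrangian assembled for each agent i appear to be (our transcription of the code, with m_i the mass of type i and lambda_ij the multiplier on arc i→j, j=0 denoting the root):

```latex
% Stationarity of the Lagrangian, as assembled by DLt and DLx above:
\frac{\partial L}{\partial t_i}
  = m_i - \sum_{j:\, i\to j}\lambda_{ij} + \sum_{j:\, j\to i}\lambda_{ji} = 0,
\qquad
\frac{\partial L}{\partial x_i}
  = v_i'(x_i)\sum_{j:\, i\to j}\lambda_{ij}
    - \sum_{j:\, j\to i}\lambda_{ji}\, v_j'(x_i) = 0 .
```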
XT=Join[Xall,T,LamNeNul];
(* MAIN SOLVING operation may be NSolve?? *) solXTlam=Solve[Equations ];
(* here was ",XT" = non-required names of variables, omitted when the number of equalities is equal to or greater than the
number of unknowns in them *)
solXT={Xall,T,LamNeNul}/.solXTlam;
(* = array of all roots obtained, where lambdas may be indefinite. It is a 3-dimensional matrix: roots, category
X T or L, and member *)
lenLamNeNul=Length[LamNeNul]; nulLen=Table[0,{i,lenLamNeNul}];
solXTmax={}; (* Selecting among the obtained roots: ind= 1 means
good root. *)
For [i=1, i\[LessEqual] Length[solXT], i++, ind=1;
For[j=1,j\[LessEqual]n, j++, If[(Head[solXT[[i,1,j]]]\[Equal]
Complex),ind=0 ]; ; ]; If[(ind\[Equal]1),( Nerav={};
priznakLambd=True; solXTmax=Join[solXTmax,{solXT[[i]]}];
For[j=1, j\[LessEqual] lenLamNeNul, j++, Nerav=Join[
Nerav,{solXT[[i,3,j]]\[GreaterEqual]0}]];
(*
priznakLambd=Reduce[Nerav,LamNeNul]; this finds positivity of
lambdas= solXT[[i,3,..]]..Print["Nerav=", Nerav]; maybe better not to check
positivity of lambdas any more, because they are sometimes
indeterminate, whereas admissible X and T anyway can serve as a solution *)
MinX=Min[solXT[[i,1]]];MinT=Min[solXT[[i,2]]];
For[j=1,j\[LessEqual]n, j++,
If[(MinX<0)||(MinT< 0)||(priznakLambd\[Equal]False) , ind=0 ]; (*this rejects negative X and T*) ; ];)];
If[(ind==1), solXTreal=Join[solXTreal,{solXT[[i]]}];]; ];
If[(solXTreal\[NotEqual] {}),
(* then start checking admissibility of X and T within the out-of-graph inequalities: *)
For[i=1, i\[LessEqual] Length[solXTreal], i++,
( Constraints1=Constraints;
For[j=1, j\[LessEqual]n, j++,
Constraints1= Constraints1//.{Xall[[j]]-> solXTreal[[i,1,j]], T2T[[j]]->
solXTreal[[i,2,j]]}; (* here we have plugged the magnitudes X, T found into the constraints, each of which becomes True or False, and check: *)
Chop[Constraints1]; If[(Pechat), Print["Constraints1=", Constraints1]];];
solutGood=True; For[k=1, k\[LessEqual]Length[Constraints1], k++,
solutGood=solutGood && Constraints1[[k]]; (* here, if any constraint is
False, the variable solutGood becomes False and
solXTreal[[i]]={} *) ]; If[!solutGood,solXTreal[[i]]={}]; )];];
If[solXTreal\[Equal]{{}}, solXTreal={}]; If[solXTreal\[Equal]{{},{}},
solXTreal={}]; (*If[solXTmax\[Equal]{{}}, solXTmax={}];
If[solXTmax\[Equal]{{},{}}, solXTmax={}];*)
ProtMax={};
For[i=1, i\[LessEqual]Length[solXTmax], i++,
( If[(solXTmax[[i]]\[NotEqual]{}), (* then it means a real valid solution
(within constraints) and we compare profits: *) ProtMax=
Append[ProtMax, Sum[Mm[[j]]*solXTmax[[i,2,j]],{j,1,n} ]];,
(*else*)ProtMax=Append[ProtMax,-100]; ])];
If[(Pech11),Print["solXTlam=",solXTlam];
Print["solX[[1,1]]=",solXT[[1,1]],"solT[[1,2]]=", solXT[[1,2]] ];
If[(Length[solXT]>1),
Print["solX[[2,1]]=",solXT[[2,1]],"solT[[2,2]]=", solXT[[2,2]] ] ];
Print["solXTreal=", solXTreal];]; If[(Pech11), Print[
"Lamb=",MatrixForm[Lamb],", Equations=", Equations,",
LamNeNul=",LamNeNul];];
If[(Pechat), Print[ "nontree-river: if not for constraints, its ProtMax= ",
ProtMax,", previous was Uopt=",Uopt]]; If[(solXTreal\[NotEqual]{}) ,(
(*then valid solution, make the vector of profits of the vector-roots system:*)
If[(Pechat),Print["Yes, validsolution river " ]];
nadmissgraphs=nadmissgraphs+1;
For[i=1, i\[LessEqual]Length[solXTreal], i++, (
If[(solXTreal[[i]]\[NotEqual]{}), ProtReal= Append[ProtReal,
Sum[Mm[[j]]*solXTreal[[i,2,j]],{j,1, n} ]];
(* ProtReal accumulates all
real-valued attempts to improve:*) ,
(*else*)ProtReal=Append[ProtReal,-100]; ])]; lenProt=Length[ProtReal];
iBest=Ordering[ProtReal][[lenProt]]; Bestsol=solXTreal[[iBest]];
If[(Pechat), Print["Best profit: ", ProtReal[[iBest]], " attained at
solution:", solXTreal[[iBest]]]];
If[ProtReal[[iBest]]>Uopt, (Uopt=ProtReal[[iBest]];
OptMat=AdjMat; XTopt=solXTreal[[iBest]];
Print["**** better ! New Uopt=",Uopt])];
Usedgraphs=Append[Usedgraphs,AdjMat];
lenUsedgraphs=Length[Usedgraphs];) , (*else*) If[(Pechat),
Print["SOLUTIONS ABSENT!"]]];
If[(Max[ProtMax]<Uopt),( Lowprotvalidgraphs=
Append[Lowprotvalidgraphs,AdjMat];)]; )]; )] )];
(*-*)
If[(Pech), Print[ " , rivTree=", rivTree ];
Print[ " AdjMat=", MatrixForm[AdjMat ]]];
(* Now we renew a delicate thing: IndxVersn = the index of the version of the river within the current Preorder. It is
increased by 1. But it is a vector, each component being bounded from above by the number
(LisLenAllVersBel[[i]]). So, when there is no room for increasing the 1-st component, we make it =1 and try to increase the
2-nd, and so on. *)
For[i=2,i\[LessEqual]n,i++, If[(IndxVersn[[i]]+1\[LessEqual]
LisLenAllVersBel[[i]]), IndxVersn[[i]]
=IndxVersn[[i]]+1;Goto[outi],(*else*)
If[(IndxVersn[[i+1]]+1\[LessEqual]
LisLenAllVersBel[[i+1]]),(IndxVersn[[i+1]] = IndxVersn[[i+1]]+1;
For[k=2,k\[LessEqual]i,k++,IndxVersn[[k]] =1];
Goto[outi];)];];];Label[outi]; try=try+1;
If[(try\[LessEqual] manyVers)(*=index-or-permutiterator*),
(Goto[ startipreodr])];
LenPreodRvrs=Length[APreodTrrivs];
AllPreodTrrivs=Join[AllPreodTrrivs,APreodTrrivs];
If[(Pech11),
Print[nsymmetrii,"=nsymmetrii, APreodTrrivs=",nonprint, ",
LenPreodRvrs=",LenPreodRvrs]];
(* Print[" AllAdMatrs=",AllAdMatrs," , LenMaTrs=",
Length[AllAdMatrs]];*) )];
Print["=******= end =******, nAllsymmetrii=",nAllsymmetrii];
LenAlTrvrs=Length[AllTrrivs]; (*Print[" AllTrrivs=",AllTrrivs,",
LenAlTrvrs=",LenAlTrvrs];
Print[" AllAdMatrs=",AllAdMatrs," ,
LenMaTrs=",Length[AllAdMatrs]];*) Print["how many graphs should
be skipped without solving: Nskipped=", Nskipped]; Print["approx
Uopt= ",Uopt, ", XTopt= ", XTopt];
Print[ " OptMat=", MatrixForm[OptMat ]]; Print["
AdjMatrStart44=",MatrixForm[AdjMatrStart44],"= compare them?"];
If[(OptMat\[Equal] AdjMatrStart44),( Print["Starting plan happens to be
Optimal"]; ntimesstartopt=ntimesstartopt+1;
ntimesstartoptAll=ntimesstartoptAll+1),
( Print["Starting plan happens to be non-Optimal"];)]; nAllRandskipped=nAllRandskipped+Nskipped;
Print[ "Start fine-tuning this true local optimum to increase machine precision:"];
ndmax2= FindMaximum[{(Mm[[1]]*t1+Mm[[2]]*t2+Mm[[3]]*t3),
Constraints}, {{x1,XTopt[[1,1]]},{x2,XTopt[[1,2]]},{x3,
XTopt[[1,3]]},{t1,XTopt[[2,1]]},{t2,XTopt[[2,2]]}, {t3,
XTopt[[2,3]]}}, AccuracyGoal-> 8, PrecisionGoal-> 8, MaxIterations->
30]; Xoptuned={x1,x2,x3}/.ndmax2[[2]];
Toptuned={t1,t2,t3}/.ndmax2[[2]]; Uoptuned=ndmax2[[1]];
{xx1,xx2,xx3}=XTopt[[1]];
{tt1,tt2,tt3}=XTopt[[2]];
tmax=Max[tt1,tt2,tt3];
xoptMax=Max[xx1,xx2,xx3];
Surpl={Vv[xx1][[1]]-tt1,Vv[xx2][[2]]-tt2,Vv[xx3][[3]]-tt3};
points=Graphics[ {Circle[{xx1,tt1},0.015], Circle[{xx2,tt2},0.015],
Circle[{xx3,tt3},0.015]}]; Print["{xx1,xx2,xx3}=",{xx1,xx2,xx3},",
{tt1,tt2,tt3}=",{tt1,tt2,tt3}, ", points=",points];
Show[{Plot[{Vv[x][[1]]-Surpl[[1]],Vv[x][[2]]-Surpl[[2]],
Vv[x][[3]]-Surpl[[3]]},{x,0,xoptMax*1.5},
PlotStyle->{Red,Green,Blue},PlotRange-> {0,tmax*1.3}, AxesLabel-> {x,t},
PlotLabel-> "FineTuned Optimum", DisplayFunction-> Identity],points}, DisplayFunction->$DisplayFunction]
;
heightMax2=Max[tt1,tt2,tt3]; pointsPlan=
Graphics[{Point[planTransp[[1]]],Point[planTransp[[2]]],
Point[planTransp[[3]]]}]; Print["how many rivers should be skipped
without solving Nskipped= ", Nskipped,", solved Nonskipped= ",
Nonskipped ];
Print["========finish==iterand=" ]; )];
Print["Naive Plan=",Plan,",
NaivProt=",NaivProt];
Print[Uoptstart,"=Uoptstart, Initial local-opt Plan Xoptstart=", Xoptstart,
" , Toptstart=",Toptstart];
Print["approx Uopt= ",Uopt, ", approx Optimal plan
{xx1,xx2,xx3}=",{xx1,xx2,xx3}, ", {tt1,tt2,tt3} =",{tt1,tt2,tt3}]; Print[
"Uoptuned= ",Uoptuned," tuned Optimal plan Xoptuned=",Xoptuned, ",
Toptuned=",Toptuned];
Print["Difference% relative= (Uoptuned-Uoptstart)/Uoptstart= ", 100
(Uoptuned-Uoptstart)/Uoptstart,"%"];
lenAllPreodTrrivs=Length[AllPreodTrrivs];
Print["how many rivers should be skipped without solving Nskipped= ",
Nskipped,", all nAllRandskipped= ", nAllRandskipped]; Print["how
many graphs solved without skipping: nsolvedrivers= ", nsolvedrivers,
"+",nsolvedtrees,"=nsolvedtrees "];
Print["= =========finish====March 2016==== =" ];
Show[{Plot[{Vyp[[1,3]],Vyp[[2,3]],Vyp[[3,3]],Vv[x][[3]]-Surpl[[3]]},
{x,0, xpeakMax*1.5},
PlotStyle->{Red,Green,Blue,{Dashing[{0.03,0.05}], Blue}},
PlotRange-> {0,heightMax2*1.2},AxesLabel-> {x,t},
PlotLabel-> "Naive and FineTuned StartingPlan", DisplayFunction->
Identity],pointsPlan,points11}, DisplayFunction->$DisplayFunction]
Print["how many ADMISSIBLE graphs (non-contradictory active
constraints) found: nadmissgraphs= ",nadmissgraphs,",
ntimesstartoptAll=", ntimesstartoptAll]; Print[niterRa, "=niterRa, how
many rivers among (niterRa) examples skipped without solving:
nAllRandskipped/(79*niterRa)= ", nAllRandskipped/(79*niterRa), ", all
nAllRandskipped= ", nAllRandskipped];
Print["==========finish========="];
EndPackage[];