Smooth Optimal Decision Strategies for Static Team Optimization Problems and their Approximations
Giorgio Gnecco and Marcello Sanguineti
University of Genova
[email protected]
[email protected]
Static team and statistical information structure
Static team of n decision makers (DMs), i = 1, . . . , n.
Team: same objective, but different available information.
Static: the information of the DM i does not depend on the actions
of the other DMs.
x ∈ X ⊆ R^{d0}: random variable (state of the world) describing a
stochastic environment.
Statistical information structure: the information that the n DMs
have on the state of the world x is modelled by an n-tuple of
random variables y1 , . . . , yn , associated (together with x) with the
joint probability density q(x, y1 , . . . , yn ).
yi ∈ Yi ⊆ R^{di}: information that DM i has about x.
Strategies and team utility function
si : Yi → Ai ⊆ R: strategy of the i-th DM.
ai = si (yi ): action that the DM i chooses on the basis of the
information yi .
u : X × Π_{i=1}^n Yi × Π_{i=1}^n Ai → R: real-valued team utility function.
Problem STO: static team optimization with statistical
information
Given the statistical information structure q(x, y1 , . . . , yn ), the team
utility function
u(x, y1, ..., yn, a1, ..., an),

u : R^N → R, where N = Σ_{i=0}^n di + n, find

sup_{s1,...,sn} v(s1, ..., sn),

where

v(s1, ..., sn) = E_{x,y1,...,yn} { u(x, {yi}_{i=1}^n, {si(yi)}_{i=1}^n) }.

sup_{s1,...,sn} v(s1, ..., sn): value of the team.
Functional optimization and large dimensionality
Problem STO is a functional optimization problem:
one has to maximize a functional to find n optimal strategies, each
being a function of di real-valued variables, i = 1, . . . , n.
In general one has a large number of variables,
e.g.: strategies = routing functions in a packet-switching
telecommunication network; variables = lengths of the packet queues
in the nodes.
Example: store-and-forward packet-switching
telecommunication network
Main objectives of this work
When closed-form solutions are not available, one may be interested in
suboptimal strategies that are sufficiently accurate and easily
implementable.
Structural properties of the (unknown) optimal strategies can help in
narrowing the search for good suboptimal strategies.
Goals:
Studying existence, smoothness properties and uniqueness of an
optimal n-tuple of strategies for Problem STO.
Exploiting such structural properties to study methods of
approximate solution.
E.g., approximation error bounds for approximation schemes that
contain sufficiently accurate suboptimal strategies using a sufficiently
small number of parameters.
Assumptions (I)
Assumption A1
The set X of the states of the world is compact, Y1 , . . . , Yn are compact
and convex, and A1 , . . . , An are bounded closed intervals. For an integer
m ≥ 2, the team utility function u(·) is of class C^m on an open set
containing X × Π_{i=1}^n Yi × Π_{i=1}^n Ai, and q is a (strictly) positive probability
density on X × Π_{i=1}^n Yi, which can be extended to a function of class C^m
on an open set containing X × Π_{i=1}^n Yi.
Assumption A2
There exists τ > 0 such that the team utility function u : RN → R is
separately concave in each of the decision variables a1 , . . . , an , with
concavity τ in each of them.
Recall Problem STO:
(static team optimization with statistical information)
Given the statistical information structure q(x, y1 , . . . , yn ), the team
utility function
u(x, y1, ..., yn, a1, ..., an),

u : R^N → R, where N = Σ_{i=0}^n di + n, find

sup_{s1,...,sn} v(s1, ..., sn),

where

v(s1, ..., sn) = E_{x,y1,...,yn} { u(x, {yi}_{i=1}^n, {si(yi)}_{i=1}^n) }.

sup_{s1,...,sn} v(s1, ..., sn): value of the team.
Concavity at least τ
A concave function f defined on a convex set X has concavity (at
least) τ > 0 if for all u, v ∈ X and every supergradient au of f at u
one has

f(v) − f(u) ≤ au · (v − u) − τ ‖v − u‖².

For a function of class C^2, concavity (at least) τ is equivalent to
λmax(∇²f(u)) ≤ −2τ for every u, where λmax(∇²f(u)) is the maximum
eigenvalue of the Hessian ∇²f(u).

Let X1, ..., Xn be convex sets and τ > 0. A function f defined on
Π_{i=1}^n Xi is separately concave in each variable, with concavity (at
least) τ > 0 in each of them, if, for any fixed choice of all the
arguments except one, the resulting function is (concave and) has
concavity τ in the remaining argument.
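As a concrete check of this characterization, the sketch below (illustrative, not from the slides) computes the largest τ for a quadratic function from the maximum Hessian eigenvalue and verifies the defining inequality numerically; the matrix Q and vector b are made-up data.

```python
import numpy as np

# Hypothetical quadratic utility term f(a) = -a' Q a / 2 + b' a.
# For C^2 functions, concavity at least tau is equivalent to
# lambda_max(Hessian f(u)) <= -2*tau at every u (here the Hessian is -Q).
Q = np.array([[2.0, 0.5],
              [0.5, 3.0]])   # assumed positive definite matrix
b = np.array([1.0, -1.0])

def f(a):
    return -0.5 * a @ Q @ a + b @ a

hessian = -Q                          # constant Hessian of f
lam_max = np.max(np.linalg.eigvalsh(hessian))
tau = -lam_max / 2.0                  # largest tau for which f has concavity tau
print(tau)

# Sanity check of the defining inequality with supergradient grad f(u):
rng = np.random.default_rng(0)
for _ in range(1000):
    u, v = rng.normal(size=2), rng.normal(size=2)
    grad_u = -Q @ u + b
    assert f(v) - f(u) <= grad_u @ (v - u) - tau * np.sum((v - u) ** 2) + 1e-9
```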
Assumptions (II)
Assumption A3
For every n-tuple (s1, ..., sn) of strategies and every
(y1, ..., yn) ∈ Π_{i=1}^n Yi, the strategies defined as

ŝ1(y1) := argmax_{a1∈A1} E_{x,y2,...,yn|y1} { u(x, {yi}_{i=1}^n, a1, {si(yi)}_{i=2}^n) },
...
ŝn(yn) := argmax_{an∈An} E_{x,y1,...,yn−1|yn} { u(x, {yi}_{i=1}^n, {si(yi)}_{i=1}^{n−1}, an) }

do not take values on the boundaries of A1, ..., An, respectively.
Smoothness of the optimal team strategies
Theorem 1
Let Assumptions A1, A2, and A3 hold. Then
Problem STO admits C^{m−2} optimal strategies (s1^o, ..., sn^o) with partial
derivatives that are Lipschitz up to order m − 2.
The proof of Theorem 1 exploits the Implicit Function Theorem and the
Ascoli-Arzelà Theorem.
For m = 2, the Implicit Function Theorem is not used in the proof
and Assumption A3 is not needed (i.e., under Assumptions A1 and
A2, there exist Lipschitz optimal strategies).
Example: multidivisional firm (I)
Two autonomous divisions producing two different goods in
quantities a1 ∈ [0, A1 ] and a2 ∈ [0, A2 ].
Goods: sold in two competitive markets at prices
x1 ∈ [X1 min , X1 max ] and x2 ∈ [X2 min , X2 max ].
x1 and x2 : not known exactly before choosing a1 and a2
each division i has a private estimate yi of xi .
Total cost of producing the amounts a1 and a2:

c(a1, a2) = (1/2) c11 a1² + c12 a1 a2 + (1/2) c22 a2².

Team utility function:

u(x1, x2, s1(y1), s2(y2)) =
x1 s1(y1) + x2 s2(y2) − (1/2) c11 s1(y1)² − c12 s1(y1) s2(y2) − (1/2) c22 s2(y2)².
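The expected team utility of this example can be estimated numerically. A minimal sketch, assuming an illustrative joint density (uniform prices, additive uniform estimation noise) and candidate linear strategies; none of these modeling choices come from the slides:

```python
import numpy as np

# Monte Carlo evaluation of the expected team utility v(s1, s2) for the
# two-division firm. Prices x_i are assumed uniform on [1, 2] and each
# division observes a noisy private estimate y_i = x_i + noise.
rng = np.random.default_rng(1)
c11, c12, c22 = 2.0, 0.5, 2.0        # assumed cost parameters (c11*c22 - c12^2 > 0)

def v(s1, s2, n_samples=200_000):
    x1 = rng.uniform(1.0, 2.0, n_samples)
    x2 = rng.uniform(1.0, 2.0, n_samples)
    y1 = x1 + rng.uniform(-0.1, 0.1, n_samples)   # private estimates
    y2 = x2 + rng.uniform(-0.1, 0.1, n_samples)
    a1, a2 = s1(y1), s2(y2)
    u = (x1 * a1 + x2 * a2
         - 0.5 * c11 * a1**2 - c12 * a1 * a2 - 0.5 * c22 * a2**2)
    return u.mean()

# Candidate linear strategies a_i = s_i(y_i) = g * y_i, with the common
# gain g chosen by a coarse grid search.
gains = np.linspace(0.0, 1.0, 21)
best = max(gains, key=lambda g: v(lambda y: g * y, lambda y: g * y))
print(best)
```

Under these assumed parameters the grid search lands near g ≈ 0.4; the point of the sketch is only that v(s1, s2) is straightforward to estimate once a density and a strategy class are fixed.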
Example: multidivisional firm (II)
Checking the assumptions. . .
Assumption A1: choose a sufficiently smooth and positive joint
probability density for x1 , x2 , y1 , y2 (the required smoothness of the
team utility function u holds automatically).
Assumption A2: choose c11, c12 and c22 such that the cost function
is positive definite (i.e., c11 > 0 and c11 c22 − c12² > 0).
Assumption A3: it can be imposed for suitable values of the
parameters (i.e., Ai, Xi min, Xi max, cij)
(however, recall that for m = 2 Assumption A3 is not required).
Here di = 1, but the example can be easily extended to a
multidimensional setting.
Uniqueness of the optimal team strategies
C(Yi , Ai ): set of continuous functions from Yi to Ai with the
sup-norm.
Theorem 2
Let Assumptions A1, A2, and A3 hold for m ≥ 3 and n = 2, and let

β1,2 = max_{(a1,a2)∈A1×A2} |∂²u(x, y1, y2, a1, a2)/∂a1∂a2|.

If β1,2/τ < 1, then the pair (s1^o, s2^o) given in Theorem 1 is the unique
optimal pair of strategies in C(Y1, A1) × C(Y2, A2).
The proof of Theorem 2 exploits the Banach Contraction Mapping Theorem
and a result on Fréchet derivatives from (Li & Basar, 1987).
Theorem 2 can be extended to the case of more than n = 2 DMs.
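For the two-division firm example, the contraction condition of Theorem 2 can be checked directly; a minimal sketch with illustrative parameter values (not taken from the slides):

```python
# For the quadratic firm utility, d^2 u / da1 da2 = -c12 is constant, so
# beta_12 = |c12|, and d^2 u / da_i^2 = -c_ii gives separate concavity
# tau = c_ii / 2 in each decision variable (via lambda_max <= -2*tau).
c11, c12, c22 = 2.0, 0.5, 2.0         # assumed cost parameters

beta_12 = abs(-c12)                   # max |cross second derivative|
tau = min(c11, c22) / 2.0             # concavity in each decision variable
assert beta_12 / tau < 1              # Theorem 2's uniqueness condition holds
print(beta_12 / tau)                  # 0.5
```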
Some consequences of the obtained results
Theorem 1 can be applied to study
the applicability of quasi-Monte Carlo methods;
upper bounds on the approximation of the team optimal strategies
via various kinds of neural networks (w.r.t. the dimension di and the
number ki of computational units used by each neural network),
e.g., Radial-Basis-Function (RBF) networks.
Theorem 2 can be applied to study
algorithms for suboptimal strategies based on the Banach Contraction
Mapping Theorem.
Application of quasi-Monte Carlo methods
Evaluation of v(s1, ..., sn) = E_{x,y1,...,yn} { u(x, {yi}_{i=1}^n, {si(yi)}_{i=1}^n) }.
When q(·), u(·) and s1(·), ..., sn(·) are smooth enough, this expectation can be
approximated by quasi-Monte Carlo methods.
For the optimal team strategies s1^o(·), ..., sn^o(·) and
m ≥ Σ_{i=0}^n di + 2, Theorem 1 provides a degree of smoothness
sufficient to apply the Koksma-Hlawka Inequality (an upper bound on the
approximation error of quasi-Monte Carlo methods).
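A minimal illustration of the quasi-Monte Carlo idea, with a hand-rolled Halton point set and a made-up smooth integrand standing in for the map (x, y) → u(x, y, s(y)); nothing here comes from the slides:

```python
import numpy as np

# Estimating an expectation over [0,1]^2 with a low-discrepancy (Halton)
# point set versus plain Monte Carlo.
def halton(n, base):
    """First n points of the van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, r, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            r += f * (k % base)
            k //= base
        seq[i] = r
    return seq

def integrand(p):                      # smooth test integrand on [0,1]^2
    return np.sin(np.pi * p[:, 0]) * p[:, 1] ** 2
# Exact value of the integral: (2/pi) * (1/3)

n = 4096
qmc_pts = np.column_stack([halton(n, 2), halton(n, 3)])
mc_pts = np.random.default_rng(0).random((n, 2))

exact = (2.0 / np.pi) / 3.0
err_qmc = abs(integrand(qmc_pts).mean() - exact)
err_mc = abs(integrand(mc_pts).mean() - exact)
print(err_qmc, err_mc)   # the low-discrepancy estimate is typically closer
```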
Approximation by Gaussian RBF networks
Proposition 1
Let the assumptions of Theorem 1 hold for an odd integer m > maxi{di} + 1. Then
there exist K(m, di) > 0, i = 1, ..., n, and a positive constant C, which depends on
(s1^o, ..., sn^o), such that for every positive integer k there exists an n-tuple of strategies
(s̃1^k, ..., s̃n^k) such that

v(s1^o, ..., sn^o) − v(s̃1^k, ..., s̃n^k) ≤ C/√k,

where

s̃i^k(yi) = Prj_{Ai} ( Σ_{j=1}^k cij gij(yi) ), gij ∈ Gi,

Gi = { gi : Yi → R | gi(yi) = e^{−‖yi − ti‖²/δi}, ti ∈ R^{di}, δi > 0 },

Σ_{j=1}^k |cij| ≤ K(m, di) ‖λi‖_{L¹(R^{di})},

and λi ∈ L¹(R^{di}) is such that si^{o,ext,1} = G_{m−1} ∗ λi for a suitable extension si^{o,ext,1}
of si^o to R^{di}.

(Prj_{Ai}: projection onto Ai; G_{m−1}: Bessel potential of order m − 1)
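The approximating architecture can be sketched as follows; the target strategy, the placement of the centers, and the least-squares fit are illustrative stand-ins, not the constructive argument of the proposition:

```python
import numpy as np

# A Gaussian RBF network with k units followed by projection (clipping)
# onto the action interval A_i = [a_lo, a_hi], fit to a made-up smooth
# one-dimensional "strategy" by simple least squares.
rng = np.random.default_rng(0)
a_lo, a_hi = 0.0, 1.0                    # assumed action interval A_i
k = 12                                   # number of Gaussian units
delta = 0.02                             # common width parameter

def rbf_features(y, centers):
    return np.exp(-(y[:, None] - centers[None, :]) ** 2 / delta)

def target_strategy(y):                  # hypothetical optimal strategy s_i^o
    return 0.5 + 0.4 * np.sin(2 * np.pi * y)

y_train = np.linspace(0.0, 1.0, 200)
centers = np.linspace(0.0, 1.0, k)
Phi = rbf_features(y_train, centers)
coeffs, *_ = np.linalg.lstsq(Phi, target_strategy(y_train), rcond=None)

def s_tilde(y):                          # RBF network + projection onto A_i
    return np.clip(rbf_features(y, centers) @ coeffs, a_lo, a_hi)

y_test = rng.uniform(0.0, 1.0, 1000)
err = np.max(np.abs(s_tilde(y_test) - target_strategy(y_test)))
print(err)                               # small uniform error on [0, 1]
```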
(Idealized) algorithm for suboptimal strategies
Proof of Theorem 2: the operator

T : C(Y1, A1) × C(Y2, A2) → C(Y1, A1) × C(Y2, A2)

such that

T1(s1, s2)(y1) = argmax_{a1∈A1} E_{x,y2|y1} { u(x, y1, y2, a1, s2(y2)) } ∀y1 ∈ Y1,
T2(s1, s2)(y2) = argmax_{a2∈A2} E_{x,y1|y2} { u(x, y1, y2, T1(s1, s2)(y1), a2) } ∀y2 ∈ Y2

is a contraction operator.

Given any initial pair of smooth suboptimal strategies (s̃1^0, s̃2^0) and
the unique (and a-priori unknown) optimal pair (s1^o, s2^o), for every
positive integer M one has (for β1,2²/τ² < 1 and (s̃1^M, s̃2^M) = T^M(s̃1^0, s̃2^0))

max{ max_{y1∈Y1} |s1^o(y1) − s̃1^M(y1)|, max_{y2∈Y2} |s2^o(y2) − s̃2^M(y2)| }
≤ (β1,2²/τ²)^M max{ max_{y1∈Y1} |s1^o(y1) − s̃1^0(y1)|, max_{y2∈Y2} |s2^o(y2) − s̃2^0(y2)| }.

Convergence to the optimal pair of strategies: exponential in M.
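The iteration can be sketched for the two-division firm example, assuming (for simplicity) perfect observations y_i = x_i, independent uniform prices, and strategies tabulated on a grid; all parameter values are illustrative:

```python
import numpy as np

# Best-response iteration T = (T1, T2) for the two-division firm. With
# independent y_1, y_2, the conditional expectation of the other division's
# action reduces to a plain mean over its grid, and each argmax has the
# closed form a_i = (y_i - c12 * E[a_j]) / c_ii.
c11, c12, c22 = 2.0, 0.5, 2.0
grid = np.linspace(1.0, 2.0, 101)        # grid over Y_1 = Y_2 = [1, 2]
A = (0.0, 5.0)                           # action interval (optimum stays interior)

s1 = np.zeros_like(grid)                 # initial suboptimal strategies
s2 = np.zeros_like(grid)
for _ in range(50):
    s1 = np.clip((grid - c12 * s2.mean()) / c11, *A)   # T1: argmax in a1
    s2 = np.clip((grid - c12 * s1.mean()) / c22, *A)   # T2: argmax in a2
print(s1[0], s1[-1])   # fixed point: s1(y) = (y - 0.3)/2 under these assumptions
```

Since β1,2/τ = 0.5 here, the error contracts by a factor 0.25 per sweep and 50 iterations reach machine precision.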
Developments
Theorem 1 may be extended to dynamic team optimization problems
(several dynamic team optimization problems are equivalent to static
ones (Witsenhausen, 1988) ).
Proposition 1 may be extended to other neural network
approximation schemes.
The proof techniques may be applied to stochastic games to study
the smoothness of each player’s strategy at a Nash-equilibrium.
Thanks for the attention
Extra slides
Static teams versus dynamic teams
Static teams: the information of each DM is independent of the
actions of the other DMs.
Static teams were first investigated by (Marschak, 1955), (Radner,
1962), and (Marschak & Radner, 1972), who derived closed-form
solutions for some particular cases.
Dynamic teams: each DM’s information is affected by the actions of
the other members (Basar & Bansal, 1989).
(Witsenhausen, 1988) showed that many dynamic team optimization
problems can be reformulated in terms of equivalent static ones.
Why searching for suboptimal strategies?
Closed-form solutions to Problem STO can be derived only under
quite strong assumptions on the team utility function and on the
way in which each DM’s information is influenced by the state of the
world (and, in the case of dynamic teams, by the actions previously
taken by the other DMs).
Most results hold:
under the LQG hypotheses (linear information structure, concave
quadratic team utility function, and Gaussian random variables) and
(for dynamic teams) with partially nested information (i.e., each DM
can reconstruct all the information known to the DMs that affect its
own information).
If closed-form solutions are not available, one may be interested in
suboptimal strategies that are sufficiently accurate and easily
implementable.
Structural properties of the (unknown) optimal strategies can help in
narrowing the search for good suboptimal strategies.
Approximation by Gaussian RBF networks
Proposition 1
Let the assumptions of Theorem 1 hold for an odd integer m > maxi{di} + 1. Then
there exist K(m, di) > 0, i = 1, ..., n, and a positive constant C, which depends on
(s1^o, ..., sn^o), such that for every positive integer k there exists an n-tuple of strategies
(s̃1^k, ..., s̃n^k) such that

v(s1^o, ..., sn^o) − v(s̃1^k, ..., s̃n^k) ≤ C/√k,

where

s̃i^k(yi) = Prj_{Ai} ( Σ_{j=1}^k cij gij(yi) ), gij ∈ Gi,

Gi = { gi : Yi → R | gi(yi) = e^{−‖yi − ti‖²/δi}, ti ∈ R^{di}, δi > 0 },

Σ_{j=1}^k |cij| ≤ K(m, di) ‖λi‖_{L¹(R^{di})},

and λi ∈ L¹(R^{di}) is such that si^{o,ext,1} = G_{m−1} ∗ λi for a suitable extension si^{o,ext,1}
of si^o to R^{di}.

(Prj_{Ai}: projection onto Ai; G_{m−1}: Bessel potential of order m − 1)
Centralization & decentralization: a comparison
For Gaussian RBF networks with k computational units for each
DM: upper bounds on approximate optimization of order 1/√k, for a
degree of smoothness m in the problem formulation that grows
linearly in maxi{di}.
More decentralization −→ Lower degree of smoothness required to
apply Proposition 1.
More decentralization −→ (In general) smaller value of the team.
Example: store-and-forward packet-switching
telecommunication network
Model of store-and-forward packet-switching networks
C = (N, L): a directed graph with a set N of N nodes and a set L of links.
[Figure: directed graph of the network; the decision makers DM1-DM5 are located at distinct nodes.]
At each node i ∈ N : stochastic messages arrive from outside and
there is a queue in which messages are stored for the destination D.
P(i), S(i) : sets of upstream and downstream neighbors of node i.
The messages are transmitted and their flows are modified
synchronously at instants 0, 1, . . . , T − 1; pij (rounded off to a
multiple of the sample period): delay in transmitting, propagating
and processing a message.
Very large number of data packets =⇒ traffic flows described by
continuous variables.
−→ Routing problem.
Model of the packet-switching network, with link delays
and capacities
[Figure: example network with nodes 1-9 and destination D; each link is labeled with its delay and capacity.]
Application in network team optimization problems (I)
In network team optimization problems the team utility function u is
the sum of individual utility functions ui (Rantzer, 2008), each
dependent only on a small group of agents.
E.g.: n individual utility functions ui , each associated with the agent
i and dependent only on the actions of the agent i and its neighbors
Ni in the network.
−→ It may be easier to impose the interiority assumption made in
Theorem 1.
Classes of network team optimization problems to which Theorem 1
may apply: stochastic versions of the congestion, routing, and
bandwidth allocation problems considered in (Mansour, 2006):
stated in terms of smooth and strictly concave utility functions.
Application in network team optimization problems (II)
Given any n-tuple of strategies (s1, ..., sn), the n-tuple

ŝ1(y1) := argmax_{a1∈A1} E_{x,y2,...,yn|y1} { u(x, {yi}_{i=1}^n, a1, {si(yi)}_{i=2}^n) },
...
ŝn(yn) := argmax_{an∈An} E_{x,y1,...,yn−1|yn} { u(x, {yi}_{i=1}^n, {si(yi)}_{i=1}^{n−1}, an) },

is at least as good as (s1, ..., sn).
For network team optimization problems, these formulas have a
simpler form: for each i, only the individual utility functions of the
agents interacting with agent i have to be taken into account.
−→ Number of integration variables smaller than in the centralized
case.
Application in network team optimization problems (III)
Let T s := (ŝ1, ..., ŝn). Under suitable assumptions T is a
contraction operator and, for any initial smooth s^0 := (s1^0, ..., sn^0),
the sequence {s^j} s.t. s^{j+1} = T s^j converges to a (unique) solution
of Problem STO.
−→ rates of convergence.
For two agents, under the assumptions of Theorem 1, T is a
contraction operator if β1,2/τ < 1, where

β1,2 = max_{(a1,a2)∈A1×A2} |∂²u(x, y1, y2, a1, a2)/∂a1∂a2|

measures the interaction between the two agents.
Similar estimates can be obtained for more than two agents.
For network team optimization problems, most interaction terms βi,j
are equal to 0, so it is easier to obtain estimates than for general
team optimization problems.