Eliciting the Socially Optimal Allocation from Responsible Agents

Battal Doğan
University of Rochester
Motivation
Mechanism Design:
Agents have private information.
The designer tries to elicit that information.
Agents typically care only about their self-interest.
Motivation
We consider an allocation problem.
Some duties are allocated to some agents.
Example: The principal of a company allocates duties among experts in a department.
There is a socially optimal allocation.
One duty requires accounting skills ←→ one agent has high accounting skills.
One duty requires marketing skills ←→ one agent has high marketing skills.
Motivation
Agents have preferences over duties. (Not necessarily in line with the socially optimal allocation!)
There are responsible agents: they feel responsible for the implementation of the socially optimal allocation.
Main Result: If there are at least 3 responsible agents, the socially optimal allocation is implementable.
The Model
Agents: N = {1, 2, 3, . . . , n}
Duties: D = {d1, d2, d3, . . . , dn}
Socially optimal assignment: µᵒ : N → D.
      1    2    3    ...  n
µᵒ:   d4   d1   d2   ...  d3
Set of all possible assignments: M.
Preferences:
R̄i on D, a weak order (reflexive, transitive, complete).
Ri on M, induced by R̄i; may depend on µᵒ.
The Model
Two types of agents:
1. Irresponsible Agents: µ Ri µ′ iff µ(i) R̄i µ′(i).
       1      2      ...  i      ...  n
µ :    µ(1)   µ(2)   ...  µ(i)   ...  µ(n)
µ′:    µ′(1)  µ′(2)  ...  µ′(i)  ...  µ′(n)
Ri is the irresponsible extension of R̄i at µᵒ.
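As a quick illustration, the irresponsible extension can be sketched in Python. The dict encoding of assignments and the `rank` function below are assumptions made for this sketch, not part of the model:

```python
# A minimal sketch of the irresponsible extension, assuming assignments are
# encoded as dicts agent -> duty, and rank[d] is agent i's (lower-is-better)
# position of duty d in the weak order on D.

def irresponsible_R(i, mu, mu_p, rank):
    """mu R_i mu_p for an irresponsible agent i: only i's own duty matters."""
    return rank[mu[i]] <= rank[mu_p[i]]

# Example: agent 1 strictly prefers d1 to d2, so any assignment giving
# agent 1 duty d1 is weakly preferred to one giving agent 1 duty d2.
rank1 = {"d1": 0, "d2": 1}
mu    = {1: "d1", 2: "d2"}
mu_p  = {1: "d2", 2: "d1"}
print(irresponsible_R(1, mu, mu_p, rank1))   # True
print(irresponsible_R(1, mu_p, mu, rank1))   # False
```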
The Model
2. Responsible Agents:
µ is socially preferred to µ′ at µᵒ (µ ≻^{µᵒ} µ′) iff {agents socially optimally assigned in µ′} ⊊ {agents socially optimally assigned in µ}.
µ Ri µ′ iff µ(i) P̄i µ′(i) OR [µ(i) Īi µ′(i) and not µ′ ≻^{µᵒ} µ].
Ri is the responsible extension of R̄i at µᵒ.
The Model
Ri is not necessarily transitive.
µᵒ(i) = di for each i ∈ N.
       µ    µ′   µ′′
1:     d1   d1   d1
2:     d2   d4   d2
3:     d5   d3   d4
4:     d3   d5   d3
5:     d4   d2   d5
µ R1 µ′ R1 µ′′. However, µ′′ P1 µ.
The Model
A problem: (R, µᵒ) s.t. for each i ∈ N, there is a weak order R̄i on D s.t. Ri is a responsible or irresponsible extension of R̄i.
Set of all problems: P̄
Set of admissible problems: P ⊆ P̄
Set of all admissible problems with k responsible agents: P^k
Optimal Rule: Fᵒ(R, µᵒ) = µᵒ for each (R, µᵒ) ∈ P.
Game Form: Γ = (S, g)
g(NE(Γ, R, µᵒ)) = µᵒ for each (R, µᵒ) ∈ P.
Results
Necessary condition for Nash implementability:
For each (R, µ), (R′, µ′) ∈ P:
µ ≠ µ′ =⇒ ∃i ∈ N, µ′′ ∈ M : µ Ri µ′′ and µ′′ P′i µ.
Theorem: Fᵒ is Nash implementable if P = P^{>2}.
Proposition: Fᵒ is not Nash implementable if P^0 ⊆ P or P^1 ⊆ P or P^2 ⊆ P.
Fᵒ does not satisfy no-veto-power when there are at least 3 responsible agents (so standard sufficiency results based on no-veto-power do not directly apply).
Results
Game Form Γᵒ:
Agent          Generic Strategy
1, 2, 3        (µ^i, (k^i, l^i), t^i),   k^i ≠ i, l^i ≠ i
4, . . . , n   ((k^i, l^i), t^i),        k^i ≠ i, l^i ≠ i
m(s): the agent announcing the highest number t^i.
T^i(µ): the allocation obtained from µ by switching the duties of k^i and l^i.
g(s) = T^{m(s)}(µ)     if ∃{i, j} ⊂ {1, 2, 3} : µ^i = µ^j = µ,
g(s) = T^{m(s)}(µ^1)   otherwise.
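The outcome function g can be sketched in Python under illustrative encoding assumptions (strategies as tuples as in the table above, assignments as dicts; ties in the announced numbers t^i are broken arbitrarily here):

```python
# A minimal sketch of the outcome function g of the game form, assuming
# agents 1-3 play (mu_i, (k_i, l_i), t_i) and agents 4..n play ((k_i, l_i), t_i).

def swap(mu, k, l):
    """T^i(mu): the assignment obtained from mu by switching duties of k and l."""
    out = dict(mu)
    out[k], out[l] = mu[l], mu[k]
    return out

def outcome(s):
    """g(s) for a strategy profile s = {agent: strategy}."""
    # m(s): the agent announcing the highest number t_i.
    # Indexing from the end works for both strategy shapes.
    m = max(s, key=lambda i: s[i][-1])
    k_m, l_m = s[m][-2]
    # If two of agents 1, 2, 3 announce the same assignment mu, start from it;
    # otherwise start from agent 1's announcement.
    mus = [s[i][0] for i in (1, 2, 3)]
    for a in range(3):
        for b in range(a + 1, 3):
            if mus[a] == mus[b]:
                return swap(mus[a], k_m, l_m)
    return swap(mus[0], k_m, l_m)

# Usage: agents 1 and 2 announce the same assignment; agent 3 announces the
# highest t and names the pair (1, 2), so their duties get switched.
mu_o = {1: "d1", 2: "d2", 3: "d3"}
s = {1: (mu_o, (2, 3), 0),
     2: (mu_o, (1, 3), 0),
     3: ({1: "d2", 2: "d1", 3: "d3"}, (1, 2), 5)}
print(outcome(s))   # {1: 'd2', 2: 'd1', 3: 'd3'}
```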
Extensions
The number of duties exceeds the number of agents
=⇒ Results still hold.
Agents may receive more than one duty
=⇒ F o may not be Nash implementable even if all the
agents are responsible.
Results
Counterexample with n agents and n² duties, each agent receiving n duties:
µ and µ′ each assign n duties to every agent; e.g., under µ, agent i receives {d_i, d_{n+i}, . . . , d_{(n−1)n+i}}.
For each i ∈ N, R̄i on 2^D is such that µ(i) is top ranked.
Let Ri, R′i be the responsible extensions of R̄i w.r.t. µ and µ′, respectively.
Consider (R, µ), (R′, µ′).
Each agent can receive at most n − 1 duties, and all the agents are responsible =⇒ Fᵒ is Nash implementable.
Related Literature
Social choice models with honest agents:
“Role of honesty in full implementation.” Matsushima (2008), JET.
“Nash Implementation with Partially Honest Individuals.” Dutta and Sen (2011), working paper.
“Eliciting the Socially Optimal Rankings from Unfair Jurors.” Amorós (2009), JET.