Network Economics
Lecture 2: Incentives in online systems I: free riding and effort elicitation
Patrick Loiseau
EURECOM
Fall 2016
References
• Main:
– N. Nisan, T. Roughgarden, E. Tardos and V. Vazirani (Eds). “Algorithmic Game Theory”, CUP 2007. Chapter 23 (see also 27).
• Available online: http://www.cambridge.org/journals/nisan/downloads/Nisan_Nonprintable.pdf
• Additional:
– Yiling Chen and Arpita Ghosh, “Social Computing and User Generated Content,” EC’13 tutorial
• Slides at http://www.arpitaghosh.com/papers/ec13_tutorialSCUGC.pdf and http://yiling.seas.harvard.edu/wpcontent/uploads/SCUGC_tutorial_2013_Chen.pdf
– M. Chiang. “Networked Life, 20 Questions and Answers”, CUP 2012. Chapters 3-5.
• See the videos on www.coursera.org
Outline
1. Introduction
2. The P2P file sharing game
3. Free-riding and incentives for contribution
4. Hidden actions: the principal-agent model
Outline
1. Introduction
2. The P2P file sharing game
3. Free-riding and incentives for contribution
4. Hidden actions: the principal-agent model
Online systems
• Resources
– P2P systems
• Information
– Ratings
– Opinion polls
• Content (user-generated content)
– P2P systems
– Reviews
– Forums
– Wikipedia
• Labor (crowdsourcing)
– AMT (Amazon Mechanical Turk)
• In all these systems, there is a need for user contributions
P2P networks
• First ones: Napster (1999), Gnutella (2000)
– Free-riding problem
• Many users across the globe self-organizing to share files
– Anonymity
– One-shot interactions
→ Difficult to sustain collaboration
• Exacerbated by
– Hidden actions (non-detectable defection)
– Cheap pseudonyms (multiple identities easy)
Incentive mechanisms
• Good technology is not enough
• P2P networks need incentive mechanisms to incentivize users to contribute
– Reputation (KaZaA)
– Currency (called scrip)
– Barter (BitTorrent) – direct reciprocity
Extensions
• Other free-riding situations
– E.g., mobile ad-hoc networks, P2P storage
• Rich strategy space
– Share / not share
– Amount of resources committed
– Identity management
• Other applications of incentives / reputation systems
– Online shopping, forums, etc.
Outline
1. Introduction
2. The P2P file sharing game
3. Free-riding and incentives for contribution
4. Hidden actions: the principal-agent model
The P2P file-sharing game
• Peer
– Sometimes downloads → benefit
– Sometimes uploads → cost
• One interaction ~ prisoner’s dilemma

        C       D
   C   2, 2   -1, 3
   D   3, -1   0, 0
Prisoner’s dilemma

        C       D
   C   2, 2   -1, 3
   D   3, -1   0, 0

• Dominant strategy: D
• Socially optimal: (C,C)
• Single shot leads to (D,D)
– Socially undesirable
• Iterated prisoner’s dilemma
– Tit-for-tat yields a socially optimal outcome (see the simulation sketch below)
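A minimal simulation sketch of the iterated game with the payoff matrix above (the strategy names and the number of rounds are illustrative): two tit-for-tat peers sustain cooperation, whereas pairing with an always-defect peer collapses to the one-shot outcome.

```python
# Iterated prisoner's dilemma with the payoff matrix above.
# 'C' = cooperate (share), 'D' = defect (free-ride).

PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (-1, 3),
          ('D', 'C'): (3, -1), ('D', 'D'): (0, 0)}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not history else history[-1]

def always_defect(history):
    return 'D'

def play(strategy1, strategy2, rounds=100):
    h1, h2 = [], []          # opponent moves seen by player 1 / player 2
    total1 = total2 = 0
    for _ in range(rounds):
        m1, m2 = strategy1(h1), strategy2(h2)
        u1, u2 = PAYOFF[(m1, m2)]
        total1, total2 = total1 + u1, total2 + u2
        h1.append(m2)        # player 1 remembers player 2's move, and vice versa
        h2.append(m1)
    return total1, total2

print(play(tit_for_tat, tit_for_tat))     # (200, 200): mutual cooperation every round
print(play(tit_for_tat, always_defect))   # (-1, 3): TFT only loses the first round
print(play(always_defect, always_defect)) # (0, 0): mutual defection
```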
P2P
• Many users, random interactions (Feldman et al. 2004)
• Direct reciprocity does not scale
P2P
• Direct reciprocity
– Enforced by BitTorrent at the scale of one file, but not across several files
• Indirect reciprocity
– Reputation system
– Currency system
How to treat newcomers
• P2P has high turnover
• Peers often interact with strangers with no history
• TFT strategy playing C with newcomers
– Encourages newcomers
– BUT facilitates whitewashing
Outline
1. Introduction
2. The P2P file sharing game
3. Free-riding and incentives for contribution
4. Hidden actions: the principal-agent model
Reputation
• Long history of facilitating cooperation (e.g., eBay)
• In general coupled with service differentiation
– Good reputation = good service
– Bad reputation = bad service
• Ex: KaZaA
Trust
• EigenTrust (Sep Kamvar, Mario Schlosser, and Hector Garcia-Molina, 2003)
– Computes a global trust value for each peer based on the local trust values
• Used to limit malicious/inauthentic files
– Defense against pollution attacks
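The core idea behind EigenTrust is a power iteration on the matrix of normalized local trust values: the global trust vector is the stationary vector of that iteration. A minimal sketch, omitting the pre-trusted-peer damping used in the actual algorithm:

```python
import numpy as np

def eigentrust(local_trust, iterations=50):
    """Global trust via power iteration on normalized local trust values (sketch).

    local_trust[i][j] >= 0: how much peer i trusts peer j, e.g. satisfactory
    minus unsatisfactory transactions, clipped at 0.
    """
    S = np.asarray(local_trust, dtype=float)
    n = S.shape[0]
    C = np.empty_like(S)
    for i in range(n):
        s = S[i].sum()
        # Row-normalize; a peer with no positive experience trusts everyone equally here.
        C[i] = S[i] / s if s > 0 else np.full(n, 1.0 / n)
    t = np.full(n, 1.0 / n)            # start from the uniform trust vector
    for _ in range(iterations):
        t = C.T @ t                    # t <- C^T t (power iteration)
    return t

# Toy example: nobody reports good experiences with peer 2 (e.g., it uploads junk).
print(eigentrust([[0, 5, 0],
                  [4, 0, 0],
                  [1, 1, 0]]))        # peer 2 ends up with (near-)zero global trust
```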
Attacks against reputation systems
• Whitewashing
• Sybil attacks
• Collusion
• Dishonest feedback
• See next lecture…
• This lecture: how reputation helps in eliciting effort
A minimalist P2P model
• Large number of peers (players)
• Peer i has type θi (~ “generosity”)
• Action space: contribute or free-ride
• x: fraction of contributing peers
→ 1/x: cost of contributing
• Rational peer i:
– Contributes if θi > 1/x
– Free-rides otherwise
Contributions with no incentive mechanism
• Assume a uniform distribution of types
Contributions with no incentive mechanism (2)
• Stability of equilibria
Contributions with no incentive mechanism (3)
• Computation of equilibria
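A sketch of the computation left for the board, assuming types are uniform on [0, θm]: a peer contributes iff θi > 1/x, so an anticipated contribution level x induces the actual level
f(x) = P(θ > 1/x) = 1 − 1/(θm·x).
An interior equilibrium is a fixed point x = f(x), i.e.
θm·x² − θm·x + 1 = 0  ⇒  x = 1/2 ± √(1/4 − 1/θm),
which has real solutions only when θm ≥ 4. The larger root x1 = 1/2 + √(1/4 − 1/θm) is the highest (stable) equilibrium, the smaller root is unstable, and x = 0 (collapse) is always an equilibrium. This gives the result stated on the next slide.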
Contributions with no incentive mechanism (4)
• Result: The highest stable equilibrium contribution level x1 increases with θm; it converges to one as θm goes to infinity, but falls to zero if θm < 4
• Remark: if the distribution is not uniform, the graphical method still applies
Overall system performance
• W = ax − (1/x)·x = ax − 1
• Even if participation provides high benefits, the system may collapse
Reputation and service differentiation in P2P
• Consider a reputation system that can catch free-riders with probability p and exclude them
– Alternatively: catch all free-riders and give them service altered by (1 − p)
• Two effects
– Load reduced, hence cost reduced
– Penalty introduces a threat
Equilibrium with reputation
• Q: individual benefit
• R: reduced contribution
• T: threat
Equilibrium with reputation (2)
System performance with reputation
• W = x(Q − R) + (1 − x)(Q − T) = (ax − 1)(x + (1 − x)(1 − p))
• Trade-off: the penalty on free-riders increases x but entails a social cost
• If p > 1/a, the threat is larger than the cost
→ No free-riders, optimal system performance a − 1
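A small numerical sketch of this formula; the benefit parameter a and the contribution levels x plugged in are illustrative assumptions, and how the equilibrium x itself depends on p is the computation sketched on the previous slides.

```python
# System performance with reputation, per the formula on this slide:
#   W(x, p) = (a*x - 1) * (x + (1 - x)*(1 - p))

def w(x, p, a):
    return (a * x - 1) * (x + (1 - x) * (1 - p))

a = 5.0  # illustrative benefit parameter (assumption, not from the slides)

# For a fixed x < 1, a larger p punishes more free-riders and lowers W directly...
print([round(w(0.6, p, a), 2) for p in (0.0, 0.25, 0.5, 1.0)])   # 2.0, 1.8, 1.6, 1.2
# ...but in equilibrium a larger p also pushes x up; if p > 1/a every peer
# contributes (x = 1) and W = a - 1, the optimum quoted on the slide.
print(w(1.0, 0.3, a))   # -> 4.0 = a - 1
```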
FOX (Fair Optimal eXchange)
• Theoretical approach
• Assumes all peers are homogeneous, with capacity to serve k requests in parallel, and seek to minimize completion time
• FOX: distributed synchronized protocol achieving the optimum
– i.e., all peers can achieve the optimum if they comply
• “Grim trigger” strategy: each peer can collapse the system if it finds a deviating neighbor
FOX equilibrium
Outline
1. Introduction
2. The P2P file sharing game
3. Free-riding and incentives for contribution
4. Hidden actions: the principal-agent model
Hidden actions
• In P2P, many strategic actions are not directly observable
– Arrival/departure
– Message forwarding
• Same in many other contexts
– Packet forwarding in ad-hoc networks
– Workers’ effort
• Moral hazard: a situation in which a party is more willing to take a risk knowing that the cost will be borne (at least in part) by others
– E.g., insurance
Principal-agent model
• A principal employs a set of n agents: N = {1, …, n}
• Action set Ai = {0, 1}
• Cost c(0) = 0, c(1) = c > 0
• The actions of the agents determine (probabilistically) an outcome o in {0, 1}
• Principal’s valuation of success: v > 0 (no gain in case of failure)
• Technology (or success function) t(a1, …, an): probability of success
• Remark: many different models exist
– One agent, different action sets
– Etc.
Read-once networks
• One graph with 2 special nodes: source and sink
• Each agent controls 1 link
• Agent’s action:
– low effort → succeeds with probability γ in (0, 1/2)
– high effort → succeeds with probability 1 − γ in (1/2, 1)
• The project succeeds if there is a successful source-sink path
Example
• AND technology
• OR technology
(see the sketch below)
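A minimal sketch of the two success functions for a read-once network with n agents, assuming (as on the previous slide) that each link succeeds with probability γ under low effort and 1 − γ under high effort; the AND technology puts the links in series, the OR technology in parallel.

```python
from math import prod

def link_prob(ai, gamma):
    """Success probability of one link: gamma (low effort) or 1 - gamma (high effort)."""
    return 1 - gamma if ai == 1 else gamma

def t_and(a, gamma):
    """AND technology: links in series, the project needs every link to succeed."""
    return prod(link_prob(ai, gamma) for ai in a)

def t_or(a, gamma):
    """OR technology: links in parallel, one successful link is enough."""
    return 1 - prod(1 - link_prob(ai, gamma) for ai in a)

# Quick check with gamma = 1/4 and two agents:
print(t_and((1, 1), 0.25), t_and((1, 0), 0.25))  # 0.5625, 0.1875
print(t_or((0, 0), 0.25), t_or((1, 0), 0.25))    # 0.4375, 0.8125
```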
Contract
• The principal can design a “contract”
– Payment of pi ≥ 0 upon success
– Nothing upon failure
• The agents are in a game:
  ui(a) = pi·t(a) − c(ai)
• The principal wants to design a contract such that his expected profit is maximized:
  u(a, v) = t(a)·( v − Σ_{i∈N} pi )
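A minimal sketch of these two payoff functions in code, as a reference for the slides that follow; the example technology and all numeric values below are illustrative assumptions, not taken from the lecture.

```python
from itertools import product

# Contract payoffs from this slide:
#   agent i:    u_i(a) = p_i * t(a) - c(a_i), with c(1) = c and c(0) = 0
#   principal:  u(a, v) = t(a) * (v - sum_i p_i), payments made on success only

def agent_utility(i, a, p, t, c):
    return p[i] * t(a) - (c if a[i] == 1 else 0.0)

def principal_utility(a, p, t, v):
    return t(a) * (v - sum(p))

# Illustrative technology (an assumption, not from the slides): each unit of
# effort adds 0.3 to a 0.1 baseline success probability, capped at 1.
def t_example(a):
    return min(1.0, 0.1 + 0.3 * sum(a))

p = (0.5, 0.5)   # a candidate contract for n = 2 agents
for a in product((0, 1), repeat=2):
    print(a,
          round(principal_utility(a, p, t_example, v=3.0), 3),
          [round(agent_utility(i, a, p, t_example, c=0.2), 3) for i in range(2)])
```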
Definitions and assumptions
• Assumptions:
– t(1, a−i) > t(0, a−i) for all a−i
– t(a) > 0 for all a
• Definition: the marginal contribution of agent i given a−i is
  Δi(a−i) = t(1, a−i) − t(0, a−i)
• Increase in success probability due to i’s effort
Individual best response
• Given a−i, agent i’s best strategy is
– ai = 1 if pi ≥ c/Δi(a−i)
– ai = 0 if pi ≤ c/Δi(a−i)
Best contract inducing a
• The best contract for the principal that induces a as an equilibrium consists in
– pi = 0 for the agents choosing ai = 0
– pi = c/Δi(a−i) for the agents choosing ai = 1
Best contract inducing a (2)
• With this best contract, expected utilities are
– ui = 0 for the agents choosing ai = 0
– ui = c·( t(1, a−i)/Δi(a−i) − 1 ) for the agents choosing ai = 1
– u(a, v) = t(a)·( v − Σ_{i: ai=1} c/Δi(a−i) ) for the principal
Principal’s objective
• Choose the action profile a* that maximizes his utility u(a, v)
• Equivalent to choosing the set S* of agents with ai = 1
• Depends on v → S*(v)
• We say that the principal contracts with i if ai = 1
Hidden vs. observable actions
• Hidden actions:
– ui = c·( t(1, a−i)/Δi(a−i) − 1 ) if ai = 1, and 0 otherwise
– u(a, v) = t(a)·( v − Σ_{i: ai=1} c/Δi(a−i) )
• If actions were observable:
– Give pi = c to high-effort agents regardless of success
– This yields for the principal a utility equal to the social welfare:
  u(a, v) = t(a)·v − Σ_{i: ai=1} c
→ Choose a to maximize social welfare
(POU) Price of Unaccountability
• S*(v): optimal contract in the hidden-actions case
• S0*(v): optimal contract in the observable-actions case
• Definition: the POU(t) of a technology t is defined as the worst-case ratio over v of the principal’s utility in the observable and hidden actions cases:

  POU(t) = sup_{v>0} [ t(S0*(v))·v − Σ_{i∈S0*(v)} c ] / [ t(S*(v))·( v − Σ_{i∈S*(v)} c / ( t(S*(v)) − t(S*(v)\{i}) ) ) ]
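To see what this definition computes, here is a brute-force sketch: for each v on a grid, enumerate the possible sets of contracted agents, take the best contract in the hidden and observable cases, and keep the worst ratio. The AND technology with γ = 1/4, n = 2, c = 1 is used purely as an illustration, and a finite grid only approximates the supremum.

```python
from itertools import combinations
from math import prod

GAMMA, N, C = 0.25, 2, 1.0

def t_and(S):
    """Success probability when the agents in S exert high effort (AND technology)."""
    return prod(1 - GAMMA if i in S else GAMMA for i in range(N))

def hidden_utility(S, v):
    """Hidden actions: pay c / Delta_i to each contracted agent, on success only."""
    return t_and(S) * (v - sum(C / (t_and(S) - t_and(S - {i})) for i in S))

def observable_utility(S, v):
    """Observable actions: pay c to each contracted agent regardless of success."""
    return t_and(S) * v - C * len(S)

def best(utility, v):
    subsets = [set(c) for k in range(N + 1) for c in combinations(range(N), k)]
    return max(utility(S, v) for S in subsets)

# Approximate the sup over v > 0 by a finite grid.
pou = max(best(observable_utility, v) / best(hidden_utility, v)
          for v in [0.5 + 0.25 * k for k in range(200)]
          if best(hidden_utility, v) > 0)
print(round(pou, 3))   # about 3.667 here; the worst case sits at the hidden-case transition in v
```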
Remark
• POU(t) > 1
Optimal contract
• We want to answer the questions:
– How to select the optimal contract (i.e., the optimal set of contracting agents)?
– How does it change with the principal’s valuation v?
Monotonicity
• The optimal contract weakly improves when v increases:
– For any technology, in both the hidden- and observable-actions cases, the expected utility of the principal, the success probability, and the expected payment of the optimal contract are all non-decreasing when v increases
Proof

Proof (2)
Consequences
• Anonymous technology: the success probability is symmetric in the players
• For technologies in which the success probability depends only on the number of contracted agents (e.g., AND, OR), the number of contracted agents is non-decreasing when v increases
Optimal contract for the AND technology
• Theorem: For any anonymous AND technology with γ = γi = 1 − δi for all i:
– There exists a finite valuation v* such that for any v < v*, it is optimal to contract with no agent, and for any v > v*, it is optimal to contract with all agents (for v = v*, both contracts are optimal)
– The price of unaccountability is
  POU = (1/γ − 1)^(n−1) + (1 − γ/(1−γ))
Remarks
• Proof in M. Babaioff, M. Feldman and N. Nisan, “Combinatorial Agency”, in Proceedings of EC 2006.
• POU is not bounded!
– Monitoring can be beneficial, even if costly
Example
• n = 2, c = 1, γ = 1/4
• Compute, for all numbers of contracted agents (a worked sketch follows):
– t
– Δ
– Utility of the principal
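One way to carry out this exercise for the AND technology in code (a sketch; the same values follow by hand from the formulas above):

```python
from math import prod

GAMMA, N, C = 0.25, 2, 1.0

def t_and(S):
    """Success probability when the agents in S exert high effort (series links)."""
    return prod(1 - GAMMA if i in S else GAMMA for i in range(N))

for k in range(N + 1):
    S = set(range(k))                     # by symmetry, only |S| matters
    t = t_and(S)
    # Marginal contribution of a contracted agent: Delta_i = t(S) - t(S \ {i})
    delta = t - t_and(S - {0}) if k else None
    payment = C / delta if k else 0.0     # p_i = c / Delta_i for contracted agents
    print(f"|S|={k}: t={t:.4f}, Delta={delta}, "
          f"principal utility = {t:.4f} * (v - {k * payment:.3f})")
```

Comparing the three resulting expressions in v shows the structure of the theorem: with these numbers the principal contracts with nobody for v below 6 and with both agents above it, and contracting with a single agent is never strictly optimal.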
Optimal contract for the OR technology
• Theorem: For any anonymous OR technology with γ = γi = 1 − δi for all i:
– There exist finite positive values v1, …, vn such that for any v in (vk, vk+1), it is optimal to contract with k agents (for v < v1, it is optimal to contract with 0 agents; for v > vn, it is optimal to contract with n agents; and for v = vk, the principal is indifferent between contracting with k−1 or k agents)
– The price of unaccountability is upper bounded by 5/2
Example
• n = 2, c = 1, γ = 1/4
• Compute, for all numbers of contracted agents (a worked sketch follows):
– t
– Δ
– Utility of the principal
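The same sketch adapted to the OR technology (parallel links); only the success function changes:

```python
from math import prod

GAMMA, N, C = 0.25, 2, 1.0

def t_or(S):
    """Success probability with parallel links: one successful link is enough."""
    return 1 - prod(GAMMA if i in S else 1 - GAMMA for i in range(N))

for k in range(N + 1):
    S = set(range(k))
    t = t_or(S)
    delta = t - t_or(S - {0}) if k else None
    payment = C / delta if k else 0.0
    print(f"|S|={k}: t={t:.4f}, Delta={delta}, "
          f"principal utility = {t:.4f} * (v - {k * payment:.3f})")
```

In contrast with the AND case, these numbers give an intermediate range of v in which contracting with exactly one agent is optimal, as the OR theorem predicts.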
Illustration
• Number of contracted agents as a function of γ and v
[Plot residue removed: two panels showing the regions of the (γ, v) plane in which different numbers of agents (1, 2, 3) are contracted.]