
Query Incentive Networks
Jon Kleinberg and Prabhakar Raghavan
- Presented by: Nishith Pathak
Motivation
- Understanding networks of interacting agents as economic systems
- Users pose queries and offer incentives for answers
- The queries and incentives are propagated through the network
- Vetting – nodes along the path validate the relationship between the endpoints
- The process can be formulated as a game played by the nodes in the network
- This game has a Nash equilibrium
Motivation
- When users seek information without incentives, the critical behavior is at branching parameter 1
- When users seek information with incentives, the critical behavior is at branching parameter 2
- Between branching parameters 1 and 2, the answer is within reach, but the incentive required to retrieve it is too high (it grows rapidly with the rarity of the answer)
Formulating a Model
- An infinite d-ary tree structure T is assumed
- With each step, the offered incentive diminishes
- The strategy set of every node is the set of functions that decide the split between the node's own payoff and the reward offered to its child nodes
- Parameters:
  - q: the probability that a node is active, given that its parent is active
  - b = qd: the branching factor (mean number of offspring)
- Based on q, only a subtree T' of T will be active
- If b < 1, then T' is almost surely finite
- If b > 1, then T' is infinite with probability 1 - εq,d > 0, i.e., with positive probability (a small simulation sketch follows this slide)
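The dichotomy between b < 1 and b > 1 can be checked numerically. Below is a minimal simulation sketch (not from the paper; the parameter values d = 3, q = 0.25 / 0.45 and the names survives, survival_rate, frontier_cap are my own illustrative choices) that grows the active subtree T' level by level and estimates how often it survives.

```python
import random

def survives(d, q, max_depth=30, frontier_cap=1000):
    """Simulate the active subtree T' level by level.

    Returns True if active nodes remain after max_depth levels (or once the
    frontier exceeds frontier_cap, at which point extinction is essentially
    impossible), False if T' dies out first."""
    frontier = 1  # the root is always active
    for _ in range(max_depth):
        # Each of the d children of every active node is active with prob. q,
        # so the next frontier size is Binomial(frontier * d, q).
        frontier = sum(1 for _ in range(frontier * d) if random.random() < q)
        if frontier == 0:
            return False
        if frontier > frontier_cap:
            return True
    return True

def survival_rate(d, q, trials=500):
    return sum(survives(d, q) for _ in range(trials)) / trials

if __name__ == "__main__":
    d = 3
    for q in (0.25, 0.45):  # branching factor b = q*d: 0.75 vs. 1.35
        print(f"b = {q * d:.2f}: estimated P(T' is infinite) ~ {survival_rate(d, q):.3f}")
```

The estimate comes out essentially 0 in the subcritical case (b < 1) and clearly positive in the supercritical case (b > 1), matching the claim on the slide.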
Formulating a Model
- How much utility r* is required by the root node v* in order to achieve a probability s of obtaining an answer from the network?
- The required utility r* depends on the probability 1 - p that a node has the answer
- Value on effort:
  - 1 out of every n nodes has the answer (the rarity of the answer is n), where n = (1 - p)^(-1)
  - Rewards are dealt in integers only, to prevent degenerate cases
  - Every node on the path to the answer has to receive a minimum reward of 1 unit of utility
  - This is incorporated in the model by placing a value on the communication effort of the node
  - This minimum utility of 1 does not count towards the payoff
- Three-step process (a simulation sketch follows this slide):
  - The query is propagated outwards from the root
  - The identities of the nodes with the answer are propagated back to the root
  - The root establishes communication with one of these nodes and receives the answer from it
  - In the third step, all nodes along the path, as well as the node with the answer, receive their rewards
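To make the three-step process concrete, here is a minimal simulation sketch. It assumes a fixed, non-strategic rule of my own choosing (each node keeps 1 unit for its effort and offers the remainder to its children); the equilibrium strategies analyzed on the next slides are more subtle, so this only illustrates the mechanics of propagation, the rarity n, and the per-node cost of 1.

```python
import random

def query_succeeds(r, d, q, n, rng):
    """One run of the query process under a fixed 'offer one less than you
    received' rule -- an illustrative assumption, not the equilibrium strategy."""
    offer = r - 1      # reward the root offers each of its children
    frontier = 1       # number of nodes currently forwarding the query
    while offer >= 1 and frontier > 0:
        # Step 1: each child of a forwarding node is active with probability q.
        active_children = sum(1 for _ in range(frontier * d) if rng.random() < q)
        # An active node holds the answer with probability 1/n (rarity n).
        if any(rng.random() < 1.0 / n for _ in range(active_children)):
            # Steps 2-3: the holder's identity travels back to the root, which
            # pays the minimum 1 unit of utility to every node on the path.
            return True
        frontier = active_children
        offer -= 1     # each further hop costs at least 1 more unit of reward
    return False

def success_rate(r, d=3, q=0.45, n=50, trials=2000):
    rng = random.Random(0)
    return sum(query_succeeds(r, d, q, n, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    for r in (2, 4, 8, 12):
        print(f"root utility r = {r:2d}: empirical success probability ~ {success_rate(r):.3f}")
```

Raising the root's utility r lets the query penetrate more levels, so the empirical success probability increases with r.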
Nash Equilibrium
- av(f,x): the probability that the subtree below v possesses the answer, given that v offers reward x and v itself does not have the answer
- bv(f,x) = 1 - av(f,x)
- bv(f,x) = Π over children w of v of [1 - q(1 - p·bw(f, fw(x)))]
- Pay-off for node v = c1 + c2·(r - x - 1)·av(f,x), where c1 and c2 do not depend on v's choice of x
  - r is the reward offered to v
  - x is the reward v offers to its children
- g is a Nash equilibrium strategy if each gv in g maximizes the payoff for node v, for all nodes v (Theorem 2.1)
- gv is the same for all nodes, i.e., all nodes play the same strategy at the Nash equilibrium (a numerical sketch of this symmetric strategy follows this slide)
- If p generalizes q, then the Nash equilibrium is unique (Theorem 2.2)
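Because rewards are integers and a node offered y only ever passes on a strictly smaller integer, the symmetric strategy can be computed bottom-up over reward values. The sketch below is my own reconstruction from the recurrence on this slide (the names symmetric_equilibrium, D_ARY, Q, N and the parameter values are assumptions, not the paper's): b[y] is the failure probability bv of the subtree below a node that offers y, and g[y] is the best response of a node that was offered y.

```python
# Illustrative parameters (my choices, not from the paper): a d-ary tree with
# activity probability q and answer rarity n, so p = 1 - 1/n.
D_ARY, Q, N = 3, 0.45, 50
P = 1.0 - 1.0 / N

def symmetric_equilibrium(max_offer):
    """Dynamic program over integer offers 0..max_offer.

    b[y] = failure probability of the subtree strictly below a node that
           offers y to each child (given the node itself lacks the answer).
    g[y] = reward a node that was offered y passes on to its children
           (its best response when everyone below plays the same strategy)."""
    b = {0: 1.0}   # an offer below 1 is never forwarded, so the subtree fails
    g = {0: 0}
    for y in range(1, max_offer + 1):
        # Best response: keep (y - x - 1) for yourself and succeed with
        # probability a_v = 1 - b[x]; payoff (y - x - 1) * (1 - b[x]).
        g[y] = max(range(y), key=lambda x: (y - x - 1) * (1.0 - b[x]))
        # Recurrence from the slide: each of the d children is active w.p. q,
        # holds the answer w.p. 1 - p, and otherwise its own subtree fails
        # with probability b[g[y]].
        b[y] = (1.0 - Q * (1.0 - P * b[g[y]])) ** D_ARY
    return g, b

if __name__ == "__main__":
    g, b = symmetric_equilibrium(60)
    for x in (5, 10, 20, 40, 60):
        print(f"offer x = {x:2d}: a child passes on g(x) = {g[x]:2d}, "
              f"P(answer found below the offering node) = 1 - b(x) = {1.0 - b[x]:.3f}")
```

The printout shows how larger offers are partly passed down the tree and how the probability of finding the answer below the offering node grows with the offer.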
Breakpoint Structure of Rewards
- Rs(n,b): the minimum utility required by the root v* in order to obtain an answer with probability at least s
- Assume n > 1 and b > 1 are fixed
  - The set of possible values for s is partitioned into intervals
  - Rs(n,b) is constant within each interval but increases at a 'breakpoint' between two intervals
  - If we increase the utility r* at the root, nodes tend to push the reward deeper into the tree
  - However, a change in the minimum utility Rs(n,b) is observed only when this tendency to push propagates the query to an extra level of depth in the tree
- d(r): the number of levels the query would reach if the root had utility r, all nodes were active, and no node possessed the answer, i.e., the maximum possible level that a query can reach if the root has utility r
Breakpoint Structure of Rewards
- fj: as in the case of networks with no incentives, the probability that no node in the first j levels has the answer, given that the root does not
- We have bv*(g,r) = fd(r)
- uj is the minimum r for which d(r) ≥ j
- For a given initial utility r, the optimal reward the root v* can offer to its children in order to maximize its pay-off is of the form ui for some i
- The pay-off for a root having utility r and offering reward ui is li(r) = (r - ui - 1)(1 - fi)
- Suppose that for all r ≥ uj we have lj-1(r) > lj-2(r) > … > l1(r)
  - yj+1 is the point where lj intersects lj-1, and uj+1 = ⌊yj+1⌋ (the greatest integer not exceeding yj+1)
  - Then, for all r ≥ uj+1, lj(r) > lj-1(r) > … > l1(r)
- Define Δ'j = yj - uj-1 and Δj = uj - uj-1, the gaps between consecutive breakpoints; how these gaps grow determines the growth rate of the rewards analyzed on the next slides (a numerical sketch of the breakpoint recursion follows this slide)
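Here is a small numerical sketch of the breakpoint recursion (my own reconstruction, not code from the paper): it iterates fj, solves (y - uj - 1)(1 - fj) = (y - uj-1 - 1)(1 - fj-1) for the intersection point yj+1 of lj and lj-1, and rounds down per the slide's greatest_int. The parameter values and the seed breakpoints u1 = 2, u2 = 3 are illustrative assumptions.

```python
import math

# Illustrative parameters (assumptions, not from the paper).
D_ARY, Q, N = 3, 0.45, 50
P = 1.0 - 1.0 / N

def t(x):
    # One level of the no-incentive recurrence f_j = t(f_{j-1});
    # the exponent d accounts for the d independent child subtrees.
    return (1.0 - Q * (1.0 - P * x)) ** D_ARY

def breakpoints(levels, u1=2, u2=3):
    """Iterate the breakpoint recursion: given f_j, u_j and u_{j-1}, find the
    intersection y_{j+1} of l_j(r) = (r - u_j - 1)(1 - f_j) with l_{j-1}(r)
    and round it down to get u_{j+1}.  The seeds u1, u2 are illustrative."""
    f = [1.0]                     # f_0 = 1: no node in the first 0 levels has the answer
    for _ in range(levels):
        f.append(t(f[-1]))
    u = {1: u1, 2: u2}
    for j in range(2, levels):
        num = (u[j] + 1) * (1 - f[j]) - (u[j - 1] + 1) * (1 - f[j - 1])
        y_next = num / (f[j - 1] - f[j])   # f is strictly decreasing here
        u[j + 1] = math.floor(y_next)      # 'greatest_int' per the slide
    return f, u

if __name__ == "__main__":
    f, u = breakpoints(12)
    for j in sorted(u):
        print(f"j = {j:2d}: breakpoint u_j = {u[j]:4d}, "
              f"P(some node in the first j levels has the answer) = 1 - f_j = {1 - f[j]:.3f}")
```

For these parameters the gaps between consecutive breakpoints widen as j grows, which is the qualitative behavior the growth-rate slides quantify.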
Growth Rate of Rewards
- Let t(x) = (1 - q(1 - px))^d, with one factor per child subtree; then fj = t(fj-1) (a short derivation sketch follows)
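As a brief sketch of why the branching parameter controls how quickly fj falls (my reconstruction from the definitions above, not a slide from the deck): write εj = 1 - fj, use 1 - p = 1/n, and linearize t near x = 1, which is valid while εj-1 is small:

```latex
\begin{aligned}
t(1-\varepsilon) &= \bigl(1 - q(1-p) - qp\,\varepsilon\bigr)^{d}
  \approx 1 - dq(1-p) - dqp\,\varepsilon
  = 1 - \tfrac{b}{n} - pb\,\varepsilon,\\
\varepsilon_j &= 1 - t(1-\varepsilon_{j-1})
  \approx \tfrac{b}{n} + pb\,\varepsilon_{j-1}.
\end{aligned}
```

So, while fj is still close to 1, the hit probability 1 - fj grows by a factor of roughly pb per level; the two-segment arguments below track this growth separately for b < 2 and b > 2.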
Growth Rate of Rewards (b<2)
- Choose s0 < s and n large enough such that pb(1 - 2bds0) > 1
- Consider the sequence of fj values up to the point where it drops below 1 - s
- Take the first segment of the sequence of fj to be the set of indices j for which fj ≥ 1 - k0/n, where k0 > b/(2 - b)
- Take the second segment to be the set of indices j for which 1 - k0/n > fj ≥ 1 - s0
Growth Rate of Rewards (b<2)
Growth Rate of Rewards (b>2)
- Choose s0 < s and n large enough such that pb(1 - 2bds0) > 2
- Consider the sequence of fj values up to the point where it drops below 1 - s
- Take the first segment of the sequence of fj to be the set of indices j for which fj ≥ 1 - s0
- Take the second segment to be the set of indices j for which 1 - s0 > fj ≥ 1 - s (a numeric sketch of the two segments follows this slide)
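Here is a small numeric sketch of the two-segment decomposition in both regimes (an illustration only: the parameter values, the helper names f_sequence and segment_sizes, and the reading of the condition on s0 as pb(1 - 2·b·d·s0) are my assumptions): it iterates fj until it drops below 1 - s and counts how many indices land in each segment.

```python
def f_sequence(d, q, n, s):
    """Iterate f_j = t(f_{j-1}) with t(x) = (1 - q(1 - p x))^d and p = 1 - 1/n,
    stopping once f_j drops below 1 - s."""
    p = 1.0 - 1.0 / n
    f = [1.0]
    while f[-1] >= 1.0 - s:
        f.append((1.0 - q * (1.0 - p * f[-1])) ** d)
    return f

def segment_sizes(d, q, n, s, cut1, cut2):
    """Count the indices j in the first segment (f_j >= 1 - cut1) and in the
    second segment (1 - cut1 > f_j >= 1 - cut2)."""
    f = f_sequence(d, q, n, s)
    first = sum(1 for fj in f if fj >= 1.0 - cut1)
    second = sum(1 for fj in f if 1.0 - cut1 > fj >= 1.0 - cut2)
    return {"levels": len(f) - 1, "first": first, "second": second}

if __name__ == "__main__":
    n, s, s0 = 10_000, 0.5, 0.005    # s0 kept small so the slides' condition on s0 can hold
    # Regime b < 2: the first segment ends at f_j = 1 - k0/n with k0 > b/(2 - b).
    d, q = 3, 0.5                    # b = q*d = 1.5
    k0 = q * d / (2.0 - q * d) + 1.0 # any k0 > b/(2 - b) will do
    print("b = 1.5:", segment_sizes(d, q, n, s, cut1=k0 / n, cut2=s0))
    # Regime b > 2: the first segment ends at f_j = 1 - s0 instead.
    d, q = 6, 0.5                    # b = q*d = 3.0
    print("b = 3.0:", segment_sizes(d, q, n, s, cut1=s0, cut2=s))
```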
Growth Rate of Rewards (b>2)
Extensions and Future Directions
- Analysis of the neighborhood of b = 2
- Behavior of the lower bound when b approaches 1 from above
- Incorporating more complexity in the model:
  - More complex queries
  - Adding more factors such as response time
- Incentive queries in directed acyclic graphs and a model of competition