
Keep Your Promise: Mechanism Design against
Free-riding and False-reporting in Crowdsourcing
Xiang Zhang, Student Member, IEEE, Guoliang Xue, Fellow, IEEE, Ruozhou Yu, Student Member, IEEE,
Dejun Yang, Member, IEEE, and Jian Tang, Senior Member, IEEE
An important functionality of these websites is to empower platforms not only to offer markets for labor and resource redistribution, but also to help decide fair prices for all providers and requesters.
In general, a requester posts a task on one of these platforms and waits for some provider to work on it. To incentivize providers to offer their service, the requester needs to offer a certain reward. A provider receives the reward as compensation for its cost of completing the task.
Abstract—Crowdsourcing is an emerging paradigm where users
can have their tasks completed by paying fees, or receive rewards
for providing service. A critical problem that arises in current
crowdsourcing mechanisms is how to ensure that users pay or
receive what they deserve. Free-riding and false-reporting may
make the system vulnerable to dishonest users. In this paper, we
design schemes to tackle these problems, so that each individual
in the system is better off being honest and each provider prefers
completing the assigned task. We first design a mechanism EFF
which eliminates dishonest behavior with the help from a trusted
third party for arbitration. We then design another mechanism
DFF which, without the help from any third party, discourages
dishonest behavior. We prove that EFF eliminates free-riding and
false-reporting, while guaranteeing truthfulness, transaction-wise
budget-balance, and computational efficiency. We also prove that
DFF is semi-truthful, which discourages dishonest behavior such
as free-riding and false-reporting when the rest of the individuals
are honest, while guaranteeing transaction-wise budget-balance
and computational efficiency. Performance evaluation shows that
within our mechanisms, no user could have a utility gain by
unilaterally being dishonest.
B. Auction Theory
Auction is an efficient mechanism for trading markets, with its
advantage in discovering prices for buyers and sellers. Auctions
involving the interactions among multiple buyers and multiple
sellers are called double auctions [12]. When determining payments for buyers and sellers, a standing concern is that prices may be manipulated, leaving the market vulnerable to dishonest individuals. Therefore, several economic properties,
such as truthfulness, individual rationality, budget-balance, and
computational efficiency, are desired. Here are the definitions
of these properties:
• Truthfulness: An auction is truthful if no individual can
achieve a higher utility by reporting a value deviating from
its true valuation (or cost) regardless of the bids (or asks)
from other individuals.
• Individual Rationality: An auction is individually rational if no buyer or seller gets a negative utility by revealing its true valuation (or cost).
• Budget-balance: An auction is budget-balanced if the
auctioneer always makes a non-negative profit.
• Computational Efficiency: An auction is computationally
efficient if the whole auction process can be conducted
within polynomial time.
Among these properties, truthfulness is the crowning jewel
which incentivizes all participants to be honest during the
auction [5, 11, 19, 27, 34, 37]. However, this alone cannot
guarantee the robustness of the mechanism, since other dishonest behavior, such as free-riding and false-reporting (to be
defined later), may make the crowdsourcing system vulnerable.
Index Terms—Crowdsourcing, free-riding, false-reporting, game
theory, incentive mechanisms.
1. INTRODUCTION
A. Crowdsourcing
For the past few years, we have witnessed the proliferation of
crowdsourcing [16] as it becomes a booming online market
for labor and resource redistribution. One example is mobile
crowdsourcing [25, 34], which leverages a cloud computing
platform for recruiting mobile users to collect data (such as
photos, videos, mobile user activities, etc) for applications
in various domains such as environmental monitoring, social
networking, healthcare, transportation, etc. Several commercial
crowdsourcing websites, such as Yelp [1], Yahoo! Answers [2],
Amazon Mechanical Turk [3], and UBER [4], provide trading markets where some users (service requesters) can have
their tasks completed by paying fees, and some other users
(service providers) can receive rewards for providing service.
Zhang, Xue, and Yu are with Arizona State University, Tempe, AZ 85287.
Email: {xzhan229, xue, ruozhouy}@asu.edu. Yang is with Colorado School of
Mines, Golden, CO 80401. Email: [email protected]. Tang is with Syracuse
University, Syracuse, NY 13244. Email: [email protected]. This research was
supported in part by ARO grants W911NF-09-1-0467 and W911NF-13-1-0349
and NSF grants 1217611, 1218203, 1420881, 1421685, and 1444059. The
information reported here does not reflect the position or the policy of the
federal government. This is an extended and enhanced version of the paper [35]
that appeared in IEEE GLOBECOM 2014.
C. Free-riding and False-reporting in Crowdsourcing
Truthfulness of an auction prevents individuals from benefiting by lying about prices. However, this alone is not adequate.
The main contributions of this paper are the following:
• To the best of our knowledge, we are the first to address free-riding and false-reporting within each single round.
• We design a mechanism EFF, which, with the help of arbitration, eliminates free-riding and false-reporting, and incentivizes providers to complete their assigned tasks. We also prove that EFF is truthful, transaction-wise budget-balanced, and computationally efficient.
• We design a mechanism DFF, which, without any arbitration, discourages individuals from being dishonest. We prove that DFF is semi-truthful, which means that no individual can obtain a higher utility by lying when all others are honest. We also prove that DFF is transaction-wise budget-balanced and computationally efficient, while incentivizing providers to finish their assigned tasks.
The remainder of this paper is organized as follows. In Section 2, we briefly review the current literature on crowdsourcing mechanisms, truthful auctions, and mechanisms avoiding free-riding and false-reporting. In Section 3, we describe the system model. We present and analyze EFF and DFF in Section 4 and Section 5, respectively. We present the performance evaluation in Section 6 and draw our conclusions in Section 7.
Free-riding and false-reporting can make the mechanism vulnerable to dishonest users.
If the payment is made before the provider starts to work,
a provider always has the incentive to take the payment and
devote no effort to complete the task, which is known as
“free-riding” [36]. If the payment is made after the provider
completes its work, the requester always has the incentive to
refuse the payment by lying about the status of this task, which
is known as “false-reporting” [36]. Thus, extra precautions
are necessary. Most recent works [10, 15, 36] are focused
on avoiding free-riding and false-reporting by building up
reputation systems or grading systems. These mechanisms may discourage such dishonest behavior overall, but they rely on the assumption that all individuals are patient and will stay in the system. In reality, however, some dishonest individuals may stay in the system only for a short period of time and reap large rewards by being dishonest. In addition, impatient users may leave the platform because they feel unsafe and cheated.
In this paper, we tackle free-riding and false-reporting with a game-theoretic approach, so that both behaviors are either eliminated or discouraged.
D. Game Theory Basics
Game theory [23] is a field of study where interactions among
different players are involved. Each player wants to maximize
its own utility, which depends on not only its own strategies, but
also other players’ strategies. We need the concept of dominant
strategy and Nash Equilibrium [24] as follows.
• Dominant Strategy: A strategy s dominates all other
strategies if the payoff to s is no less than the payoff to any
other strategy, regardless of the strategies chosen by other
players. s is a strongly dominant strategy if its payoff is
strictly larger than the payoff to any other strategy.
• Nash Equilibrium (NE): When each player has chosen a
strategy and no player can benefit by unilaterally changing
its strategy, the current set of strategies constitutes an NE.
Extensive-form game theory [23] is an important branch of game theory. An extensive-form game is a game in which players move in sequence, and each player chooses an action at each of its decision points. The corresponding equilibrium concept, the Sub-game Perfect Equilibrium (SPE), is a strategy profile that induces an NE in every sub-game.
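To make these concepts concrete, the following toy sketch (our own example; the actions and payoff numbers are made up and are not taken from this paper) computes an SPE by backward induction in a two-stage game:

```python
# Minimal backward-induction sketch for a two-stage extensive-form game.
# Payoffs are illustrative only.

# Stage 1: player A picks an action; Stage 2: player B observes it and replies.
# Leaves map (a_action, b_action) -> (payoff_A, payoff_B).
payoffs = {
    ("work", "pay"):       (2, 3),
    ("work", "withhold"):  (-1, 4),
    ("shirk", "pay"):      (3, -2),
    ("shirk", "withhold"): (0, 0),
}

def best_reply(a_action):
    """Player B maximizes its own (second) payoff given A's move."""
    return max(("pay", "withhold"), key=lambda b: payoffs[(a_action, b)][1])

def subgame_perfect_equilibrium():
    """Player A anticipates B's best reply and maximizes its own payoff."""
    replies = {a: best_reply(a) for a in ("work", "shirk")}
    a_star = max(replies, key=lambda a: payoffs[(a, replies[a])][0])
    return a_star, replies[a_star]

print(subgame_perfect_equilibrium())  # -> ('shirk', 'withhold') for these numbers
```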
2. RELATED WORK
Many incentive mechanisms have been proposed for crowdsourcing and mobile sensing [5, 6, 9, 11, 17, 20, 26, 27, 31,
32, 34, 37]. In 2012, Yang et al. [34] studied two models:
user-centric and platform-centric models. For the user-centric
model, an auction-based mechanism is presented, which is
computationally efficient, individually rational, profitable, and
truthful. For the platform-centric model, the Stackelberg game
is formulated and the unique Stackelberg Equilibrium is calculated to maximize the platform utility. In [9], DiPalantino
and Vojnovic proposed an all-pay auction, where each provider
may choose the job that it wants to work on. In [17], an online
question and answer forum was introduced, where each user
has a piece of information and can decide when to answer
the question. Singla and Krause in [27] proposed BP-UCB, which is an online truthful auction mechanism achieving near-optimal utilities for all providers. A crowdsourcing Bayesian auction is proposed in [5], where the system is aware of the distribution of the valuations of all users. With this assumption, two techniques were applied to design a Bayesian optimal auction with high system performance.
an online truthful auction for crowdsourcing with a constant
approximation ratio with respect to the platform utility. In [11],
Feng et al. proposed a truthful task-allocation auction that optimizes the total utility of all smartphone users in the offline model, and also introduced an online model that achieves a constant approximation ratio of 1/2.
Many other truthful auctions also fit for crowdsourcing and
mobile sensing. There are two basic truthful double auctions,
VCG [7, 14, 28] and McAfee [21], from which most of the
current truthful auctions are derived.
E. Summary of Contributions
In this paper, we design two novel auction-based mechanisms,
EFF and DFF, to avoid free-riding and false-reporting, while
incentivizing providers to complete their assigned tasks. These
mechanisms are based on any existing truthful double auction
scheme for winner selection and pricing. Each auction winner is
required to submit a warranty first, then submit a report on the
status of the corresponding task. Based on these reports from
providers and requesters, the final payments are determined by
the platform.
Throughout this paper, we assume that M is truthful, individually rational, transaction-wise budget-balanced, and computationally efficient. All mechanisms in [8, 18, 29, 31, 33] satisfy these properties. In particular, TASC [33] is used in our implementation.
For each (Ri , Pj ) ∈ W, the platform collects warranties
from both Ri and Pj . We model the post-auction process as
an extensive-form game between Pj and Ri . In this game,
Pj first decides the effort towards completing Ti ; then Ri
and Pj evaluate the result and submit independent reports
on the status of Ti to the platform. We assume that Pj is
capable of completing Ti , hence the status of Ti depends on the
willingness of Pj . The reports are either C, which is short for
Completed, or I, which is short for Incomplete. Both Ri and
Pj know the status of Ti . However, the platform is not aware
of Ti ’s status. The platform decides the final payment for each
individual according to these reports. If Ri and Pj submit the
same report, the platform would believe that both of them are
telling the truth. Otherwise, the platform concludes that one of
them lies and consults for arbitration if available. We assume
that there are no ambiguities on the status of Ti .
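To make this reporting convention concrete, the following minimal sketch (our own illustration, not code from the paper; the function name and the arbitrate callback are hypothetical) resolves a pair of C/I reports, consulting an arbitrator only when they disagree:

```python
# 'C' = Completed, 'I' = Incomplete; arbitrate() stands in for the trusted third party.

def resolve_status(report_requester, report_provider, arbitrate=None):
    """Return (status, liar) where status is 'C' or 'I' and liar identifies a
    dishonest reporter ('requester', 'provider', or None)."""
    if report_requester == report_provider:
        # Matching reports are taken at face value; no arbitration is needed.
        return report_requester, None
    if arbitrate is None:
        raise ValueError("conflicting reports but no arbitrator available")
    true_status = arbitrate()            # ground truth from the third party
    liar = "requester" if report_requester != true_status else "provider"
    return true_status, liar

# Example: provider claims completion, requester denies it, arbitrator says 'I'.
print(resolve_status('I', 'C', arbitrate=lambda: 'I'))  # -> ('I', 'provider')
```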
Along this line, many mechanisms [8, 13, 18, 19, 22, 29, 33] have been proposed. While they are not specifically designed for crowdsourcing platforms, they can be applied to this scenario with minor modifications.
These auction schemes are based on the assumption that all
providers will complete the assigned tasks, and all requesters
are satisfied with the status of the tasks. This may not always be the case. Therefore, there are works focusing on how to make the mechanism robust against free-riding and false-reporting [10, 15, 36]. In these papers, free-riding and false-reporting are avoided by building reputation systems or grading systems. Such systems rely on the assumption that all individuals are patient and will stay in the system for a very long time. A user who stays only for a short period of time, however, may still gain extra benefit by free-riding or false-reporting. Impatient users may also feel cheated and turn to other platforms.
3. SYSTEM MODEL
In this section, we present the crowdsourcing model and the
corresponding double auction model, and define the utilities
for the platform, the providers, and the requesters.
There is a set of m requesters R = {R1, R2, ..., Rm}. Each Ri requires service to complete its task Ti, which has a private valuation vi > 0 to Ri if Ti is completed. Ti is
tagged with a bid bi by Ri , which is the maximum reward
that Ri is willing to pay for the completion of Ti . Note that
bi is not necessarily equal to vi . We define the bid vector b =
(b1 , b2 , ..., bm ).
There is a set of n providers P = {P1 , P2 , ..., Pn }. Each
provider Pj has a cost cij > 0 to complete Ti , where cij is set to
+∞ if Pj is unable to complete Ti . Pj would post an ask aij for
Ti , which is the minimum amount of reward that Pj demands
for completing Ti . Note that aij is not necessarily equal to cij .
We define the cost vector $c_j = (c_j^1, c_j^2, \ldots, c_j^m)$, the ask vector $a_j = (a_j^1, a_j^2, \ldots, a_j^m)$, and the ask matrix $A = (a_1; a_2; \ldots; a_n)$.
We assume that providers and requesters are non-colluding.
By applying a sealed-bid single-round double auction where
requesters are buyers and providers are sellers, we get a set W
of winning requester-provider pairs, such that (Ri , Pj ) ∈ W
if and only if Pj is assigned to complete Ti . Each provider is
assigned to at most one task, while no more than one provider
works on each task. We define the function δ(·) : P → R such that δ(j) = i if and only if (Ri, Pj) ∈ W, and δ(j) = 0 if Pj is not assigned to any task. As a result of the auction, Ri will be charged βi, and Pj will receive $\alpha_j^{\delta(j)}$. As a technical convention, we define βi = 0 if Ri is not a winning requester, and αji = 0 if (Ri, Pj) ∉ W. We define β = (β1, β2, ..., βm), αj = (αj1, αj2, ..., αjm), and α = (α1; α2; ...; αn). The platform (auctioneer) makes a profit of $\sum_{(R_i,P_j)\in W} (\beta_i - \alpha_j^i)$ from the auction.
The auction mechanism, denoted as M, takes the bid vector b and the ask matrix A as input. M outputs the winning requester-provider pair set W and the payments α and β. Thus, we have (W, β, α) ← M(b, A).
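For concreteness, the sketch below fixes one possible programming interface for M (all names, and the naive pay-your-bid/pay-your-ask pricing, are our own illustration only; a real instantiation such as TASC would replace the body while keeping the signature, and unlike this toy stand-in it is truthful):

```python
# Illustrative interface for the double auction M: (W, beta, alpha) <- M(b, A).
# b[i] is requester R_i's bid; A[j][i] is provider P_j's ask for task T_i.
# The greedy body below is a stand-in for demonstration only -- it does NOT
# inherit the truthfulness of a real mechanism such as TASC.

def run_auction(b, A):
    W, beta, alpha = [], {}, {}
    taken = set()
    for i, bid in enumerate(b):
        # cheapest still-free provider whose ask for T_i does not exceed the bid
        candidates = [(A[j][i], j) for j in range(len(A))
                      if j not in taken and A[j][i] <= bid]
        if not candidates:
            continue
        ask, j = min(candidates)
        W.append((i, j))
        taken.add(j)
        beta[i] = bid              # charge the requester its bid (toy pricing)
        alpha[(i, j)] = ask        # pay the provider its ask (toy pricing)
    return W, beta, alpha

# Example: two requesters, two providers.
print(run_auction([10.0, 8.0], [[6.0, 7.0], [9.0, 5.0]]))
```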
We call an auction mechanism M transaction-wise budget-balanced if βi ≥ αji for each (Ri, Pj) ∈ W. Note that transaction-wise budget-balance implies budget-balance, i.e., $\sum_{(R_i,P_j)\in W} (\beta_i - \alpha_j^i) \ge 0$.
The utility of Ri is defined as
$$U_i^R = x_i v_i - (w_i^R - \bar{\beta}_i), \qquad (3.1)$$
where xi is the indicator of the status of Ti: xi = 1 if Ti is completed and xi = 0 otherwise; wiR is the warranty that Ri pays to the platform, and β̄i is the final payment that Ri receives from the platform after the platform collects the reports.
The utility of Pj is defined as
$$U_j^P = (\bar{\alpha}_j^{\delta(j)} - w_j^P) - y_j^{\delta(j)} c_j^{\delta(j)}, \qquad (3.2)$$
where $y_j^{\delta(j)}$ is the indicator of Pj's devotion to Tδ(j): $y_j^{\delta(j)} = 1$ if Pj worked on Tδ(j) and $y_j^{\delta(j)} = 0$ otherwise; wjP is the warranty from Pj, and $\bar{\alpha}_j^{\delta(j)}$ is the final amount that Pj receives from the platform after the platform collects the reports.
The platform utility is defined as
$$U = \sum_{(R_i,P_j)\in W} \left[ (w_i^R - \bar{\beta}_i) + (w_j^P - \bar{\alpha}_j^{\delta(j)}) - z_i \tau_i \right], \qquad (3.3)$$
where zi is the indicator of arbitration on Ti: zi = 1 if the platform has consulted for arbitration on Ti and zi = 0 otherwise, and τi is the corresponding arbitration fee. We assume that τi is a value known by all providers, requesters, and the platform. For each (Ri, Pj) ∈ W, we define
$$U(R_i, P_j) = (w_i^R - \bar{\beta}_i) + (w_j^P - \bar{\alpha}_j^{\delta(j)}) - z_i \tau_i, \qquad (3.4)$$
which is the portion of the platform utility contributed by (Ri, Pj). Clearly, $U = \sum_{(R_i,P_j)\in W} U(R_i, P_j)$.
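These definitions translate directly into code. The following sketch (our own, with hypothetical argument names and arbitrary sample numbers) evaluates equations (3.1), (3.2), and (3.4) for a single winning pair:

```python
# Utilities for one winning pair (R_i, P_j), following Eqs. (3.1)-(3.4).
# Argument names are ours; the values are supplied by the surrounding mechanism.

def requester_utility(x_i, v_i, w_R, beta_bar):
    # Eq. (3.1): U_i^R = x_i * v_i - (w_i^R - beta_bar_i)
    return x_i * v_i - (w_R - beta_bar)

def provider_utility(y_ij, c_ij, w_P, alpha_bar):
    # Eq. (3.2): U_j^P = (alpha_bar_j - w_j^P) - y_j * c_j
    return (alpha_bar - w_P) - y_ij * c_ij

def platform_share(w_R, beta_bar, w_P, alpha_bar, z_i, tau_i):
    # Eq. (3.4): U(R_i, P_j) = (w_i^R - beta_bar_i) + (w_j^P - alpha_bar_j) - z_i * tau_i
    return (w_R - beta_bar) + (w_P - alpha_bar) - z_i * tau_i

# Sample numbers for a completed task (x = y = 1) with no arbitration (z = 0).
print(requester_utility(1, v_i=17.0, w_R=20.0, beta_bar=10.0),
      provider_utility(1, c_ij=4.0, w_P=10.8, alpha_bar=18.8),
      platform_share(w_R=20.0, beta_bar=10.0, w_P=10.8, alpha_bar=18.8,
                     z_i=0, tau_i=10.0))
# prints 7.0 4.0 2.0, i.e. v - beta, alpha - c, and beta - alpha for these numbers
```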
Remark 3.1: The utilities of providers, requesters, and the auctioneer in this paper are different from those in [35]. Another difference between this paper and [35] is that we model the post-auction process as an extensive-form game, in contrast to a simultaneous game.
The notations of this paper are summarized in Table 1.
TABLE 1
Notations

  Symbol             Meaning
  Ri, Pj             requester (buyer), provider (seller)
  R, P               the set of requesters, the set of providers
  Ti                 Ri's task
  vi                 valuation of Ti
  b / bi             bid vector / bid from Ri
  cj / cij           cost vector of Pj / cost of Pj to finish Ti
  A / aj / aij       ask matrix / ask vector of Pj / ask of Pj to finish Ti
  W                  set of winning requester-provider pairs
  δ(·)               assignment function from providers to requesters
  βi                 payment of Ri from the auction
  β                  payment vector for requesters from the auction
  αji                payment of Pj for Ti from the auction
  αj                 payment vector of Pj from the auction
  α                  payment matrix of providers from the auction
  M                  the auction mechanism
  UiR / UjP / U      utility of Ri / Pj / the platform
  xi                 indicator of Ti's status
  yj^δ(j)            indicator of Pj's devotion on Tδ(j)
  zi                 indicator of arbitration on Ti
  τi                 arbitration fee of Ti
  β̄i                 final payment that Ri receives from the platform
  ᾱj^δ(j)            final payment that Pj receives from the platform
4. EFF: ELIMINATING FREE-RIDING AND FALSE-REPORTING WITH ARBITRATION

In this section, we present a crowdsourcing mechanism EFF, which, with the help from a trusted third party for arbitration, eliminates free-riding and false-reporting and incentivizes each provider to complete the assigned task.
A. Description of EFF
In EFF, a trustworthy third party is available, which can provide arbitration on the status of Ti for an arbitration fee τi > 0. The first part of EFF applies the auction M, which decides W, β, and α. After the auction, the platform collects a warranty wiR = βi + τi from Ri and a warranty wjP = θαji + τi from Pj for each (Ri, Pj) ∈ W, where θ > 0 is a system parameter. The term θαji helps incentivize Pj to complete Ti: if Pj has been assigned to work on Ti but does not complete it, Pj incurs a penalty of θαji.
After Ri and Pj submit their warranties, an extensive-form game is played. Pj decides whether or not to work on Ti. If Pj chooses to devote effort to Ti, Ti is completed; otherwise, Ti remains incomplete with no effort devoted by Pj. Then Ri and Pj submit their independent reports on the status of Ti. Gathering the reports from Ri and Pj, the platform decides the final payments β̄i and ᾱji for Ri and Pj, respectively. The formal description of EFF is presented as Algorithm 1.
With the payments β̄i and ᾱji from Algorithm 1, the utilities of Ri and Pj can be computed based on equations (3.1) and (3.2), respectively. These utility values are shown in Table 2. The first row indicates Pj's effort on Ti and the second row lists the reports from Pj. The first column lists the reports from Ri.
Algorithm 1: EFF
 1: (W, β, α) ← M(b, A);
 2: ∀(Ri, Pj) ∈ W, wiR ← τi + βi, wjP ← τi + θαji;
 3: Ri submits wiR and Pj submits wjP to the platform;
 4: β̄i ← wiR; ᾱji ← wjP;
 5: Pj decides whether or not to work on Ti, and devotes the corresponding effort;
 6: Ri and Pj submit independent reports on the status of Ti to the platform;
 7: if the reports are different then
 8:     The platform consults arbitration for the status of Ti, and pays τi to the arbitrator;
 9:     if Ri submits a false report then
10:         β̄i ← β̄i − τi;
11:     else
12:         ᾱji ← ᾱji − τi;
13:     end
        // The arbitration fee is taken from the liar
14: else
15:     The platform adopts the reports from Ri and Pj;
16: end
17: if the platform concludes that Ti is completed then
18:     β̄i ← β̄i − βi; ᾱji ← ᾱji + αji;
        // The auction payment is made
19: else
20:     ᾱji ← ᾱji − θαji;
        // θαji is taken from Pj for not completing Ti
21: end
22: The platform returns β̄i to Ri and ᾱji to Pj, respectively.
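For a single winning pair, the payment logic of Algorithm 1 can be summarized in a few lines. The sketch below is our paraphrase (function and variable names are ours), with the arbitrator modeled as an oracle that reveals the true status; the sample numbers at the end are arbitrary:

```python
# Final payments of EFF for one winning pair (our paraphrase of Algorithm 1).
# Reports are 'C' or 'I'; true_status is revealed by arbitration when reports differ.

def eff_payments(beta_i, alpha_ij, tau_i, theta, report_R, report_P, true_status):
    beta_bar = tau_i + beta_i             # warranty w_i^R held by the platform
    alpha_bar = tau_i + theta * alpha_ij  # warranty w_j^P held by the platform
    if report_R != report_P:
        # Arbitration: the liar loses the fee tau_i.
        if report_R != true_status:
            beta_bar -= tau_i
        else:
            alpha_bar -= tau_i
        status = true_status
    else:
        status = report_R                 # matching reports are trusted
    if status == 'C':
        beta_bar -= beta_i                # requester pays the auction price
        alpha_bar += alpha_ij             # provider receives the auction payment
    else:
        alpha_bar -= theta * alpha_ij     # penalty for an incomplete task
    return beta_bar, alpha_bar

# SPE path: task completed, both report C (cf. Table 2).
print(eff_payments(beta_i=10, alpha_ij=8, tau_i=10, theta=0.1,
                   report_R='C', report_P='C', true_status='C'))  # -> (10, 18.8)
```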
For each of the 2-tuples, the first element is the utility of Ri and the second element is the utility of Pj.
We pick two entries in Table 2 and explain how the corresponding utilities are derived. First, we explain the entry where
Ti is completed and both Ri and Pj submit C (marked in red
in Table 2). In Line 2 of Algorithm 1, we have wiR = τi + βi
and wjP = τi + θαji . In Line 4, we have β̄i = τi + βi and
ᾱji = τi +θαji . Because both Ri and Pj submit C, no arbitration
is consulted, and the platform concludes that Ti is completed.
Thus, β̄i = (τi + βi ) − βi = τi . Since Pj has devoted effort
on Ti and Ti is completed, we have yji = 1 and xi = 1. In
Line 18, we have ᾱji = (τi + θαji ) + αji . Based on equations
(3.1) and (3.2), Ri ’s utility is vi − βi and Pj ’s utility is αji − cij .
Next, we explain the entry where Ti is incomplete, Ri submits
I, and Pj submits C (marked in blue in Table 2). In Line 2,
we have wiR = τi + βi and wjP = τi + θαji . In Line 4, we
have β̄i = τi + βi and ᾱji = τi + θαji . Because Ri and Pj
submit different reports, an arbitration is consulted and the
arbitration result indicates that Pj submits the dishonest report.
In Line 12, we have ᾱji = (τi + θαji ) − τi = θαji . Because Ti
is incomplete, we have yji = 0, xi = 0. In Line 20, we have
TABLE 2
Utilities of Ri and Pj in EFF for (Ri, Pj) ∈ W. The entry for the unique SPE is marked by X.

             |              Ti completed                    |          Ti not completed
  Ri \ Pj    |      C                |        I             |      C           |      I
  C          | (vi − βi, αji − cij) X | (vi − βi, αji − cij − τi) | (−βi, αji)       | (−τi, −θαji)
  I          | (vi − βi − τi, αji − cij) | (vi, −θαji − cij)     | (0, −θαji − τi)  | (0, −θαji)
Suppose that Pj loses the auction by asking aj = cj . Its
utility is 0. Suppose that Pj changes its ask. If Pj still loses
the auction, its utility remains to be 0. If Pj wins the auction
and is assigned to complete Ti , Pj ’s new payment in the auction
αji is no larger than cij since M is truthful. If Pj completes Ti ,
according to Table 2, its utility is no larger than αji − cij ≤
aij − cij = 0. If Pj does not complete Ti , its utility is no more
than −θαji ≤ 0 since Ri would report I. Thus, Pj could not
benefit from being dishonest.
To sum up all cases, EFF eliminates free-riding and false-reporting, while guaranteeing truthfulness.
Remark 4.1: By Lemma 4.1, dishonest behavior is eliminated in EFF. According to the unique SPE of the extensive-form game in EFF, for each (Ri, Pj) ∈ W, Pj would finish Ti, which indicates that EFF incentivizes each winning provider to complete its assigned task.
Lemma 4.2: EFF is transaction-wise budget-balanced.
Proof: Note that by the transaction-wise budget-balance of
M, βi − αji ≥ 0 for each (Ri , Pj ) ∈ W. We prove this lemma
by the following case analysis:
• If Ri and Pj both submit C, the platform believes that Ti
is completed without consulting for arbitration. Thus, we
have β̄i = τi and ᾱji = τi + (1 + θ)αji . By equation (3.4),
we have U (Ri , Pj ) = βi − αji ≥ 0.
• If Ri and Pj both submit I, the platform believes that Ti
is incomplete without consulting for arbitration. Thus, we
have β̄i = τi + βi and ᾱji = τi . By equation (3.4), we
have U (Ri , Pj ) = θαji ≥ 0.
• If Ri submits C and Pj submits I when Ti is completed, the platform consults for arbitration, and the result shows that Pj lies in the report. Thus, we have β̄i = τi and ᾱji = (1 + θ)αji. By equation (3.4), we have U(Ri, Pj) = βi − αji ≥ 0.
• If Ri submits C and Pj submits I when Ti is incomplete,
the platform consults for arbitration, and the result shows
that Ri lies in the report. Thus, we have β̄i = βi and ᾱji =
τi . By equation (3.4), we have U (Ri , Pj ) = θαji ≥ 0.
• If Ri submits I and Pj submits C when Ti is completed,
the platform consults for arbitration, and the result shows
that Ri lies in the report. Thus, we have β̄i = 0 and ᾱji =
τi + (1 + θ)αji . By equation (3.4), we have U (Ri , Pj ) =
βi − αji ≥ 0.
• If Ri submits I and Pj submits C when Ti is incomplete,
the platform consults for arbitration, and the result shows
that Pj lies in the report. Thus, we have β̄i = τi + βi and
ᾱji = 0. By equation (3.4), we have U (Ri , Pj ) = θαji ≥ 0.
Thus, EFF is transaction-wise budget-balanced.
ᾱji = θαji − θαji = 0. Based on equations (3.1) and (3.2), Ri ’s
utility is 0 and Pj ’s utility is −θαji − τi .
According to Table 2, there is a unique SPE. If Pj completes
Ti , Pj would report C regardless of what Ri reports. Since
Pj reports C, Ri prefers reporting C to reporting I. Hence, if
Pj completes Ti , both Ri and Pj would report C. For similar
reason, if Pj does not complete Ti , both Ri and Pj would report
I. Comparing the utilities of Pj when completing Ti (which is
αji − cij > 0) and not completing Ti (which is −θαji < 0), Pj
prefers completing Ti . Thus, the unique SPE is reached when
Pj completes Ti , and both Ri and Pj submit C.
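This backward-induction argument can be checked mechanically from Table 2. The short script below is our own verification code; the sample numbers are chosen so that vi > βi ≥ αji > cij, with τi = 10 and θ = 0.1 as in the evaluation of Section 6:

```python
# Brute-force check of the backward-induction argument behind Table 2.
v, beta, alpha, c, tau, theta = 17.0, 10.0, 8.0, 4.0, 10.0, 0.1

def table2(completed, rep_R, rep_P):
    """Utilities (U_R, U_P) of R_i and P_j in EFF, as listed in Table 2."""
    if completed:
        return {('C', 'C'): (v - beta, alpha - c),
                ('C', 'I'): (v - beta, alpha - c - tau),
                ('I', 'C'): (v - beta - tau, alpha - c),
                ('I', 'I'): (v, -theta * alpha - c)}[(rep_R, rep_P)]
    return {('C', 'C'): (-beta, alpha),
            ('C', 'I'): (-tau, -theta * alpha),
            ('I', 'C'): (0.0, -theta * alpha - tau),
            ('I', 'I'): (0.0, -theta * alpha)}[(rep_R, rep_P)]

def reporting_equilibria(completed):
    """Pure Nash equilibria of the 2x2 reporting sub-game for a given status."""
    eq = []
    for r in 'CI':
        for p in 'CI':
            u_r, u_p = table2(completed, r, p)
            if (all(u_r >= table2(completed, rr, p)[0] for rr in 'CI') and
                    all(u_p >= table2(completed, r, pp)[1] for pp in 'CI')):
                eq.append((r, p))
    return eq

print(reporting_equilibria(True), reporting_equilibria(False))
# -> [('C', 'C')] [('I', 'I')]; since alpha - c > -theta * alpha, P_j prefers to
# complete T_i, so the unique SPE is: complete the task and both report C.
```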
B. Analysis of EFF
In this subsection, we prove several properties of EFF.
Theorem 1: EFF eliminates free-riding and false-reporting, while guaranteeing truthfulness, transaction-wise budget-balance, and computational efficiency.
To prove this theorem, we need to prove lemmas 4.1-4.3.
Recall that auction M is individually rational, transaction-wise
budget-balanced, computationally efficient, and truthful.
Lemma 4.1: EFF eliminates free-riding and false-reporting,
while guaranteeing truthfulness.
Proof: Suppose that (Ri , Pj ) is in W when Ri bids vi and
Pj bids cj . The unique SPE is achieved when Pj completes
Ti and both Ri and Pj submit C. According to Table 2 and
the individual rationality of M, Ri ’s utility is vi − βi ≥ 0
and Pj ’s utility is αji − cij ≥ 0. When Ri or Pj changes
its bid during the auction, due to the truthfulness of M, the
corresponding payment of Ri could not be lower than βi
and the corresponding payment of Pj could not be higher
than αji . Thus, neither Ri nor Pj has incentive to change its
bid. When Ri submits I for false-reporting, Pj still reports
C since it is Pj ’s dominant strategy when Ti is completed.
Ri ’s corresponding utility would be vi − βi − τi < vi − βi .
When Pj chooses not to complete Ti but submits C for free-riding, Ri still reports I since it is Ri's dominant strategy when Ti is incomplete. Pj's corresponding utility would be −θαji − τi < 0 ≤ αji − cij. Thus, neither Ri nor Pj could
benefit from being dishonest.
Suppose that Ri loses the auction by bidding bi = vi . Its
utility is 0. Suppose that Ri changes its bid. If Ri still loses
the auction, its utility remains to be 0. If Ri wins the auction,
its new payment in the auction βi is no smaller than vi since M
is truthful. If Ti is completed, according to Table 2, Ri ’s utility
is no more than vi − βi ≤ vi − bi = 0 since Pj would always
report C. If Ti is incomplete, Ri could not have a positive
utility. Thus, Ri could not benefit from being dishonest.
Lemma 4.3: EFF is computationally efficient.
Proof: Since M is computationally efficient, and report
submission, arbitration, and pricing all take constant time, EFF
is computationally efficient.
Lemmas 4.1-4.3 complete the proof for Theorem 1.
Remark 4.2: EFF is not individually rational. To incentivize each winning provider to complete its assigned task, we
impose a penalty θαji in Algorithm 1, Line 20. If Pj fails to
complete Ti but behaves honestly, its utility is −θαji < 0, which
violates individual rationality. However, this incentivizes Pj to
complete the task. This part differs from that in [35], where no
such penalty is imposed and individual rationality is achieved.
5. DFF: DISCOURAGING FREE-RIDING AND FALSE-REPORTING WITHOUT ARBITRATION

EFF successfully eliminates free-riding and false-reporting with the help from a trusted third party for arbitration. However, an arbitrator is not always available. In this section, we present another mechanism, DFF, which, without any arbitration, discourages free-riding and false-reporting, while still guaranteeing several economic properties.
Algorithm 2: DFF
 1: (W, β, α) ← M(b, A);
 2: ∀(Ri, Pj) ∈ W, wiR ← (1 + ζ)βi, wjP ← (ζ + η)αji;
 3: Ri submits wiR and Pj submits wjP to the platform;
 4: β̄i ← wiR; ᾱji ← wjP;
 5: Pj decides whether or not to work on Ti, and devotes the corresponding effort;
 6: Ri and Pj submit independent reports on the status of Ti to the platform;
 7: if both Ri and Pj submit C then
 8:     β̄i ← β̄i − βi; ᾱji ← ᾱji + αji;
 9: else if both Ri and Pj submit I then
10:     ᾱji ← ᾱji − ηαji;
11: else if Ri submits C and Pj submits I then
12:     β̄i ← β̄i − βi; ᾱji ← ᾱji − ηαji;
13: else
14:     β̄i ← β̄i − (1 + ζ)βi; ᾱji ← ᾱji − (ζ + η)αji;
15: end
16: The platform returns β̄i to Ri and ᾱji to Pj, respectively.
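For a single winning pair, the final-payment rules of Algorithm 2 reduce to a few cases. The sketch below is our paraphrase (names and sample numbers are ours); note that no arbitrator is involved, so only the two reports matter:

```python
# Final payments of DFF for one winning pair (our paraphrase of Algorithm 2).

def dff_payments(beta_i, alpha_ij, zeta, eta, report_R, report_P):
    beta_bar = (1 + zeta) * beta_i        # warranty w_i^R held by the platform
    alpha_bar = (zeta + eta) * alpha_ij   # warranty w_j^P held by the platform
    if report_R == 'C' and report_P == 'C':
        beta_bar -= beta_i                # requester pays the auction price
        alpha_bar += alpha_ij             # provider receives the auction payment
    elif report_R == 'I' and report_P == 'I':
        alpha_bar -= eta * alpha_ij       # provider is penalized for the incomplete task
    elif report_R == 'C' and report_P == 'I':
        beta_bar -= beta_i
        alpha_bar -= eta * alpha_ij
    else:                                 # R reports I while P claims C: both lose warranties
        beta_bar -= (1 + zeta) * beta_i
        alpha_bar -= (zeta + eta) * alpha_ij
    return beta_bar, alpha_bar

# Honest completed path: both report C.
print(dff_payments(beta_i=10, alpha_ij=8, zeta=0.6, eta=0.1,
                   report_R='C', report_P='C'))   # -> (6.0, 13.6)
```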
The notations in Table 3 have meanings similar to those in Table 2.
We pick two entries and explain how the corresponding
utilities are derived. First, we explain the entry where Ti is
completed and both Ri and Pj submit C (marked in red in
Table 3). In Line 2 of Algorithm 2, we have wiR = (1 + ζ)βi
and wjP = (ζ + η)αji . In Line 4, we have β̄i = (1 + ζ)βi and
ᾱji = (ζ + η)αji . Because both Ri and Pj submit C, in Line 8,
we have β̄i = (1 + ζ)βi − βi = ζβi and ᾱji = (ζ + η)αji + αji .
Since Pj has devoted effort on Ti and Ti is completed, we have
yji = 1 and xi = 1. Based on equations (3.1) and (3.2), Ri ’s
utility is vi −βi and Pj ’s utility is αji −cij . Next, we explain the
entry where Ti is incomplete, Ri submits I, and Pj submits C
(marked in blue in Table 3). In Line 2, we have wiR = (1+ζ)βi
and wjP = (ζ + η)αji . In Line 4, we have β̄i = (1 + ζ)βi and
ᾱji = (ζ + η)αji . Because Ri submits I and Pj submits C,
in Line 14, we have β̄i = (1 + ζ)βi − (1 + ζ)βi = 0 and
ᾱji = (ζ + η)αji − (ζ + η)αji = 0. Because Ti is incomplete, we
have yji = 0 and xi = 0. Based on equations (3.1) and (3.2),
Ri ’s utility is −βi − ζβi and Pj ’s utility is −ζαji − ηαji .
A. Description of DFF
As in EFF, the first part of DFF applies the auction M. For
each (Ri , Pj ) ∈ W, DFF collects a warranty wiR = (1 + ζ)βi
from Ri and a warranty wjP = (ζ + η)αji from Pj , where
ζ > 0 and η > 0 are system parameters. In DFF, we use ζ to
help discourage dishonest behavior, and η to help incentivize
providers to complete their assigned tasks.
After the auction, Ri and Pj pay their warranties to the
platform. We model the post-auction part of DFF as another
extensive-form game. Pj first decides whether or not to work on
Ti , and devotes the corresponding effort. Then both Ri and Pj
submit their independent reports on the status of Ti . With the
reports from Ri and Pj , the platform decides the final payments
β̄i and ᾱji for each of the following cases:
• When Ri and Pj both submit C, we have β̄i = ζβi and
ᾱji = (ζ + η)αji + αji . This means that Ri pays βi to the
platform and Pj receives αji from the platform.
• When Ri and Pj both submit I, we have β̄i = (1 + ζ)βi
and ᾱji = ζαji . This means that Ri pays nothing to the
platform and Pj pays ηαji to the platform.
• When Ri submits C and Pj submits I, we have β̄i = ζβi
and ᾱji = ζαji . This means that Ri pays βi to the platform
and Pj pays ηαji to the platform.
• When Ri submits I and Pj submits C, we have β̄i = 0
and ᾱji = 0. This means that Ri pays (1 + ζ)βi to the
platform and Pj pays (ζ + η)αji to the platform.
The formal description of DFF is presented as Algorithm 2.
Based on the final payments β̄i and ᾱji from Algorithm 2, the utilities of Ri and Pj can be computed based on equations (3.1) and (3.2), respectively. These utility values are shown in Table 3.
B. Analysis of DFF
In this subsection, we introduce the definition of semi-truthfulness and prove several properties of DFF.
Definition 5.1: A mechanism is semi-truthful if no individual can obtain a positive increment in utility by unilaterally behaving dishonestly while all others are honest.
Comparing the two definitions, semi-truthfulness relies on the stronger assumption that all other individuals are honest, whereas truthfulness makes no such assumption.
Theorem 2: DFF is semi-truthful, transaction-wise budget-balanced, and computationally efficient.
TABLE 3
Utilities of Ri and Pj in DFF for each (Ri, Pj) ∈ W. The entry for each equilibrium is marked by X.

             |              Ti completed                        |          Ti not completed
  Ri \ Pj    |      C                |        I                 |      C                  |      I
  C          | (vi − βi, αji − cij) X | (vi − βi, −cij − ηαji)    | (−βi, αji)              | (−βi, −ηαji)
  I          | (vi − βi − ζβi, −cij − ζαji − ηαji) | (vi, −cij − ηαji) | (−βi − ζβi, −ζαji − ηαji) | (0, −ηαji) X
To prove Theorem 2, we need the following three lemmas.
Lemma 5.1: DFF is semi-truthful.
Proof: Suppose that (Ri, Pj) is in W when Ri bids vi and Pj asks cj. If Pj completes Ti and no individual lies, Ri's utility is vi − βi ≥ 0 and Pj's utility is αji − cij ≥ 0. If Ri bids a value deviating from vi, it will not pay a lower payment, due to the truthfulness of M. If Ri lies by reporting I instead of C while Pj does not lie and reports C, Ri's utility would be vi − (1 + ζ)βi′ ≤ vi − (1 + ζ)βi ≤ vi − βi, where βi′ is the corresponding payment after Ri changes its bid. Thus, Ri has no incentive to lie when Ti is completed and Pj is honest. For Pj, by deviating its ask from cj, it will not receive a higher payment. If Pj lies by reporting I while Ri does not lie and reports C, Pj's corresponding utility would be −ηαji′ − cij ≤ αji′ − cij ≤ αji − cij, where αji′ is the corresponding payment of Pj after Pj changes its ask. Thus, Pj has no incentive to lie when Ti is completed and Ri is honest. If Pj chooses not to complete Ti and reports I honestly, its utility would be −ηαji′ < 0. If Pj reports C dishonestly, since Ri is honest and reports I, Pj's utility would be −(η + ζ)αji′ < 0. Thus, Pj would complete Ti. Therefore, if Ri or Pj knows that the other one is honest, it would be honest.
Suppose that Ri loses the auction by bidding bi = vi. Its original utility is 0. Suppose that Ri changes its bid. If it still loses the auction, its utility remains 0. If it wins the auction and Pj is assigned to Ti, Ri's payment in the auction βi is no smaller than vi since M is truthful. If Ti is completed, according to Table 3, Ri's utility is no larger than vi − βi ≤ vi − bi = 0 since Pj is honest and reports C. If Ti is incomplete, Ri could not have a positive utility. Thus, Ri could not benefit from unilaterally being dishonest.
Suppose that Pj loses the auction by asking aj = cj. Its original utility is 0. Suppose that Pj changes its ask. If it still loses the auction, its utility remains 0. If it wins the auction and is assigned to Ti, Pj's payment in the auction αji is no larger than cij since M is truthful. If Pj completes Ti, according to Table 3, its utility is no larger than αji − cij ≤ aij − cij = 0. If Pj does not complete Ti, its utility is no more than −ηαji ≤ 0 since Ri is honest and reports I. Thus, Pj could not benefit from unilaterally being dishonest.
To sum up all cases, DFF is semi-truthful.
Remark 5.1: With semi-truthfulness, we know that there is one equilibrium left in correspondence with each status of Ti. If Pj completes Ti, both Ri and Pj would report C. If Pj does not complete Ti, both Ri and Pj would report I. Comparing these two equilibria, Pj has a utility of αji − cij > 0 in the first equilibrium and a utility of −ηαji < 0 in the second one. Thus, Pj prefers to complete Ti, which makes the first equilibrium the unique SPE and encourages all winning providers to complete their assigned tasks, while discouraging free-riding and false-reporting. Note that in [35], the simultaneous game admits two equilibria, with no preference for Pj to complete Ti or not. However, by applying the extensive-form game instead of the simultaneous game, we obtain one unique SPE, which incentivizes all winning providers to complete the tasks and discourages dishonest behavior.
Lemma 5.2: DFF is transaction-wise budget-balanced.
Proof: Since M is transaction-wise budget-balanced, we
have βi ≥ αji for each (Ri , Pj ) ∈ W. If both Ri and Pj
submit C, the platform utility generated from this winning pair
is U (Ri , Pj ) = βi − αji ≥ 0. If Ri submits I and Pj submits
C, U (Ri , Pj ) = (1 + ζ)βi + (ζ + η)αji ≥ 0. If Ri submits C
and Pj submits I, U (Ri , Pj ) = βi + ηαji ≥ 0. If both of them
submit I, U (Ri , Pj ) = ηαji ≥ 0. Thus, DFF is transaction-wise
budget-balanced.
Lemma 5.3: DFF is computationally efficient.
The proof is the same as that of Lemma 4.3 since the two
mechanisms have the same time complexity.
Lemmas 5.1 − 5.3 complete the proof for Theorem 2.
Remark 5.2: Truthfulness is not guaranteed by DFF. For instance, when Ti is incomplete and Ri reports C, Pj may benefit by dishonestly reporting C instead of reporting I. Individual rationality is another economic property that DFF does not guarantee: when Ti is incomplete and Pj lies, Ri's utility is negative even if Ri reports honestly.
6. PERFORMANCE EVALUATION
To evaluate the performance of EFF and DFF, we implemented
both mechanisms, and carried out extensive testing on various
cases. In both EFF and DFF, we used TASC [33] as the auction
mechanism M. We chose Maximum Matching Algorithm [30]
as the bipartite matching algorithm used in TASC. The tests
were run on a PC with a 3.5 GHz CPU, and 16 GB memory.
Remark 6.1: TASC [33] is a double-auction-based incentive mechanism proposed for cooperative communication, where relay nodes offer relay services for rewards. TASC is truthful, individually rational, transaction-wise budget-balanced, and computationally efficient. TASC consists of two major steps. First, it applies a bid-independent bipartite matching algorithm to compute an assignment from the buyers to the sellers. Then it applies the McAfee [21] double auction to determine the final payments for all buyers and sellers.
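For readers unfamiliar with McAfee's double auction, the following is a standard textbook-style sketch of its trade-reduction pricing step (our simplification; TASC combines such a pricing step with the bid-independent matching described above, which is not shown here):

```python
# Textbook-style sketch of McAfee's double auction pricing (our simplification).
# bids: buyer bids; asks: seller asks. Returns (num_trades, buyer_price, seller_price).

def mcafee(bids, asks):
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    # k = number of efficient trades: largest k with bids[k-1] >= asks[k-1]
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k == 0:
        return 0, None, None
    if k < min(len(bids), len(asks)):
        p = (bids[k] + asks[k]) / 2.0      # candidate price from the (k+1)-th pair
        if asks[k - 1] <= p <= bids[k - 1]:
            return k, p, p                 # all k pairs trade at the single price p
    # Otherwise the k-th pair is excluded; buyers pay bids[k-1], sellers get asks[k-1].
    return k - 1, bids[k - 1], asks[k - 1]

print(mcafee([9, 7, 5, 2], [1, 3, 6, 8]))  # -> (2, 5.5, 5.5): two trades at price 5.5
```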
Performance Metrics: We first studied the impact of the number of users on the running time as a demonstration of computational efficiency, and compared the running time of EFF and DFF with that of TASC. We next studied the impact of the number of users and of user behavior on the platform utility. Finally, we studied the utilities of providers and requesters.
Since EFF and DFF have the same running time, we only show the running time of EFF. From Fig. 1, we observe that with the increase
of m, the running time increases quadratically when m < n,
then grows linearly when m > n. This is because the first part
of TASC is a bipartite matching algorithm [30], which takes
O(min{m, n}×|E|) time, where |E| is the number of edges in
the graph and is bounded by O(mn). Let l = min{m, n}. The
total time complexity is O(l|E| + l log l). Thus, in Fig. 1, when
m < n = 100, the running time is O(m^2); when m > n = 100,
the running time is O(m). Fig. 2 can be interpreted by a similar
approach. Comparing Fig. 1 and Fig. 2, we observe that the
impact from m and n are almost the same. Another observation
is that the running time of EFF is almost the same as that
of TASC, which implies that applying EFF or DFF does not
introduce a high computation cost to M.
A. Simulation Setup
For EFF, we set τi = 10 for all Ti and θ = 0.1. For DFF,
we set ζ = 0.6 and η = 0.1. Note that this setting is for
simplicity only. For properties of τi , θ, ζ, and η, see Section 4
and Section 5, respectively. Valuations, costs, bids, and asks
are uniformly distributed over [0, 20]. All results in Figs. 1-8 are averaged over 50,000 runs for each configuration.
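For reference, a setup of this kind can be generated in a few lines; the snippet below is our own illustration of the sampling just described (the seed and array names are ours, and it does not reproduce the exact experiments):

```python
import numpy as np

rng = np.random.default_rng(0)            # seed chosen by us for illustration
m, n = 100, 100                           # numbers of requesters and providers
LOW, HIGH = 0.0, 20.0

valuations = rng.uniform(LOW, HIGH, size=m)       # v_i
bids       = rng.uniform(LOW, HIGH, size=m)       # b_i (need not equal v_i)
costs      = rng.uniform(LOW, HIGH, size=(n, m))  # c_j^i
asks       = rng.uniform(LOW, HIGH, size=(n, m))  # a_j^i

tau, theta = 10.0, 0.1                    # EFF parameters used in Section 6
zeta, eta = 0.6, 0.1                      # DFF parameters used in Section 6
```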
To study the behavior of the running time as a function of the
number of users, we first fixed m = 100 and let n increase from
10 to 1000; then we fixed n = 100 and let m increase from 10
to 1000. The corresponding results are reported in Fig. 1 and
Fig. 2, respectively.
To study the behavior of the platform utility as a function
of the number of users, we first fixed m = 100 and let n
increase from 10 to 1000; then we fixed n = 100 and let m
increase from 10 to 1000. We chose the scenario where all
providers complete their tasks and all individuals are honest
(we will present the impact of dishonest users with incomplete
tasks on the platform utilities later). In this case, according
to Algorithm 1, Algorithm 2, and equation (3.3), the platform
utilities in EFF and DFF are the same. Thus, we only show
the platform utilities in EFF, and the corresponding results are
reported in Fig. 3.
To study the platform utility as a function of the percentage
of completed tasks in DFF, we fixed m = 100 and n = 100,
and let the percentage of completed tasks vary from 0% to
100% with an increment of 5%. To study the impact of the
number of honest users (who submit honest reports) on platform
utility, we first set all requesters to be honest and monitor
different percentages of honest providers with different system
parameter values of η and ζ in Fig. 4 and Fig. 5, respectively.
Then we set all providers to be honest, and monitor different
percentages of honest requesters in Fig. 6.
To study the platform utility as a function of the percentage
of completed tasks in EFF, we fixed m = 100 and n = 100,
and let the percentage of completed tasks vary from 0% to
100% with an increment of 5%. The corresponding results are
presented in Fig. 7 and Fig. 8.
To study the utilities of requesters and providers, we set
m = n = 100, and ran both EFF and DFF. The corresponding
results are reported in Figs. 9-12, where R87 is a randomly
picked requester and P31 is the corresponding provider, with
$v_{87} = 17$ and $c_{31}^{87} = 4$. R87 is matched up with P31 by the
Maximum Matching Algorithm.
Fig. 1. Running time of EFF and TASC with n = 100 (x-axis: m; y-axis: running time in ms).
Fig. 2. Running time of EFF and TASC with m = 100 (x-axis: n; y-axis: running time in ms).
Platform Utility
We study the platform utility of EFF as a function of the
number of providers and requesters in Fig. 3. We observe
that the platform utility is non-negative, which is guaranteed
by the transaction-wise budget-balance of EFF. The platform
utility increases at first, then stays steady when m > 100.
This is because the platform utility depends on not only the
final payments, but also |W|, which is bounded by min{m, n}.
Therefore, when m < n, |W| increases and the platform utility
increases. When |W| reaches min{m, n}, the platform utility
stays at a steady level. Since in this scenario the platform utilities in EFF and DFF are the same, we only show the platform utilities in EFF.
B. Simulation Results
Running Time
We study the impact of the number of users on the running time as a demonstration of computational efficiency.
14
14
12
10
8
6
0
200
400
600
800
1000
1200
1200
PlatformUtility
Utility
Platform
16
Platform Utility
Platform Utility
16
12
1000
1000
10
8
6
4
0
200
400
n
600
800
1000
m
(a)
(b)
800
800
600
600
Fig. 3. Platform utilities in EFF
Fig. 3. Platform Utility in EFF: (a) m = 100, (b) n = 100.
0
0
200
0
0
0
0
PlatformUtility
Utility
Platform
5000
5000
4000
4000
3000
3000
80
% 80
Completion %
100
100
0% Requesters Honest
0% Requesters
25%
Requesters Honest
Honest
25%Requesters
RequestersHonest
Honest
50%
50%Requesters
RequestersHonest
Honest
75%
75% Requesters Honest
100%
100% Requesters Honest
2000
2000
1000
1000
0
00
0
20
40
60
20Completion
40
60
80
% 80
Completion %
100
100
Fig. 6. Platform utilities in DFF when all providers are honest
Fig. 6. Platform utilities in DFF when all providers are honest
utility decreases. This is because with more tasks completed,
tasks,
the platform
utility
because
with more
the
platform
collects
lessdecreases.
penalties This
fromis the
providers
for
tasks completed,
platform
collects is
lessthat
penalties
from of
the
incomplete
tasks. the
Another
observation
the number
providers
incomplete
Another
observation
is that
honest
usersfordoes
not have tasks.
an impact
on the
platform utility.
the number
of honest
not have
an impact
on the
This
is because
in EFF,users
if andoes
individual
submits
a dishonest
platform
utility.τiThis
is because
EFF, if anthe
individual
report,
it pays
to the
platform.inHowever,
platformsubmits
needs
a pay
dishonest
report, it pays
tonumber
the platform.
However,
the
to
τi for arbitration.
Thus,τithe
of honest
users does
platform
needs
pay τi for
arbitration.
not
influence
thetoplatform
utility
in EFF. Thus, the number of
honest users does not influence the platform utility in EFF.
0% Providers Honest
25%
ProvidersHonest
Honest
0% Providers
50%
25% Providers
Providers Honest
Honest
75%
50% Providers
Providers Honest
Honest
100%
ProvidersHonest
Honest
75% Providers
1200
1200
1000
1000
800
800
600
600
400
400
200
20
40
60
20Completion
40
60
Fig. 5. Platform utilities in DFF with honest requesters, η = 0.3 and ζ = 4
Fig. 5. Platform utilities in DFF with honest requesters, η = 0.3 and ζ = 4
100% Providers Honest
580
580
Platform
Utility
Platform
Utility
Platform
Utility
Platform
Utility
We study the impact of user behavior on the platform utility
study
impact
user behavior
utility
in We
DFF.
Thethe
results
are of
presented
in Fig.on
4, the
Fig.platform
5, and Fig.
6,
in
DFF.
The
results
are
presented
in
Fig.
4,
Fig.
5,
and
Fig.
6,
respectively. In Fig. 4 and Fig. 5, we observe that when all
respectively.
In
Fig.
4
and
Fig.
5,
we
observe
that
when
all
requesters are honest in DFF, with the increasing percentage of
requesters
honest
in DFF, utility
with the
increasing
percentage
of
completed are
tasks,
the platform
decreases.
This
is because
completed
tasks,completed,
the platform
decreases.
because
with more tasks
theutility
platform
collectsThis
lessispenalties
with
tasksfor
completed,
thetasks.
platform
collects
from more
providers
incomplete
However,
in less
Fig. penalties
5 where
from
tasks.
Fig. 5 where
we setproviders
η = 0.3 for
andincomplete
ζ = 4, with
the However,
increasinginpercentage
of
we
set η =tasks,
0.3 and
ζ = 4, with
theincreases.
increasingThis
percentage
of
completed
the platform
utility
is because
completed
tasks, the platform
increases.
because
with more completed
tasks, theutility
platform
collectsThis
moreispenalties
with
completed
tasks,providers
the platform
collects
from more
the situation
where
are lying
onmore
their penalties
reports.
from
the situation
where
providers
are lyingtrends
on their
Comparing
Fig. 4 and
Fig.
5, the different
are reports.
due to
Comparing
Fig. 4parameter
and Fig. values.
5, the different
to
different system
Since it trends
is not are
the due
major
different
parameter
values.
it is not
concern ofsystem
this paper
on how
to setSince
the values
for τthe
and
i , η,major
concern
thisprovide
paper on
how to set
the values
for impact
τi , η, and
ζ, we doof not
theoretical
analysis
on the
of
ζ,
do not
theoretical
analysis
on 4,theFig.
impact
of
τi , we
η and
ζ onprovide
the platform
utility.
In Fig.
5, and
τFig.
ζ on
the platform
utility.honest
In Fig.
4, the
Fig.platform
5, and
we can
observe
that with more
users,
i , η5, and
Fig.
5, decreases.
we can observe
with more
the platform
utility
This isthat
because
whenhonest
Ri andusers,
Pj both
behaves
utility
decreases.
isjbecause
when
Ri andwould
Pj both
honestly
for each This
(Ri , P
) ∈ W, the
platform
notbehaves
collect
honestly
each
(Rifor
, Pjbeing
) ∈ W,
the platform would not collect
penalties for
from
them
dishonest.
penalties from them for being dishonest.
0% Providers Honest
0% Providers
25%
Providers Honest
Honest
25%Providers
ProvidersHonest
Honest
50%
50%
Providers
Honest
75% Providers Honest
75% Providers
100%
Providers Honest
Honest
100% Providers Honest
0% Providers Honest
50%
Providers Honest
0% Providers
100%
Providers Honest
50% Providers
100% Providers Honest
560
560
540
20
40
60
80
100
% 80
Completion %
100
20 Completion
40
60
540
520
Fig. 4. Platform utilities in DFF with honest requesters, η = 0.1 and ζ = 0.6
Fig.Fig.
4. Platform
in DFF with
honest requesters,
and ζ = 0.6
6 showsutilities
the platform
utilities
of EFF ηas= a0.1function
of
520
500
0
500
0
theFig.
number
of completed
tasks,
and itofcan
be interpreted
by of
a
6 shows
the platform
utilities
EFF
as a function
similar
approach
as that oftasks,
Fig. and
5. However,
can be proved
the
number
of completed
it can beitinterpreted
by a
theoretically
that changing
and
ζ would
not
similar
approach
as that ofsystem
Fig. 5.constants
However,η it
can
be proved
change the increasing
trendsystem
in Fig.constants
6. Since itη is
the major
theoretically
that changing
andnotζ would
not
concernthe
in this
paper, we
doinnot
show
the analysis
change
increasing
trend
Fig.
6. Since
it is nothere.
the major
We study
the paper,
impactwe
of do
usernot
behavior
on analysis
the platform
concern
in this
show the
here.utility
in We
EFF,
andthe
theimpact
resultsofare
shown
in Fig.
7 and
Fig. 8.utility
We
study
user
behavior
on the
platform
observe
all requesters
are honest
in EFF,
in
EFF,that
andwhen
the results
are shown
in Fig.
7 andwith
Fig.the
8,
increasing percentage
of that
the completed
tasks, theare
platform
respectively.
We observe
when all requesters
honest
20
40
60
80
20Completion
40
60
% 80
Completion %
100
100
Fig. 7. Platform utilities in EFF with honest providers
Fig. 7. Platform
utilities in EFF with honest providers
Individual
Utility
We monitor
the utilities of R87 and P31 in EFF. Fig. 9 shows
Individual
Utility
theWe
utilities
of Rthe
and P31of
, where
P31Pcompleted
the assigned
87 utilities
monitor
R87 and
31 in EFF. Fig. 9 shows
task
in EFF.of
From
Fig. 9b,
observe that
neither
the utilities
R87 Fig.
and 9a
P31and
, where
P31we
completed
the assigned
R
P31 can
have
a utility
it hasR87
when
87 nor
task
in EFF.
From
Fig.
9, we higher
observethan
thatthat
neither
nor
bidding
= av87
= 17higher
and asking
a87
c87
= 4.bidding
From
31 it=has
31when
P31 canb87
have
utility
than that
in EFF, with the increasing percentage of the completed
9
9
87
b87 = v87 = 17 and asking a87
31 = c31 = 4. From Fig. 9b,
10
10
10
10
540
540
540
540
520
520
520
520
Completion %
Completion
%
10
10
10
10
55
5
5
00
0
0
−5
−5
−5
00
0
−5
0
55
5
5
(a)
10
10
15
15
Bid from
10 R87 15
87
Bid from
Utility
of RR8787
20
20
20
−C
PP31
31−C
P31
−C
31−C;P −I
R87
R
−C;P31
31−I
87
87
R87−C;P31
−I
31
R87
−I;P31
−I
R
87−I;P
31−I
87
31
R87−I;P31−I
5
55
5
10
10
10
15
15
15
Ask
Ask from
from
P31
10 P
31 15
31
Ask from
P
Utility
of P
P31
Utility
of
31
31
20
20
20
20
(b)
(b)
(a)
(b)
Fig. 9.
9. Completed tasks in EFF
Fig.
Fig. 9. Completed tasks in EFF
Fig.Fig.
9. Completed
EFF: (a)when
UtilityPof
Utility
of P31 . the
87 , (b)not
10 showstasks
the inutilities
not
complete
the
complete
31Rdoes
31
P31−C;R
−C;R87−C
−C
P
31
87
P31−C;R87−C
P31
−I;R87−C
−C
P
−I;R
−C;R
−C
87
31
87
P31−I;R87−C
R
−I
P31
−C
R
−I
87−I;R
87
87
R87−I
R87−I
−4
−4
−6
−6
−6
−6
−8
−8
−8
−8
−10
−10
0
0
−10
−100
0
5
5
5
5
Bid
Bid
10
10
from
from
10
10
15
15
R87 15
R
87 15
20
20
20
20
10
10
5
5
Utility
ofofPof
Utility
Utility
of PP31
Utility
P
31
31 31
Utility
Utility
of R
R87
Utility
R
Utility
ofofRof
8787 87
Fig.
shows
the utilities
whenthat
P31nodoes
not complete
task
in 10
EFF.
Again,
we observe
individual
can get
getthe
individual
can
aa
Fig.in 10
shows
the utilities
whenthat
P31no
does
not complete
thea
task
EFF.
Again,
we
observe
individual
can
get
positive utility increment by being dishonest using
using aa similar
similar
task
in EFF.
Again,
we observe
thatdishonest
no individual
get a
positive
utility
increment
by
being
using can
a similar
analysis
approach.
approach. Hence,
Hence, both
both R
R87
and PP3131 would
wouldreport
reportI.I.
87 and
positive
utility
increment
by
being
dishonest
using
a
similar
analysis
approach.
Hence,
both
R
and
P
would
report
I.
87
31
Utility of R87 Utility of P31
analysis
approach.
Hence,
both
R
and
P
would
report
I.
87
31
0
10
0
10
0
0
−2
−2
−2
−2
−4
−4
5
5
0
0
0
0
−5
−5
−5
−5
−10
−10
−10
−10
−15
−15
0
0
−15
−15 0
0
15
20
20
2020
0
0
00
Utility
Utility
of PP31
Utility
of of
P
31 31
Utility
Utility
of R
R87
Utility
of of
R
8787
15
15
15
15
6
66
6
4
44
4
2
22
2
0
00
0
−2
−2
−2
−2
−4
−4
−4
−4
−6
−6
−6
0
00
−6
0
10
Bid from
R87 15
10
15
10
10
15
Bidfrom
fromRR87
Bid
(a) Utility
of 87R
87 87
(a) Utility
Utility
of RR87
(a)of
87
(a)
5
10
Ask from
10
1010
Askfrom
fromPP31
Ask
(b) Utility
of31P
3131
(b)Utility
Utility
(b)ofofPP3131
(b)
5
55
Fig. 11. Completed tasks in DFF
Fig. 11.
11. Completed
Completed tasks
tasks ininDFF
DFF
Fig.
Fig. 11. Completed tasks in DFF: (a) Utility of R87 , (b) Utility of P31 .
100
100
100
100
Fig. 9b, we observe that reporting C is the dominant strategy
Fig. 9b, we observe that reporting C is the dominant
dominant strategy
strategy
for P31 . According to Fig. 9a, R87 would report C when
P31
for
P
.
According
to
Fig.
9a,
R
would
report
C
when
87 dominant
we observe
that reporting C is the
strategy
for PPP31
report
C when
31
31.
31 C. Thus,
31
reports
both R87 and P87
31 would report C. Utility of
reports C. Thus,
P31
wouldCreport
report
C.P31 reports
31report
According
to Fig.both
9a, R
R87
would
whenC.
87 and
R87 Utility of P31
C. Thus, both R87 and P31 would report C.
R
−C
R87
87−C
R87
87−C;R
P
−I
P31
−C;R87
31
87−I
P31
−C;R −I
31−I;R 87
P
−I
P31
31
87
P31
−I;R87
−I
31
87
5
5
55
R31−C;P87−C
R
−C;P
−C
RR
−C;P
−C
−I;P−C
−C
31
87
R31
−C;P
3131
87
87 87
R31−I;P
−C
−I;P
−C
−I −C
87
R RP
−I;P
31
87
31 87 87
P
−I
P87
−I
P87
−I
87
15
20
P31 15
20
1515
2020
R87−C;P
−C;P31−C
−C
R
87
31
−C;P31
−C
RR87
−I;P
−C
87
R
−C;P
−C
R
−I;P
−C
31
87
31
87
31
R
−I;P
−C
P31
−I 31
87
R
−I;P
−C
P
−I
87
31
31
P 15
−I
5
10
20
P3131−I
5
10
15
20
Ask from
from
P31 15
5 Ask
10 P
20
31 15
5
10
20
Ask from
P31
(b) Utility
of PP
31
Ask from
3131
(b) Utility
(b) of P31
Bid from
(a) Utility
of RR
R8787
87
Bid from
87
(a) Utility
(a) of R87
Fig. 10. Incomplete tasks in EFF
Fig.
tasks in
in EFF
EFF
Fig. 10.
10. Incomplete
Incomplete tasks
the tasks
utilities
of P(a)
in EFF
when
completes
31 Utility
Fig.Comparing
10. Incomplete
in EFF:
of R
of P31 .the
87 , (b)itUtility
10
10
1010
5
5
55
−5
−5
−5
−5
−10
−10
−10
−10
−15
−15
−15
−15
−20
−20 0
−20
−20
0
00
31
20
40
60
80
20
40
60
80
20
40
60
Completion
% 80
20 Completion
40
60 %
80
Fig. 8. Platform utilities in EFF with honest requesters
with honest
honest requesters
Fig.
requesters
Fig. 8.
8. Platform utilities in EFF with
Fig. 8. Platform utilities in EFF with honest requesters
20
20
20
20
5
5
55
0
0
0
000
0
0
10
10
10 5
5
5
5 0
0
00
−5
−5
−5−5
−10
−10
−10
−10
−15
−15
−15
−15
−20
0
−20
−20
−20
0
00
R87−C
R87−C
−C
−C
RR
P87
−C;R87−I
87
31
P31−C;R
−C;R87−I−I
−I
−C;R
PP
P31
−I;R87
87−I
31
31
87
P
−I;R
−I
−I;R87−I−I
PP31−I;R
31 5 87
87
31
10
10
Bid 10
from
10
Utility of P31
Utility
of P
P31
Utility
of
Utility
of
P
31
500
500
5000
500 0
0
0
10
R −C
87
−C
87
P87
−C;R87−I
RR87
−C
31
P
−C;R
−I
P31
−I;R87
−I
31
87
P31
−C;R
−I
87
31
87
−I;R87
31
87
PP31
−I;R
−I−I
31
87
31
15
15
15
15
Utility
ofof
Utility
Utility
ofRR
R
87
Utility
of
R87
87
87
560
560
560
560
20
20
20
20
Utility of P31
Utility
of P
P31
Utility
of
Utility
of
P
31
0% Requesters Honest
0%
Requesters
Honest
0%
Honest
50%Requesters
Requesters
Honest
0%
Requesters
Honest
50%
Requesters
Honest
100%Requesters
Requesters
Honest
50%
Honest
50%
Requesters
Honest
100% Requesters
Requesters Honest
Honest
100%
100% Requesters Honest
Utility
of
Utility
of
R
Utility
ofR
R87
Utility
of
R87
87
87
Platform
Utility
Platform
Utility
Platform
Utility
Platform
Utility
580
580
580
580
R31−C;P87−C
R
−C;P
−C
RR
−C;P−C
−C
R31
−C;P
−C
31
3131−I;P
87
8787
87
−I;P
RP
−I;P
−C
RR
−I;P
−I −C−C
−10
−10
−10
−10
15
15
15
15
R87
Bidfrom
fromRR87
Bid
87
(a) Utility
of 87
R
87
(a)of
(a) Utility
Utility
of RR87
(a)
87
5
55
0
0
00
−5
−5
−5−5
20
20
2020
−15
0
−15
−15
−15
0
00
3187 8787
87
3131
5
5
55
10
10
Ask10from
10
P
−I
P87
−I
P87
−I
87
15
P 1515
15
31
Askfrom
fromPP31
Ask
3131
(b) Utility
of31P
(b)ofofPP3131
(b)Utility
Utility
(b)
20
20
2020
Fig. 12. Incomplete tasks in DFF
Fig. 12.
12. Incomplete
Incomplete tasks
tasks ininDFF
DFF
Fig.
Fig. 12. Incomplete tasks in DFF: (a) Utility of R87 , (b) Utility of P31 .
Next we monitor the utilities of R87 and P31 in DFF. From Fig. 11, we observe that when P31 completes the task, there is no incentive for an individual to be dishonest when the other is honest. Thus, both R87 and P31 would report C. Fig. 12 shows the semi-truthfulness when P31 fails to complete the task in DFF. Again, we observe that there is no incentive for an individual to be dishonest when the other is honest. Hence, both R87 and P31 would report I.

Comparing the utilities of P31 in DFF when it completes the task and when it does not complete the task (represented by the blue line in Fig. 11b and the red line in Fig. 12b, respectively), P31 would choose to complete the task for a higher utility.
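A similar check, again with placeholder numbers since the real utilities in DFF depend on the submitted bids and asks and on DFF's payment rules, can confirm both observations at once: honest reporting is stable against unilateral deviation in the complete and incomplete cases, and the provider is better off completing the task.

REPORTS = ('C', 'I')

# Hypothetical DFF utilities (illustrative placeholders only).
# Key: (report of R87, report of P31) -> (utility of R87, utility of P31).
complete = {('C', 'C'): (9.0, 7.0),   ('C', 'I'): (9.0, -4.0),
            ('I', 'C'): (-8.0, 7.0),  ('I', 'I'): (-8.0, -4.0)}
incomplete = {('C', 'C'): (-12.0, 3.0), ('C', 'I'): (-12.0, -3.0),
              ('I', 'C'): (4.0, -6.0),  ('I', 'I'): (4.0, -3.0)}

def no_unilateral_gain(U, profile):
    """True if neither player can raise its own utility by deviating alone from `profile`."""
    r, p = profile
    r_ok = all(U[(r, p)][0] >= U[(alt, p)][0] for alt in REPORTS)
    p_ok = all(U[(r, p)][1] >= U[(r, alt)][1] for alt in REPORTS)
    return r_ok and p_ok

# Honest reporting is stable in both scenarios.
assert no_unilateral_gain(complete, ('C', 'C'))
assert no_unilateral_gain(incomplete, ('I', 'I'))

# P31 prefers completing the task: compare its utility at the two honest outcomes.
assert complete[('C', 'C')][1] > incomplete[('I', 'I')][1]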
7. CONCLUSIONS
In this paper, we proposed two novel mechanisms to tackle free-riding and false-reporting in crowdsourcing. We first presented EFF and proved that, with arbitration, EFF eliminates free-riding and false-reporting while guaranteeing truthfulness, transaction-wise budget-balance, and computational efficiency. We then presented DFF and proved that, without arbitration, DFF is semi-truthful while guaranteeing transaction-wise budget-balance and computational efficiency. Another feature of our mechanisms is that providers are incentivized to complete their assigned tasks. We implemented both EFF and DFF. Extensive numerical results are presented to study the performance of our proposed mechanisms.
REFERENCES
[1] [Online]. Available: http://www.yelp.com/
[2] [Online]. Available: http://answers.yahoo.com/
[3] [Online]. Available: https://www.mturk.com/mturk/welcome
[4] [Online]. Available: https://www.uber.com/
[5] P. Azar, J. Chen, and S. Micali, “Crowdsourced Bayesian auctions,” in Proc. ACM ITCS’12, pp. 236–248.
[6] S. Chawla, J. D. Hartline, and B. Sivan, “Optimal crowdsourcing contests,” in Proc. ACM-SIAM SODA’12, pp. 856–868.
[7] E. H. Clarke, “Multipart pricing of public goods,” Public Choice, vol. 11, pp. 17–33, 1971.
[8] K. Deshmukh, A. V. Goldberg, J. D. Hartline, and A. R. Karlin, “Truthful and competitive double auctions,” in ESA’02, pp. 361–373.
[9] D. DiPalantino and M. Vojnovic, “Crowdsourcing and all-pay auctions,” in Proc. ACM EC’09, pp. 119–128.
[10] M. Feldman, C. Papadimitriou, J. Chuang, and I. Stoica, “Free-riding and whitewashing in peer-to-peer systems,” IEEE JSAC, vol. 24, 2006.
[11] Z. Feng, Y. Zhu, Q. Zhang, H. Zhu, J. Yu, J. Cao, and L. Ni, “Towards
truthful mechanisms for mobile crowdsourcing with dynamic smartphones,” in Proc. IEEE ICDCS’14, pp. 11–20.
[12] D. Fudenberg and J. Tirole, Game Theory. MIT Press, 2010.
[13] A. V. Goldberg, J. D. Hartline, and A. Wright, “Competitive auctions and
digital goods,” in Proc. ACM-SIAM SODA’01, pp. 735–744.
[14] T. Groves, “Incentives in Teams,” Econometrica, vol. 41, pp. 617–31,
1973.
[15] S. Guo, Y. Gu, B. Jiang, and T. He, “Opportunistic flooding in low-duty-cycle wireless sensor networks with unreliable links,” in Proc. ACM
MOBICOM’09.
[16] J. Howe, Crowdsourcing: Why the Power of the Crowd Is Driving the
Future of Business. Crown Publishing Group, 2008.
[17] S. Jain, Y. Chen, and D. Parkes, “Designing incentives for online question
and answer forums,” in Proc. ACM EC’09, pp. 129–138.
[18] R. Lavi and C. Swamy, “Truthful and near-optimal mechanism design via
linear programming,” in Proc. IEEE FOCS’05, pp. 595–604.
[19] X.-Y. Li, Z. Sun, and W. Wang, “Cost sharing and strategyproof mechanisms for set cover games,” in Proc. STACS 2005, pp. 218–230.
[20] W. Mason and D. J. Watts, “Financial incentives and the “performance
of crowds”,” in Proc. ACM HCOMP’09, pp. 77–85.
[21] R. McAfee, “A dominant strategy double auction,” Journal of Economic
Theory, vol. 56, pp. 434–450, 1992.
[22] A. Mu’Alem and N. Nisan, “Truthful approximation mechanisms for restricted combinatorial auctions,” Games and Economic Behavior, vol. 64,
no. 2, pp. 612–631, 2008.
[23] R. B. Myerson, Game Theory: Analysis of Conflict. Harvard University
Press, 1991.
[24] J. F. Nash Jr., “Equilibrium points in n-person games,” PNAS, vol. 36,
pp. 48–49, 1950.
[25] X. Sheng, J. Tang, X. Xiao, and G. Xue, “Sensing as a service: Challenges, solutions and future directions,” IEEE Sensors Journal, vol. 13,
pp. 3733–3741, 2013.
[26] Y. Singer and M. Mittal, “Pricing tasks in online labor markets,” in Human
Computation, AAAI Workshops, vol. WS-11-11, 2011.
[27] A. Singla and A. Krause, “Truthful incentives in crowdsourcing tasks
using regret minimization mechanisms,” in Proc. WWW’13, pp. 1167–
1178.
[28] W. Vickrey, “Counterspeculation, auctions, and competitive sealed tenders,” Journal of Finance, vol. 16, pp. 8–37, 1961.
[29] Q. Wang, K. Ren, and X. Meng, “When cloud meets ebay: Towards
effective pricing for cloud computing,” in Proc. IEEE INFOCOM’12, pp.
936–944.
[30] R. J. Wilson, Introduction to Graph Theory. John Wiley & Sons, Inc.,
1986.
[31] W. Xu, H. Huang, Y.-e. Sun, F. Li, Y. Zhu, and S. Zhang, “DATA:
A double auction based task assignment mechanism in crowdsourcing
systems,” in Proc. IEEE CHINACOM’12, pp. 172–177.
[32] D. Yang, G. Xue, X. Fang, and J. Tang, “Incentive mechanisms for crowdsensing: Crowdsourcing with smartphones,” IEEE/ACM Transactions on
Networking, vol. PP, no. 99, pp. 1–13, 2015.
[33] D. Yang, X. Fang, and G. Xue, “Truthful auction for cooperative
communications,” in Proc. ACM MOBIHOC’11, pp. 9:1–9:10.
[34] D. Yang, G. Xue, X. Fang, and J. Tang, “Crowdsourcing to smartphones:
Incentive mechanism design for mobile phone sensing,” in Proc. ACM
MOBICOM’12, pp. 173–184.
[35] X. Zhang, G. Xue, R. Yu, D. Yang, and J. Tang, “You better be honest:
Discouraging free-riding and false-reporting in mobile crowdsourcing,”
in Proc. IEEE GLOBECOM’14.
[36] Y. Zhang and M. van der Schaar, “Reputation-based incentive protocols
in crowdsourcing applications,” in Proc. IEEE INFOCOM’11, pp. 2140–
2148.
[37] D. Zhao, X.-Y. Li, and H. Ma, “How to crowdsource tasks truthfully
without sacrificing utility: Online incentive mechanisms with budget
constraint,” in Proc. IEEE INFOCOM’14, pp. 1213–1221.
Xiang Zhang (Student Member 2013) received his B.S. degree from the University of Science and Technology of China, Hefei, China, in 2012. He is currently a Ph.D. student in the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University. His research interests include network economics and game theory in crowdsourcing and cognitive radio networks.
Guoliang Xue (Member 1996, Senior Member 1999,
Fellow, 2011) is a Professor of Computer Science and
Engineering at Arizona State University. He received
the PhD degree (1991) in computer science from
the University of Minnesota, Minneapolis, USA. His
research interests include survivability, security, and
resource allocation issues in networks. He is an Editor
of IEEE Network and the Area Editor of IEEE Transactions on Wireless Communications for the area of
Wireless Networking. He is an IEEE Fellow.
Ruozhou Yu (Student Member 2013) received his
B.S. degree from Beijing University of Posts and
Telecommunications, Beijing, China, in 2013. He is
currently pursuing the Ph.D. degree in the School
of Computing, Informatics, and Decision Systems
Engineering at Arizona State University. His interests include software-defined networking, data center
networking, and network function virtualization.
Dejun Yang (Student Member 2008, Member 2013)
is the Ben L. Fryrear Assistant Professor of Computer
Science at Colorado School of Mines. He received his
PhD degree from Arizona State University in 2013
and BS degree from Peking University in 2007. His
research interests lie in optimization and economic
approaches to networks. He received Best Paper
Awards at IEEE ICC’2011 and IEEE ICC’2012. He
has served as a TPC member for many conferences
including IEEE INFOCOM.
Jian Tang (Member 2006, Senior Member 2013) is
an associate professor in the Department of Electrical Engineering and Computer Science at Syracuse
University. He earned his Ph.D. degree in Computer
Science from Arizona State University in 2006. His
research interests lie in the areas of Cloud Computing,
Big Data and Wireless Networking. He received an
NSF CAREER award in 2009. He serves as an editor
for IEEE Transactions on Vehicular Technology and
an editor for IEEE Internet of Things Journal.