A Multi-player Markov Stopping Game for
Delay-tolerant and Opportunistic Resource Sharing
Networks
Xiaofan He and Huaiyu Dai
Department of ECE
North Carolina State University, USA
Emails: {xhe6,hdai}@ncsu.edu
Abstract—Opportunistic resources are often present in various
resource sharing networks for the users to exploit, but their
qualities often change over time. Fortunately, many user tasks
are delay-tolerant, which offers the network users a favorable
degree of freedom in waiting for and accessing the opportunistic
resource at the time of its best quality. For such delay-tolerant
and opportunistic resource sharing networks (DT-ORS-Net), the
corresponding optimal accessing strategies developed in existing
literature mainly focus on the single-user scenarios, while the
potential competition from other peer users in practical multi-user DT-ORS-Net is often ignored. Considering this, a multi-player Markov stopping game (M-MSG) is developed in this work, and the derived Nash equilibrium (NE) strategy of this M-MSG can guide network users to properly handle the potential
competition from other peers and thus exploit the time diversity
of the opportunistic resource more effectively, which in turn
further improves the resource utilization efficiency. Applications
in the cloud-computing and the mobile crowdsourcing networks
are demonstrated to verify the effectiveness of the proposed
method, and simulation results show that using the NE strategy
of the proposed M-MSG can provide substantial performance
gain as compared to using the conventional single-user optimal
one.
I. INTRODUCTION
In many of the existing and emerging resource sharing networks, opportunistic resources are often present in addition
to the conventional regular resources, and can be exploited
by the network users for further performance enhancement.
For instance, besides the opportunistic spectrum resource exploited in cognitive radio networks [1, 2], idle virtual machines
may be offered by a cloud provider as an opportunistic
computing resource in cloud-computing networks [3]; low-price harvested energy (e.g., wind or solar energy) may be
provided by a power station as an opportunistic resource in
power networks [4]; and mobile users with different task-accomplishment capabilities provide another instantiation of
opportunistic resources in mobile crowdsourcing networks [5].
Peng Ning and Rudra Dutta
Department of CSC
North Carolina State University, USA
Emails: {pning,dutta}@ncsu.edu

This work was supported in part by the National Science Foundation under Grants CNS-1016260, ECCS-1307949, and EARS-1444009.

Two notable characteristics of such opportunistic resources are that their qualities may change over time and are often difficult to predict accurately, and that a past opportunity usually cannot be recalled, so users have to make their access decisions online. On the other hand, it is noticed that
many practical user tasks are delay-tolerant.1 In the presence
of resource quality dynamics, such a deferrable feature of user
tasks provides the users a favorable degree of freedom in
waiting for and accessing the opportunistic resource at the time
of its best quality. For example, as many computing tasks of
a cloud user are not real-time requests, the user can wait and
then access the opportunistic cloud computing resource when
its price is low; similarly, in mobile crowdsourcing networks,
a recruiter can wait to offload its task on the mobile user with
the highest task-accomplishment capability.
To achieve the best performance in such delay-tolerant and
opportunistic resource sharing networks (DT-ORS-Net), using
a proper resource access strategy is crucial to the network
users.2 In literature, some pioneering works have already been
done in this direction, in the contexts of different applications.
For example, an optimal cloud accessing rule is derived in
[10] that can guide a company in choosing a suitable time
to migrate its computation assignments to a cloud, whose
service quality, however, may fluctuate due to random security
events. When the opportunistic resources of interest are the sequentially arriving secondary users that can serve as relays for primary users in cognitive radio networks, an optimal
relay selection strategy is developed in [11]. In [12], the
dynamism of the electricity price and the deferrability of some home appliances are exploited to design an optimal electricity utilization rule that can balance the electricity expense and
waiting time. In [13], as a more economical alternative to conventional direct cellular communications, vehicle-assisted data delivery is proposed for the smart grid, where a smart meter user needs to decide whether to hire a certain passing vehicle to deliver its electricity-related measurements to a control center, based on the vehicle's task-accomplishment capability (e.g., delivery delay).
1 Although there are other tasks that are real-time, our focus will be on the delay-tolerant ones.
2 Note that the delay-tolerant feature makes the resource access problem considered in this work distinct from the conventional distributed resource allocation problems [6–9], in which the luxury of resource selection over time is usually not available.

Nonetheless, these prior works mainly focus on the single-user cases and build their results on the classic optimal
stopping theory [14, 15] that can provide the optimal online
strategy for accessing sequentially arriving rewards when
there is only one decision-maker. In fact, it has already been
noticed in [11] that a substantial amount of collisions will
occur when all users follow the single-user optimal strategy,
though no effective remedy is provided. To bypass this multi-user difficulty, a random contention access model is adopted
in [16] to simplify the multi-user problem into a single-user
one, where a fixed and pre-assigned contention probability is
assumed for each user. However, no effective regulation mechanism is provided to prevent selfish users from intentionally
increasing their contention probabilities.
Considering the above, a new resource access strategy
design framework that can both properly handle the potential
competition from other peers and enforce more effective
regulation of users' access behaviors is highly desirable for
DT-ORS-Net applications. To this end, the two-player Markov
stopping game (MSG) developed in [17] can serve as a good
starting point, which extends the classic optimal stopping
theory into a game theoretic setting so as to handle the
potential conflicts between the two players.
However, [17] does not provide a systematic method to
deal with a general number of players and hence is still
not readily applicable to practical DT-ORS-Net. With this
consideration, in this work, a multi-player Markov stopping
game (M-MSG) is developed as a generalization of the two-player case. Subsequently, a recursive construction of the Nash
equilibrium (NE) strategy is derived for the M-MSG, through
which the network users can suitably adapt their decisions to
the potential conflicts from other users so as to effectively
exploit the opportunistic resources. The NE property of the
derived strategy also automatically ensures that users have no
incentive to deviate. The applications of the proposed M-MSG
framework to the cloud-computing networks and the mobile
crowdsourcing networks will be demonstrated in this work
as two concrete examples, though this framework is rather
general and may find broader applications in various DT-ORS-Net.
The remainder of this paper is organized as follows. In Section II, two exemplary resource access problems in DT-ORS-Net are formulated and the background of the basic two-player MSG is briefly discussed. In Section III, the proposed M-MSG is developed and its applications to the resource access
problems are demonstrated. Simulation results are presented
in Section IV. Section V concludes this work.
II. PROBLEM FORMULATION AND BACKGROUND ON TWO-PLAYER MARKOV STOPPING GAMES
In this section, two exemplary resource access problems in
DT-ORS-Net are described, followed by a brief introduction
to the basic two-player MSG.
A. Resource Access in Opportunistic Cloud-computing Networks
The first considered problem is resource access in opportunistic cloud-computing networks. In such networks, the cloud
provider can offer two types of computing resources: the regular resource and the opportunistic resource [3, 18]. Particularly,
the opportunistic resources can be a set of virtual machines
that are currently idle. To increase utilization efficiency, the
opportunistic resources can be made available to the cloud
users, at different prices depending on the front-end workload
dynamics. To minimize their per-task payments to the cloud,
cloud users need to strategically compete with each other for
the limited opportunistic resource and strive to access it at
the lowest possible price before the task deadline; cloud users
that fail to access the opportunistic resources have to purchase
the usually more expensive regular resources so as to ensure
that their tasks can be completed in time. For this problem,
the challenge comes not only from the fact that users cannot accurately predict the future prices of the opportunistic resources, but also from the collisions that may occur when other users intend to access the resource at the same time.
B. Participant Recruitment in Mobile Crowdsourcing
Another problem of similar nature is participant recruitment
in mobile crowdsourcing networks, in which multiple nearby
recruiters contend for their potential task participants within
the same shared pool of passing mobile users. For example, in
a vehicle based crowdsourcing network [5, 19], companies can
set up access points near a highway and ask passing vehicles
to advertise their products at a certain place using on-vehicle
LED lights. Due to differences in the times of reaching, and durations of stay at, the appointed locations, the task completion qualities of the vehicles may differ from one to another; also, each vehicle may only be able to perform one such task at a time.
Similarly to the previous resource access problem, to maximize
their own rewards, recruiters must strategically select the most
suitable mobile user as the task participant and properly deal
with potential competition from other recruiters at the same
time.
C. Background on Two-player Markov Stopping Game
Before introducing the two-player MSG, some basics of the
classic Markov stopping problem [14, 15] are reviewed. In the
classic Markov stopping problem with finite time horizon $N$, a single player monitors a Markov sequence $\{X_n\}_{n=1}^N$ with state transition probabilities $P_X(X_{n+1}|X_n)$; after observing $X_n$, the player can decide to either (1) stop monitoring and accept the current reward $f(X_n)$ (with $f(\cdot)$ the observation-dependent reward function), or (2) continue monitoring. The goal of the player is to find an optimal stopping time (or stopping rule) $T^*$ such that its expected reward $\mathbb{E}[f(X_{T^*})]$ is maximized. Specifically, for a given probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and a filtration $\{\mathcal{F}_n\}_{n=1}^N$ (e.g., the natural filtration $\mathcal{F}_n \triangleq \sigma(\{X_i\}_{i=1}^n)$ generated by the observation history³), a stopping time $T$ is a random variable taking values in the set $\{1, ..., N\}$ such that $\{\omega \in \Omega \mid T \le n\} \in \mathcal{F}_n$ for $n = 1, ..., N$ (i.e., the stopping decision does not depend on future observations). Under a mild condition on the boundedness of $X_n$, the single-player optimal stopping time is given by [14]

$$T^* = \inf\{1 \le n \le N \mid f(X_n) = \gamma_n\}, \qquad (1)$$

where $\gamma_n \triangleq \operatorname{ess\,sup}_{T \in \mathcal{T}_n} \mathbb{E}[f(X_T) \mid \mathcal{F}_n]$ (with $\operatorname{ess\,sup}$ denoting the essential supremum) represents the optimal expected reward if the player stops no earlier than time $n$, given observations up to time $n$, and $\mathcal{T}_n \triangleq \{T \mid \mathbb{P}(T \ge n) = 1\}$. Intuitively, (1) says that, when the current reward $f(X_n)$ equals the best reward $\gamma_n$ that can be obtained in the future, the player should stop.

³ Here, $\sigma(\{X_i\}_{i=1}^n)$ denotes the sigma-algebra generated by the random variables $X_1, ..., X_n$.
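For a finite-state Markov chain, $\gamma_n$ in (1) satisfies the standard backward recursion $\gamma_n = \max\{f(X_n), \mathbb{E}[\gamma_{n+1} \mid X_n]\}$ and can be computed by dynamic programming. The following is a minimal Python sketch (the function and variable names are ours, for illustration only):

```python
import numpy as np

def single_player_stopping(f, P, N):
    """Backward induction for the classic finite-horizon Markov stopping
    problem: gamma[n, x] is the optimal expected reward when X_{n+1} = x
    (0-based n) and the player has not stopped yet; per (1), the player
    stops at the first timeslot where f(x) equals gamma."""
    f = np.asarray(f, dtype=float)
    gamma = np.zeros((N, len(f)))
    gamma[N - 1] = f                      # at the horizon, one must stop
    for n in range(N - 2, -1, -1):
        cont = P @ gamma[n + 1]           # E[gamma_{n+1} | X_n = x]
        gamma[n] = np.maximum(f, cont)    # stop vs. continue
    stop = gamma <= f[None, :] + 1e-12    # stopping region per (1)
    return gamma, stop
```

For instance, with two equally likely states of reward 0 and 1 and horizon $N = 2$, the value at the first timeslot is $[0.5, 1.0]$, so the player stops early only in the high-reward state.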
As in the two problems described in Section II-A and Section II-B, and in many other similar applications in DT-ORS-Net, it is usually the case that multiple players monitor the
same sequence of opportunistic resource and contend with
each other. To derive the optimal strategy for each player in
such situations, the two-player MSG framework developed in
[17] may serve as a basis. Particularly, in the two-player MSG,
two rational and selfish players monitor the same sequence
{Xn } and make their stopping decisions independently. When
two players decide to access the reward (i.e., stop) at the
same time n, only one of them (chosen uniformly at random
by a third-party) can successfully receive the reward f (Xn )
and then leave the game, reducing the two-player MSG to a
classical Markov stopping problem (with time horizon N − n)
for the remaining player. However, a systematic method for
handling a general number of players in an MSG is missing
from [17]. Considering this, a general M-MSG is proposed in
the next section so as to address the resource access problems
in DT-ORS-Net.
III. GENERAL MULTI-PLAYER MARKOV STOPPING GAME
In this section, a general M-MSG is developed along with
a recursive construction of the corresponding NE strategy.
Then, its applications to the resource access and the participant
recruitment problems described in the previous section will be
discussed.
A. Extension to General Multi-player Markov Stopping Game
As in the two-player case, when multiple players decide to stop at the same time in the M-MSG, only one of them, chosen uniformly at random, can successfully receive the reward, while all other players continue to play the game. Consequently, the set of remaining players $R_n = \{i_1, ..., i_{|R_n|}\}$ (with $i_j \in \{1, ..., K\}$ and $K$ the total number of players at the beginning) at each timeslot $n$ ($1 \le n \le N$) is random (and monotonically decreasing), which depends on the players' past actions (i.e., stop or continue) and in turn affects the future actions of the remaining players.
The underlying principle of solving the M-MSG is to
iteratively convert the multi-stage M-MSG into an equivalent
auxiliary single-stage matrix game at each timeslot n; and
then use the NE strategies of this sequence of N auxiliary
single-stage matrix games to construct the NE strategy of the
original multi-stage M-MSG. For this procedure, several new
definitions are needed. Particularly, when two players coexist
in the MSG, the randomized stopping time is used in [17]
to deal with the potential competition from the other player.
To further handle the change in the number of players as the game evolves, the concept of a selection time is proposed in [17]. However, constructing the selection time essentially requires a full enumeration of all possible dynamics of the player numbers, and is not suitable for solving an MSG with a general number of players. This can be evidenced by [20], where using the selection time to solve an MSG with only three players is already almost prohibitive. Considering this, instead of resorting to the selection time, this work proposes the multi-player randomized stopping time, which, unlike the conventional randomized stopping time, explicitly takes the number of players into account and thus can effectively address the dynamism in the number of players in the M-MSG. It is defined as follows.
Definition 1: In an M-MSG with $K$ players and finite time horizon $N$, a strategy $\mathbf{P}^k$ of player $k$ is a sequence of random variables $\{P_n^k\}_{n=1}^N$ such that, for each $n$, (1) $0 \le P_n^k \le 1$ almost surely, and (2) $P_n^k$ is measurable with respect to $\mathcal{H}_n \triangleq \sigma(\mathcal{F}_n, R_n)$. In the sequel, the set of all possible strategies $\mathbf{P}^k$ will be denoted by $\mathcal{P}$.
Remark 1: Intuitively, $P_n^k$ specifies the probability that player $k$ decides to take the stop action at the $n$th timeslot, which depends on the current observation history $\mathcal{F}_n$ and the set of remaining players $R_n$.⁴ Further, due to the Markovity of the stochastic process $\{X_n, \mathcal{F}_n\}$, $P_n^k$ is actually a function of $X_n$ and $R_n$; for simplicity, $P_n^k$ will be used in the sequel, instead of $P_n^k(X_n, R_n)$, when there is no confusion.
For a given strategy, the corresponding multi-player randomized stopping time is defined as follows.

Definition 2: Let $\{A_n^k\}_{n=1}^N$ be a sequence of i.i.d. random variables uniformly distributed on $[0, 1]$ and independent of the Markov process $\{X_n, \mathcal{F}_n\}_{n=1}^N$ and $\{R_n\}_{n=1}^N$. The multi-player randomized stopping time of player $k$ based on the strategy $\mathbf{P}^k$ is defined as

$$T(\mathbf{P}^k) \triangleq \inf\{1 \le n \le N \mid A_n^k \le P_n^k\}. \qquad (2)$$

Remark 2: When $K = 1$ and $P_n^k$ is either zero or one for all $n$, $T(\mathbf{P}^k)$ reduces to the classic stopping time.⁵ In other words, the classic single-player stopping time can be treated as a fixed stopping time, given the observation sequence. In the simulation results presented later in Section IV, the optimal single-player strategy will be denoted by $\mathbf{P}^{sgl}$.
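Once the stop probabilities $P_n^k$ along a sample path are known, the multi-player randomized stopping time in (2) is straightforward to sample. A small Python sketch (the helper name is ours):

```python
import random

def randomized_stopping_time(stop_probs, rng=None):
    """Sample T(P^k) = inf{ n : A_n^k <= P_n^k } per (2), where
    stop_probs[n-1] is the realized stop probability P_n^k along one
    sample path and A_n^k ~ U[0, 1] i.i.d.  Returns the 1-based stopping
    time, or None if the player never stops within the horizon."""
    rng = rng or random.Random(0)
    for n, p in enumerate(stop_probs, start=1):
        if rng.random() <= p:   # event {A_n^k <= P_n^k}: stop here
            return n
    return None
```

With $P_n^k \in \{0, 1\}$ this reduces to the deterministic classic stopping time; e.g., `randomized_stopping_time([0.0, 0.0, 1.0])` returns 3.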
With these notions, a recursive characterization of the NE strategy of the M-MSG is developed in the following. Particularly, at each timeslot $n$ in the M-MSG, each remaining player chooses an action $a^{i_k}$ from the action space $\mathcal{A} = \{\text{stop}, \text{continue}\}$. For a given pair of $X_n$ and $R_n$, denote by $\tilde{U}_{X_n}^{i_k}\big((a^{i_1}, \mathbf{P}_{n+1}^{i_1}), ..., (a^{i_{|R_n|}}, \mathbf{P}_{n+1}^{i_{|R_n|}})\big)$ the expected⁶ reward of player $i_k$ when each player $i_j \in R_n$ takes action $a^{i_j}$ at timeslot $n$ and follows strategy $\mathbf{P}^{i_j}$ from timeslot $n+1$ on, and by $U_{X_n}^{i_k}(\mathbf{P}^{i_1}, ..., \mathbf{P}^{i_{|R_n|}})$ the expected reward of player $i_k$ under the strategy-tuple $(\mathbf{P}^{i_1}, ..., \mathbf{P}^{i_{|R_n|}})$. By definition, these two quantities admit

$$U_{X_n}^{i_k}(\mathbf{P}^{i_1}, ..., \mathbf{P}^{i_{|R_n|}}) \triangleq \sum_{a^{i_1}, ..., a^{i_{|R_n|}}} \mathbb{P}_{P_n^{i_1}, ..., P_n^{i_{|R_n|}}}\big(a^{i_1}, ..., a^{i_{|R_n|}}\big) \cdot \tilde{U}_{X_n}^{i_k}\big((a^{i_1}, \mathbf{P}_{n+1}^{i_1}), ..., (a^{i_{|R_n|}}, \mathbf{P}_{n+1}^{i_{|R_n|}})\big), \qquad (3)$$

where $\mathbb{P}_{P_n^{i_1}, ..., P_n^{i_{|R_n|}}}\big(a^{i_1}, ..., a^{i_{|R_n|}}\big) = \prod_{j=1}^{|R_n|} \mathbb{P}_{P_n^{i_j}}(a^{i_j})$ due to the independence of the players' decisions.

⁴ Note that whether the player can successfully stop depends further on other players' decisions and the random selection of the third-party.
⁵ If it is further given that $\mathbf{P}^k$ is optimal, then $T(\mathbf{P}^k)$ in (2) equals the optimal stopping time defined in (1).
Definition 3: A strategy-tuple $(\mathring{\mathbf{P}}^{i_1}, ..., \mathring{\mathbf{P}}^{i_K})$ (with $\mathring{\mathbf{P}}^{i_k} \triangleq [\mathring{P}_1^{i_k}, ..., \mathring{P}_N^{i_k}]$) is an NE for an M-MSG with $K$ players if, for all $n$, $X_n$, $R_n$, $i_k \in R_n$, and any arbitrary strategy $\mathbf{P}^{i_k} \in \mathcal{P}$, it admits

$$\mathbb{E}\big[U_{X_n}^{i_k}(\mathring{\mathbf{P}}^{i_1}, ..., \mathring{\mathbf{P}}^{i_k}, ..., \mathring{\mathbf{P}}^{i_{|R_n|}}) \,\big|\, X_n, R_n\big] \ge \mathbb{E}\big[U_{X_n}^{i_k}(\mathring{\mathbf{P}}^{i_1}, ..., \mathbf{P}^{i_k}, ..., \mathring{\mathbf{P}}^{i_{|R_n|}}) \,\big|\, X_n, R_n\big], \qquad (4)$$

where the expectation is over all the randomness in the future. (That is, unilateral deviation from the NE cannot increase the expected reward.)

Remark 3: Note that $\mathring{\mathbf{P}}_n^{i_k} \triangleq [0, ..., 0, \mathring{P}_n^{i_k}, ..., \mathring{P}_N^{i_k}]$ is an NE strategy for player $i_k$ from timeslot $n$ on, and $\mathring{\mathbf{P}}_1^{i_k} = \mathring{\mathbf{P}}^{i_k}$ in this notation.
Definition 4: The Nash value $V_n^{i_k}(X_n, R_n)$ for player $i_k \in R_n$ after observing $X_n$ at timeslot $n$ is defined as its expected reward if all the players follow NE strategies. In particular,

$$V_n^{i_k}(X_n, R_n) \triangleq \mathbb{E}\big[U_{X_n}^{i_k}(\mathring{\mathbf{P}}^{i_1}, ..., \mathring{\mathbf{P}}^{i_{|R_n|}}) \,\big|\, X_n, R_n\big]. \qquad (5)$$

Remark 4: Note that when $K = 1$, the value at timeslot $n$ is given by [14]

$$V_n^1(X_n, 1) = \operatorname*{ess\,sup}_{\mathbf{P}^1 \in \{\mathbf{P} \mid n \le T(\mathbf{P}) \le N\}} \mathbb{E}\big[f\big(X_{T(\mathbf{P}^1)}\big) \,\big|\, X_n\big]. \qquad (6)$$
Now it will be illustrated below how to recursively compute an NE strategy for each player. For ease of presentation, only symmetric games [21] will be considered in this work, in which the reward functions $f$ of all players are identical; non-symmetric games can be handled in a similar way, but with more complicated notation. This assumption is reasonable when homogeneous cloud users and recruiters are considered. Under this symmetric game assumption, it is sufficient to consider $R_n \triangleq |R_n|$ instead of $R_n$.

Denote by $S_n = \sum_{k=1}^{R_n} \mathbb{1}_{\{a^{i_k} = s\}}$ the number of players that decide to stop at timeslot $n$.⁷

⁶ The expectation here is over the random selection of the third-party when multiple players decide to stop at the same time.
⁷ $\mathbb{1}_{\{a^{i_k} = s\}}$ is the indicator function for the event that player $i_k$ takes the stop action.

It can be verified that
the expected⁸ value of $\tilde{U}_{X_n}^{i_k}\big((a^{i_1}, \mathring{\mathbf{P}}_{n+1}^{i_1}), ..., (a^{i_{R_n}}, \mathring{\mathbf{P}}_{n+1}^{i_{R_n}})\big)$ is given by

$$\mathbb{E}\big[\tilde{U}_{X_n}^{i_k}\big((a^{i_1}, \mathring{\mathbf{P}}_{n+1}^{i_1}), ..., (a^{i_{R_n}}, \mathring{\mathbf{P}}_{n+1}^{i_{R_n}})\big) \,\big|\, X_n, R_n\big] = \begin{cases} f(X_n), & \text{when } a^{i_k} = s,\ S_n = 1, \\ f(X_n)/S_n + (1 - 1/S_n)\,\mathbb{E}\big[V_{n+1}^{i_k}(X_{n+1}, R_n - 1) \mid X_n\big], & \text{when } a^{i_k} = s,\ S_n \ge 2, \\ \mathbb{E}\big[V_{n+1}^{i_k}(X_{n+1}, R_n) \mid X_n\big], & \text{when } a^{i_k} = c,\ S_n = 0, \\ \mathbb{E}\big[V_{n+1}^{i_k}(X_{n+1}, R_n - 1) \mid X_n\big], & \text{when } a^{i_k} = c,\ S_n \ge 1, \end{cases} \qquad (7)$$

for $1 \le n < N$, and for $n = N$ it is given by

$$\mathbb{E}\big[\tilde{U}_{X_n}^{i_k}\big((a^{i_1}, \mathring{\mathbf{P}}_{n+1}^{i_1}), ..., (a^{i_{R_n}}, \mathring{\mathbf{P}}_{n+1}^{i_{R_n}})\big) \,\big|\, X_n, R_n\big] = \begin{cases} f(X_n)/S_n, & \text{when } a^{i_k} = s, \\ 0, & \text{when } a^{i_k} = c. \end{cases} \qquad (8)$$

Remark 5: Take the second case in (7) as an example. When player $i_k$ decides to stop (i.e., $a^{i_k} = s$) and there is at least one other player deciding to stop (i.e., $S_n \ge 2$), player $i_k$ wins the uniformly random selection and receives a reward $f(X_n)$ with probability $1/S_n$; and with probability $(1 - 1/S_n)$, some other player wins the selection, and player $i_k$ has to continue to play an $(R_n - 1)$-player M-MSG, in which the best (in the NE sense) it can make is $\mathbb{E}\big[V_{n+1}^{i_k}(X_{n+1}, R_n - 1) \mid X_n\big]$ by definition. The other expressions in (7) and (8) are derived similarly.
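As a sanity check, the case analysis in (7) and (8) can be transcribed directly into code. The sketch below is our own helper, with `c_same` and `c_fewer` standing for the two conditional continuation values:

```python
def expected_payoff(action, S_n, f_x, c_same, c_fewer, last_slot=False):
    """One player's expected reward per (7)-(8).
    action  : 's' (stop) or 'c' (continue)
    S_n     : total number of players deciding to stop at this timeslot
    f_x     : current reward f(X_n)
    c_same  : E[V_{n+1}(X_{n+1}, R_n)     | X_n]  (nobody leaves)
    c_fewer : E[V_{n+1}(X_{n+1}, R_n - 1) | X_n]  (one stopper leaves)"""
    if last_slot:                                    # (8): no future rewards
        return f_x / S_n if action == 's' else 0.0
    if action == 's':
        if S_n == 1:
            return f_x                               # sole stopper wins outright
        return f_x / S_n + (1 - 1 / S_n) * c_fewer   # win the draw, or keep playing
    return c_same if S_n == 0 else c_fewer           # continue; a stopper may leave
```

For instance, a stopper facing one competitor ($S_n = 2$) with $f(X_n) = 4$ and continuation value $c_{fewer} = 3$ expects $4/2 + 0.5 \times 3 = 3.5$.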
As mentioned earlier in this subsection, to compute the NE strategy for player $i_k \in R_n$ at timeslot $n$, when the observation is $X_n$ and $R_n$ players remain, an auxiliary $R_n$-player 2-action matrix game needs to be constructed. Specifically, the payoff function of the auxiliary game for the $n$th timeslot is defined as

$$\tilde{U}_{X_n, R_n}^{i_k}(a^{i_1}, ..., a^{i_{R_n}}) \triangleq \mathbb{E}\big[\tilde{U}_{X_n}^{i_k}\big((a^{i_1}, \mathring{\mathbf{P}}_{n+1}^{i_1}), ..., (a^{i_{R_n}}, \mathring{\mathbf{P}}_{n+1}^{i_{R_n}})\big) \,\big|\, X_n, R_n\big], \qquad (9)$$

with the right-hand side given by (7) and (8).
Denote the mixed-strategy NE of this auxiliary game by $\hat{P}_n^{i_k}$, representing that player $i_k$ takes the stop action with probability $\hat{P}_n^{i_k}$ when the observation is $X_n$ and $R_n$ players remain. The corresponding Nash value of the auxiliary game is given by

$$\tilde{V}_n^{i_k}(X_n, R_n) = \sum_{a^{i_1}, ..., a^{i_{R_n}}} \mathbb{P}_{\hat{P}_n^{i_1}, ..., \hat{P}_n^{i_{R_n}}}\big(a^{i_1}, ..., a^{i_{R_n}}\big) \cdot \mathbb{E}\big[\tilde{U}_{X_n}^{i_k}\big((a^{i_1}, \mathring{\mathbf{P}}_{n+1}^{i_1}), ..., (a^{i_{R_n}}, \mathring{\mathbf{P}}_{n+1}^{i_{R_n}})\big) \,\big|\, X_n, R_n\big]. \qquad (10)$$
The following proposition provides an iterative construction of the NE strategy for the original M-MSG based on $\hat{P}_n^{i_k}$, and shows that the Nash value $\tilde{V}_n^{i_k}(X_n, R_n)$ of the auxiliary game actually equals the Nash value $V_n^{i_k}(X_n, R_n)$ of the M-MSG.

⁸ Here, the expectation is over the future randomness in both selections and observations.
Proposition 1: Given an NE strategy $\mathring{\mathbf{P}}_{n+1}^{i_k} = [0, ..., 0, \mathring{P}_{n+1}^{i_k}, ..., \mathring{P}_N^{i_k}]$ for player $i_k$ in the M-MSG from timeslot $n+1$, the NE strategy from timeslot $n$ is given by

$$\mathring{\mathbf{P}}_n^{i_k} = \hat{P}_n^{i_k} \circ \mathring{\mathbf{P}}_{n+1}^{i_k} \triangleq [0, ..., 0, \hat{P}_n^{i_k}, \mathring{P}_{n+1}^{i_k}, ..., \mathring{P}_N^{i_k}], \qquad (11)$$

where $\circ$ denotes the concatenation operation and $\hat{P}_n^{i_k}$ is an NE of player $i_k$ in the auxiliary game at timeslot $n$ with payoff function given by (9). Consequently,

$$V_n^{i_k}(X_n, R_n) = \tilde{V}_n^{i_k}(X_n, R_n). \qquad (12)$$

Proof: Please see Appendix A.
The specific algorithm for computing the NE strategy is summarized in Algorithm 1. In addition, another interesting property of the M-MSG and the corresponding NE strategy is given below.

Proposition 2: For a fixed time horizon $N$, distribution of $X_1$, and transition probabilities $P_X(X_{n+1}|X_n)$, if all the rewards $f(X_n)$, $1 \le n \le N$, are positive and bounded above by some constant $f_u$, then there exists a constant $K^*$ such that, (1) for any $K \ge K^*$, the NE strategy is to always stop, irrespective of the observation $X_n$ and timeslot $n$ (i.e., $\mathring{P}_n^{i_k}$ is always 1), when there are $K$ players in the game, and (2) the expected reward of each player decreases inversely proportionally to $K$ when $K \ge K^*$ and all players follow the NE strategy.

Proof: Please see Appendix B.

Remark 6: Intuitively, this is because when the number of players $K$ is large, collisions with other players are the major impediment to obtaining rewards, and hence it becomes more profitable to contend aggressively than to wait for a better reward in the future.
Algorithm 1 NE Strategy Computation of Player $i_k$ in the M-MSG

For each possible realization of $X_N$ and $R_N$:
• Compute the payoff function of the auxiliary game using (8) and (9).
• Compute the NE strategy $\hat{P}_N^{i_k}$ of the auxiliary game, and let $\mathring{\mathbf{P}}_N^{i_k} = [0, ..., 0, \hat{P}_N^{i_k}]$.
• Compute the corresponding Nash value $V_N^{i_k}(X_N, R_N)$ using (10) and (12).
End
For $n = N - 1$ down to $1$:
  For each possible realization of $X_n$ and $R_n$:
  • Compute the payoff function of the auxiliary game using (7) and (9).
  • Compute the NE strategy $\hat{P}_n^{i_k}$ of the auxiliary game, and construct $\mathring{\mathbf{P}}_n^{i_k}$ using (11).
  • Compute the corresponding Nash value $V_n^{i_k}(X_n, R_n)$ using (10) and (12).
  End
End
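For the symmetric case, the backward induction of Algorithm 1 can be sketched in Python. The code below is our own illustration, not the authors' implementation: it assumes positive rewards over a finite state space and finds the symmetric mixed NE of each auxiliary game by bisection on the stop/continue indifference gap (assumed monotone in the others' stop probability, since more competition only makes stopping less attractive):

```python
import math
import numpy as np

def binom_pmf(m, n, p):
    """P(Binomial(n, p) = m)."""
    return math.comb(n, m) * p**m * (1.0 - p)**(n - m)

def solve_symmetric_mmsg(f, P, N, K, tol=1e-10):
    """Backward induction for the symmetric M-MSG (sketch of Algorithm 1).
    f : length-S array of rewards f(x) > 0
    P : S x S matrix, P[x, y] = P_X(X_{n+1} = y | X_n = x)
    N : time horizon, K : total number of players
    Returns p_hat[n, x, R] (symmetric NE stop probability) and
    V[n, x, R] (Nash value), for 0-based n and R = 1..K."""
    f = np.asarray(f, dtype=float)
    S = len(f)
    V = np.zeros((N, S, K + 1))
    p_hat = np.zeros((N, S, K + 1))

    # Last timeslot: per (8), stopping dominates, so everyone stops and
    # the winner of the uniform draw gets f(x):  V_N(x, R) = f(x) / R.
    for R in range(1, K + 1):
        p_hat[N - 1, :, R] = 1.0
        V[N - 1, :, R] = f / R

    for n in range(N - 2, -1, -1):                    # backward in time
        for x in range(S):
            cont = P[x] @ V[n + 1]                    # E[V_{n+1}(., R) | x]
            for R in range(1, K + 1):
                c_same, c_fewer = cont[R], cont[R - 1]

                def payoffs(p):
                    # stop/continue payoffs per (7), when the other R-1
                    # players each stop with probability p
                    u_s = u_c = 0.0
                    for m in range(R):                # m = # other stoppers
                        w = binom_pmf(m, R - 1, p)
                        u_s += w * (f[x] / (m + 1)
                                    + (1 - 1 / (m + 1)) * c_fewer)
                        u_c += w * (c_same if m == 0 else c_fewer)
                    return u_s, u_c

                gap = lambda p: payoffs(p)[0] - payoffs(p)[1]
                if gap(0.0) <= 0:
                    p = 0.0                           # waiting dominates
                elif gap(1.0) >= 0:
                    p = 1.0                           # stopping dominates
                else:                                 # interior mixed NE
                    lo, hi = 0.0, 1.0
                    while hi - lo > tol:
                        mid = 0.5 * (lo + hi)
                        lo, hi = (mid, hi) if gap(mid) > 0 else (lo, mid)
                    p = 0.5 * (lo + hi)
                p_hat[n, x, R] = p
                u_s, u_c = payoffs(p)
                V[n, x, R] = p * u_s + (1 - p) * u_c  # Nash value, cf. (10), (12)
    return p_hat, V
```

On the paper's cloud example, one would set `f` to the saving levels $Y_{reg} - Y$ and `P` to the price transition matrix in (13).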
In the following, the developed M-MSG will be applied to
solve the resource access and the participant recruitment problems described in Section II-A and Section II-B, respectively.
B. Proposed Solutions to Cloud-computing and Mobile
Crowdsourcing Networks
To illustrate how to solve the resource access problem described in Section II-A using the proposed M-MSG framework, several notations and assumptions are clarified first; they hopefully do not change the nature of the problem while facilitating our discussion. Particularly, it is assumed that each cloud user generates a new computing task every $N_{cc}$ timeslots that has to be completed before the next task arrives,⁹ and the required execution time for each task is $L$ timeslots. Also, when multiple users request the opportunistic resource at the same time, it is assumed that the provider chooses only one of them uniformly at random to grant access.¹⁰ Moreover, considering that the front-end workload dynamism in a cloud can usually be modeled as a Markov process [3], it is reasonable to assume that the sequence of opportunistic prices $\{Y_n\}_{n=1}^N$ (with $Y_n \in \{1, ..., M\}$ for all $n$ and $M < \infty$ the highest possible price of the opportunistic resource) is also Markovian. The time horizon of this problem is $N = N_{cc} - L$, at the end of which cloud users that fail to access the opportunistic resources have to purchase the regular resources at a usually higher price $Y_{reg}$ to meet the task deadline. To convert the resource access problem into an M-MSG, the reward to a user that successfully accesses the opportunistic resource at timeslot $n$ is defined as $X_n \triangleq Y_{reg} - Y_n$, which represents its saving as compared to using the regular resource and is Markovian, as $Y_n$ is. Then, to minimize its expected per-task payment, a cloud user can simply call Algorithm 1 (with $f(X) = X$) to compute its NE strategy $\mathring{\mathbf{P}}$ and access the cloud resource according to the corresponding multi-player randomized stopping time $T(\mathring{\mathbf{P}})$ given by (2).
Similarly, the participant recruitment problem described in Section II-B can also be solved using the M-MSG framework. Denote the task deadline of the recruiters by $N_{mc}$; the time horizon of the corresponding M-MSG is then set to $N = N_{mc}$. In the participant recruitment problem, if a recruiter fails to offload its task before the task deadline, it receives zero reward. Denote by $c$ the charge to a recruiter for a successfully offloaded task; considering the efficiency and limited budgets of recruiters, it is assumed that a recruiter hires only one user for each task. The reward of a recruiter for successfully offloading a task to a mobile user arriving at the $n$th timeslot (after the task generation) is modeled as $X_n \triangleq Q_n - c$, with $Q_n$ denoting the task-accomplishment capability of that user. The task-accomplishment capabilities $Q_n$ of different mobile users are assumed to be i.i.d. and uniformly distributed [13], and so are the $X_n$. Similarly to the previous problem, a recruiter can invoke Algorithm 1 and (2) to determine optimally when to hire a mobile user.

⁹ This is a valid assumption for many practical applications. For example, in geospatial analysis, the geospatial operator needs to periodically process the ample data collected from a set of sensors, and the processing has to be completed before the arrival of new sensor measurements [22].
¹⁰ Since allowing multiple users to share the same physical resource on the cloud may incur security vulnerabilities [23, 24], exclusive resource assignment is assumed in this work.
Through the above two examples, it can be seen that the
proposed M-MSG offers practical solutions for DT-ORS-Net
applications. A network user only needs to identify a proper
reward function f (Xn ) based on its purpose and set a suitable
time horizon N of the game based on its task completion time
and deadline, and then run Algorithm 1 to obtain its optimal resource access strategy.
IV. SIMULATIONS
In this section, simulation results are presented to verify the
effectiveness of the developed M-MSG when applied to the
resource access and the participant recruitment problems, respectively. All the average values presented below are obtained
from 1 × 10⁶ Monte Carlo runs.
A. Resource Access in Opportunistic Cloud-computing Networks
For the resource access problem, it is assumed that the task
generation period Ncc = 6 (timeslots) and the task execution
time L = 1 (timeslot). The price of the regular resource is set to
$Y_{reg} = 11$, and the price of the opportunistic resource $Y_n$ is assumed to vary over the set $\{1, ..., 10\}$, with state transition probabilities

$$P_Y(Y_{n+1} \mid Y_n) = \begin{cases} 0.3, & \text{when } Y_{n+1} = Y_n - 1, \\ 0.5, & \text{when } Y_{n+1} = Y_n, \\ 0.2, & \text{when } Y_{n+1} = Y_n + 1, \end{cases} \qquad (13)$$

for $2 \le Y_n \le 9$; at the boundaries, $P_Y(Y_{n+1} \mid Y_n) = 0.8$ when $Y_{n+1} = Y_n = 1$ or $10$, and $P_Y(Y_{n+1} \mid Y_n) = 0.2$ when $Y_{n+1} = 2$ and $Y_n = 1$, or $Y_{n+1} = 9$ and $Y_n = 10$; and $Y_1$ is uniformly distributed.
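The chain in (13), together with its boundary behavior, is easy to materialize as a stochastic matrix; a quick Python check of the specification (array indices are shifted so that row $i$ corresponds to price $i+1$):

```python
import numpy as np

M = 10                       # prices Y take values 1..M; row i <-> price i+1
P_Y = np.zeros((M, M))
for i in range(M):
    if i == 0:               # boundary Y_n = 1: stay w.p. 0.8, go up w.p. 0.2
        P_Y[i, i], P_Y[i, i + 1] = 0.8, 0.2
    elif i == M - 1:         # boundary Y_n = 10: stay w.p. 0.8, go down w.p. 0.2
        P_Y[i, i], P_Y[i, i - 1] = 0.8, 0.2
    else:                    # interior states, per (13)
        P_Y[i, i - 1], P_Y[i, i], P_Y[i, i + 1] = 0.3, 0.5, 0.2

assert np.allclose(P_Y.sum(axis=1), 1.0)   # each row is a distribution
```

Note the slight downward drift in the interior (0.3 down versus 0.2 up), which is what makes waiting for a lower price attractive to a lone user.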
Fig. 1. The NE strategy $\mathring{P}_n(X_n, R_n)$ (access probability versus timeslot $n$ and saving $X_n$) for different $R_n$'s in the resource access problem: (a) $R_n = 1$, (b) $R_n = 3$, (c) $R_n = 5$, (d) $R_n = 7$.
For this scenario, the corresponding NE strategy computed
using Algorithm 1 is presented in Fig. 1. In particular, Fig. 1a
shows the NE strategy P̊n (Xn , Rn ) for different timeslots n and saving chances Xn , when the number of remaining players is fixed to Rn = 1.¹¹ It can be noticed that P̊n (Xn , Rn ) = 0
in Fig. 1a, except for the last timeslot n = 5 and when
Xn reaches its maximum 10. This is because, when n = 5
or Xn = 10, it is clear that the best strategy is to stop;
while for other values of n and Xn , it can be verified that
for the specific PY (Yn+1 |Yn ) considered here and Rn = 1,
the expected future saving E[Xn+1 ] is always larger than the
current one Xn ; consequently, the user always prefers waiting
to stopping. In addition, it can be seen from Fig. 1 that, in the
presence of more remaining users, the NE strategy becomes
more aggressive. For example, for the given timeslot n = 3
and saving chance Xn = 1, the corresponding P̊n (Xn , Rn )
increases from zero (in Fig. 1a) to 0.3 (in Fig. 1b) and further
to 1 (in Fig. 1c) when Rn changes from 1 to 3 and to 5; similar
observations can be made for other pairs of n and Xn as well.
The reason is that, in the presence of fewer users, each user
focuses more on waiting for the best saving chance; while in
the presence of more users, competing for the current saving
chance becomes more important for each user, considering the
risk of failing to obtain a reward due to the high probability
of collision. Moreover, when Rn reaches 7, the NE strategy
of a user is to always stop and compete for the current saving
chance (i.e., P̊n (Xn , Rn ) = 1), as shown in Fig. 1d; this is also
the case when more than 7 users remain (cf. Proposition 2).
For the considered resource access problem in opportunistic cloud computing networks, the performance metric of interest is the average saving SA ≜ Yreg − ȲT′ of a user (e.g., user-1).¹² In Fig. 2, for different total numbers of cloud users K, the corresponding average savings SA of user-1 when these K players employ different strategies are compared. In particular, three special cases are investigated: (1) all users adopt the NE strategy P̊ of the corresponding M-MSG; (2) only player-1 deviates from the NE and takes the single-player optimal strategy Psgl instead; and (3) all users follow the single-player optimal strategy Psgl. The corresponding average savings of user-1 in these three cases are denoted by SA1, SA2 and SA3, respectively. The corresponding relative performance gains G1,3 ≜ (SA1 − SA3)/SA3 × 100% and
G1,2 ≜ (SA1 − SA2)/SA2 × 100% are presented in Fig. 3.
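These three cases can be reproduced with a small Monte Carlo driver. The sketch below is a simplified stand-in (i.i.d. uniform rewards and two hand-written strategies in place of the paper's Markov model and the computed P̊ and Psgl; all of these are illustrative assumptions), but it follows the same protocol: simultaneous stop decisions, uniform random selection among simultaneous stoppers, unsuccessful stoppers staying in the game, and forced stopping at the last slot.

```python
import random

# Monte Carlo sketch of the multi-player stopping protocol (simplified).
# strategies[i](n, x, r) returns player i's stop probability given the
# timeslot n, the current reward x, and the number r of remaining players.
def simulate(strategies, N=5, trials=20000, seed=1):
    rng = random.Random(seed)
    totals = [0.0] * len(strategies)
    for _ in range(trials):
        active = list(range(len(strategies)))
        for n in range(1, N + 1):
            if not active:
                break
            x = rng.randint(1, 10)                    # assumed i.i.d. reward
            stoppers = [i for i in active
                        if n == N                     # last slot: must stop
                        or rng.random() < strategies[i](n, x, len(active))]
            if stoppers:
                winner = rng.choice(stoppers)         # uniform tie-breaking
                totals[winner] += x                   # winner leaves with x
                active.remove(winner)                 # losers keep playing
    return [t / trials for t in totals]

aggressive = lambda n, x, r: 1.0                      # always compete
picky = lambda n, x, r: 1.0 if x >= 9 else 0.0        # wait for a near-best x
avg = simulate([aggressive] * 3)   # with 3 players, 5 slots: each wins once,
                                   # so each average is close to E[x] = 5.5
```
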
Several observations are in order. First notice from Fig. 2
that the users can achieve more savings when following the NE
strategy as compared to following the single-player optimal
strategy Psgl; for example, as shown in Fig. 3, when
there are K = 10 users, the NE strategy can provide about
180% extra saving as compared to the single-player optimal
strategy. In contrast, if player-1 unilaterally deviates from the NE,
its expected saving will decrease. In addition, the behavior
change of SA2 (blue curve) around K = 5 observed in Fig. 2
provides some insight into the M-MSG and deserves further
explanation. Particularly, notice that, for a small K (i.e., less than 5 in this example), SA2 is still significantly higher than SA3 and does not degrade much from SA1; while for a larger K, SA2 approaches SA3 as K increases, indicating that it is crucial for player-1 to follow the NE strategy when K is large. Intuitively, this is because, when all the other users follow the NE strategy, they are likely to (successfully) stop at timeslots that differ from the single-player optimal one, as evidenced by Fig. 4, where it can be seen that fewer pieces of resource are wasted when users take the NE strategy; consequently, fewer collisions are brought to player-1 even when player-1 follows Psgl. Nevertheless, as K further increases, the number of remaining users Rn (which is on average larger for a larger K) becomes the dominant factor in causing collisions even when all users follow the NE. In such situations, player-1 needs to stop more aggressively and accept some mildly good saving chance as specified by the NE strategy (cf. Fig. 1), instead of only stopping at the timeslot specified by Psgl.

Another observation from Fig. 2 is that, as predicted by Proposition 2, SA2 follows the hyperbola SA × K = E[Σ_{n=1}^N Xn] for large K.

¹¹Note that when Rn is fixed to 1, P̊n(Xn, Rn) is actually Psgl.
¹²T′ is the actual accessing time to the opportunistic resource of that user, and ȲT′ denotes the empirical average of YT′.

Fig. 2. Comparison of average rewards in the resource access problem (average saving SA vs. number of users K; curves: SA1, all users follow NE; SA2, only user-1 deviates from NE; SA3, all users follow Psgl; reference hyperbola SA × K = E[Σ_{n=1}^N Xn]).
Fig. 3. Relative performance gains in the resource access problem (relative performance gain G (%) vs. number of users K; curves: G1,3, all users follow NE vs. all users follow Psgl; G1,2, only user-1 deviates from NE vs. all users follow Psgl).
Fig. 4. Average numbers of wasted resource pieces (out of the total N = 5 pieces) in the resource access problem, vs. number of users K, for the same three cases.

B. Participant Recruitment in Mobile Crowdsourcing

As for the participant recruitment problem, it is assumed that each task has to be completed within Nmc = 4 timeslots after generation, and the payment c to the mobile user for a successfully offloaded task is set to 1. The task accomplishment capability Q of each mobile user is assumed to follow an i.i.d. uniform distribution over the set {2, ..., 6}.

The corresponding NE strategy for this example is plotted in Fig. 5. Similarly to the previous example, it can be seen from Fig. 5 that recruiters need to take a more aggressive strategy as the number of remaining recruiters Rn increases, and when Rn exceeds 11 the optimal strategy is to always strive to hire the current mobile user.

Fig. 5. The NE strategy for different Rn's in the participant recruitment problem (panels: (a) Rn = 1; (b) Rn = 3; (c) Rn = 7; (d) Rn = 11; each panel plots the access probability P̊n against the timeslot n and Xn).

The average rewards RW ≜ Q_{T″} − c of recruiter-1 (with T″ denoting the actual time when recruiter-1 successfully hires a mobile user) are compared in Fig. 6. As in the previous example, three cases of interest are considered, and the corresponding average rewards of recruiter-1 are denoted by RW1, RW2 and RW3, respectively. Again, it can be seen that the NE strategy provides higher rewards to the recruiter than the single-player optimal strategy, and unilaterally deviating from the NE results in lower rewards. For example, as shown in Fig. 7, when K = 8 the NE strategy provides about a 30% improvement in reward as compared to the single-player optimal strategy.¹³ In addition, RW2 also exhibits behavioral changes as K increases, though the change is not as apparent as that of the cloud computing example with the Markov model, due to the stronger randomness of the i.i.d. model. It can also be seen from Fig. 6 that, again, for large K, RW1 follows the hyperbola RW × K = E[Σ_{n=1}^N Xn]. Moreover, higher resource utilization efficiency can be observed in Fig. 8 when recruiters follow the NE strategy.
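The hyperbola observed in both examples can also be checked numerically. Once K is large enough that every player always stops, exactly one player is served per timeslot, so the entire reward stream E[Σ_{n=1}^N Xn] is handed out regardless of individual strategies, and the uniform random selection splits it evenly, giving a per-player average of E[Σ_{n=1}^N Xn]/K. A toy check (i.i.d. uniform rewards on {1, ..., 10} as an illustrative assumption; the argument only needs the total handed-out reward to be strategy-independent):

```python
import random

# Estimate player 1's average reward when all K players always stop:
# in every slot one uniformly chosen active player takes the reward.
def avg_reward_always_stop(K, N=5, trials=50000, seed=7):
    rng = random.Random(seed)
    total_p1 = 0.0
    for _ in range(trials):
        active = list(range(K))
        for _ in range(N):
            if not active:
                break
            x = rng.randint(1, 10)         # assumed reward draw, E[x] = 5.5
            winner = rng.choice(active)    # everyone competes; uniform pick
            if winner == 0:
                total_p1 += x
            active.remove(winner)
    return total_p1 / trials

# For K >= N the estimate tracks the hyperbola N * E[x] / K.
for K in (6, 8, 10):
    print(K, round(avg_reward_always_stop(K), 2), round(5 * 5.5 / K, 2))
```
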
¹³The relative reward gains in Fig. 7 are defined as G′1,2 = (RW1 − RW2)/RW2 × 100% and G′1,3 = (RW1 − RW3)/RW3 × 100%, respectively.

Fig. 6. Comparison of average rewards in the participant recruitment problem (average reward RW vs. number of recruiters K; curves: RW1, all recruiters follow NE; RW2, only recruiter-1 deviates from NE; RW3, all recruiters follow Psgl; reference hyperbola RW × K = E[Σ_{n=1}^N Xn]).
Fig. 7. Relative performance gains in the participant recruitment problem (relative performance gain G′ (%) vs. number of recruiters K; curves: G′1,2, only recruiter-1 deviates from NE vs. all recruiters follow Psgl; G′1,3, all recruiters follow NE vs. all recruiters follow Psgl).
Fig. 8. Average numbers of wasted resource pieces (out of the total N = 4 pieces) in the participant recruitment problem, vs. number of recruiters K, for the same three cases.

V. CONCLUSIONS

In this work, an M-MSG is developed for the DT-ORS-Net. By following the derived NE strategy of the M-MSG, network users can properly handle the potential competition
from other peers and thus effectively exploit the time diversity
in the opportunistic resource, which in turn enhances the
resource utilization efficiency. Another favorable feature of
the derived resource access strategy is that it ensures that
network users have no incentives to deviate due to its NE
property. Two applications in the cloud-computing and the
mobile crowdsourcing networks are demonstrated to verify
the effectiveness of the proposed method, and corresponding
simulation results show that using the NE strategy of the M-MSG can provide substantial performance gain as compared
to using the single-user optimal strategy. Moreover, simulation
results indicate that it becomes more important for network
users to follow the derived NE strategy as the number of
network users increases.
As the developed M-MSG framework is rather general,
exploring other potential applications in DT-ORS-Net remains
an interesting future direction.
APPENDIX A
PROOF OF PROPOSITION 1

Proof: To show that (11) gives an NE strategy, suppose that Rn players remain at timeslot n and that all of them follow the strategy P̊^{i_k′}_n = P̂^{i_k′}_n ∘ P̊^{i_k′}_{n+1} given by (11), except for player i_k, who plays according to an arbitrary strategy P^{i_k}_n = P^{i_k}_n ∘ P^{i_k}_{n+1}. Then, for each given pair of Xn and Rn, we have

  E[ U^{i_k}_{Xn}( P̊^{i_1}_n, ..., P̊^{i_k}_n, ..., P̊^{i_{Rn}}_n ) | Xn, Rn ]
    = Σ_{a^{i_1}, ..., a^{i_{Rn}}} P_{P̂^{i_1}_n, ..., P̂^{i_k}_n, ..., P̂^{i_{Rn}}_n}[ a^{i_1}, ..., a^{i_{Rn}} ]
        · E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P̊^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn ]        (14)
    ≥ Σ_{a^{i_1}, ..., a^{i_{Rn}}} P_{P̂^{i_1}_n, ..., P^{i_k}_n, ..., P̂^{i_{Rn}}_n}[ a^{i_1}, ..., a^{i_{Rn}} ]
        · E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P̊^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn ]
    ≥ Σ_{a^{i_1}, ..., a^{i_{Rn}}} P_{P̂^{i_1}_n, ..., P^{i_k}_n, ..., P̂^{i_{Rn}}_n}[ a^{i_1}, ..., a^{i_{Rn}} ]        (*)
        · E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn ]
    = E[ U^{i_k}_{Xn}( P̊^{i_1}_n, ..., P^{i_k}_n, ..., P̊^{i_{Rn}}_n ) | Xn, Rn ],        (15)

where the first and the last equalities follow from the definition in (3), and the first inequality holds since P̂^{i_k}_n is an NE of the auxiliary game with payoff function given by (9). To see (*), it is sufficient to show that, for all action-tuples (a^{i_1}, ..., a^{i_{Rn}}), we have

  E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P̊^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn ]
    ≥ E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn ].

To this end, define a binary random variable E^{i_k}_n such that E^{i_k}_n = 1 (E^{i_k}_n = 0) when player i_k decides to stop and successfully receives (when player i_k decides to stop but does not receive, due to the random selection) the reward at timeslot n. Then it follows that

  E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P̊^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn ]
    = E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P̊^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn, E^{i_k}_n = 1 ]   (a)
        · P( E^{i_k}_n = 1 | a^{i_1}, ..., a^{i_{Rn}} )
      + E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P̊^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn, E^{i_k}_n = 0 ]   (b)
        · P( E^{i_k}_n = 0 | a^{i_1}, ..., a^{i_{Rn}} )
    ≥ E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn, E^{i_k}_n = 1 ]   (a′)
        · P( E^{i_k}_n = 1 | a^{i_1}, ..., a^{i_{Rn}} )
      + E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn, E^{i_k}_n = 0 ]   (b′)
        · P( E^{i_k}_n = 0 | a^{i_1}, ..., a^{i_{Rn}} )
    = E[ Ũ^{i_k}_{Xn}( (a^{i_1}, P̊^{i_1}_{n+1}), ..., (a^{i_k}, P^{i_k}_{n+1}), ..., (a^{i_{Rn}}, P̊^{i_{Rn}}_{n+1}) ) | Xn, Rn ],        (16)

where it is clear that (a) = (a′), since on the event E^{i_k}_n = 1 the reward of player i_k does not depend on the players' future strategies. In addition, we have (b) ≥ (b′), since

  (b) = E[ E[ U^{i_k}_{X_{n+1}}( P̊^{i_1}_{n+1}, ..., P̊^{i_k}_{n+1}, ..., P̊^{i_{Rn}}_{n+1} ) | X_{n+1}, R_{n+1} ] | Xn, Rn, E^{i_k}_n = 0, a^{i_1}, ..., a^{i_{Rn}} ]
      ≥ E[ E[ U^{i_k}_{X_{n+1}}( P̊^{i_1}_{n+1}, ..., P^{i_k}_{n+1}, ..., P̊^{i_{Rn}}_{n+1} ) | X_{n+1}, R_{n+1} ] | Xn, Rn, E^{i_k}_n = 0, a^{i_1}, ..., a^{i_{Rn}} ] = (b′),        (17)

where the first and the last equalities are due to the property of conditional expectation and the two facts that (F1) the distributions of {Xm}_{m≥n+1} are independent of the players' strategies, and (F2) the distribution of R_{n+1} depends only on Rn and the players' current actions, but not on their future strategies; the inequality is due to (4).

After proving that P̊^{i_k}_n given in (11) is an NE for the M-MSG, V^{i_k}_n(Xn, Rn) = Ṽ^{i_k}_n(Xn, Rn) follows readily from their definitions.
APPENDIX B
PROOF OF PROPOSITION 2

Proof: Since the symmetric NE is considered, the expected future Nash values E[ V^{i_k}_{n+1}(X_{n+1}, Rn − 1) | Xn ] for given Xn and Rn are identical for all players and hence can be (loosely) bounded by N · f_u/(Rn − 1), which is monotonically decreasing with respect to Rn. Therefore, there exists a large R∗ such that f(Xn) ≥ N · f_u/(R∗ − 1) ≥ E[ V^{i_k}_{n+1}(X_{n+1}, Rn − 1) | Xn ] holds for all Rn ≥ R∗ and all Xn. It then follows that, when K ≥ K∗ ≜ R∗ + N − 1, it is ensured that Rn ≥ R∗ over the course of the M-MSG (since at most one player leaves at each timeslot), and hence the current reward f(Xn) is always no smaller than the expected future Nash value E[ V^{i_k}_{n+1}(X_{n+1}, Rn − 1) | Xn ] throughout the game. Then, according to (7) and (8), it can be verified that the stop action always provides no less reward than the continue action. This completes the proof of the first part of Proposition 2.

As shown above, the NE strategy of each player is to always stop when K ≥ K∗. Therefore, the total expected reward assigned to all players is the fixed value E[ Σ_{n=1}^N f(Xn) ] when K ≥ K∗ and all players follow the NE strategy. The second part of Proposition 2 then follows readily from the fact that the selection by the third party is uniformly at random.

REFERENCES
[1] I. F. Akyildiz, B. F. Lo, and R. Balakrishnan. Cooperative spectrum
sensing in cognitive radio networks: A survey. Physical Communication
(Elsevier) Journal, 4(1):40–62, 2011.
[2] X. He, H. Dai, and P. Ning. A Byzantine attack defender in cognitive radio networks: The conditional frequency check. IEEE Trans. Wireless
Commun., 12(5):2512–2523, 2013.
[3] T. He, S. Chen, H. Kim, L. Tong, and K. Lee. Scheduling parallel
tasks onto opportunistically available cloud resources. In Proc. IEEE
CLOUD, Honolulu, HI, Jun. 2012.
[4] S. Narasimhan, D. McIntyre, F. Wolff, Y. Zhou, D. Weyer, and S. Bhunia.
A supply-demand model based scalable energy management system for
improved energy utilization efficiency. In Proc. of IEEE IGCC, Chicago,
IL, Aug. 2010.
[5] S. Lee, G. Pan, J. Park, M. Gerla, and S. Lu. Secure incentives for
commercial ad dissemination in vehicular networks. In Proc. of ACM
MobiHoc, Montreal, Quebec, Canada, Sept. 2007.
[6] K. Letaief and Y. Zhang. Dynamic multiuser resource allocation and
adaptation for wireless systems. IEEE Wireless Commun., 13(4):38–47,
2006.
[7] F. Meshkati, H. V. Poor, and S. Schwartz. Energy-efficient resource
allocation in wireless networks. IEEE Signal Process. Mag., 24(3):58–
68, 2007.
[8] M. Ismail and W. Zhuang. A distributed multi-service resource allocation
algorithm in heterogeneous wireless access medium. IEEE J. Sel. Areas
Commun., 30(2):425–432, 2012.
[9] Q. Ye, M. Al-Shalash, C. Caramanis, and J. Andrews. Distributed
resource allocation in device-to-device enhanced cellular networks. IEEE
Trans. Commun., 63(2):441–454, 2015.
[10] M. Kantarcioglu, A. Bensoussan, and S. Hoe. Impact of security risks
on cloud computing adoption. In Proc. of IEEE Allerton, Monticello,
IL, Sept. 2011.
[11] T. Jing, S. Zhu, H. Li, X. Xing, X. Cheng, Y. Huo, R. Bie, and T. Znati.
Cooperative relay selection in cognitive radio networks. IEEE Trans.
Veh. Technol., 64(5):1872–1881, May 2015.
[12] A. Iwayemi, P. Yi, X. Dong, and C. Zhou. Knowing when to act:
An optimal stopping method for smart grid demand response. IEEE
Network, 25(5):44–49, 2011.
[13] N. Cheng, N. Lu, N. Zhang, T. Yang, X. Shen, and J. Mark. Vehicle-assisted device-to-device data delivery for smart grid. IEEE Trans. Veh.
Technol., 2015. in press.
[14] H. V. Poor and O. Hadjiliadis. Quickest detection. Cambridge University
Press Cambridge, 2009.
[15] T. Ferguson. Optimal stopping and applications. 2012. http://www.math.ucla.edu/~tom/Stopping/Contents.html.
[16] X. Gong, T. Chandrashekhar, J. Zhang, and H. V. Poor. Opportunistic
cooperative networking: To relay or not to relay? IEEE J. Sel. Areas
Commun., 30(2):307–314, 2012.
[17] K. Szajowski. Markov stopping games with random priority. Zeitschrift für Operations Research, 39(1):69–84, 1994.
[18] M. Armbrust, A. Fox, R. Griffith, A. Joseph, R. Katz, A. Konwinski,
G. Lee, D. Patterson, A. Rabkin, and I. Stoica. A view of cloud
computing. Communications of the ACM, 53(4):50–58, 2010.
[19] Z. He, J. Cao, and X. Liu. High quality participant recruitment in
vehicle-based crowdsourcing using predictable mobility. In Proc. of
IEEE INFOCOM, Hong Kong, China, Apr. 2015.
[20] D. Ramsey and K. Szajowski. Three-person stopping game with players
having privileges. Journal of Mathematical Sciences, 105(6):2599–2608,
2001.
[21] S. Cheng, D. M. Reeves, Y. Vorobeychik, and M. Wellman. Notes on
equilibria in symmetric games. In Proc. of Workshop on Game Theory
and Decision Theory, 2004.
[22] C. Y. Huang. GeoPubSubHub: A geospatial publish/subscribe architecture for the world-wide sensor web. Ph.D. thesis, University of Calgary,
2014.
[23] T. Ristenpart, E. Tromer, H. Shacham, and S. Savage. Hey, you, get
off of my cloud: Exploring information leakage in third-party compute
clouds. In Proc. of ACM CCS, Chicago, IL, Nov. 2009.
[24] Y. Zhang, A. Juels, A. Oprea, and M. Reiter. Homealone: Co-residency
detection in the cloud via side-channel analysis. In Proc. of IEEE SP,
Oakland, CA, May 2011.