Continuous Time Random Matching∗
Darrell Duffie† , Lei Qiao‡ , Yeneng Sun§
Preliminary draft; this version: May 28, 2017
Abstract
We show the existence of independent random matching of a large population in a continuous-time dynamical system, where the matching intensities can be general non-negative jointly continuous functions on the space of type distributions and the time line. In particular, we construct a continuum of independent continuous-time Markov processes that is derived from random mutation, random matching, random type changing, and random break-up with time-dependent and distribution-dependent parameters. It follows from the exact law of large numbers that the deterministic evolution of the agents' realized type distribution for such a continuous-time dynamical system can be determined by a system of differential equations. The results provide the first mathematical foundation for a large literature on continuous-time search-based models of labor markets, money, and over-the-counter markets for financial products.
∗ An earlier version of this work was presented at the Workshop "Modeling Market Dynamics and Equilibrium - New Challenges, New Horizons," Hausdorff Research Institute for Mathematics, University of Bonn, August 19-22, 2013; at the 15th SAET Conference on Current Trends in Economics, University of Cambridge, July 27-31, 2015; and at the 11th World Congress of the Econometric Society, Montreal, August 17-21, 2015.
† Graduate School of Business, Stanford University, Stanford, CA 94305-5015, USA. e-mail: [email protected]
‡ School of Economics, Shanghai University of Finance and Economics, 777 Guoding Road, Shanghai 200433, China. e-mail: [email protected]
§ Department of Economics, National University of Singapore, 1 Arts Link, Singapore 117570. e-mail: [email protected]
Contents
1 Introduction
2 Mathematical Preliminaries
3 The Model
4 The Main Results
  4.1 Continuous time random matching with enduring partnerships
  4.2 Continuous time random matching with immediate break-up
5 Applications
  5.1 The DMP model in labor economics
  5.2 Over-The-Counter financial markets
  5.3 The Kiyotaki-Wright money model in continuous time
6 Proof of Theorem 1
7 Proof of Theorem 2
  7.1 Static internal matching model
  7.2 Hyperfinite dynamic matching model
  7.3 Properties of the hyperfinite dynamic matching model
  7.4 Existence of continuous time random matching
  7.5 Proofs of Lemmas 1-5
    7.5.1 Proof of Lemma 1
    7.5.2 Some additional lemmas
    7.5.3 Proof of Lemma 2
    7.5.4 Proof of Lemma 3
    7.5.5 Proof of Lemma 4
    7.5.6 Proof of Lemma 5
1 Introduction
Continuous-time independent random matching in a continuum population is a convenient and
powerful modeling approach that is applied extensively in the economics literature.1 Throughout, the literature exploits the idea that independence should lead, by the law of large numbers, to an almost-surely constant cross-sectional distribution of population types (or, in more
general models, deterministic time-varying cross-sectional type distributions). Mathematical
foundations for this result, however, have not been available. The existence of a model with
independent matching has simply been assumed, along with the desired law of large numbers.
The main aim of this paper is to provide the first treatment of the existence of continuous-time independent random matching in a continuum population.2 In particular, we construct a joint agent-probability space on which there is a continuum of independent continuous-time Markov chains describing the types of each agent, respecting properties derived structurally from random mutation, pair-wise random matching between agents, random break-up of pairs, and random type changes induced by matching, and with general time-dependent and distribution-dependent parameters. We state and prove an exact law of large numbers for this continuous-time dynamical system, by which there is an almost-surely constant (or, more generally, deterministically evolving) cross-sectional distribution of types.
For a finite space S of agent types, let p(t) denote the cross-sectional distribution of types at time t. That is, pk(t) denotes the fraction of agents that are currently of type k ∈ S. A key primitive of the model is the intensity θkl(p(t), t) with which a specific agent of type k is matched to some agent of type l. More precisely, letting α(i, t) denote the stochastic type of agent i at time t, the cumulative number of matches of agent i with partners of type l is a counting process with intensity θα(i,t),l(p(t), t). By allowing these intensities to depend on the underlying cross-sectional type distribution p(t), the model accommodates the "matching-function" approach that has been popular in the labor literature. For technical reasons, we assume that the matching intensity θkl(p(t), t) depends continuously on p(t) and t. The specified intensity function θ must satisfy the identity that the aggregate rate pk(t)θkl(p(t), t) of matches of agents of type k to agents of type l is always equal to the aggregate rate pl(t)θlk(p(t), t) of matches of agents of type l to agents of type k.
1 See, for example, [9], [10], [23], [41], [43], [44], and [52] in monetary theory; [1], [2], [20], [26], [28], [33], [35], [34], [38], [40], [45], and [46] in labor economics; [12], [13], [27], [29], [30], [49], [50], and [51] in over-the-counter financial markets; [3], [8], and [24] in game theory; and [7], [25], [15], and [14] in social learning theory. The same sort of "ansatz" is applied without mathematical foundations in the natural sciences, including genetics and biological molecular dynamics, as explained by [6], [19], and [42].
2 In [22], Gilboa and Matsui presented a particular matching model of two countable populations with a countable number of encounters in the time interval [0, 1), where both the agent space N and the sample space are endowed with purely finitely additive measures.
In many practical applications, and in many labor-market models, once two agents are
matched they may form a long-term relationship rather than immediately breaking up. For
instance, when a worker and a firm meet, they form a job match with some probability. At this
point, the worker may stop searching for new jobs until he or she becomes unemployed again.
(See, for example, [37].) To this end, for any pair of agents of types k and l, we introduce a
probability ξkl that an enduring partnership is formed at the time of the match. If formed, this
partnership ends at a time whose arrival intensity ϑkl may change over time with changes in
the cross-sectional distribution of agent types. The special case without enduring partnerships,
more popular in monetary economics and financial models, is obtained by taking ξkl = 0.
Random-matching models often allow for the random mutation of agent types. We allow
for independent random mutation, along the lines of [17], [18], and [16]. In some models, a
matched pair of agents may have their respective types changed, possibly randomly, by the
match or by the break-up. We allow this, and permit the post-break-up type probability
distributions to depend on the current type distribution pt .
All of the relevant parameters of the continuous-time dynamical system may be time-dependent. For the special time-homogeneous case, we obtain a stationary deterministic joint cross-sectional distribution of unmatched agent types and pairs of currently matched types.
Previous work ([17], [18] and [16]) provides related results for discrete-time Markov independent dynamical systems with random matching. This continuous-time setting involves
an extra underlying layer of analysis based on methods of nonstandard analysis for hyperfinite dynamical systems.3 This allows mutation, pairwise random matching, and random
match-induced type changes to occur at successive infinitesimal time periods. The final results,
however, are provided in the form of standard continuous-time processes (e.g., [39]) that are
defined on the usual real time line.
The remainder of the paper is organized as follows. In Section 2, we provide some mathematical preliminaries. Section 3 defines an independent continuous-time dynamical system
with random mutation, random partial matching, random break-up and random type changing. The main results on the existence and exact law of large numbers for a continuous-time
dynamical system with enduring partnerships are presented in the first subsection of Section 4.
When enduring partnerships are not possible, the notation and structure of the continuous-time
dynamical system are much simpler. In order to allow easier access to this case, we present it
separately in Subsection 4.2. In Section 5, we present three examples of applications to some
main models in labor markets, over-the-counter financial markets, and monetary theory. The
proofs of Theorems 1 and 2 are presented respectively in Sections 6 and 7.
3 For the basics of nonstandard analysis, see the first three chapters of [32].
2 Mathematical Preliminaries
Let (I, I, λ) be an atomless probability space representing the space of agents. Let (Ω, F, P )
be a sample probability space. We fix a filtration {Ft : t ≥ 0} of sub-σ-algebras of F satisfying
the usual conditions ([39]). We may view Ft as the information available at time t.
We model a continuum of independent stochastic processes based on the index space
(I, I, λ) and sample space (Ω, F, P ). As noted in Proposition 2.1 of [47], joint measurability
with respect to the usual product probability space is in general incompatible with independence. A Fubini extension, defined as follows, is proposed in [47] to handle such a problem.
Definition 1 A probability space (I × Ω, W, Q) extending the usual product space (I × Ω, I ⊗ F, λ ⊗ P) is said to be a Fubini extension of (I × Ω, I ⊗ F, λ ⊗ P) if, for any real-valued Q-integrable function g on (I × Ω, W), the functions g_i = g(i, ·) and g_ω = g(·, ω) are integrable, respectively, on (Ω, F, P) for λ-almost all i ∈ I and on (I, I, λ) for P-almost all ω ∈ Ω; and if, moreover, ∫_Ω g_i dP and ∫_I g_ω dλ are integrable, respectively, on (I, I, λ) and on (Ω, F, P), with

∫_{I×Ω} g dQ = ∫_I ∫_Ω g_i dP dλ = ∫_Ω ∫_I g_ω dλ dP.

To reflect the fact that the probability space (I × Ω, W, Q) has (I, I, λ) and (Ω, F, P) as its marginal spaces, as required by the Fubini property, this space is denoted by (I × Ω, I ⊠ F, λ ⊠ P).
Definition 2 Let S = {1, 2, . . . , K} be a finite set of agent types and J be a special type representing no matching.

(i) A full matching φ is a one-to-one mapping from I onto I such that, for each i ∈ I, φ(i) ≠ i and φ(φ(i)) = i.

(ii) A (partial) matching ψ is a mapping from I to I such that, for some subset B of I, the restriction of ψ to B is a full matching on B, and ψ(i) = i on I \ B. This means that agent i is matched with agent ψ(i) for i ∈ B, whereas any agent i not in B is unmatched, that is, ψ(i) = i.

(iii) A random matching π is a mapping from I × Ω to I such that π_ω is a matching for each ω ∈ Ω.
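For intuition, the defining properties of a partial matching are easy to exercise in a finite-population analogue. The sketch below is purely illustrative (the function name, the uniform pairing mechanism, and the parameter values are our own, not the paper's construction): it draws a random partial matching on ten agents and checks the involution property of Definition 2.

```python
import random

def random_partial_matching(agents, match_prob=0.5, rng=None):
    """Draw a partial matching psi in the sense of Definition 2(ii):
    psi is the identity on unmatched agents, and an involution without
    fixed points on the matched subset B."""
    rng = rng or random.Random()
    pool = [i for i in agents if rng.random() < match_prob]  # candidate set B
    rng.shuffle(pool)
    psi = {i: i for i in agents}              # everyone starts unmatched
    for a, b in zip(pool[0::2], pool[1::2]):  # pair the pool off two at a time
        psi[a], psi[b] = b, a                 # an odd leftover stays unmatched
    return psi

psi = random_partial_matching(range(10), rng=random.Random(1))
assert all(psi[psi[i]] == i for i in psi)     # psi(psi(i)) = i for every agent
```

Drawing one such ψ per sample point ω gives a random matching in the sense of Definition 2(iii).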
3 The Model
Let Ŝ = S × (S ∪ {J}) be the set of extended types. An agent with an extended type of the form (k, l) has type k ∈ S and is currently matched to some agent of type l in S. If an agent's extended type is of the form (k, J), then the agent is "unmatched." The space ∆̂ of extended type distributions is the set of probability distributions p̂ on Ŝ satisfying p̂(k, l) = p̂(l, k) for all k and l in S. A time is an element of R_+, the set of non-negative real numbers, with its Borel σ-algebra B.
The main objects of our model are α : I × Ω × R+ → S, π : I × Ω × R+ → I, and
g : I × Ω × R+ → S ∪ {J} specifying, for any agent i, state ω, and time t, the agent’s type
α(i, ω, t), the agent’s partner π(i, ω, t), and the partner’s type g(i, ω, t). As usual, we let α(i)
(or αi ) and g(i) (or gi ) denote the type processes for agent i and her partners, and we let
α(i, t) and g(i, t) denote the random types of agent i and of the partner of agent i at time t,
respectively. Our objective is to model the type processes α and g, as well as random matching
between agents in a manner consistent with given parameters for independent random mutation,
independent directed random matching among agents, independent random type changes at
each matching and break-up, and independent random break-up for matched pairs.
The parameters of the model are the initial extended type distribution p̂_0 ∈ ∆̂ and, for any k and l in S:

(i) A continuous mutation intensity function η_kl : ∆̂ × R_+ → R_+, specifying the intensity η_kl(p̂, t), given the extended type distribution p̂ at time t, with which any type-k agent mutates to type l.

(ii) A continuous matching intensity function θ_kl : ∆̂ × R_+ → R_+, specifying the intensity θ_kl(p̂, t) at time t with which any type-k agent is matched with a type-l agent, if the cross-sectional agent extended type distribution at time t is p̂ ∈ ∆̂. This function satisfies the mass-balancing requirement p̂_kJ · θ_kl(p̂, t) = p̂_lJ · θ_lk(p̂, t), so that the aggregate rate of matches of type-k agents to type-l agents is equal to the aggregate rate of matches of type-l agents to type-k agents.

(iii) A continuous function ξ_kl : ∆̂ × R_+ → [0, 1] specifying the probability ξ_kl(p̂, t) that a match between a type-k agent and a type-l agent causes a long-term relationship between the two agents, given the extended type distribution p̂ at time t.

(iv) A continuous function σ_kl : ∆̂ × R_+ → M(S × S) specifying the probability distribution σ_kl(p̂, t) of the new types of a type-k agent and a type-l agent who have been matched, conditional on the event that the match causes an enduring relationship between them, given the extended type distribution p̂ at time t.

(v) A continuous function ς_kl : ∆̂ × R_+ → M(S) specifying the probability distribution ς_kl(p̂, t) of the new type of a type-k agent who is matched with a type-l agent at time t, conditional on the event that there is no enduring relationship and the match is dissolved immediately, given the extended type distribution p̂ at time t.

(vi) A continuous function ϑ_kl : ∆̂ × R_+ → R_+ specifying the break-up rate ϑ_kl(p̂, t) of the long-term relationship between a type-k agent and a type-l agent, given the extended type distribution p̂ at time t. If the type-k agent and type-l agent eventually break up at time s, they emerge with new types drawn from the probability distributions ς_kl(p̂_s, s) and ς_lk(p̂_s, s), respectively.
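The mass-balancing requirement in item (ii) is a concrete restriction that any candidate specification of θ must satisfy. A minimal numerical sketch (the helper name and the example values are ours, for illustration only):

```python
def check_mass_balance(p_unmatched, theta, tol=1e-12):
    """Verify p_hat_kJ * theta_kl == p_hat_lJ * theta_lk for all k, l.

    p_unmatched[k] is the mass of unmatched type-k agents and theta[k][l] the
    matching intensity, both evaluated at the same (p_hat, t)."""
    K = len(p_unmatched)
    return all(abs(p_unmatched[k] * theta[k][l] - p_unmatched[l] * theta[l][k]) <= tol
               for k in range(K) for l in range(K))

# one valid specification: theta[k][l] proportional to the unmatched mass of type l
p_u = [0.2, 0.3]
theta = [[2 * p_u[0], 2 * p_u[1]],
         [2 * p_u[0], 2 * p_u[1]]]
assert check_mass_balance(p_u, theta)
```

Intensities proportional to the partner type's unmatched mass, as in this example, balance automatically because p̂_kJ · c p̂_lJ is symmetric in k and l.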
For given parameters (p̂_0, η, θ, ξ, σ, ς, ϑ), a continuous-time dynamical system D with enduring partnerships, if it exists, is a triple (α, π, g) defined by the following properties:

1. α(i, ω, t) and g(i, ω, t) are (I ⊠ F) ⊗ B-measurable. The stochastic processes α_i and g_i have sample paths that are right-continuous with left limits (RCLL), a standard regularity property of stochastic processes, found, for example, in [39]. For any t ∈ R_+, π(·, ·, t) is a random matching on I × Ω. For any i ∈ I and t ∈ R_+,

g(i, ω, t) = α(π(i, ω, t), ω, t)   if π(i, ω, t) ≠ i,
g(i, ω, t) = J                     if π(i, ω, t) = i,

for P-almost all ω ∈ Ω, where α(J, ω, t) is defined to be J.
2. The cross-sectional extended type distribution p̂(t) at time t is defined by

p̂_kl(t) = λ({i : α(i, t) = k, g(i, t) = l}).

Let p̌(t) = E(p̂(t)). For any agent i ∈ I, the extended type process (α(i), g(i)) of agent i is a continuous-time Markov chain in S × (S ∪ {J}) whose generator (transition-rate matrix) Q is defined at time t by:

Q^t_{(k_1 l_1)(k_2 l_2)} = η_{k_1 k_2}(p̌(t), t) δ_{l_1}(l_2) + η_{l_1 l_2}(p̌(t), t) δ_{k_1}(k_2),   (1)

Q^t_{(k_1 l_1)(k_2 J)} = ϑ_{k_1 l_1}(p̌(t), t) [ς_{k_1 l_1}(p̌(t), t)](k_2),   (2)

Q^t_{(k_1 J)(k_2 l_2)} = Σ_{l_1=1}^{K} θ_{k_1 l_1}(p̌(t), t) ξ_{k_1 l_1}(p̌(t), t) [σ_{k_1 l_1}(p̌(t), t)](k_2, l_2),   (3)

Q^t_{(k_1 J)(k_2 J)} = η_{k_1 k_2}(p̌(t), t) + Σ_{l_1=1}^{K} θ_{k_1 l_1}(p̌(t), t) (1 − ξ_{k_1 l_1}(p̌(t), t)) [ς_{k_1 l_1}(p̌(t), t)](k_2),   (4)

Q^t_{(kl)(kl)} = − Σ_{(k′,l′) ≠ (k,l)} Q^t_{(kl)(k′l′)},   (5)

where δ_{k_1}(k_2) = 0 for k_1 ≠ k_2, whereas δ_{k_1}(k_1) = 1.
3. The stochastic processes {(αi , gi ), i ∈ I} are essentially pairwise independent in the sense
that for λ-almost all i ∈ I, (αi , gi ) and (αj , gj ) are independent for λ-almost all j ∈ I.
4 The Main Results
4.1 Continuous time random matching with enduring partnerships
The exact law of large numbers (Theorem 2.16 of [47]) will be used to show that the cross-sectional type distribution p̂(t) is deterministic almost surely, and equal to the solution p̌(t) of the ordinary differential equation for the expected cross-sectional type distribution, E(p̂(t)), given by

dp̌(t)/dt = p̌(t) Q^t,   p̌(0) = p̂_0.   (6)
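Because the generator Q^t in (6) may itself depend on the current distribution, the ODE is in general nonlinear, but it is straightforward to solve numerically. A minimal Euler-integration sketch (the function name and the toy constant generator are illustrative assumptions, not part of the paper's construction):

```python
import numpy as np

def evolve_type_distribution(p0, Q_of, T, dt=1e-3):
    """Euler-integrate d p(t)/dt = p(t) Q(p(t), t) for a row distribution p.

    Q_of(p, t) must return a K x K transition-rate matrix whose rows sum to
    zero; distribution dependence enters through its first argument."""
    p = np.array(p0, dtype=float)
    steps = int(T / dt)
    for n in range(steps):
        p = p + dt * (p @ Q_of(p, n * dt))
    return p

# toy generator (hypothetical, constant): symmetric switching between 2 types
Q = lambda p, t: np.array([[-1.0, 1.0], [1.0, -1.0]])
p = evolve_type_distribution([0.9, 0.1], Q, T=10.0)
# p approaches the uniform stationary distribution (0.5, 0.5)
```

Since each row of Q sums to zero, the total mass p·1 is preserved along the flow, so p(t) remains a probability distribution.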
We are now ready to state the properties of an independent continuous-time dynamical system with random mutation, directed random matching, random type changing, and random break-up.
Theorem 1 If D is a continuous-time dynamical system with parameters (p̂_0, η, θ, ξ, σ, ς, ϑ), then:

(1) For P-almost all ω ∈ Ω, the cross-sectional type process (α_ω, g_ω) is a continuous-time Markov chain with generator Q.

(2) For P-almost all ω ∈ Ω, the realized cross-sectional extended type distribution p̂(t) is equal to the expected cross-sectional type distribution E(p̂(t)).

(3) Suppose that the parameters (η, θ, ξ, σ, ς, ϑ) are time-independent. Then there exists a probability distribution p̂∗ on Ŝ such that the dynamical system D with parameters (p̂∗, η, θ, ξ, σ, ς, ϑ) has p̂∗ as a stationary type distribution. In particular, the realized cross-sectional type distribution p̂(t) at any time t is almost surely p̂∗, and the generator Q^t(p̂(t)) is equal to Q^0(p̂∗).
The following result establishes the general existence of a continuous-time dynamical system with random mutation, directed random matching, random type changing, and random break-up.
Theorem 2 For any given parameters (p̂_0, η, θ, ξ, σ, ς, ϑ), there exists a Fubini extension on which is defined a dynamical system D with these parameters.
4.2 Continuous time random matching with immediate break-up
The assumption that agents break up immediately after meeting is widely used in the economics literature, including that for finance and monetary economics. In this section, we consider a special case of the general model presented in the previous section in which agents do not form long-term partnerships. This special case is obtained by setting the enduring-partnership probabilities to 0 and letting all the other parameters depend on the type distribution rather than the extended type distribution. Because the notation and structure are much simpler in this case, we state the model and results separately, for the convenience of the reader and for applications.
As in the previous section, let S = {1, 2, . . . , K} be a finite set of agent types, ∆ the
set of probability measures on S, and R+ the set of non-negative real numbers with its Borel
σ-algebra B. Since agents do not have enduring partnerships, the parameters and results are
simplified dramatically.
The parameters of the model are the initial type distribution p0 ∈ ∆ and, for any k and
l in S:
(i) A continuous mutation intensity function ηkl : R+ → R+ specifying the intensity ηkl (t)
at time t with which any type k agent mutates to type l.
(ii) A continuous matching intensity function θkl : ∆ × R+ → R+ specifying the intensity
θkl (p, t) at time t with which any type-k agent is matched with a type-l agent, if the
cross-sectional agent type distribution at time t is p ∈ ∆. This function satisfies the mass-balancing requirement p_k · θ_kl(p, t) = p_l · θ_lk(p, t), so that the aggregate rate of matches of type-k agents to type-l agents is equal to the aggregate rate of matches of type-l agents to type-k agents.
(iii) A continuous function ςkl : R+ → ∆ specifying the probability distribution ςkl (t) of the
new type of a type-k agent who has been matched at time t to a type-l agent. We denote
ςklr (t) = [ςkl (t)]({r}).
For given parameters (p0 , η, θ, ς), a continuous-time dynamical system D with independent random mutation, independent directed random matching and independent random type
changing is required to have the following properties:
1. The cross-sectional type distribution p(t) at time t is defined by p_k(t) = λ({i : α(i, t) = k}), with initial condition p(0) = p_0. Let p̄(t) = E(p(t)). For λ-almost every agent i, the type process α(i) of agent i is a continuous-time Markov chain in S whose transition intensity from any state k to any state r ≠ k is given almost surely by

R^t_{kr} = η_{kr}(t) + Σ_{l=1}^{K} θ_{kl}(p̄(t), t) ς_{klr}(t).   (7)

2. The type α(i, ω, t) is (I ⊠ F) ⊗ B-measurable.
3. The agents’ stochastic type processes {αi : i ∈ I} are essentially pairwise independent in
the sense that for λ almost all i ∈ I, αi and αj are independent for λ almost all j ∈ I.
The exact law of large numbers will be used to show that the cross-sectional type distribution p(t) is deterministic almost surely, and given by the solution p̄(t) of the ordinary differential equation for the expected cross-sectional type distribution,

dp̄(t)/dt = p̄(t) R^t,   p̄(0) = p_0,

where R^t_{kr} is specified by (7) and R^t_{kk} = − Σ_{l≠k} R^t_{kl}. That is, the probability distribution of each agent's type evolves according to the same dynamics as those of the cross-sectional type distribution. These distributions differ only with respect to their initial conditions.
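To make the construction of R^t in (7) concrete, the following sketch builds it for a hypothetical two-type specification and integrates the ODE above. All ingredients (the constant c, the mutation matrix, the rule that an agent adopts its partner's type with probability 1/2) are illustrative assumptions, not the paper's; the meeting intensities θ_kl(p) = c p_l satisfy the mass-balancing requirement because p_k θ_kl(p) = c p_k p_l = p_l θ_lk(p).

```python
import numpy as np

c = 2.0                                       # hypothetical meeting-rate scale
eta = np.array([[0.0, 0.1], [0.2, 0.0]])      # hypothetical mutation intensities eta_kl
varsigma = np.zeros((2, 2, 2))                # varsigma[k, l, r]: new-type law
for k in range(2):
    for l in range(2):
        varsigma[k, l, k] += 0.5              # keep own type with probability 1/2
        varsigma[k, l, l] += 0.5              # adopt the partner's type otherwise

def R(p):
    """Transition-rate matrix R^t of equation (7) for this specification."""
    # off-diagonal: eta_kr + sum_l theta_kl(p) * varsigma_klr with theta_kl = c*p_l
    M = eta + np.einsum('l,klr->kr', c * p, varsigma)
    np.fill_diagonal(M, 0.0)                  # a "transition" k -> k is not a jump
    M -= np.diag(M.sum(axis=1))               # diagonal makes each row sum to zero
    return M

p = np.array([0.8, 0.2])
dt = 1e-3
for _ in range(50_000):                       # Euler-integrate dp/dt = p R(p)
    p = p + dt * (p @ R(p))
assert abs(p.sum() - 1.0) < 1e-8              # p remains a probability distribution
```

For this symmetric type-adoption rule the meeting terms cancel in the aggregate, so the rest point is determined by the mutation rates alone, at p = (2/3, 1/3).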
The following results include the exact law of large numbers and the existence of a
stationary deterministic type distribution, which follow from Theorem 1 directly.
Corollary 1 If D is an independent dynamical system with parameters (p0 , η, θ, ς), then
(1) For P -almost all ω ∈ Ω, the cross-sectional type process αω is a Markov chain with
generator Rt .
(2) For P -almost all ω ∈ Ω, the realized cross-sectional type distribution p(t) is equal to the
expected cross-sectional type distribution p̄t .
(3) Suppose that the parameters (η, θ, ς) are time-independent. Then there exists a probability distribution p∗ on S such that any independent dynamical system D with parameters (p∗, η, θ, ς) has p∗ as a stationary type distribution. In particular, the realized cross-sectional type distribution p(t) at any time t is almost surely p∗, and the generator R^t is constant and equal to R^0.
The following result, which states the existence of a continuous-time dynamical system
with immediate break-up, is a direct corollary of Theorem 2.
Corollary 2 For any given parameters (p0 , η, θ, ς), there exists a Fubini extension on which is
defined an independent dynamical system D with these parameters.
5 Applications
This section offers some illustrative applications, drawn respectively from labor economics, financial economics, and monetary theory.
5.1 The DMP model in labor economics
Our first example is from [37]. The agents are workers and firms. Each firm has a single job
position. Our results for continuous-time random matching with enduring partnerships provide
a foundation for the equilibrium unemployment rate, modeled as Equation (1) of [37].
The type space of the agents is S = {W, F, D}. Here, W, D, and F represent, respectively, workers, dormant firms, and firms that are active in the labor market. Dormant firms are neither matched with a worker nor immediately open to a match. The proportion4 of agents that are workers is w > 0.

In Section 1 of [37], the fraction v(t) of unmatched active firms at time t is exogenously given. Following the notation in Section 3, we can let v(t) = E(p̂^t_{FJ}).
There are frictions in the labor market, which make it impossible for all the unemployed workers to find jobs instantaneously. At time t, for any unemployed worker, the next vacant firm arrives with intensity θ̄_1(p̂^t, t); for any vacant firm, the next unemployed worker arrives with intensity θ̄_2(p̂^t, t), where p̂^t is the extended type distribution at time t.5 When a firm and a worker meet,6 they form a (long-term) job match with probability ξ̄. Furthermore, each matched job-worker pair faces a randomly timed separation at an exogenous intensity ϑ̄.
Viewed in terms of our model, the corresponding parameters are given as follows. Vacant and dormant firms may mutate into each other, while workers and active firms do not mutate. Mutation intensities7 are defined, for any k and l ∈ S, by

η_kl(p̂, t) = max(v̇ − (w − p̂^t_{WJ}) ϑ̄ + ξ̄ θ̄_1(p̂, t) p̂^t_{WJ}, 0) / p̂_{DJ}   if (k, l) = (D, F),
η_kl(p̂, t) = max(−v̇ + (w − p̂^t_{WJ}) ϑ̄ − ξ̄ θ̄_1(p̂, t) p̂^t_{WJ}, 0) / p̂_{FJ}   if (k, l) = (F, D),
η_kl(p̂, t) = 0   otherwise,

where v̇ is the derivative of v with respect to t.
Matching occurs only between unemployed workers and firms with vacant jobs. For matching intensities, we define

θ_kl(p̂, t) = θ̄_1(p̂, t)   if (k, l) = (W, F),
θ_kl(p̂, t) = θ̄_2(p̂, t)   if (k, l) = (F, W),
θ_kl(p̂, t) = 0   otherwise.
4 In [37], the measure of workers is 1. In order to stay with our convention that an agent space has total mass 1, we rescale without loss of generality so that the measure of workers is w.
5 Obviously, one needs the mass-balance identity p̂^t_{WJ} · θ̄_1(p̂^t, t) = p̂^t_{FJ} · θ̄_2(p̂^t, t).
6 Details regarding the job-creation mechanism are provided in Section 1 of [37].
7 The mutation intensities proposed here guarantee that the population of firms with unfilled job openings is always v(t), which is exogenous, as given in [37].
For enduring-relationship probabilities, we define, for any k, l ∈ S,

ξ^t_{kl} = ξ̄   if (k, l) = (W, F) or (F, W),
ξ^t_{kl} = 0   otherwise.

The match-induced type-change probabilities are

σ^t_{kl}(k′, l′) = δ_k(k′) δ_l(l′)   and   ς^t_{kl}(k′) = δ_k(k′).

The mean separation rates are

ϑ^t_{kl} = ϑ̄   if (k, l) = (W, F) or (F, W),
ϑ^t_{kl} = 0   otherwise.
By Equation (6),

dE(p̂^t_{WJ})/dt = Σ_{(k,l)∈Ŝ} Q^t_{(kl)(WJ)} E(p̂^t_{kl})
               = (w − E(p̂^t_{WJ})) ϑ̄ − ξ̄ θ̄_1(E(p̂^t), t) E(p̂^t_{WJ}).

Letting u(t) be the fraction of unemployed workers at time t, as in [37], we have

u(t) = (1/w) E(p̂^t_{WJ}).

We can therefore derive that

du(t)/dt = (1 − u(t)) ϑ̄ − ξ̄ θ̄_1 u(t),

which is Equation (1) in [37].
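This unemployment ODE is easy to check numerically: from any starting point it settles at the rest point ϑ̄ / (ϑ̄ + ξ̄ θ̄_1). A minimal sketch, with hypothetical constant parameter values of our own choosing:

```python
# Numerical check of du/dt = (1 - u)*vartheta - xi*theta1*u with hypothetical
# constant parameters: separation rate, match probability, meeting intensity.
vartheta, xi, theta1 = 0.02, 0.7, 0.5

u, dt = 0.1, 0.01
for _ in range(200_000):                      # Euler-integrate the flow
    u += dt * ((1.0 - u) * vartheta - xi * theta1 * u)

u_star = vartheta / (vartheta + xi * theta1)  # closed-form rest point
assert abs(u - u_star) < 1e-6                 # the ODE settles at u_star
```

The rest point balances inflows into unemployment, (1 − u)ϑ̄, against outflows, ξ̄ θ̄_1 u, which is the steady-state Beveridge-curve logic of [37].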
5.2 Over-The-Counter financial markets
Our second example is from [12]. There are two classes of agents, investors and marketmakers. Each agent consumes a single nonstorable consumption good that is used as a numeraire. The masses8 of investors and marketmakers are each 1/2.

Investors can hold 0 or 1 unit of the asset. A fraction s of investors are initially endowed with 1 unit of the asset. An investor is characterized as an asset owner or non-owner, and also by an intrinsic preference for ownership that is high (h) or low (l). A low-preference owner bears a holding cost; a high-type investor has no such holding cost. A low type switches from low to high with intensity λ_u, and switches back with intensity λ_d.

8 In [12], the measures of investors and marketmakers are both 1. In order to work with a probability agent space, we rescale these masses to 1/2.
The type space is thus S = {ho, hn, lo, ln, m}, where the letters "h" and "l" designate the investor's intrinsic preference, "o" and "n" indicate whether the investor owns the asset or not, and "m" indicates a marketmaker. Marketmakers never change their type. When a high-preference non-owner meets a low-preference owner, they endogenously choose to trade the asset, generating a change of types for each. Other investor-to-investor matches generate no trade, and thus no type changes. Trades generated by contact with a marketmaker will be characterized shortly.
Investors meet by independent random search, as follows. At the successive event times of a Poisson process with some intensity parameter λ, an investor contacts another investor chosen at random, uniformly from the entire investor population. Thus, letting

μ_k(t) = p_k(t) / (p_ho(t) + p_hn(t) + p_lo(t) + p_ln(t)) = 2 p_k(t)

denote the relative fraction of investors (among all the investors) of type k at time t, the intensity for any investor of contacting an investor of type k is λ μ_k(t). In [12], contact is directional, in the sense that the event of a type-k investor contacting a type-l investor is distinguished from the event of a type-l investor contacting a type-k investor. Thus the total meeting intensity for a specific type-k investor with some type-l investor is

θ_kl(p(t), t) = 2λ μ_l(t) = 4λ p_l(t).

This directional-contact formulation implies that the derived matching intensity function θ automatically satisfies the mass-balance condition. Directional contact also allows in principle for a difference in the terms of trade in the asset bargaining outcome, depending on which of a pair contacts the other, but that difference plays no role here.
Each investor also contacts some randomly drawn marketmaker at the event times of a
Poisson process with a fixed intensity of ρ.
Viewed in terms of our model, the corresponding parameters are given as follows. For mutation intensities, we have, for any k and r ∈ S such that k ≠ r,

η_kr(p, t) = λ_u   if (k, r) = (lo, ho) or (ln, hn),
η_kr(p, t) = λ_d   if (k, r) = (ho, lo) or (hn, ln),
η_kr(p, t) = 0   otherwise.
For matching intensities, we have, for any k and r ∈ S,

θ_kr(p, t) = 4λ p_r   if k, r ∈ {ho, lo, hn, ln},
θ_kr(p, t) = ρ   if k ∈ {ho, lo, hn, ln} and r = m,
θ_kr(p, t) = ρ μ_r(t) = 2ρ p_r   if k = m and r ∈ {ho, lo, hn, ln},
θ_kr(p, t) = 0   if k = r = m,

where the intensity 2ρ p_r in the third case is implied by the mass-balance requirement p_r · ρ = p_m · θ_mr with p_m = 1/2.
Because agents in this model do not form long-term partnerships after matching, we have enduring-relationship parameters

ξ_kr ≡ 0,   σ^t_{kr}(k′, r′) = δ_k(k′) δ_r(r′),   and   ϑ_kr ≡ 0   for any k, k′, r, r′ ∈ S.
When a type-hn investor meets a type-lo investor, the type-hn investor, having purchased the asset, becomes a type-ho investor. Likewise, the type-lo investor becomes a type-ln investor.

When a type-hn investor meets a marketmaker, the community of marketmakers may be experiencing an excess of buyer contacts relative to seller contacts. Marketmakers are able to instantly lay off their trades in the inter-dealer market, but do not absorb excess order flow into their own accounts. In that case, each marketmaker will ration trades by randomizing whether it will accept the position of a contacting buyer. Specifically, at marketmaker contact, a type-hn investor becomes a type-ho investor with probability

min(p_hn, p_lo) / p_hn.

Similarly, when a type-lo investor meets a marketmaker, the type-lo investor becomes a type-ln investor with probability

min(p_hn, p_lo) / p_lo.
Thus,

[ς_kr(p, t)](k′) = δ_ho(k′)   if (k, r) = (hn, lo),
[ς_kr(p, t)](k′) = δ_ln(k′)   if (k, r) = (lo, hn),
[ς_kr(p, t)](k′) = (min(p_hn, p_lo)/p_k) δ_ho(k′) + (1 − min(p_hn, p_lo)/p_k) δ_hn(k′)   if (k, r) = (hn, m),
[ς_kr(p, t)](k′) = (min(p_hn, p_lo)/p_k) δ_ln(k′) + (1 − min(p_hn, p_lo)/p_k) δ_lo(k′)   if (k, r) = (lo, m),
[ς_kr(p, t)](k′) = δ_k(k′)   otherwise.

By Corollary 1,

Q^t_{(ho)(lo)} = λ_d,
Q^t_{(hn)(lo)} = 0,
Q^t_{(lo)(lo)} = −λ_u − 4λ p̄^t_{hn} − ρ min(p̄^t_{hn}, p̄^t_{lo}) / p̄^t_{lo},
Q^t_{(ln)(lo)} = 0.
Therefore,

dp̄^t_{lo}/dt = Q^t_{(ho)(lo)} p̄^t_{ho} + Q^t_{(hn)(lo)} p̄^t_{hn} + Q^t_{(lo)(lo)} p̄^t_{lo} + Q^t_{(ln)(lo)} p̄^t_{ln}
             = λ_d p̄^t_{ho} − (λ_u + 4λ p̄^t_{hn} + ρ min(p̄^t_{hn}, p̄^t_{lo}) / p̄^t_{lo}) p̄^t_{lo}
             = λ_d p̄^t_{ho} − λ_u p̄^t_{lo} − 4λ p̄^t_{hn} p̄^t_{lo} − ρ min(p̄^t_{hn}, p̄^t_{lo}).
Because μ_k(t) = 2 p̄^t_k, we have

dμ_lo(t)/dt = λ_d μ_ho(t) − λ_u μ_lo(t) − 2λ μ_hn(t) μ_lo(t) − ρ min(μ_hn(t), μ_lo(t)),

which is Equation (3) of [12]. The remaining population-distribution evolution equations of [12] follow similarly.
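The four coupled investor-fraction equations can be integrated together. In the sketch below, the parameter values are hypothetical, and the three companion equations are reconstructed from the trade and switching events described above (only the μ_lo equation is quoted from [12]); two conservation laws, total investor mass and asset supply μ_ho + μ_lo = s, serve as sanity checks.

```python
import numpy as np

lam_u, lam_d, lam, rho, s = 0.2, 0.2, 1.0, 1.5, 0.4   # hypothetical values

def rhs(mu):
    ho, hn, lo, ln = mu
    trade = 2 * lam * hn * lo + rho * min(hn, lo)  # direct + intermediated trades
    return np.array([
        -lam_d * ho + lam_u * lo + trade,   # d mu_ho/dt: buyers hn become ho
        -lam_d * hn + lam_u * ln - trade,   # d mu_hn/dt
         lam_d * ho - lam_u * lo - trade,   # d mu_lo/dt: Equation (3) of [12]
         lam_d * hn - lam_u * ln + trade,   # d mu_ln/dt
    ])

mu = np.array([s / 2, (1 - s) / 2, s / 2, (1 - s) / 2])  # owners hold fraction s
dt = 1e-3
for _ in range(100_000):                    # simple Euler integration
    mu = mu + dt * rhs(mu)

assert abs(mu.sum() - 1.0) < 1e-9           # fractions remain a distribution
assert abs(mu[0] + mu[2] - s) < 1e-9        # asset supply mu_ho + mu_lo = s
```

Every trade moves one owner and one non-owner across preference classes in opposite directions, which is why the single `trade` term appears with alternating signs and the two conservation laws hold exactly.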
5.3 The Kiyotaki-Wright money model in continuous time
Our third example is the continuous-time version of the Kiyotaki-Wright Model from [52]. The
economy is populated by a continuum of infinitely-lived agents of unit total mass. Agents are
from two regions, Home and Foreign. Let n ∈ (0, 1) be the size of Home population.
Within each of the two regions, there are equal proportions of agents with K respective
traits, for some K ≥ 3. The trait space is {1, . . . , K, 1∗ , . . . , K ∗ }, where i denotes a home trait
and i∗ denotes a foreign trait.
There are K kinds of indivisible commodities in each region. The commodity space is
also {1, . . . , K, 1∗ , . . . , K ∗ }. An agent with trait k derives utility only from consumption of
commodity k or k∗ . After he consumes commodity k, he is able to produce one and only one
unit of commodity k + 1 (mod K) costlessly, and can also store up to one unit of his production
good costlessly. He can neither produce nor store other types of goods.
An agent of type k has random preferences between goods of types k and k∗. One can think of goods k and k∗ as a pair of goods with different features over which a consumer's taste switches from time to time. Let l describe the preference state of an agent with type k in which he prefers his local consumption good k over k∗,9 and let n be the preference state in which he prefers the non-local consumption good k∗. The preference state process of each agent is a two-state Markov chain with constant transition intensity bln from l to n and intensity bnl from n to l.
In addition to the commodities described above, there are two distinguishable fiat monies,
objects with zero intrinsic worth, which we call the Home currency 0 and the Foreign currency
0∗ . Each currency is indivisible and can be stored costlessly in amounts of up to one unit
9 When k is a foreign trait i∗, k∗ is simply i.
by any agent, provided that the agent is not carrying his own production good or the foreign
currency. This implies that, at any date, the inventory of each agent consists of one unit of the
Home currency, one unit of the Foreign currency, or one unit of his production good, but does
not include more than one of these three objects in total at any one time.
Agents meet pairwise randomly. Any agent’s potential trading partners arrive at the
event times of a Poisson process with parameter ν.
The type space S is the set of ordered tuples of the form (a, b, c), where
a ∈ {1, . . . , K, 1∗ , . . . , K ∗ }, b ∈ {0, 1, . . . , K, 0∗ , 1∗ , . . . , K ∗ }, c ∈ {l, n}.
For example, an agent of type (1, 2, l) is a trait-1 agent who holds one unit of the type-2 good
and who prefers local goods.
An agent chooses a trading strategy to maximize his expected discounted utility, taking
as given the strategies of other agents and the distribution of inventories. In [52], the author
focused on pure strategies that depend only on an agent’s trait, preference state, and the objects
that he and his counterparty have as inventories. Thus, the trading strategy of a trait-a agent
with preference state c can be described simply as
$$\tau^{ac}_{bb'} = \begin{cases} 1 & \text{if he agrees to trade object } b \text{ for object } b'\\ 0 & \text{otherwise,}\end{cases}$$
where b and b′ are in {0, 1, . . . , K, 0∗, 1∗, . . . , K∗}.
We can apply our model of continuous time random matching with immediate breakup to give a mathematical foundation for the matching model in [52] by choosing suitable
parameters (η, θ, ϑ) governing random mutation, random matching, and match-induced type
changing. The mutation intensities are
$$\eta_{(a_1,b_1,c_1)(a_2,b_2,c_2)} = \begin{cases} \delta_{a_1}(a_2)\,\delta_{b_1}(b_2)\, b_{ln} & \text{if } c_1 = l,\; c_2 = n\\ \delta_{a_1}(a_2)\,\delta_{b_1}(b_2)\, b_{nl} & \text{if } c_1 = n,\; c_2 = l\\ 0 & \text{otherwise.}\end{cases}$$
The matching intensities are simply proportional, given by
$$\theta_{(a_1,b_1,c_1)(a_2,b_2,c_2)}(p) = \nu\, p_{(a_2,b_2,c_2)}$$
for a cross-sectional agent type distribution p ∈ ∆.
Because the consumption traits of agents do not change, a matched agent cannot change to a type with a different trait. Suppose that agent i is of type (a1, b1, c1) and is matched with agent j of type (a2, b2, c2). The probability that agent i changes type to (a3, b3, c3) is ς(a1,b1,c1)(a2,b2,c2)(a3, b3, c3). Because the consumption traits and preferences of agents are not changed by meetings, ς(a1,b1,c1)(a2,b2,c2)(a3, b3, c3) = 0 if (a1, c1) ≠ (a3, c3).
If an agent of type (a1 , b1 , c1 ) obtains one unit of good a1 or of a∗1 , then she will consume
the good and produce one unit of good a1 + 1. Thus, there is no agent with type (a1 , a1 , c1 ) or
(a1 , a∗1 , c1 ) in the market.
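As a small concreteness check, the type space and the two exclusions just described can be enumerated directly. The sketch below (with the illustrative choice K = 3) counts the ordered tuples (a, b, c) that survive; the helper names are ours, not the paper's notation.

```python
# Enumerate the Kiyotaki-Wright type space (a, b, c) for K = 3 and apply the
# two exclusions described above; all names are illustrative.
K = 3

traits = [str(i) for i in range(1, K + 1)] + [f"{i}*" for i in range(1, K + 1)]
objects = ["0", "0*"] + traits          # two currencies plus the 2K goods
prefs = ["l", "n"]

def consumable(a):
    # the goods a trait-a agent would consume: a and its mirror a*
    return {a, a[:-1] if a.endswith("*") else a + "*"}

# No agent holds a good he would consume immediately: types (a, a, c) and
# (a, a*, c) are excluded from the type space.
types = [(a, b, c) for a in traits for b in objects for c in prefs
         if b not in consumable(a)]

print(len(types))  # 2K traits x 2K feasible objects x 2 states = 8K^2 = 72
```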
If b3 ≠ b1, trade occurs between these two types of agents, so
$$\varsigma_{(a_1,b_1,c_1)(a_2,b_2,c_2)}(a_1, b_3, c_1) = \begin{cases} \tau^{a_1 c_1}_{b_1 b_2}\, \tau^{a_2 c_2}_{b_2 b_1}\, \delta_{b_2}(b_3) & \text{if } b_2 \neq a_1 \text{ or } a_1^*\\[2pt] \tau^{a_1 c_1}_{b_1 b_2}\, \tau^{a_2 c_2}_{b_2 b_1}\, \delta_{a_1+1}(b_3) & \text{if } b_2 = a_1 \text{ or } a_1^* \text{ and } b_1 \neq a_1 + 1\\[2pt] 0 & \text{if } b_2 = a_1 \text{ or } a_1^* \text{ and } b_1 = a_1 + 1,\end{cases}$$
which implies that
$$\varsigma_{(a_1,b_1,c_1)(a_2,b_2,c_2)}(a_1, b_1, c_1) = 1 - \sum_{b_3 \neq b_1} \varsigma_{(a_1,b_1,c_1)(a_2,b_2,c_2)}(a_1, b_3, c_1).$$
6 Proof of Theorem 1
Let $\beta_i = (\alpha_i, g_i)$ be the extended type process for agent $i$. Note that for any $t > t_1 > \dots > t_n$ and $\Delta t > 0$, if $(k,l) \neq (k',l')$,
$$\begin{aligned}
&\lambda \boxtimes P\big(\beta^{t+\Delta t}(i,\omega) = (k',l'),\, \beta^t(i,\omega) = (k,l),\, \beta^{t_1}(i,\omega) = (k_1,l_1), \dots, \beta^{t_n}(i,\omega) = (k_n,l_n)\big)\\
&= \int_I P\big(\beta_i^{t+\Delta t}(\omega) = (k',l'),\, \beta_i^t(\omega) = (k,l),\, \beta_i^{t_1}(\omega) = (k_1,l_1), \dots, \beta_i^{t_n}(\omega) = (k_n,l_n)\big)\, d\lambda\\
&= \int_I P\big(\beta_i^t(\omega) = (k,l),\, \beta_i^{t_1}(\omega) = (k_1,l_1), \dots, \beta_i^{t_n}(\omega) = (k_n,l_n)\big)\, P\big(\beta_i^{t+\Delta t}(\omega) = (k',l') \mid \beta_i^t(\omega) = (k,l),\, \beta_i^{t_1}(\omega) = (k_1,l_1), \dots, \beta_i^{t_n}(\omega) = (k_n,l_n)\big)\, d\lambda\\
&= \int_I P\big(\beta_i^t(\omega) = (k,l),\, \beta_i^{t_1}(\omega) = (k_1,l_1), \dots, \beta_i^{t_n}(\omega) = (k_n,l_n)\big)\, P\big(\beta_i^{t+\Delta t}(\omega) = (k',l') \mid \beta_i^t(\omega) = (k,l)\big)\, d\lambda\\
&= \int_I P\big(\beta_i^t(\omega) = (k,l),\, \beta_i^{t_1}(\omega) = (k_1,l_1), \dots, \beta_i^{t_n}(\omega) = (k_n,l_n)\big)\, \big(Q^t_{(kl)(k'l')}\,\Delta t + R_i(\Delta t)\big)\, d\lambda\\
&= Q^t_{(kl)(k'l')}\,\Delta t\; \lambda \boxtimes P\big(\beta^t(i,\omega) = (k,l),\, \beta_i^{t_1}(\omega) = (k_1,l_1), \dots, \beta_i^{t_n}(\omega) = (k_n,l_n)\big) + \int_I P\big(\beta^t(i,\omega) = (k,l),\, \beta_i^{t_1}(\omega) = (k_1,l_1), \dots, \beta_i^{t_n}(\omega) = (k_n,l_n)\big)\, R_i(\Delta t)\, d\lambda,
\end{aligned}$$
where $\lim_{\Delta t \to 0} R_i(\Delta t)/\Delta t = 0$. Note that the generator and the initial distribution determine the finite-dimensional distributions; hence $R_i$ has at most $K(K+1)$ different forms, since there are only $K(K+1)$ initial extended types. Then
$$\lim_{\Delta t \to 0} \frac{\int_I P\big(\beta^t(i,\omega) = (k,l),\, \beta_i^{t_1}(\omega) = (k_1,l_1), \dots, \beta_i^{t_n}(\omega) = (k_n,l_n)\big)\, R_i(\Delta t)\, d\lambda}{\Delta t} = 0.$$
Therefore,
$$\lambda \boxtimes P\big(\beta^{t+\Delta t}(i,\omega) = (k',l') \mid \beta^t(i,\omega) = (k,l),\, \beta^{t_1}(i,\omega) = (k_1,l_1), \dots, \beta^{t_n}(i,\omega) = (k_n,l_n)\big) = Q^t_{(kl)(k'l')}\,\Delta t + o(\Delta t).$$
Note that the right hand side does not depend on t1 , . . . , tn . Then β viewed as a stochastic
process with sample space (I × Ω, I ⊠ F, λ ⊠ P ) is also a Markov process with transition rate
matrix Q.
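The expansion above (one-step transition probabilities equal to $Q^t\Delta t$ up to $o(\Delta t)$) can be illustrated numerically for a constant generator. The sketch below uses a hypothetical 3-state generator, not the model's Q, and checks that composing the one-step kernels $I + Q\Delta t$ over finer and finer grids is stable and preserves probability.

```python
# Hypothetical constant generator on three states (rows sum to zero);
# an illustration of P(dt) = I + Q*dt + O(dt^2), not the model's Q.
Q = [[-0.5, 0.3, 0.2],
     [0.1, -0.4, 0.3],
     [0.2, 0.2, -0.4]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transition(t, steps):
    # Compose the one-step kernels I + Q*dt over a grid of the given fineness.
    dt = t / steps
    P = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    one_step = [[(1.0 if i == j else 0.0) + Q[i][j] * dt for j in range(3)]
                for i in range(3)]
    for _ in range(steps):
        P = matmul(P, one_step)
    return P

coarse, fine = transition(1.0, 100), transition(1.0, 10000)
err = max(abs(coarse[i][j] - fine[i][j]) for i in range(3) for j in range(3))
print(err < 1e-2)                                                    # True
print(all(abs(sum(row) - 1) < 1e-9 for row in transition(1.0, 1000)))  # True
```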
By the exact law of large numbers in Theorem 2.16 of [47], we know that for $P$-almost all $\omega \in \Omega$, $(\beta^{t_1}_\omega, \dots, \beta^{t_n}_\omega)$ and $(\beta^{t_1}, \dots, \beta^{t_n})$ (viewed as random vectors) have the same distribution. Note that the finite-dimensional distributions determine whether a process is a Markov chain, and also determine its transition rate matrix. Then for $P$-almost all $\omega \in \Omega$, $\beta_\omega$ is also a Markov chain with transition rate matrix $Q^t$ at time $t$.

Note that for any $\omega \in \Omega$, the realized extended type distribution $\hat p^t(\omega)$ is also the distribution of $\beta^t_\omega$. By (1), $\hat p^t(\omega)$ is equal to the distribution of $\beta^t$ $P$-almost surely. Then $\hat p^t(\omega)$ is equal to $E\hat p^t$ $P$-almost surely.
For part (3), for any $\hat p, \hat q \in \hat\Delta$, define $\hat q Q(\hat p)$ such that
$$[\hat q Q(\hat p)]_{kl} = \sum_{(k',l') \in \hat S} \hat q_{k'l'}\, Q(\hat p)_{(k'l')(kl)}.$$
It is sufficient to show that there exists a $\hat p^* \in \hat\Delta$ such that $\hat p^* Q(\hat p^*) = 0$. Since $Q_{(kl)(k'l')}$ is continuous on $\hat\Delta$, and $\hat\Delta$ is compact, we can find a positive real number $c$ such that $c\,|Q_{(kl)(k'l')}(\hat p)| \le 1$ for any $\hat p \in \hat\Delta$ and $(k,l), (k',l') \in \hat S$. It is easy to see that $\hat p Q(\hat p) = 0$ is equivalent to the statement that $\hat p$ is a fixed point of $f(\hat p) \triangleq \hat p + c\,\hat p Q(\hat p)$. Note that
$$f(\hat p)_{kl} = \hat p_{kl} + \sum_{(k',l') \in \hat S} c\,\hat p_{k'l'}\, Q(\hat p)_{(k',l')(k,l)} = \big(1 + c\,Q(\hat p)_{(k,l)(k,l)}\big)\hat p_{kl} + \sum_{(k',l') \neq (k,l)} c\,\hat p_{k'l'}\, Q(\hat p)_{(k',l')(k,l)},$$
where $1 + c\,Q(\hat p)_{(k,l)(k,l)} \ge 0$ and $Q(\hat p)_{(k',l')(k,l)} \ge 0$ if $(k',l') \neq (k,l)$. Then $[f(\hat p)]_{kl} \ge 0$ for any $(k,l) \in \hat S$. One can check that
$$\sum_{(k,l) \in \hat S} f(\hat p)_{(k,l)} = \sum_{(k,l) \in \hat S} \hat p_{kl} + c \sum_{(k',l') \in \hat S} \hat p_{k'l'} \sum_{(k,l) \in \hat S} Q(\hat p)_{(k',l')(k,l)} = \sum_{(k',l') \in \hat S} \hat p_{(k',l')} = 1,$$
since each row of $Q(\hat p)$ sums to zero.
Hence, $f$ is a continuous function from $\hat\Delta$ to $\hat\Delta$. By Kakutani's Fixed Point Theorem, there exists a $\hat p^* \in \hat\Delta$ such that $\hat p^* + c\,\hat p^* Q(\hat p^*) = \hat p^*$. Therefore, $\hat p^*$ is a stationary distribution.
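When the generator does not depend on the distribution, the map $f(\hat p) = \hat p + c\,\hat p Q$ used above can be iterated directly: $I + cQ$ is then a stochastic matrix, and the iteration converges to a stationary $\hat p^*$ with $\hat p^* Q = 0$. The sketch below uses a hypothetical constant 3-type generator; the general case, with $Q$ depending on $\hat p$, is not covered by this simple iteration.

```python
# Iterate f(p) = p + c * (p Q) for a constant generator Q whose rows sum to
# zero; c is chosen so that c * |Q_kk| <= 1, making I + cQ a stochastic
# matrix. The fixed point p* satisfies p* Q = 0 (a stationary distribution).
Q = [[-0.6, 0.4, 0.2],
     [0.3, -0.5, 0.2],
     [0.1, 0.4, -0.5]]
c = 1.0  # c * max |Q_kk| = 0.6 <= 1

def f(p):
    pQ = [sum(p[k] * Q[k][j] for k in range(3)) for j in range(3)]
    return [p[j] + c * pQ[j] for j in range(3)]

p = [1 / 3, 1 / 3, 1 / 3]
for _ in range(2000):
    p = f(p)

residual = max(abs(sum(p[k] * Q[k][j] for k in range(3))) for j in range(3))
print(abs(sum(p) - 1) < 1e-9, residual < 1e-9)  # True True
```

The iteration preserves the simplex exactly, mirroring the two facts checked in the proof: nonnegativity of $f(\hat p)$ and conservation of total mass.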
7 Proof of Theorem 2
This section is organized as follows. Lemma 1 in Subsection 7.1 presents a static model of internal random matching, together with estimates of the relevant matching probabilities. This static matching model is used in the construction of a hyperfinite dynamic matching model in Subsection 7.2. Lemmas 2 – 5 in Subsection 7.3 state some properties of the hyperfinite dynamic matching model. Based on Lemmas 2 – 5, Theorem 2 is proved in Subsection 7.4. For the convenience of the reader, we leave the technical proofs of Lemmas 1 – 5 to Subsection 7.5. Elementary nonstandard analysis is used extensively in this section; the reader is referred to [32] for the details.
7.1 Static internal matching model

Let $I = \{1, \dots, \hat M\}$ be a hyperfinite set with $\hat M$ an unlimited hyperfinite integer in ${}^*\mathbb N_\infty$, $\mathcal I_0$ the internal power set on $I$, and $\lambda_0$ the internal counting probability measure on $\mathcal I_0$.
Lemma 1 Let $(I, \mathcal I_0, \lambda_0)$ be the hyperfinite internal counting probability space. Then there exists a hyperfinite internal set $\Omega$ with its internal power set $\mathcal F_0$ such that for any initial internal type function $\alpha^0$ from $I$ to $S$ and initial internal partial matching $\pi^0$ from $I$ to $I \cup \{J\}$ with $g^0 = \alpha^0 \circ \pi^0$, and for any internal matching probability function $q$ from $S \times S$ to ${}^*\mathbb R_+$ with $\sum_{r \in S} q_{kr} \le 1$ and $\hat\rho_{kJ}\, q_{kl} = \hat\rho_{lJ}\, q_{lk}$ for any $k, l \in S$, where $\hat\rho = \lambda_0 (\alpha^0, g^0)^{-1}$ is the internal extended type distribution, there exists an internal random matching $\pi$ from $I \times \Omega$ to $I$ and an internal probability measure $P_0$ on $(\Omega, \mathcal F_0)$ with the following properties.

(i) Let $H = \{i : \pi^0(i) \neq J\}$. Then $P_0\{\omega \in \Omega : \pi_\omega(i) = \pi^0(i) \text{ for any } i \in H\} = 1$.

(ii) Let $g$ be the internal mapping from $I \times \Omega$ to $S \cup \{J\}$ defined by
$$g(i,\omega) = \begin{cases} \alpha^0(\pi(i,\omega)) & \text{if } \pi(i,\omega) \neq i\\ J & \text{if } \pi(i,\omega) = i\end{cases}$$
for any $(i,\omega) \in I \times \Omega$. Suppose $i \neq j$, $(\alpha^0_i, \pi^0_i) = (k_1, i)$ and $(\alpha^0_j, \pi^0_j) = (k_2, j)$, where $k_1, k_2 \in S$. For any $l_1, l_2 \in S$, if $\hat\rho_{k_1 J} > \frac{1}{\hat M^{1/3}}$,
$$q_{k_1 l_1} - \frac{2}{\hat M^{2/3}} \le P_0(g_i = l_1) \le q_{k_1 l_1};$$
if, in addition, $\hat\rho_{k_2 J} \ge \frac{1}{\hat M^{1/3}}$, we will also have
$$q_{k_1 l_1} q_{k_2 l_2} - \frac{5}{\hat M^{2/3}} \le P_0(g_i = l_1,\, g_j = l_2) \le q_{k_1 l_1} q_{k_2 l_2} + \frac{1}{\hat M^{2/3}}.$$

To reflect their dependence on $(\alpha^0, \pi^0, q)$, $\pi$ and $P_0$ will also be denoted by $\pi_{(\alpha^0, \pi^0, q)}$ and $P_{(\alpha^0, \pi^0, q)}$.
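Lemma 1's randomization has two stages: first draw the sets $A_{kl}$ of agents to be matched, then match $A_{kl}$ to $A_{lk}$ by a uniform bijection. This can be mimicked at an ordinary finite scale. The toy sketch below (two types, standard finite sets rather than hyperfinite ones; all names are ours) checks that the realized matching is symmetric and that the fraction of type-k agents matched to type l is close to q_kl.

```python
import random

random.seed(0)
M = 10000                       # agents per type; two types {0, 1}
q = [[0.2, 0.3], [0.3, 0.4]]    # q[k][l]: prob. a type-k agent matches type l
# (needs sum_l q[k][l] <= 1 and, with equal group sizes, q[0][1] == q[1][0],
# mirroring the balance condition rho_kJ * q_kl = rho_lJ * q_lk)

pools = {k: [(k, i) for i in range(M)] for k in range(2)}
for k in pools:
    random.shuffle(pools[k])

partner = {}
# Stage 1: carve out the sets A_kl of intended matches (even sizes for A_kk).
# Stage 2: match A_kl to A_lk by a uniform bijection (here via the shuffle).
for k in range(2):
    for l in range(k, 2):
        n = int(M * q[k][l])
        if k == l:
            n -= n % 2
            block = [pools[k].pop() for _ in range(n)]
            for a, b in zip(block[::2], block[1::2]):
                partner[a], partner[b] = b, a
        else:
            for a, b in zip([pools[k].pop() for _ in range(n)],
                            [pools[l].pop() for _ in range(n)]):
                partner[a], partner[b] = b, a

assert all(partner[partner[a]] == a for a in partner)   # symmetry of matching
frac = sum(1 for (k, i) in partner
           if k == 0 and partner[(k, i)][0] == 1) / M
print(abs(frac - 0.3) < 0.01)   # fraction of type-0 matched to type 1: True
```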
7.2 Hyperfinite dynamic matching model
What we need to do is to construct a hyperfinite sequence of internal transition probabilities and a hyperfinite sequence of internal type functions. Since we need to consider random mutation, random matching and random type changing in each infinitesimal time period, three internal measurable spaces with internal transition probabilities will be constructed for each time period. Let $M$ and $\hat M$ be fixed unlimited hyperfinite natural numbers in ${}^*\mathbb N_\infty$, with $\hat M$ sufficiently larger than $M$. Let $I = \{1, 2, \dots, \hat M\}$, $\mathcal I_0$ the internal power set on $I$, and $\lambda_0$ the internal counting probability measure on $\mathcal I_0$.
We define the parameters for the hyperfinite dynamical system as follows. Let
$$\hat\eta^n_{kl}(\hat\rho) = \begin{cases} \frac{1}{M}({}^*\eta_{kl})\big(\hat\rho, \frac{n}{M}\big) + \frac{1}{M^2} & \text{if } k \neq l\\[2pt] 1 - \sum_{l' \neq k} \hat\eta^n_{kl'}(\hat\rho) & \text{if } k = l,\end{cases}$$
$$\hat q^n_{kl}(\hat\rho) = \frac{1}{M}({}^*\theta_{kl})\big(\hat\rho, \tfrac{n}{M}\big) \quad\text{and}\quad \hat q^n_k(\hat\rho) = 1 - \sum_{l \in S} \hat q^n_{kl}(\hat\rho),$$
$$\hat\xi^n_{kl}(\hat\rho) = \min\Big\{({}^*\xi_{kl})\big(\hat\rho, \tfrac{n}{M}\big),\; 1 - \tfrac{1}{M^2}\Big\},$$
$$[\hat\sigma^n_{kl}(\hat\rho)](k',l') = \big[({}^*\sigma_{kl})\big(\hat\rho, \tfrac{n}{M}\big)\big](k',l'),$$
$$\hat\vartheta^n_{kl}(\hat\rho) = \frac{1}{M}({}^*\vartheta_{kl})\big(\hat\rho, \tfrac{n}{M}\big) + \frac{1}{M^2},$$
$$[\hat\varsigma^n_{kl}(\hat\rho)](k') = \big[({}^*\varsigma_{kl})\big(\hat\rho, \tfrac{n}{M}\big)\big](k')$$
for any $k, k', l, l' \in S$, $n \in {}^*\mathbb N$ and $\hat\rho \in {}^*\hat\Delta$. Denote
$$\bar\eta^{n_0} = M \max\{\hat\eta^n_{kl}(\hat\rho) : n \le n_0,\; k, l \in S,\; k \neq l,\; \hat\rho \in {}^*\hat\Delta\},$$
$$\bar q^{n_0} = M \max\{\hat q^n_{kl}(\hat\rho) : n \le n_0,\; k, l \in S,\; \hat\rho \in {}^*\hat\Delta\},$$
$$\bar\vartheta^{n_0} = M \max\{\hat\vartheta^n_{kl}(\hat\rho) : n \le n_0,\; k, l \in S,\; \hat\rho \in {}^*\hat\Delta\}.$$
It is clear that $\bar\eta^{n_0}$, $\bar q^{n_0}$ and $\bar\vartheta^{n_0}$ are finite if $\frac{n_0}{M}$ is finite. Note that $\frac{\bar\eta^{nM}}{M}, \frac{\bar q^{nM}}{M}, \frac{\bar\vartheta^{nM}}{M} \le \frac{1}{K}$ for any $n \in \mathbb N$. Then there exists a hyperfinite natural number $C \in {}^*\mathbb N_\infty$ such that $C \le M$ and $\frac{\bar\eta^{CM}}{M}, \frac{\bar q^{CM}}{M}, \frac{\bar\vartheta^{CM}}{M} \le \frac{1}{K}$.
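The per-step probabilities just defined are of order $1/M$, and the reason such a hyperfinite discretization recovers continuous-time intensities is the elementary limit $(1 - \lambda/M)^{Mt} \to e^{-\lambda t}$: Bernoulli steps aggregate to exponential waiting times. A numeric check with arbitrary $\lambda$ and $t$:

```python
import math

# With per-period event probability lam/M, the chance of no event over the
# M*t periods in [0, t] is (1 - lam/M)^(M*t) -> exp(-lam*t) as M grows:
# the Bernoulli steps aggregate to exponential waiting times.
lam, t = 1.5, 2.0
for M in (10**3, 10**5, 10**7):
    p_none = (1 - lam / M) ** (int(M * t))
    print(abs(p_none - math.exp(-lam * t)) < 10 / M)  # True for each M
```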
Let $T_0$ be the hyperfinite discrete time line $\{n/M\}_{n=0}^{CM}$. Let $\{A_{kl}\}_{(k,l) \in \tilde S}$ be an internal partition of $I$ such that $\frac{|A_{kl}|}{\hat M} \simeq \hat p^0_{kl}$ for any $k \in S$ and $l \in S \cup \{J\}$, and such that $|A_{kl}| = |A_{lk}|$ and $|A_{kk}|$ are even for any $k, l \in S$. Let $\alpha^0$ be an internal function from $(I, \mathcal I_0, \lambda_0)$ to $S$ such that $\alpha^0(i) = k$ if $i \in \bigcup_{l \in S \cup \{J\}} A_{kl}$. Let $\pi^0$ be an internal partial matching from $I$ to $I$ such that $\pi^0(i) = i$ on $\bigcup_{k \in S} A_{kJ}$, and the restriction $\pi^0|_{A_{kl}}$ is an internal bijection from $A_{kl}$ to $A_{lk}$ for any $k, l \in S$. Let $g^0(i) = \alpha^0(\pi^0(i))$. It is clear that $\lambda_0(\{i : \alpha^0(i) = k,\, g^0(i) = l\}) \simeq \hat p^0_{kl}$ for any $k \in S$ and $l \in S \cup \{J\}$.
Suppose that the construction of the dynamical system $\mathbb D$ has been carried out up to time period $n - 1$. Thus, $\{(\Omega_m, \mathcal E_m, Q_m)\}_{m=1}^{3n-3}$ and $\{\hat\alpha^l, \hat\pi^l\}_{l=0}^{3n-3}$ have been constructed, where each $\Omega_m$ is a hyperfinite internal set with its internal power set $\mathcal E_m$, $Q_m$ is an internal transition probability from $\Omega^{m-1}$ to $(\Omega_m, \mathcal E_m)$, $\hat\alpha^l$ is an internal type function from $I \times \Omega^l$ to the type space $S$, and $\hat\pi^l$ is an internal random matching from $I \times \Omega^l$ to $I$. Here, $\Omega^m = \prod_{j=1}^m \Omega_j$, and $\{\omega_j\}_{j=1}^m$ will also be denoted by $\omega^m$ when there is no confusion. Denote the internal product transition probability $Q_1 \otimes Q_2 \otimes \cdots \otimes Q_m$ by $Q^m$, and $\otimes_{j=1}^m \mathcal E_j$ by $\mathcal E^m$ (which is simply the internal power set on $\Omega^m$). Then, $Q^m$ is the internal product of the internal transition probability $Q_m$ with the internal probability measure $Q^{m-1}$.
We shall now consider the constructions for time $n$. We first work with the random mutation step. Let $\Omega_{3n-2} = S^I$ (the space of all internal functions from $I$ to $S$) with its internal power set $\mathcal E_{3n-2}$. For each $i \in I$ and $\omega^{3n-3} \in \Omega^{3n-3}$, if $\hat\alpha^{3n-3}(i, \omega^{3n-3}) = k$, define a probability measure $\gamma_i^{\omega^{3n-3}}$ on $S$ by letting $\gamma_i^{\omega^{3n-3}}(l) = \hat\eta^n_{kl}\big(\hat\rho^{3n-3}_{\omega^{3n-3}}\big)$ for each $l \in S$, where $\hat\rho^{3n-3}_{\omega^{3n-3}} = \lambda_0\big(\hat\alpha^{3n-3}_{\omega^{3n-3}}, \hat g^{3n-3}_{\omega^{3n-3}}\big)^{-1}$. Define an internal probability measure $Q_{3n-2}^{\omega^{3n-3}}$ on $(S^I, \mathcal E_{3n-2})$ to be the internal product measure $\bigotimes_{i \in I} \gamma_i^{\omega^{3n-3}}$. Let $\hat\alpha^{3n-2} : I \times \prod_{m=1}^{3n-2} \Omega_m \to S$ be such that $\hat\alpha^{3n-2}(i, \omega^{3n-2}) = \omega_{3n-2}(i)$. Let $\hat\pi^{3n-2} : I \times \prod_{m=1}^{3n-2} \Omega_m \to I$ be such that $\hat\pi^{3n-2}(i, \omega^{3n-2}) = \hat\pi^{3n-3}(i, \omega^{3n-3})$. Let $\hat g^{3n-2} : I \times \prod_{m=1}^{3n-2} \Omega_m \to S \cup \{J\}$ be such that
$$\hat g^{3n-2}(i, \omega^{3n-2}) = \begin{cases} \hat\alpha^{3n-2}\big(\hat\pi^{3n-2}(i, \omega^{3n-2}), \omega^{3n-2}\big) & \text{if } \hat\pi^{3n-2}(i, \omega^{3n-2}) \neq i\\ J & \text{if } \hat\pi^{3n-2}(i, \omega^{3n-2}) = i.\end{cases}$$
Let $\hat\rho^{3n-2}_{\omega^{3n-2}} = \lambda_0\big(\hat\alpha^{3n-2}_{\omega^{3n-2}}, \hat g^{3n-2}_{\omega^{3n-2}}\big)^{-1}$ be the internal cross-sectional extended type distribution after random mutation.
Next, we consider the step of directed random matching. Let $(\Omega_{3n-1}, \mathcal E_{3n-1}) = (\bar\Omega, \bar{\mathcal E})$, where $(\bar\Omega, \bar{\mathcal E})$ is the measurable space constructed in the proof of Lemma 1. For any given $\omega^{3n-2} \in \Omega^{3n-2}$, the type function is $\hat\alpha^{3n-2}_{\omega^{3n-2}}(\cdot)$ while the partial matching function is $\hat\pi^{3n-3}_{\omega^{3n-3}}(\cdot)$. We can construct an internal probability measure $Q_{3n-1}^{\omega^{3n-2}} = P_{(\hat\alpha^{3n-2}_{\omega^{3n-2}},\, \hat\pi^{3n-3}_{\omega^{3n-3}},\, \hat q^n(\hat\rho^{3n-2}_{\omega^{3n-2}}))}$ and a directed random matching $\pi_{(\hat\alpha^{3n-2}_{\omega^{3n-2}},\, \hat\pi^{3n-3}_{\omega^{3n-3}},\, \hat q^n(\hat\rho^{3n-2}_{\omega^{3n-2}}))}$ by Lemma 1. Let $\hat\alpha^{3n-1} : I \times \prod_{m=1}^{3n-1} \Omega_m \to S$, $\hat\pi^{3n-1} : I \times \prod_{m=1}^{3n-1} \Omega_m \to I$ and $\hat g^{3n-1} : I \times \prod_{m=1}^{3n-1} \Omega_m \to S \cup \{J\}$ be such that
$$\hat\alpha^{3n-1}\big(i, \omega^{3n-1}\big) = \hat\alpha^{3n-2}\big(i, \omega^{3n-2}\big),$$
$$\hat\pi^{3n-1}\big(i, \omega^{3n-1}\big) = \pi_{(\hat\alpha^{3n-2}_{\omega^{3n-2}},\, \hat\pi^{3n-3}_{\omega^{3n-3}},\, \hat q^n(\hat\rho^{3n-2}_{\omega^{3n-2}}))}\big(i, \omega_{3n-1}\big),$$
$$\hat g^{3n-1}\big(i, \omega^{3n-1}\big) = \begin{cases} \hat\alpha^{3n-2}\big(\hat\pi^{3n-1}(i, \omega^{3n-1}), \omega^{3n-2}\big) & \text{if } \hat\pi^{3n-1}(i, \omega^{3n-1}) \neq i\\ J & \text{if } \hat\pi^{3n-1}(i, \omega^{3n-1}) = i.\end{cases}$$
Let $\hat\rho^{3n-1}_{\omega^{3n-1}} = \lambda_0\big(\hat\alpha^{3n-1}_{\omega^{3n-1}}, \hat g^{3n-1}_{\omega^{3n-1}}\big)^{-1}$ be the internal cross-sectional extended type distribution after random matching.
Now, we consider the final step of random type changing with break-up for matched agents. Let $\Omega_{3n} = (S \times \{0,1\})^I$ with its internal power set $\mathcal E_{3n}$, where $0$ represents "unmatched" and $1$ represents "paired"; each point $\omega_{3n} = (\omega^1_{3n}, \omega^2_{3n}) \in \Omega_{3n}$ is an internal function from $I$ to $S \times \{0,1\}$. Define a new type function $\hat\alpha^{3n} : I \times \Omega^{3n} \to S$ by letting $\hat\alpha^{3n}(i, \omega^{3n}) = \omega^1_{3n}(i)$. Fix $\omega^{3n-1} \in \Omega^{3n-1}$. For each $i \in I$:

(1) if $\hat\pi^{3n-1}(i, \omega^{3n-1}) = i$ ($i$ is not paired after the matching step at time $n$), let $\tau_i^{\omega^{3n-1}}$ be the probability measure on the type space $S \times \{0,1\}$ that gives probability one to $\big(\hat\alpha^{3n-2}(i, \omega^{3n-2}),\, 0\big)$ and zero to the rest;

(2) if $\hat\pi^{3n-1}(i, \omega^{3n-1}) \neq i$ and $\hat\pi^{3n-3}(i, \omega^{3n-3}) = i$ ($i$ is newly paired after the matching step at time $n$), $\hat\alpha^{3n-2}(i, \omega^{3n-2}) = k$, $\hat\pi^{3n-1}(i, \omega^{3n-1}) = j$ and $\hat\alpha^{3n-2}(j, \omega^{3n-2}) = l$, define a probability measure $\tau_{ij}^{\omega^{3n-1}}$ on $(S \times \{0,1\}) \times (S \times \{0,1\})$ such that
$$\tau_{ij}^{\omega^{3n-1}}\big((k',1),(l',1)\big) = \hat\xi^n_{kl}\big(\hat\rho^{3n-1}_{\omega^{3n-1}}\big)\,\big[\hat\sigma^n_{kl}\big(\hat\rho^{3n-1}_{\omega^{3n-1}}\big)\big](k',l')$$
and
$$\tau_{ij}^{\omega^{3n-1}}\big((k',0),(l',0)\big) = \Big(1 - \hat\xi^n_{kl}\big(\hat\rho^{3n-1}_{\omega^{3n-1}}\big)\Big)\,\big[\hat\varsigma^n_{kl}\big(\hat\rho^{3n-1}_{\omega^{3n-1}}\big)\big](k')\,\big[\hat\varsigma^n_{lk}\big(\hat\rho^{3n-1}_{\omega^{3n-1}}\big)\big](l')$$
for $k', l' \in S$, and zero for the rest;

(3) if $\hat\pi^{3n-1}(i, \omega^{3n-1}) \neq i$ and $\hat\pi^{3n-3}(i, \omega^{3n-3}) \neq i$ ($i$ is already paired at time $n-1$), $\hat\alpha^{3n-2}(i, \omega^{3n-2}) = k$, $\hat\pi^{3n-1}(i, \omega^{3n-1}) = j$ and $\hat\alpha^{3n-2}(j, \omega^{3n-2}) = l$, define a probability measure $\tau_{ij}^{\omega^{3n-1}}$ on $(S \times \{0,1\}) \times (S \times \{0,1\})$ such that
$$\tau_{ij}^{\omega^{3n-1}}\big((k',1),(l',1)\big) = \Big(1 - \hat\vartheta^n_{kl}\big(\hat\rho^{3n-1}_{\omega^{3n-1}}\big)\Big)\,\delta_k(k')\,\delta_l(l')$$
and
$$\tau_{ij}^{\omega^{3n-1}}\big((k',0),(l',0)\big) = \hat\vartheta^n_{kl}\big(\hat\rho^{3n-1}_{\omega^{3n-1}}\big)\,\big[\hat\varsigma^n_{kl}\big(\hat\rho^{3n-1}_{\omega^{3n-1}}\big)\big](k')\,\big[\hat\varsigma^n_{lk}\big(\hat\rho^{3n-1}_{\omega^{3n-1}}\big)\big](l')$$
for $k', l' \in S$, and zero for the rest.

Let $A^n_{\omega^{3n-1}} = \{(i,j) \in I \times I : i < j,\; \hat\pi^{3n-1}(i, \omega^{3n-1}) = j\}$ and $B^n_{\omega^{3n-1}} = \{i \in I : \hat\pi^{3n-1}(i, \omega^{3n-1}) = i\}$. Define an internal probability measure $Q_{3n}^{\omega^{3n-1}}$ on $(S \times \{0,1\})^I$ to be the internal product measure
$$\bigotimes_{i \in B^n_{\omega^{3n-1}}} \tau_i^{\omega^{3n-1}} \;\otimes\; \bigotimes_{(i,j) \in A^n_{\omega^{3n-1}}} \tau_{ij}^{\omega^{3n-1}}.$$
Let
$$\hat\pi^{3n}(i, \omega^{3n}) = \begin{cases} i & \text{if } \hat\pi^{3n-1}(i, \omega^{3n-1}) = i \text{ or } \omega^2_{3n}(i) = 0 \text{ or } \omega^2_{3n}\big(\hat\pi^{3n-1}(i, \omega^{3n-1})\big) = 0\\ \hat\pi^{3n-1}(i, \omega^{3n-1}) & \text{otherwise,}\end{cases}$$
and
$$\hat g^{3n}(i, \omega^{3n}) = \begin{cases} \hat\alpha^{3n}\big(\hat\pi^{3n}(i, \omega^{3n}), \omega^{3n}\big) & \text{if } \hat\pi^{3n}(i, \omega^{3n}) \neq i\\ J & \text{if } \hat\pi^{3n}(i, \omega^{3n}) = i.\end{cases}$$
Repeating this construction, we obtain a hyperfinite sequence $\{(\Omega_m, \mathcal E_m, Q_m)\}_{m=1}^{3CM}$ of internal transition probabilities and a hyperfinite sequence $\{\hat\alpha^l\}_{l=0}^{3CM}$ of internal type functions.
Let $(I \times \Omega^{3CM}, \mathcal I_0 \otimes \mathcal E^{3CM}, \lambda_0 \otimes Q^{3CM})$ be the internal product probability space of $(I, \mathcal I_0, \lambda_0)$ and $(\Omega^{3CM}, \mathcal E^{3CM}, Q^{3CM})$, and let $(I \times \Omega, \mathcal I \boxtimes \mathcal F, \lambda \boxtimes P)$ be the Loeb space of the internal product. For simplicity, let $\Omega^{3CM}$ be denoted by $\Omega$. Let $\mathcal F^m = \{F \in \mathcal E^{3CM} : F = F_m \times \prod_{m'=m+1}^{3CM} \Omega_{m'} \text{ and } F_m \in \mathcal E^m\}$. For any random variable $f$ on $(\Omega^{m+1}, \mathcal E^{m+1}, Q^{m+1})$ and $\omega^m \in \Omega^m$, let $E^{\omega^m} f = \int_{\Omega_{m+1}} f(\omega_{m+1})\, dQ^{\omega^m}_{m+1}$ and $\mathrm{Var}^{\omega^m} f = \int_{\Omega_{m+1}} \big(f(\omega_{m+1}) - E^{\omega^m} f\big)^2\, dQ^{\omega^m}_{m+1}$. For any $m \in \{1, \dots, 3CM\}$ and $A \subseteq \Omega^m$, let $\bar A = \{\omega \in \Omega : \omega^m \in A\}$.

In the following, we will often work with functions or sets that are measurable in $(\Omega^m, \mathcal E^m, Q^m)$ or its Loeb space for some $m \le 3CM$; these may be viewed as functions or sets based on $(\Omega^{3CM}, \mathcal E^{3CM}, Q^{3CM})$ or its Loeb space by allowing for dummy components in the tail part. We can thus continue to use $P$ to denote the Loeb measure generated by $Q^m$ for convenience. Since all the type functions, random matchings and the partners' type functions are internal in the relevant hyperfinite settings, they are all $\mathcal I \boxtimes \mathcal F$-measurable when viewed as functions on $I \times \Omega$.
7.3 Properties of the hyperfinite dynamic matching model
As above, we decompose each period into three steps. Let $e(m) = \big[\frac{m+2}{3}\big]$ and $f(m) = m - 3e(m) + 3$; then the $m$-th step in the hyperfinite dynamical system is also the $f(m)$-th step in the $e(m)$-th period.
Let $\tilde\beta^m_i = (\hat\alpha^m_i, \hat g^m_i, \hat h^m_i)$, where
$$\hat h^m_i = \begin{cases} 0 & \text{if } \hat g^m_i \neq J \text{ and } \hat g^{m-1}_i \neq J\\ 1 & \text{otherwise.}\end{cases}$$
It is clear that $\hat h^m_i = 0$ if and only if $i$ has been matched with another agent for at least two steps. Let $\tilde\rho^m_{kl0}$ be the fraction of agents of type $k$ that are matched with a type-$l$ agent at the $m$-th step and were paired in the last period; let $\tilde\rho^m_{kl1}$ be the fraction of agents of type $k$ that are matched with a type-$l$ agent at the $m$-th step and were single in the last period. Note that $\hat\rho^m_{kl}$ is the proportion of type-$k$ agents matched with type-$l$ agents at the $m$-th step, which implies $\hat\rho^m_{kl} = \tilde\rho^m_{kl0} + \tilde\rho^m_{kl1}$.
Let $\tilde\Delta$ be the space of all probability measures $\tilde p$ on $\tilde S = S \times (S \cup \{J\}) \times \{0,1\}$ such that $\tilde p_{klr} = \tilde p_{lkr}$ for any $k, l \in S$ and $r \in \{0,1\}$. For each $k, l \in S$ and $n \le CM$, we use the same notation $\hat\eta^n_{kl}$ to denote the mutation rate from ${}^*\tilde\Delta$ to ${}^*\mathbb R$ by letting $\hat\eta^n_{kl}(\tilde\rho) = \hat\eta^n_{kl}(\hat\rho)$. We also extend the domains of $\hat q^n_{kl}$, $\hat q^n_k$, $\hat\xi^n_{kl}$, $\hat\sigma^n_{kl}$, $\hat\varsigma^n_{kl}$ and $\hat\vartheta^n_{kl}$ in the same way.
For any finite-dimensional hyperinteger vector $\mathbf m = (m_1, \dots, m_r)$ such that $3CM \ge m_1 > \dots > m_r \ge 0$, define $\tilde\beta^{\mathbf m}_i = (\tilde\beta^{m_1}_i, \dots, \tilde\beta^{m_r}_i)$. We say $m' > \mathbf m$ if $m' > m_1$. We define $\tilde\beta^m = \tilde\beta^0$ if $m < 0$. Then we have the following lemma.
Lemma 2 $\tilde\beta_i$ satisfies the Markov property for any $i \in I$; that is,
$$P_0\big(\tilde\beta^m_i = a,\, \tilde\beta^{\mathbf m_1}_i = a_1,\, \tilde\beta^{\mathbf m_2}_i = a_2\big)\, P_0\big(\tilde\beta^{\mathbf m_1}_i = a_1\big) - P_0\big(\tilde\beta^m_i = a,\, \tilde\beta^{\mathbf m_1}_i = a_1\big)\, P_0\big(\tilde\beta^{\mathbf m_1}_i = a_1,\, \tilde\beta^{\mathbf m_2}_i = a_2\big)$$
is infinitesimal for any $i \in I$ if $3CM \ge m > \mathbf m_1 > \mathbf m_2$.
The following lemma shows that $\tilde\beta$ is, in some sense, essentially pairwise independent.
Lemma 3 For any $i \in I$, the following statement holds for $\lambda$-almost all agents $j \in I$: for any $\mathbf m$ and $m_0$ such that $m_0 < \mathbf m$,
$$P_0\big(\tilde\beta^{\mathbf m}_i = a_1,\, \tilde\beta^{\mathbf m}_j = a_2,\, \tilde\beta^{m_0}_i = c_1,\, \tilde\beta^{m_0}_j = c_2\big) - P_0\big(\tilde\beta^{\mathbf m}_i = a_1,\, \tilde\beta^{m_0}_i = c_1\big)\, P_0\big(\tilde\beta^{\mathbf m}_j = a_2,\, \tilde\beta^{m_0}_j = c_2\big)$$
is infinitesimal.
Let
$$\hat H^m_i = \big|\{n : \hat\alpha^{3n-2}_i \neq \hat\alpha^{3n-3}_i \text{ or } \hat g^{3n-2}_i \neq \hat g^{3n-3}_i,\; 3n-2 \le m\}\big|,$$
$$\hat N^m_i = \big|\{n : \hat g^{3n-1}_i \neq \hat g^{3n-2}_i,\; 3n-1 \le m\}\big|$$
and
$$\hat R^m_i = \big|\{n : \hat g^{3n}_i = J \text{ and } \hat g^{3n-1}_i \neq J,\; 3n \le m\}\big|.$$
Then $\hat H^m_i$, $\hat N^m_i$ and $\hat R^m_i$ are agent $i$'s numbers of mutations, matchings and break-ups up to the $m$-th step, respectively. Let $\hat X^m_i = \hat H^m_i + \hat N^m_i + \hat R^m_i$. The following lemma provides a lower bound on the probability that the counting process $\hat X_i$ has no jump between two different steps.
Lemma 4
$$P_0\big(\hat X^{m+\Delta m}_i = \hat X^m_i \mid F^m\big) \ge \Big(1 - \frac{K\bar\eta^{e(m+\Delta m)}}{M}\Big)^{2\Delta m} \Big(1 - \frac{K\bar q^{e(m+\Delta m)}}{M}\Big)^{\Delta m} \Big(1 - \frac{K\bar\vartheta^{e(m+\Delta m)}}{M}\Big)^{\Delta m} \simeq e^{-\frac{K\Delta m\,\big(2\bar\eta^{e(m+\Delta m)} + \bar q^{e(m+\Delta m)} + \bar\vartheta^{e(m+\Delta m)}\big)}{M}}$$
holds for any $m, \Delta m \in \{0, \dots, 3CM\}$ and $F^m \in \mathcal F^m$ such that $m + \Delta m \le 3CM$, $\frac{\Delta m}{M}$ is finite and $P_0(F^m) > 0$.
By Lemma 4, it is easy to prove the following result.
Lemma 5
$$\big\| E\big(\tilde\rho^{m+\Delta m} \mid F^m\big) - E\big(\tilde\rho^m \mid F^m\big) \big\|_\infty \lesssim 1 - e^{-\frac{K\Delta m\,\big(2\bar\eta^{e(m+\Delta m)} + \bar q^{e(m+\Delta m)} + \bar\vartheta^{e(m+\Delta m)}\big)}{M}}$$
holds for any $m, \Delta m \in \{0, \dots, 3CM\}$ and $F^m \in \mathcal F^m$ such that $m + \Delta m \le 3CM$, $\frac{\Delta m}{M}$ is finite and $P_0(F^m) > 0$.
7.4 Existence of continuous time random matching
Let
$$\alpha_i(t) = \begin{cases} \lim_{t_0 \to t^+} \min\{\hat\alpha^{3n}_i : \tfrac{n}{M} \in \mathrm{monad}(t_0)\} & \text{if the limit exists}\\ \min\{\hat\alpha^{3n}_i : \tfrac{n}{M} \in \mathrm{monad}(t)\} & \text{otherwise,}\end{cases}$$
$$\pi_i(t) = \hat\pi^{3n}_i, \text{ where } n \text{ is the integer part of } tM,$$
$$g_i(t) = \begin{cases} \lim_{t_0 \to t^+} \min\{\hat g^{3n}_i : \tfrac{n}{M} \in \mathrm{monad}(t_0)\} & \text{if the limit exists}\\ \min\{\hat g^{3n}_i : \tfrac{n}{M} \in \mathrm{monad}(t)\} & \text{otherwise.}\end{cases}$$
Let $E_t = \{n : \tfrac{n}{M} \in \mathrm{monad}(t)\}$. Fix $i \in I$ and $\omega \in \Omega$. If $\hat\alpha^{3n}_i(\omega) \equiv C$ on $E_t$, then by the Spillover Principle there exist $n_1, n_2$ such that $n_1 < tM < n_2$, $n_1, n_2 \notin E_t$ and $\hat\alpha^{3n}_i(\omega) \equiv C$ on $\{n_1, n_1+1, \dots, n_2\}$. Hence for any $t' \in \big(\mathrm{st}(\tfrac{n_1}{M}), \mathrm{st}(\tfrac{n_2}{M})\big)$, $\min\{\hat\alpha^{3n}_i : \tfrac{n}{M} \in \mathrm{monad}(t')\} = C$. Therefore,
$$\alpha_i(\omega, t) = \lim_{t_0 \to t^+} \min\Big\{\hat\alpha^{3n}_i : \frac{n}{M} \in \mathrm{monad}(t_0)\Big\} = C.$$
Fix any $n_0 \in E_t$. If $\hat\alpha^{3n_0}_i(\omega) \neq \alpha_i(\omega, t)$, then by the argument above $\hat\alpha^{3n}_i(\omega)$ cannot be constant on $E_t$. Hence there is a mutation, a matching or a break-up in $E_t$. Therefore, by Lemma 4, for any $n_1, n_2$ such that $n_1 < tM < n_2$ and $n_1, n_2 \notin E_t$,
$$P\big(\hat\alpha^{3n_0}_i(\omega) \neq \alpha_i(\omega, t)\big) \le P\big(\hat X^{3n_1}_i(\omega) \neq \hat X^{3n_2}_i(\omega)\big) \le 1 - \mathrm{st}\Big( e^{-\frac{3K(n_2 - n_1)\,\big(2\bar\eta^{e(n_2)} + \bar q^{e(n_2)} + \bar\vartheta^{e(n_2)}\big)}{M}} \Big).$$
Let $\frac{n_2 - n_1}{M} \to 0$; then $\mathrm{st}\Big(e^{-\frac{3K(n_2 - n_1)(2\bar\eta^{e(n_2)} + \bar q^{e(n_2)} + \bar\vartheta^{e(n_2)})}{M}}\Big) \to 1$. Then $P\big(\hat\alpha^{3n_0}_i(\omega) \neq \alpha_i(\omega, t)\big) = 0$, which implies $P\big(\hat\alpha^{3n_0}_i(\omega) = \alpha_i(\omega, t)\big) = 1$. Similarly, we can prove $P\big(\hat g^{3n_0}_i(\omega) = g_i(\omega, t)\big) = 1$ for any $n_0$ such that $\frac{n_0}{M} \in \mathrm{monad}(t)$. By Lemmas 2 and 3, $(\hat\alpha^{3n}, \hat g^{3n})$ is Markovian and independent. Therefore, $(\alpha, g)$ is also Markovian and independent.
Suppose $\frac{n}{M} \in \mathrm{monad}(t)$ and $\frac{n+\Delta n}{M} \in \mathrm{monad}(t + \Delta t)$. Then
$$P_0\big(\hat X^{3n+3\Delta n}_i - \hat X^{3n}_i \ge 2 \mid (\hat\alpha^{3n}_i, \hat g^{3n}_i) = (k,l)\big) \le \sum_{r=3n+1}^{3n+3\Delta n} P_0\big(\hat X^r_i = \hat X^{r-1}_i + 1,\, \hat X^{r-1}_i = \hat X^{3n}_i \mid (\hat\alpha^{3n}_i, \hat g^{3n}_i) = (k,l)\big)\; P_0\big(\hat X^{3n+3\Delta n}_i - \hat X^r_i \ge 1 \mid \hat X^r_i = \hat X^{r-1}_i + 1,\, \hat X^{r-1}_i = \hat X^{3n}_i,\, (\hat\alpha^{3n}_i, \hat g^{3n}_i) = (k,l)\big).$$
Note that
$$\sum_{r=3n+1}^{3n+3\Delta n} P_0\big(\hat X^r_i = \hat X^{r-1}_i + 1,\, \hat X^{r-1}_i = \hat X^{3n}_i \mid (\hat\alpha^{3n}_i, \hat g^{3n}_i) = (k,l)\big) = P_0\big(\hat X^{3n+3\Delta n}_i > \hat X^{3n}_i \mid (\hat\alpha^{3n}_i, \hat g^{3n}_i) = (k,l)\big)$$
and
$$P_0\big(\hat X^{3n+3\Delta n}_i - \hat X^r_i \ge 1 \mid \hat X^r_i = \hat X^{r-1}_i + 1,\, \hat X^{r-1}_i = \hat X^{3n}_i,\, (\hat\alpha^{3n}_i, \hat g^{3n}_i) = (k,l)\big) \lesssim 1 - e^{-\frac{3K\Delta n\,(2\bar\eta^{n+\Delta n} + \bar q^{n+\Delta n} + \bar\vartheta^{n+\Delta n})}{M}} \simeq 1 - e^{-3K\Delta t\,(2\bar\eta^{n+\Delta n} + \bar q^{n+\Delta n} + \bar\vartheta^{n+\Delta n})}.$$
Then
$$\begin{aligned}
P_0\big(\hat X^{3n+3\Delta n}_i - \hat X^{3n}_i \ge 2 \mid (\hat\alpha^{3n}_i, \hat g^{3n}_i) = (k,l)\big) &\lesssim \sum_{r=3n+1}^{3n+3\Delta n} P_0\big(\hat X^r_i = \hat X^{r-1}_i + 1,\, \hat X^{r-1}_i = \hat X^{3n}_i \mid (\hat\alpha^{3n}_i, \hat g^{3n}_i) = (k,l)\big)\Big(1 - e^{-3K\Delta t(2\bar\eta^{n+\Delta n} + \bar q^{n+\Delta n} + \bar\vartheta^{n+\Delta n})}\Big)\\
&= \Big(1 - e^{-3K\Delta t(2\bar\eta^{n+\Delta n} + \bar q^{n+\Delta n} + \bar\vartheta^{n+\Delta n})}\Big)\, P_0\big(\hat X^{3n+3\Delta n}_i > \hat X^{3n}_i \mid (\hat\alpha^{3n}_i, \hat g^{3n}_i) = (k,l)\big)\\
&\lesssim \Big(1 - e^{-3K\Delta t(2\bar\eta^{n+\Delta n} + \bar q^{n+\Delta n} + \bar\vartheta^{n+\Delta n})}\Big)^2 = O(\Delta t^2). \qquad (8)
\end{aligned}$$
It remains to check Equations (1) to (4). For Equation (1), fix any $k, l, k' \in S$ such that $k \neq k'$. By Equation (8),
$$P\big(\beta_i(t+\Delta t) = (k',l) \mid \beta_i(t) = (k,l)\big) \simeq P_0\big(\hat\beta^{3n+3\Delta n}_i = (k',l),\, \hat X^{3n+3\Delta n}_i - \hat X^{3n}_i = 1 \mid \hat\beta^{3n}_i = (k,l)\big) + O(\Delta t^2).$$
For any agent $i \in I$ with extended type $(k,l)$, mutation is the only way for her to become an agent with extended type $(k',l)$ given $\hat X^{3n+3\Delta n}_i - \hat X^{3n}_i = 1$. Therefore,
$$\begin{aligned}
&P\big(\beta_i(t+\Delta t) = (k',l) \mid \beta_i(t) = (k,l)\big)\\
&\simeq P_0\big(\hat\beta^{3n+3\Delta n}_i = (k',l),\, \hat H^{3n+3\Delta n}_i - \hat H^{3n}_i = \hat X^{3n+3\Delta n}_i - \hat X^{3n}_i = 1 \mid \hat\beta^{3n}_i = (k,l)\big) + O(\Delta t^2)\\
&\simeq \sum_{r=n}^{n+\Delta n-1} P_0\big(\hat\beta^{3r+1}_i = (k',l),\, \hat H^{3n+3\Delta n}_i = \hat H^{3r}_i + 1 = \hat H^{3n}_i + 1,\, \hat X^{3n+3\Delta n}_i = \hat X^{3r}_i + 1 = \hat X^{3n}_i + 1 \mid \hat\beta^{3n}_i = (k,l)\big) + O(\Delta t^2)\\
&\simeq \sum_{r=n}^{n+\Delta n-1} \Big[ P_0\big(\hat\beta^{3r+1}_i = (k',l),\, \hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i \mid \hat\beta^{3n}_i = (k,l)\big)\; P_0\big(\hat X^{3n+3\Delta n}_i = \hat X^{3r+1}_i \mid \hat\beta^{3r+1}_i = (k',l),\, \hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i,\, \hat\beta^{3n}_i = (k,l)\big) \Big] + O(\Delta t^2)\\
&\simeq \sum_{r=n}^{n+\Delta n-1} \Big[ P_0\big(\hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i \mid \hat\beta^{3n}_i = (k,l)\big)\; P_0\big(\hat\beta^{3r+1}_i = (k',l) \mid \hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i,\, \hat\beta^{3n}_i = (k,l)\big)\; P_0\big(\hat X^{3n+3\Delta n}_i = \hat X^{3r+1}_i \mid \hat\beta^{3r+1}_i = (k',l),\, \hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i,\, \hat\beta^{3n}_i = (k,l)\big) \Big] + O(\Delta t^2).
\end{aligned}$$
By Equation (14),
$$\Big| P_0\big(\hat\beta^{3r+1}_i = (k',l) \mid \hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i,\, \hat\beta^{3n}_i = (k,l)\big) - \hat\eta^{r+1}_{kk'}\big(U^{3r+1}_1(\tilde\rho^0)\big)\, \hat\eta^{r+1}_{ll}\big(U^{3r+1}_1(\tilde\rho^0)\big) \Big| \le \frac{\epsilon_0}{P_0\big(\hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i,\, \hat\beta^{3n}_i = (k,l)\big)} + \xi_{-1}.$$
Then
$$\begin{aligned}
&\Big| P_0\big(\hat\beta^{3r+1}_i = (k',l) \mid \hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i,\, \hat\beta^{3n}_i = (k,l)\big) - \hat\eta^{r+1}_{kk'}\big(U^{3r+1}_1(\tilde\rho^0)\big) \Big|\\
&\le \hat\eta^{r+1}_{kk'}\big(U^{3r+1}_1(\tilde\rho^0)\big)\, \Big| \hat\eta^{r+1}_{ll}\big(U^{3r+1}_1(\tilde\rho^0)\big) - 1 \Big| + \frac{\epsilon_0}{P_0\big(\hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i,\, \hat\beta^{3n}_i = (k,l)\big)} + \xi_{-1}\\
&\le K\Big(\frac{\bar\eta^{r+1}}{M}\Big)^2 + \frac{\epsilon_0}{P_0\big(\hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i,\, \hat\beta^{3n}_i = (k,l)\big)} + \xi_{-1}.
\end{aligned}$$
Therefore,
$$\begin{aligned}
&\Big| P\big(\beta_i(t+\Delta t) = (k',l) \mid \beta_i(t) = (k,l)\big) - \sum_{r=n}^{n+\Delta n-1} \hat\eta^{r+1}_{kk'}\big(U^{3r+1}_1(\tilde\rho^0)\big)\, P_0\big(\hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i \mid \hat\beta^{3n}_i = (k,l)\big) \Big|\\
&\lesssim \sum_{r=n}^{n+\Delta n-1} P_0\big(\hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i \mid \hat\beta^{3n}_i = (k,l)\big)\, \Big| P_0\big(\hat\beta^{3r+1}_i = (k',l) \mid \hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i,\, \hat\beta^{3n}_i = (k,l)\big) - \hat\eta^{r+1}_{kk'}\big(U^{3r+1}_1(\tilde\rho^0)\big) \Big|\\
&\quad + \sum_{r=n}^{n+\Delta n-1} \hat\eta^{r+1}_{kk'}\big(U^{3r+1}_1(\tilde\rho^0)\big)\, \Big| P_0\big(\hat X^{3n+3\Delta n}_i = \hat X^{3r+1}_i \mid \hat\beta^{3r+1}_i = (k',l),\, \hat H^{3r}_i = \hat H^{3n}_i,\, \hat X^{3r}_i = \hat X^{3n}_i,\, \hat\beta^{3n}_i = (k,l)\big) - 1 \Big| + O(\Delta t^2)\\
&\lesssim \sum_{r=n}^{n+\Delta n-1} \Big( K\Big(\frac{\bar\eta^{r+1}}{M}\Big)^2 + \frac{\epsilon_0 + \xi_{-1}}{P_0\big(\hat\beta^{3n}_i = (k,l)\big)} \Big) + \sum_{r=n}^{n+\Delta n-1} \frac{\bar\eta^{n+\Delta n}}{M}\, e^{-\frac{3K\Delta n(2\bar\eta^{n+\Delta n} + \bar q^{n+\Delta n} + \bar\vartheta^{n+\Delta n})}{M}}\, \Big| e^{-\frac{3K\Delta n(2\bar\eta^{n+\Delta n} + \bar q^{n+\Delta n} + \bar\vartheta^{n+\Delta n})}{M}} - 1 \Big| + O(\Delta t^2)\\
&\lesssim K\Big(\frac{\bar\eta^{n+\Delta n}}{M}\Big)^2 \Delta n + \frac{M\epsilon_0 + M\xi_{-1}}{P_0\big(\hat\beta^{3n}_i = (k,l)\big)} + \bar\eta^{n+\Delta n}\,\Delta t\, \Big| e^{-6K\Delta t(2\bar\eta^{n+\Delta n} + \bar q^{n+\Delta n} + \bar\vartheta^{n+\Delta n})} - 1 \Big| + O(\Delta t^2)\\
&= O(\Delta t^2).
\end{aligned}$$
Note that $P\big(\beta^t_i = \hat\beta^{3n}_i\big) = 1$ for any $i \in I$. Then $E\hat p^t \simeq E\hat\rho^{3n}$. Therefore,
$$\begin{aligned}
&\frac{1}{\Delta t}\, \Big| \sum_{r=n}^{n+\Delta n-1} \hat\eta^{r+1}_{kk'}\big(U^{3r+1}_1(\tilde\rho^0)\big) - \eta_{kk'}\big(t, E\hat p^t\big)\,\Delta t \Big|\\
&\simeq \frac{1}{\Delta t}\, \Big| \sum_{r=n}^{n+\Delta n-1} \frac{1}{M}\, {}^*\eta_{kk'}\Big(\frac{r+1}{M},\, U^{3r+1}_1(\tilde\rho^0)\Big) - {}^*\eta_{kk'}\Big(\frac{n}{M},\, E\hat\rho^{3n}\Big)\,\frac{\Delta n}{M} \Big|\\
&\lesssim \frac{\Delta n}{M\Delta t}\cdot \frac{1}{\Delta n} \sum_{r=n}^{n+\Delta n-1} \Big| {}^*\eta_{kk'}\Big(\frac{r+1}{M},\, E\hat\rho^{3r+1}\Big) - {}^*\eta_{kk'}\Big(\frac{n}{M},\, E\hat\rho^{3n}\Big) \Big|.
\end{aligned}$$
By Lemma 5,
$$\big\| E\hat\rho^{3r+1} - E\hat\rho^{3n} \big\|_\infty \le 1 - e^{-\frac{3K\Delta n\,\big(2\bar\eta^{e(n+\Delta n)} + \bar q^{e(n+\Delta n)} + \bar\vartheta^{e(n+\Delta n)}\big)}{M}} = \epsilon(\Delta n)$$
for any $r$ such that $n \le r \le n + \Delta n - 1$. Let
$$W(\Delta n) = \Big\{ (t', \hat\rho') : \big| t' - \tfrac{n}{M} \big| < \tfrac{\Delta n}{M},\; \|\hat\rho' - E\hat\rho^{3n}\|_\infty < \epsilon(\Delta n) \Big\}.$$
Then
$$\frac{1}{\Delta t}\, \Big| \sum_{r=n}^{n+\Delta n-1} \hat\eta^{r+1}_{kk'}\big(U^{3r+1}_1(\tilde\rho^0)\big) - \eta_{kk'}\big(t, E\hat p^t\big)\,\Delta t \Big| \lesssim \frac{\Delta n}{M\Delta t}\cdot \frac{1}{\Delta n} \sum_{r=n}^{n+\Delta n-1} \sup_{(t', \hat\rho') \in W(\Delta n)} \Big| {}^*\eta_{kk'}(t', \hat\rho') - {}^*\eta_{kk'}\Big(\frac{n}{M},\, E\hat\rho^{3n}\Big) \Big| \lesssim \frac{\Delta n}{M\Delta t} \sup_{(t', \hat\rho') \in W(\Delta n)} \Big| {}^*\eta_{kk'}(t', \hat\rho') - {}^*\eta_{kk'}\Big(\frac{n}{M},\, E\hat\rho^{3n}\Big) \Big|.$$
By the continuity of ${}^*\eta$, $\sup_{(t', \hat\rho') \in W(\Delta n)} \big| {}^*\eta_{kk'}(t', \hat\rho') - {}^*\eta_{kk'}\big(\frac{n}{M}, E\hat\rho^{3n}\big) \big| \to 0$ as $\Delta t \to 0$, which implies
$$\sum_{r=n}^{n+\Delta n-1} \hat\eta^{r+1}_{kk'}\big(U^{3r+1}_1(\tilde\rho^0)\big) - \eta_{kk'}\big(t, E\hat p^t\big)\,\Delta t = o(\Delta t).$$
Therefore,
$$P\big(\beta_i(t+\Delta t) = (k',l) \mid \beta_i(t) = (k,l)\big) = \eta_{kk'}\big(t, E\hat p^t\big)\,\Delta t + o(\Delta t).$$
We can prove
$$P\big(\beta_i(t+\Delta t) = (k,l') \mid \beta_i(t) = (k,l)\big) = \eta_{ll'}\big(t, E\hat p^t\big)\,\Delta t + o(\Delta t)$$
and
$$P\big(\beta_i(t+\Delta t) = (k',l') \mid \beta_i(t) = (k,l)\big) = o(\Delta t)$$
in the same way, which implies (1). Similarly, we can prove (2), (3) and (4).
Therefore D is a dynamical system with parameters (p̂0 , η, θ, ξ, σ, ς, ϑ) that is Markovian
and independent.
7.5 Proofs of Lemmas 1 – 5
The proof of Lemma 1 is given in Subsection 7.5.1. In order to prove Lemmas 2 – 5, some additional lemmas are presented in Subsection 7.5.2. Lemmas 2 – 5 are then proved in Subsections
7.5.3 – 7.5.6 respectively.
7.5.1 Proof of Lemma 1
For each $k \in S$, let $\eta_k = 1 - \sum_{r \in S} q_{kr}$ (the no-matching probability for a type-$k$ agent), and $I_k = \{i \in I : \alpha^0(i) = k,\, \pi^0(i) = i\}$ (the set of type-$k$ agents who are initially unmatched). Let
$$\Omega_0 = \big\{ (A_{kl})_{k,l \in S} : A_{kl} \subseteq I_k,\; A_{kl} \text{ is internal},\; |A_{kl}| \text{ is the largest even integer less than or equal to } |I_k|\, q_{kl},\; A_{kl} \text{ and } A_{kl'} \text{ are disjoint for distinct } l \text{ and } l' \big\}.$$
Let $\mu_0$ be the internal counting probability measure on $(\Omega_0, \mathcal A_0)$, where $\mathcal A_0$ is the internal power set of $\Omega_0$.
For any fixed $\omega_0 = (A_{kl})_{k,l \in S} \in \Omega_0$, we consider internal partial matchings on $I$ that match agents from $A_{kl}$ to $A_{lk}$. We only need to consider those sets $A_{kl}$ that are nonempty. For each $k \in S$, let $\Omega^{\omega_0}_{kk}$ be the internal set of all internal full matchings on $A_{kk}$, and $\mu^{\omega_0}_{kk}$ the internal counting probability measure on $\Omega^{\omega_0}_{kk}$. For $k, l \in S$ with $k < l$, let $\Omega^{\omega_0}_{kl}$ be the internal set of all internal bijections from $A_{kl}$ to $A_{lk}$, and $\mu^{\omega_0}_{kl}$ the internal counting probability measure on $\Omega^{\omega_0}_{kl}$. Let $\Omega_1$ be the internal set of all internal partial matchings from $I$ to $I$. Define $\Omega^{\omega_0}_1$ to be the set of $\phi \in \Omega_1$ such that:

(i) the restriction $\phi|_H = \pi^0|_H$, where $H = \{i : \pi^0(i) \neq i\}$ is the set of initially matched agents;

(ii) $\{i \in I_k : \phi(i) = i\} = I_k \setminus \bigcup_{l=1}^K A_{kl}$ for each $k \in S$;

(iii) the restriction $\phi|_{A_{kk}} \in \Omega^{\omega_0}_{kk}$ for $k \in S$;

(iv) for $k, l \in S$ with $k < l$, $\phi|_{A_{kl}} \in \Omega^{\omega_0}_{kl}$.

Condition (i) means that initially matched agents remain matched with the same partners. The rest is clear.
Define an internal probability measure $\mu^{\omega_0}_1$ on $\Omega_1$ such that:

(i) for $\phi \in \Omega^{\omega_0}_1$,
$$\mu^{\omega_0}_1(\phi) = \prod_{1 \le k \le l \le K,\; A_{kl} \neq \emptyset} \mu^{\omega_0}_{kl}\big(\phi|_{A_{kl}}\big);$$

(ii) for $\phi \notin \Omega^{\omega_0}_1$, $\mu^{\omega_0}_1(\phi) = 0$.

The purpose of introducing the space $\Omega^{\omega_0}_1$ and the internal probability measure $\mu^{\omega_0}_1$ is to match the agents in $A_{kl}$ to the agents in $A_{lk}$ randomly. The probability measure $\mu^{\omega_0}_1$ is trivially extended to the common sample space $\Omega_1$. Define an internal probability measure $P_0$ on $\Omega = \Omega_0 \times \Omega_1$ with the internal power set $\mathcal F_0$ by letting
$$P_0\big((\omega_0, \omega_1)\big) = \mu_0(\omega_0) \times \mu^{\omega_0}_1(\omega_1).$$
For $(i, \omega) \in I \times \Omega$ with $\omega = (\omega_0, \omega_1)$, let $\pi(i, \omega) = \omega_1(i)$, and
$$g(i, \omega) = \begin{cases} \alpha^0(\pi(i, \omega)) & \text{if } \pi(i, \omega) \neq i\\ J & \text{if } \pi(i, \omega) = i.\end{cases}$$
Denote the corresponding Loeb probability spaces of the internal probability spaces $(\Omega, \mathcal F_0, P_0)$ and $(I \times \Omega, \mathcal I_0 \otimes \mathcal F_0, \lambda_0 \otimes P_0)$ respectively by $(\Omega, \mathcal F, P)$ and $(I \times \Omega, \mathcal I \boxtimes \mathcal F, \lambda \boxtimes P)$. Since $\pi$ is an internal function from $I \times \Omega$ to $I$, it is $\mathcal I \boxtimes \mathcal F$-measurable.

Denote the internal set $\{(\omega_0, \omega_1) \in \Omega : \omega_0 \in \Omega_0,\, \omega_1 \in \Omega^{\omega_0}_1\}$ by $\hat\Omega$. By the construction of $P_0$, it is clear that $P_0(\hat\Omega) = 1$. By its construction, it is clear that $\pi$ is an internal random matching and satisfies part (i) of the lemma.
It remains to prove part (ii) of the lemma. Suppose $\alpha^0(i) = k_1$, $\pi^0(i) = i$ and $\hat\rho_{k_1 J} > \hat M^{-1/3}$. Let $M_k$ and $m_{kl}$ be the internal cardinalities of $I_k$ and $A_{kl}$ respectively, and let $\binom{a}{b} = \frac{a!}{b!(a-b)!}$ denote the binomial coefficient. Then we have
$$P_0(g(i) = l_1) = \mu_0\big(\{(A_{kl})_{k,l \in S} : i \in A_{k_1 l_1}\}\big) = \frac{\binom{M_{k_1}-1}{m_{k_1 l_1}-1}}{\binom{M_{k_1}}{m_{k_1 l_1}}} = \frac{m_{k_1 l_1}}{M_{k_1}}.$$
It is clear that $P_0(g(i) = l_1) \le \frac{M_{k_1} q_{k_1 l_1}}{M_{k_1}} = q_{k_1 l_1}$. Note that
$$P_0(g(i) = l_1) \ge \frac{M_{k_1} q_{k_1 l_1} - 2}{M_{k_1}} = q_{k_1 l_1} - \frac{2}{M_{k_1}} \ge q_{k_1 l_1} - \frac{2}{\hat M \hat\rho_{k_1 J}} \ge q_{k_1 l_1} - \frac{2}{\hat M^{2/3}}.$$
Then
$$q_{k_1 l_1} - \frac{2}{\hat M^{2/3}} \le P_0(g(i) = l_1) \le q_{k_1 l_1}. \qquad (9)$$
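The bound (9) rests only on the counting fact that $m_{k_1 l_1}$, the largest even integer not exceeding $M_{k_1} q_{k_1 l_1}$, satisfies $M q - 2 \le m \le M q$. A direct numeric check over a grid (illustrative; a small tolerance absorbs floating-point rounding):

```python
# Check q - 2/M <= m/M <= q, where m is the largest even integer <= M*q:
# the counting fact behind the bound (9). A tiny tolerance absorbs the
# floating-point rounding of M*q.
def largest_even_le(x):
    n = int(x)
    return n if n % 2 == 0 else n - 1

ok = True
for M in (50, 1000, 12345):
    for j in range(1, 100):
        q = j / 100
        m = largest_even_le(M * q)
        ok = ok and (q - 2 / M - 1e-9 <= m / M <= q + 1e-9)
print(ok)  # True
```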
In addition, suppose $\alpha^0(j) = k_2$, $\pi^0(j) = j$, $j \neq i$ and $\hat\rho_{k_2 J} > \hat M^{-1/3}$. If $k_1 \neq k_2$, then
$$P_0\big(g(i) = l_1,\, g(j) = l_2\big) = \mu_0\big(\{(A_{kl})_{k,l \in S} : i \in A_{k_1 l_1},\, j \in A_{k_2 l_2}\}\big) = \frac{\binom{M_{k_1}-1}{m_{k_1 l_1}-1}}{\binom{M_{k_1}}{m_{k_1 l_1}}}\cdot \frac{\binom{M_{k_2}-1}{m_{k_2 l_2}-1}}{\binom{M_{k_2}}{m_{k_2 l_2}}} = P_0(g(i) = l_1)\, P_0(g(j) = l_2).$$
By Equation (9),
$$q_{k_1 l_1} q_{k_2 l_2} \ge P_0\big(g(i) = l_1,\, g(j) = l_2\big) \ge \Big(q_{k_1 l_1} - \frac{2}{\hat M^{2/3}}\Big)\Big(q_{k_2 l_2} - \frac{2}{\hat M^{2/3}}\Big) \ge q_{k_1 l_1} q_{k_2 l_2} - \frac{4}{\hat M^{2/3}}. \qquad (10)$$
If $k_1=k_2$ but $l_1\neq l_2$, then
\[
P_0\big(g(i)=l_1,\ g(j)=l_2\big) = \mu_0\big(\{(A_{kl})_{k,l\in S} : i\in A_{k_1l_1},\ j\in A_{k_1l_2}\}\big) = \frac{\binom{M_{k_1}-2}{m_{k_1l_1}-1,\,m_{k_1l_2}-1}}{\binom{M_{k_1}}{m_{k_1l_1},\,m_{k_1l_2}}}, \tag{10}
\]
where $\binom{a}{b,c} = \frac{a!}{b!\,c!\,(a-b-c)!}$ is the multinomial coefficient. It is clear that
\[
P_0\big(g(i)=l_1,\ g(j)=l_2\big) = \frac{m_{k_1l_1}m_{k_1l_2}}{M_{k_1}(M_{k_1}-1)}
\le \frac{m_{k_1l_1}(m_{k_1l_2}+1)}{M_{k_1}^2}
\le q_{k_1l_1}q_{k_1l_2} + \frac{q_{k_1l_1}}{M_{k_1}}
\le q_{k_1l_1}q_{k_1l_2} + \frac{1}{M_{k_1}}
\le q_{k_1l_1}q_{k_1l_2} + \frac{1}{\hat M \hat\rho_{k_1J}}
\le q_{k_1l_1}q_{k_1l_2} + \frac{1}{\hat M^{2/3}}.
\]
On the other hand,
\[
P_0\big(g(i)=l_1,\ g(j)=l_2\big) = \frac{m_{k_1l_1}m_{k_1l_2}}{M_{k_1}(M_{k_1}-1)}
\ge \frac{M_{k_1}q_{k_1l_1}-2}{M_{k_1}}\cdot\frac{M_{k_1}q_{k_1l_2}-2}{M_{k_1}}
\ge q_{k_1l_1}q_{k_1l_2} - \frac{2}{M_{k_1}}q_{k_1l_1} - \frac{2}{M_{k_1}}q_{k_1l_2}
\ge q_{k_1l_1}q_{k_1l_2} - \frac{4}{M_{k_1}}
\ge q_{k_1l_1}q_{k_1l_2} - \frac{4}{\hat M^{2/3}}.
\]
Therefore,
\[
q_{k_1l_1}q_{k_1l_2} + \frac{1}{\hat M^{2/3}} \ge P_0\big(g(i)=l_1,\ g(j)=l_2\big) \ge q_{k_1l_1}q_{k_1l_2} - \frac{4}{\hat M^{2/3}}.
\]
If $k_1=k_2$ and $l_1=l_2$, then
\[
P_0\big(g(i)=l_1,\ g(j)=l_1\big) = \mu_0\big(\{(A_{kl})_{k,l\in S} : i,j\in A_{k_1l_1}\}\big) = \frac{\binom{M_{k_1}-2}{m_{k_1l_1}-2}}{\binom{M_{k_1}}{m_{k_1l_1}}}. \tag{11}
\]
It is clear that
\[
P_0\big(g(i)=l_1,\ g(j)=l_1\big) = \frac{m_{k_1l_1}(m_{k_1l_1}-1)}{M_{k_1}(M_{k_1}-1)} \le \frac{m_{k_1l_1}^2}{M_{k_1}^2} \le q_{k_1l_1}^2.
\]
On the other hand,
\[
\frac{m_{k_1l_1}(m_{k_1l_1}-1)}{M_{k_1}(M_{k_1}-1)}
\ge \frac{M_{k_1}q_{k_1l_1}-2}{M_{k_1}}\cdot\frac{M_{k_1}q_{k_1l_1}-3}{M_{k_1}}
\ge q_{k_1l_1}^2 - \frac{5}{M_{k_1}}q_{k_1l_1}
\ge q_{k_1l_1}^2 - \frac{5}{\hat M^{2/3}}.
\]
Therefore,
\[
q_{k_1l_1}^2 \ge P_0\big(g(i)=l_1,\ g(j)=l_1\big) \ge q_{k_1l_1}^2 - \frac{5}{\hat M^{2/3}}. \tag{12}
\]
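The counting identities underlying these bounds can be checked numerically. The following sketch (Python; the class size `M` and the assignment count `m` are purely illustrative values, not quantities taken from the model) verifies the reduction of the binomial ratios to $m/M$ and $m(m-1)/(M(M-1))$, and the near-independence of two agents' assignments.

```python
from math import comb

# Illustrative values: M agents in one class, m of them assigned
# to a given partner type.
M, m = 40, 12

# P(g(i) = l): the fraction of internal assignments containing agent i.
p_one = comb(M - 1, m - 1) / comb(M, m)
assert abs(p_one - m / M) < 1e-12

# P(g(i) = l, g(j) = l) for two distinct agents i and j.
p_two = comb(M - 2, m - 2) / comb(M, m)
assert abs(p_two - m * (m - 1) / (M * (M - 1))) < 1e-12

# Near-independence: the joint probability differs from the product
# of marginals by O(1/M), in line with the M^(-2/3) error terms above.
assert abs(p_two - p_one ** 2) < 5 / M
print(p_one, p_two)
```

The gap between the joint probability and the product of marginals shrinks at rate $O(1/M_{k_1})$, which is the source of the $\hat M^{-2/3}$ error terms in Equations (9) through (12).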
Combining the three cases above, for any $(k_1,l_1),(k_2,l_2)\in S^2$,
\[
q_{k_1l_1}q_{k_2l_2} + \frac{1}{\hat M^{2/3}} \ge P_0\big(g(i)=l_1,\ g(j)=l_2\big) \ge q_{k_1l_1}q_{k_2l_2} - \frac{5}{\hat M^{2/3}}. \tag{13}
\]

7.5.2 Some additional lemmas
First we define three transformations $T_1^n, T_2^n, T_3^n$ on $\tilde\Delta$ as follows:
\[
[T_1^n(\tilde\rho)]_{kl0} = \begin{cases} \sum_{k',l'\in S} \tilde\rho_{k'l'0}\,\hat\eta^n_{k'k}(\tilde\rho)\,\hat\eta^n_{l'l}(\tilde\rho) & \text{if } l\neq J \\ 0 & \text{if } l=J, \end{cases}
\qquad
[T_1^n(\tilde\rho)]_{kl1} = \begin{cases} 0 & \text{if } l\neq J \\ \sum_{r\in S} \tilde\rho_{rJ1}\,\hat\eta^n_{rk}(\tilde\rho) & \text{if } l=J, \end{cases}
\]
\[
[T_2^n(\tilde\rho)]_{kl0} = \begin{cases} \tilde\rho_{kl0} & \text{if } l\neq J \\ 0 & \text{if } l=J, \end{cases}
\qquad
[T_2^n(\tilde\rho)]_{kl1} = \begin{cases} \tilde\rho_{kJ1}\,\hat q^n_{kl}(\tilde\rho) & \text{if } l\neq J \\ \tilde\rho_{kJ1}\,\hat q^n_{k}(\tilde\rho) & \text{if } l=J, \end{cases}
\]
\[
[T_3^n(\tilde\rho)]_{kl0} = \begin{cases} \tilde\rho_{kl0}\big(1-\hat\vartheta^n_{kl}(\tilde\rho)\big) + \sum_{k',l'\in S} \tilde\rho_{k'l'1}\,\hat\xi^n_{k'l'}(\tilde\rho)\,[\hat\sigma^n_{k'l'}(\tilde\rho)](k,l) & \text{if } l\neq J \\ 0 & \text{if } l=J, \end{cases}
\]
\[
[T_3^n(\tilde\rho)]_{kl1} = \begin{cases} 0 & \text{if } l\neq J \\ \sum_{k',l'\in S} \tilde\rho_{k'l'1}\big(1-\hat\xi^n_{k'l'}(\tilde\rho)\big)[\hat\varsigma^n_{k'l'}(\tilde\rho)](k) + \sum_{k',l'\in S} \tilde\rho_{k'l'0}\,\hat\vartheta^n_{k'l'}(\tilde\rho)\,[\hat\varsigma^n_{k'l'}(\tilde\rho)](k) + \tilde\rho_{kJ1} & \text{if } l=J. \end{cases}
\]
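A basic consistency property of the mutation map $T_1^n$ is that it only redistributes mass across types. The sketch below (Python with NumPy; the number of types, the matrix `eta`, and the distribution arrays are illustrative stand-ins, and it is assumed, as with a mutation kernel, that `eta` is row-stochastic) checks that the $l\neq J$ and $l=J$ components together preserve total mass.

```python
import numpy as np

K = 3
rng = np.random.default_rng(0)
# Hypothetical row-stochastic mutation matrix on S.
eta = rng.random((K, K))
eta /= eta.sum(axis=1, keepdims=True)

rho0 = rng.random((K, K))   # mass of matched pair types (k, l, 0)
rho1 = rng.random(K)        # mass of unmatched types (k, J, 1)
total = rho0.sum() + rho1.sum()
rho0 /= total
rho1 /= total               # normalize to a probability distribution

# [T1(rho)]_{kl0} = sum_{k',l'} rho_{k'l'0} * eta_{k'k} * eta_{l'l}
new0 = eta.T @ rho0 @ eta
# [T1(rho)]_{kJ1} = sum_r rho_{rJ1} * eta_{rk}
new1 = eta.T @ rho1

# Mutation permutes mass among types, so total mass is preserved.
assert abs(new0.sum() + new1.sum() - 1.0) < 1e-12
print(new0.sum() + new1.sum())
```

The same mass-conservation check applies, with the obvious modifications, to $T_2^n$ and $T_3^n$.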
We use $U_{m_1}^{m_2}$ to represent the composition $T_{f(m_2)}^{e(m_2)} \circ T_{f(m_2-1)}^{e(m_2-1)} \circ \cdots \circ T_{f(m_1)}^{e(m_1)}$.
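The operator $U_{m_1}^{m_2}$ is nothing more than function composition applied in increasing order of the step index. A toy sketch (Python; the one-dimensional maps are hypothetical placeholders for the maps $T^{e(m)}_{f(m)}$):

```python
from functools import reduce

# Three toy one-step maps, standing in for T_{f(m)}^{e(m)}.
steps = [lambda x, a=a: a * x + 1 for a in (0.5, 0.25, 0.125)]

def compose(maps):
    """Apply maps[0] first, then maps[1], ..., as U_{m1}^{m2} does."""
    return lambda x: reduce(lambda acc, f: f(acc), maps, x)

U = compose(steps)
# Composition in increasing order: last-listed map is applied last.
assert U(0.0) == steps[2](steps[1](steps[0](0.0)))
print(U(0.0))
```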
For any $a,b\in\tilde S$ and $n\in\{1,\dots,C_M\}$, define
\[
B^{3n-2}_{ab}(\tilde\rho) = \begin{cases} \hat\eta^n_{k_1l_1}(\tilde\rho)\,\hat\eta^n_{k_2l_2}(\tilde\rho) & \text{if } a=(k_1,k_2,0),\ b=(l_1,l_2,0) \\ \hat\eta^n_{k_1l_1}(\tilde\rho) & \text{if } a=(k_1,J,1),\ b=(l_1,J,1) \\ 0 & \text{otherwise,} \end{cases}
\]
\[
B^{3n}_{ab}(\tilde\rho) = \begin{cases} 1-\hat\vartheta^n_{k_1k_2}(\tilde\rho) & \text{if } a=(k_1,k_2,0),\ b=(k_1,k_2,0) \\ \hat\vartheta^n_{k_1k_2}(\tilde\rho)\,[\hat\varsigma^n_{k_1k_2}(\tilde\rho)](l_1) & \text{if } a=(k_1,k_2,0),\ b=(l_1,J,1) \\ \hat\xi^n_{k_1k_2}(\tilde\rho)\,[\hat\sigma^n_{k_1k_2}(\tilde\rho)](l_1,l_2) & \text{if } a=(k_1,k_2,1),\ b=(l_1,l_2,0) \\ \big(1-\hat\xi^n_{k_1k_2}(\tilde\rho)\big)[\hat\varsigma^n_{k_1k_2}(\tilde\rho)](l_1) & \text{if } a=(k_1,k_2,1),\ b=(l_1,J,1) \\ 1 & \text{if } a=(k_1,J,1),\ b=(k_1,J,1) \\ 0 & \text{otherwise.} \end{cases}
\]
Lemma 6 There exists a sequence $\{\xi_m\}_{m=-1}^{3C_M}$ with $\xi_{-1} = \frac{1}{M^{M^M}}$ and $3C_M\,\xi_m \le \xi_0$ for any $m\in\{1,\dots,3C_M\}$ such that for any $m\in\{-1,0,\dots,3C_M\}$, $i\in\{1,2,3\}$, $a_1,b_1,a_2,b_2\in\tilde S$ and $n\in\{1,\dots,C_M\}$, $\|\tilde\rho-\tilde\rho'\|_\infty \le \xi_{m+1}$ implies
\[
\|T_i^n(\tilde\rho)-T_i^n(\tilde\rho')\|_\infty \le \xi_m, \qquad \|\hat q^n(\tilde\rho)-\hat q^n(\tilde\rho')\|_\infty \le \xi_m,
\]
\[
\big|B^{3n-2}_{a_1b_1}(\tilde\rho)-B^{3n-2}_{a_1b_1}(\tilde\rho')\big| \le \xi_m, \qquad \big|B^{3n}_{a_1b_1}(\tilde\rho)-B^{3n}_{a_1b_1}(\tilde\rho')\big| \le \xi_m,
\]
\[
\big|B^{3n-2}_{a_1b_1}(\tilde\rho)B^{3n-2}_{a_2b_2}(\tilde\rho)-B^{3n-2}_{a_1b_1}(\tilde\rho')B^{3n-2}_{a_2b_2}(\tilde\rho')\big| \le \xi_m, \qquad \big|B^{3n}_{a_1b_1}(\tilde\rho)B^{3n}_{a_2b_2}(\tilde\rho)-B^{3n}_{a_1b_1}(\tilde\rho')B^{3n}_{a_2b_2}(\tilde\rho')\big| \le \xi_m.
\]
Proof. First we work with $T_2^n$. Define $F:\mathbb{N}\times\mathbb{N}\times\tilde\Delta \to \tilde\Delta$ as follows:
\[
[F(N,n,\tilde p)]_{kl0} = \begin{cases} \tilde p_{kl0} & \text{if } l\neq J \\ 0 & \text{if } l=J, \end{cases}
\qquad
[F(N,n,\tilde p)]_{kl1} = \begin{cases} \frac{\tilde p_{kJ1}\,\theta^n_{kl}(\tilde p, N)\,\tilde p_{lJ1}}{N} & \text{if } l\neq J \\[2pt] \tilde p_{kJ1}\Big(1-\sum_{l\in S}\frac{\theta^n_{kl}(\tilde p,N)\,\tilde p_{lJ1}}{N}\Big) & \text{if } l=J. \end{cases}
\]
It is easy to see that $T_2^n(\tilde\rho) = {}^*F(M,n,\tilde\rho)$.

For any $N,N'\in\mathbb{N}$, there exists a strictly increasing function $v_{NN'}$ (called a modulus of continuity) for $\{F(N,n,\cdot)\}_{1\le n\le N'}$ such that $\|F(N,n,\tilde p)-F(N,n,\tilde p')\|_\infty \le v_{NN'}(\|\tilde p-\tilde p'\|_\infty)$ for any $\tilde p,\tilde p'\in\tilde\Delta$. By the Transfer Principle, for any $N,N'\in{}^*\mathbb{N}$, there exists a strictly increasing function $v_{NN'}$ for $\{{}^*F(N,n,\cdot)\}_{1\le n\le N'}$ such that $\|{}^*F(N,n,\tilde\rho)-{}^*F(N,n,\tilde\rho')\|_\infty \le v_{NN'}(\|\tilde\rho-\tilde\rho'\|_\infty)$ for any $\tilde\rho,\tilde\rho'\in{}^*\tilde\Delta$. Letting $N=M$ and $N'=C_M$, it is clear that $\|T_2^n(\tilde\rho)-T_2^n(\tilde\rho')\|_\infty \le v_{NN'}(\|\tilde\rho-\tilde\rho'\|_\infty)$. For $T_1^n$, $T_3^n$, $\hat q^n$, $B^{3n-2}_{ab}$, $B^{3n}_{ab}$, $B^{3n-2}_{ab}B^{3n-2}_{ab}$ and $B^{3n}_{ab}B^{3n}_{ab}$, we can derive moduli of continuity in the same way. By taking the maximum, we obtain a common modulus of continuity $v$ for all these maps.

Let $\xi_{-1} = \frac{1}{M^{M^M}}$ and $w = v^{-1}$. Let $\xi_0 = w(\xi_{-1})$ and $\xi_m = \min\big\{w(\xi_{m-1}),\ \frac{\xi_0}{3C_M}\big\}$ for any $m\in\{1,\dots,3C_M\}$. It is then clear that $\|\tilde\rho-\tilde\rho'\|_\infty \le \xi_{m+1}$ implies all of the inequalities in the statement of the lemma.
Let $\hat M$ be the smallest hyperinteger greater than or equal to $\big(\frac{1}{\xi_{3C_M}}\big)^3$. We have the following estimate.

Lemma 7 Let $V^m = \{\omega^m\in\Omega^m : \|\tilde\rho^m(\omega)-U_1^m(\tilde\rho^0)\|_\infty \ge \xi_0\}$ and $V = \bigcup_{m=1}^{3C_M} V^m$; then $P_0(V) < \epsilon_0$, where $\epsilon_0 = \frac{3C_M K(K+1)}{\hat M^{1/3}}$. In particular,
\[
Q^m\big(\|\tilde\rho^m - U_1^m(\tilde\rho^0)\|_\infty \ge \xi_0\big) < \epsilon_0
\]
for any $m$ between $0$ and $3C_M$.
Proof. For the mutation step in period $n$, fix any $k,l\in S$:
\[
E^{\omega^{3n-3}}\big[\tilde\rho^{3n-2}_{kl0}\big] = \int_{\Omega^{3n-2}} \tilde\rho^{3n-2}_{kl0}(\omega^{3n-2})\,dQ^{\omega^{3n-3}}_{3n-2}
= \int_{\Omega^{3n-2}} \frac{1}{\hat M}\sum_{i\in I} \mathbf{1}_{kl0}\big(\tilde\beta_i^{3n-2}\big)\,dQ^{\omega^{3n-3}}_{3n-2}
\]
\[
= \sum_{k',l'\in S} \frac{1}{\hat M} \sum_{\tilde\beta_i^{3n-3}=(k',l',0)} \int_{\Omega^{3n-2}} \mathbf{1}_{kl0}\big(\tilde\beta_i^{3n-2}\big)\,dQ^{\omega^{3n-3}}_{3n-2}
= \sum_{k',l'\in S} \tilde\rho^{3n-3}_{k'l'0}(\omega^{3n-3})\,\hat\eta^n_{k'k}\,\hat\eta^n_{l'l}
= \big[T_1^n\big(\tilde\rho^{3n-3}(\omega^{3n-3})\big)\big]_{kl0}.
\]
By the same method, we can prove
\[
E^{\omega^{3n-3}}\big[\tilde\rho^{3n-2}\big] = T_1^n\big(\tilde\rho^{3n-3}(\omega^{3n-3})\big).
\]
Given $\omega^{3n-3}$ and $i\neq j$, it is clear that $\mathbf{1}_{klr}(\tilde\beta_i^{3n-2})$ and $\mathbf{1}_{klr}(\tilde\beta_j^{3n-2})$ are independent on $(\Omega^{3n-2},\mathcal{E}^{3n-2},Q^{\omega^{3n-3}}_{3n-2})$. Therefore,
\[
\mathrm{Var}^{\omega^{3n-3}}\big[\tilde\rho^{3n-2}_{klr}\big] = \mathrm{Var}^{\omega^{3n-3}}\Big[\frac{1}{\hat M}\sum_{i\in I}\mathbf{1}_{klr}\big(\tilde\beta_i^{3n-2}\big)\Big]
= \frac{1}{\hat M^2}\sum_{i\in I} \mathrm{Var}^{\omega^{3n-3}}\big[\mathbf{1}_{klr}\big(\tilde\beta_i^{3n-2}\big)\big]
\le \frac{1}{\hat M^2}\sum_{i\in I}\frac14 = \frac{1}{4\hat M}.
\]
By Chebyshev's Inequality,
\[
Q^{\omega^{3n-3}}_{3n-2}\Big(\|\tilde\rho^{3n-2}-T_1^n(\tilde\rho^{3n-3})\|_\infty \ge \frac{1}{\hat M^{1/3}}\Big)
\le \sum_{(k,l,r)\in\tilde S} Q^{\omega^{3n-3}}_{3n-2}\Big(\big|\tilde\rho^{3n-2}_{klr}-[T_1^n(\tilde\rho^{3n-3})]_{klr}\big| \ge \frac{1}{\hat M^{1/3}}\Big)
\le \frac{K(K+1)}{2\hat M^{1/3}}.
\]
Let $W^{3n-2} = \{\omega^{3n-2}\in\Omega^{3n-2} : \|\tilde\rho^{3n-2}(\omega^{3n-2})-T_1^n(\tilde\rho^{3n-3})\|_\infty \ge \frac{1}{\hat M^{1/3}}\}$; then
\[
P_0\big(W^{3n-2}\big) = \int_{\Omega^{3n-3}} Q^{\omega^{3n-3}}_{3n-2}\Big(\|\tilde\rho^{3n-2}-T_1^n(\tilde\rho^{3n-3})\|_\infty \ge \frac{1}{\hat M^{1/3}}\Big)\,dQ^{3n-3} \le \frac{K(K+1)}{2\hat M^{1/3}}.
\]
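The variance bound and the Chebyshev step can be illustrated by simulation. In the sketch below (Python with NumPy; the population size `M`, the type probability `p`, and the number of sample paths are illustrative), the empirical frequency of a type over `M` independent agents has variance $p(1-p)/M \le 1/(4M)$, and a deviation of size $M^{-1/3}$ is far rarer than the Chebyshev bound requires.

```python
import numpy as np

M, p = 10_000, 0.3
rng = np.random.default_rng(1)
# 2000 independent draws of the empirical type frequency over M agents.
freq = rng.binomial(M, p, size=2000) / M

var = freq.var()
# Var = p(1-p)/M, which is at most 1/(4M); allow sampling slack.
assert var <= 1.2 / (4 * M)

eps = M ** (-1 / 3)
tail = np.mean(np.abs(freq - p) >= eps)
# Chebyshev: P(|freq - p| >= eps) <= Var / eps^2.
assert tail <= 1 / (4 * M * eps ** 2) + 0.01
print(var, tail)
```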
For the step of type changing with break-up in period $n$, fix any $k,l\in S$:
\[
E^{\omega^{3n-1}}\big[\tilde\rho^{3n}_{kl0}\big] = \int_{\Omega^{3n}} \tilde\rho^{3n}_{kl0}(\omega^{3n})\,dQ^{\omega^{3n-1}}_{3n}
= \int_{\Omega^{3n}} \frac{1}{\hat M}\sum_{i\in I}\mathbf{1}_{kl0}\big(\tilde\beta_i^{3n}\big)\,dQ^{\omega^{3n-1}}_{3n}
\]
\[
= \frac{1}{\hat M}\sum_{\tilde\beta_i^{3n-1}=(k,l,0)} \int_{\Omega^{3n}} \mathbf{1}_{kl0}\big(\tilde\beta_i^{3n}\big)\,dQ^{\omega^{3n-1}}_{3n}
+ \frac{1}{\hat M}\sum_{k',l'\in S}\ \sum_{\tilde\beta_i^{3n-1}=(k',l',1)} \int_{\Omega^{3n}} \mathbf{1}_{kl0}\big(\tilde\beta_i^{3n}\big)\,dQ^{\omega^{3n-1}}_{3n}
\]
\[
= \tilde\rho^{3n-1}_{kl0}(\omega^{3n-1})\big(1-\hat\vartheta^n_{kl}\big) + \sum_{k',l'\in S} \tilde\rho^{3n-1}_{k'l'1}(\omega^{3n-1})\,\hat\xi^n_{k'l'}\,[\hat\sigma^n_{k'l'}](k,l)
= \big[T_3^n\big(\tilde\rho^{3n-1}(\omega^{3n-1})\big)\big]_{kl0}.
\]
By the same method, we can prove
\[
E^{\omega^{3n-1}}\big[\tilde\rho^{3n}\big] = T_3^n\big(\tilde\rho^{3n-1}(\omega^{3n-1})\big).
\]
Given $\omega^{3n-1}$ and $(i,j)$ such that $\hat\pi^{3n-1}(i,\omega^{3n-1})\neq j$, it is clear that $\mathbf{1}_{klr}(\tilde\beta_i^{3n})$ and $\mathbf{1}_{klr}(\tilde\beta_j^{3n})$ are independent on $(\Omega^{3n},\mathcal{E}^{3n},Q^{\omega^{3n-1}}_{3n})$. Let $A^n_{\omega^{3n-1}} = \{(i,j)\in I\times I : i<j,\ \bar\pi^n(i,\omega^{3n-1})=j\}$; then
\[
\mathrm{Var}^{\omega^{3n-1}}\big[\tilde\rho^{3n}_{klr}\big] = \mathrm{Var}^{\omega^{3n-1}}\Big[\frac{1}{\hat M}\sum_{i\in I}\mathbf{1}_{klr}\big(\tilde\beta_i^{3n}\big)\Big]
= \frac{1}{\hat M^2}\sum_{i\in I}\mathrm{Var}^{\omega^{3n-1}}\big[\mathbf{1}_{klr}\big(\tilde\beta_i^{3n}\big)\big]
+ \frac{2}{\hat M^2}\sum_{(i,j)\in A^n_{\omega^{3n-1}}} \mathrm{Cov}\big(\mathbf{1}_{klr}(\tilde\beta_i^{3n}),\ \mathbf{1}_{klr}(\tilde\beta_j^{3n})\big)
\]
\[
\le \frac{1}{\hat M^2}\sum_{i\in I}\frac14 + \frac{2}{\hat M^2}\cdot\frac{\hat M}{2}\cdot\frac14 = \frac{1}{2\hat M}.
\]
By Chebyshev's Inequality,
\[
Q^{\omega^{3n-1}}_{3n}\Big(\|\tilde\rho^{3n}-T_3^n(\tilde\rho^{3n-1})\|_\infty \ge \frac{1}{\hat M^{1/3}}\Big)
\le \sum_{(k,l,r)\in\tilde S} Q^{\omega^{3n-1}}_{3n}\Big(\big|\tilde\rho^{3n}_{klr}-[T_3^n(\tilde\rho^{3n-1})]_{klr}\big| \ge \frac{1}{\hat M^{1/3}}\Big)
\le \frac{K(K+1)}{\hat M^{1/3}}.
\]
Let $W^{3n} = \{\omega^{3n}\in\Omega^{3n} : \|\tilde\rho^{3n}(\omega^{3n})-T_3^n(\tilde\rho^{3n-1})\|_\infty \ge \frac{1}{\hat M^{1/3}}\}$; then
\[
P_0\big(W^{3n}\big) = \int_{\Omega^{3n-1}} Q^{\omega^{3n-1}}_{3n}\Big(\|\tilde\rho^{3n}-T_3^n(\tilde\rho^{3n-1})\|_\infty \ge \frac{1}{\hat M^{1/3}}\Big)\,dQ^{3n-1} \le \frac{K(K+1)}{\hat M^{1/3}}.
\]
For the step of random matching in period $n$, it is clear that $\tilde\rho^{3n-1}_{kl0} = \tilde\rho^{3n-2}_{kl0}$ for any $l\in S$. By the construction of static random matching, for any $\omega\in\Omega$ and $k,l\in S$,
\[
\Big|\tilde\rho^{3n-1}_{kl1} - \big[T_2^n(\tilde\rho^{3n-2})\big]_{kl1}\Big| \le \frac{1}{\hat M}.
\]
Then
\[
\Big|\tilde\rho^{3n-1}_{kJ1} - \big[T_2^n(\tilde\rho^{3n-2})\big]_{kJ1}\Big| \le \frac{K}{\hat M}
\]
for any $k\in S$. Let $W^{3n-1} = \{\omega^{3n-1}\in\Omega^{3n-1} : \|\tilde\rho^{3n-1}(\omega^{3n-1})-T_2^n(\tilde\rho^{3n-2})\|_\infty \ge \frac{1}{\hat M^{1/3}}\}$. Then $W^{3n-1}=\emptyset$.
Let $W = \big\{\omega\in\Omega : \|\tilde\rho^{m+1}(\omega)-T^{e(m+1)}_{f(m+1)}(\tilde\rho^m)\|_\infty \ge \frac{1}{\hat M^{1/3}}$ for some $m$ between $0$ and $3C_M\big\}$; then $W = \bigcup_{m=1}^{3C_M} W^m$, which implies
\[
P_0(W) \le \sum_{m=1}^{3C_M} P_0\big(W^m\big) \le \frac{3C_M K(K+1)}{\hat M^{1/3}} = \epsilon_0.
\]
Note that $\hat M \ge \big(\frac{1}{\xi_{3C_M}}\big)^3$, so $\frac{1}{\hat M^{1/3}} \le \xi_{3C_M}$. Therefore, if $\omega\notin W$, we have
\[
\|\tilde\rho^m - U_1^m(\tilde\rho^0)\|_\infty \le \|\tilde\rho^m - U_m^m(\tilde\rho^{m-1})\|_\infty + \|U_m^m(\tilde\rho^{m-1}) - U_1^m(\tilde\rho^0)\|_\infty
\]
\[
\le \|\tilde\rho^m - U_m^m(\tilde\rho^{m-1})\|_\infty + \|U_m^m(\tilde\rho^{m-1}) - U_{m-1}^m(\tilde\rho^{m-2})\|_\infty + \|U_{m-1}^m(\tilde\rho^{m-2}) - U_1^m(\tilde\rho^0)\|_\infty
\]
\[
\le \|\tilde\rho^m - U_m^m(\tilde\rho^{m-1})\|_\infty + \sum_{j=1}^{m-1} \|U_{j+1}^m(\tilde\rho^j) - U_j^m(\tilde\rho^{j-1})\|_\infty
\le \frac{m}{\hat M^{1/3}} \le 3C_M\,\xi_{3C_M} \le 3C_M\,\xi_1 \le \xi_0.
\]
Then $\omega$ is not in $V$, which implies $V \subseteq W$. Therefore, $P_0(V) \le P_0(W) \le \frac{3C_M K(K+1)}{\hat M^{1/3}} = \epsilon_0$.
Lemma 8 For any $\omega\notin V$, $\tilde\rho^{3n-2}_{kJ1} \ge \frac{1}{\hat M^{1/15}}$ for any integer $n$ between $0$ and $C_M$.

Proof. Note that $\hat\vartheta^n_{kl} \ge \frac{1}{M^2}$ and $\hat\xi^n_{kl} \le 1-\frac{1}{M^2}$. Then
\[
\sum_{k\in S} \big[U_1^{3n}(\tilde\rho^0)\big]_{kJ1} = \sum_{k\in S} \big[T_3^n\big(U_1^{3n-1}(\tilde\rho^0)\big)\big]_{kJ1}
\]
\[
= \sum_{k,k',l'\in S} \big(1-\hat\xi^n_{k'l'}\big)\big[U_1^{3n-1}(\tilde\rho^0)\big]_{k'l'1}\,\hat\varsigma^n_{k'l'}(k)
+ \sum_{k,k',l'\in S} \hat\vartheta^n_{k'l'}\big[U_1^{3n-1}(\tilde\rho^0)\big]_{k'l'0}\,\hat\varsigma^n_{k'l'}(k)
+ \sum_{k\in S}\big[U_1^{3n-1}(\tilde\rho^0)\big]_{kJ1}
\]
\[
= \sum_{k',l'\in S} \big(1-\hat\xi^n_{k'l'}\big)\big[U_1^{3n-1}(\tilde\rho^0)\big]_{k'l'1}
+ \sum_{k',l'\in S} \hat\vartheta^n_{k'l'}\big[U_1^{3n-1}(\tilde\rho^0)\big]_{k'l'0}
+ \sum_{k\in S}\big[U_1^{3n-1}(\tilde\rho^0)\big]_{kJ1}
\]
\[
\ge \frac{1}{M^2}\Big(\sum_{k',l'\in S}\big[U_1^{3n-1}(\tilde\rho^0)\big]_{k'l'1} + \sum_{k',l'\in S}\big[U_1^{3n-1}(\tilde\rho^0)\big]_{k'l'0} + \sum_{k\in S}\big[U_1^{3n-1}(\tilde\rho^0)\big]_{kJ1}\Big) = \frac{1}{M^2}.
\]
Since $\hat\eta^n_{kl} \ge \frac{1}{M^2}$ for any $k,l\in S$ with $k\neq l$, we then have
\[
\big[U_1^{3n-2}(\tilde\rho^0)\big]_{kJ1} = \sum_{r\in S}\big[U_1^{3n-3}(\tilde\rho^0)\big]_{rJ1}\,\hat\eta^n_{rk}
\ge \frac{1}{M^2}\sum_{r\in S}\big[U_1^{3n-3}(\tilde\rho^0)\big]_{rJ1} \ge \frac{1}{M^4}.
\]
For any fixed $\omega\notin V$, we have $\|\tilde\rho^{3n-2}(\omega)-U_1^{3n-2}(\tilde\rho^0)\|_\infty \le \xi_0$, and hence
\[
\tilde\rho^{3n-2}_{kJ1} \ge \big[U_1^{3n-2}(\tilde\rho^0)\big]_{kJ1} - \xi_0 \ge \frac{1}{M^4} - \xi_{-1}.
\]
Note that $\xi_{-1} = \frac{1}{M^{M^M}} \le \frac{1}{2M^4}$ and, since $\xi_m$ is decreasing in $m$,
\[
\frac{1}{\hat M^{1/15}} \le \xi_{3C_M}^{1/5} \le \xi_{-1}^{1/5} = \frac{1}{M^{M^M/5}} \le \frac{1}{2M^4}.
\]
It is then clear that
\[
\tilde\rho^{3n-2}_{kJ1} \ge \frac{1}{M^4} - \xi_{-1} \ge \frac{1}{2M^4} \ge \frac{1}{\hat M^{1/15}}.
\]
Lemma 9 For any $\omega^{3n-2}\notin V^{3n-2}$, if $\tilde\beta_i^{3n-2}=(k_1,J,1)$ and $\tilde\beta_j^{3n-2}=(k_2,J,1)$, then
\[
\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\hat g_i^{3n-1}=l_1\big) - \hat q^n_{k_1l_1}(\tilde\rho^{3n-2})\Big| \le \frac{1}{\hat M^{1/9}},
\qquad
\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\hat g_i^{3n-1}=J\big) - \hat q^n_{k_1}(\tilde\rho^{3n-2})\Big| \le \frac{1}{\hat M^{1/9}},
\]
\[
\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\hat g_i^{3n-1}=l_1,\ \hat g_j^{3n-1}=l_2\big) - \hat q^n_{k_1l_1}(\tilde\rho^{3n-2})\,\hat q^n_{k_2l_2}(\tilde\rho^{3n-2})\Big| \le \frac{1}{\hat M^{1/9}},
\]
\[
\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\hat g_i^{3n-1}=l_1,\ \hat g_j^{3n-1}=J\big) - \hat q^n_{k_1l_1}(\tilde\rho^{3n-2})\,\hat q^n_{k_2}(\tilde\rho^{3n-2})\Big| \le \frac{1}{\hat M^{1/9}},
\]
\[
\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\hat g_i^{3n-1}=J,\ \hat g_j^{3n-1}=J\big) - \hat q^n_{k_1}(\tilde\rho^{3n-2})\,\hat q^n_{k_2}(\tilde\rho^{3n-2})\Big| \le \frac{1}{\hat M^{1/9}}.
\]

Proof. If $\tilde\rho^{3n-2}_{k_1J1}\,\hat q^n_{k_1l_1} > \frac{1}{\hat M^{1/5}}$, then $\tilde\rho^{3n-2}_{k_1J1} > \frac{1}{\hat M^{1/5}} > \frac{1}{\hat M^{1/3}}$. By Lemma 1,
\[
\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\hat g_i^{3n-1}=l_1\big) - \hat q^n_{k_1l_1}\Big| \le \frac{1}{\hat M^{1/3}} \le \frac{1}{\hat M^{1/9}}.
\]
Note that $\big|Q^{\omega^{3n-2}}_{3n-1}(\hat g_i^{3n-1}=J) - \hat q^n_{k_1}\big| \le \sum_{l_1\in S}\big|Q^{\omega^{3n-2}}_{3n-1}(\hat g_i^{3n-1}=l_1) - \hat q^n_{k_1l_1}\big|$; then
\[
\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\hat g_i^{3n-1}=J\big) - \hat q^n_{k_1}\Big| \le \frac{K}{\hat M^{1/3}} \le \frac{1}{\hat M^{1/9}}.
\]
If $\tilde\rho^{3n-2}_{k_1J1}\,\hat q^n_{k_1l_1}(\tilde\rho^{3n-2}) \le \frac{1}{\hat M^{1/5}}$, it is easy to prove $\hat q^n_{k_1l_1}(\tilde\rho^{3n-2}) \le \frac{1}{\hat M^{2/15}}$, since $\tilde\rho^{3n-2}_{k_1J1} \ge \frac{1}{\hat M^{1/15}}$ by Lemma 8. Note that $Q^{\omega^{3n-2}}_{3n-1}(\hat g_i^{3n-1}=l_1) \le \hat q^n_{k_1l_1}$; then
\[
\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\hat g_i^{3n-1}=l_1\big) - \hat q^n_{k_1l_1}(\tilde\rho^{3n-2})\Big| = \hat q^n_{k_1l_1}(\tilde\rho^{3n-2}) - Q^{\omega^{3n-2}}_{3n-1}\big(\hat g_i^{3n-1}=l_1\big) \le \hat q^n_{k_1l_1}(\tilde\rho^{3n-2}) \le \frac{1}{\hat M^{2/15}} \le \frac{1}{\hat M^{1/9}}.
\]
Similarly, we can derive
\[
\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\hat g_i^{3n-1}=J\big) - \hat q^n_{k_1}\Big| \le \frac{K}{\hat M^{2/15}} \le \frac{1}{\hat M^{1/9}}.
\]
We can prove the remaining parts in the same way.
For any $A,B\in\mathcal{E}^{3C_M}$, define
\[
P_0(A|B) = \begin{cases} \frac{P_0(A\cap B)}{P_0(B)} & \text{if } P_0(B)>0 \\ 0 & \text{otherwise.} \end{cases}
\]
We have the following estimate.

Lemma 10 For any $a,b\in\tilde S$ and $F^m\in\mathcal{F}^m$ such that $P_0(\tilde\beta_i^m=a,\ \mathbf{1}_{F^m}=1) > 0$,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=b \,\big|\, \tilde\beta_i^m=a,\ \mathbf{1}_{F^m}=1\big) - P_0\big(\tilde\beta_i^{m+1}=b \,\big|\, \tilde\beta_i^m=a\big)\Big| \le \frac{2\xi_{-1}+4\epsilon_0}{P_0\big(\{\tilde\beta_i^m=a\}\cap F^m\big)} + \frac{2}{\hat M^{1/9}}.
\]
Proof. Let $D_1 = \{\omega^m\in\Omega^m : \tilde\beta_i^m=a\}\cap F^m$ and $D_2 = \{\omega^m\in\Omega^m : \tilde\beta_i^m=a\}$; then
\[
P_0\big(\tilde\beta_i^{m+1}=b \,\big|\, \{\tilde\beta_i^m=a\}\cap F^m\big) = \frac{1}{Q^m(D_1)}\int_{D_1} Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=b\big)\,dQ^m
\]
and
\[
P_0\big(\tilde\beta_i^{m+1}=b \,\big|\, \tilde\beta_i^m=a\big) = \frac{1}{Q^m(D_2)}\int_{D_2} Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=b\big)\,dQ^m.
\]
For the step of random matching in period $n$, we can assume $a=(k_1,J,1)$ and $b=(k_1,k_2,1)$ or $b=(k_1,J,1)$; otherwise $P_0(\tilde\beta_i^{3n-1}=b\,|\,\tilde\beta_i^{3n-2}=a,\ \mathbf{1}_{F^{3n-2}}=1)$ and $P_0(\tilde\beta_i^{3n-1}=b\,|\,\tilde\beta_i^{3n-2}=a)$ are both $0$ or $1$. If $b=(k_1,k_2,1)$, by Lemma 7 and Lemma 9,
\[
\Big|P_0\big(\tilde\beta_i^{3n-1}=b\,\big|\,\tilde\beta_i^{3n-2}=a,\ \mathbf{1}_{F^{3n-2}}=1\big) - \frac{1}{Q^m(D_1)}\int_{D_1}\hat q^n_{k_1k_2}(\tilde\rho^{3n-2})\,dQ^m\Big|
\]
\[
\le \frac{1}{Q^m(D_1)}\int_{D_1\cap V}\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\tilde\beta_i^{3n-1}=b\big) - \hat q^n_{k_1k_2}\big(\tilde\rho^{3n-2}_{\omega^{3n-2}}\big)\Big|\,dQ^m
+ \frac{1}{Q^m(D_1)}\int_{D_1\setminus V}\Big|Q^{\omega^{3n-2}}_{3n-1}\big(\tilde\beta_i^{3n-1}=b\big) - \hat q^n_{k_1k_2}\big(\tilde\rho^{3n-2}_{\omega^{3n-2}}\big)\Big|\,dQ^m
\]
\[
\le \frac{P_0(V)}{Q^m(D_1)} + \frac{P_0(D_1\setminus V)}{Q^m(D_1)}\cdot\frac{1}{\hat M^{1/9}}
\le \frac{\epsilon_0}{Q^m(D_1)} + \frac{1}{\hat M^{1/9}}.
\]
Note that $\|\tilde\rho^{3n-2}(\omega^{3n-2})-U_1^{3n-2}(\tilde\rho^0)\|_\infty \le \xi_0$ for all $\omega\notin V$; by Lemma 6,
\[
\Big|\hat q^n_{k_1k_2}\big(U_1^{3n-2}(\tilde\rho^0)\big) - \frac{1}{Q^m(D_1)}\int_{D_1}\hat q^n_{k_1k_2}\big(\tilde\rho^{3n-2}_{\omega^{3n-2}}\big)\,dQ^m\Big|
\]
\[
\le \frac{1}{Q^m(D_1)}\int_{D_1\cap V}\Big|\hat q^n_{k_1k_2}\big(U_1^{3n-2}(\tilde\rho^0)\big) - \hat q^n_{k_1k_2}\big(\tilde\rho^{3n-2}_{\omega^{3n-2}}\big)\Big|\,dQ^m
+ \frac{1}{Q^m(D_1)}\int_{D_1\setminus V}\Big|\hat q^n_{k_1k_2}\big(U_1^{3n-2}(\tilde\rho^0)\big) - \hat q^n_{k_1k_2}\big(\tilde\rho^{3n-2}_{\omega^{3n-2}}\big)\Big|\,dQ^m
\]
\[
\le \frac{P_0(V)}{Q^m(D_1)} + \frac{P_0(D_1\setminus V)}{Q^m(D_1)}\,\xi_{-1}
\le \frac{\epsilon_0}{Q^m(D_1)} + \xi_{-1}.
\]
Therefore,
\[
\Big|P_0\big(\tilde\beta_i^{3n-1}=b\,\big|\,\tilde\beta_i^{3n-2}=a,\ \mathbf{1}_{F^{3n-2}}=1\big) - \hat q^n_{k_1k_2}\big(U_1^{3n-2}(\tilde\rho^0)\big)\Big| \le \frac{2\epsilon_0}{Q^m(D_1)} + \frac{1}{\hat M^{1/9}} + \xi_{-1}.
\]
We can prove
\[
\Big|P_0\big(\tilde\beta_i^{3n-1}=b\,\big|\,\tilde\beta_i^{3n-2}=a\big) - \hat q^n_{k_1k_2}\big(U_1^{3n-2}(\tilde\rho^0)\big)\Big| \le \frac{2\epsilon_0}{Q^m(D_2)} + \frac{1}{\hat M^{1/9}} + \xi_{-1}
\]
in the same way. Then
\[
\Big|P_0\big(\tilde\beta_i^{3n-1}=b\,\big|\,\tilde\beta_i^{3n-2}=a,\ \mathbf{1}_{F^{3n-2}}=1\big) - P_0\big(\tilde\beta_i^{3n-1}=b\,\big|\,\tilde\beta_i^{3n-2}=a\big)\Big| \le \frac{2\xi_{-1}+4\epsilon_0}{P_0\big(\tilde\beta_i^{3n-2}=a,\ \mathbf{1}_{F^{3n-2}}=1\big)} + \frac{2}{\hat M^{1/9}}.
\]
If $b=(k_1,J,1)$, we can derive the above inequality in the same way.
For the mutation step in period $n$, it is easy to prove
\[
P_0\big(\tilde\beta_i^{3n-2}=b\,\big|\,\tilde\beta_i^{3n-3}=a,\ \mathbf{1}_{F^{3n-3}}=1\big)
= \frac{1}{Q^m(D_1)}\int_{D_1} Q^{\omega^{3n-3}}_{3n-2}\big(\tilde\beta_i^{3n-2}=b\big)\,dQ^m
= \frac{1}{Q^m(D_1)}\int_{D_1} B^{3n-2}_{ab}\big(\tilde\rho^{3n-3}_{\omega^{3n-3}}\big)\,dQ^m, \tag{14}
\]
where $B^{3n-2}_{ab}$ is as defined in Section 7.5.2. Therefore,
\[
\Big|P_0\big(\tilde\beta_i^{3n-2}=b\,\big|\,\tilde\beta_i^{3n-3}=a,\ \mathbf{1}_{F^{3n-3}}=1\big) - B^{3n-2}_{ab}\big(U_1^{3n-3}(\tilde\rho^0)\big)\Big|
\le \frac{1}{Q^m(D_1)}\int_{D_1}\Big|B^{3n-2}_{ab}\big(\tilde\rho^{3n-3}_{\omega^{3n-3}}\big) - B^{3n-2}_{ab}\big(U_1^{3n-3}(\tilde\rho^0)\big)\Big|\,dQ^m
\]
\[
= \frac{1}{Q^m(D_1)}\int_{D_1\cap V}\Big|B^{3n-2}_{ab}\big(\tilde\rho^{3n-3}_{\omega^{3n-3}}\big) - B^{3n-2}_{ab}\big(U_1^{3n-3}(\tilde\rho^0)\big)\Big|\,dQ^m
+ \frac{1}{Q^m(D_1)}\int_{D_1\setminus V}\Big|B^{3n-2}_{ab}\big(\tilde\rho^{3n-3}_{\omega^{3n-3}}\big) - B^{3n-2}_{ab}\big(U_1^{3n-3}(\tilde\rho^0)\big)\Big|\,dQ^m.
\]
By Lemmas 6 and 7, we know
\[
\Big|B^{3n-2}_{ab}\big(\tilde\rho^{3n-3}_{\omega^{3n-3}}\big) - B^{3n-2}_{ab}\big(U_1^{3n-3}(\tilde\rho^0)\big)\Big| \le \xi_{-1}
\]
for any $\omega\notin V$. Then
\[
\Big|P_0\big(\tilde\beta_i^{3n-2}=b\,\big|\,\tilde\beta_i^{3n-3}=a,\ \mathbf{1}_{F^{3n-3}}=1\big) - B^{3n-2}_{ab}\big(U_1^{3n-3}(\tilde\rho^0)\big)\Big| \le \frac{\epsilon_0}{Q^m(D_1)} + \xi_{-1}.
\]
We can prove
\[
\Big|P_0\big(\tilde\beta_i^{3n-2}=b\,\big|\,\tilde\beta_i^{3n-3}=a\big) - B^{3n-2}_{ab}\big(U_1^{3n-3}(\tilde\rho^0)\big)\Big| \le \frac{\epsilon_0}{Q^m(D_2)} + \xi_{-1}
\]
in the same way. Therefore
\[
\Big|P_0\big(\tilde\beta_i^{3n-2}=b\,\big|\,\tilde\beta_i^{3n-3}=a,\ \mathbf{1}_{F^{3n-3}}=1\big) - P_0\big(\tilde\beta_i^{3n-2}=b\,\big|\,\tilde\beta_i^{3n-3}=a\big)\Big| \le \frac{2\xi_{-1}+2\epsilon_0}{Q^m(D_1)}.
\]
For the step of type changing with break-up in period $n$, it is easy to prove
\[
P_0\big(\tilde\beta_i^{3n}=b\,\big|\,\tilde\beta_i^{3n-1}=a,\ \mathbf{1}_{F^{3n-1}}=1\big)
= \frac{1}{Q^m(D_1)}\int_{D_1} Q^{\omega^{3n-1}}_{3n}\big(\tilde\beta_i^{3n}=b\big)\,dQ^m
= \frac{1}{Q^m(D_1)}\int_{D_1} B^{3n}_{ab}\big(\tilde\rho^{3n-1}_{\omega^{3n-1}}\big)\,dQ^m,
\]
where $B^{3n}_{ab}$ is as defined in Section 7.5.2. Therefore,
\[
\Big|P_0\big(\tilde\beta_i^{3n}=b\,\big|\,\tilde\beta_i^{3n-1}=a,\ \mathbf{1}_{F^{3n-1}}=1\big) - B^{3n}_{ab}\big(U_1^{3n-1}(\tilde\rho^0)\big)\Big|
\le \frac{1}{Q^m(D_1)}\int_{D_1}\Big|B^{3n}_{ab}\big(\tilde\rho^{3n-1}_{\omega^{3n-1}}\big) - B^{3n}_{ab}\big(U_1^{3n-1}(\tilde\rho^0)\big)\Big|\,dQ^m
\]
\[
= \frac{1}{Q^m(D_1)}\int_{D_1\cap V}\Big|B^{3n}_{ab}\big(\tilde\rho^{3n-1}_{\omega^{3n-1}}\big) - B^{3n}_{ab}\big(U_1^{3n-1}(\tilde\rho^0)\big)\Big|\,dQ^m
+ \frac{1}{Q^m(D_1)}\int_{D_1\setminus V}\Big|B^{3n}_{ab}\big(\tilde\rho^{3n-1}_{\omega^{3n-1}}\big) - B^{3n}_{ab}\big(U_1^{3n-1}(\tilde\rho^0)\big)\Big|\,dQ^m.
\]
By Lemmas 6 and 7, we know
\[
\Big|B^{3n}_{ab}\big(\tilde\rho^{3n-1}_{\omega^{3n-1}}\big) - B^{3n}_{ab}\big(U_1^{3n-1}(\tilde\rho^0)\big)\Big| \le \xi_{-1}
\]
for any $\omega\notin V$. Then
\[
\Big|P_0\big(\tilde\beta_i^{3n}=b\,\big|\,\tilde\beta_i^{3n-1}=a,\ \mathbf{1}_{F^{3n-1}}=1\big) - B^{3n}_{ab}\big(U_1^{3n-1}(\tilde\rho^0)\big)\Big| \le \frac{\epsilon_0}{Q^m(D_1)} + \xi_{-1}. \tag{15}
\]
We can prove
\[
\Big|P_0\big(\tilde\beta_i^{3n}=b\,\big|\,\tilde\beta_i^{3n-1}=a\big) - B^{3n}_{ab}\big(U_1^{3n-1}(\tilde\rho^0)\big)\Big| \le \frac{\epsilon_0}{Q^m(D_2)} + \xi_{-1}
\]
in the same way. Therefore
\[
\Big|P_0\big(\tilde\beta_i^{3n}=b\,\big|\,\tilde\beta_i^{3n-1}=a,\ \mathbf{1}_{F^{3n-1}}=1\big) - P_0\big(\tilde\beta_i^{3n}=b\,\big|\,\tilde\beta_i^{3n-1}=a\big)\Big| \le \frac{2\xi_{-1}+2\epsilon_0}{Q^m(D_1)}.
\]
By combining the results of steps 1, 2 and 3 in each period, we derive
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=b\,\big|\,\tilde\beta_i^m=a,\ \mathbf{1}_{F^m}=1\big) - P_0\big(\tilde\beta_i^{m+1}=b\,\big|\,\tilde\beta_i^m=a\big)\Big| \le \frac{2\xi_{-1}+4\epsilon_0}{P_0\big(\{\tilde\beta_i^m=a\}\cap F^m\big)} + \frac{2}{\hat M^{1/9}}
\]
for any integer $m < 3C_M$.
7.5.3 Proof of Lemma 2
We only need to prove that we can find a sequence of positive numbers $\{c_m\}_{1\le m\le 3C_M}$ such that $c_m$ is infinitesimal for every $m$ and
\[
\Big|P_0\big(\tilde\beta_i^m=a,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big) - P_0\big(\tilde\beta_i^m=a,\ \tilde\beta_i^{m_1}=a_1\big)P_0\big(\tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)\Big| \le c_m
\]
for any $m, m_1, m_2$ such that $m > m_1 > m_2$. Note that $c_1 = 0$. Suppose we have already derived $c_m$; we need to find the relationship between $c_m$ and $c_{m+1}$. It is easy to see that
\[
P_0\big(\tilde\beta_i^{m+1}=a,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big)
= \sum_{b\in\tilde S} P_0\big(\tilde\beta_i^{m+1}=a,\ \tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big)
\]
\[
= \sum_{b\in\tilde S} P_0\big(\tilde\beta_i^{m+1}=a\,\big|\,\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big).
\]
Let
\[
Q = \sum_{b\in\tilde S} P_0\big(\tilde\beta_i^{m+1}=a\,\big|\,\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1\big)P_0\big(\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big)
\]
and $A = \{b\in\tilde S : P_0(\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2) > 0\}$. It is easy to prove
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big) - Q\Big|
\]
\[
= \Big|\sum_{b\in A}\Big[P_0\big(\tilde\beta_i^{m+1}=a\,\big|\,\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big) - P_0\big(\tilde\beta_i^{m+1}=a\,\big|\,\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1\big)\Big] P_0\big(\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big)\Big|.
\]
By Lemma 10, writing the difference of conditional probabilities as
\[
\Big[P_0\big(\tilde\beta_i^{m+1}=a\,\big|\,\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big) - P_0\big(\tilde\beta_i^{m+1}=a\,\big|\,\tilde\beta_i^m=b\big)\Big]
+ \Big[P_0\big(\tilde\beta_i^{m+1}=a\,\big|\,\tilde\beta_i^m=b\big) - P_0\big(\tilde\beta_i^{m+1}=a\,\big|\,\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1\big)\Big],
\]
we obtain
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big) - Q\Big|
\]
\[
\le \sum_{b\in A}\Big(\frac{2\xi_{-1}+4\epsilon_0}{P_0(\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2)} + \frac{2}{\hat M^{1/9}} + \frac{2\xi_{-1}+4\epsilon_0}{P_0(\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1)} + \frac{2}{\hat M^{1/9}}\Big) P_0\big(\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big)
\]
\[
\le 2K(K+1)(4\xi_{-1}+8\epsilon_0) + \frac{4}{\hat M^{1/9}}.
\]
Note that
\[
\Big|P_0\big(\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big) - P_0\big(\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1\big)P_0\big(\tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)\Big| \le c_m;
\]
we can prove
\[
\Big|Q - \sum_{b\in\tilde S} P_0\big(\tilde\beta_i^{m+1}=a\,\big|\,\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1\big)P_0\big(\tilde\beta_i^m=b,\ \tilde\beta_i^{m_1}=a_1\big)P_0\big(\tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)\Big|
\]
\[
= \Big|Q - \sum_{b\in\tilde S} P_0\big(\tilde\beta_i^m=b,\ \tilde\beta_i^{m+1}=a,\ \tilde\beta_i^{m_1}=a_1\big)P_0\big(\tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)\Big| \le 2K(K+1)\,c_m.
\]
Then
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a,\ \tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)P_0\big(\tilde\beta_i^{m_1}=a_1\big) - P_0\big(\tilde\beta_i^{m+1}=a,\ \tilde\beta_i^{m_1}=a_1\big)P_0\big(\tilde\beta_i^{m_1}=a_1,\ \tilde\beta_i^{m_2}=a_2\big)\Big|
\le 2K(K+1)(4\xi_{-1}+8\epsilon_0+c_m) + \frac{4}{\hat M^{1/9}}.
\]
Let $c_{m+1} = 2K(K+1)(4\xi_{-1}+8\epsilon_0+c_m) + \frac{4}{\hat M^{1/9}}$. By mathematical induction, it is easy to prove that
\[
c_m \le 2^{2m} K^{2m} (K+1)^{2m} \Big(4\xi_{-1} + 8\epsilon_0 + \frac{4}{\hat M^{1/9}}\Big).
\]
Note that $\xi_{-1} = \frac{1}{M^{M^M}}$, $\epsilon_0 = \frac{3C_M K(K+1)}{\hat M^{1/3}}$ and $\hat M \ge M^{M^M}$; hence $c_m$ is infinitesimal for all $m \le 3C_M$.
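The induction here can be checked mechanically: a recursion of the form $c_{m+1} = A\,c_m + B$ with $c_1=0$ satisfies $c_m \le A^m B\, m$, so $c_m$ stays negligible whenever $B$ is small relative to $A^{-m}$. A sketch (Python; `K` and the tiny constant `B` are illustrative stand-ins for the model's $2K(K+1)$ and $4\xi_{-1}+8\epsilon_0+4/\hat M^{1/9}$):

```python
K = 3
A = 2 * K * (K + 1)   # stand-in for the factor 2K(K+1)
B = 1e-40             # stand-in for the infinitesimal additive term

c = 0.0
for m in range(1, 21):
    # Claimed closed-form bound at step m: c_m <= A^m * B * m.
    assert c <= (A ** m) * B * m
    c = A * c + B     # one step of the recursion c_{m+1} = A*c_m + B

print(c)
```

After 20 steps the iterate is of order $A^{20} B$, still vanishingly small, mirroring how the hyperfinite bound stays infinitesimal because $\hat M \ge M^{M^M}$ dominates the geometric growth in $K$.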
7.5.4 Proof of Lemma 3
For any $i\in I$ and $m \le 3C_M$, let $F_{ij}^m = \{\omega\in\Omega : \hat\pi_i^m(\omega)=j\}$. It is clear that for any $i\in I$, $F_{ij}^m \cap F_{ij'}^m = \emptyset$ if $j\neq j'$. For any $i\in I$, let $F_i^m = \{j\in I : P_0(F_{ij}^m) \ge \frac{1}{\hat M^{1/9}}\}$. Then $\lambda_0(F_i^m) \le \frac{1}{\hat M^{8/9}}$. Let $F_i = \{j\in I : \exists\, m \le 3C_M \text{ such that } P_0(F_{ij}^m) \ge \frac{1}{\hat M^{1/9}}\}$; then
\[
\lambda_0(F_i) \le \frac{3C_M}{\hat M^{8/9}} \simeq 0.
\]
We only need to find a sequence of positive numbers $\{d_m\}_{0\le m\le 3C_M}$ such that $d_m$ is infinitesimal for every $m$ and
\[
\Big|P_0\big(\tilde\beta_i^m=a_1,\ \tilde\beta_j^m=a_2,\ \tilde\beta_i^{m-1}=b_1,\ \tilde\beta_j^{m-1}=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(\tilde\beta_i^m=a_1,\ \tilde\beta_i^{m-1}=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^m=a_2,\ \tilde\beta_j^{m-1}=b_2,\ \tilde\beta_j^{m_0}=c_2\big)\Big| \le d_m
\]
for any $i,j\in I$ such that $j\notin F_i$ and $m_0 < m-1$. It is clear that $d_0 = 0$. Suppose we have already derived $d_m$; we need to find the relationship between $d_m$ and $d_{m+1}$. It is clear that
\[
P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
= \int_{D^{ij}_{b_1b_2}} Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2\big)\,dQ^m,
\]
where $D^{ij}_{b_1b_2} = \{\tilde\beta_i^m(\omega)=b_1,\ \tilde\beta_j^m(\omega)=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\}$.
For the step of random matching in period $n$, $m = 3n-2$. If $b_1=(k_1,J,1)$ and $b_2=(k_2,J,1)$, we can assume $a_1=(k_1,l_1,1)$ and $a_2=(k_2,l_2,1)$, where $l_1,l_2\in S\cup\{J\}$; otherwise
\[
P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
= P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big) = 0.
\]
By Lemma 9,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big) - \int_{D^{ij}_{b_1b_2}} \hat q^n_{k_1l_1}(\tilde\rho^m)\,\hat q^n_{k_2l_2}(\tilde\rho^m)\,dQ^m\Big|
\]
\[
= \Big|\int_{D^{ij}_{b_1b_2}} \Big(Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2\big) - \hat q^n_{k_1l_1}(\tilde\rho^m)\,\hat q^n_{k_2l_2}(\tilde\rho^m)\Big)\,dQ^m\Big|
\]
\[
\le \Big|\int_{D^{ij}_{b_1b_2}\setminus V} \Big(Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2\big) - \hat q^n_{k_1l_1}(\tilde\rho^m)\,\hat q^n_{k_2l_2}(\tilde\rho^m)\Big)\,dQ^m\Big| + P_0(V)
\le \epsilon_0 + \frac{1}{\hat M^{1/9}}.
\]
By Lemmas 6 and 7,
\[
\Big|\int_{D^{ij}_{b_1b_2}} \hat q^n_{k_1l_1}(\tilde\rho^m)\,\hat q^n_{k_2l_2}(\tilde\rho^m)\,dQ^m - P_0\big(D^{ij}_{b_1b_2}\big)\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\,\hat q^n_{k_2l_2}\big(U_1^m(\tilde\rho^0)\big)\Big|
\]
\[
\le \int_{D^{ij}_{b_1b_2}\setminus V} \Big|\hat q^n_{k_1l_1}(\tilde\rho^m)\,\hat q^n_{k_2l_2}(\tilde\rho^m) - \hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\,\hat q^n_{k_2l_2}\big(U_1^m(\tilde\rho^0)\big)\Big|\,dQ^m + P_0(V)
\]
\[
\le \int_{D^{ij}_{b_1b_2}\setminus V} \Big|\hat q^n_{k_1l_1}(\tilde\rho^m) - \hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\Big|\,dQ^m
+ \int_{D^{ij}_{b_1b_2}\setminus V} \Big|\hat q^n_{k_2l_2}(\tilde\rho^m) - \hat q^n_{k_2l_2}\big(U_1^m(\tilde\rho^0)\big)\Big|\,dQ^m + P_0(V)
\le 2\xi_{-1} + \epsilon_0.
\]
Therefore,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(D^{ij}_{b_1b_2}\big)\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\,\hat q^n_{k_2l_2}\big(U_1^m(\tilde\rho^0)\big)\Big| \le 2\epsilon_0 + \frac{1}{\hat M^{1/9}} + 2\xi_{-1}.
\]
We can prove
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big) - P_0\big(D^i_{b_1}\big)\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\Big| \le 2\epsilon_0 + \frac{1}{\hat M^{1/9}} + 2\xi_{-1}
\]
in a similar way, where $D^i_{b_1} = \{\tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\}$. Then, by the triangle inequality,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(D^i_{b_1}\big)P_0\big(D^j_{b_2}\big)\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\,\hat q^n_{k_2l_2}\big(U_1^m(\tilde\rho^0)\big)\Big|
\le 4\epsilon_0 + \frac{2}{\hat M^{1/9}} + 4\xi_{-1}.
\]
Let $D^{ij}_{b_1b_2b_1'b_2'} = \{\tilde\beta_i^m(\omega)=b_1,\ \tilde\beta_j^m(\omega)=b_2,\ \tilde\beta_i^{m-1}(\omega)=b_1',\ \tilde\beta_j^{m-1}(\omega)=b_2',\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\}$ and $D_{i,b_1b_1'} = \{\tilde\beta_i^m(\omega)=b_1,\ \tilde\beta_i^{m-1}(\omega)=b_1',\ \tilde\beta_i^{m_0}=c_1\}$. Then
\[
\Big|P_0\big(D^{ij}_{b_1b_2}\big) - P_0\big(D^i_{b_1}\big)P_0\big(D^j_{b_2}\big)\Big|
= \Big|\sum_{b_1',b_2'\in\tilde S} \Big(P_0\big(D^{ij}_{b_1b_2b_1'b_2'}\big) - P_0\big(D_{i,b_1b_1'}\big)P_0\big(D_{j,b_2b_2'}\big)\Big)\Big| \le 4K^2(K+1)^2\,d_m.
\]
Therefore,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)\Big|
\]
\[
\le \Big|P_0\big(D^{ij}_{b_1b_2}\big) - P_0\big(D^i_{b_1}\big)P_0\big(D^j_{b_2}\big)\Big|\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\,\hat q^n_{k_2l_2}\big(U_1^m(\tilde\rho^0)\big) + 6\epsilon_0 + \frac{3}{\hat M^{1/9}} + 6\xi_{-1}
\le 6\epsilon_0 + \frac{3}{\hat M^{1/9}} + 6\xi_{-1} + 4K^2(K+1)^2\,d_m. \tag{16}
\]
If $b_1=(k_1,J,1)$ and $b_2=(k_2,l_2,0)$, we can assume $a_1=(k_1,l_1,1)$ and $a_2=b_2=(k_2,l_2,0)$, where $l_1\in S\cup\{J\}$; otherwise
\[
P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
= P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big) = 0.
\]
By Lemma 9,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big) - \int_{D^{ij}_{b_1b_2}} \hat q^n_{k_1l_1}(\tilde\rho^m)\,dQ^m\Big|
= \Big|\int_{D^{ij}_{b_1b_2}} \Big(Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=a_1\big) - \hat q^n_{k_1l_1}(\tilde\rho^m)\Big)\,dQ^m\Big|
\]
\[
\le \Big|\int_{D^{ij}_{b_1b_2}\setminus V} \Big(Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=a_1\big) - \hat q^n_{k_1l_1}(\tilde\rho^m)\Big)\,dQ^m\Big| + P_0(V)
\le \epsilon_0 + \frac{1}{\hat M^{1/9}}.
\]
By Lemmas 6 and 7,
\[
\Big|\int_{D^{ij}_{b_1b_2}} \hat q^n_{k_1l_1}(\tilde\rho^m)\,dQ^m - P_0\big(D^{ij}_{b_1b_2}\big)\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\Big|
\le \int_{D^{ij}_{b_1b_2}\setminus V} \Big|\hat q^n_{k_1l_1}(\tilde\rho^m) - \hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\Big|\,dQ^m + P_0(V)
\le \xi_{-1} + \epsilon_0.
\]
Therefore,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big) - P_0\big(D^{ij}_{b_1b_2}\big)\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\Big| \le 2\epsilon_0 + \frac{1}{\hat M^{1/9}} + \xi_{-1}.
\]
We can prove
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big) - P_0\big(D^i_{b_1}\big)\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\Big| \le 2\epsilon_0 + \frac{1}{\hat M^{1/9}} + \xi_{-1}
\]
in a similar way, where $D^i_{b_1} = \{\tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\}$. Then
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big) - P_0\big(D^i_{b_1}\big)P_0\big(D^j_{b_2}\big)\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\Big|
\]
\[
\le \Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(D^j_{b_2}\big) - P_0\big(D^i_{b_1}\big)P_0\big(D^j_{b_2}\big)\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big)\Big| \le 2\epsilon_0 + \frac{1}{\hat M^{1/9}} + \xi_{-1}.
\]
Therefore,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)\Big|
\]
\[
\le \Big|P_0\big(D^{ij}_{b_1b_2}\big) - P_0\big(D^i_{b_1}\big)P_0\big(D^j_{b_2}\big)\Big|\,\hat q^n_{k_1l_1}\big(U_1^m(\tilde\rho^0)\big) + 4\epsilon_0 + \frac{2}{\hat M^{1/9}} + 2\xi_{-1}
\le 4\epsilon_0 + \frac{2}{\hat M^{1/9}} + 2\xi_{-1} + 4K^2(K+1)^2\,d_m. \tag{17}
\]
If $b_1=(k_1,l_1,0)$ and $b_2=(k_2,l_2,0)$, it is easy to see that
\[
P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
= \delta_{b_1}(a_1)\,\delta_{b_2}(a_2)\,P_0\big(\tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
\]
and
\[
P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big) = \delta_{b_1}(a_1)\,P_0\big(\tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big).
\]
Then
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)\Big|
\le \Big|P_0\big(D^{ij}_{b_1b_2}\big) - P_0\big(D^i_{b_1}\big)P_0\big(D^j_{b_2}\big)\Big| \le 4K^2(K+1)^2\,d_m. \tag{18}
\]
For the step of random matching, by combining Equations (16), (17) and (18), we can derive
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)\Big|
\le 6\epsilon_0 + \frac{3}{\hat M^{1/9}} + 6\xi_{-1} + 4K^2(K+1)^2\,d_m. \tag{19}
\]
For the step of random mutation in period $n$, $m = 3n-3$, with $B^{3n-2}_{ab}$ as defined in Section 7.5.2. By the construction, it is clear that
\[
Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2\big) = Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=a_1\big)\,Q^{\omega^m}_{m+1}\big(\tilde\beta_j^{m+1}=a_2\big)
\]
if $\pi_i^m(\omega)\neq j$. Then
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(D^{ij}_{b_1b_2}\big)\,B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,B^{3n-2}_{b_2a_2}\big(U_1^m(\tilde\rho^0)\big)\Big|
\]
\[
\le \int_{D^{ij}_{b_1b_2}} \Big|Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2\big) - B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,B^{3n-2}_{b_2a_2}\big(U_1^m(\tilde\rho^0)\big)\Big|\,dQ^m
\]
\[
= \int_{D^{ij}_{b_1b_2}\setminus(F^m_{ij}\cup V)} \Big|B^{3n-2}_{b_1a_1}\big(\tilde\rho^m_{\omega^m}\big)\,B^{3n-2}_{b_2a_2}\big(\tilde\rho^m_{\omega^m}\big) - B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,B^{3n-2}_{b_2a_2}\big(U_1^m(\tilde\rho^0)\big)\Big|\,dQ^m
+ \int_{D^{ij}_{b_1b_2}\cap(F^m_{ij}\cup V)} \Big|Q^{\omega^m}_{m+1}\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2\big) - B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,B^{3n-2}_{b_2a_2}\big(U_1^m(\tilde\rho^0)\big)\Big|\,dQ^m
\]
\[
\le \int_{D^{ij}_{b_1b_2}\setminus(F^m_{ij}\cup V)} \Big|B^{3n-2}_{b_1a_1}\big(\tilde\rho^m_{\omega^m}\big)\,B^{3n-2}_{b_2a_2}\big(\tilde\rho^m_{\omega^m}\big) - B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,B^{3n-2}_{b_2a_2}\big(U_1^m(\tilde\rho^0)\big)\Big|\,dQ^m + P_0\big(F^m_{ij}\cup V\big).
\]
By Lemmas 6 and 7,
\[
\Big|B^{3n-2}_{b_1a_1}\big(\tilde\rho^m_{\omega^m}\big)\,B^{3n-2}_{b_2a_2}\big(\tilde\rho^m_{\omega^m}\big) - B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,B^{3n-2}_{b_2a_2}\big(U_1^m(\tilde\rho^0)\big)\Big| \le \xi_{-1}
\]
for any $\omega\notin V$. Then, since $j\notin F_i$ implies $P_0(F^m_{ij}) < \frac{1}{\hat M^{1/9}}$,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(D^{ij}_{b_1b_2}\big)\,B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,B^{3n-2}_{b_2a_2}\big(U_1^m(\tilde\rho^0)\big)\Big|
\le P_0\big(D^{ij}_{b_1b_2}\big)\,\xi_{-1} + \frac{1}{\hat M^{1/9}} + \epsilon_0
\le \xi_{-1} + \frac{1}{\hat M^{1/9}} + \epsilon_0.
\]
By Equation (14),
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big) - P_0\big(D^i_{b_1}\big)\,B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\Big| \le \epsilon_0 + \xi_{-1}.
\]
Therefore,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(D^i_{b_1}\big)P_0\big(D^j_{b_2}\big)\,B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,B^{3n-2}_{b_2a_2}\big(U_1^m(\tilde\rho^0)\big)\Big|
\]
\[
\le \Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(D^i_{b_1}\big)\,B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)\Big|
\]
\[
+ \Big|P_0\big(D^i_{b_1}\big)\,B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(D^i_{b_1}\big)P_0\big(D^j_{b_2}\big)\,B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,B^{3n-2}_{b_2a_2}\big(U_1^m(\tilde\rho^0)\big)\Big|
\le 2\epsilon_0 + 2\xi_{-1}.
\]
Therefore,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)\Big|
\]
\[
\le \Big|P_0\big(D^{ij}_{b_1b_2}\big) - P_0\big(D^i_{b_1}\big)P_0\big(D^j_{b_2}\big)\Big|\,B^{3n-2}_{b_1a_1}\big(U_1^m(\tilde\rho^0)\big)\,B^{3n-2}_{b_2a_2}\big(U_1^m(\tilde\rho^0)\big) + 3\xi_{-1} + \frac{1}{\hat M^{1/9}} + 3\epsilon_0
\le 3\xi_{-1} + \frac{1}{\hat M^{1/9}} + 3\epsilon_0 + 4K^2(K+1)^2\,d_m. \tag{20}
\]
For the step of type changing with break-up, we can derive Equation (20) in the same way. Therefore,
\[
\Big|P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_j^{m+1}=a_2,\ \tilde\beta_i^m=b_1,\ \tilde\beta_j^m=b_2,\ \tilde\beta_i^{m_0}=c_1,\ \tilde\beta_j^{m_0}=c_2\big)
- P_0\big(\tilde\beta_i^{m+1}=a_1,\ \tilde\beta_i^m=b_1,\ \tilde\beta_i^{m_0}=c_1\big)P_0\big(\tilde\beta_j^{m+1}=a_2,\ \tilde\beta_j^m=b_2,\ \tilde\beta_j^{m_0}=c_2\big)\Big|
\le 6\epsilon_0 + \frac{3}{\hat M^{1/9}} + 6\xi_{-1} + 4K^2(K+1)^2\,d_m \tag{21}
\]
for any $m \le 3C_M$. Let $d_{m+1} = 6\epsilon_0 + \frac{3}{\hat M^{1/9}} + 6\xi_{-1} + 4K^2(K+1)^2\,d_m$. By mathematical induction, it is easy to prove that
\[
d_m \le 4^{2m} K^{4m} (K+1)^{4m} \Big(6\epsilon_0 + \frac{3}{\hat M^{1/9}} + 6\xi_{-1}\Big).
\]
Note that $\xi_{-1} = \frac{1}{M^{M^M}}$, $\epsilon_0 = \frac{3C_M K(K+1)}{\hat M^{1/3}}$ and $\hat M \ge M^{M^M}$; hence $d_m$ is infinitesimal for all $m$.
7.5.5 Proof of Lemma 4
First, it is easy to see that
\[
P_0\big(\hat X_i^{m+\Delta m}=\hat X_i^m \,\big|\, F^m\big) = P_0\big(\hat X_i^{m+\Delta m-1}=\hat X_i^m \,\big|\, F^m\big)\,P_0\big(\hat X_i^{m+\Delta m}=\hat X_i^m \,\big|\, \hat X_i^{m+\Delta m-1}=\hat X_i^m,\ F^m\big).
\]
For the step of random mutation, we take $m+\Delta m = 3n-2$. If $P_0(\hat X_i^{m+\Delta m-1}=\hat X_i^m,\ F^m) > 0$, then
\[
P_0\big(\hat X_i^{m+\Delta m}=\hat X_i^m \,\big|\, \hat X_i^{m+\Delta m-1}=\hat X_i^m,\ F^m\big)
= \frac{\int_{\{\omega:\,\hat X_i^{m+\Delta m-1}=\hat X_i^m\}\cap F^m} Q^{\omega^{m+\Delta m-1}}_{m+\Delta m}\big(\hat\alpha_i^{m+\Delta m}=\hat\alpha_i^{m+\Delta m-1},\ \hat g_i^{m+\Delta m}=\hat g_i^{m+\Delta m-1}\big)\,dQ^m}{P_0\big(\hat X_i^{m+\Delta m-1}=\hat X_i^m,\ F^m\big)}
\]
\[
\ge \frac{\int_{\{\omega:\,\hat X_i^{m+\Delta m-1}=\hat X_i^m\}\cap F^m} \Big(1-\frac{K\bar\eta^{e(m+\Delta m)}}{M}\Big)^2\,dQ^m}{P_0\big(\hat X_i^{m+\Delta m-1}=\hat X_i^m,\ F^m\big)}
= \Big(1-\frac{K\bar\eta^{e(m+\Delta m)}}{M}\Big)^2.
\]
Therefore,
\[
P_0\big(\hat X_i^{m+\Delta m}=\hat X_i^m \,\big|\, F^m\big) \ge P_0\big(\hat X_i^{m+\Delta m-1}=\hat X_i^m \,\big|\, F^m\big)\Big(1-\frac{K\bar\eta^{e(m+\Delta m)}}{M}\Big)^2. \tag{22}
\]
If $P_0(\hat X_i^{m+\Delta m-1}=\hat X_i^m,\ F^m) = 0$, then the inequality above is trivially satisfied. For the steps of random matching and type changing with break-up, we can derive
\[
P_0\big(\hat X_i^{m+\Delta m}=\hat X_i^m \,\big|\, F^m\big) \ge P_0\big(\hat X_i^{m+\Delta m-1}=\hat X_i^m \,\big|\, F^m\big)\Big(1-\frac{K\bar q^{e(m+\Delta m)}}{M}\Big) \tag{23}
\]
and
\[
P_0\big(\hat X_i^{m+\Delta m}=\hat X_i^m \,\big|\, F^m\big) \ge P_0\big(\hat X_i^{m+\Delta m-1}=\hat X_i^m \,\big|\, F^m\big)\Big(1-\frac{K\bar\vartheta^{e(m+\Delta m)}}{M}\Big) \tag{24}
\]
respectively. By combining Inequalities (22), (23) and (24), we can derive
\[
P_0\big(\hat X_i^{m+\Delta m}=\hat X_i^m \,\big|\, F^m\big)
\ge P_0\big(\hat X_i^{m+\Delta m-1}=\hat X_i^m \,\big|\, F^m\big)\Big(1-\frac{K\bar\eta^{e(m+\Delta m)}}{M}\Big)^2\Big(1-\frac{K\bar q^{e(m+\Delta m)}}{M}\Big)\Big(1-\frac{K\bar\vartheta^{e(m+\Delta m)}}{M}\Big)
\]
\[
\ge P_0\big(\hat X_i^{m}=\hat X_i^m \,\big|\, F^m\big)\prod_{m'=m+1}^{m+\Delta m}\Big(1-\frac{K\bar\eta^{e(m')}}{M}\Big)^2\Big(1-\frac{K\bar q^{e(m')}}{M}\Big)\Big(1-\frac{K\bar\vartheta^{e(m')}}{M}\Big)
\]
\[
\ge \Big(1-\frac{K\bar\eta^{e(m+\Delta m)}}{M}\Big)^{2\Delta m}\Big(1-\frac{K\bar q^{e(m+\Delta m)}}{M}\Big)^{\Delta m}\Big(1-\frac{K\bar\vartheta^{e(m+\Delta m)}}{M}\Big)^{\Delta m}
\simeq e^{-\frac{K\Delta m\left(2\bar\eta^{e(m+\Delta m)}+\bar q^{e(m+\Delta m)}+\bar\vartheta^{e(m+\Delta m)}\right)}{M}}. \tag{25}
\]
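The infinite-closeness step at the end of (25) is the standard approximation $(1-x)^n \approx e^{-nx}$ for small $x$. A numerical sketch (Python; `M`, `dm` and `x` are illustrative stand-ins for $M$, $\Delta m$ and the rate bounds $\bar\eta, \bar q, \bar\vartheta$):

```python
import math

# Illustrative values: the per-step change probability x/M is tiny,
# while the number of steps dm can be large.
M, dm, x = 10**6, 5000, 2.0

# Product of survival factors, as in the last line of (25).
prod = (1 - x / M) ** (2 * dm) * (1 - x / M) ** dm * (1 - x / M) ** dm
# Exponential approximation with exponent -dm*(2x + x + x)/M.
approx = math.exp(-dm * (2 * x + x + x) / M)

assert abs(prod - approx) < 1e-4
print(prod, approx)
```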
7.5.6 Proof of Lemma 5
For any $(k,l,r)\in\tilde S$,
\[
\Big|E\big[\tilde\rho^{m+\Delta m}_{klr}\big] - E\big[\tilde\rho^m_{klr}\big]\Big|
= \Big|E\Big[\frac{1}{\hat M}\sum_{i\in I}\mathbf{1}_{klr}\big(\tilde\beta_i^{m+\Delta m}\big)\Big] - E\Big[\frac{1}{\hat M}\sum_{i\in I}\mathbf{1}_{klr}\big(\tilde\beta_i^m\big)\Big]\Big|
\le \frac{1}{\hat M}\sum_{i\in I} E\Big|\mathbf{1}_{klr}\big(\tilde\beta_i^{m+\Delta m}\big) - \mathbf{1}_{klr}\big(\tilde\beta_i^m\big)\Big|
\le \frac{1}{\hat M}\sum_{i\in I} P_0\big(\hat X_i^{m+\Delta m} > \hat X_i^m\big).
\]
By Lemma 4,
\[
P_0\big(\hat X_i^{m+\Delta m} > \hat X_i^m\big) \lesssim 1 - e^{-\frac{K\Delta m\left(2\bar\eta^{e(m+\Delta m)}+\bar q^{e(m+\Delta m)}+\bar\vartheta^{e(m+\Delta m)}\right)}{M}}.
\]
Therefore,
\[
\Big|E\big[\tilde\rho^{m+\Delta m}_{klr}\big] - E\big[\tilde\rho^m_{klr}\big]\Big| \lesssim 1 - e^{-\frac{K\Delta m\left(2\bar\eta^{e(m+\Delta m)}+\bar q^{e(m+\Delta m)}+\bar\vartheta^{e(m+\Delta m)}\right)}{M}},
\]
which implies
\[
\big\|E\big[\tilde\rho^{m+\Delta m}\big] - E\big[\tilde\rho^m\big]\big\|_\infty \lesssim 1 - e^{-\frac{K\Delta m\left(2\bar\eta^{e(m+\Delta m)}+\bar q^{e(m+\Delta m)}+\bar\vartheta^{e(m+\Delta m)}\right)}{M}}.
\]
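The first chain of inequalities above simply says that the change in any type's expected fraction is bounded by the fraction of agents whose status changed. A toy check (Python with NumPy; the agent array size, the number of type codes, and the 5% change rate are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
x_old = rng.integers(0, 4, size=1000)   # types coded as small ints
x_new = x_old.copy()
flip = rng.random(1000) < 0.05          # roughly 5% of agents redraw status
x_new[flip] = rng.integers(0, 4, size=flip.sum())

changed = np.mean(x_new != x_old)
for t in range(4):
    # |freq_new(t) - freq_old(t)| <= fraction of agents that changed,
    # since |1_t(new) - 1_t(old)| <= 1_{new != old} pointwise.
    diff = abs(np.mean(x_new == t) - np.mean(x_old == t))
    assert diff <= changed + 1e-12
print(changed)
```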
References
[1] D. Acemoglu and R. Shimer, Holdups and efficiency with search frictions, International
Economic Review 40 (1999), 827–849.
[2] R. Battalio, L. Samuelson, and J. Van Huyck, Optimization incentives and coordination
failure in laboratory stag hunt games, Econometrica 69 (2001), 749–764.
[3] M. Benaı̈m and J. Weibull, Deterministic approximation of stochastic evolution in games,
Econometrica 71 (2003), 873–903.
[4] K. Binmore and L. Samuelson, Evolutionary drift and equilibrium selection, Review of
Economic Studies 66 (1999), 363–393.
[5] K. Burdzy, D. M. Frankel, and A. Pauzner, Fast equilibrium selection by rational players
living in a changing world, Econometrica 69 (2001), 163–189.
[6] I. Bomze, Immanuel, Lotka-volterra equation and replicator dynamics:
dimensional classification, Biological Cybernetics 48 (1983), 201–211.
51
A two-
[7] T. Börgers, Tilman, Learning through reinforcement and replicator dynamics, Journal of
Economic Theory 77 (1997), 1–14.
[8] S. Currarini, Sergio, M. Jackson, and P. Pin, An economic model of friendship, homophily,
minorities, and segregation, Econometrica 77 (2009), 1003–1045.
[9] P. Diamond, Search, sticky prices, and inflation, Review of Economic Studies 60 (1993),
53–68.
[10] P. Diamond and J. Yellin, Inventories and money holdings in a search economy, Econometrica 58 (1990), 929–950.
[11] J. L. Doob, Stochastic Processes, Wiley, New York, 1953.
[12] D. Duffie, N. Gârleanu, and L. H. Pedersen, Valuation in over-the-counter markets, Review of Financial Studies 20 (2007), 1865–1900.
[13] D. Duffie, N. Gârleanu, and L. H. Pedersen, Over-the-counter markets, Econometrica 73
(2005), 1815–1847.
[14] D. Duffie, S. Malamud, and G. Manso, Information percolation with equilibrium search
dynamics, Econometrica (2009) 77, 1513–1574.
[15] D. Duffie and G. Manso, 2007, Information percolation in large markets, American Economic Review: Papers and Proceedings 97, 203–209.
[16] D. Duffie, L. Qiao and Y. N. Sun, Dynamic directed random matching, Working Paper,
2014.
[17] D. Duffie and Y. N. Sun, Existence of independent random matching, Annals of Applied
Probability 17 (2007), 386–419.
[18] D. Duffie and Y. N. Sun, The exact law of large numbers for independent random matching, Journal of Economic Theory 147 (2012), 1105–1139.
[19] M. Eigen, Self organization of matter and the evolution of biological macromolecules, Die
Naturwissenschaften (1971) 58, 465–523.
[20] C. Flinn, Minimum wage effects on labor market outcomes under search, matching, and
endogenous contact rates, Econometrica, 74 (2006), 1013–1062.
[21] D. Foster and P. Young, Stochastic evolutionary game dynamics, Theoretical Population Biology 38 (1990), 219–232.
[22] I. Gilboa and A. Matsui, A model of random matching, Journal of Mathematical Economics 21 (1992), 185–197.
[23] M. Hellwig, A model of monetary exchange, Econometric Research Program, Research
Memorandum Number 202, Princeton University, 1976.
[24] J. Hofbauer and W. Sandholm, Evolution in games with randomly disturbed payoffs,
Journal of Economic Theory 132 (2007), 47–69.
[25] E. Hopkins, Learning, matching, and aggregation, Games and Economic Behavior 26
(1999), 79–110.
[26] A. Hosios, On the efficiency of matching and related models of search and unemployment,
Review of Economic Studies 57 (1990), 279–298.
[27] J. Hugonnier, B. Lester, and P.-O. Weill, Heterogeneity in decentralized asset markets, Working Paper, EPFL, 2015.
[28] N. Kiyotaki and R. Lagos, A model of job and worker flows, Journal of Political Economy 115 (2007), 770–819.
[29] R. Lagos and G. Rocheteau, Liquidity in asset markets with search frictions, Econometrica 77 (2009), 403–426.
[30] B. Lester, G. Rocheteau, and P.-O. Weill, Competing for order flow in OTC markets, Working Paper, UCLA, 2015.
[31] L. Ljungqvist and T. Sargent, Recursive Macroeconomic Theory, MIT Press, Cambridge, MA, 2000.
[32] P. A. Loeb and M. Wolff, eds. Nonstandard Analysis for the Working Mathematician,
Second Edition, Springer, Berlin, 2015.
[33] D. Mortensen, Property rights and efficiency in mating, racing, and related games, American Economic Review 72 (1982), 968–979.
[34] D. Mortensen and C. Pissarides, Job creation and job destruction in the theory of unemployment, Review of Economic Studies 61 (1994), 397–415.
[35] G. Moscarini, Job matching and the wage distribution, Econometrica 73 (2005), 481–516.
[36] B. Petrongolo and C. Pissarides, Looking into the black box: a survey of the matching function, Journal of Economic Literature 39 (2001), 390–431.
[37] C. Pissarides, Short-run equilibrium dynamics of unemployment, vacancies, and real
wages, American Economic Review 75 (1985), 676–690.
[38] F. Postel-Vinay and J.-M. Robin, Equilibrium wage dispersion with worker and employer
heterogeneity, Econometrica 70 (2002), 2295–2350.
[39] P. Protter, Stochastic Integration and Differential Equations, Second Edition, Springer, New York, 2005.
[40] R. Rogerson, R. Shimer and R. Wright, Search-theoretic models of labor markets: a
survey, Journal of Economic Literature 43 (2005), 959–988.
[41] P. Rupert, M. Schindler, and R. Wright, The search-theoretic approach to monetary
economics: a primer, Journal of Monetary Economics 48 (2001), 605–622.
[42] P. Schuster and K. Sigmund, Replicator dynamics, Journal of Theoretical Biology 100 (1983), 533–538.
[43] S. Shi, Credit and money in a search model with divisible commodities, Review of Economic Studies 63 (1996), 627–652.
[44] S. Shi, A divisible search model of fiat money, Econometrica 65 (1997), 75–102.
[45] S. Shi and Q. Wen, Labor market search and the dynamic effects of taxes and subsidies,
Journal of Monetary Economics 43 (1999), 457–495.
[46] R. Shimer, The cyclical behavior of equilibrium unemployment and vacancies, American Economic Review 95 (2005), 25–49.
[47] Y. N. Sun, The exact law of large numbers via Fubini extension and characterization of
insurable risks, Journal of Economic Theory 126 (2006), 31–69.
[48] A. Trejos and R. Wright, Search, bargaining, money, and prices, Journal of Political
Economy 103 (1995), 118–140.
[49] S. Üslü, Pricing and liquidity in decentralized asset markets, Working Paper, Johns Hopkins University, 2016.
[50] D. Vayanos and T. Wang, Search and endogenous concentration of liquidity in asset markets, Journal of Economic Theory 136 (2007), 66–104.
[51] P.-O. Weill, Liquidity premia in dynamic bargaining markets, Journal of Economic Theory 140 (2008), 66–96.
[52] R. Zhou, Currency exchange in a random search model, Review of Economic Studies 64 (1997), 289–310.