Notes on Space- and Velocity-jump Models of Biological Movement

Hans G. Othmer
School of Mathematics
University of Minnesota

April 8, 2010

Contents

1 Background on continuum descriptions of motion
2 Elementary properties of random walks
3 Generalized random walks and their associated PDEs
  3.1 Analysis of moments
  3.2 Diffusion limits for the exponential waiting time distribution
4 Reinforced random walks
5 Velocity Jump Processes
  5.1 The telegraph process in one space dimension
  5.2 The general velocity-jump process
  5.3 The unbiased walk
  5.4 A biased walk in the presence of a chemotactic gradient
  5.5 Inclusion of a resting phase
References
1 Background on continuum descriptions of motion
In these notes we will describe deterministic continuum descriptions of the various stochastic processes used to describe the movement of biological cells and organisms. (There is a huge literature on this subject; a recent review that discusses some of the topics treated herein is Codling et al. (2008).) To understand the broad picture before delving into the details, let us first restrict attention to non-interacting particles. If the forces are deterministic and individuals are regarded as point masses, their motion can always be described by Newton's laws, and this leads to a classification of movement according to the properties of the forces involved. Initially the particles are regarded as structureless, but we admit the possibility that they can exert forces, and later we add internal states.

Firstly, if the forces are smooth bounded functions, then the governing equations are smooth and the paths are smooth functions of time. In a phase-space description in which the fundamental variables are position and
velocity, Newton's equations are

  dx/dt = v    (1)
  m dv/dt = F.    (2)
If we assume that the forces are independent of the velocity, then these are just the characteristic equations for
the hyperbolic equation

  ∂ρ/∂t + v·∇_x ρ + (F/m)·∇_v ρ = 0.    (3)
Here ρ is the density of individuals, so defined that ρ(x, v, t)dxdv is the number of individuals with position and
velocity in the phase volume (dxdv) centered at (x, v). If we define the number density n and the average velocity
u by
  n(x,t) = ∫ ρ(x,v,t) dv    (4)

  n(x,t) u(x,t) = ∫ ρ(x,v,t) v dv,    (5)

then the evolution of these average quantities is governed by

  ∂n/∂t + ∇_x·(nu) = 0.    (6)
If we admit impulsive (i.e., distributional) forces then we arrive at the second major type of movement, which
is called a velocity jump process in Othmer et al. (1988). In this case the motion consists of a sequence of “runs”
separated by re-orientations, during which a new velocity is chosen instantaneously. If we assume that the velocity
changes are the result of a Poisson process of intensity λ, then in the absence of other forces we show later that
we obtain the evolution equation
  ∂ρ/∂t + ∇_x·(vρ) = −λρ + λ ∫ T(v,v′) ρ(x,v′,t) dv′.    (7)
For most purposes one does not need the distribution ρ, but only its first few velocity moments. If we integrate
this over v we again obtain (6). Similarly, multiplying (7) by v and integrating over v gives
  ∂(nu)/∂t + ∇·∫ ρ vv dv = −λ nu + λ ∫∫ T(v,v′) v ρ(x,v′,t) dv′ dv.    (8)
Applications of this description will be given later and in subsequent lectures.
The final description of motion, which in a sense is the roughest, is the familiar random walk, in which there
are instantaneous changes in position at random times. These are called space-jump processes (Othmer et al.
1988), and later we show that the probability density for such a process satisfies the renewal equation
  P(x,t|0) = Φ̂(t)δ(x) + ∫_0^t ∫_{R^n} φ(t − τ) T(x,y) P(y,τ|0) dy dτ.    (9)
Here P (x, t|0) is the conditional probability that a walker who begins at the origin at time zero is at x at time t,
φ(t) is the density for the waiting time distribution, Φ̂(t) is the complementary cumulative distribution function
associated with φ(t), and T (x, y) is the redistribution kernel for the jump process.
If the initial distribution is given by F (x) then
  n(x,t) ≡ ∫_{R^n} P(x,t|x_0) F(x_0) dx_0

can be regarded as the number density of identical non-interacting walkers at x at time t. Clearly n(x,t) satisfies

  n(x,t) = Φ̂(t)F(x) + ∫_0^t ∫_{R^n} φ(t − τ) T(x,y) n(y,τ) dy dτ.    (10)
The final description used is one in which the changes of position or velocity are not generated by a jump
process, but rather by the presence of small fluctuating components of velocity and/or position. This leads to
the familiar stochastic differential equations
  dx = v dt + dX
  m dv = F dt + dV    (11)
where X and V are random displacements and velocities, respectively. This approach leads to a Fokker-Planck equation under suitable conditions on the fluctuating forces (van Kampen 1981).
2 Elementary properties of random walks
We begin with an unbiased random walk on the lattice Z in which the walker takes a step of length ∆ at intervals τ to one of its nearest neighbors, each step having probability 1/2. (See Chandrasekhar (1943) for a comprehensive development.) We then ask for the probability that a walker beginning at the origin will be at site m∆ > 0 after N steps. We first note that every path of length N has the same probability, namely (1/2)^N. Secondly, in order to be at m∆ after N steps the walker must have taken m more steps in the positive direction than in the negative direction, i.e., there must be (N + m)/2 steps to the right and (N − m)/2 steps to the left. (Note that m must be even or odd according as N is even or odd.)
Thus the probability p(m,N) of being at m after N steps, by whatever path, is

  p(m,N) = (1/2)^N (N choose (N+m)/2)    (12)
where (N choose k) denotes the binomial coefficient. If we assume that there are many steps but a small net displacement, then N is large and m ≪ N, and we use Stirling's approximation
log n! = (n + 1/2) log n − n + 1/2 log 2π + O(1/n)
for N → ∞, to obtain

  log p(m,N) ∼ (N + 1/2) log N − (1/2)(N + m + 1) log[(N/2)(1 + m/N)]
               − (1/2)(N − m + 1) log[(N/2)(1 − m/N)] − (1/2) log 2π − N log 2.

Because m ≪ N, we can expand the logarithms to obtain

  log p(m,N) ∼ −(1/2) log N + log 2 − (1/2) log 2π − m²/2N,

or

  p(m,N) ∼ √(2/πN) e^{−m²/2N}.
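The asymptotic form just derived can be checked directly against the exact binomial probability (12); a minimal numerical sketch (the values of N and m below are arbitrary):

```python
from math import comb, exp, pi, sqrt

# Exact p(m, N) from (12) versus the asymptotic form sqrt(2/(pi*N)) e^{-m^2/2N}.
# m and N must have the same parity; here both are even.
N = 1000
ms = (0, 10, 20)
exact = {m: comb(N, (N + m)//2) * 0.5**N for m in ms}
approx = {m: sqrt(2/(pi*N)) * exp(-m**2/(2*N)) for m in ms}
```

For N = 1000 the two expressions agree to better than one percent over this range of m.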
Now let x = m∆ and t = Nτ, and define

  P(x,t) dx = p(x/∆, t/τ) dx/(2∆),

for then

  P(x,t) dx = (1/√(2π∆²t/τ)) e^{−x²/(2∆²t/τ)} dx.

Thus far P(x,t) is only defined on a lattice, but if we let τ → 0 and ∆ → 0 while holding

  ∆²/τ = constant ≡ 2D,

we obtain

  P(x,t) = (1/√(4πDt)) e^{−x²/4Dt},    (13)
which is now defined for (x, t) ∈ R×R+ . It is easy to verify that P (x, t) is a solution, in fact called the fundamental
solution or Green’s function, for the parabolic initial-value problem
  ∂P/∂t = D ∂²P/∂x²,    x ∈ R,  t ∈ R⁺    (14)
  P(x,0) = δ(x),
where δ(·) is the Dirac distribution. It is easy to show that
  ∫_{−∞}^{∞} P(x,t) dx = 1,
  ∫_{−∞}^{∞} x P(x,t) dx = 0,    (15)
  ∫_{−∞}^{∞} x² P(x,t) dx = 2Dt.
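The moments (15) can be checked by simulating the underlying lattice walk at finite ∆ and τ; a minimal sketch (all parameter values arbitrary):

```python
import numpy as np

# Walk with step Delta every tau; after nsteps the elapsed time is t = nsteps*tau
# and (15) predicts mean 0 and mean square 2*D*t with D = Delta**2/(2*tau).
rng = np.random.default_rng(0)
Delta, tau, nsteps, nwalkers = 0.1, 0.005, 2000, 100000
D = Delta**2 / (2*tau)                         # = 1.0
t = nsteps * tau                               # = 10.0
k = rng.binomial(nsteps, 0.5, size=nwalkers)   # rightward steps per walker
x = Delta * (2*k - nsteps)                     # net displacement
mean_x, mean_x2 = x.mean(), (x**2).mean()      # theory: 0 and 2*D*t
```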
Remark 1 An alternate approach is to begin with a continuous-time random walk on a lattice, in which one specifies a rate of jumping rather than the interval between jumps. If the jumps are governed by a Poisson process of intensity λ and the jumps are restricted to nearest neighbors, then the probability P(n,t) of being at n at time t satisfies the Kolmogorov forward equation (or master equation)

  ∂P(n,t)/∂t = λP(n−1,t) − 2λP(n,t) + λP(n+1,t)    (16)
  P(n,0) = δ(n − n_0).
In this form one sees that the right-hand side can be viewed as a second-order finite difference approximation to
the right-hand side of (14). One can also see how the diffusion coefficient should be defined in an appropriate
limit. Later we will consider generalizations of this in which the intensities depend on another field.
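This can be seen numerically by integrating (16) on a truncated lattice: the second moment of P(·,t) grows as 2λt, which fixes the diffusion coefficient in the limit. A small sketch (parameters arbitrary):

```python
import numpy as np

# Forward-Euler integration of the master equation (16), starting from a
# delta at n0 = 0; the variance of P should grow as 2*lam*t.
lam, dt, t_end = 1.0, 1e-3, 2.0
n = np.arange(-60, 61)
P = np.zeros(n.size)
P[n.size // 2] = 1.0
for _ in range(int(t_end/dt)):
    P = P + dt*lam*(np.roll(P, 1) + np.roll(P, -1) - 2*P)
var = float((n**2 * P).sum())    # theory: 2*lam*t_end
```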
Next we introduce a barrier at y > 0. In the case of a reflecting barrier the method of images (cf. Figure 1) shows that

  P(x,t) = (1/√(4πDt)) {e^{−x²/4Dt} + e^{−(2y−x)²/4Dt}},    (17)

while for an absorbing barrier

  P(x,t) = (1/√(4πDt)) {e^{−x²/4Dt} − e^{−(2y−x)²/4Dt}}.    (18)

In the former case it follows that

  ∂P/∂x |_{x=y} = 0,

whereas in the latter case

  P(y,t) = 0,

[Figure 1: A path and its image for the case of a reflecting barrier.]
thus confirming the physical meaning of these boundary conditions.
The foregoing can be generalized to an arbitrary domain in the following sense. Let Ω = Rn or a bounded
subset of Rn with a smooth boundary. Let x denote the coordinates of a point in Ω. Then the probability that
a walker who begins at x_0 ∈ Ω at t = 0 is at x ∈ Ω at time t > 0 satisfies the diffusion equation

  ∂P/∂t = D∆P    (19)
  P(x,0) = δ(x − x_0),
plus boundary conditions if Ω is a bounded domain. Thus if one can find the Green’s function for (19) one can
immediately determine the probability.
In any case, if the domain is compact and has a smooth boundary then the solution of (19) has the eigenfunction expansion

  P(x,t) = Σ_n a_n e^{−λ_n t} ψ_n(x)    (20)

where the functions {ψ_n} satisfy

  D∆ψ_n = −λ_n ψ_n    (21)

with the appropriate boundary condition, and for a smooth domain these form a complete set. This eigenfunction expansion proves useful for many computations, for example for finding the mean lifetime of a walker, or the time to first capture.
Consider a bounded domain Ω with a smooth boundary. At t = 0

  ∫_Ω P(x,0) dx = 1,

but for t > 0 the integral may be less than one if the walker can escape, i.e., if either P = 0 on some portion of ∂Ω or a non-zero flux is specified on some portion of the boundary. In either case, if we integrate (19) over the domain and apply the divergence theorem, then

  d/dt ∫_Ω P(x,t) dx = − ∫_∂Ω n·j dS    (22)

where n is the outward normal and j is the flux. For the diffusion equation we have

  j = −D∇_x P,
and therefore (22) may be written

  d/dt ∫_Ω P(x,t) dx = ∫_∂Ω n·D∇_x P dS.    (23)

The right-hand side of this equation gives the probability per unit time of leaving the domain Ω, and so we define the waiting time distribution φ(t) as

  φ(t) = − ∫_∂Ω n·D∇_x P dS.    (24)

This only makes sense if the integral is non-positive, since otherwise the probability of escape would be negative. One may think of φ as the density of the waiting time for crossing the boundary in the following sense. If T is the first time the walker reaches the boundary, given that it begins in the interior of the domain, then

  φ(t) dt = Pr{t ≤ T ≤ t + dt}.
From the definition of φ it follows that

  Φ(t) ≡ ∫_0^t φ(s) ds = − ∫_0^t ∫_∂Ω n·D∇_x P(x,s) dS ds    (25)

is the probability that the walker will escape in the interval (0,t), i.e., that the walker lifetime in Ω is less than t. Similarly

  Φ̂(t) = − ∫_t^∞ ∫_∂Ω n·D∇_x P(x,s) dS ds = 1 − Φ(t)

is the probability that the walker is still in Ω at time t.

The mean lifetime of a walker in Ω is

  λ⁻¹ = ∫_0^∞ s φ(s) ds = − ∫_0^∞ s ∫_∂Ω n·D∇_x P(x,s) dS ds    (26)
and λ is the mean rate at which walkers leave Ω. If P(x,t) has the form given in (20) then

  φ(t) = − Σ_n a_n e^{−λ_n t} ∫_∂Ω n·D∇_x ψ_n dS

and therefore

  λ⁻¹ = Σ_n (a_n/λ_n²) [− ∫_∂Ω n·D∇_x ψ_n dS] ≡ Σ_n b_n/λ_n.    (27)

Here the a_n are determined by the initial data and (λ_n, ψ_n) is an eigenpair for the Laplacian on Ω.
As an example, consider a one-dimensional domain [L_1, L_2] with homogeneous conditions at the ends: Dirichlet at x = L_1 and Neumann at x = L_2. We have to solve the problem

  ∂P/∂t = D ∂²P/∂x²,    x ∈ (L_1, L_2)
  P = 0    at x = L_1
  ∂P/∂x = 0    at x = L_2    (28)
  P(x,0) = δ(x − x_0),    x_0 ∈ (L_1, L_2).

Without loss of generality we let L_1 = 0 and set L_2 = L.
One finds via separation of variables that

  λ_n = (n + 1/2)² π²D/L²    (29)
  ψ_n = √(2/L) sin((n + 1/2)πx/L),

and therefore

  P(x,t) = (2/L) Σ_{n=0}^∞ e^{−(n+1/2)²π²Dt/L²} sin((n + 1/2)πx/L) sin((n + 1/2)πx_0/L).

Note that at t = 0 this reduces to

  P(x,0) = (2/L) Σ_n sin((n + 1/2)πx/L) sin((n + 1/2)πx_0/L) = δ(x − x_0).
Thus

  φ(t) = (2πD/L²) Σ_{n=0}^∞ (n + 1/2) e^{−(n+1/2)²π²Dt/L²} sin((n + 1/2)πx_0/L)    (30)

and

  λ⁻¹ = ∫_0^∞ t φ(t) dt = (2L²/D) Σ_{n=0}^∞ sin((n + 1/2)πx_0/L) / [(n + 1/2)π]³.    (31)

If x_0 = 0 then clearly λ⁻¹ = 0, and if x_0 = L one finds that

  λ⁻¹ = (2L²/D) Σ_{n=0}^∞ sin((n + 1/2)π) / [(n + 1/2)π]³ = L²/2D.    (32)
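The series (31) can be checked against the closed form of the mean lifetime: solving D τ″ = −1 with τ(0) = 0 and τ′(L) = 0 gives τ(x_0) = x_0(2L − x_0)/(2D), and partial sums of (31) converge to this. A quick numerical sketch (L, D, and the sample points are arbitrary):

```python
import numpy as np

# Partial sums of the series (31) versus tau(x0) = x0*(2L - x0)/(2D).
L, D = 1.0, 1.0
n = np.arange(0, 5000)
series, closed = {}, {}
for x0 in (0.25, 0.5, 1.0):
    series[x0] = (2*L**2/D) * np.sum(np.sin((n + 0.5)*np.pi*x0/L)
                                     / ((n + 0.5)*np.pi)**3)
    closed[x0] = x0*(2*L - x0)/(2*D)
```

At x_0 = L both expressions give L²/(2D), and averaging over x_0 recovers L²/(3D).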
If we define the average lifetime over all initial points as

  λ̄⁻¹ = (1/L) ∫_0^L λ⁻¹ dx_0,

then we find from (31) that

  τ_1 ≡ λ̄⁻¹ = (2L²/(π⁴D)) Σ_{n=0}^∞ 1/(n + 1/2)⁴ = 2(2/π)⁴ (L²/D) Σ_{n=0}^∞ 1/(2n+1)⁴ = L²/3D    (33)

where we have used the fact that

  Σ_{n=0}^∞ 1/(2n+1)⁴ = π⁴/96.

Note that the average lifetime is two-thirds of the largest lifetime, and that τ_1 → ∞ as either L → ∞ or D → 0, as we should expect. It is also clear that if L_1 > 0 then we can simply replace L by L_2 − L_1 in the above derivation.
Remark 2 There are many ways to generalize this analysis. One is to consider the role of dimensionality in search problems. For example, consider the analog of the foregoing on a square in 2D, with homogeneous Neumann data on three sides and homogeneous Dirichlet data on the fourth. Then it can be shown that the average lifetime is the same as in the 1D problem, despite the fact that there is a larger area to explore. A second generalization is to show that one doesn't even have to solve for the Green's function for the transient problem: λ⁻¹(x_0) ≡ τ(x_0) is the solution of the nonhomogeneous problem

  ∇²τ = −1/D

with appropriate boundary conditions.
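The value L²/(3D) in (33) can also be recovered by direct Monte Carlo simulation of Brownian walkers with an absorbing end at x = 0 and a reflecting end at x = L; a rough sketch (step size, sample size, and parameters arbitrary):

```python
import numpy as np

# Euler-Maruyama walkers, started uniformly in (0, L), absorbed at 0,
# reflected at L; the mean exit time should be close to L**2/(3*D).
rng = np.random.default_rng(3)
L, D, dt, nwalk = 1.0, 0.5, 1e-3, 4000
x = rng.uniform(0.0, L, nwalk)
t_exit = np.zeros(nwalk)
alive = np.ones(nwalk, dtype=bool)
t = 0.0
while alive.any():
    t += dt
    xa = x[alive] + rng.normal(0.0, np.sqrt(2*D*dt), alive.sum())
    xa = np.where(xa > L, 2*L - xa, xa)   # reflect at x = L
    x[alive] = xa
    hit = np.zeros(nwalk, dtype=bool)
    hit[alive] = xa <= 0.0                # absorb at x = 0
    t_exit[hit] = t
    alive &= ~hit
tau_avg = t_exit.mean()                   # theory: L**2/(3*D)
```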
3 Generalized random walks and their associated PDEs
In this section we show that the theory of random space-jump processes can be generalized considerably. (See also Othmer et al. (1988) for the analysis in this section.) Consider a random jump process on R^n in which the walker executes a sequence of jumps of negligible duration, and suppose that the waiting times between successive jumps are independent and identically distributed. That is, if the jumps occur at T_0, T_1, ..., then the increments T_i − T_{i−1} are identically and independently distributed, and therefore the jump process is a semi-Markov process (Feller 1968; Karlin and Taylor 1975). Let T be the waiting time between jumps and let φ(t) be the density for the waiting time distribution. T is experimentally observable, and
in principle φ(t) can be determined from experimental observations. If a jump has occurred at t = 0 then

  φ(t) dt = Pr{t < T ≤ t + dt}.

The cumulative distribution function for the waiting times is

  Φ(t) = ∫_0^t φ(s) ds = Pr{T ≤ t}

and the complementary cumulative distribution function is

  Φ̂(t) = ∫_t^∞ φ(s) ds = 1 − Φ(t) = Pr{T ≥ t}.
For example, if the jumps are governed by a Poisson process then Φ(t) = 1 − e−λt and φ(t) = λe−λt . This is the
only smooth distribution for which the jump process is Markovian (Feller (1968), p. 458).
Next we must specify how jumpers are redistributed in space, given that a jump occurs. For simplicity we
shall assume that the spatial redistribution that occurs at jumps is independent of the waiting time distribution.
Thus the probability of a transition from y to x at time t will simply be the product of φ(t) and the function
that gives the probability of the jump from y to x. This assumption of statistical independence between the event
of deciding to jump and the event of deciding where to jump may clearly be too restrictive for some systems, for
the direction and length of a jump may very well depend on the time elapsed since the last jump. Our formulation
of the velocity jump process will incorporate some types of directional persistence, but for now we shall, in effect,
assume that we have infinitely energetic jumpers that have no recollection of their previous location.
Let T (x, y) be the probability density function for a jump from y to x. That is, if X(t) is a random variable
giving the jumper's position at time t, then, given that a jump occurs at T_i,

  T(x,y) dx = Pr{x ≤ X(T_i⁺) ≤ x + dx | X(T_i⁻) = y},    (34)
where the superscripts ± denote limits from the right and left, respectively. This definition allows for the possibility
that the underlying medium is spatially nonhomogeneous and nonisotropic, in which case the transition probability
depends on x and y separately. In the case of a homogeneous and isotropic medium T(x,y) = T̃(|x − y|), where T̃ gives the absolute (unconditioned) probability of a jump of length |x − y|.
One of the purposes of the analysis is to show how the functions φ(t) and T (x, y) can be related to experimentally observable quantities. The statistics most accessible from observations are the various moments of the
displacement and their dependence on t. To relate these to φ and T we must derive an evolution equation for the
density function P (x, t|0), which is defined so that P (x, t|0)dx is the probability that the position of a jumper
which begins at the origin at time t = 0 lies in the interval (x, x + dx) at time t. We shall derive this equation
via equations for some auxiliary quantities.
Let Qk (x, t) be the conditional probability that a jumper which begins at x = 0 at t = 0 takes its k th step
at t− and lands in the interval (x, x + dx). Then it is clear that for x > 0, t > 0, Qk satisfies the first-order
integro-difference equation

  Q_{k+1}(x,t) = ∫_0^t ∫_{R^n} φ(t − τ) T(x,y) Q_k(y,τ) dy dτ.
Summing this over k we obtain the density function for arriving in the interval (x, x + dx) at time t− after any
number of steps. Thus we obtain the Volterra integral equation

  Q(x,t) = Σ_{k=0}^∞ Q_k(x,t) = Q_0(x,t) + ∫_0^t ∫_{R^n} φ(t − τ) T(x,y) Q(y,τ) dy dτ    (35)

and this must satisfy the initial condition Q(x,0) = δ(x). Consequently (35) becomes

  Q(x,t) = δ(x)δ(t) + ∫_0^t ∫_{R^n} φ(t − τ) T(x,y) Q(y,τ) dy dτ.
The probability density function P (x, t|0) for the conditional probability that X(t) lies in (x, x + dx) at time t
can be computed as the product of the probability of arriving in this interval at some time τ < t, multiplied by
the probability that no transition occurs in the remaining time t − τ . Thus
  P(x,t|0) = ∫_0^t Φ̂(t − τ) Q(x,τ) dτ
           = ∫_0^t Φ̂(t − τ) {δ(x)δ(τ) + ∫_0^τ ∫_{R^n} φ(τ − s) T(x,y) Q(y,s) dy ds} dτ
           = Φ̂(t)δ(x) + ∫_0^t ∫_{R^n} (∫_s^t Φ̂(t − τ) φ(τ − s) dτ) T(x,y) Q(y,s) dy ds.    (36)
On the other hand, it follows from (36) that

  ∫_0^t ∫_{R^n} φ(t − τ) T(x,y) P(y,τ|0) dy dτ
    = ∫_0^t ∫_{R^n} ∫_0^τ φ(t − τ) Φ̂(τ − s) T(x,y) Q(y,s) ds dy dτ
    = ∫_0^t ∫_{R^n} (∫_s^t Φ̂(τ − s) φ(t − τ) dτ) T(x,y) Q(y,s) dy ds.

It is easy to show that

  ∫_s^t Φ̂(t − τ) φ(τ − s) dτ = ∫_s^t φ(t − τ) Φ̂(τ − s) dτ
by setting u = t − s, z = τ − s, and observing that the resulting integrals have the same Laplace transforms.
Thus P(x,t|0) satisfies the renewal equation

  P(x,t|0) = Φ̂(t)δ(x) + ∫_0^t ∫_{R^n} φ(t − τ) T(x,y) P(y,τ|0) dy dτ.    (37)

If the initial distribution is given by F(x) then

  n(x,t) ≡ ∫_{R^n} P(x,t|x_0) F(x_0) dx_0

can be regarded as the number density of identical non-interacting jumpers at x at time t. Clearly n(x,t) satisfies

  n(x,t) = Φ̂(t)F(x) + ∫_0^t ∫_{R^n} φ(t − τ) T(x,y) n(y,τ) dy dτ.    (38)
In order that the total number of jumpers be conserved in the jump process it is necessary that

  N(t) = ∫_{R^n} n(x,t) dx = N_0 ≡ ∫_{R^n} F(x) dx,

i.e., that

  Φ̂(t) N_0 + ∫_{R^n} ∫_0^t ∫_{R^n} φ(t − τ) T(x,y) n(y,τ) dy dτ dx = N_0.

We assume that T ∈ L¹(R^n × R^n), and therefore the x and y integrations can be interchanged by Fubini's theorem. It follows that the necessary and sufficient condition for conservation of jumpers is that

  ∫_{R^n} T(x,y) dx = 1.
Hereafter we assume that Φ and T have the proper normalizations and sufficient regularity that the indicated
operations make sense.
Special choices of φ and T lead to some of the standard random jump problems treated in the literature. For instance, if φ(t) = δ(t − t_0) then Φ̂(t) = H(t_0 − t), where H(·) is the Heaviside function, and (37) reduces to

  P(x,t|0) = H(t_0 − t)δ(x) + [1 − H(t_0 − t)] ∫_{R^n} T(x,y) P(y, t − t_0|0) dy.

This is the governing equation for a discrete-time, continuous-space process in which jumps occur at intervals of t_0. If in addition the support of T is concentrated on the points of a lattice Z^n ⊂ R^n, then

  P(x_i, t|0) = H(t_0 − t)δ_{i0} + [1 − H(t_0 − t)] Σ_j T_{ij} P(x_j, t − t_0|0),

where δ_{i0} is the Kronecker delta and x_i is a lattice point. This can be written in the more conventional Chapman–Kolmogorov form as

  P_{i0}(n + 1) = Σ_j T_{ij} P_{j0}(n),    n ≥ 1.

Clearly the underlying process is Markovian for the above choice of φ. If the support of the kernel T(x,y) is a lattice and the waiting time distribution is exponential, as in a Poisson process, then one obtains the continuous-time random walk

  ∂P/∂t (x_i, t|0) = −λP(x_i, t|0) + λ Σ_j T_{ij} P(x_j, t|0).    (39)
3.1 Analysis of moments
As we remarked earlier, one of our purposes is to relate φ and T to the experimental observations. The statistics
most accessible from observations are the various moments of the displacement, in particular their dependence
on t. We shall compute these moments from (37), and for illustrative purposes we assume that the medium is
one-dimensional and spatially homogeneous. Define

  ⟨x^n(t)⟩ = ∫_{−∞}^{+∞} x^n P(x,t|0) dx
           = ∫_{−∞}^{+∞} ∫_0^t ∫_{−∞}^{+∞} x^n T̃(x − y) φ(t − τ) P(y,τ|0) dy dτ dx.    (40)

Let

  m_k = ∫_{−∞}^{+∞} x^k T̃(x) dx
be the k-th moment of T̃ about zero. Then (40) can be written

  ⟨x^n(t)⟩ = ∫_0^t Σ_{k=0}^n (n choose k) m_k φ(t − τ) ⟨x^{n−k}(τ)⟩ dτ.    (41)

It follows that all the moments of x(t) can be obtained by solving a sequence of linear integral equations of convolution type.

Let

  X_k(s) = L{⟨x^k(t)⟩} ≡ ∫_0^∞ e^{−sτ} ⟨x^k(τ)⟩ dτ
be the Laplace transform of the k-th moment, and let φ̄(s) = L{φ(t)}. Then one finds that

  X_1(s) = (m_1/s)·φ̄(s)/(1 − φ̄(s))
  X_2(s) = (2m_1 X_1(s) + m_2/s)·φ̄(s)/(1 − φ̄(s)).    (42)

If the first moment of T̃ vanishes then these simplify to

  X_1(s) = 0
  X_2(s) = (m_2/s)·φ̄(s)/(1 − φ̄(s)).    (43)
The asymptotic behavior of the moments can be obtained by applying limit theorems for Laplace transforms (Widder 1946), but we shall merely illustrate the dependence of X_2 on t for two particular choices of φ. Firstly, suppose that m_1 = 0 and that

  φ(t) = λe^{−λt},    (44)

which is the density function for an exponential waiting time distribution. Then φ̄(s) = λ/(s + λ) and it follows that

  ⟨x²(t)⟩ = m_2 ∫_0^t L⁻¹{λ/s} dτ = m_2 λt.    (45)
Secondly, if we choose

  φ(t) = λ² t e^{−λt},    (46)

which is the density function for a gamma waiting time distribution with parameters (2, λ), then

  φ̄(s) = λ²/(s + λ)².

One finds that

  ⟨x²(t)⟩ = m_2 ∫_0^t L⁻¹{λ²/(s(s + 2λ))} dτ = (m_2 λ/2) {t − (1/2λ)(1 − e^{−2λt})},    (47)

which is sketched in Figure 2.
It is clear from the analysis given earlier that (45) predicts the same mean squared displacement as a diffusion
process with diffusion coefficient D = m2 λ/2. Similarly (46) leads to the same mean squared displacement as
the telegraph process discussed later. Of course neither fact proves that the processes defined by (44) and (46)
are diffusion and telegraph processes, respectively, but an experimentalist who can reliably measure only the first
two moments of the displacement could not distinguish them from these processes. It is noteworthy that this
conclusion holds under the reasonable hypothesis that the first two moments of T are finite, without any condition
on the higher moments.
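Both formulas can be checked by simulating the jump process directly: jumps of ±1 (so m_2 = 1) at renewal times drawn from the exponential or the gamma density. A minimal sketch (λ, t, and sample size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
lam, t_end, nwalk = 2.0, 5.0, 20000

def msd(waiting):
    # Mean squared displacement at t_end for +/-1 jumps (m2 = 1) with the
    # given waiting-time sampler.
    total = 0.0
    for _ in range(nwalk):
        t, x = 0.0, 0
        while True:
            t += waiting()
            if t > t_end:
                break
            x += 1 if rng.random() < 0.5 else -1
        total += x*x
    return total / nwalk

msd_exp = msd(lambda: rng.exponential(1/lam))  # (45): m2*lam*t
msd_gam = msd(lambda: rng.gamma(2.0, 1/lam))   # (47): (lam/2)*(t - (1-e^{-2 lam t})/(2 lam))
```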
[Figure 2: Theoretical curves of mean-squared displacement sketched for the space jump process with (a) exponential and (b) gamma waiting time distributions.]
3.2 Diffusion limits for the exponential waiting time distribution
The results given by (45) and (47) raise the question as to whether, for some choice of T , the corresponding
integral equations are equivalent to the diffusion and telegraph equations, respectively, in an appropriate limit.
Consider first the choice φ(t) = λe^{−λt}, which leads to (45). After differentiating (37) and rearranging, one finds that

  ∂P/∂t = −λP + λ ∫_R T̃(x − y) P(y,t) dy    (48)

where here and hereafter we suppress the conditioning argument in P. If

  T̃(x − y) = (1/2)[δ(x − y − ∆) + δ(x − y + ∆)]    (49)
then

  ∂P/∂t = (λ/2)[P(x + ∆, t) − 2P(x,t) + P(x − ∆, t)].

The right-hand side can be written

  (λ∆²/2)[∂²P/∂x² + O(∆²)],

and therefore, in the diffusion limit (λ → ∞, ∆ → 0, λ∆² = constant ≡ 2D) we obtain

  ∂P/∂t = D ∂²P/∂x²,    (50)

provided that the higher-order derivatives included in the O(∆²) term are bounded.
In fact, a similar result holds in any dimension. Let

  T̃(x − y) = δ(|x − y| − ∆)/(∆^{n−1} ω_n),

where ω_n is the surface measure of the unit sphere in R^n. For this choice of T̃ one finds that

  ∂P/∂t = λ[P̄(x, ∆, t) − P(x,t)],

where P̄ is the average of P over the surface of a sphere of radius ∆ centered at x. By expanding P about x and performing the indicated average one finds that in the diffusion limit

  ∂P/∂t = D∇²P,    (51)

provided that P varies smoothly, i.e., provided that all higher-order derivatives are bounded. Here D ≡ λ∆²/2n is the diffusion coefficient in n dimensions.
A more realistic choice for T̃(x − y) in one space dimension is a sum of Gaussians, one centered at +∆ and one centered at −∆:

  T̃(x − y) = (1/(2√(2πσ²))) {e^{−(x−y−∆)²/2σ²} + e^{−(x−y+∆)²/2σ²}}.

For this kernel it is more convenient to work with the Fourier transform of (48). If P is absolutely integrable in x, its Fourier transform is defined as

  P̂(k,t) = ∫_{−∞}^{+∞} e^{ikx} P(x,t) dx.

Since

  T̂(k) = (1/2) {e^{i∆k − σ²k²/2} + e^{−i∆k − σ²k²/2}},

it follows that P̂(k,t) satisfies the ordinary differential equation

  dP̂/dt = λ {−1 + (cos ∆k) e^{−σ²k²/2}} P̂(k,t).
Upon expanding the bracketed term and collecting like powers of k one finds that

  dP̂/dt = −(λk²/2)[σ² + ∆²]P̂ + (λk⁴/4!)[3σ⁴ + 6∆²σ² + ∆⁴]P̂ + O(k⁶).

To obtain the Fourier transform of a second-order operator on the right-hand side we must let λ → ∞ and (σ, ∆) → (0,0) in such a way that

  λ[σ² + ∆²] → constant    (52)

and

  λ[3σ⁴ + 6∆²σ² + ∆⁴] → 0.

Thus it suffices to require that

  λ → ∞,    ∆ → 0,    λ∆² → constant.    (53)

In this case the diffusion coefficient is D = (1/2) lim λ[σ² + ∆²]. If σ/∆ ∼ o(1) as ∆ → 0 then the term involving σ² in the diffusion coefficient vanishes.
A similar conclusion holds for much more general kernels T̃. Suppose that T̃ has the form

  T̃(x − y) = (1/∆) T_0((x − y)/∆, ∆).

Then

  ∂P/∂t = −λ∆ (∫_R T_0(r,∆) r dr) ∂P/∂x + λ(∆²/2) (∫_R T_0(r,∆) r² dr) ∂²P/∂x² + O(∆³).    (54)

It follows that if the first moment of T_0 is O(∆) for ∆ → 0, if the second moment of T_0 tends to a constant, and if all higher moments are bounded, then in the diffusion limit (λ → ∞, ∆ → 0, λ∆² = constant) we obtain a diffusion equation with drift, ∂P/∂t = D ∂²P/∂x² − β ∂P/∂x. The diffusion coefficient is given by

  D = (λ∆²/2) lim_{∆→0} ∫_R T_0(r,∆) r² dr    (55)

and the drift coefficient is given by

  β = λ∆² lim_{∆→0} (1/∆) ∫_R T_0(r,∆) r dr.    (56)
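The moment conditions can be verified numerically for the two-Gaussian kernel used above, T̃(z) = (1/(2√(2πσ²)))(e^{−(z−∆)²/2σ²} + e^{−(z+∆)²/2σ²}): its zeroth moment is 1, its first moment vanishes, and its second moment is σ² + ∆², so that the Fourier expansion gives D = λ(σ² + ∆²)/2. A small sketch (the values of σ and ∆ are arbitrary):

```python
import numpy as np

# Moments of the two-Gaussian jump kernel by direct quadrature.
sigma, Delta = 0.3, 0.5
x = np.linspace(-8.0, 8.0, 160001)
dx = x[1] - x[0]
T = (np.exp(-(x - Delta)**2/(2*sigma**2)) +
     np.exp(-(x + Delta)**2/(2*sigma**2))) / (2*np.sqrt(2*np.pi*sigma**2))
m0 = float((T*dx).sum())        # normalization: 1
m1 = float((x*T*dx).sum())      # symmetric kernel: 0
m2 = float((x**2*T*dx).sum())   # sigma**2 + Delta**2
```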
If the kernel is symmetric then the drift coefficient vanishes. The reader can check that the foregoing conditions are satisfied for the kernel

  T̃(x − y) = (1/(2√(2πσ²))) {e^{−(x−y−∆)²/2σ²} + e^{−(x−y+∆)²/2σ²}},

provided that σ/∆ ∼ O(1) as ∆ → 0.

4 Reinforced random walks
The rigorous analysis of random walks is more complicated when particle interactions, either direct or indirect, are taken into account (cf. Spohn (1991); Oelschläger (1987)). Thus, for instance, in the case of myxobacteria, a bacterium gliding on a slime trail reacts to its own contribution to these trails and to the contributions of the other bacteria. Similarly, bacteria that release an attractant and react to that released by others interact indirectly via the attractant. There is a growing mathematical literature on what are called reinforced random walks that began with the work of Davis (1990); a recent review can be found in Pemantle (2007). Here we sketch the approach developed in Othmer and Stevens (1997), where the particle motion is governed by a jump process and the walkers modify the transition probabilities of an interval for subsequent transitions.
Davis (1990) considered a reinforced random walk for a single particle in one dimension. Initially there is a weight w_i on each interval (i, i+1), i ∈ Z, equal to w_0. If at time n an interval has been crossed by the particle exactly k times, its weight is

  w_i(n) = w_0 + Σ_{j=1}^k a_j,

where a_j ≥ 0, j = 1, ..., k. Furthermore, the transition probabilities are given by

  Pr{x_{n+1} = i + 1 | x_n = i} = w_i(n)/(w_i(n) + w_{i−1}(n)).
Davis' main theorem asserts that localization of the particle will occur if the weight on the intervals grows quickly enough with each crossing, as summarized in the following. Let

  X ≡ {x_i, i ≥ 0}  and  φ(a) ≡ Σ_{n=1}^∞ (1 + Σ_{i=1}^n a_i)⁻¹.

Theorem Suppose that w_0 = 1. Then
(i) if φ(a) = ∞ then X is recurrent;
(ii) if φ(a) < ∞ then X has finite range, and there are random integers n and I such that x_i ∈ {n, n+1} if i > I.

Here recurrent means that every integer is visited infinitely often a.s., i.e., the walker does not become trapped. From this it follows that if a_j = constant, for instance, which corresponds to linear growth of the weight, then X is recurrent almost surely, whereas if the growth is superlinear then the particle oscillates between two random integers almost surely after some random elapsed time.
Since the result deals with a single particle it does not directly address the aggregation of particles, but it
does at least suggest that if the particles interact only through the modification of the transition probability there
may be aggregation if this modification is strong enough.
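Davis' dichotomy is easy to see in simulation: with constant a_j the walk keeps exploring new sites, while with superlinear reinforcement it quickly becomes trapped in a single interval. A minimal sketch (step count, seed, and reinforcement choices arbitrary):

```python
import random
from collections import defaultdict

def davis_walk(steps, a, seed=0):
    """Reinforced walk on Z: interval (i, i+1) has weight 1 + a(1) + ... + a(k)
    after k crossings; from x the walker steps right with probability
    w(x)/(w(x) + w(x-1)). Returns the number of distinct sites visited."""
    rng = random.Random(seed)
    w = defaultdict(lambda: 1.0)   # interval weights, w0 = 1
    k = defaultdict(int)           # crossing counts
    x, visited = 0, {0}
    for _ in range(steps):
        if rng.random() < w[x] / (w[x] + w[x - 1]):
            k[x] += 1
            w[x] += a(k[x])
            x += 1
        else:
            k[x - 1] += 1
            w[x - 1] += a(k[x - 1])
            x -= 1
        visited.add(x)
    return len(visited)

sites_linear = davis_walk(20000, lambda j: 1.0)  # phi(a) = infinity: recurrent
sites_super = davis_walk(20000, lambda j: j**3)  # phi(a) < infinity: localizes
```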
This theorem motivated the following development, taken from Othmer and Stevens (1997), in which we begin with a master equation for a continuous-time, discrete-space random walk, and we postulate a generalized form of (39) in which the transition rates depend on the density of a control or modulator species. We restrict attention to one-step jumps; it is easy, using the framework given earlier, to apply this to general graphs, but one usually does not obtain diffusion equations in the continuum limit.
Suppose that the conditional probability p_n(t) that a walker is at n ∈ Z at time t, conditioned on the fact that it begins at n = 0 at t = 0, evolves according to the continuous-time master equation

  ∂p_n/∂t = T̂⁺_{n−1}(W) p_{n−1} + T̂⁻_{n+1}(W) p_{n+1} − (T̂⁺_n(W) + T̂⁻_n(W)) p_n.    (57)

Here T̂±_n(·) are the transition probabilities per unit time for a one-step jump to n ± 1, and (T̂⁺_n(W) + T̂⁻_n(W))⁻¹ is the mean waiting time at the nth site. We assume throughout that these are nonnegative and suitably smooth functions of their arguments. The vector W is given by

  W = (..., w_{−n−1/2}, w_{−n}, w_{−n+1/2}, ..., w_0, w_{1/2}, ...).    (58)
Note that the density of the control species w is defined on the embedded lattice of half the step size. The evolution of w will be considered later; for now we assume that the distribution of w is given. Clearly a time- and p-independent spatial distribution of w can model a heterogeneous environment, but this static situation is not treated here.

As (57) is written, the transition probabilities can depend on the entire state and on the entire distribution of the control species. Since there is no explicit dependence on the previous state, the jump process may appear to be Markovian, but if the evolution of w_n depends on p_n, then there is an implicit history dependence, and the space jump process by itself is not Markovian. However, if one enlarges the state space by appending the w one obtains a Markov process in this new state space.
Three distinct types of models are developed and analyzed in Othmer and Stevens (1997), which differ in the
dependence of the transition rates on w; (i) strictly local models, (ii) barrier models, and (iii) gradient models.
In the first of these the transition rates based on local information, so that T̂n± = T̂ (wn ), and to ssimplify the
analysis we assume that the jumps are symmetric, so that T̂ + = T̂ − ≡ T̂ . In this case (57) reduces to
$$\frac{\partial p_n}{\partial t} = \hat T(p_{n-1}, w_{n-1})\,p_{n-1} + \hat T(p_{n+1}, w_{n+1})\,p_{n+1} - 2\,\hat T(p_n, w_n)\,p_n,$$
and in the formal diffusion limit
$$\lim_{\substack{h\to 0 \\ \lambda\to\infty}} \lambda h^2 = \text{constant} \equiv D$$
we obtain the nonlinear diffusion equation
$$\frac{\partial p}{\partial t} = D\,\frac{\partial^2}{\partial x^2}\bigl(T(w)\,p\bigr). \tag{59}$$
wherein the flux is defined as
$$j = -D\,\frac{\partial}{\partial x}\bigl(T(w)\,p\bigr).$$
The second type is called a barrier model, for which there are two sub-cases, depending on whether or not the transition rates are renormalized. In the first case we assume that
$$\hat T_n^{\pm}(W) = \hat T(w_{n\pm 1/2})$$
which leads to
$$\frac{\partial p}{\partial \tau} = D\left\{\frac{\partial^2}{\partial x^2}\bigl(T(p,w)\,p\bigr) - \frac{\partial}{\partial x}\left(p\,T_w(p,w)\,\frac{\partial w}{\partial x}\right)\right\},$$
in one space dimension, and to
$$\frac{\partial p}{\partial t} = D\,\Delta\bigl(T(w)\,p\bigr) - D\,\nabla\cdot\bigl(p\,T_w\,\nabla w\bigr)$$
in general. The flux is now given as
$$j = -D\,\nabla(T\,p) + D\,p\,T_w\,\nabla w = -D\,T\,\nabla p.$$
Clearly the general equation can also be written as
$$\frac{\partial p}{\partial t} = D\,\nabla\cdot(T\,\nabla p).$$
One may also renormalize the transition rates so that
$$\lambda\bigl(\hat T_n^{+}(W) + \hat T_n^{-}(W)\bigr) = \text{constant} \equiv \lambda$$
and then define
$$N_n^{\pm}(W) = \frac{T(w_{n\pm 1/2})}{T(w_{n+1/2}) + T(w_{n-1/2})}, \qquad
N_{n-1}^{+}(w_{n-1/2}, w_{n-3/2}) = \frac{T(w_{n-1/2})}{T(w_{n-3/2}) + T(w_{n-1/2})}, \qquad
N_{n+1}^{-}(w_{n+1/2}, w_{n+3/2}) = \frac{T(w_{n+1/2})}{T(w_{n+3/2}) + T(w_{n+1/2})}. \tag{60}$$
The master equation then reads
$$\frac{1}{\lambda}\frac{\partial p_n}{\partial t} = N^{+}(w_{n-1/2}, w_{n-3/2})\,p_{n-1} + N^{-}(w_{n+1/2}, w_{n+3/2})\,p_{n+1} - \bigl(N^{+}(w_{n+1/2}, w_{n-1/2}) + N^{-}(w_{n-1/2}, w_{n+1/2})\bigr)\,p_n$$
and in the diffusion limit this leads to
$$\frac{\partial p}{\partial t} = D\,\frac{\partial}{\partial x}\left(p\,\frac{\partial}{\partial x}\ln\frac{p}{T}\right) \tag{61}$$
For later use we define the chemotactic velocity and sensitivity as
$$\chi = D\,(\ln T)_w, \qquad u = -D\,\frac{\partial}{\partial x}\ln p + D\,\bigl(\ln T(w)\bigr)'\,\frac{\partial w}{\partial x}.$$
Thus the taxis is positive if $T'(w) > 0$. If we set $T(w) = \alpha + \beta w$, (61) reduces to
$$\frac{\partial p}{\partial t} = D\,\frac{\partial}{\partial x}\left(\frac{\partial p}{\partial x} - p\,\frac{\beta}{\alpha + \beta w}\,\frac{\partial w}{\partial x}\right) \tag{62}$$
and we use this form later in examples.
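Equation (61) implies that for a static, time-independent profile of w the stationary density satisfies $\partial_x \ln(p/T) = 0$, i.e. $p \propto T(w)$. Below is a minimal numerical check that relaxes a conservative discretization of the flux $-D\,T\,\partial_x(p/T)$ to steady state; the grid, the linear profile w(x) = x and the values of α and β are illustrative assumptions.

```python
import numpy as np

# Relax (62) with a static modulator w(x) to steady state and check p ∝ T(w).
N, D, alpha, beta = 50, 1.0, 1.0, 2.0
h = 1.0 / N
x = (np.arange(N) + 0.5) * h            # cell centers on [0, 1]
w = x                                   # static modulator profile (assumed)
T = alpha + beta * w                    # T(w) = alpha + beta*w

p = np.ones(N) / N                      # uniform initial density, total mass 1
dt = 5e-5                               # explicit step, stable since D*T*dt/h^2 < 1/2
for _ in range(40000):
    q = p / T
    # flux -D*T*d(p/T)/dx at interior faces; no-flux at the boundaries
    F = D * 0.5 * (T[1:] + T[:-1]) * (q[1:] - q[:-1]) / h
    dp = np.zeros(N)
    dp[:-1] += F / h
    dp[1:] -= F / h
    p += dt * dp

ratio = p / T                           # constant at steady state
print(ratio.max() - ratio.min())
```

The conservative flux form keeps the total mass exact, and the final ratio p/T is constant across the grid, confirming the stationary profile $p \propto T(w)$.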
The last type of model is the gradient-based, or look-ahead model, for which
$$\hat T_{n-1}^{+} = \alpha + \beta\bigl(\tau(w_n) - \tau(w_{n-1})\bigr) \qquad\text{and}\qquad \hat T_{n+1}^{-} = \alpha + \beta\bigl(\tau(w_n) - \tau(w_{n+1})\bigr), \qquad \alpha \ge 0,$$
which leads to
$$\frac{\partial p}{\partial t} = D\alpha\,\nabla\cdot\left[p\left(\frac{\nabla p}{p} - 2\,\frac{\beta}{\alpha}\,\tau_w\,\nabla w\right)\right]$$
if the rates are not renormalized, and, if they are, leads to
$$\frac{\partial p}{\partial t} = D\,\nabla\cdot\left\{\frac{1}{2}\,\nabla p - p\,\frac{\beta\,\tau_w}{\alpha}\,\nabla w\right\}$$
The results for the different types of models are summarized in Table 1.
Of course we also have to specify the local dynamics for the evolution of w, and here we use the general form
$$\frac{\partial w}{\partial t} = \frac{p\,w}{1 + \lambda w} + \gamma_r\,\frac{p}{K + p} - \mu w \equiv R(p, w)$$
in the examples shown in Figure 3. For all cases we set D = 0.36, and in the first panel we show the solution of (62) and (4) for $\alpha = \gamma_r = \mu = 0$ and $\beta = 1$, $\lambda = 1\times 10^{-5}$. The second panel is as in the first, but with $\lambda = 0$,
Table 1: Dependence of the response on the sensing mechanism

   Type of Sensing                            Taxis Velocity   Chemotactic Sensitivity   Type of Taxis
1. Local                                      −D∇T             −DT′(w)                   Negative if T′(w) > 0
2. Barrier without re-normalization           0                0                         None
3. Barrier with re-normalization              D∇ln T           D(ln T(w))′               Positive if T′(w) > 0
4. Nearest neighbor with re-normalization     2D∇ln T          2D(ln T(w))′              Positive if T′(w) > 0
5. Gradient without re-normalization          2Dβ∇τ            2Dβτ′(w)                  Positive if βτ′(w) > 0
6. Gradient with re-normalization             D(β/α)∇τ         D(β/α)τ′(w)               Positive if βτ′(w) > 0
and in the third panel a more complicated transition rate is used (cf. Othmer and Stevens (1997)). One sees in that figure that both the dependence of the transition rates on the local modulator w, and the dynamics of w itself, play an important role in the dynamics of the system. In the first panel the solution stabilizes at some smooth distribution, in the second panel the solution blows up in finite time (around t ≈ 9.3 – this assertion is supported by analysis of the Fourier components), and in the third panel the solution ultimately collapses, in a very interesting step-wise fashion that is not understood at present.
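The first panel of Figure 3 can be reproduced qualitatively in a few lines. The sketch below integrates (62) with $\alpha = \gamma_r = \mu = 0$, $\beta = 1$, $\lambda = 1\times 10^{-5}$ (so $T(w) = w$ and $\partial w/\partial t = pw/(1+\lambda w)$) and D = 0.36 as in the text; the grid, time step and initial perturbation of p are illustrative assumptions, so this is a qualitative sketch rather than a reproduction of the original computation.

```python
import numpy as np

N, D, lam = 50, 0.36, 1.0e-5
h = 1.0 / N
x = (np.arange(N) + 0.5) * h
p = 1.0 + 0.5 * np.cos(2 * np.pi * (x - 0.5))    # density, peaked at x = 1/2
mass0 = p.sum() * h
w = np.ones(N)                                    # modulator, initially uniform

dt = 1e-4
for _ in range(3000):                             # integrate to t = 0.3
    q = p / w                                     # with T(w) = w the flux is D*w*d(p/w)/dx
    F = D * 0.5 * (w[1:] + w[:-1]) * (q[1:] - q[:-1]) / h
    dp = np.zeros(N)
    dp[:-1] += F / h
    dp[1:] -= F / h
    p += dt * dp
    w += dt * p * w / (1.0 + lam * w)             # local growth of the modulator

print(p.sum() * h, p.min(), int(np.argmax(w)))
```

The modulator grows fastest where the density is largest, and the positive taxis then reinforces the aggregate at the center, consistent with the stabilizing behavior described for the first panel.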
Figure 3: Three examples of the dynamics, shown as density surfaces over x ∈ [0, 1] and time in panels (a)–(c). See Othmer and Stevens (1997) for details.
5 Velocity Jump Processes
The prototypal organisms whose motion can be described as a velocity jump process are the flagellated bacteria,
the best studied of which is E. coli. To search for food or escape an unfavorable environment, E. coli alternates
two basic behavioral modes, a more or less linear motion called a run, and a highly erratic motion called tumbling,
the purpose of which is to reorient the cell (cf. Figure 4). Run times are typically much longer than the
Figure 4: The movement of a particle executing a velocity-jump: the unbiased process (left) and the biased process (right).
time spent tumbling, and when bacteria move in a favorable direction (i.e., either in the direction of foodstuffs
or away from harmful substances) the run times are increased further.
During a run the bacteria move at approximately constant speed in the most recently chosen direction. New
directions are generated during tumbles, and when bacteria move in an unfavorable direction the run length
decreases and the relative frequency of tumbling increases. The distribution of new directions is not uniform on
the unit sphere, but has a bias in the direction of the preceding run. The effect of alternating these two modes of
behavior, and in particular, of increasing the run length when moving in a favorable direction, is that a bacterium
executes a three-dimensional random walk with drift in a favorable direction when observed on a sufficiently long
time scale (Koshland, 1980; Berg, 1983). We begin with a simple example that illustrates the main points. We
assume as before that there is no interaction between walkers, and therefore can consider either the probability
of a single walker being at a given position with a given velocity at time t, or the density of walkers. Here we
choose the latter.
5.1 The telegraph process in one space dimension
Suppose that the walkers are confined to the interval [0, 1] with homogeneous Neumann data at the ends, that
the speeds s± to the right and left are constant, and that direction is reversed at random instants governed by
Poisson processes of intensity λ± . Let p± denote the density of walkers moving to the right and left, respectively.
Then the conservation equations for these densities are
$$\frac{\partial p^+}{\partial t} + \frac{\partial (s^+ p^+)}{\partial x} = -\lambda^+ p^+ + \lambda^- p^-$$
$$\frac{\partial p^-}{\partial t} - \frac{\partial (s^- p^-)}{\partial x} = \lambda^+ p^+ - \lambda^- p^-. \tag{63}$$
Define $p \equiv p^+ + p^-$ and $j \equiv s^+ p^+ - s^- p^-$; then, with $\lambda^+ = \lambda^- \equiv \lambda$, these can be written in the alternative form
$$\frac{\partial p}{\partial t} + \frac{\partial j}{\partial x} = 0$$
$$\frac{\partial j}{\partial t} + \lambda j = -s^+\frac{\partial}{\partial x}(s^+ p^+) - s^-\frac{\partial}{\partial x}(s^- p^-) + \lambda\bigl(s^+ p^- - s^- p^+\bigr) \tag{64}$$
To illustrate the essence of chemotaxis in this simple context, we ask how the walkers should modify their
behavior so as to produce a nonuniform distribution on the interval. We consider three possible cases.
Case I: Constant and equal speeds and turning rates – $\lambda^+ = \lambda^- \equiv \lambda$ and $s^+ = s^- \equiv s$. Combining the two equations in (64) leads to the classical telegrapher's equation
$$\frac{\partial^2 p}{\partial t^2} + 2\lambda\,\frac{\partial p}{\partial t} = s^2\,\frac{\partial^2 p}{\partial x^2}. \tag{65}$$
The diffusion equation results by formally taking the limit $\lambda \to \infty$, $s \to \infty$ with $s^2/\lambda \equiv 2D$ constant in (65), but this can be made more precise because the equation can be solved explicitly. The solution when the spatial domain is the entire line is
$$p(x,t) = \begin{cases} \dfrac{e^{-\lambda t}}{2}\left(\delta(x - st) + \delta(x + st) + \dfrac{\lambda}{s}\left[I_0(\Lambda) + \dfrac{\lambda t}{\Lambda}\,I_1(\Lambda)\right]\right) & |x| < st \\[1ex] 0 & |x| > st \end{cases}$$
where $\Lambda \equiv (\lambda/s)\sqrt{s^2t^2 - x^2}$.
Here I0 and I1 are modified Bessel functions of the first kind. If we make use of the asymptotic expansions
$$I_0(z) = \frac{e^z}{\sqrt{2\pi z}}\left(1 + O\!\left(\frac{1}{z}\right)\right), \qquad I_1(z) = \frac{e^z}{\sqrt{2\pi z}}\left(1 + O\!\left(\frac{1}{z}\right)\right) \qquad \text{as } z\to\infty,$$
we see that
$$p(x,t) = \frac{1}{\sqrt{4\pi D t}}\,e^{-x^2/4Dt} + e^{-\lambda t}\,O(\xi^2), \qquad \xi^2 \equiv (x/st)^2,$$
and thus the telegraph process reduces to a diffusion process on short space scales and long time scales. This
fact was known to Einstein and this process has since been studied by many (Taylor 1920; Fürth 1920; Goldstein
1951; Kac 1956; Othmer et al. 1988).
If we define $\tau = \epsilon^2 t$ and $\xi = \epsilon x$, where $\epsilon$ is a small parameter, then (65) reduces to
$$\epsilon^2\,\frac{\partial^2 n}{\partial \tau^2} + 2\lambda\,\frac{\partial n}{\partial \tau} = s^2\,\frac{\partial^2 n}{\partial \xi^2}. \tag{66}$$
The diffusion regime defined by the exact solution now becomes
$$\frac{x}{st} = \epsilon\,\frac{\xi}{s\tau}$$
and this requires only that $\xi/(s\tau) \le O(1)$. In the limit $\epsilon \to 0$ the exact solution can be used to show that (66)
again reduces to the diffusion equation, both formally and rigorously (for t bounded away from zero). However this shows that the approximation of the telegraph process by a diffusion process hinges on the appropriate relation between the space and time scales, not necessarily on the limit of speed and turning rate tending to infinity.
In any case, it is clear that the spatial distribution of p is asymptotically constant, and thus there is no
localization of walkers in this case. Imposing no-flux boundary conditions on a finite interval does not change
this conclusion.
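The reduction of the telegraph process to a diffusion process is also easy to see in simulation. The sketch below generates sample paths (constant speed s, reversal at the events of a Poisson process of intensity λ) and compares the empirical mean squared displacement with the exact value $2D[t - (1 - e^{-2\lambda t})/(2\lambda)]$, where $D = s^2/(2\lambda)$; the parameter values are illustrative.

```python
import math
import random

def telegraph_x(t_end, s, lam, rng):
    """One sample path of the telegraph process: speed s, direction reversed
    at the events of a Poisson process of intensity lam."""
    x, v, t = 0.0, rng.choice((-s, s)), 0.0
    while True:
        dt = rng.expovariate(lam)
        if t + dt >= t_end:
            return x + v * (t_end - t)
        x += v * dt
        t += dt
        v = -v

s, lam, t_end = 1.0, 1.0, 5.0
rng = random.Random(7)
msd = sum(telegraph_x(t_end, s, lam, rng) ** 2 for _ in range(5000)) / 5000

# Reversal at every jump gives lambda_0 = 2*lam and D = s^2/(2*lam).
lam0 = 2 * lam
D = s * s / lam0
predicted = 2 * D * (t_end - (1 - math.exp(-lam0 * t_end)) / lam0)
print(msd, predicted)
```

Already at t = 5 the empirical mean squared displacement is close to the diffusive value, illustrating how quickly the ballistic correction term decays.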
Case II: λ constant, speed depending on direction
Here one finds that the time-independent solutions are given by
$$p^+(x) = \frac{s^+(0)\,p^+(0)}{s^+(x)}\,\exp\left[\lambda\int_0^x \frac{s^+ - s^-}{s^+ s^-}\,d\xi\right] \equiv p^+(0)\,F^+(x),$$
$$p^-(x) = \frac{s^+(0)\,p^+(0)}{s^-(x)}\,\exp\left[\lambda\int_0^x \frac{s^+ - s^-}{s^+ s^-}\,d\xi\right] \equiv p^+(0)\,F^-(x),$$
where the constant $p^+(0)$ is determined by the conservation of walkers. Clearly the flux vanishes pointwise, as it must at steady state. It is also clear that these distributions differ if $s^+(x) \neq s^-(x)$.
Case III: $\lambda^+ \neq \lambda^-$, constant and equal speeds
$$\frac{\partial p^+}{\partial t} + s\,\frac{\partial p^+}{\partial x} = -\lambda^+ p^+ + \lambda^- p^-$$
$$\frac{\partial p^-}{\partial t} - s\,\frac{\partial p^-}{\partial x} = \lambda^+ p^+ - \lambda^- p^-$$
We write
$$\lambda^{\pm} = \frac{\lambda^+ + \lambda^-}{2} \pm \frac{\lambda^+ - \lambda^-}{2} \equiv \lambda_0 \pm \lambda_1$$
and then the density-flux form of the system is
$$\frac{\partial p}{\partial t} + \frac{\partial j}{\partial x} = 0$$
$$\frac{\partial j}{\partial t} + 2\lambda_0\,j = -s^2\,\frac{\partial p}{\partial x} - 2s\lambda_1\,p.$$
One finds that the steady-state solution is
$$p(x) = \frac{N_0\,e^{-\frac{2}{s}\int_0^x \lambda_1(\xi)\,d\xi}}{\displaystyle\int_0^1 e^{-\frac{2}{s}\int_0^x \lambda_1(\xi)\,d\xi}\,dx}.$$
and again there may be a non-constant solution; now the difference in turning leads to this, and one can see that the chemotactic velocity should be defined as
$$u_c = -\frac{s\,\lambda_1}{\lambda_0}.$$
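A numerical check of the Case III steady state: an upwind discretization of the two-speed system with reflecting walls on [0, 1] relaxes to $p(x) \propto \exp(-(2/s)\int_0^x \lambda_1(\xi)\,d\xi)$. The grid and the rate profile $\lambda^+ = 1 + x$, $\lambda^- = 1$ (so $\lambda_1 = x/2$) are illustrative assumptions.

```python
import numpy as np

N, s = 200, 1.0
h = 1.0 / N
x = (np.arange(N) + 0.5) * h
lam_p = 1.0 + x               # lambda^+ (turning rate while moving right), assumed profile
lam_m = np.ones(N)            # lambda^-
pp = np.full(N, 0.5)          # right-moving density p^+
pm = np.full(N, 0.5)          # left-moving density p^-

dt = 0.8 * h / s              # CFL-stable upwind step
for _ in range(5000):
    pp_up = np.concatenate(([pm[0]], pp[:-1]))   # ghost cells implement reflection
    pm_up = np.concatenate((pm[1:], [pp[-1]]))
    turn = -lam_p * pp + lam_m * pm              # -lambda^+ p^+ + lambda^- p^-
    pp = pp - s * dt / h * (pp - pp_up) + dt * turn
    pm = pm - s * dt / h * (pm - pm_up) - dt * turn

p = pp + pm
# analytic steady state: lambda_1 = (lam_p - lam_m)/2 = x/2, so int_0^x lambda_1 = x^2/4
p_exact = np.exp(-(2.0 / s) * x ** 2 / 4.0)
p /= p.sum() * h
p_exact /= p_exact.sum() * h
print(np.max(np.abs(p - p_exact)))
```

Because the walkers turn more often while moving right, the density piles up near x = 0, in agreement with the negative chemotactic velocity $u_c = -s\lambda_1/\lambda_0$.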
5.2 The general velocity-jump process
Our approach to velocity jump processes will be a direct generalization of the earlier derivation of the telegrapher’s
equation. Thus we shall work directly with the differential equation form of the conservation equation for a phase
space density function that depends only on the position, velocity and time. The development is similar to
that which leads to the Boltzmann equation and its related moment equations in the kinetic theory of gases
(cf. Resibois and DeLeener (1977)). Here we deal with the case of no internal variables; the case with internal variables will be dealt with in subsequent lectures.
Let p(x, v, t) be the density function for individuals in a 2n-dimensional phase space with coordinates (x, v), where $x \in \mathbb{R}^n$ is the position of an individual, and $v \in \mathbb{R}^n$ is its velocity. Then p(x, v, t) dx dv is the number density of individuals with position between x and x + dx and velocity between v and v + dv, and
$$n(x, t) = \int p(x, v, t)\,dv$$
is the number density of individuals at x, whatever their velocity. The evolution of p is governed by the partial differential equation
$$\frac{\partial p}{\partial t} + \nabla_x\cdot vp + \nabla_v\cdot Fp = R, \tag{67}$$
where F denotes the external force acting on the individuals and R is the rate of change of p due to reaction,
random choice of velocity, etc. For the present we assume that F ≡ 0 and that only two processes contribute to
the changes on the right-hand side of (67), namely, a birth/death process and a process that generates random
velocity changes. We assume that the former is independent of the velocity and that it can be written
$$\left(\frac{\partial p}{\partial t}\right)_{bd} = k\,r(n)\,p \tag{68}$$
where k is a constant. We suppose that the random velocity changes are the result of a Poisson process of intensity λ, where λ may depend upon other variables. Thus $\lambda^{-1}$ is the mean run time between the random choices of direction. The net rate at which individuals enter the phase-space volume at (x, v) is given by
$$\left(\frac{\partial p}{\partial t}\right)_{sp} = -\lambda p + \lambda\int T(v, v')\,p(x, v', t)\,dv' \tag{69}$$
where 'sp' denotes the change due to the stochastic process. Clearly this equation is the velocity-space analog of the master equation for a space-jump process. The kernel T(v, v′) gives the probability of a change in velocity from v′ to v, given that a reorientation occurs, and therefore T(v, v′) is non-negative and normalized so that
$$\int T(v, v')\,dv = 1.$$
This normalization condition merely expresses the fact that no individuals are lost during the process of changing velocity. At present we assume that T(v, v′) is independent of the time between jumps.
In light of the foregoing assumptions, (67) becomes
$$\frac{\partial p}{\partial t} + \nabla_x\cdot vp = -\lambda p + \lambda\int T(v, v')\,p(x, v', t)\,dv' + k\,r(n)\,p. \tag{70}$$
For most purposes one does not need the distribution p, but only its first few velocity moments. The first two are the number density n(x, t) introduced previously, and the average velocity u(x, t), which is defined by
$$n(x, t)\,u(x, t) \equiv \int p(x, v, t)\,v\,dv. \tag{71}$$
If we integrate (70) over v we find that
$$\frac{\partial n}{\partial t} + \nabla_x\cdot nu = R(n) \tag{72}$$
where $R(n) \equiv k\,n\,r(n)$. Similarly, multiplying by v and integrating over v gives
$$\frac{\partial (nu)}{\partial t} + \nabla\cdot\int p\,vv\,dv = \lambda\int\!\!\int T(v, v')\,v\,p(x, v', t)\,dv'\,dv - \lambda\,nu + k\,nu\,r(n). \tag{73}$$
External signals enter either through a direct effect on the turning rate λ and the turning kernel T , or indirectly
via internal variables that reflect the external signal and in turn influence λ and/or T . The first case arises when
experimental results are used to directly estimate parameters in the equation (Ford and Lauffenburger 1992), but
the latter approach is more fundamental. The reduction of (67) to the macroscopic chemotaxis equations for the
first case is done in (Hillen and Othmer 2000; Othmer and Hillen 2002), and this will be discussed in a subsequent
lecture.
5.3 The unbiased walk
When the underlying space is one-dimensional we define
$$T(v, v') = \delta(v + v')$$
and thus demand that individuals change direction each time a choice is made. This is consistent with the scheme
that led to the telegrapher’s equation earlier, but not for instance, with the random choice of direction made at
each tumble in bacterial motion. (How would one define T in the latter case?)
When the speed is constant, $v = \pm s$ and $nu = s(p^+ - p^-)$, where $p^{\pm} \equiv p(x, \pm s, t)$. Furthermore
$$\nabla\cdot\int p\,vv\,dv = s^2\,\frac{\partial}{\partial x}(p^+ + p^-) = s^2\,\frac{\partial n}{\partial x}.$$
For the foregoing choice of T the integral term in (73) reduces to $-\lambda s(p^+ - p^-)$, and thus in the absence of reaction (72) and (73) reduce to
$$\frac{\partial}{\partial t}(p^+ + p^-) + s\,\frac{\partial}{\partial x}(p^+ - p^-) = 0$$
$$s\,\frac{\partial}{\partial t}(p^+ - p^-) + s^2\,\frac{\partial}{\partial x}(p^+ + p^-) = -2\lambda s\,(p^+ - p^-).$$
These are just the equations given at (64), written in a slightly different form.
In higher space dimensions equations (72) and (73) do not specify n and u as they stand, for they involve the second v-moment of p and the as yet unspecified kernel T(v, v′). Some further simplifying assumptions are necessary, and to describe some that are biologically meaningful we shall first introduce the notion of persistence. Let $v = s\xi$, where $s = \|v\|$ is the speed (the Euclidean norm of v) and $\xi = v/\|v\|$ is the direction of v.
For a fixed v′, the average velocity $\bar v$ after reorientation is defined by
$$\bar v = \int T(v, v')\,v\,dv = \int\!\!\int T(v, v')\,\xi\,s^n\,ds\,d\omega_n$$
where $d\omega_n$ is the surface measure on the unit sphere $S^{n-1}$ centered at the origin in $\mathbb{R}^n$. While the average speed
$$\bar s = \int T(v, v')\,\|v\|\,dv = \int\!\!\int T(v, v')\,s^n\,ds\,d\omega_n$$
is always positive (since T ≥ 0 and T is not concentrated at v = 0), the average velocity vector may vanish, and $\|\bar v\| \le \bar s$; see Figure 2. The angle between $\bar v/\bar s$ and $\xi' = v'/s'$ provides a measure of the tendency of the motion to persist in any given direction ξ′. Therefore we define the index of directional persistence as
$$\psi_d \equiv \frac{\bar v\cdot v'}{\bar s\,s'} \tag{74}$$
where $\psi_d \in [-1, +1]$. Of particular interest is the case in which the speed does not change with reorientation and the turning probability depends only on the cone angle θ between v′ and v, which is given by
$$\theta(v, v') \equiv \arccos\frac{v\cdot v'}{s\,s'},$$
where θ ∈ [0, π]. Then T(v, v′) has the form
$$T(v, v') = \frac{\delta(s - s')}{s^{n-1}}\,h\bigl(\theta(v, v')\bigr) \tag{75}$$
for any n ≥ 2. The distribution h is normalized so that
$$2\int_0^{\pi} h(\theta)\,d\theta = 1$$
for n = 2 and
$$2\pi\int_0^{\pi} h(\theta)\,\sin\theta\,d\theta = 1$$
for n = 3.
Given a velocity v′, the average velocity after reorientation can be resolved into a component along v′ and a component $\bar v_{\perp}'$ orthogonal to v′. Since the probability of choosing a given direction depends only on θ for the foregoing T, it follows that $\bar v_{\perp}' = 0$. Furthermore, in this case $\psi_d$ in (74) is independent of v′ and
$$\bar v = \psi_d\,v', \tag{76}$$
where the persistence index or mean cosine is given by
$$\psi_d = \begin{cases} \displaystyle 2\int_0^{\pi} h(\theta)\cos\theta\,d\theta & \text{for } n = 2 \\[1ex] \displaystyle 2\pi\int_0^{\pi} h(\theta)\cos\theta\,\sin\theta\,d\theta & \text{for } n = 3 \end{cases} \tag{77}$$
(cf. Patlak (1953)). Some specific examples of interest will help to illustrate this. For the simple case of uniform random selection of direction on the unit circle, h(θ) = 1/(2π) and ψd = 0. For the circular normal distribution (Johnson and Kotz 1970) with pole θ₀ = 0, we have h(θ) = [2πI₀(k)]⁻¹ exp(k cos θ), where I₀ is the Bessel function of order zero of imaginary argument. For this distribution one finds that ψd = I₁(k)/I₀(k) (Abramowitz and Stegun 1965, equation 9.6.19). For k = 0 we have uniform random selection of direction, while as k → ∞ the new direction of motion tends to be the same as the previous direction, and ψd → 1. From observations of the two-dimensional locomotion of Dictyostelium amoebae, the data from Hall (1977) yield ψd ≈ 0.7, whereas the three-dimensional bacterial random walk data in Berg and Brown (1972) show ψd ≈ 0.33 (cf. Berg (1983)).
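The identity ψd = I₁(k)/I₀(k) is easy to verify numerically: the sketch below evaluates (77) for n = 2 with the circular normal distribution by the midpoint rule, and compares with the Bessel ratio computed from the power series $I_n(k) = \sum_m (k/2)^{2m+n}/(m!\,(m+n)!)$.

```python
import math

def bessel_i(n, k, terms=60):
    """Modified Bessel function I_n(k) from its power series."""
    return sum((k / 2.0) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def psi_d(k, M=20000):
    """Persistence index (77) for n = 2 with the circular normal distribution
    h(theta) = exp(k cos theta) / (2 pi I_0(k)): psi_d = 2*int_0^pi h cos(theta) dtheta."""
    h_sum = 0.0
    for j in range(M):
        theta = (j + 0.5) * math.pi / M          # midpoint rule on [0, pi]
        h_sum += math.exp(k * math.cos(theta)) * math.cos(theta)
    integral = h_sum * (math.pi / M) / (2 * math.pi * bessel_i(0, k))
    return 2 * integral

for k in (0.0, 1.0, 4.0):
    print(k, psi_d(k), bessel_i(1, k) / bessel_i(0, k))
```

As expected, ψd vanishes for k = 0 (uniform reorientation) and increases toward 1 as k grows, in agreement with the Bessel-function ratio.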
It is also possible to derive simple equations for the mean squared displacement of individuals which begin at the origin at t = 0. Let
$$D^2(t) \equiv \langle\|x(t)\|^2\rangle \equiv \int\!\!\int \|x\|^2\,p(x, v, t)\,dx\,dv \Big/ \int\!\!\int p(x, v, t)\,dx\,dv \tag{78}$$
be the mean squared displacement, and let
$$S^m(t) \equiv \langle s^m\rangle \equiv \int\!\!\int s^m\,p(x, v, t)\,dx\,dv \Big/ \int\!\!\int p(x, v, t)\,dx\,dv$$
be the m-th moment of the speed distribution. If N₀ individuals are released at x = 0 at t = 0 then $n(x, 0) = N_0\delta(x)$ and $(nu)(x, 0) = 0$. We shall assume that there is no birth/death term in (70), (72) and (73) until stated otherwise, and as a result
$$\int\!\!\int p(x, v, t)\,dx\,dv \equiv N_0.$$
To obtain a differential equation for D²(t), multiply (70) by $\|x\|^2$ and integrate over x and v. Under the assumption that terms of the form $x_i^2\,v_j\,p$ vanish at infinity we obtain the equation
$$\frac{d}{dt}D^2(t) = \frac{2}{N_0}\int\!\!\int (x\cdot v)\,p(x, v, t)\,dx\,dv \equiv 2B(t). \tag{79}$$
By multiplying (70) by x · v and integrating we obtain the following equation for B(t):
$$\frac{d}{dt}B(t) = \frac{-1}{N_0}\int\!\!\int (x\cdot v)\,\nabla_x\cdot(vp)\,dx\,dv - \lambda B(t) + \frac{\lambda}{N_0}\int\!\!\int (x\cdot \bar v)\,p(x, v', t)\,dv'\,dx \tag{80}$$
In cases where the relation (76) holds, the last term is simply $\lambda\psi_d B(t)$.
Suppose that this is the case, and that terms of the form
$$\int\!\!\int (x_i\,v_j\,v_k)\,p(x, v, t)\,dx\,dv$$
vanish at infinity; then (80) reduces to
$$\frac{dB}{dt} + \lambda(1 - \psi_d)\,B = S^2 \tag{81}$$
where $S^2$, the second moment of the speed distribution, is a constant. Therefore integration of (81), subject to
$B(0) = 0$, and of (79) subject to $D^2(0) = 0$, yields
$$B(t) = \begin{cases} \dfrac{S^2}{\lambda(1 - \psi_d)}\left[1 - e^{-\lambda(1-\psi_d)t}\right] & \text{for } \psi_d \neq 1 \\[1ex] S^2\,t & \text{for } \psi_d = 1 \end{cases} \tag{82}$$
and
$$D^2(t) = \begin{cases} \dfrac{2S^2}{\lambda(1 - \psi_d)}\left[t - \dfrac{1}{\lambda(1 - \psi_d)}\left(1 - e^{-\lambda(1-\psi_d)t}\right)\right] & \text{for } \psi_d \neq 1 \\[1ex] S^2\,t^2 & \text{for } \psi_d = 1. \end{cases} \tag{83}$$
The quantity $\lambda_0 = \lambda(1 - \psi_d)$ is a modified turning frequency associated with the reorientation kernel $T(v, v')$, and the inverse
$$P = \frac{1}{\lambda_0}$$
is a characteristic run time that incorporates the effect of persistence. This is called a "persistence time" by Dunn (1983). The "motility" or diffusion coefficient is defined as
$$D = \frac{S^2\,P}{n}$$
in a space of dimension n. In terms of D the first equation in (83) reads
$$D^2(t) = 2nD\left[t - \frac{1}{\lambda_0}\left(1 - e^{-\lambda_0 t}\right)\right] \tag{84}$$
To reduce this to the result obtained earlier, note that when an individual reverses direction at every step $\psi_d = -1$, and therefore $\lambda_0 = 2\lambda$. Consequently (84) is equivalent to (47) in the one-dimensional case.
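The closed forms (82)–(83) are easy to confirm by integrating (79) and (81) directly; the parameter values below are illustrative.

```python
import math

lam, psi_d, S2 = 2.0, 0.3, 1.0        # turning rate, persistence index, <s^2>
lam0 = lam * (1 - psi_d)
dt, T = 1e-4, 3.0
B, D2 = 0.0, 0.0
for _ in range(int(T / dt)):
    B_new = B + dt * (S2 - lam0 * B)  # (81): dB/dt + lam0*B = S^2
    D2 += dt * 2 * B                  # (79): d(D^2)/dt = 2B
    B = B_new

B_exact = S2 / lam0 * (1 - math.exp(-lam0 * T))
D2_exact = 2 * S2 / lam0 * (T - (1 - math.exp(-lam0 * T)) / lam0)
print(B, B_exact, D2, D2_exact)
```

The crossover visible in the exact formula — ballistic growth $D^2 \sim S^2 t^2$ for $t \ll P$ and diffusive growth $D^2 \sim 2nDt$ for $t \gg P$ — is reproduced by the numerical integration.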
Figure 5: A sketch of the theoretical values of the mean squared displacement (a) versus time t, according to equation (50), and (b) versus the number of consecutive moves m. See also Hall (1977), Figure 7.
5.4 A biased walk in the presence of a chemotactic gradient
Some statistics of the density distribution in the first case, wherein the external field modifies the turning kernel or
turning rate directly, can easily be derived and used to interpret experimental data (Erban and Othmer 2007). To
outline the procedure, we consider two-dimensional motion of amoeboid cells in a constant chemotactic gradient
directed along the positive x1 axis of the plane, i.e.
$$\nabla S = \|\nabla S\|\,e_1, \qquad \text{where } e_1 = [1, 0]. \tag{85}$$
Moreover, we assume that the gradient only influences the turn angle distribution T; details of the procedure are given in (Othmer et al. 1988). We assume for simplicity that the individuals move with a constant speed s, i.e., the velocity of an individual can be expressed as $v(\phi) \equiv s[\cos\phi, \sin\phi]$ where $\phi \in [0, 2\pi)$. We assume that $T(v, v') \equiv T(\phi, \phi')$ is the sum of a symmetric probability distribution $h(\phi, \phi') \equiv h(\phi - \phi') = h(|\phi - \phi'|)$ and a bias term $k(\phi)$ that results from the gradient of the chemotactic substance. Since the gradient is directed along the positive x₁ axis, we assume that the bias is symmetric about φ = 0 and takes its maximum there. Thus we write $T(\phi, \phi') = h(\phi - \phi') + k(\phi)$, where h and k are normalized as follows:
$$\int_0^{2\pi} h(\phi)\,d\phi = 1 \qquad\qquad \int_0^{2\pi} k(\phi)\,d\phi = 0 \tag{86}$$
Let p(x, φ, t) be the density of cells at position x ∈ R2 , moving with velocity v(φ) ≡ s[cos(φ), sin(φ)], φ ∈ [0, 2π),
at time t ≥ 0. The statistics of interest are the mean location of cells X(t), their mean squared displacement
D2 (t), and their mean velocity V(t), which were defined earlier. Two further quantities that arise naturally are
the taxis coefficient χ, which is analogous to the chemotactic sensitivity defined earlier because it measures the
response to a directional signal, and the persistence index ψd . These are defined as
∫ π
∫ 2π
k(φ) cos φ dφ
and
ψd = 2
h(φ) cos φ dφ.
(87)
χ≡
0
0
The persistence index measures the tendency of a cell to continue in the current direction. Since we have assumed
that the speed is constant, we must also assume that χ and ψd satisfy the relation χ < 1 − ψd , for otherwise the
former assumption is violated (cf. (90)).
One can now show, by taking moments of (67), using (86) and symmetries of h and k, that the moments satisfy the following evolution equations (Othmer et al. 1988):
$$\frac{dX}{dt} = V \qquad\qquad \frac{dV}{dt} = -\lambda_0 V + \lambda\chi s\,e_1 \tag{88}$$
$$\frac{dD^2}{dt} = 2B \qquad\qquad \frac{dB}{dt} = s^2 - \lambda_0 B + \lambda\chi s\,X_1 \tag{89}$$
where $\lambda_0 \equiv \lambda(1 - \psi_d)$. The solution of (88) subject to zero initial data is
$$X(t) = sC_I\left(t - \frac{1}{\lambda_0}\bigl(1 - e^{-\lambda_0 t}\bigr)\right)e_1, \qquad V(t) = sC_I\bigl(1 - e^{-\lambda_0 t}\bigr)\,e_1 \tag{90}$$
where $C_I \equiv \chi/(1 - \psi_d)$ is sometimes called the chemotropism index. Thus the mean velocity of cell movement is parallel to the direction of the chemotactic gradient and approaches $V_\infty = sC_I\,e_1$ as $t\to\infty$; the asymptotic mean speed is thus the cell speed decreased by the factor $C_I$.
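Similarly, (90) can be checked against a direct integration of (88); the values of s, λ, χ and ψd below are illustrative and satisfy χ < 1 − ψd.

```python
import math

s, lam, chi, psi_d = 1.0, 2.0, 0.2, 0.5
lam0 = lam * (1 - psi_d)
CI = chi / (1 - psi_d)                 # chemotropism index

dt, T = 1e-4, 4.0
X1, V1 = 0.0, 0.0                      # components along e_1, zero initial data
for _ in range(int(T / dt)):
    X1 += dt * V1                      # (88): dX/dt = V
    V1 += dt * (-lam0 * V1 + lam * chi * s)

X1_exact = s * CI * (T - (1 - math.exp(-lam0 * T)) / lam0)
V1_exact = s * CI * (1 - math.exp(-lam0 * T))
print(X1, X1_exact, V1, V1_exact)
```

With these values the mean velocity saturates at $sC_I = 0.4s$, i.e. the drift along the gradient is the cell speed reduced by the chemotropism index.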
A measure of the fluctuations of the cell path around the expected value is provided by the mean square deviation, which is defined as
$$\sigma^2(t) = \frac{1}{N_0}\int_{\mathbb{R}^2}\int_0^{2\pi} \|x - X(t)\|^2\,p(x, \phi, t)\,d\phi\,dx = D^2(t) - \|X(t)\|^2. \tag{91}$$
Using (88) – (89), one also finds a differential equation for σ². Solving this equation, we find
$$\sigma^2 \sim \frac{2s^2}{\lambda_0}\left\{\bigl(1 - C_I^2\bigr)\,t + \frac{1}{\lambda_0}\left(\frac{5}{2}\,C_I^2 - 1\right)\right\} \qquad \text{as } t\to\infty$$
and from this one can extract the diffusion coefficient as
$$D = \frac{2s^2}{\lambda_0}\bigl(1 - C_I^2\bigr).$$
Therefore if the effect of an external gradient can be quantified experimentally and represented as the distribution
k(φ), the macroscopic diffusion coefficient, the persistence index, and the chemotactic sensitivity can be computed
from measurements of the mean displacement, the asymptotic speed and the mean-squared displacement.
However, it is not as straightforward to derive directly the macroscopic evolution equations based on detailed
models of signal transduction and response. Suppose that the internal dynamics that describe signal detection,
transduction, processing and response are described by the system
$$\frac{dy}{dt} = f(y, S) \tag{92}$$
where y ∈ Rm is the vector of internal variables and S is the chemotactic substance (S is extracellular cAMP
for Dd aggregation – this will be discussed in the third lecture). Models that describe the cAMP transduction
pathway exist (Martiel and Goldbeter 1987; Tang and Othmer 1994; Tang and Othmer 1995), but for describing
chemotaxis one would have to formulate a more detailed model. The form of this system can be very general
but it should always have the “adaptive” property that the steady-state value (corresponding to the constant
stimulus) of the appropriate internal variable (the “response regulator”) is independent of the absolute value of
the stimulus, and that the steady state is globally attracting with respect to the positive cone of Rm .
5.5 Inclusion of a resting phase
As is suggested by the notation used in (69), there is no necessity that the random process generating the velocity changes be a Poisson process. Whatever the underlying process, one simply has to compute $(\partial p/\partial t)_{sp}$ for that process, but of course if it is not Markovian the right-hand side of (69) will involve an integral over time. Secondly, we can include a resting time in the reorientation or tumbling phase, in order to more accurately describe the bacterial motion discussed in the following lecture.
When a resting phase is incorporated, the total population is divided into two subpopulations, one consisting
of the moving bacteria and the other comprising the resting bacteria. As before, let p = p(x, v, t) be the density
of bacteria at (x, v), and let r = r(x, v, τ, t) be the density of bacteria in the resting phase, defined so that
$r(x, v, \tau, t)\,dx\,dv\,d\tau$ is the number of bacteria with position between x and x + dx, whose most recent non-zero velocity lies between v and v + dv, and whose rest time lies between τ and τ + dτ. We assume as before that there are no external forces on the bacteria, and that the loss of bacteria from a given (x, v) point in position-velocity space is governed by a Poisson process of intensity λ. Now however the change is not to a non-zero
velocity, but rather into the resting phase. Bacteria leave the resting phase at random times and choose a new
velocity. The random exits from the resting phase are supposed to be governed by a Poisson process of intensity
µ, and the new choice of velocity depends on the time spent in the resting phase as follows:
$$\mathsf{T}(v, v', \tau) = e^{-\gamma\tau}\,T(v, v') + \bigl(1 - e^{-\gamma\tau}\bigr)\,g(\|v\|). \tag{93}$$
Here T(v, v′) is a velocity kernel of the type given at (75), and the speed distribution g(s) is such that g(0) = 0 and
$$\omega_n\int_0^{\infty} g(s)\,s^{n-1}\,ds = 1. \tag{94}$$
The factor $\omega_n = 2\pi^{n/2}/\Gamma(n/2)$ is the surface area of the unit sphere in $\mathbb{R}^n$. Thus the probability of choosing a random direction, with speed distributed according to g(s), increases with the resting time, and any directional persistence embodied in the kernel T(v, v′) fades exponentially with the resting time.
In the absence of birth/death terms, the governing equations for p and r are
$$\frac{\partial p}{\partial t} + v\cdot\nabla_x p = -\lambda p + \mu\int_{\mathbb{R}^n}\int_0^{\infty} \mathsf{T}(v, v', \tau)\,r(x, v', \tau, t)\,d\tau\,dv' \tag{95}$$
and
$$\frac{\partial r}{\partial t} + \frac{\partial r}{\partial \tau} = -\mu r \tag{96}$$
with the initial condition on r having the renewal form
$$r(x, v, 0, t) = \lambda\,p(x, v, t). \tag{97}$$
If we define
$$N_p(t) \equiv \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} p(x, v, t)\,dv\,dx \qquad\text{and}\qquad N_r(t) \equiv \int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\int_0^{\infty} r(x, v, \tau, t)\,d\tau\,dv\,dx,$$
then, since the total number of particles is conserved in the absence of a birth/death process, $N_p(t)$ and $N_r(t)$ must satisfy
$$N_p(t) + N_r(t) = N_0.$$
It is easy to see that the solution of (96) and (97) is given by $r(x, v, \tau, t) = \lambda e^{-\mu\tau}\,p(x, v, t - \tau)$, and it is convenient to introduce the following notation for the two moments
$$r^0(x, v, t) \equiv \int_0^{\infty} r(x, v, \tau, t)\,d\tau \qquad\text{and}\qquad r^1(x, v, t) \equiv \int_0^{\infty} e^{-\gamma\tau}\,r(x, v, \tau, t)\,d\tau.$$
The governing equation for p can now be written
$$\frac{\partial p}{\partial t} + v\cdot\nabla_x p = -\lambda p + \mu\int_{\mathbb{R}^n} T(v, v')\,r^1(x, v', t)\,dv' + \mu\,g(\|v\|)\int_{\mathbb{R}^n}\bigl(r^0(x, v', t) - r^1(x, v', t)\bigr)\,dv' \tag{98}$$
As before, we define the mean squared displacement in x of moving bacteria as
$$D_p^2 = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \|x\|^2\,p(x, v, t)\,dv\,dx \,\Big/\, N_p(t),$$
and of resting bacteria as
$$D_r^2 = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \|x\|^2\,r^0(x, v, t)\,dv\,dx \,\Big/\, N_r(t).$$
Furthermore, we define the corresponding second-order moments
$$B_p = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} (x\cdot v)\,p(x, v, t)\,dv\,dx \,\Big/\, N_p(t) \qquad\text{and}\qquad B_r = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} (x\cdot v)\,r^1(x, v, t)\,dv\,dx \,\Big/\, N_r(t).$$
These satisfy the following system of ordinary differential equations:
$$\frac{d\,D_p^2 N_p}{dt} = 2B_p N_p - \lambda D_p^2 N_p + \mu D_r^2 N_r \tag{99}$$
$$\frac{d\,D_r^2 N_r}{dt} = \lambda D_p^2 N_p - \mu D_r^2 N_r \tag{100}$$
$$\frac{d\,B_p N_p}{dt} = S_p^2 N_p - \lambda B_p N_p + \mu\psi_d B_r N_r \tag{101}$$
$$\frac{d\,B_r N_r}{dt} = \lambda B_p N_p - (\mu + \gamma)\,B_r N_r \tag{102}$$
This system is not closed, for the second moments $S_p^2$ and $S_r^2$ of the speed distribution, which are defined as
$$S_p^2 = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} s^2\,p(x, v, t)\,dv\,dx \,\Big/\, N_p(t) \qquad\text{and}\qquad S_r^2 = \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} s^2\,r^1(x, v, t)\,dv\,dx \,\Big/\, N_r(t),$$
are time-dependent, in contrast to the case analyzed in Section 3. One finds that
$$\frac{d\,S_p^2 N_p}{dt} = -\lambda S_p^2 N_p + \mu S_r^2 N_r + \mu s_0^2\bigl(N_r - R^1\bigr)$$
$$\frac{d\,S_r^2 N_r}{dt} = \lambda S_p^2 N_p - (\mu + \gamma)\,S_r^2 N_r$$
$$\frac{dN_r}{dt} = \lambda N_p - \mu N_r$$
$$\frac{dR^1}{dt} = \lambda N_p - (\mu + \gamma)\,R^1 \tag{103}$$
where
$$s_0^2 \equiv \omega_n\int_0^{\infty} g(s)\,s^{n+1}\,ds \qquad\text{and}\qquad R^1 \equiv \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} r^1(x, v, t)\,dv\,dx.$$
Since these equations are linear in $S_p^2 N_p$, etc., they can be solved explicitly and the results can be used in (99)–(102). However, if λ and µ are large the solution quickly relaxes to the steady-state solution, which is given by
$$N_r = \frac{\lambda N_0}{\lambda + \mu} \qquad N_p = \frac{\mu N_0}{\lambda + \mu} \qquad R^1 = \frac{\lambda\mu N_0}{(\mu + \lambda)(\mu + \gamma)} \qquad S_p^2 = s_0^2 \qquad S_r^2 = \frac{\mu}{\mu + \gamma}\,s_0^2 \tag{104}$$
Moreover, if we assume that initially the cells are released at the origin x = 0, then $D_p^2(0) = D_r^2(0) = 0$, and if they have no preferential direction of motion then $B_p(0) = B_r(0) = 0$. If in addition the initial distribution between moving and nonmoving cells is the steady-state distribution given by (104), then (104) holds for all time and the mean squared speed
$$S_p^2 = s_0^2 \tag{105}$$
is a constant. With these assumptions we obtain from (99) and (100) the usual formula
$$\frac{dD^2}{dt} = 2B\,N_p/N_0 \tag{106}$$
for the weighted mean squared displacement
$$D^2(t) = \frac{D_p^2(t)\,N_p + D_r^2(t)\,N_r}{N_0}.$$
The quantity $B \equiv B_p$ satisfies the following second-order equation, which is derived from (101) and (102):
$$\frac{d^2B}{dt^2} + (\lambda + \mu + \gamma)\,\frac{dB}{dt} + \lambda\bigl(\mu(1 - \psi_d) + \gamma\bigr)B = (\mu + \gamma)\,s_0^2. \tag{107}$$
It should be noted that in the limit µ → ∞, in which case the mean resting time 1/µ tends to zero, equation (107)
formally reduces to equation (81) with S 2 = s20 . The solution of (107) and the solution of the reduced equation
agree to within terms of O(1/µ), except in a neighborhood of t = 0.
This suggests the following definition of a modified turning frequency
$$\lambda_0 = \lambda\,\frac{\mu(1 - \psi_d) + \gamma}{\mu + \gamma}, \tag{108}$$
and if we solve (107) subject to $B_r(0) = B_p(0) = 0$ we obtain
$$B(t) = \frac{s_0^2}{\lambda_0}\left\{1 - \frac{\lambda_+ - \lambda_0}{\lambda_+ - \lambda_-}\,e^{-\lambda_- t} + \frac{\lambda_- - \lambda_0}{\lambda_+ - \lambda_-}\,e^{-\lambda_+ t}\right\}, \tag{109}$$
where $\lambda_{\pm}$ are given by
$$\lambda_{\pm} = \frac{\lambda + \mu + \gamma}{2}\left[1 \pm \sqrt{1 - 4\lambda\,\frac{(1 - \psi_d)\mu + \gamma}{(\lambda + \mu + \gamma)^2}}\right]. \tag{110}$$
Note that $\lambda_+ \sim \mu$ and $\lambda_- \to \lambda_0$ in the limit $\mu\to\infty$. The solution of (106) subject to the initial condition $D^2(0) = 0$ gives a relation for the mean squared displacement, namely
$$D^2(t) = \frac{2s_0^2}{\lambda_0}\,\frac{\mu}{\lambda + \mu}\left\{t + \frac{1}{\lambda_-}\,\frac{\lambda_+ - \lambda_0}{\lambda_+ - \lambda_-}\bigl(e^{-\lambda_- t} - 1\bigr) - \frac{1}{\lambda_+}\,\frac{\lambda_- - \lambda_0}{\lambda_+ - \lambda_-}\bigl(e^{-\lambda_+ t} - 1\bigr)\right\}. \tag{111}$$
As we saw earlier, the first term in (111) arises in a diffusion process. It can be shown that $\lambda_{\pm}$ are both real, and therefore the foregoing generalization deviates from this by two exponentially decreasing terms with the relaxation times
$$P_{\pm} = \frac{1}{\lambda_{\pm}}.$$
A plot of the relation in (111) similar to Figure 5 has an asymptote whose intercept with the t-axis is the persistence time
$$P = \frac{\lambda_+ + \lambda_- - \lambda_0}{\lambda_+\,\lambda_-}.$$
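As a closing check, the sketch below evaluates λ± from (110) and integrates (107) forward in time, comparing with the closed form (109). The initial slope follows from (101) evaluated at t = 0 with the steady distribution (104): B′(0) = S_p² = s₀². Parameter values are illustrative.

```python
import math

lam, mu, gamma, psi_d, s0sq = 1.0, 3.0, 0.5, 0.3, 1.0
lam0 = lam * (mu * (1 - psi_d) + gamma) / (mu + gamma)            # (108)
a = lam + mu + gamma
disc = math.sqrt(1 - 4 * lam * ((1 - psi_d) * mu + gamma) / a**2)
lam_plus, lam_minus = a / 2 * (1 + disc), a / 2 * (1 - disc)      # (110)

def B_exact(t):                                                   # (109)
    return s0sq / lam0 * (1 - (lam_plus - lam0) / (lam_plus - lam_minus) * math.exp(-lam_minus * t)
                            + (lam_minus - lam0) / (lam_plus - lam_minus) * math.exp(-lam_plus * t))

# Euler integration of (107) with B(0) = 0, B'(0) = s0^2
B, Bdot, dt, T = 0.0, s0sq, 1e-4, 5.0
for _ in range(int(T / dt)):
    B, Bdot = B + dt * Bdot, Bdot + dt * ((mu + gamma) * s0sq - a * Bdot
                                          - lam * (mu * (1 - psi_d) + gamma) * B)
print(B, B_exact(T))
```

The product λ₊λ₋ equals λ(µ(1 − ψd) + γ), the constant coefficient of B in (107), and B(t) saturates at s₀²/λ₀, which is the steady-state balance of (107).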
References
Abramowitz, M. and Stegun, I. 1965. Handbook of Mathematical Functions. New York: Dover.
Berg, H. 1983. Random Walks in Biology. Princeton: Princeton University Press.
Berg, H. C. and Brown, D. A. 1972. Chemotaxis in Escherichia coli analyzed by three dimensional tracking.
Nature, 239, 69–78.
Chandrasekhar, S. 1943. Stochastic Problems in Physics and Astronomy. Reviews of Modern Physics, 15, 2–89.
Codling, E. A., Plank, M. J., and Benhamou, S. 2008. Random walk models in biology. J Roy Soc Interface, 5
(25), 813.
Davis, B. 1990. Reinforced random walks. Prob. Thy. Rel. Fields, pages 203–229.
Dunn, G. A. 1983. Characterizing a kinesis response: Time averaged measures of cell speed and directional
persistence. Pages 14–33 of: Keller, H. O. and Till, G. O. (eds), Leukocyte Locomotion and Chemotaxis.
Basel: Birkhäuser Verlag.
Erban, R. and Othmer, H. G. 2007. Taxis equations for amoeboid cells. J Math Biol, 54, 847–885. Epub ahead
of print.
Feller, W. 1968. An Introduction to Probability Theory. New York: Wiley.
Ford, Roseanne and Lauffenburger, Douglas A. 1992. A Simple Expression for Quantifying Bacterial Chemotaxis Using Capillary Assay Data: Application to the Analysis of Enhanced Chemotactic Responses from Growth-Limited Cultures. 109(2), 127–150.
Fürth, R. 1920. Die Brownsche Bewegung bei Berücksichtigung einer Persistenz der Bewegungsrichtung. Zeitsch.
f. Physik, 2, 244–256.
Goldstein, S. 1951. On diffusion by discontinuous movements, and on the telegraph equation. Quart. J. Mech.
Applied Math., VI, 129–156.
Hall, R. L. 1977. Amoeboid movement as a correlated walk. J. Math. Biology, 4, 327–335.
Hillen, T. and Othmer, H. G. 2000. The diffusion limit of transport equations derived from velocity-jump processes. SIAM J. Appl. Math., 61(3), 751–775.
Johnson, N. L. and Kotz, S. 1970. Distributions in Statistics – Continuous Univariate Distributions. Vol. 2. New
York: Wiley.
Kac, M. 1956. A stochastic model related to the telegrapher’s equation. Rocky Mountain J. Math., 4, 497–509.
van Kampen, N. G. 1981. Stochastic Processes in Physics and Chemistry. Amsterdam: North-Holland.
Karlin, S. and Taylor, H. 1975. A First Course in Stochastic Processes. New York: Academic Press.
Martiel, J. L. and Goldbeter, A. 1987. A model based on receptor desensitization for cyclic AMP signalling in
Dictyostelium cells. Biophys. J., 52, 807–828.
Oelschläger, K. 1987. A Fluctuation Theorem for Moderately Interacting Diffusion Processes. Probab. Th. Rel. Fields, 74, 591–616.
Othmer, H. G. and Hillen, T. 2002. The diffusion limit of transport equations, Part II: chemotaxis equations. SIAM J. Appl. Math., 62, 1222–1260.
Othmer, H. G., Dunbar, S. R., and Alt, W. 1988. Models of dispersal in biological systems. J. Math. Biol., 26,
263–298.
Othmer, H. G. and Stevens, A. 1997. Aggregation, blowup, and collapse: The ABC's of taxis in reinforced random walks. SIAM J. Appl. Math., 57(4), 1044–1081.
Patlak, C. S. 1953. Random walk with persistence and external bias. Bull. of Math. Biophys., 15, 311–338.
Pemantle, R. 2007. A survey of random processes with reinforcement. Probability Surveys, 4, 1–79.
Resibois, P. and DeLeener, M. 1977. Classical Kinetic Theory of Fluids. New York: Wiley.
Spohn, H. 1991. Large Scale Dynamics of Interacting Particles. New York: Springer-Verlag.
Tang, Y. and Othmer, H. G. 1995. Excitation, oscillations and wave propagation in a G-protein based model of
signal transduction in Dictyostelium discoideum. Phil. Trans. Roy. Soc. (Lon.), B349, 179–195.
Tang, Y. and Othmer, H. G. 1994. A G Protein-Based Model of Adaptation in Dictyostelium discoideum. Math. Biosci., 120(1), 25–76.
Taylor, G. I. 1920. Diffusion by continuous movements. Proc. Lon. Math. Soc., 20, 196–212.
Widder, D. 1946. The Laplace Transform. Princeton: Princeton Univ. Press.