ON AN INFINITE SET OF NON-LINEAR
DIFFERENTIAL EQUATIONS
By J. B. McLEOD (Oxford)
[Received 25 October 1961]
1. IT is of some interest in the study of collision processes to consider
the solution of the infinite set of non-linear differential equations
where the initial conditions are
!/i(0) = l,
Sfi(0) = O (» = 2,3,4,...).
(1.2)
We are concerned with positive values of the independent variable $t$,
which is physically the time, and the constant coefficients $a_{ij}$ are non-negative
and symmetric, i.e. $a_{ij} \geq 0$, $a_{ji} = a_{ij}$.
The only work that seems to have been done on this type of problem
from the pure-mathematical point of view has been restricted, in effect,
to the case where $a_{ij}$ is bounded, as in (1), (2). The extension to
unbounded $a_{ij}$ appears difficult, but this paper makes a start on the
problem by solving explicitly one specific case, when $a_{ij} = ij$, and by
proving, in § 4, an existence theorem (though not a uniqueness theorem)
under fairly general conditions on $a_{ij}$. I hope to return to the problem
of uniqueness and other related questions in a later paper.
2. The case $a_{ij} = ij$
THEOREM 1. There is one and only one solution of the equations (1.1),
with the initial conditions (1.2) and with $a_{ij} = ij$, for which $\sum i^2 y_i$ is
absolutely and uniformly convergent for $t$ in some closed interval $[0, \delta]$,
where $\delta < 1$. Furthermore, this solution continues to be a valid solution
for all positive $t \leq 1$, but its analytic continuation, though meaningful,
fails to remain a solution for $t > 1$. In fact, there is no solution whatever
of the equations, with the given initial conditions, for which $\sum i^2 y_i$ is
absolutely and uniformly convergent in some interval $[0, \epsilon]$, where $\epsilon \geq 1$.
Proof. For convenience, we divide the proof into three parts, (i) showing that the solution is unique, (ii) showing that the solution exists,
and (iii) discussing the continuance of the solution. In (i), (ii), we shall,
of course, be restricted to values of $t$ in $[0, \delta]$.
Quart. J. Math. Oxford (2), 13 (1962), 119-128.
(i) With the given convergence assumptions, we can multiply the
$i$th equation by $i$ and add to obtain
$$\sum_{i=1}^{\infty} i\,\frac{dy_i}{dt} = \tfrac{1}{2}\sum_{i=1}^{\infty} i\sum_{j+k=i} jk\,y_j y_k \;-\; \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} i^2 j\,y_i y_j,$$
provided that the second term on the right-hand side converges. In
fact, it converges absolutely because it is just a rearrangement of the
first term, which evidently converges absolutely. We see this by noting
that, if either term is multiplied out, then the total coefficient of $y_i y_j$
is $(i+j)ij$. It follows therefore that
$$\sum_{i=1}^{\infty} i\,\frac{dy_i}{dt} = 0, \tag{2.2}$$
the convergence being absolute and uniform for $t$ in $[0, \delta]$.
It follows also from (1.1) that each $y_i$ is continuous because it is
differentiable, and that $dy_i/dt$ is continuous because it is the sum of
a uniformly convergent series of continuous functions. Hence we may
integrate (2.2) term by term, and obtain, using the initial conditions,
that
$$\sum_{i=1}^{\infty} i\,y_i = 1. \tag{2.3}$$
The equations (1.1) therefore reduce to
$$\frac{dy_i}{dt} = \tfrac{1}{2}\sum_{j+k=i} jk\,y_j y_k \;-\; i\,y_i \qquad (i = 1, 2, 3, \ldots). \tag{2.4}$$
The equations (2.4) can be solved in succession since only $y_1$ appears
in the first, only $y_1$ and $y_2$ in the second, and so forth. This solution,
and so the solution of (1.1), with initial conditions (1.2), is unique.
(ii) In order to prove that there is a solution at all of (1.1), with
initial conditions (1.2), we have to show that the solution of (2.4)
satisfies the convergence criteria of the theorem and also (2.3). We
therefore investigate the solution of (2.4).
We readily obtain
$$y_1 = e^{-t}, \qquad y_2 = \tfrac{1}{2}\,t\,e^{-2t}.$$
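These successive integrations are easy to check mechanically. The following sketch (not part of the original paper; the truncation level $N = 5$ is an arbitrary illustrative choice) solves the first few equations of (2.4) in succession with sympy and reproduces $y_1$, $y_2$ and the next few coefficients.

```python
# Sketch: solve the first few equations of (2.4) in succession with sympy.
# The equations and initial conditions are those of (2.4) and (1.2); N = 5 is
# an illustrative truncation level, not anything taken from the paper.
import sympy as sp

t = sp.symbols('t', nonnegative=True)
N = 5
y = [None] * (N + 1)          # y[i] will hold y_i(t); index 0 is unused

for i in range(1, N + 1):
    f = sp.Function(f'y{i}')
    # right-hand side of (2.4): (1/2) * sum_{j+k=i} j k y_j y_k  -  i y_i
    gain = sp.Rational(1, 2) * sum(j * (i - j) * y[j] * y[i - j] for j in range(1, i))
    ode = sp.Eq(f(t).diff(t), gain - i * f(t))
    y[i] = sp.dsolve(ode, f(t), ics={f(0): 1 if i == 1 else 0}).rhs
    print(i, sp.simplify(y[i]))   # 1 exp(-t), 2 t*exp(-2*t)/2, 3 t**2*exp(-3*t)/2, ...
```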
Let us suppose in general that, for $j = 1, 2, \ldots, n$,
$$y_j = A_j\,t^{j-1}\,e^{-jt}, \tag{2.5}$$
where $A_j$ is independent of $t$. Then, substituting in the $(n+1)$th
equation, we have
$$\frac{dy_{n+1}}{dt} + (n+1)\,y_{n+1} = \tfrac{1}{2}\sum_{j+k=n+1} jk\,A_j A_k\,t^{n-1}\,e^{-(n+1)t},$$
so that
$$y_{n+1} = \Big\{\frac{1}{2n}\sum_{j+k=n+1} jk\,A_j A_k\Big\}\,t^{n}\,e^{-(n+1)t}.$$
Hence (2.5) holds by induction for all $j$ provided that $A_1 = 1$ and
$$\frac{1}{2n}\sum_{j+k=n+1} jk\,A_j A_k = A_{n+1} \qquad (n = 1, 2, 3, \ldots). \tag{2.6}$$
To investigate the nature of the coefficients $A_j$, we define, formally,
$$y(w) = \sum_{n=1}^{\infty} n\,A_n\,w^{n-1}. \tag{2.7}$$
Then, still formally, the left-hand side of (2.6) is the coefficient of $w^n$ in
$$\tfrac{1}{2}\int_0^w y^2(u)\,du$$
for $n = 1, 2, 3, \ldots$, while the right-hand side is the coefficient of $w^n$ in
$$\frac{1}{w}\int_0^w y(u)\,du$$
for $n = 0, 1, 2, \ldots$. We therefore have, formally, that, for (2.6) to be
true, $y(w)$ is a solution of
$$\tfrac{1}{2}\int_0^w y^2(u)\,du = \frac{1}{w}\int_0^w y(u)\,du - A_1.$$
Differentiating both sides, we obtain
$$\tfrac{1}{2}\,y^2 = \frac{y}{w} - \frac{1}{w^2}\int_0^w y(u)\,du,$$
and, multiplying through by $w^2$ and differentiating again, we conclude
that
$$w\,y^2 + w^2 y\,\frac{dy}{dw} = w\,\frac{dy}{dw},$$
whence, after a slight rearrangement, we have
$$\frac{dy}{dw} = y\,\frac{d}{dw}(wy),$$
i.e.
$$\frac{d}{dw}(wy) = \frac{1}{y}\,\frac{dy}{dw},$$
and so
$$wy = \log y + c,$$
for some constant $c$. Using the fact that $A_1 = 1$, we see that $y = 1$
when $w = 0$, so that $c = 0$ and
$$wy = \log y. \tag{2.8}$$
This gives $w$ as an analytic function of $y$ in some neighbourhood of
$y = 1$, and so, on inversion, it gives $y$ as an analytic function of $w$ in
some neighbourhood of $w = 0$. Thus $y$ can be expressed as a power
series in $w$, and going backwards through the formal argument above,
we find that the power series is given by (2.7), where $A_1 = 1$ and (2.6)
is satisfied. We have therefore determined the coefficients $A_i$, given
by $A_1 = 1$ and (2.6), as the coefficients in the expansion of a certain
analytic function.
We next note that, if $w$ is given as a function of $y$ by (2.8), then
$dw/dy = 0$ implies
$$\log y = 1, \qquad y = e, \qquad w = e^{-1}.$$
Hence the radius of convergence of (2.7) is $e^{-1}$. It is now immediate
that the sequence $\{y_j\}$, given by (2.5), satisfies the convergence criteria
of the theorem provided that $|t|$ is so small that $|te^{-t}| \leq e^{-1} - \eta$ for some
$\eta > 0$.
It only remains to verify, in this section of the proof, that (2.3) is
satisfied. If $y_i$ is given by (2.5), and if $|t|$ is sufficiently small, then
$$\sum_{i=1}^{\infty} i\,y_i = e^{-t}\sum_{i=1}^{\infty} i\,A_i\,(te^{-t})^{i-1} = e^{-t}\,y(te^{-t}), \tag{2.9}$$
where $y(w)$ is defined by (2.7). But, by (2.8), $y(te^{-t})$ satisfies
$$te^{-t} = y^{-1}\log y, \quad\text{i.e.}\quad y = e^{t}.$$
Thus (2.9) reduces to unity, as required.
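As a numerical illustration (not in the paper), one can generate the $A_n$ from the recurrence (2.6) and check that the partial sums of (2.9) are indeed close to unity for $t < 1$; the truncation level $M$ and the sample values of $t$ below are arbitrary choices.

```python
# Sketch: generate A_n from (2.6) and check (2.3)/(2.9) numerically for t < 1.
from fractions import Fraction
from math import exp

M = 300                                   # truncation level (illustrative choice)
A = [None, Fraction(1)]                   # A_1 = 1
for n in range(1, M):
    s = sum(j * (n + 1 - j) * A[j] * A[n + 1 - j] for j in range(1, n + 1))
    A.append(s / (2 * n))                 # (2.6)

for t in (0.2, 0.5, 0.8):
    w = t * exp(-t)
    total = exp(-t) * sum(n * float(A[n]) * w ** (n - 1) for n in range(1, M + 1))
    print(t, total)                       # each value should be close to 1
```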
(iii) The final question is whether the solution now found remains
valid for all positive $t$. In fact, the argument in the last two paragraphs
of (ii), as we have shown, holds good for all positive $t$ such that
$|te^{-t}| < e^{-1}$, i.e. for $0 \leq t < 1$, and so the solution remains valid over
this range.

To investigate whether it continues to be a solution for $t \geq 1$, we
note that it is certainly a solution of (2.4) for all positive $t$, and comparing (2.4) with (1.1), we see that it can continue as a solution of (1.1)
if and only if $\sum i\,y_i = 1$. In fact, this holds for $t = 1$, but not for $t > 1$.
To consider $t > 1$ first, we note that then $0 < te^{-t} < e^{-1}$, and so
certainly $\sum i\,y_i$ converges. In fact,
$$\sum_{i=1}^{\infty} i\,y_i = e^{-t}\sum_{i=1}^{\infty} i\,A_i\,(te^{-t})^{i-1} = e^{-t}\,y(te^{-t}) = e^{-t}\,e^{t^*},$$
where $t^*$ (with $0 < t^* < 1$) is such that
$$t^*\,e^{-t^*} = t\,e^{-t},$$
so that $y(te^{-t}) = y(t^*e^{-t^*}) = e^{t^*}$, as at the end of (ii). Hence
$$\sum_{i=1}^{\infty} i\,y_i = e^{t^*-t} < 1.$$
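Again as an illustration outside the paper, this behaviour for $t > 1$ can be checked numerically: with the $A_n$ generated from (2.6), the partial sums of $\sum i\,y_i$ agree with $e^{t^*-t}$, where $t^*$ is found here by a simple bisection.

```python
# Sketch: for t > 1, compare the partial sums of sum_i i*y_i with e^{t*-t},
# where t* in (0,1) solves t* e^{-t*} = t e^{-t}.  A_n is generated from (2.6).
from fractions import Fraction
from math import exp

M = 300
A = [None, Fraction(1)]
for n in range(1, M):
    s = sum(j * (n + 1 - j) * A[j] * A[n + 1 - j] for j in range(1, n + 1))
    A.append(s / (2 * n))

for t in (1.5, 2.0, 3.0):
    w = t * exp(-t)
    partial = exp(-t) * sum(n * float(A[n]) * w ** (n - 1) for n in range(1, M + 1))
    lo, hi = 0.0, 1.0          # bisection for t*: s*e^{-s} is increasing on [0, 1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid * exp(-mid) < w else (lo, mid)
    print(t, partial, exp(0.5 * (lo + hi) - t))   # last two columns agree, and are < 1
```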
To consider $t = 1$, we have to investigate the coefficients $A_i$ more
closely. We can in fact obtain them explicitly from (2.8). For, with
$y = e^{\tau}$, (2.8) reduces to $\tau = we^{\tau}$, and we can then expand $e^{\tau}$ as a power
series in $w$ by Lagrange's expansion. This gives
$$y = e^{\tau} = 1 + \sum_{n=1}^{\infty}\frac{(n+1)^{n-1}}{n!}\,w^n,$$
and comparing this with (2.7), we obtain
$$A_{n+1} = \frac{(n+1)^{n-2}}{n!} \qquad (n = 1, 2, 3, \ldots).$$
Using the asymptotic expression for $n!$, we conclude that
$$A_{n+1} \sim \frac{e^{n+1}}{\sqrt{(2\pi)}\,n^{5/2}},$$
and it then follows that (2.7) is uniformly convergent for $0 \leq w \leq e^{-1}$,
and thus that $y(w)$ is continuous for $0 \leq w \leq e^{-1}$. Hence $\sum i\,y_i = 1$
for $t = 1$.
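The closed form can be checked against the recurrence, and the divergence of $\sum i^2 y_i$ at $t = 1$ can be seen numerically; the following sketch (mine, not the paper's) does both.

```python
# Sketch: check A_{n+1} = (n+1)^{n-2}/n! against the recurrence (2.6), and
# observe that at t = 1 the terms i^2 A_i e^{-i} of sum_i i^2 y_i decay only
# like i^{-1/2} (so that series diverges, while sum_i i y_i still converges).
from fractions import Fraction
from math import exp, factorial, lgamma, log, sqrt

M = 30
A = [None, Fraction(1)]
for n in range(1, M):
    s = sum(j * (n + 1 - j) * A[j] * A[n + 1 - j] for j in range(1, n + 1))
    A.append(s / (2 * n))

assert A[2] == Fraction(1, 2)                       # the n = 1 case, 2^{-1}/1!
for n in range(2, M):
    assert A[n + 1] == Fraction((n + 1) ** (n - 2), factorial(n))

for i in (10, 100, 1000, 10000):
    # log of the i-th term of sum i^2 y_i at t = 1, using the closed form for A_i
    log_term = 2 * log(i) + (i - 2) * log(i) - lgamma(i + 1) - i
    print(i, exp(log_term) * sqrt(i))               # tends to 1/sqrt(2*pi) ~ 0.399
```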
The proof of the statement in the theorem that no solution exists
satisfying the convergence criteria over $[0, \epsilon]$, where $\epsilon \geq 1$, is practically
a repetition of what we have just done. For, since the convergence
criteria are satisfied over $[0, \epsilon]$, the equations (1.1) reduce to (2.4) over
$[0, \epsilon]$, as in part (i) of the proof, and (2.4) gives as the only possible
solution (2.5). But (2.5) is not a solution for $t > 1$, and even for $t = 1$
it does not satisfy the convergence criteria, since $\sum i^2 y_i$ diverges.
3. Theorem 1 indicates that all that we can hope to prove in any
generalization is that there exists one and only one solution of the
equations (1.1) with initial conditions (1.2) for some interval $[0, c]$ of $t$.
That there is no significance in the interval $[0, 1]$ of Theorem 1 is shown
by considering the case $a_{ij} = kij$, for some constant $k$. This in fact
reduces to the case $a_{ij} = ij$ if we replace $t$ by $\tau$, where $\tau = kt$. From
Theorem 1, we then have one and, under the convergence criteria,
only one solution for the interval $[0, 1]$ of $\tau$, which is $[0, 1/k]$ for $t$.
4. The general case
Let us first consider very briefly what happens if $a_{ij} = c_i c_j$, where,
since $a_{ij} \geq 0$, we must have $c_i \geq 0$ for all $i$. In fact, we may exclude
the case $c_i = 0$ for any $i$. For, if $c_i = 0$ for just one value of $i$, then the
corresponding $y_i$ appears in no equation but the $i$th, for it appears
with the coefficient $c_i\,(= 0)$ in any other equation. We may therefore
ignore $y_i$ and solve the remaining equations for the remaining $y_j$. The
problem therefore reduces to a similar one in which all the coefficients
are non-vanishing. This argument can plainly be repeated if $c_i = 0$
for more than one value of $i$.

Further, by making a change of scale as in § 3, we may suppose that
$c_1 = 1$.

More generally, if we suppose that there are numbers $c_i$ such that
$a_{ij} \leq c_i c_j$ for all $i, j$, then once again we may assume without loss of
generality that $c_i > 0$ for all $i$. For, if $c_i = 0$ for some particular value
of $i$, then $a_{ij} = 0$ for this value of $i$ and all $j$, and $y_i$ can be eliminated
as before. And once again we may suppose that $c_1 = 1$.
Given a sequence of non-zero numbers $\{c_n\}$ $(n = 1, 2, 3, \ldots)$, with
$c_1 = 1$, we define a related sequence $\{C_n\}$ by the recurrence relations
$$C_1 = 1, \tag{4.1}$$
$$C_{n+1} = \frac{c_{n+1}}{2n}\sum_{i+j=n+1} C_i C_j \qquad (n = 1, 2, 3, \ldots). \tag{4.2}$$
We then have

THEOREM 2. If, in the problem of (1.1) with initial conditions (1.2),
we have $a_{ij} \leq c_i c_j$, where, without loss of generality, we may suppose
$c_i > 0$ for all $i$ and $c_1 = 1$, and if the sequence $\{C_n\}$ defined by (4.1) and (4.2)
is such that the power series $\sum C_n z^n$ has a non-zero radius of convergence
$R$, then the problem possesses at least one solution valid in any closed
interval $[0, R-\delta]$ of $t$, where $\delta > 0$. (We may remark that this is not
a best-possible result since, in the case of § 2, Theorem 2 would give
us a range of validity $[0, e^{-1}-\delta]$, whereas Theorem 1 tells us that it is
actually $[0, 1]$. It is also perhaps worth emphasizing that, although
Theorem 2 places no convergence criteria as in Theorem 1, it at the
same time says nothing about uniqueness.)
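As a check on that remark (again not part of the paper), one can compute the $C_n$ of (4.1) and (4.2) for the case $c_i = i$ and verify that the ratios $C_{n+1}/C_n$ approach $e$, so that $R = e^{-1}$ for this case.

```python
# Sketch: compute C_n from (4.1)-(4.2) with c_i = i (the case a_{ij} = ij), and
# check that C_{n+1}/C_n -> e, i.e. the radius of convergence R of sum C_n z^n
# is e^{-1}, as stated in the remark following Theorem 2.
from math import e

def c(i):
    return i                  # the dominating sequence for a_{ij} = i*j

M = 400
C = [0.0, 1.0]                                        # C_1 = 1, (4.1)
for n in range(1, M):
    s = sum(C[i] * C[n + 1 - i] for i in range(1, n + 1))
    C.append(c(n + 1) / (2 * n) * s)                  # (4.2)

for n in (10, 50, 100, 399):
    print(n, C[n + 1] / C[n], e)                      # the ratio approaches e = 2.718...
```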
Proof. Consider the $N$ equations
$$\frac{dy_i^{(N)}}{dt} = \tfrac{1}{2}\sum_{j+k=i} a_{jk}\,y_j^{(N)} y_k^{(N)} \;-\; y_i^{(N)}\sum_{j=1}^{N-i} a_{ij}\,y_j^{(N)} \qquad (i = 1, 2, \ldots, N), \tag{4.3}$$
with the initial conditions
$$y_1^{(N)}(0) = 1, \qquad y_i^{(N)}(0) = 0 \quad (i = 2, 3, \ldots, N). \tag{4.3 a}$$
By multiplying the $i$th equation of (4.3) by $i$ and adding for $i = 1,
2, \ldots, N$, we find that, as at the beginning of Theorem 1, the terms on
the right-hand side cancel, so that
$$\frac{d}{dt}\sum_{i=1}^{N} i\,y_i^{(N)} = 0,$$
whence
$$\sum_{i=1}^{N} i\,y_i^{(N)} = 1. \tag{4.4}$$
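The truncated problem is easy to explore numerically. The sketch below (not from the paper) integrates (4.3), as reconstructed above, for the kernel $a_{ij} = ij$ of § 2 with an arbitrary truncation level $N = 60$, and checks that $\sum_i i\,y_i^{(N)}$ stays equal to 1 in accordance with (4.4) and that $y_1^{(N)}$, $y_2^{(N)}$ track the explicit solution of § 2 while $t < 1$.

```python
# Sketch: integrate the truncated system (4.3) numerically for a_{ij} = i*j and
# check (4.4) and the agreement with the explicit solution of section 2.
# N = 60 and the output times are illustrative choices, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

N = 60

def rhs(t, y):
    dy = np.zeros(N)
    for i in range(1, N + 1):
        gain = 0.5 * sum(j * (i - j) * y[j - 1] * y[i - j - 1] for j in range(1, i))
        loss = y[i - 1] * sum(i * j * y[j - 1] for j in range(1, N - i + 1))
        dy[i - 1] = gain - loss
    return dy

y0 = np.zeros(N)
y0[0] = 1.0                                        # initial conditions (4.3 a)
sol = solve_ivp(rhs, (0.0, 0.9), y0, rtol=1e-10, atol=1e-12, dense_output=True)

for t in (0.3, 0.6, 0.9):
    y = sol.sol(t)
    mass = sum((i + 1) * y[i] for i in range(N))   # stays very close to 1, by (4.4)
    err1 = abs(y[0] - np.exp(-t))                  # y_1 = e^{-t}
    err2 = abs(y[1] - 0.5 * t * np.exp(-2 * t))    # y_2 = (1/2) t e^{-2t}
    print(t, mass, err1, err2)                     # errors grow slowly as t nears 1
```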
The proof of the theorem breaks fairly naturally into three sections:

(i) we show that the problem of (4.3) and (4.3 a) has one and only
one solution, and that this solution is valid for all positive $t$ and is such
that $y_i^{(N)}(t) \geq 0$ for all $t$;

(ii) we show that, for fixed $i$ and $N \to \infty$, the sequence $\{y_i^{(N)}(t)\}$ is
uniformly bounded and equicontinuous for $t$ in $[0, R-\delta]$;

(iii) we show that there is a sequence of values of $N$ tending to
infinity for which, for all $i$ and $t$ in $[0, R-\delta]$, $y_i^{(N)}(t) \to y_i(t)$, where $y_i(t)$ is
a solution of the problem of (1.1) and (1.2).
(i) We note first that the right-hand side of (4.3) satisfies a Lipschitz
condition in the $y_i^{(N)}$ if the $y_i^{(N)}$ are bounded. But this is certainly so
initially, and so, by a familiar theorem in the theory of the existence
of solutions of differential equations, there is one and only one solution
of (4.3) and (4.3 a) for some interval $[0, t_1]$, say, of $t$.
We note secondly that, for $t$ sufficiently small in $[0, t_1]$, $y_i^{(N)} \geq 0$ for
all $i$. For
$$y_1^{(N)}(0) = 1; \qquad y_2^{(N)}(0) = 0, \quad \Big(\frac{dy_2^{(N)}}{dt}\Big)_{t=0} = \tfrac{1}{2}\,a_{11} > 0,$$
unless $a_{11} = 0$, when, from (4.3),
$$\frac{dy_2^{(N)}}{dt} = -\,y_2^{(N)}\sum_j a_{2j}\,y_j^{(N)},$$
which, with the initial condition $y_2^{(N)}(0) = 0$, implies that $y_2^{(N)} \equiv 0$;
similarly
$$y_3^{(N)}(0) = 0, \qquad \frac{dy_3^{(N)}}{dt} > 0 \quad\text{for small positive } t,$$
unless, arguing again as above, $y_3^{(N)} \equiv 0$; and so forth.
If we now drop from the equations all functions which are identically
zero, and renumber to fill up the gaps, we can show that, except for
the initial vanishing of $y_2^{(N)}, \ldots, y_N^{(N)}$, we have $y_i^{(N)} > 0$ for all $i$ and all $t$
(not necessarily sufficiently small) in $[0, t_1]$. For, if we suppose that
$y_k^{(N)}$, say, is the first to vanish as $t$ increases in $[0, t_1]$ (if two vanish
simultaneously, we choose the one with the smaller value of $k$) and
if we suppose that it vanishes first at $t = \sigma$, then we have from (4.3),
since $y_k^{(N)}(\sigma) = 0$ while $y_j^{(N)}(\sigma) > 0$ for $j < k$, that
$$\Big(\frac{dy_k^{(N)}}{dt}\Big)_{t=\sigma} = \tfrac{1}{2}\sum_{j+l=k} a_{jl}\,y_j^{(N)}(\sigma)\,y_l^{(N)}(\sigma) > 0,$$
which is a contradiction.
If $y_i^{(N)} > 0$ for all $i$, then it follows from (4.4) that $y_i^{(N)} < 1$ for all $t$,
and so we can treat $t_1$ as the initial point, apply the Lipschitz-condition
argument, and extend the region of validity. As we repeat this process,
we can either extend the region of validity to all positive $t$, which is
what we want, or we find that there is a least upper bound on the
values of $t$ for which extension is possible. Let us suppose this bound
is $T$, so that we can extend to any interval $[0, T-\delta]$, where $\delta > 0$, but
not to any interval $[0, T+\delta]$.
As $t \to T$, we certainly have for all $i$, by arguments already used, that
$y_i^{(N)} > 0$, and so, from (4.4), $y_i^{(N)} < 1$. Hence, from (4.3), $|dy_i^{(N)}/dt| < K$
for all $i$ and some constant $K$. Certainly, $y_i^{(N)}(t)$ has at least one limit
as $t \to T$ because it is bounded, but in fact the limit is unique. For,
if we suppose not, then we can find two values of $t$ arbitrarily close to
$T$ for which the values of $y_i^{(N)}(t)$ are not arbitrarily close, and this
contradicts the boundedness of the differential coefficient.
If we now define
$$y_i^{(N)}(T) = \lim_{t\to T-0} y_i^{(N)}(t),$$
we can apply the usual Lipschitz-condition argument to extend the
solution beyond $T$, which contradicts the definition of $T$. We have thus
shown that we can extend the solution to all positive $t$, and it is unique
because we have throughout used Lipschitz-condition arguments.
Furthermore the argument implies that $y_i^{(N)} > 0$ for all $t$ except
initially and except for those functions which vanish identically. This
completes section (i) of the proof.
(ii) We now investigate bounds for the solutions $y_i^{(N)}$. Since certainly
$y_1^{(N)} \leq 1$, we have, from (4.3), with $i = 2$, that
$$\frac{dy_2^{(N)}}{dt} \leq \tfrac{1}{2}\,a_{11}\,\{y_1^{(N)}\}^2 \leq \tfrac{1}{2}\,c_1^2 = \tfrac{1}{2},$$
so that
$$c_2\,y_2^{(N)} \leq C_2\,t.$$
Suppose in general that
$$c_i\,y_i^{(N)} \leq C_i\,t^{i-1} \qquad (i = 1, 2, \ldots, n). \tag{4.5}$$
Then again from (4.3)
$$\frac{dy_{n+1}^{(N)}}{dt} \leq \tfrac{1}{2}\sum_{i+j=n+1} a_{ij}\,y_i^{(N)} y_j^{(N)} \leq \tfrac{1}{2}\sum_{i+j=n+1} C_i C_j\,t^{n-1},$$
so that
$$c_{n+1}\,y_{n+1}^{(N)} \leq \frac{c_{n+1}}{2n}\sum_{i+j=n+1} C_i C_j\,t^{n} = C_{n+1}\,t^{n},$$
by virtue of (4.2). Using induction, we establish the truth of (4.5) for
$i = 1, 2, \ldots, N$. We note that the estimate (4.5) for $y_i^{(N)}$ is independent
of $N$.
We note next that
$$\sum_{i=1}^{N} c_i\,y_i^{(N)} \leq \sum_{i=1}^{\infty} C_i\,t^{i-1}, \tag{4.6}$$
by virtue of (4.5), and that the right-hand side is bounded, independent
of $N$, provided that $t$ lies in $[0, R-\delta]$. From now on we shall suppose
$t$ to lie in such an interval.
With this restriction on $t$, we now show that $y_i^{(N)}$, for fixed $i$, is equicontinuous for all $N$. We shall certainly have shown this if we prove
that $dy_i^{(N)}/dt$ is bounded for all $N$, $t$, and this follows from (4.3) by using
(4.5) and (4.6). For fixed $i$, therefore, but for all $N$, and for all $t$ in
$[0, R-\delta]$, the sequence $\{y_i^{(N)}(t)\}$ is a uniformly bounded, equicontinuous
set. Hence, by Ascoli's lemma [as, for example, in (3), 5], there is a
subsequence which is uniformly convergent for $t$ in $[0, R-\delta]$.
(iii) We can now employ the principle of the Helly selection theorem
to show that there is a sequence of values of $N$ for which $\{y_i^{(N)}(t)\}$ is
uniformly convergent, not just for one $i$, but for all $i$. For we have
established a subsequence $N_1^1 < N_2^1 < N_3^1 < \ldots$ such that for this subsequence $\{y_1^{(N)}\}$ is uniformly convergent. We can then find a subsequence
of this subsequence, say $N_1^2 < N_2^2 < N_3^2 < \ldots$, such that $\{y_2^{(N)}\}$ is also
uniformly convergent. And, in general, for any integer $i$, we can find
a subsequence $N_1^i < N_2^i < N_3^i < \ldots$, which is a subsequence of that
for $i-1$, and such that for it $\{y_1^{(N)}\}, \{y_2^{(N)}\}, \ldots, \{y_i^{(N)}\}$ are all uniformly
convergent. The sequence $N_1^1, N_2^2, N_3^3, \ldots$ is then a subsequence for
which $\{y_i^{(N)}\}$ is uniformly convergent for all $i$. From now on, we shall
restrict ourselves to values of $N$ in this subsequence. Then
$$\lim_{N\to\infty} y_i^{(N)}(t) = y_i(t), \text{ say}. \tag{4.7}$$
As $N \to \infty$ through this subsequence, and for $t$ in $[0, R-\delta]$, the sum
$\sum_j a_{ij}\,y_j^{(N)}$ occurring in the $i$th equation of (4.3) converges uniformly
to $\sum_{j=1}^{\infty} a_{ij}\,y_j$. For
$$\Big|\sum_{j=1}^{N-i} a_{ij}\,y_j^{(N)} - \sum_{j=1}^{\infty} a_{ij}\,y_j\Big| \leq \sum_{j=1}^{N_0} a_{ij}\,|y_j^{(N)} - y_j| + \sum_{j=N_0+1}^{N-i} a_{ij}\,y_j^{(N)} + \sum_{j=N_0+1}^{\infty} a_{ij}\,y_j,$$
where $N_0$, though large, is to be fixed. We first choose $N_0$ so that, using
(4.5) and (4.6), both the second and third terms on the right-hand side
do not exceed $\tfrac{1}{3}\epsilon$, for any given $\epsilon > 0$, and, $N_0$ once chosen, we can
make $N$ sufficiently large so that, from (4.7), the first term also does
not exceed $\tfrac{1}{3}\epsilon$.
It thus follows that $dy_i^{(N)}/dt$ converges uniformly to
$$\tfrac{1}{2}\sum_{j+k=i} a_{jk}\,y_j y_k \;-\; y_i\sum_{j=1}^{\infty} a_{ij}\,y_j,$$
from which it follows that the functions $y_i(t)$ satisfy (1.1). This completes the proof of the theorem.
REFERENCES
1. Z. A. Melzak, 'A scalar transport equation', Trans. American Math. Soc. 85 (1957) 547-60.
2. D. Morgenstern, 'Analytical studies related to the Maxwell-Boltzmann equation', J. Rational Mech. Anal. 4 (1955) 533-55.
3. E. A. Coddington and N. Levinson, Theory of ordinary differential equations (New York, 1955).