Dynamical systems

Johan Richter
Discrete dynamical systems
Let V be a set and f a function (usually continuous in some sense)
on V. A discrete dynamical system is a map N × V → V given by
(n, x0) ↦ f^n(x0),
where f^n denotes the n-fold composition of f with itself.
Usually one uses the notation x1 = f(x0), x2 = f(x1) and so on.
In principle it is easy to compute xn for finite n, but the questions
asked in the theory of dynamical systems typically deal with the
long-term behaviour of the system.
Example. A classical example with (perhaps dubious) biological
motivation is the following. Suppose we are modelling the growth
of some population. We assume there are two competing influences
on population growth: on the one hand, the larger the population,
the more fertile animals (plants, humans) there are who can
reproduce. On the other hand, the larger the population, the fewer
resources each individual has available and the lower the chance
each individual has of surviving.
A simple model that captures these two competing factors is
the discrete logistic model xn+1 = rxn (1 − xn ), where r is some
positive constant of proportionality. We must have r ∈ [0, 4] to
keep xn in the interval [0, 1]. (See the lecture notes by Lars-Erik
Persson for an analysis of the system.)
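As a quick numerical illustration (a sketch of ours; the parameter values below are illustrative choices, not taken from these notes), the model is easy to iterate, and for r close to 4 two nearby starting values soon give visibly different orbits:

```python
# Iterating the discrete logistic model x_{n+1} = r*x_n*(1 - x_n).
# The values r = 3.9, x0 = 0.2 and the perturbation 1e-9 are
# illustrative choices of ours.

def iterate(r, x0, n):
    """Return the orbit [x_0, x_1, ..., x_n] of the logistic model."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

orbit = iterate(3.9, 0.2, 50)
# For r in [0, 4] and x0 in [0, 1] every iterate stays in [0, 1].
assert all(0.0 <= x <= 1.0 for x in orbit)

# Sensitivity: a starting value perturbed by 1e-9 soon gives a
# visibly different orbit.
orbit2 = iterate(3.9, 0.2 + 1e-9, 50)
sep = max(abs(a - b) for a, b in zip(orbit, orbit2))
print(sep)  # far larger than the initial 1e-9 difference
```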
It is interesting to note that we can rewrite the logistic equation
in the form
xn+1 − xn = xn(r(1 − xn) − 1),
which can be readily approximated by the continuous-time system
y′ = y(r(1 − y) − 1).
This ODE does not display chaos: there are theorems ruling out
chaos for first-order ODEs in one variable (solutions of such an
autonomous scalar equation are monotone in time).
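To see the contrast numerically (our own sketch; the values of r, the step size and the starting value are illustrative choices): the difference equation above is exactly the forward-Euler discretisation of this ODE with step size 1, and with a small step size the Euler iterates track the ODE and settle monotonically at the equilibrium y = 1 − 1/r instead of behaving chaotically.

```python
# Forward-Euler integration of y' = y*(r*(1 - y) - 1).  With step
# size h = 1 this recursion is exactly the rewritten logistic map;
# with a small h it tracks the ODE, which simply converges.
r, h = 3.9, 0.01
y = 0.2
for _ in range(2000):              # integrate up to t = 20
    y += h * y * (r * (1 - y) - 1)

y_star = 1 - 1 / r                 # non-zero equilibrium of the ODE
print(abs(y - y_star))             # essentially zero: no chaos here
```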
Sensitivity to starting values
An exact definition of chaos is difficult to give in general and will
not be attempted in this course. We will instead focus here on one
aspect of chaos: that the values of the system depend very
sensitively on the starting values. This can be quantified with the
help of something called the Lyapunov exponent.
Computing the Lyapunov exponent
A useful theorem for the computations in the following examples is
this. Recall that the Lyapunov exponent at x0 is
λ = lim_{n→∞} (1/n) Σ_{i=0}^{n−1} log |f′(xi)|.
Theorem. Suppose that f′(xn) converges to a limit, L, where
xn+1 = f(xn). Then the Lyapunov exponent for the system at the
point x0 is log |L|. In particular, if lim xn = x then the Lyapunov
exponent is log |f′(x)|.
Example. Let f(x) = √x and compute the Lyapunov exponent
at some point x0 > 0. It is clear that xn → 1, and f′(x) = 1/(2√x),
so λ = log |f′(1)| = log(1/2) = −ln(2). This implies that the
difference between nearby points is approximately halved in every
step.
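This example can be checked numerically by averaging log |f′(xi)| along the orbit, which is the standard formula for the Lyapunov exponent recalled above (the helper below is a sketch of ours, not from the notes; the starting point 5.0 is arbitrary):

```python
import math

def lyapunov(f, df, x0, n=1000):
    """Estimate (1/n) * sum of log|f'(x_i)| along the orbit of x0."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(df(x)))
        x = f(x)
    return total / n

# f(x) = sqrt(x) has f'(x) = 1/(2*sqrt(x)); x0 = 5.0 is an
# arbitrary illustrative starting point.
est = lyapunov(math.sqrt, lambda x: 1 / (2 * math.sqrt(x)), x0=5.0)
print(est, -math.log(2))  # the estimate is close to -ln 2
```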
Example. Let f(x) = x/2 + 10. Then f′(x) = 1/2 and the Lyapunov
exponent is −ln(2), no matter where we start from. This is a case
where the system is not sensitive to the starting value; in fact
every orbit converges to the fixed point x = 20.
Example. Let f(x) = 2x − 1/x. The point x = 1 is a fixed point,
and f′(x) = 2 + 1/x², so the Lyapunov exponent if x0 = 1 is
log |f′(1)| = ln(3). If x0 > 1 then the sequence xn diverges towards
infinity and f′(xn) → 2. Thus the Lyapunov exponent for x0 > 1
is ln(2).
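Taking the map in this example to be f(x) = 2x − 1/x, the claim for x0 > 1 can be checked numerically with the same averaging formula (our own sketch; the orbit grows roughly like 2^n, so we keep the number of steps moderate to avoid floating-point overflow):

```python
import math

# Estimate the Lyapunov exponent of f(x) = 2*x - 1/x for x0 > 1 by
# averaging log|f'(x_i)| along the orbit; here f'(x) = 2 + 1/x**2.
# The starting point x0 = 2.0 is an illustrative choice.
x, total, n = 2.0, 0.0, 500
for _ in range(n):
    total += math.log(2 + 1 / x**2)
    x = 2 * x - 1 / x            # the orbit diverges to infinity
est = total / n
print(est, math.log(2))          # the estimate is close to ln 2
```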
Fixed points
A fixed point of a function f is a point x such that f(x) = x. If
the function f is used to define a dynamical system then x will be
an equilibrium of the system, a point such that xn+1 = xn whenever
xn = x.
A fixed point x is said to be stable if for every δ > 0 there is
an ε > 0 such that if y ∈ (x − ε, x + ε) then f^n(y) ∈ (x − δ, x + δ)
for all n and f^n(y) → x.
Theorem. If f is a continuously differentiable function then a
fixed point x is stable if |f′(x)| < 1 and unstable if |f′(x)| > 1.
Proof. We prove only the first part of the theorem.
Let δ > 0 be given. Since f′ is continuous and |f′(x)| < 1, there is
an ε > 0 such that |f′(y)| < 1 − ε whenever |x − y| < ε. We can
also assume ε < min(δ, 1). Then for any y such that |x − y| < ε,
the mean value theorem gives, for some ξ between x and y,
|f(y) − x| = |f(y) − f(x)| = |f′(ξ)||x − y| ≤ (1 − ε)|x − y| < |x − y|.
This shows that the distance between x and f(y) is less than the
distance between x and y. In particular f(y) stays in the interval
(x − ε, x + ε), so the estimate can be applied repeatedly, giving
|f^n(y) − x| ≤ (1 − ε)^n |x − y| → 0. The first part of the theorem
now follows by this easy induction.
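The stability criterion can be illustrated on the logistic map from the first example (a sketch of ours; the value r = 2.5 is an illustrative choice):

```python
# Stability of the fixed points of f(x) = r*x*(1 - x) for r = 2.5.
# The fixed points are x = 0 and x = 1 - 1/r = 0.6.
r = 2.5
f = lambda x: r * x * (1 - x)
df = lambda x: r * (1 - 2 * x)

x_star = 1 - 1 / r
print(abs(df(0.0)))     # 2.5 > 1, so x = 0 is unstable
print(abs(df(x_star)))  # 0.5 < 1, so x = 0.6 is stable

# An orbit started near x_star is pulled towards it, as the
# contraction estimate in the proof predicts.
x = x_star + 0.05
for _ in range(30):
    x = f(x)
print(abs(x - x_star))  # essentially zero
```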