Discrete-time Nonautonomous Dynamical Systems

P.E. Kloeden, C. Pötzsche, and M. Rasmussen

Abstract These notes present and discuss various aspects of the recent theory for time-dependent difference equations giving rise to nonautonomous dynamical systems on general metric spaces: First, basic concepts of autonomous difference equations and discrete-time (semi-) dynamical systems are reviewed for later contrast in the nonautonomous case. Then time-dependent difference equations or discrete-time nonautonomous dynamical systems are formulated as processes and as skew products. Their attractors, including invariant sets, entire solutions, as well as the concepts of pullback attraction and pullback absorbing sets, are introduced for both formulations. In particular, the limitations of pullback attractors for processes are highlighted. Beyond that, Lyapunov functions for pullback attractors are discussed. Two bifurcation concepts for nonautonomous difference equations are introduced, namely attractor and solution bifurcations. Finally, random difference equations and discrete-time random dynamical systems are investigated using random attractors and invariant measures.

Key words: Nonautonomous dynamical system, process, two-parameter semigroup, discrete skew-product flow, difference equation, pullback attractor, Lyapunov function, nonautonomous bifurcation, random dynamical system

P.E. Kloeden, Institut für Mathematik, Goethe-Universität, Postfach 11 19 32, 60054 Frankfurt a.M., Germany, e-mail: [email protected]

C. Pötzsche, Institut für Mathematik, Universität Klagenfurt, Universitätsstraße 65–67, 9020 Klagenfurt, Austria, e-mail: [email protected]

M. Rasmussen, Department of Mathematics, Imperial College, London SW7 2AZ, United Kingdom, e-mail: [email protected]

Contents

1 Introduction
2 Autonomous Difference Equations
   1 Autonomous semidynamical systems
   2 Lyapunov functions for autonomous attractors
3 Nonautonomous Difference Equations
   3 Processes
   4 Skew-product systems
      4.1 Definition
      4.2 Examples
      4.3 Skew-product systems as autonomous semidynamical systems
4 Attractors of processes
   5 Nonautonomous invariant sets
   6 Forwards and pullback convergence
   7 Forwards and pullback attractors
   8 Existence of pullback attractors
   9 Limitations of pullback attractors
5 Attractors of skew-product systems
   10 Existence of pullback attractors
   11 Comparison of nonautonomous attractors
   12 Limitations of pullback attractors revisited
   13 Local pullback attractors
6 Lyapunov functions
   14 Existence of a pullback absorbing neighbourhood system
   15 Necessary and sufficient conditions
      15.1 Comments on Theorem 15.1
      15.2 Rate of pullback convergence
7 Bifurcations
   16 Hyperbolicity and simple examples
   17 Attractor bifurcation
   18 Solution bifurcation
8 Random dynamical systems
   19 Random difference equations
   20 Random attractors
   21 Random Markov chains
   22 Approximating invariant measures
References

Chapter 1 Introduction

The qualitative theory of dynamical systems has seen an enormous development since the groundbreaking contributions of Poincaré and Lyapunov over a century ago. It meanwhile provides a successful framework to describe and understand a large variety of phenomena in areas as diverse as physics, the life sciences, engineering and sociology. This success benefits, in part, from the fact that in various problems from the above areas the law of evolution is static and does not change with time (or chance), so that a description with autonomous evolutionary equations is appropriate. Nevertheless, many real-world problems involve time-dependent parameters and, furthermore, one wants to understand control, modulation or other effects.
Periodically or almost periodically driven systems are important special cases here, but, in principle, a theory for arbitrary time-dependence is desirable. This led to the observation that many of the meanwhile well-established concepts, methods and results for autonomous systems are not applicable and require an appropriate extension: the theory of nonautonomous dynamical systems.

The goal of these notes is to give a solid foundation for describing the long-term behaviour of nonautonomous evolutionary equations. Here we restrict ourselves to the discrete-time case in the form of nonautonomous difference equations. This has the didactical advantage of featuring many aspects of infinite-dimensional continuous-time problems (namely, the possible nonexistence and nonuniqueness of backward solutions) without an involved theory to guarantee the existence of a semiflow. Moreover, even in low dimensions, discrete dynamics can be quite complex. Beyond that, a time-discrete theory is strongly motivated by applications, e.g., in population biology. In addition, it serves as a basic tool to understand numerical temporal discretizations and is often essential for the analysis of continuous-time problems through concepts like time-1 or Poincaré mappings.

The focus of our presentation is on two formulations of time-discrete nonautonomous dynamical systems, namely processes (two-parameter semigroups) and skew-product systems. For both we construct, discuss and compare the so-called pullback attractor in Chapters 4–6. A pullback attractor serves as the nonautonomous counterpart to the global attractor, i.e., the object capturing the essential dynamics of a system. Furthermore, in Chapter 7 we sketch two approaches to a bifurcation theory for time-dependent problems to illuminate a current field of research.
The final Chapter 8 on random dynamical systems emphasises similarities to the corresponding nonautonomous theory and provides results on random Markov chains and the approximation of invariant measures.

To conclude this introduction, we point out that a significantly more comprehensive approach is given in the forthcoming monograph [32] (see also the lecture notes [52, 48]). In particular, we neglect various further contributions to the discrete-time nonautonomous theory: An appropriate spectral notion for linear difference equations (cf. [55, 6, 43, 54]) takes over the dynamical role played by eigenvalues in the autonomous special case. Gaps in this spectrum make it possible to construct nonautonomous invariant manifolds (so-called invariant fiber bundles, see [5, 50]). As a special case, these include centre fiber bundles and therefore allow one to deduce a time-dependent version of Pliss's reduction principle [41, 49]. The pullback attractors constructed in these notes are, in general, only upper semi-continuous in parameters. Thus, for approximation purposes it might be advantageous to embed them into a more robust dynamical object, namely a discrete counterpart to an inertial manifold [42]. Topological linearization of nonautonomous difference equations has been addressed in [7, 8], while a smooth linearization theory via normal forms was developed in [59].

Acknowledgements This work was partially supported by the DFG grant KL 1203/7-1, the Ministerio de Ciencia e Innovación project MTM2011-22411, the Consejería de Innovación, Ciencia y Empresa (Junta de Andalucía) under the Ayuda 2009/FQM314 and the Proyecto de Excelencia P07-FQM-02468 (Peter E. Kloeden). Martin Rasmussen was supported by an EPSRC Career Acceleration Fellowship. The authors thank Dr. Thomas Lorenz for carefully reading parts of the article.
Chapter 2 Autonomous Difference Equations

A difference equation of the form

x_{n+1} = f(x_n), (1)

where f : R^d → R^d, is called a first-order autonomous difference equation on the state space R^d. There is no loss of generality in the restriction to first-order difference equations (1), since higher-order difference equations can be reformulated as (1) by the use of an appropriate higher-dimensional state space.

Successive iteration of an autonomous difference equation (1) generates the forwards solution mapping π : Z^+ × R^d → R^d defined by

x_n = π(n, x_0) = f^n(x_0) := (f ∘ f ∘ ⋯ ∘ f)(x_0)  (n-fold composition),

which satisfies the initial condition π(0, x_0) = x_0 and the semigroup property

π(n, π(m, x_0)) = f^n(π(m, x_0)) = f^n ∘ f^m(x_0) = f^{n+m}(x_0) = π(n + m, x_0) for all n, m ∈ Z^+, x_0 ∈ R^d. (2)

Here, and later, Z^+ := {0, 1, 2, 3, ...} and Z^- := {..., -3, -2, -1, 0} denote the nonnegative and nonpositive integers, respectively, and a discrete interval is the intersection of a real interval with the set of integers Z.

Property (2) says that the solution mapping π forms a semigroup under composition; it is typically only a semigroup rather than a group since the mapping f need not be invertible. It will be assumed here that the mapping f in the difference equation (1) is at least continuous, from which it follows that the mappings π(n, ·) are continuous for every n ∈ Z^+. The solution mapping π then generates a discrete-time semidynamical system on R^d. More generally, the state space could be a metric space (X, d).

Definition 0.1. A mapping π : Z^+ × X → X satisfying

i) π(0, x_0) = x_0 for all x_0 ∈ X,
ii) π(m + n, x_0) = π(m, π(n, x_0)) for all m, n ∈ Z^+ and x_0 ∈ X,
iii) the mapping x_0 ↦ π(n, x_0) is continuous for each n ∈ Z^+,

is called a (discrete-time) autonomous semidynamical system or a semigroup on the state space X. The semigroup property ii) is illustrated in Figure 1 below.
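To make the semigroup property (2) concrete, here is a small Python sketch (our own illustration, not part of the notes; the sample map f is an arbitrary choice) that builds the solution mapping π by successive iteration and checks π(n + m, x_0) = π(n, π(m, x_0)):

```python
# Illustrative sketch: the forwards solution mapping pi of an autonomous
# difference equation x_{n+1} = f(x_n), and its semigroup property (2).

def make_solution_mapping(f):
    """Return pi(n, x0) = f^n(x0), built by successive iteration of f."""
    def pi(n, x0):
        x = x0
        for _ in range(n):   # n-fold composition of f, cf. (1)
            x = f(x)
        return x
    return pi

f = lambda x: 0.5 * x * (1.0 - x)   # a sample continuous map f : R -> R (our choice)
pi = make_solution_mapping(f)

x0 = 0.3
assert pi(0, x0) == x0                    # initial condition i)
assert pi(5, pi(3, x0)) == pi(8, x0)      # semigroup property (2): same composition
```

Both sides of the last assertion perform the identical sequence of applications of f, so they agree exactly, mirroring the algebraic identity f^n ∘ f^m = f^{n+m}.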
Note that such an autonomous semidynamical system π on X is equivalent to a first-order autonomous difference equation on X with the right-hand side f defined by f(x) := π(1, x) for all x ∈ X. If Z^+ in Definition 0.1 is replaced by Z, then π is called a (discrete-time) autonomous dynamical system or group on the state space X.

Fig. 1 Semigroup property ii) of a discrete-time semidynamical system π : Z^+ × X → X

Autonomous dynamical systems need not be generated by autonomous difference equations as above.

Example 0.1. Consider the space X = {1, ..., r}^Z of bi-infinite sequences x = {k_n}_{n∈Z} with k_n ∈ {1, ..., r} together with the group of left shift operators θ_n := θ^n for n ∈ Z, where the mapping θ : X → X is defined by θ({k_n}_{n∈Z}) = {k_{n+1}}_{n∈Z}. This forms an autonomous dynamical system on X, which is a compact metric space with the metric

d(x, x') = Σ_{n∈Z} (r + 1)^{-|n|} |k_n - k'_n|.

The proximity and convergence of sets is given in terms of the Hausdorff separation dist_X(A, B) of nonempty compact subsets A, B ⊆ X, defined as

dist_X(A, B) := max_{a∈A} dist(a, B) = max_{a∈A} min_{b∈B} d(a, b),

and the Hausdorff metric

H_X(A, B) = max{dist_X(A, B), dist_X(B, A)}

on the space H(X) of nonempty compact subsets of X. In the absence of possible confusion, we simply write dist or H for the Hausdorff separation resp. metric.

1 Autonomous semidynamical systems

The dynamical behaviour of a semidynamical system π on a state space X is characterised by its invariant sets and what happens in neighbourhoods of such sets. A nonempty subset A of X is called invariant under π, or π-invariant, if

π(n, A) = A for all n ∈ Z^+ (3)

or, equivalently, if f(A) = π(1, A) = A.
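The Hausdorff separation and metric introduced above are easy to evaluate for finite sets; the following short helper (our own illustration, for finite subsets of R with the usual distance) computes dist_X and H_X directly from their definitions:

```python
# Hausdorff separation and metric for finite subsets of R (illustrative helper).

def dist_X(A, B):
    """Hausdorff separation: max over a in A of the distance from a to the set B."""
    return max(min(abs(a - b) for b in B) for a in A)

def H_X(A, B):
    """Hausdorff metric: the symmetrised separation."""
    return max(dist_X(A, B), dist_X(B, A))

A = [0.0, 1.0]
B = [0.0, 0.5, 2.0]
assert dist_X(A, B) == 0.5   # the point 1.0 lies 0.5 away from B
assert dist_X(B, A) == 1.0   # the point 2.0 lies 1.0 away from A; note the asymmetry
assert H_X(A, B) == 1.0
assert H_X(A, A) == 0.0
```

The asymmetry dist_X(A, B) ≠ dist_X(B, A) is exactly why the Hausdorff metric takes the maximum of both separations.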
Simple examples are equilibria (steady state solutions) and periodic solutions; in the first case A consists of a single point, which must thus be a fixed point of the mapping f, whereas for a solution with period r it consists of a finite set of r distinct points {p_1, ..., p_r}, which are fixed points of the composite mapping f^r (but not of f^j for j smaller than r). Invariant sets can also be much more complicated, for example fractal sets. Many are the ω-limit sets of some trajectory, i.e., defined by

ω^+(x_0) = {y ∈ X : ∃ n_j → ∞, π(n_j, x_0) → y},

which is nonempty, compact and π-invariant when the forwards trajectory {π(n, x_0) : n ∈ Z^+} is a precompact subset of X and the metric space (X, d) is complete. However, ω^+(x_0) need not be connected.

The asymptotic behaviour of a semidynamical system is characterised by its ω-limit sets, in general, and by its attractors and their associated absorbing sets, in particular. An attractor is a nonempty π-invariant compact set A* that attracts all trajectories starting in some neighbourhood U of A*, that is, with ω^+(x_0) ⊂ A* for all x_0 ∈ U or, equivalently, with

lim_{n→∞} dist(π(n, x_0), A*) = 0 for all x_0 ∈ U.

A* is called a maximal or global attractor when U is the entire state space X. Note that a global attractor, if it exists, must be unique. For later comparison the formal definition follows.

Definition 1.1. A nonempty compact subset A* of X is a global attractor of the semidynamical system π on X if it is π-invariant and attracts bounded sets, i.e.,

lim_{n→∞} dist(π(n, D), A*) = 0 for any bounded subset D ⊂ X. (4)

As a simple example, consider the autonomous difference equation (1) on X = R with the map f(x) := max{0, 4x(1 - x)} for x ∈ R. Then A* = [0, 1] is invariant and f(x_0) ∈ A* for all x_0 ∈ R, so A* is the maximal attractor. The dynamics are very simple outside of the attractor, but chaotic within it.
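This example can be checked numerically; the following sketch (our own illustration) verifies on sample points that a single application of f = max{0, 4x(1 - x)} sends every tested real initial value into A* = [0, 1], and that f maps [0, 1] into itself:

```python
# Numerical check of the example: f(x) = max{0, 4x(1-x)} absorbs R into
# A* = [0, 1] in one step, and [0, 1] is mapped into itself.

def f(x):
    return max(0.0, 4.0 * x * (1.0 - x))

# one step sends arbitrary real initial values into [0, 1]
for x0 in [-10.0, -0.5, 0.0, 0.25, 0.5, 0.9, 1.0, 3.7]:
    assert 0.0 <= f(x0) <= 1.0

# f([0,1]) <= [0,1], checked on a grid; the maximum 4x(1-x) = 1 is at x = 1/2,
# so the image actually fills out [0, 1] again (invariance, not just absorption)
grid = [i / 1000 for i in range(1001)]
assert all(0.0 <= f(x) <= 1.0 for x in grid)
assert f(0.5) == 1.0 and f(0.0) == 0.0
```

Outside [0, 1] the quadratic 4x(1 - x) is negative, so the cut-off with 0 lands every such point on the fixed point 0; inside, the map is the chaotic logistic map.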
The existence and approximate location of a global attractor follows from that of more easily found absorbing sets, which typically have a convenient simpler shape such as a ball or ellipsoid.

Definition 1.2. A nonempty compact subset B of X is called an absorbing set of a semidynamical system π on X if for every bounded subset D of X there exists an N_D ∈ Z^+ such that π(n, D) ⊂ B for all n ≥ N_D in Z^+.

Absorbing sets are often called attracting sets when they are also positively invariant in the sense that π(n, B) ⊆ B holds for all n ∈ Z^+, i.e., if one has the inclusion f(B) = π(1, B) ⊆ B. Attractors differ from attracting sets in that they consist entirely of limit points of the system and are thus strictly invariant in the sense of (3).

Theorem 1.1 (Existence of global attractors). Suppose that a semidynamical system π on X has an absorbing set B. Then π has a unique global attractor A* ⊂ B given by

A* = ⋂_{m≥0} ⋃_{n≥m} π(n, B), (5)

or simply by A* = ⋂_{m≥0} π(m, B) when B is positively invariant.

For a proof we refer to the more general situation of Theorem 8.1. Similar results hold if the absorbing set is assumed to be only closed and bounded and the mapping π to be compact or asymptotically compact.

For later comparison note that, in view of the invariance of A*, the attraction (4) can be written equivalently as the forwards convergence

dist(π(n, D), π(n, A*)) → 0 as n → ∞. (6)

A global attractor is, in fact, uniformly Lyapunov asymptotically stable. The asymptotic stability of attractors, and that of attracting sets in general, can be characterised by Lyapunov functions. Such Lyapunov functions can be used to establish the existence of an absorbing set and hence that of a nearby global attractor in a perturbed system.
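The intersection formula of Theorem 1.1 can be watched at work in a scalar sketch (our own example, not from the notes): for the increasing affine map f(x) = x/2 + 1, the interval B = [0, 4] is a positively invariant absorbing set, and the nested images π(m, B) contract to the global attractor A* = {2}, the unique fixed point:

```python
# Sketch of A* = intersection of pi(m, B) for a positively invariant absorbing
# set B (our scalar example). Since f is increasing, pi(m, B) is the interval
# with endpoints f^m(0) and f^m(4), so it suffices to iterate the endpoints.

def f(x):
    return 0.5 * x + 1.0       # fixed point x* = 2

lo, hi = 0.0, 4.0              # the absorbing set B = [0, 4]
assert f(lo) >= lo and f(hi) <= hi   # positive invariance: f(B) contained in B

for _ in range(60):            # the images pi(m, B) = [f^m(0), f^m(4)] shrink
    lo, hi = f(lo), f(hi)

# the nested intervals contract onto the fixed point, so A* = {2}
assert abs(lo - 2.0) < 1e-9 and abs(hi - 2.0) < 1e-9
```

Here the simplified formula A* = ⋂_{m≥0} π(m, B) applies because B is positively invariant; for a merely absorbing B the union over n ≥ m in (5) is needed.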
2 Lyapunov functions for autonomous attractors

Consider an autonomous semidynamical system π on a compact metric space (X, d) which is generated by an autonomous difference equation

x_{n+1} = f(x_n), (7)

where f : X → X is globally Lipschitz continuous with Lipschitz constant L > 0, i.e., d(f(x), f(y)) ≤ L d(x, y) for all x, y ∈ X.

Definition 2.1. A nonempty compact subset A ⊊ X is called globally uniformly asymptotically stable if it is both

i) Lyapunov stable, i.e., for all ε > 0, there exists a δ = δ(ε) > 0 with

dist(x, A) < δ ⇒ dist(f^n(x), A) < ε for all n ∈ Z^+, (8)

ii) globally uniformly attracting, i.e., for all ε > 0, there exists an integer N = N(ε) > 1 such that

dist(f^n(x), A) < ε for all x ∈ X, n ≥ N. (9)

Note that such a set A is the global attractor for the semidynamical system generated by the autonomous difference equation (7). In particular, it is invariant, i.e., f(A) = A.

Global uniform asymptotic stability is characterized in terms of a Lyapunov function by the following necessary and sufficient conditions. The following theorem is taken from Diamond & Kloeden [15].

Theorem 2.1. Let f : X → X be globally Lipschitz continuous, and let A be a nonempty compact subset of X. Then A is globally uniformly asymptotically stable w.r.t. the dynamical system generated by (7) if and only if there exist

i) a Lyapunov function V : X → R^+,
ii) monotone increasing continuous functions α, β : R^+ → R^+ with α(0) = β(0) = 0 and 0 < α(r) < β(r) for all r > 0, and
iii) constants K > 0, 0 ≤ q < 1,

such that for all x, y ∈ X, it holds that

1) |V(x) - V(y)| ≤ K d(x, y),
2) α(dist(x, A)) ≤ V(x) ≤ β(dist(x, A)) and
3) V(f(x)) ≤ q V(x).

Proof. Sufficiency. Let V be a Lyapunov function as described in the theorem. Choose ε > 0 arbitrarily and define δ := β^{-1}(α(ε)/q), which means that α(ε) = q β(δ).
This implies that

α(dist(f^n(x), A)) ≤ V(f^n(x)) ≤ q^n V(x) ≤ q V(x) ≤ q β(dist(x, A)),

so that

dist(f^n(x), A) ≤ α^{-1}(q β(dist(x, A))) ≤ α^{-1}(α(ε)) ≤ ε for all n ∈ N,

when dist(x, A) < δ. Thus, A is Lyapunov stable.

Now define

N := max{1, 1 + ln(α(ε)/V_0)/ln q},

where V_0 := max_{x∈X} V(x) is finite by continuity of V and compactness of X. For n ≥ N, one has q^n ≤ q^N, since 0 ≤ q < 1. Since, from above,

α(dist(f^n(x), A)) ≤ q^n V(x) ≤ q^n V_0 ≤ q^N V_0 ≤ α(ε) for all n ≥ N,

one has dist(f^n(x), A) < ε for n ≥ N, x ∈ X. This means that A is globally uniformly attracting and hence globally uniformly asymptotically stable.

Necessity. This will just be sketched here; the details can be found in [15]. Let A be globally uniformly asymptotically stable, i.e., for given ε > 0, there exists δ = δ(ε) such that (8) holds, and for given ε > 0, there exists N = N(ε) such that (9) holds. Define G_k : R^+ → R^+ for k ∈ N by

G_k(r) := r - 1/k for r ≥ 1/k, G_k(r) := 0 for 0 ≤ r < 1/k.

Then

|G_k(r) - G_k(s)| ≤ |r - s| for all r, s ≥ 0.

Now choose q so that 0 < q < min{1, L}, where L is the Lipschitz constant of the mapping f, and define g_k := (q/L)^{N(1/k)} for all k ∈ N and

V_k(x) := g_k sup_{n∈Z^+} q^{-n} G_k(dist(f^n(x), A)) for all k ∈ N.

Then

i) V_k(x) = 0 if and only if dist(x, A) < δ(1/k), due to Lyapunov stability.

ii) Since |dist(x, A) - dist(y, A)| ≤ d(x, y) and

d(f^n(x), f^n(y)) ≤ L d(f^{n-1}(x), f^{n-1}(y)) ≤ ⋯ ≤ L^n d(x, y),

it follows that

|V_k(x) - V_k(y)| ≤ g_k sup_{n≥0} q^{-n} |G_k(dist(f^n(x), A)) - G_k(dist(f^n(y), A))|
≤ g_k sup_{0≤n≤N(1/k)} q^{-n} |G_k(dist(f^n(x), A)) - G_k(dist(f^n(y), A))|
≤ g_k sup_{0≤n≤N(1/k)} q^{-n} d(f^n(x), f^n(y))
≤ g_k sup_{0≤n≤N(1/k)} q^{-n} L^n d(x, y) = d(x, y).

iii) From above, it holds that V_k(x) ≤ V_k(y) + d(x, y). For all y ∈ A, one obtains that V_k(y) = 0 and V_k(x) ≤ d(x, y), and since A is compact, the minimum over all y ∈ A is attained and V_k(x) ≤ dist(x, A).
iv) V_k(f(x)) ≤ q V_k(x), since

V_k(f(x)) ≤ g_k sup_{n≥0} q^{-n} G_k(dist(f^n(f(x)), A))
= q g_k sup_{n≥0} q^{-n-1} G_k(dist(f^{n+1}(x), A))
= q g_k sup_{n≥1} q^{-n} G_k(dist(f^n(x), A))
≤ q g_k sup_{n≥0} q^{-n} G_k(dist(f^n(x), A)) = q V_k(x).

Finally, define

V(x) := Σ_{k=1}^∞ 2^{-k} V_k(x).

The main difficulty is to show the existence of the lower bound function α. This is systematically built up via the component functions V_k, which vanish successively on closed 1/k-neighbourhoods of the set A. ⊓⊔

Remarks

For a more comprehensive introduction to discrete dynamical systems and their attractors we refer to, e.g., [40, 60]. In particular, for the case of infinite-dimensional state spaces see [18] and [57, Chapter 2], where also connectedness issues of attractors and compactness properties of the semigroup π are addressed.

Chapter 3 Nonautonomous Difference Equations

Difference equations on R^d of the form

x_{n+1} = f_n(x_n), (∆)

in which the continuous mappings f_n : R^d → R^d on the right-hand side are allowed to vary with the time n, are called nonautonomous difference equations. Such nonautonomous difference equations arise quite naturally in many different ways. The mappings f_n in (∆) may of course vary completely arbitrarily, but often there is some relationship between them or some regularity in the way in which they are given. For example, the mappings may all be the same as in the very special autonomous subcase (1), or they may vary periodically within, or be chosen irregularly from, a finite family {g_1, ..., g_r}, in which case (∆) can be rewritten as

x_{n+1} = g_{k_n}(x_n), (10)

with k_n ∈ {1, ..., r} and f_n = g_{k_n}. As another example, the difference equation (∆) may represent a variable time-step discretization method for a differential equation ẋ = f(x), the simplest of which is the Euler method with a variable time-step h_n > 0,

x_{n+1} = x_n + h_n f(x_n), (11)

in which case f_n(x) = x + h_n f(x).
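The variable time-step Euler scheme (11) is easy to experiment with; the following sketch (our choices: the test ODE ẋ = -x and an ad-hoc step sequence h_n) iterates the nonautonomous difference equation x_{n+1} = f_n(x_n) with f_n(x) = x + h_n f(x):

```python
# Variable time-step Euler scheme (11) read as a nonautonomous difference
# equation x_{n+1} = f_n(x_n), f_n(x) = x + h_n f(x). ODE and steps are ours.

def f(x):
    return -x                  # test ODE x' = -x; the exact solution decays to 0

steps = [0.1, 0.05, 0.2, 0.1, 0.05] * 20   # a prescribed variable step sequence h_n

x, t = 1.0, 0.0
for h in steps:
    x = x + h * f(x)           # one application of f_n
    t += h

# each step multiplies x by (1 - h) with 0 < h < 2, so the iterates contract to 0
assert abs(x) < 1e-3
assert abs(t - sum(steps)) < 1e-9
```

The time-dependence here enters only through the step sizes: the same scheme with constant h_n recovers the autonomous subcase (1).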
More generally, a difference equation may involve a parameter λ ∈ Λ which varies in time, by choice or randomly, giving rise to the nonautonomous difference equation

x_{n+1} = g(x_n, λ_n), (12)

so f_n(x) = g(x, λ_n) here for the prescribed choice of λ_n ∈ Λ.

The nonautonomous difference equation (∆) generates a solution mapping φ : Z²_≥ × R^d → R^d, where

Z²_≥ := {(n, n_0) ∈ Z² : n ≥ n_0},

through iteration, i.e.,

φ(n_0, n_0, x_0) := x_0, φ(n, n_0, x_0) := f_{n-1} ∘ ⋯ ∘ f_{n_0}(x_0)

for all n > n_0, n_0 ∈ Z and each x_0 ∈ R^d. This solution mapping satisfies the two-parameter semigroup property

φ(m, n_0, x_0) = φ(m, n, φ(n, n_0, x_0)) for all (n, n_0) ∈ Z²_≥, (m, n) ∈ Z²_≥ and x_0 ∈ R^d.

In this sense, φ is called the general solution of (∆). In particular, as a composition of continuous functions, the mapping x_0 ↦ φ(n, n_0, x_0) is continuous for (n, n_0) ∈ Z²_≥.

The general nonautonomous case differs crucially from the autonomous one in that the starting time n_0 is just as important as the time n - n_0 that has elapsed since starting, and hence many of the concepts that have been developed and extensively investigated for autonomous dynamical systems in general, and autonomous difference equations in particular, are either too restrictive or no longer valid or meaningful.

3 Processes

Solution mappings of nonautonomous difference equations (∆) are one of the main motivations for the process formulation of an abstract nonautonomous dynamical system on a metric state space (X, d) with time set Z. The following definition originates from Dafermos [14] and Hale [18].

Definition 3.1.
A (discrete-time) process on a state space X is a mapping φ : Z²_≥ × X → X which satisfies the initial value, two-parameter evolution and continuity properties:

i) φ(n_0, n_0, x_0) = x_0 for all n_0 ∈ Z and x_0 ∈ X,
ii) φ(n_2, n_0, x_0) = φ(n_2, n_1, φ(n_1, n_0, x_0)) for all n_0 ≤ n_1 ≤ n_2 in Z and x_0 ∈ X,
iii) the mapping x_0 ↦ φ(n, n_0, x_0) of X into itself is continuous for all n_0 ≤ n in Z.

The evolution property ii) is illustrated in Figure 2. Given a process φ on X, there is an associated nonautonomous difference equation like (∆) on X with mappings defined by f_n(x) := φ(n + 1, n, x) for all x ∈ X and n ∈ Z.

Fig. 2 Property ii) of a discrete-time process φ : Z²_≥ × X → X

A process is often called a two-parameter semigroup on X, in contrast with the one-parameter semigroup of an autonomous semidynamical system, since it depends on both the initial time n_0 and the actual time n rather than just the elapsed time n - n_0. This abstract formalism is a natural and intuitive generalization of autonomous dynamical systems to nonautonomous ones.

4 Skew-product systems

The skew-product formalism of a nonautonomous dynamical system is somewhat less intuitive than the process formalism. It represents the nonautonomous system as an autonomous system on the cartesian product of the original state space and some other space, such as a function or sequence space, on which an autonomous dynamical system, called the driving system, acts. This driving system is the source of the nonautonomy in the dynamics on the original state space.

Let (P, d_P) be a metric space with metric d_P and let θ = {θ_n : n ∈ Z} be a group of continuous mappings from P onto itself.
Essentially, θ is an autonomous dynamical system on P that models the driving mechanism for the change in the mappings f_n on the right-hand side of a nonautonomous difference equation like (∆), which will now be written as

x_{n+1} = f(θ_n(p), x_n) (13)

for n ∈ Z^+, where f : P × R^d → R^d is continuous. The corresponding solution mapping ϕ : Z^+ × P × R^d → R^d is now defined by

ϕ(0, p, x) := x, ϕ(n, p, x) := f(θ_{n-1}(p), ·) ∘ ⋯ ∘ f(p, ·)(x) for all n ∈ N

and p ∈ P, x ∈ R^d. The mapping ϕ satisfies the cocycle property w.r.t. the driving system θ on P, i.e.,

ϕ(0, p, x) := x, ϕ(m + n, p, x) := ϕ(m, θ_n(p), ϕ(n, p, x)) (14)

for all m, n ∈ Z^+, p ∈ P and x ∈ R^d.

4.1 Definition

Consider now a state space X instead of R^d, where (X, d) is a metric space with metric d. The above considerations lead to the following definition of a skew-product system, which is an alternative abstract formulation of a discrete nonautonomous dynamical system on the state space X.

Definition 4.1. A (discrete-time) skew-product system (θ, ϕ) is defined in terms of a cocycle mapping ϕ on a state space X, driven by an autonomous dynamical system θ acting on a base space P. Specifically, the driving system θ on P is a group of homeomorphisms {θ_n : n ∈ Z} under composition on P with the properties

i) θ_0(p) = p for all p ∈ P,
ii) θ_{m+n}(p) = θ_m(θ_n(p)) for all m, n ∈ Z and p ∈ P,
iii) the mapping p ↦ θ_n(p) is continuous for each n ∈ Z,

and the cocycle mapping ϕ : Z^+ × P × X → X satisfies

I) ϕ(0, p, x) = x for all p ∈ P and x ∈ X,
II) ϕ(m + n, p, x) = ϕ(m, θ_n(p), ϕ(n, p, x)) for all m, n ∈ Z^+, p ∈ P, x ∈ X,
III) the mapping (p, x) ↦ ϕ(n, p, x) is continuous for each n ∈ Z^+.

For an illustration we refer to the subsequent Figure 3. A difference equation of the form (13) can be obtained from a skew-product system by defining f(p, x) := ϕ(1, p, x) for all p ∈ P and x ∈ X.
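A minimal computational sketch of Definition 4.1 (our own construction; a Python dict over a finite index range stands in for a bi-infinite parameter sequence, and the right-hand side f is an arbitrary choice) builds the cocycle from (13) by iteration and checks the cocycle property II):

```python
# Sketch of a skew-product system: the driving system shifts a parameter
# sequence, and the cocycle phi is built by iterating (13).

def theta(n, p):
    """Driving system: theta_n shifts the sequence p by n places."""
    return {k: p[k + n] for k in p if k + n in p}

def make_cocycle(f):
    def phi(n, p, x):
        # phi(n, p, x) = f(theta_{n-1} p, .) o ... o f(p, .)(x); at each step
        # the relevant parameter is the entry at position i of the sequence
        for i in range(n):
            x = f(p[i], x)
        return x
    return phi

f = lambda lam, x: lam * x                              # sample right-hand side f(p, x)
p = {n: 0.5 + 0.1 * (n % 3) for n in range(-20, 20)}    # a sample parameter sequence
phi = make_cocycle(f)

# cocycle property (14)/II): phi(m + n, p, x) == phi(m, theta_n(p), phi(n, p, x))
m, n, x = 4, 3, 2.0
assert phi(m + n, p, x) == phi(m, theta(n, p), phi(n, p, x))
assert phi(0, p, x) == x
```

Both sides of the cocycle assertion multiply x by the same parameters λ_0, ..., λ_{m+n-1} in the same order, which is exactly the content of property II).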
A process φ admits a formulation as a skew-product system with P = Z, the time shift θ_n(n_0) := n + n_0 and the cocycle mapping ϕ(n, n_0, x) := φ(n + n_0, n_0, x) for all n ∈ Z^+, x ∈ X.

The real advantage of the somewhat more complicated skew-product system formulation of nonautonomous dynamical systems occurs when P is compact. This never happens for a process reformulated as a skew-product system as above, since the parameter space P is then Z, which is only locally compact and not compact.

Fig. 3 A discrete-time skew-product system (θ, ϕ) over the base space P

4.2 Examples

The examples above can be reformulated as skew-product systems with appropriate choices of the parameter space P and the driving system θ.

Example 4.1. A nonautonomous difference equation (∆) with continuous right-hand sides f_n : R^d → R^d generates a cocycle mapping ϕ over the parameter set P = Z w.r.t. the group of left shift mappings θ_j := θ^j for j ∈ Z, where θ(n) := n + 1 for n ∈ Z. Here ϕ is defined by ϕ(0, n, x) := x and

ϕ(j, n, x) := f_{n+j-1} ∘ ⋯ ∘ f_n(x) for all j ∈ N

and n ∈ Z, x ∈ R^d. The mappings ϕ(j, n, ·) : R^d → R^d are all continuous.

Example 4.2. Let f : R^d → R^d be a continuous mapping used in an autonomous difference equation (1). The solution mapping ϕ defined by ϕ(0, x) := x and

ϕ(j, x) = f^j(x) := f ∘ ⋯ ∘ f(x)  (j-fold composition) for all j ∈ N

and x ∈ R^d generates a semigroup on R^d. It can be considered as a cocycle mapping w.r.t. a singleton parameter set P = {p_0} and the singleton group consisting only of the identity mapping θ := id_P on P. Since the driving system just sits at p_0, the dependence on the parameter in ϕ can be suppressed.
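Example 4.1 can be spelled out in a few lines (an illustrative sketch; the time-dependent right-hand sides f_n are our ad-hoc choice): the cocycle over P = Z is driven by the shift θ(n) = n + 1, and the cocycle property holds along shifted base points:

```python
# Sketch of Example 4.1: the cocycle phi over P = Z with the shift theta.

def f_n(n, x):
    return x + (-1) ** n            # a sample time-dependent right-hand side

def theta(j, n):                    # driving system on P = Z: theta_j(n) = n + j
    return n + j

def phi(j, n, x):
    """phi(j, n, x) = f_{n+j-1} o ... o f_n(x), built by iteration."""
    for k in range(n, n + j):
        x = f_n(k, x)
    return x

# cocycle property: phi(j + i, n, x) == phi(j, theta_i(n), phi(i, n, x))
j, i, n, x = 4, 3, -2, 1.5
assert phi(j + i, n, x) == phi(j, theta(i, n), phi(i, n, x))
assert phi(0, n, x) == x
```

Note that the base point here is simply the current time n, which is why this reformulation never has a compact parameter space, as remarked above.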
While the integers Z appear to be the natural choice for the parameter set in Example 4.1, and the choice is trivial in the autonomous case of Example 4.2, in the remaining examples the use of sequence spaces is more advantageous because such spaces are often compact.

Example 4.3. The nonautonomous difference equation (10) with continuous mappings g_k : R^d → R^d for k ∈ {1, ..., r} generates a cocycle mapping over the parameter set P = {1, ..., r}^Z of bi-infinite sequences p = {k_n}_{n∈Z} with k_n ∈ {1, ..., r} w.r.t. the group of left shift operators θ_n := θ^n for n ∈ Z, where θ({k_n}_{n∈Z}) = {k_{n+1}}_{n∈Z}. The mapping ϕ defined by ϕ(0, p, x) := x and

ϕ(j, p, x) := g_{k_{j-1}} ∘ ⋯ ∘ g_{k_0}(x) for all j ∈ N

and x ∈ R^d, where p = {k_n}_{n∈Z}, is a cocycle mapping. Note that the parameter space {1, ..., r}^Z here is a compact metric space with the metric

d(p, p') = Σ_{n∈Z} (r + 1)^{-|n|} |k_n - k'_n|.

In addition, the mappings θ_n : P → P and ϕ(j, ·, ·) : P × R^d → R^d are all continuous.

We omit a reformulation of the numerical scheme (11), as it is similar to the next example, but with a bi-infinite sequence p = {h_n}_{n∈Z} of stepsizes satisfying a constraint such as δ/2 ≤ h_n ≤ δ for n ∈ Z with appropriate δ > 0.

Example 4.4. As an example of a parametrically perturbed difference equation (12), consider the mapping g : R × [1/2, 1] → R defined by

g(x, λ) = (|x| + λ²)/(1 + λ),

which is continuous in x ∈ R and λ ∈ [1/2, 1]. Let P = [1/2, 1]^Z be the space of bi-infinite sequences p = {λ_n}_{n∈Z} taking values in [1/2, 1], which is a compact metric space with the metric

d(p, p') = Σ_{n∈Z} 2^{-|n|} |λ_n - λ'_n|,

and let {θ_n : n ∈ Z} be the group generated by the left shift operator θ on this sequence space (analogously to Example 4.3).
The mapping ϕ defined by ϕ(0, p, x) := x and

ϕ(j, p, x) := g(·, λ_{j-1}) ∘ ⋯ ∘ g(·, λ_0)(x) for all j ∈ N

and x ∈ R, where p = {λ_n}_{n∈Z}, is a cocycle mapping on R with parameter space [1/2, 1]^Z and the above shift operators θ_n. The mappings θ_n : P → P and ϕ(j, ·, ·) : P × R → R are all continuous here.

4.3 Skew-product systems as autonomous semidynamical systems

A skew-product system (θ, ϕ) can be reformulated as an autonomous semidynamical system on the extended state space 𝕏 := P × X. Define a mapping π : Z^+ × 𝕏 → 𝕏 by

π(n, (p, x_0)) := (θ_n(p), ϕ(n, p, x_0)) for all n ∈ Z^+, (p, x_0) ∈ 𝕏.

Note that the variable n in π(n, (p, x_0)) is the time that has elapsed since starting at the state (p, x_0).

Theorem 4.1. π is an autonomous semidynamical system on 𝕏.

Proof. It is obvious that π(n, ·) is continuous in its variables (p, x_0) for every n ∈ Z^+ and satisfies the initial condition

π(0, (p, x_0)) = (p, ϕ(0, p, x_0)) = (p, x_0) for all p ∈ P, x_0 ∈ X.

It also satisfies the one-parameter semigroup property

π(m + n, (p, x_0)) = π(m, π(n, (p, x_0))) for all m, n ∈ Z^+, p ∈ P, x_0 ∈ X,

since, by the group property of the driving system and the cocycle property of the skew-product,

π(m + n, (p, x_0)) = (θ_{m+n}(p), ϕ(m + n, p, x_0))
= (θ_m(θ_n(p)), ϕ(m, θ_n(p), ϕ(n, p, x_0)))
= π(m, (θ_n(p), ϕ(n, p, x_0)))
= π(m, π(n, (p, x_0))). ⊓⊔

As seen in Example 4.1, a process φ on the state space X is also a skew-product system on X with the shift operator θ on P := Z and thus generates an autonomous semidynamical system π on the extended state space 𝕏 := Z × X. This semidynamical system has some unusual properties. In particular, π has no nonempty ω-limit sets and, indeed, no compact subset of 𝕏 can be π-invariant. This is a direct consequence of the fact that the initial time is a component of the extended state space.
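Theorem 4.1 in miniature: the sketch below (our own construction; a finite tuple stands in for a bi-infinite symbol sequence, so θ is only a one-sided truncation shift here, and the maps g_0, g_1 are arbitrary choices) forms π(n, (p, x)) = (θ_n(p), ϕ(n, p, x)) on the extended state space and checks the one-parameter semigroup property:

```python
# The skew-product of a two-symbol driving sequence and maps g_0, g_1,
# reformulated as an autonomous semidynamical system pi on P x X.

g = [lambda x: 0.5 * x, lambda x: 0.5 * x + 1.0]   # a finite family {g_0, g_1}

def theta(n, p):                  # one-sided stand-in for the shift: drop n symbols
    return p[n:]

def phi(n, p, x):                 # cocycle: apply g_{p_0}, ..., g_{p_{n-1}} in turn
    for k in range(n):
        x = g[p[k]](x)
    return x

def pi(n, state):                 # extended system: pi(n, (p, x)) = (theta_n p, phi(n, p, x))
    p, x = state
    return theta(n, p), phi(n, p, x)

p = (0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0)
state = (p, 2.0)

# one-parameter semigroup property: pi(m + n, s) == pi(m, pi(n, s))
assert pi(2 + 3, state) == pi(2, pi(3, state))
assert pi(0, state) == state
```

The elapsed time n advances both components at once: the base point moves under the driving system while the fibre variable moves under the cocycle, which is exactly the mechanism used in the proof of Theorem 4.1.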
Remarks An early reference to the description of nonautonomous discrete dynamics via processes or skew-product flows is given in [40, pp. 45–56, Chapt. 4].

Chapter 4
Nonautonomous invariant sets and attractors of processes

Invariant sets and attractors are important regions of the state space that characterize the long-term behaviour of a dynamical system. Let φ : Z2≥ × X → X be a process on a metric state space (X, d). It generates a solution xn = φ(n, n0, x0) of (∆) that depends on the starting time n0 as well as on the current time n, and not just on the elapsed time n − n0 as in an autonomous system. This has some profound consequences for the definitions and the interpretation of dynamical behaviour. As pointed out above, many concepts and results from the autonomous case are no longer valid or are too restrictive and exclude many interesting types of possible behaviour.

For example, it is too great a restriction of generality to consider a single subset A of X to be invariant under φ in the sense that φ(n, n0, A) = A for all n ≥ n0, n0 ∈ Z, which is equivalent to fn(A) = A for every n ∈ Z, where the fn are the mappings in the corresponding nonautonomous difference equation (∆). Then, in general, neither the trajectory {χ∗n : n ∈ Z} of a solution χ∗ that exists on all of Z nor a nonautonomous ω-limit set defined by

ω+(n0, x0) = {y ∈ X : there exist nj → ∞ with φ(nj, n0, x0) → y}

will be invariant in such a sense. Moreover, such nonautonomous ω-limit sets exist in the infinite future in absolute time rather than in current time like autonomous ω-limit sets, so it is not clear how useful or even dynamically meaningful they are. Hence, the appropriate formulation of the asymptotic behaviour of a nonautonomous dynamical system needs careful consideration. Lyapunov asymptotic stability of a solution of a nonautonomous system provides a clue. This requires the definition of an entire solution.
Definition 4.2. An entire solution of a process φ on X is a sequence {χk : k ∈ Z} in X such that φ(n, n0, χn0) = χn for all n ≥ n0 and all n0 ∈ Z or, equivalently, χn+1 = fn(χn) for all n ∈ Z in terms of the nonautonomous difference equation (∆) corresponding to the process φ.

Definition 4.3. An entire solution χ∗ of a process φ on X is said to be (globally) Lyapunov asymptotically stable if it is Lyapunov stable, i.e., for every ε > 0 and n0 ∈ Z there exists a δ = δ(ε, n0) > 0 such that

d(φ(n, n0, x0), χ∗n) < ε for all n ≥ n0 whenever d(x0, χ∗n0) < δ,

and attracting in the sense that

d(φ(n, n0, x0), χ∗n) → 0 as n → ∞ (15)

for all x0 ∈ X and n0 ∈ Z.

Note, in particular, that the limiting “target” χ∗n exists for all time and is, in general, also changing in time as the limit is taken.

5 Nonautonomous invariant sets

Let χ∗ be an entire solution of a process φ on a metric space (X, d) and consider the family A = {An : n ∈ Z} of singleton subsets An := {χ∗n} of X. Then, by the definition of an entire solution, it follows that

φ(n, n0, An0) = An for all n ≥ n0, n0 ∈ Z .

This suggests the following generalization of invariance for nonautonomous dynamical systems.

Definition 5.1. A family A = {An : n ∈ Z} of nonempty subsets of X is invariant under a process φ on X, or φ-invariant, if

φ(n, n0, An0) = An

for all n ≥ n0 and all n0 ∈ Z or, equivalently, if fn(An) = An+1 for all n ∈ Z in terms of the corresponding nonautonomous difference equation (∆).

A φ-invariant family consists of entire solutions. This is essentially due to the fact that the process maps the component subsets onto each other. The backward solutions, however, need not be uniquely determined, since the mappings fn are usually not assumed to be one-to-one.

Proposition 5.1 (Characterization of invariant sets). A family A = {An : n ∈ Z} is φ-invariant if and only if for every pair n0 ∈ Z and x0 ∈ An0 there exists an entire solution χ such that χn0 = x0 and χn ∈ An for all n ∈ Z. Moreover, the entire solution χ is uniquely determined provided the mapping fn(·) := φ(n + 1, n, ·) : X → X is one-to-one for every n ∈ Z.

Proof. (⇒) Let A be φ-invariant and pick an arbitrary x0 ∈ An0. For n ≥ n0 define the sequence χn := φ(n, n0, x0). Then the φ-invariance of A yields χn ∈ An. On the other hand, An0 = φ(n0, n, An) for n ≤ n0, so there exists a sequence xn ∈ An with x0 = φ(n0, n, xn) and xn = φ(n, n − 1, xn−1) for all n < n0. Hence, defining χn := xn for n < n0 makes χ an entire solution with the desired properties. If the mappings fn are all one-to-one, then the sequence {xn} is uniquely determined.

(⇐) Suppose that for arbitrary n0 ∈ Z and x0 ∈ An0 there is an entire solution χ with χn0 = x0 and χn ∈ An for all n ∈ Z. Hence φ(n, n0, x0) = φ(n, n0, χn0) = χn ∈ An for n ≥ n0. From this it follows that fn(An) ⊆ An+1. The remaining inclusion fn(An) ⊇ An+1 follows from the fact that x0 = φ(n0, n, χn) ∈ φ(n0, n, An) for n ≤ n0. ⊓⊔

6 Forwards and pullback convergence

The convergence

d(φ(n, n0, x0), χ∗n) → 0 as n → ∞ (n0 fixed)

in the attraction property (15) in the definition of a Lyapunov asymptotically stable entire solution χ∗ of a process φ will be called forwards convergence (cf. Figure 4) to distinguish it from another kind of convergence that is useful for nonautonomous systems.

[Fig. 4 Forwards convergence n → ∞]

Forwards convergence does not, however, provide convergence to a particular point χ∗n∗ for a fixed n∗ ∈ Z, which is important in many practical situations because the actual solution χ∗ may not be known and thus needs to be determined. To obtain such convergence one has to start progressively earlier.
This leads to the concept of pullback convergence, defined by

d(φ(n, n0, x0), χ∗n) → 0 as n0 → −∞ (n fixed)

and illustrated in Figure 5.

[Fig. 5 Pullback convergence n0 → −∞]

In terms of the elapsed time j, forwards convergence can be rewritten as

d(φ(n0 + j, n0, x0), χ∗_{n0+j}) → 0 as j → ∞ (16)

for all x0 ∈ X and n0 ∈ Z, while pullback convergence becomes

d(φ(n, n − j, x0), χ∗n) → 0 as j → ∞

for all x0 ∈ X and n ∈ Z.

Example 6.1. The nonautonomous difference equation xn+1 = ½ xn + gn on R has the solution mapping

φ(n0 + j, n0, x0) = 2^{−j} x0 + Σ_{k=0}^{j−1} 2^{−(j−1−k)} g_{n0+k},

for which pullback convergence gives

φ(n0, n0 − j, x0) = 2^{−j} x0 + Σ_{k=0}^{j−1} 2^{−k} g_{n0−1−k} → Σ_{k=0}^{∞} 2^{−k} g_{n0−1−k} as j → ∞,

provided the infinite series here converges. The limiting solution χ∗ is given by χ∗_{n0} := Σ_{k=0}^{∞} 2^{−k} g_{n0−1−k} for each n0 ∈ Z. It is an entire solution of the nonautonomous difference equation.

Pullback convergence makes use of information about the nonautonomous dynamical system from the past, while forwards convergence uses information about the future. In autonomous dynamical systems, forwards and pullback convergence are equivalent, since the elapsed time n − n0 → ∞ both if n → ∞ with n0 fixed and if n0 → −∞ with n fixed. In nonautonomous dynamical systems, pullback convergence and forwards convergence do not necessarily imply each other.

Example 6.2. Consider the process φ on R generated by fn := g1 for n ≤ 0 and fn := g2 for n ≥ 1, where the mappings g1, g2 : R → R are given by g1(x) := ½ x and g2(x) := max{0, 4x(1 − x)} for all x ∈ R. Then φ is pullback convergent to the entire solution χ∗ defined by χ∗n ≡ 0 for n ∈ Z, but is not forwards convergent to χ∗. In particular, χ∗ is not Lyapunov stable.

7 Forwards and pullback attractors

Forwards and pullback convergence can be used to define two distinct types of nonautonomous attractors for a process φ on a state space X.
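Pullback convergence in Example 6.1 can be checked numerically. The sketch below uses the illustrative choice gn = (−1)^n (my choice, not from the text); the candidate pullback limit is the entire solution χ∗ with χ∗n = Σ_{k≥0} 2^{−k} g_{n−1−k}, which one verifies directly satisfies χn+1 = ½χn + gn; for gn = (−1)^n this gives χ∗0 = −2/3.

```python
# Pullback convergence for x_{n+1} = x_n/2 + g_n (Example 6.1), sketched with
# the illustrative bounded coefficient sequence g_n = (-1)^n.
def g(n):
    return (-1) ** n

def phi(n, n0, x0):
    """Solution operator of x_{k+1} = x_k/2 + g_k from time n0 up to n >= n0."""
    x = x0
    for k in range(n0, n):
        x = x / 2 + g(k)
    return x

n, x0 = 0, 17.0                                          # fixed final time, arbitrary start
chi = sum(2 ** (-k) * g(n - 1 - k) for k in range(60))   # entire solution chi*_n (= -2/3 here)
for j in (5, 10, 20, 40):                                # start earlier and earlier
    print(j, abs(phi(n, n - j, x0) - chi))               # distances shrink as j grows
```

Note that the final time n stays fixed while the starting time n − j recedes into the past; this is precisely the pullback limit, not a forwards limit.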
Instead of a family A = {An : n ∈ Z} of singleton subsets An := {χ∗n} for an entire solution χ∗ of the process, consider a φ-invariant family A = {An : n ∈ Z} of nonempty subsets An of X. In this context, forwards convergence generalizes to

dist(φ(n0 + j, n0, x0), A_{n0+j}) → 0 as j → ∞ (n0 fixed) (17)

and pullback convergence to

dist(φ(n, n − j, x0), An) → 0 as j → ∞ (n fixed). (18)

More generally, A is said to forwards (resp. pullback) attract bounded subsets of X if x0 can be replaced by an arbitrary bounded subset D of X in (17) (resp. (18)).

Definition 7.1. A φ-invariant family A = {An : n ∈ Z} of nonempty compact subsets of X is called a forwards attractor if it forwards attracts bounded subsets of X and a pullback attractor if it pullback attracts bounded subsets of X.

As φ-invariant families of nonempty compact subsets of X, by Proposition 5.1, both pullback and forwards attractors consist of entire solutions. In fact, when the component subsets of a pullback attractor are uniformly bounded, i.e., if there exists a bounded subset B of X such that An ⊂ B for all n ∈ Z, then pullback attractors are characterized by the bounded entire solutions of the process.

Proposition 7.1 (Dynamical characterization of pullback attractors). A uniformly bounded pullback attractor A = {An : n ∈ Z} admits the dynamical characterization: for each n0 ∈ Z,

x0 ∈ An0 ⇔ there exists a bounded entire solution χ with χn0 = x0.

Such a pullback attractor is therefore uniquely determined.

Proof. (⇒) Pick n0 ∈ Z and x0 ∈ An0 arbitrarily. Then, due to the φ-invariance of the pullback attractor A, by Proposition 5.1 there exists an entire solution χ with χn0 = x0 and χn ∈ An for each n ∈ Z. Moreover, χ is bounded, since the component sets of the pullback attractor are uniformly bounded.

(⇐) If there exists a bounded entire solution χ of the process φ, then the set of points Dχ := {χn : n ∈ Z} is bounded in X.
Since A pullback attracts bounded subsets of X, for each n ∈ Z,

0 ≤ dist(χn, An) ≤ lim_{j→∞} dist(φ(n, n − j, Dχ), An) = 0,

so χn ∈ An. ⊓⊔

8 Existence of pullback attractors

Absorbing sets can also be defined for pullback attraction. Wider applicability is attained if they are also allowed to depend on time.

Definition 8.1. A family B = {Bn : n ∈ Z} of nonempty compact subsets of X is called a pullback absorbing family for a process φ on X if for each n ∈ Z and every bounded subset D of X there exists an N_{n,D} ∈ Z+ such that

φ(n, n − j, D) ⊆ Bn for all j ≥ N_{n,D}.

The existence of a pullback attractor follows from that of a pullback absorbing family in the following generalization of Theorem 1.1 on autonomous global attractors. The proof is simpler if the pullback absorbing family is assumed to be φ-positive invariant.

Definition 8.2. A family B = {Bn : n ∈ Z} of nonempty compact subsets of X is said to be φ-positive invariant if

φ(n, n0, Bn0) ⊆ Bn for all n ≥ n0.

Theorem 8.1 (Existence of pullback attractors). Suppose that a process φ on a complete metric space (X, d) has a φ-positive invariant pullback absorbing family B = {Bn : n ∈ Z}. Then there exists a global pullback attractor A = {An : n ∈ Z} with component sets determined by

An = ∩_{j≥0} φ(n, n − j, B_{n−j}) for all n ∈ Z. (19)

Moreover, if A is uniformly bounded, then it is unique.

Proof. Let B be a pullback absorbing family and let An be defined as in (19). Clearly, An ⊂ Bn for each n ∈ Z.

(i) First, it will be shown for any n ∈ Z that

lim_{j→∞} dist(φ(n, n − j, B_{n−j}), An) = 0. (20)

Assume to the contrary that there exist an ε0 > 0 and sequences x_{jk} ∈ φ(n, n − jk, B_{n−jk}) ⊂ Bn and jk → ∞ such that dist(x_{jk}, An) > ε0 for all k ∈ N. The set {x_{jk} : k ∈ N} ⊂ Bn is relatively compact, so there is a point x0 ∈ Bn and an index subsequence k′ → ∞ such that x_{jk′} → x0.
Now, by the cocycle property and the φ-positive invariance of B,

x_{jk′} ∈ φ(n, n − jk′, B_{n−jk′}) ⊂ φ(n, n − k, B_{n−k}) for all jk′ ≥ k and each k ≥ 0.

This implies that

x0 ∈ φ(n, n − k, B_{n−k}) for all k ≥ 0.

Hence x0 ∈ An, which is a contradiction. This proves the assertion (20).

(ii) By (20), for every ε > 0 and n ∈ Z there exists an N = N_{ε,n} ≥ 0 such that

dist(φ(n, n − N, B_{n−N}), An) < ε.

Let D be a bounded subset of X. The fact that B is a pullback absorbing family implies that φ(n − N, n − N − j, D) ⊂ B_{n−N} for all sufficiently large j. Hence, by the cocycle property,

φ(n, n − N − j, D) = φ(n, n − N, φ(n − N, n − N − j, D)) ⊂ φ(n, n − N, B_{n−N}),

so dist(φ(n, n − N − j, D), An) < ε for all sufficiently large j, i.e., A pullback attracts bounded subsets of X.

(iii) The φ-invariance of the family A will now be shown. By (19), the set Fm(n) := φ(n, n − m, B_{n−m}) is contained in Bn for every m ≥ 0 and, by definition, A_{n−j} = ∩_{m≥0} Fm(n − j). First, it will be shown that

φ(n, n − j, ∩_{m≥0} Fm(n − j)) = ∩_{m≥0} φ(n, n − j, Fm(n − j)). (21)

One sees directly that “⊂” holds. To prove “⊃”, let x be contained in the set on the right-hand side. Then for any m ≥ 0 there exists an x^m ∈ Fm(n − j) ⊂ B_{n−j} such that x = φ(n, n − j, x^m). Since the sets Fm(n − j) are compact and monotonically decreasing with increasing m, the set {x^m : m ≥ 0} has a limit point x̂ ∈ ∩_{m≥0} Fm(n − j). By the continuity of φ(n, n − j, ·), it follows that x = φ(n, n − j, x̂). Thus,

x ∈ φ(n, n − j, ∩_{m≥0} Fm(n − j)) = φ(n, n − j, A_{n−j}).

Hence, equation (21), the compactness of Fm(n − j) and the continuity of φ(n, n − j, ·) imply that

φ(n, n − j, A_{n−j}) = ∩_{m≥0} φ(n, n − j, Fm(n − j))
= ∩_{m≥0} φ(n, n − j, φ(n − j, n − j − m, B_{n−j−m}))
= ∩_{m≥0} φ(n, n − j − m, B_{n−j−m})
= ∩_{m≥j} φ(n, n − m, B_{n−m}) ⊃ An,

which means that

An ⊂ φ(n, n − j, A_{n−j}) for all j ∈ Z+, n ∈ Z. (22)

Replacing n by n − m in (22) and using the cocycle property gives

φ(n, n − m, A_{n−m}) ⊂ φ(n, n − m, φ(n − m, n − m − j, A_{n−m−j}))
= φ(n, n − j, φ(n − j, n − m − j, A_{n−m−j}))
⊂ φ(n, n − j, φ(n − j, n − m − j, B_{n−m−j}))
⊂ φ(n, n − j, B_{n−j}) ⊂ Uε(An)

for every ε-neighborhood Uε(An) of An, where ε > 0, provided that j = J(ε) is sufficiently large. Hence,

φ(n, n − m, A_{n−m}) ⊂ An for all m ∈ Z+, n ∈ Z.

With m replaced by j, this together with (22) yields the φ-invariance of the family {An : n ∈ Z}.

(iv) It remains to observe that if the sets in A = {An : n ∈ Z} are uniformly bounded, then the pullback attractor A is unique by Proposition 7.1. ⊓⊔

Remark 8.1. There is no counterpart of Theorem 8.1 for nonautonomous forwards attractors.

If the pullback absorbing family B is not φ-positive invariant, then the proof is somewhat more complicated and the component subsets of the pullback attractor A are given by

An = ∩_{k≥0} ∪_{j≥k} φ(n, n − j, B_{n−j}).

However, the assumption in Theorem 8.1 of a φ-positive invariant pullback absorbing family is not a serious restriction.

Proposition 8.1. If B = {Bn : n ∈ Z} is a pullback absorbing family for a process φ satisfying Bn ⊂ C for n ∈ Z, where C is a bounded subset of X, then there exists a φ-positive invariant pullback absorbing family B̂ = {B̂n : n ∈ Z} containing B = {Bn : n ∈ Z} component set-wise.

Proof. For each n ∈ Z define

B̂n := ∪_{j≥0} φ(n, n − j, B_{n−j}).

Obviously, Bn ⊂ B̂n for every n ∈ Z. To show positive invariance, the cocycle property is used in what follows:

φ(n + 1, n, B̂n) = ∪_{j≥0} φ(n + 1, n, φ(n, n − j, B_{n−j}))
= ∪_{j≥0} φ(n + 1, n − j, B_{n−j})
= ∪_{i≥1} φ(n + 1, n + 1 − i, B_{n+1−i})
⊆ ∪_{i≥0} φ(n + 1, n + 1 − i, B_{n+1−i}) = B̂n+1,

so φ(n + 1, n, B̂n) ⊆ B̂n+1. By this and the cocycle property again,

φ(n + 2, n, B̂n) = φ(n + 2, n + 1, φ(n + 1, n, B̂n)) ⊆ φ(n + 2, n + 1, B̂n+1) ⊆ B̂n+2.

The general positive invariance assertion then follows by induction.
Now, by the continuity of φ(n, n − j, ·) and the compactness of B_{n−j}, the set φ(n, n − j, B_{n−j}) is compact for each j ≥ 0 and n ∈ Z. Moreover, B_{n−j} ⊂ C for each j ≥ 0 and n ∈ Z, so by the pullback absorbing property of B there exists an N = N_{n,C} ∈ N such that

φ(n, n − j, B_{n−j}) ⊂ φ(n, n − j, C) ⊂ Bn

for all j ≥ N. Hence

B̂n = ∪_{j≥0} φ(n, n − j, B_{n−j}) ⊆ Bn ∪ ∪_{0≤j<N} φ(n, n − j, B_{n−j}) = ∪_{0≤j<N} φ(n, n − j, B_{n−j}),

which is compact as a finite union of compact sets, so B̂n is compact.

To see that B̂ so constructed is pullback absorbing, let D be a bounded subset of X and fix n ∈ Z. Since B is pullback absorbing, there exists an N_{n,D} ∈ N such that φ(n, n − j, D) ⊂ Bn for all j ≥ N_{n,D}. But Bn ⊂ B̂n, so

φ(n, n − j, D) ⊂ B̂n for all j ≥ N_{n,D}.

Hence B̂ is pullback absorbing, as required. ⊓⊔

9 Limitations of pullback attractors

Pullback attractors are based on the behaviour of a nonautonomous system in the past and may not capture the complete dynamics of a system when it is formulated in terms of a process. This was already indicated by Example 6.2 and will be illustrated here through some simpler examples.

First, consider the autonomous scalar difference equation

xn+1 = λ xn / (1 + |xn|) (23)

depending on a real parameter λ > 0. Its zero solution x∗ = 0 exhibits a pitchfork bifurcation at λ = 1. Its global dynamical behaviour can be summarized as follows (see Figure 6):

• If λ ≤ 1, then x∗ = 0 is the only constant solution and is globally asymptotically stable. Thus {0} is the global attractor of the autonomous dynamical system generated by the difference equation (23).
• If λ > 1, then there exist two additional nontrivial constant solutions given by x± := ±(λ − 1). The zero solution x∗ = 0 is an unstable steady state solution, and the symmetric interval A = [x−, x+] is the global attractor.

These constant solutions are the fixed points of the mapping f(x) = λx/(1 + |x|).

[Fig. 6 Trajectories of the autonomous difference equation (23) with λ = 0.5 (left) and λ = 1.5 (right)]

Piecewise autonomous difference equation: Consider now the piecewise autonomous equation

xn+1 = λn xn / (1 + |xn|), λn := λ for n ≥ 0 and λn := λ^{−1} for n < 0, (24)

for some λ > 1, which corresponds to a switch between the two autonomous problems (23) at n = 0. The zero solution of the resulting nonautonomous system is the only bounded entire solution, so by Proposition 7.1 the pullback attractor A has component sets An ≡ {0} for all n ∈ Z.

Note that the zero solution seems to be “asymptotically stable” for n < 0 and then “unstable” for n ≥ 0. Moreover, the interval [x−, x+] is like a global attractor for the whole equation on Z, but it is not really one, since it is not invariant or minimal for n < 0. The nonautonomous difference equation (24) is asymptotically autonomous in both directions, but the pullback attractor does not reflect the full limiting dynamics (see Figure 7 (left)), in particular in the forwards time direction.

Fully nonautonomous equation: The dynamics is similar if the parameters λn do not switch from one constant to another as above, but instead increase monotonically, e.g., λn = 1 + 0.9n/(1 + |n|), although the limiting dynamics is then not so obvious from the equation; see Figure 7 (right).

Let {λn}n∈Z be a monotonically increasing sequence with lim_{n→±∞} λn = λ̄^{±1} for some λ̄ > 1. The nonautonomous problem

xn+1 = fn(xn) := λn xn / (1 + |xn|) (25)

is asymptotically autonomous in both directions, with the limiting autonomous systems given above.

[Fig. 7 Trajectories of the piecewise autonomous equation (24) with λ = 1.5 (left) and the asymptotically autonomous equation (25) with λk = 1 + 0.9k/(1 + |k|) (right)]

Its pullback attractor A has component sets An ≡ {0} for all n ∈ Z corresponding to the zero entire solution, which is the only bounded entire solution.
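The discrepancy between pullback and forwards behaviour for the switched equation (24) is easy to observe in a simulation. The sketch below (with the illustrative choice λ = 1.5) shows pullback convergence to 0 at the fixed time n = 0, while the same nonzero solutions later move forwards toward λ − 1 = 0.5, which is not an entire solution of (24).

```python
# Simulation sketch of the piecewise autonomous equation (24) with lam = 1.5
# (an illustrative parameter value).
lam = 1.5

def f(n, x):
    ln = lam if n >= 0 else 1.0 / lam   # the switch at n = 0 in (24)
    return ln * x / (1.0 + abs(x))

def phi(n, n0, x0):
    """Process phi(n, n0, x0) generated by iterating f from time n0 to n."""
    x = x0
    for k in range(n0, n):
        x = f(k, x)
    return x

# pullback: at fixed time n = 0, starting ever earlier drives the state to 0
for j in (5, 10, 20):
    print("pullback", j, phi(0, -j, 1.0))
# forwards: nonzero solutions approach the limit point lam - 1 = 0.5,
# which is not an entire solution of (24)
print("forwards", phi(50, 0, 1.0))
```

This matches the discussion above: the pullback attractor {0} is found by the pullback limit, but it says nothing about the forwards migration toward ±(λ − 1).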
As above, the zero solution x∗ = 0 seems to be “asymptotically stable” for n < 0 and then “unstable” for n ≥ 0. However, the forward limit points for nonzero solutions are ±(λ̄ − 1), neither of which is a solution at all. In particular, they are not entire solutions, so they cannot belong to an attractor, forwards or pullback, since these consist of entire solutions. See Figure 7 (right).

Remark 9.1. Pullback attraction alone does not fully characterize the bounded limiting behaviour of a nonautonomous system formulated as a process. Something additional, like nonautonomous limit sets [32, 48], limiting equations [22], asymptotically invariant sets [23], eventual asymptotic stability [24], or a mixture of these ideas is needed to complete the picture. However, this varies from example to example and is somewhat ad hoc. In contrast, this information is built into the skew-product formulation of a nonautonomous dynamical system, especially when the state space P of the driving system is compact. Essentially, the skew-product system already includes the limiting dynamics, and no further ad hoc methods are needed to determine it.

Remarks Pullback attractors for nonautonomous difference equations were introduced in [27, 28], and a comparison between different attractor types is given in [12] (see also Section 11). Without the assumption of uniform boundedness, pullback attractors of processes need not be unique (see [48, p. 18, Example 1.3.5]). In applications, absorbing sets are frequently not compact, and one has to assume ambient compactness properties of a process in order to establish the existence of a pullback attractor (see [48, pp. 12ff]).

Chapter 5
Nonautonomous invariant sets and attractors: Skew-product systems

10 Existence of pullback attractors

Consider a discrete-time skew-product system (θ, ϕ) on P × X, where (P, dP) and (X, d) are metric spaces.
There are counterparts for skew-product systems of the concepts of invariance, forwards and pullback convergence, and forwards and pullback attractors considered in the previous chapter for discrete-time processes.

Definition 10.1. A family A = {Ap : p ∈ P} of nonempty subsets of X is called ϕ-invariant for a skew-product system (θ, ϕ) on P × X if

ϕ(n, p, Ap) = A_{θn(p)} for all n ∈ Z+, p ∈ P.

It is called ϕ-positively invariant if

ϕ(n, p, Ap) ⊆ A_{θn(p)} for all n ∈ Z+, p ∈ P.

Definition 10.2. A family A = {Ap : p ∈ P} of nonempty compact subsets of X is called a pullback attractor of a skew-product system (θ, ϕ) on P × X if it is ϕ-invariant and pullback attracts bounded sets, i.e.,

dist(ϕ(j, θ−j(p), D), Ap) → 0 as j → ∞ (26)

for all p ∈ P and all bounded subsets D of X. It is called a forwards attractor if it is ϕ-invariant and forwards attracts bounded sets, i.e.,

dist(ϕ(j, p, D), A_{θj(p)}) → 0 as j → ∞. (27)

As with processes, the existence of a pullback attractor for skew-product systems is ensured by that of a pullback absorbing family.

Definition 10.3. A family B = {Bp : p ∈ P} of nonempty compact subsets of X is called a pullback absorbing family for a skew-product system (θ, ϕ) on P × X if for each p ∈ P and every bounded subset D of X there exists an N_{p,D} ∈ Z+ such that

ϕ(j, θ−j(p), D) ⊆ Bp for all j ≥ N_{p,D}.

The following result generalizes Theorem 1.1 on autonomous semidynamical systems, and its first half is the counterpart of Theorem 8.1 for processes. The proof is similar to that of the latter, essentially with n0 and n0 − j changed to j and θ−j(p), respectively, but with additional complications due to the fact that the pullback absorbing family is no longer assumed to be ϕ-positively invariant. See [33] for details.

Theorem 10.1 (Existence of pullback attractors). Let (X, d) and (P, dP) be complete metric spaces and suppose that a skew-product system (θ, ϕ) has a pullback absorbing family B = {Bp : p ∈ P}. Then there exists a pullback attractor A = {Ap : p ∈ P} with component sets determined by

Ap = ∩_{n≥0} ∪_{j≥n} ϕ(j, θ−j(p), B_{θ−j(p)}); (28)

it is unique if its component sets are uniformly bounded.

The pullback attractor of a skew-product system (θ, ϕ) has some nice properties when its component subsets are contained in a common compact subset or if the state space P of the driving system is compact.

Proposition 10.1 (Upper semi-continuity of pullback attractors). Suppose that A(P) := ∪_{p∈P} Ap is compact for a pullback attractor A = {Ap : p ∈ P}. Then the set-valued mapping p ↦ Ap is upper semi-continuous in the sense that

dist(Aq, Ap) → 0 as q → p.

On the other hand, if P is compact and the set-valued mapping p ↦ Ap is upper semi-continuous, then A(P) is compact.

Proof. First note that, since A(P) is compact, the pullback attractor is uniformly bounded by a compact set and hence is uniquely determined. Assume that the set-valued mapping p ↦ Ap is not upper semi-continuous. Then there exist an ε0 > 0 and a sequence pn → p0 in P such that dist(A_{pn}, A_{p0}) ≥ 3ε0 for all n ∈ N. Since the sets A_{pn} are compact, there exists an an ∈ A_{pn} such that

dist(an, A_{p0}) = dist(A_{pn}, A_{p0}) ≥ 3ε0 for each n ∈ N. (29)

By pullback attraction,

dist(ϕ(m, θ−m(p0), B), A_{p0}) ≤ ε0 for m ≥ M_{B,ε0}

for any bounded subset B of X; in particular, below, A(P) will be used for the set B. By the ϕ-invariance of the pullback attractor, there exist bn ∈ A_{θ−m(pn)} ⊂ A(P) for n ∈ N such that ϕ(m, θ−m(pn), bn) = an. Since A(P) is compact, there is a convergent subsequence bn′ → b̄ ∈ A(P). Finally, by the continuity of θ−m(·) and of the cocycle mapping ϕ(n, ·, ·),

d(ϕ(m, θ−m(pn′), bn′), ϕ(m, θ−m(p0), b̄)) ≤ ε0 for n′ large enough.
Thus,

dist(an′, A_{p0}) = dist(ϕ(m, θ−m(pn′), bn′), A_{p0})
≤ d(ϕ(m, θ−m(pn′), bn′), ϕ(m, θ−m(p0), b̄)) + dist(ϕ(m, θ−m(p0), b̄), A_{p0}) ≤ 2ε0,

which contradicts (29). Hence, p ↦ Ap must be upper semi-continuous. The remaining assertion follows, since the image of a compact subset under an upper semi-continuous set-valued mapping is compact (cf. [4]). ⊓⊔

Pullback attractors are in general not forwards attractors. When, however, the state space P of the driving system is compact, one has the following partial forwards convergence result for the pullback attractor.

Theorem 10.2. In addition to the assumptions of Theorem 10.1, suppose that P is compact and that the pullback absorbing family B is uniformly bounded by a compact subset C of X. Then

lim_{n→∞} sup_{p∈P} dist(ϕ(n, p, D), A(P)) = 0 (30)

for every bounded subset D of X, where A(P) := ∪_{p∈P} Ap.

Proof. First note that A(P) is compact, since the component subsets Ap are all contained in the common compact set C. This means also that the pullback attractor is unique. Suppose to the contrary that the convergence (30) does not hold. Then there exist an ε0 > 0 and sequences nj → ∞, p̂j ∈ P and xj ∈ C such that

dist(ϕ(nj, p̂j, xj), A(P)) > ε0. (31)

Set pj = θ_{nj}(p̂j). By the compactness of P, there exists a convergent subsequence pj′ → p0 ∈ P. From the pullback attraction, there exists an n > 0 such that

dist(ϕ(n, θ−n(p0), C), A_{p0}) < ε0/2.

The cocycle property then gives

ϕ(nj, θ−nj(pj), xj) = ϕ(n, θ−n(pj), ϕ(nj − n, θ−nj(pj), xj))

for any nj > n. By the pullback absorption of B, it follows that

ϕ(nj − n, θ−nj(pj), xj) ⊂ B_{θ−n(pj)} ⊂ C,

and, since C is compact, there is a further index subsequence j″ of j′ (depending on n) such that

z_{nj″} := ϕ(nj″ − n, θ−nj″(pj″), xj″) → z0 ∈ C.
The continuity of the skew-product mappings in the p and x variables implies

dist(ϕ(n, θ−n(pj″), z_{nj″}), ϕ(n, θ−n(p0), z0)) < ε0/2 when nj″ > n(ε0).

Therefore,

ε0 > dist(ϕ(nj″, θ−nj″(pj″), xj″), A_{p0}) = dist(ϕ(nj″, p̂j″, xj″), A_{p0}) ≥ dist(ϕ(nj″, p̂j″, xj″), A(P)),

which contradicts (31). Thus, the asserted convergence (30) must hold. ⊓⊔

11 Comparison of nonautonomous attractors

Recall from Theorem 4.1 that the mapping π : Z+ × X → X defined by

π(n, (p, x)) := (θn(p), ϕ(n, p, x)) for all n ∈ Z+ and (p, x) ∈ X := P × X

forms an autonomous semidynamical system on the extended state space X with the metric

distX((p1, x1), (p2, x2)) = dP(p1, p2) + d(x1, x2).

Proposition 11.1 (Uniform and global attractors). Suppose that A is a uniform attractor (i.e., uniformly attracting in both the forwards and pullback senses) of a skew-product system (θ, ϕ) and that ∪_{p∈P} Ap is precompact in X. Then the union A := ∪_{p∈P} {p} × Ap is the global attractor of the autonomous semidynamical system π.

Proof. The π-invariance of A follows from the ϕ-invariance of A and the θ-invariance of P via

π(n, A) = ∪_{p∈P} {θn(p)} × ϕ(n, p, Ap) = ∪_{p∈P} {θn(p)} × A_{θn(p)} = ∪_{q∈P} {q} × Aq = A.

Since A is also a pullback attractor and ∪_{p∈P} Ap is precompact in X (and P is compact, too), the set-valued mapping p ↦ Ap is upper semi-continuous, which means that p ↦ F(p) := {p} × Ap is also upper semi-continuous. Hence, F(P) = A is a compact subset of X. Moreover, the definition of the metric distX on X implies that

distX(π(n, (p, x)), A) = distX((θn(p), ϕ(n, p, x)), A)
≤ distX((θn(p), ϕ(n, p, x)), {θn(p)} × A_{θn(p)})
= dP(θn(p), θn(p)) + dist(ϕ(n, p, x), A_{θn(p)})
= dist(ϕ(n, p, x), A_{θn(p)}),

where π(n, (p, x)) = (θn(p), ϕ(n, p, x)). The desired attraction to A w.r.t. π then follows from the forwards attraction of A w.r.t. ϕ.
t u Without uniform attraction as in Proposition 11.1 a pullback attractor need not give a global attractor, but the following result does hold. 42 5 Attractors of skew-product systems Proposition 11.2. sysS If A is a pullback attractor for a skew-product S tem (θ, ϕ) and p∈P Ap is precompact in X, then A := p∈P {p} × Ap is the maximal invariant compact set of the autonomous semidynamical system π. Proof. The compactness and π-invariance of A are proved in the same way as in first part of the proof of Proposition 11.1. To prove that the compact invariant set A is maximal, let C be any other compact invariant set of the autonomous semidynamical system π. Then A is a compact and ϕ-invariant family of compact sets, and by pullback attraction, dist ϕ n, θ−n (p), Cθ−n (p) , Ap ≤ dist (ϕ (n, θ−n (p), K) , Ap ) → 0 S as n → ∞, S where K := p∈P Cp is compact. Hence, Cp ⊆ Ap for all p ∈ P , i.e., C := p∈P {p} × Cp ⊆ A, which finally means that A is a maximal π-invariant set. t u The set A here need not be the global attractor of π. In the opposite direction, the global attractor of the associated autonomous semidynamical system always forms a pullback attractor of the skew-product system. Proposition 11.3 (Global and pullback attractors). If an autonomous semidynamical system π has a global attractor [ A= {p} × Ap , p∈P then A = {Ap : p ∈ P } is a pullback attractor for the skew-product system (θ, ϕ). S Proof. The sets P and K := p∈P Ap are compact by the compactness of A. Moreover, A ⊂ P × K, which is a compact set. Now dist (ϕ(n, p, x), K) = distP (θn (p), P ) + dist (ϕ(n, p, x), K) = distX ((θn (p), ϕ(n, p, x)), P × K) ≤ distX (π(n, (p, x)), P × K) ≤ distX (π(n, P × D), A) → 0 as n → ∞ 12 Limitations of pullback attractors revisited 43 for all (p, x) ∈ P × D and every arbitrary bounded subset D of X, since A is the global attractor of π. Hence, replacing p by θ−n (p) implies lim dist (ϕ(n, θ−n (p), D), K) = 0 . 
n→∞ Then the system is pullback asymptotic compact (see the definition in Chapter 12 of [32]) and by Theorem 12.12 in [32] this is a sufficientScondition for the existence of a pullback attractor A0 = {A0p : p ∈ P } with p∈P A0p ⊂ K. S 0 From Proposition 11.2, A := p∈P {p} × A0p is the maximal π-invariant subset of X, but so is the global attractor A. This means that A0 = A. Thus, A is a pullback attractor of the skew-product system (θ, ϕ). t u 12 Limitations of pullback attractors revisited The limitations of pullback attraction for processes were illustrated in Section 9 through the scalar nonautonomous difference equation xn+1 = fn (xn ) := λ n xn , 1 + |xn | (32) where {λn }n∈Z is an increasing sequence with limn→±∞ λn = λ̄±1 for λ̄ > 1. The pullback attractor A of the corresponding process has component sets An ≡ {0} for all n ∈ Z corresponding to the zero entire solution, which is the only bounded entire solution. The zero solution x∗ = 0 seems to be “asymptotically stable” for n < 0 and then “unstable” for n ≥ 0. However the forward limit points for nonzero solutions are ±(λ̄ − 1), which both are not solutions at all. In particular, they are not entire solutions. An elegant way to resolve the problem is to consider the skew-product system formulation of a nonautonomous dynamical system. This includes an autonomous dynamical system as a driving mechanism, which is responsible for the temporal change in the dynamics of the nonautonomous difference equation. It also includes the dynamics of the asymptotically autonomous difference equations above and their limiting autonomous systems. The nonautonomous difference equation (32) can be formulated as a skewproduct system with the diving system defined in terms of the shift operator θ on the space of bi-infinite sequences ΛL = {λ = {λn }n∈Z : λn ∈ [0, L] , n ∈ Z} for some L > λ̄ > 1. It yields a compact metric space with the metric X dΛL (λ, λ0 ) := 2−|n| |λn − λ0n | . 
This is coupled with a cocycle mapping with values $x_n = \varphi(n,\lambda,x_0)$ on $\mathbb{R}$ generated by the difference equation (32) with a given coefficient sequence $\lambda$.

For the sequence $\lambda$ from (32), the limit of the shifted sequences $\theta_n(\lambda)$ in the above metric as $n\to\infty$ is the constant sequence $\lambda^*_+$ with all components equal to $\bar\lambda$, while the limit as $n\to-\infty$ is the constant sequence $\lambda^*_-$ with all components equal to $\bar\lambda^{-1}$.

The pullback attractor of the corresponding skew-product system $(\theta,\varphi)$ on $\Lambda_L\times\mathbb{R}$ consists of compact subsets $A_\lambda$ of $\mathbb{R}$ for each $\lambda\in\Lambda_L$. It is easy to see that $A_\lambda = \{0\}$ for any $\lambda$ with components $\lambda_n < 1$ for $n \le 0$, which includes the constant sequence $\lambda^*_-$ as well as the switched sequence in (32). On the other hand, $A_{\lambda^*_+} = [1-\bar\lambda,\, \bar\lambda-1]$. Here $\bigcup_{\lambda\in\Lambda_L} A_\lambda$ is precompact, so it contains all future limiting dynamics.

The pullback attractor of the skew-product system includes that of the process for a given bi-infinite coefficient sequence, but it also includes the forward asymptotic limits and much more. The coefficient sequence set $\Lambda_L$ includes all possibilities, in fact, far more than may be of interest in a particular situation. If one is interested in the dynamics of a process corresponding to a specific $\hat\lambda \in \Lambda_L$, then it suffices to consider the skew-product system w.r.t. the driving system on the smaller space $\Lambda_{\hat\lambda}$ defined as the hull of this sequence, i.e., the set of accumulation points of the set $\{\theta_n(\hat\lambda) : n\in\mathbb{Z}\}$ in the metric space $(\Lambda_L, d_{\Lambda_L})$.

In particular, if $\hat\lambda$ is the specific sequence in (32), then the union $\bigcup_{\lambda\in\Lambda_{\hat\lambda}} A_\lambda = A_{\lambda^*_+} = [1-\bar\lambda, \bar\lambda-1]$ contains all future limiting dynamics, i.e.,

$\lim_{n\to\infty} \operatorname{dist}\big(\varphi(n,\lambda,x), [1-\bar\lambda, \bar\lambda-1]\big) = 0$

for all $x\in\mathbb{R}$. The example described by the nonautonomous difference equation (32) is asymptotically autonomous with $\Lambda_{\hat\lambda} = \{\lambda^*_\pm\} \cup \{\theta_n(\hat\lambda) : n\in\mathbb{Z}\}$.
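The contrast between the pullback limit (which sees only the contracting past) and the forward limit (which sees the expanding future) for equation (32) can be checked in a few lines of code. The sketch below is a hypothetical numerical experiment, not taken from the text: the increasing sequence is replaced by the simpler switched sequence $\lambda_n = \bar\lambda^{-1}$ for $n<0$ and $\lambda_n = \bar\lambda$ for $n\ge0$, with $\bar\lambda = 2$.

```python
# Numerical sketch (illustrative): pullback vs forward limits for
# x_{n+1} = lambda_n * x_n / (1 + |x_n|), with the switched sequence
# lambda_n = 1/lbar (n < 0), lbar (n >= 0), where lbar = 2 > 1.
lbar = 2.0

def lam(n):
    return lbar if n >= 0 else 1.0 / lbar

def f(n, x):
    return lam(n) * x / (1.0 + abs(x))

def phi(n, n0, x):
    """Process phi(n, n0, x): iterate from time n0 up to time n."""
    for k in range(n0, n):
        x = f(k, x)
    return x

# Pullback limit at time 0: start ever earlier from x0 = 1 (only the
# contracting maps with lambda = 1/2 are applied), so the limit is 0.
print(phi(0, -80, 1.0) < 1e-6)                       # True

# Forward limit from time 0: only the expanding maps with lambda = 2 act,
# and the orbit tends to the fixed point lbar - 1 = 1 of x -> 2x/(1+x).
print(abs(phi(60, 0, 1.0) - (lbar - 1.0)) < 1e-9)    # True
```

The pullback attractor component at time $0$ is $\{0\}$, while the forward limit point $\bar\lambda - 1$ only appears in the skew-product attractor over the limiting sequence $\lambda^*_+$.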
The forward limit points $\pm(\bar\lambda-1)$ of the process generated by (25), which were not steady states of the process, are now locally asymptotically stable steady states of the skew-product flow with base space $P = \bar\Lambda$ consisting of the single constant sequence $\lambda_k \equiv \bar\lambda$, when the skew-product system is interpreted as an autonomous semidynamical system on the product space $P\times X$. More generally, unlike the process formulation, the skew-product formulation and its pullback attractor include the forwards limiting dynamics.

13 Local pullback attractors

Less uniform behaviour, such as parameter-dependent domains of definition and local pullback attractors, can be handled by introducing the concept of a basin of attraction system.

Let $\mathrm{Dom}_p \subset X$ be the domain of definition of $f(p,\cdot)$ in the nonautonomous equation (13), which requires $f(p, \mathrm{Dom}_p) \subset \mathrm{Dom}_{\theta(p)}$. Then the corresponding cocycle mapping $\varphi$ has the domain of definition $\mathbb{Z}^+ \times \bigcup_{p\in P}\big(\{p\}\times \mathrm{Dom}_p\big)$. Consequently, one needs to restrict the admissible families of bounded sets in the pullback convergence to subsets of $\mathrm{Dom}_p$ for each $p\in P$.

Definition 13.1. An ensemble $\mathcal{D}_{ad}$ of families $D = \{D_p : p\in P\}$ of nonempty subsets of $X$ is called admissible if

i) $D_p$ is bounded and $D_p \subset \mathrm{Dom}_p$ for each $p\in P$ and every $D = \{D_p : p\in P\} \in \mathcal{D}_{ad}$; and
ii) $D^{(1)} = \{D^{(1)}_p : p\in P\} \in \mathcal{D}_{ad}$ whenever $D^{(2)} = \{D^{(2)}_p : p\in P\} \in \mathcal{D}_{ad}$ and $D^{(1)}_p \subseteq D^{(2)}_p$ for all $p\in P$.

Further restrictions allow one to consider local or otherwise restricted forms of pullback attraction.

Definition 13.2. A $\varphi$-invariant family $A = \{A_p : p\in P\}$ of nonempty compact subsets of $X$ with $A_p \subset \mathrm{Dom}_p$ for each $p\in P$ is called a pullback attractor w.r.t. the basin of attraction system $\mathcal{D}_{att}$ if $\mathcal{D}_{att}$ is an admissible ensemble of families of subsets such that

$\lim_{j\to\infty} \operatorname{dist}\big(\varphi(j, \theta_{-j}(p), D_{\theta_{-j}(p)}), A_p\big) = 0$ (33)

for every $D = \{D_p : p\in P\} \in \mathcal{D}_{att}$.
In this case a pullback absorbing set system $B = \{B_p : p\in P\}$ should also satisfy $B \in \mathcal{D}_{att}$, and the pullback absorbing property is modified to

$\varphi\big(j, \theta_{-j}(p), D_{\theta_{-j}(p)}\big) \subseteq B_p$ for all $j \ge N_{p,D}$, $p \in P$ and $D = \{D_p : p\in P\} \in \mathcal{D}_{att}$.

A counterpart of Theorem 10.1 then holds here. In this case the pullback attractor is unique within the basin of attraction system, but the skew-product system may have other pullback attractors within other basin of attraction systems, which may be either disjoint from or a proper sub-ensemble of the original basin of attraction system.

Example 13.1. Consider the scalar nonautonomous difference equation

$x_{n+1} = f_n(x_n) := x_n + \gamma_n x_n\,(1 - x_n^2)$ (34)

for given parameters $\gamma_n > 0$, $n\in\mathbb{Z}$.

First let $\gamma_n \equiv \bar\gamma$ for all $n\in\mathbb{Z}$, so the system is autonomous. It has the attractor $A^* = [-1,1]$ for the maximal basin of attraction $\big({-1}-\bar\gamma^{-1},\ 1+\bar\gamma^{-1}\big)$, but if one restricts attention further to the basin of attraction $\big(0,\ 1+\bar\gamma^{-1}\big)$, then the attractor is only $A^{**} = \{1\}$.

Now let $\gamma_n$ be variable with $\gamma_n \in \big[\tfrac12\bar\gamma,\, \bar\gamma\big]$ for each $n\in\mathbb{Z}$, so the system is now nonautonomous and representable as a skew-product system on the state space $X = \mathbb{R}$ with the parameter set $P = \mathbb{Z}$. Then $A^* = \{A^*_n : n\in\mathbb{Z}\}$ with $A^*_n = [-1,1]$ for all $n\in\mathbb{Z}$ is the pullback attractor for the basin of attraction system $\mathcal{D}^*_{att}$ consisting of all families $D = \{D_n : n\in\mathbb{Z}\}$ satisfying $D_n \subset \big({-1}-\bar\gamma^{-1}, 1+\bar\gamma^{-1}\big)$, whereas $A^{**} = \{A^{**}_n : n\in\mathbb{Z}\}$ with $A^{**}_n = \{1\}$ for all $n\in\mathbb{Z}$ is the pullback attractor for the basin of attraction system $\mathcal{D}^{**}_{att}$ consisting of all families $D = \{D_n : n\in\mathbb{Z}\}$ with $D_n \subset \big(0, 1+\bar\gamma^{-1}\big)$.

Chapter 6 Lyapunov functions for pullback attractors

A Lyapunov function characterizing pullback attraction and pullback attractors for a discrete-time process in $\mathbb{R}^d$ will be constructed here. Consider a nonautonomous difference equation

$x_{n+1} = f_n(x_n)$ (∆)

on $\mathbb{R}^d$, where the $f_n : \mathbb{R}^d \to \mathbb{R}^d$ are Lipschitz continuous mappings.
This generates a process $\phi : \mathbb{Z}^2_{\ge} \times \mathbb{R}^d \to \mathbb{R}^d$ through iteration by

$\phi(n, n_0, x_0) = f_{n-1}\circ\cdots\circ f_{n_0}(x_0)$ for all $n \ge n_0$ and each $x_0\in\mathbb{R}^d$,

which in particular satisfies the continuity property that $x_0 \mapsto \phi(n,n_0,x_0)$ is Lipschitz continuous for all $n \ge n_0$. The pullback attraction is taken w.r.t. a basin of attraction system, which is defined as follows for a process.

Definition 13.3. A basin of attraction system $\mathcal{D}_{att}$ consists of families $D = \{D_n : n\in\mathbb{Z}\}$ of nonempty bounded subsets of $\mathbb{R}^d$ with the property that $D^{(1)} = \{D^{(1)}_n : n\in\mathbb{Z}\} \in \mathcal{D}_{att}$ if $D^{(2)} = \{D^{(2)}_n : n\in\mathbb{Z}\} \in \mathcal{D}_{att}$ and $D^{(1)}_n \subseteq D^{(2)}_n$ for all $n\in\mathbb{Z}$.

Although somewhat complicated, the use of such a basin of attraction system allows both nonuniform and local attraction regions, which are typical in nonautonomous systems, to be handled.

Definition 13.4. A $\phi$-invariant family $A = \{A_n : n\in\mathbb{Z}\}$ of nonempty compact subsets is called a pullback attractor w.r.t. a basin of attraction system $\mathcal{D}_{att}$ if it is pullback attracting,

$\lim_{j\to\infty}\operatorname{dist}\big(\phi(n, n-j, D_{n-j}), A_n\big) = 0$ (35)

for all $n\in\mathbb{Z}$ and all $D = \{D_n : n\in\mathbb{Z}\} \in \mathcal{D}_{att}$. Obviously $A \in \mathcal{D}_{att}$.

The construction of the Lyapunov function requires the existence of a pullback absorbing neighbourhood family.

14 Existence of a pullback absorbing neighbourhood system

The following lemma shows that there always exists such a pullback absorbing neighbourhood system for any given pullback attractor. This will be required for the construction of the Lyapunov function in the proof of Theorem 15.1. The proof is very similar to that of Proposition 8.1.

Lemma 14.1. If $A$ is a pullback attractor with a basin of attraction system $\mathcal{D}_{att}$ for a process $\phi$, then there exists a pullback absorbing neighbourhood system $B \in \mathcal{D}_{att}$ of $A$ w.r.t. $\phi$. Moreover, $B$ is $\phi$-positively invariant.

Proof.
For each $n_0\in\mathbb{Z}$ pick $\delta_{n_0} > 0$ such that the family of closed neighbourhoods

$B[A_{n_0};\delta_{n_0}] := \{x\in\mathbb{R}^d : \operatorname{dist}(x, A_{n_0}) \le \delta_{n_0}\}$

satisfies $\{B[A_{n_0};\delta_{n_0}] : n_0\in\mathbb{Z}\} \in \mathcal{D}_{att}$, and define

$B_{n_0} := \bigcup_{j\ge 0} \phi\big(n_0, n_0-j, B[A_{n_0-j};\delta_{n_0-j}]\big).$

Obviously $A_{n_0} \subset \operatorname{int} B[A_{n_0};\delta_{n_0}] \subset B_{n_0}$. To show positive invariance, the two-parameter semigroup property will be used:

$\phi(n_0+1, n_0, B_{n_0}) = \bigcup_{j\ge0} \phi\big(n_0+1, n_0, \phi(n_0, n_0-j, B[A_{n_0-j};\delta_{n_0-j}])\big)$
$= \bigcup_{j\ge0} \phi\big(n_0+1, n_0-j, B[A_{n_0-j};\delta_{n_0-j}]\big)$
$= \bigcup_{i\ge1} \phi\big(n_0+1, n_0+1-i, B[A_{n_0+1-i};\delta_{n_0+1-i}]\big)$
$\subseteq \bigcup_{i\ge0} \phi\big(n_0+1, n_0+1-i, B[A_{n_0+1-i};\delta_{n_0+1-i}]\big) = B_{n_0+1},$

so $\phi(n_0+1, n_0, B_{n_0}) \subseteq B_{n_0+1}$. This and the two-parameter semigroup property again give

$\phi(n_0+2, n_0, B_{n_0}) = \phi\big(n_0+2, n_0+1, \phi(n_0+1, n_0, B_{n_0})\big) \subseteq \phi(n_0+2, n_0+1, B_{n_0+1}) \subseteq B_{n_0+2}.$

The general positive invariance assertion then follows by induction.

Now, by the continuity of $\phi(n_0, n_0-j, \cdot)$ and the compactness of $B[A_{n_0-j};\delta_{n_0-j}]$, the set $\phi(n_0, n_0-j, B[A_{n_0-j};\delta_{n_0-j}])$ is compact for each $j\ge0$ and $n_0\in\mathbb{Z}$. Moreover, by pullback convergence, there exists an $N = N(n_0,\delta_{n_0})\in\mathbb{N}$ such that

$\phi\big(n_0, n_0-j, B[A_{n_0-j};\delta_{n_0-j}]\big) \subseteq B[A_{n_0};\delta_{n_0}] \subset B_{n_0}$ for all $j \ge N$.

Hence

$B_{n_0} = \bigcup_{j\ge0} \phi\big(n_0, n_0-j, B[A_{n_0-j};\delta_{n_0-j}]\big) \subseteq B[A_{n_0};\delta_{n_0}] \cup \bigcup_{0\le j<N} \phi\big(n_0, n_0-j, B[A_{n_0-j};\delta_{n_0-j}]\big),$

which is a finite union of compact sets and hence compact, so the closure of $B_{n_0}$ is compact; replacing $B_{n_0}$ by its closure if necessary, the sets $B_{n_0}$ can be taken to be compact.

To see that the family $B$ so constructed is pullback absorbing w.r.t. $\mathcal{D}_{att}$, let $D\in\mathcal{D}_{att}$ and fix $n_0\in\mathbb{Z}$. Since $A$ is pullback attracting, there exists an $N(D,\delta_{n_0},n_0)\in\mathbb{N}$ such that

$\operatorname{dist}\big(\phi(n_0, n_0-j, D_{n_0-j}), A_{n_0}\big) < \delta_{n_0}$ for all $j \ge N(D,\delta_{n_0},n_0)$.

But then $\phi(n_0, n_0-j, D_{n_0-j}) \subset \operatorname{int} B[A_{n_0};\delta_{n_0}]$ and $B[A_{n_0};\delta_{n_0}] \subset B_{n_0}$, so

$\phi(n_0, n_0-j, D_{n_0-j}) \subset \operatorname{int} B_{n_0}$ for all $j \ge N(D,\delta_{n_0},n_0)$.

Hence $B$ is pullback absorbing as required.
□

15 Necessary and sufficient conditions

The main result is the construction of a Lyapunov function that characterizes this pullback attraction.

Theorem 15.1. Let the $f_n$ be uniformly Lipschitz continuous on $\mathbb{R}^d$ for each $n\in\mathbb{Z}$ and let $\phi$ be the process that they generate. In addition, let $A$ be a $\phi$-invariant family of nonempty compact sets that is pullback attracting with respect to $\phi$ with a basin of attraction system $\mathcal{D}_{att}$. Then there exists a Lipschitz continuous function $V : \mathbb{Z}\times\mathbb{R}^d \to \mathbb{R}$ with the following properties:

Property 1 (upper bound): For all $n_0\in\mathbb{Z}$ and $x_0\in\mathbb{R}^d$,

$V(n_0, x_0) \le \operatorname{dist}(x_0, A_{n_0});$ (36)

Property 2 (lower bound): For each $n_0\in\mathbb{Z}$ there exists a function $a(n_0,\cdot) : \mathbb{R}^+\to\mathbb{R}^+$ with $a(n_0,0)=0$ and $a(n_0,r)>0$ for all $r>0$, which is monotonically increasing in $r$, such that

$a\big(n_0, \operatorname{dist}(x_0, A_{n_0})\big) \le V(n_0, x_0)$ for all $x_0\in\mathbb{R}^d$; (37)

Property 3 (Lipschitz condition): For all $n_0\in\mathbb{Z}$ and $x_0, y_0\in\mathbb{R}^d$,

$|V(n_0,x_0) - V(n_0,y_0)| \le \|x_0-y_0\|;$ (38)

Property 4 (pullback convergence): For all $n_0\in\mathbb{Z}$ and any $D\in\mathcal{D}_{att}$,

$\limsup_{n\to\infty}\ \sup_{z_{n_0-n}\in D_{n_0-n}} V\big(n_0, \phi(n_0, n_0-n, z_{n_0-n})\big) = 0.$ (39)

In addition,

Property 5 (forwards convergence): There exists $N = \{N_{n_0} : n_0\in\mathbb{Z}\} \in \mathcal{D}_{att}$, which is positively invariant under $\phi$ and consists of nonempty compact sets $N_{n_0}$ with $A_{n_0}\subset\operatorname{int}N_{n_0}$ for each $n_0\in\mathbb{Z}$, such that

$V\big(n_0+1, \phi(n_0+1, n_0, x_0)\big) \le e^{-1} V(n_0, x_0)$ for all $x_0\in N_{n_0}$, (40)

and hence

$V\big(n_0+j, \phi(n_0+j, n_0, x_0)\big) \le e^{-j} V(n_0, x_0)$ for all $x_0\in N_{n_0}$, $j\in\mathbb{N}$. (41)

Proof. The aim is to construct a Lyapunov function $V(n_0,x_0)$ that characterizes the pullback attractor $A$ and satisfies Properties 1–5 of Theorem 15.1. For this define

$V(n_0, x_0) := \sup_{n\in\mathbb{N}} e^{-T_{n_0,n}}\operatorname{dist}\big(x_0, \phi(n_0, n_0-n, B_{n_0-n})\big)$

for all $n_0\in\mathbb{Z}$ and $x_0\in\mathbb{R}^d$, where

$T_{n_0,n} := n + \sum_{j=1}^{n} \alpha^+_{n_0-j}$ with $T_{n_0,0} = 0$.

Here $\alpha_n = \log L_n$, where $L_n$ is the uniform Lipschitz constant of $f_n$ on $\mathbb{R}^d$, and $a^+ = (a+|a|)/2$ denotes the positive part of a real number $a$.
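This construction can be carried out concretely for the scalar example (32) with a switched coefficient sequence. The sketch below is a hypothetical numerical illustration, not part of the text: it assumes the absorbing sets $B_n = [-1,1]$ (which are positively invariant and pullback absorbing for this example), truncates the supremum at $n = 60$, and then checks the upper bound (36) and the decay estimate (40).

```python
import math

# Illustrative evaluation of V(n0, x) = sup_n exp(-T_{n0,n}) dist(x, phi(n0, n0-n, B_{n0-n}))
# for x_{n+1} = lam_n x_n/(1+|x_n|) with lam_n = 1/2 (n < 0), 2 (n >= 0).
# Assumptions: B_n = [-1, 1] for all n; supremum truncated at n = 60.

def lam(n):
    return 2.0 if n >= 0 else 0.5

def f(n, x):
    return lam(n) * x / (1.0 + abs(x))

def phi(n, n0, x):
    for k in range(n0, n):
        x = f(k, x)
    return x

def T(n0, n):
    # T_{n0,n} = n + sum_{j=1}^n alpha^+_{n0-j}; here L_k = lam(k), alpha_k = log L_k
    return n + sum(max(0.0, math.log(lam(n0 - j))) for j in range(1, n + 1))

def V(n0, x, nmax=60):
    best = 0.0
    for n in range(nmax + 1):
        # f_n is odd and increasing, so phi(n0, n0-n, [-1,1]) = [-b, b]
        b = phi(n0, n0 - n, 1.0)
        d = max(0.0, abs(x) - b)        # dist(x, [-b, b])
        best = max(best, math.exp(-T(n0, n)) * d)
    return best

x0 = 0.5
print(V(0, x0) <= abs(x0))                                  # Property 1 (A_n = {0})
print(V(1, f(0, x0)) <= math.exp(-1) * V(0, x0) + 1e-12)    # decay estimate (40)
```

Both checks print True: $V$ is dominated by the distance to the attractor, and it contracts by at least the factor $e^{-1}$ along the orbit.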
Note that $T_{n_0,n} \ge n$ and $T_{n_0,n+m} = T_{n_0,n} + T_{n_0-n,m}$ for all $n,m\in\mathbb{N}$, $n_0\in\mathbb{Z}$.

Proof of Property 1. Since $e^{-T_{n_0,n}} \le 1$ for all $n\in\mathbb{N}$ and $\operatorname{dist}(x_0, \phi(n_0,n_0-n,B_{n_0-n}))$ increases monotonically from $\operatorname{dist}(x_0, B_{n_0}) \ge 0$ at $n=0$ to $\operatorname{dist}(x_0, A_{n_0})$ as $n\to\infty$,

$V(n_0,x_0) = \sup_{n\in\mathbb{N}} e^{-T_{n_0,n}}\operatorname{dist}\big(x_0, \phi(n_0,n_0-n,B_{n_0-n})\big) \le 1\cdot\operatorname{dist}(x_0, A_{n_0}).$

Proof of Property 2. If $x_0\in A_{n_0}$, then $V(n_0,x_0)=0$ by Property 1, so assume that $x_0\in\mathbb{R}^d\setminus A_{n_0}$. Now in

$V(n_0,x_0) = \sup_{n\ge0} e^{-T_{n_0,n}}\operatorname{dist}\big(x_0,\phi(n_0,n_0-n,B_{n_0-n})\big)$

the supremum involves the product of an exponentially decreasing quantity bounded below by zero and a bounded increasing function, since the sets $\phi(n_0,n_0-n,B_{n_0-n})$ form a nested family of compact sets decreasing to $A_{n_0}$ with increasing $n$. In particular,

$\operatorname{dist}(x_0, A_{n_0}) \ge \operatorname{dist}\big(x_0, \phi(n_0,n_0-n,B_{n_0-n})\big)$ for all $n\in\mathbb{N}$.

Hence there exists an $N^* = N^*(n_0,x_0)\in\mathbb{N}$ such that

$\tfrac12\operatorname{dist}(x_0,A_{n_0}) \le \operatorname{dist}\big(x_0,\phi(n_0,n_0-n,B_{n_0-n})\big) \le \operatorname{dist}(x_0,A_{n_0})$

for all $n\ge N^*$, but not for $n=N^*-1$. Then, from above,

$V(n_0,x_0) \ge e^{-T_{n_0,N^*}}\operatorname{dist}\big(x_0,\phi(n_0,n_0-N^*,B_{n_0-N^*})\big) \ge \tfrac12 e^{-T_{n_0,N^*}}\operatorname{dist}(x_0,A_{n_0}).$

Define $N^*(n_0,r) := \sup\{N^*(n_0,x_0) : \operatorname{dist}(x_0,A_{n_0}) = r\}$. Now $N^*(n_0,r) < \infty$ for $x_0\notin A_{n_0}$ with $\operatorname{dist}(x_0,A_{n_0}) = r$, and $N^*(n_0,r)$ is nondecreasing as $r\to0$. To see this, note that by the triangle inequality

$\operatorname{dist}(x_0,A_{n_0}) \le \operatorname{dist}\big(x_0,\phi(n_0,n_0-n,B_{n_0-n})\big) + \operatorname{dist}\big(\phi(n_0,n_0-n,B_{n_0-n}),A_{n_0}\big).$

Also, by pullback convergence there exists an $N(n_0,r/2)$ such that

$\operatorname{dist}\big(\phi(n_0,n_0-n,B_{n_0-n}),A_{n_0}\big) < \tfrac12 r$ for all $n\ge N(n_0,r/2)$.

Hence for $\operatorname{dist}(x_0,A_{n_0}) = r$ and $n\ge N(n_0,r/2)$,

$r \le \operatorname{dist}\big(x_0,\phi(n_0,n_0-n,B_{n_0-n})\big) + \tfrac12 r,$ that is, $\tfrac12 r \le \operatorname{dist}\big(x_0,\phi(n_0,n_0-n,B_{n_0-n})\big).$

Obviously $N^*(n_0,r) \le N^*(n_0,r/2)$. Finally, define

$a(n_0,r) := \tfrac12\, r\, e^{-T_{n_0,N^*(n_0,r)}}.$ (42)
Note that there is no guarantee here (without further assumptions) that $a(n_0,r)$ does not converge to $0$ for fixed $r \ne 0$ as $n_0\to\infty$.

Proof of Property 3.

$|V(n_0,x_0) - V(n_0,y_0)|$
$= \Big|\sup_{n\in\mathbb{N}} e^{-T_{n_0,n}}\operatorname{dist}\big(x_0,\phi(n_0,n_0-n,B_{n_0-n})\big) - \sup_{n\in\mathbb{N}} e^{-T_{n_0,n}}\operatorname{dist}\big(y_0,\phi(n_0,n_0-n,B_{n_0-n})\big)\Big|$
$\le \sup_{n\in\mathbb{N}} e^{-T_{n_0,n}}\Big|\operatorname{dist}\big(x_0,\phi(n_0,n_0-n,B_{n_0-n})\big) - \operatorname{dist}\big(y_0,\phi(n_0,n_0-n,B_{n_0-n})\big)\Big|$
$\le \sup_{n\in\mathbb{N}} e^{-T_{n_0,n}}\|x_0-y_0\| \le \|x_0-y_0\|,$

since $|\operatorname{dist}(x_0,C) - \operatorname{dist}(y_0,C)| \le \|x_0-y_0\|$ for any $x_0,y_0\in\mathbb{R}^d$ and nonempty compact subset $C$ of $\mathbb{R}^d$.

Proof of Property 4. Assume the opposite. Then there exist an $\varepsilon_0>0$, a sequence $n_j\to\infty$ in $\mathbb{N}$ and points $x_j\in\phi(n_0,n_0-n_j,D_{n_0-n_j})$ such that $V(n_0,x_j)\ge\varepsilon_0$ for all $j\in\mathbb{N}$. Since $D\in\mathcal{D}_{att}$ and $B$ is pullback absorbing, there exists an $N = N(D,n_0)\in\mathbb{N}$ such that

$\phi(n_0,n_0-n_j,D_{n_0-n_j}) \subset B_{n_0}$ for all $n_j\ge N$.

Hence, for all $j$ such that $n_j\ge N$, it holds that $x_j\in B_{n_0}$, which is a compact set, so there exists a convergent subsequence $x_{j'}\to x^*\in B_{n_0}$. But also

$x_{j'} \in \bigcup_{n\ge n_{j'}} \phi(n_0,n_0-n,D_{n_0-n})$ and $\bigcap_{n_{j'}}\ \bigcup_{n\ge n_{j'}} \phi(n_0,n_0-n,D_{n_0-n}) \subseteq A_{n_0}$

by the definition of a pullback attractor. Hence $x^*\in A_{n_0}$ and $V(n_0,x^*) = 0$. But $V$ is Lipschitz continuous in its second variable by Property 3, so

$\varepsilon_0 \le V(n_0,x_{j'}) = |V(n_0,x_{j'}) - V(n_0,x^*)| \le \|x_{j'} - x^*\|,$

which contradicts the convergence $x_{j'}\to x^*$. Hence Property 4 must hold.

Proof of Property 5. Define

$N_{n_0} := \{x_0\in B[B_{n_0};1] : \phi(n_0+1,n_0,x_0)\in B_{n_0+1}\},$

where $B[B_{n_0};1] = \{x_0 : \operatorname{dist}(x_0,B_{n_0})\le1\}$ is bounded because $B_{n_0}$ is compact and $\mathbb{R}^d$ is locally compact, so $N_{n_0}$ is bounded. It is also closed, hence compact, since $\phi(n_0+1,n_0,\cdot)$ is continuous and $B_{n_0+1}$ is compact. Now $A_{n_0}\subset\operatorname{int}B_{n_0}$ and $B_{n_0}\subset N_{n_0}$, so $A_{n_0}\subset\operatorname{int}N_{n_0}$. In addition,

$\phi(n_0+1,n_0,N_{n_0}) \subset B_{n_0+1} \subset N_{n_0+1},$

so $N$ is positively invariant. It remains to establish the exponential decay inequality (40).
This needs the following Lipschitz condition on $\phi(n_0+1,n_0,\cdot) \equiv f_{n_0}(\cdot)$:

$\|\phi(n_0+1,n_0,x_0) - \phi(n_0+1,n_0,y_0)\| \le e^{\alpha_{n_0}}\|x_0-y_0\|$ for all $x_0,y_0\in\mathbb{R}^d$.

It follows from this that

$\operatorname{dist}\big(\phi(n_0+1,n_0,x_0), \phi(n_0+1,n_0,C_{n_0})\big) \le e^{\alpha_{n_0}}\operatorname{dist}(x_0,C_{n_0})$

for any compact subset $C_{n_0}\subset\mathbb{R}^d$. From the definition of $V$,

$V\big(n_0+1,\phi(n_0+1,n_0,x_0)\big) = \sup_{n\ge0} e^{-T_{n_0+1,n}}\operatorname{dist}\big(\phi(n_0+1,n_0,x_0), \phi(n_0+1,n_0+1-n,B_{n_0+1-n})\big)$
$= \sup_{n\ge1} e^{-T_{n_0+1,n}}\operatorname{dist}\big(\phi(n_0+1,n_0,x_0), \phi(n_0+1,n_0+1-n,B_{n_0+1-n})\big),$

since $\phi(n_0+1,n_0,x_0)\in B_{n_0+1}$ when $x_0\in N_{n_0}$, so the $n=0$ term vanishes. Hence, re-indexing with $n = j+1$ and then using the two-parameter semigroup property and the Lipschitz condition on $\phi(n_0+1,n_0,\cdot)$,

$V\big(n_0+1,\phi(n_0+1,n_0,x_0)\big) = \sup_{j\ge0} e^{-T_{n_0+1,j+1}}\operatorname{dist}\big(\phi(n_0+1,n_0,x_0), \phi(n_0+1,n_0-j,B_{n_0-j})\big)$
$= \sup_{j\ge0} e^{-T_{n_0+1,j+1}}\operatorname{dist}\big(\phi(n_0+1,n_0,x_0), \phi(n_0+1,n_0,\phi(n_0,n_0-j,B_{n_0-j}))\big)$
$\le \sup_{j\ge0} e^{-T_{n_0+1,j+1}}\, e^{\alpha_{n_0}}\operatorname{dist}\big(x_0, \phi(n_0,n_0-j,B_{n_0-j})\big).$

Now $T_{n_0+1,j+1} = T_{n_0,j} + 1 + \alpha^+_{n_0}$, so

$V\big(n_0+1,\phi(n_0+1,n_0,x_0)\big) \le \sup_{j\ge0} e^{-T_{n_0,j}-1-\alpha^+_{n_0}+\alpha_{n_0}}\operatorname{dist}\big(x_0,\phi(n_0,n_0-j,B_{n_0-j})\big)$
$\le e^{-1}\sup_{j\ge0} e^{-T_{n_0,j}}\operatorname{dist}\big(x_0,\phi(n_0,n_0-j,B_{n_0-j})\big) \le e^{-1}V(n_0,x_0),$

since $\alpha_{n_0}\le\alpha^+_{n_0}$, which is the desired inequality. Moreover, since $\phi(n_0+1,n_0,x_0)\in B_{n_0+1}\subset N_{n_0+1}$, the proof continues inductively to give

$V\big(n_0+j, \phi(n_0+j,n_0,x_0)\big) \le e^{-j} V(n_0,x_0)$ for all $j\in\mathbb{N}$.

This completes the proof of Theorem 15.1. □

15.1 Comments on Theorem 15.1

Note 1: It would be nice to use $\phi(n_0,n_0-n,x_0)$ for a fixed $x_0$ in the pullback convergence property (39), but this may not always be possible due to nonuniformity of the attraction region, i.e., there may be no $D\in\mathcal{D}_{att}$ with $x_0\in D_{n_0-n}$ for all $n\in\mathbb{N}$.

Note 2: The forwards convergence inequality (41) does not imply forwards Lyapunov stability or Lyapunov asymptotic stability.
Although

$a\big(n_0+j, \operatorname{dist}(\phi(n_0+j,n_0,x_0), A_{n_0+j})\big) \le e^{-j} V(n_0,x_0),$

there is no guarantee (without additional assumptions) that

$\inf_{j\ge0} a(n_0+j, r) > 0$ for $r>0$,

so $\operatorname{dist}(\phi(n_0+j,n_0,x_0), A_{n_0+j})$ need not become small as $j\to\infty$. As a counterexample, consider Example 6.2 of the process $\phi$ on $\mathbb{R}$ generated by (∆) with $f_n = g_1$ for $n\le0$ and $f_n = g_2$ for $n\ge1$, where the mappings $g_1,g_2 : \mathbb{R}\to\mathbb{R}$ are given by $g_1(x) := \tfrac12 x$ and $g_2(x) := \max\{0, 4x(1-x)\}$ for all $x\in\mathbb{R}$. Then $A$ with $A_{n_0} = \{0\}$ for all $n_0\in\mathbb{Z}$ is pullback attracting for $\phi$, but it is not forwards Lyapunov asymptotically stable. (Note that one can restrict $g_1,g_2$ to mappings $[-R,R]\to[-R,R]$ for any fixed $R>1$ to ensure the required uniform Lipschitz continuity of the $f_n$.)

Note 3: The forwards convergence inequality (41) can be rewritten as

$V\big(n_0, \phi(n_0,n_0-j,x_{n_0-j})\big) \le e^{-j} V(n_0-j, x_{n_0-j}) \le e^{-j}\operatorname{dist}(x_{n_0-j}, A_{n_0-j})$

for all $x_{n_0-j}\in N_{n_0-j}$ and $j\in\mathbb{N}$.

Definition 15.1. A family $D\in\mathcal{D}_{att}$ is called past-tempered w.r.t. $A$ if

$\lim_{j\to\infty} \frac1j \log^+ \operatorname{dist}(D_{n_0-j}, A_{n_0-j}) = 0$ for all $n_0\in\mathbb{Z}$,

or, equivalently, if

$\lim_{j\to\infty} e^{-\gamma j}\operatorname{dist}(D_{n_0-j}, A_{n_0-j}) = 0$ for all $n_0\in\mathbb{Z}$, $\gamma>0$.

This says that there is at most subexponential growth backwards in time of the starting sets. It is reasonable to restrict attention to such sets. For a past-tempered family $D\subset N$ it follows that

$V\big(n_0,\phi(n_0,n_0-j,x_{n_0-j})\big) \le e^{-j}\operatorname{dist}(D_{n_0-j},A_{n_0-j}) \longrightarrow 0$ as $j\to\infty$

for $x_{n_0-j}\in D_{n_0-j}$. Hence

$a\big(n_0, \operatorname{dist}(\phi(n_0,n_0-j,x_{n_0-j}), A_{n_0})\big) \le e^{-j}\operatorname{dist}(D_{n_0-j},A_{n_0-j}) \longrightarrow 0$ as $j\to\infty$.

Since $n_0$ is fixed in the lower expression, this implies the pullback convergence

$\lim_{j\to\infty}\operatorname{dist}\big(\phi(n_0,n_0-j,D_{n_0-j}), A_{n_0}\big) = 0.$

A rate of pullback convergence for more general sets $D\in\mathcal{D}_{att}$ will be considered in the next subsection.
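The temperedness condition is easy to check in concrete cases. The following small illustration (a hypothetical family of starting sets, not from the text) takes $D_{n_0-j} = [-(j+1), j+1]$ with $A_n = \{0\}$, so that $\operatorname{dist}(D_{n_0-j}, A_{n_0-j}) = j+1$ grows only polynomially backwards in time.

```python
import math

# Past-temperedness check (illustrative): polynomial backward growth is
# beaten by any exponential rate, so e^{-gamma j} * dist -> 0 for gamma > 0.

def dist_to_zero(j):
    # Hausdorff semi-distance of D_{n0-j} = [-(j+1), j+1] to A = {0}
    return j + 1.0

gamma = 0.01   # even a tiny exponential rate suffices
print(math.exp(-gamma * 5000) * dist_to_zero(5000) < 1e-10)   # True
```

An exponentially growing family, e.g. $\operatorname{dist}(D_{n_0-j}, A_{n_0-j}) = 2^j$, would fail this test for every $\gamma < \log 2$ and is not past-tempered.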
15.2 Rate of pullback convergence

Since $B$ is a pullback absorbing neighbourhood system, for every $n_0\in\mathbb{Z}$, $n\in\mathbb{N}$ and $D\in\mathcal{D}_{att}$ there exists an $N(D,n_0,n)\in\mathbb{N}$ such that

$\phi(n_0-n, n_0-n-m, D_{n_0-n-m}) \subseteq B_{n_0-n}$ for all $m\ge N$.

Hence, by the two-parameter semigroup property,

$\phi(n_0, n_0-n-m, D_{n_0-n-m}) = \phi\big(n_0, n_0-n, \phi(n_0-n, n_0-n-m, D_{n_0-n-m})\big) \subseteq \phi(n_0, n_0-n, B_{n_0-n}) = \phi\big(n_0, n_0-i, \phi(n_0-i, n_0-n, B_{n_0-n})\big) \subseteq \phi(n_0, n_0-i, B_{n_0-i})$

for all $m\ge N$ and $0\le i\le n$, where the positive invariance of $B$ was used in the last inclusion. Hence

$\phi(n_0, n_0-n-m, D_{n_0-n-m}) \subseteq \phi(n_0, n_0-i, B_{n_0-i})$ for all $m\ge N(D,n_0,n)$ and $0\le i\le n$,

or, equivalently,

$\phi(n_0, n_0-m, D_{n_0-m}) \subseteq \phi(n_0, n_0-i, B_{n_0-i})$ for all $m\ge n+N(D,n_0,n)$ and $0\le i\le n$.

This means that for any $z_{n_0-m}\in D_{n_0-m}$ the supremum in

$V\big(n_0, \phi(n_0,n_0-m,z_{n_0-m})\big) = \sup_{i\ge0} e^{-T_{n_0,i}}\operatorname{dist}\big(\phi(n_0,n_0-m,z_{n_0-m}), \phi(n_0,n_0-i,B_{n_0-i})\big)$

need only be considered over $i\ge n$. Hence

$V\big(n_0,\phi(n_0,n_0-m,z_{n_0-m})\big) = \sup_{i\ge n} e^{-T_{n_0,i}}\operatorname{dist}\big(\phi(n_0,n_0-m,z_{n_0-m}), \phi(n_0,n_0-i,B_{n_0-i})\big)$
$\le e^{-T_{n_0,n}}\sup_{j\ge0} e^{-T_{n_0-n,j}}\operatorname{dist}\big(\phi(n_0,n_0-m,z_{n_0-m}), \phi(n_0,n_0-n-j,B_{n_0-n-j})\big)$
$\le e^{-T_{n_0,n}}\operatorname{dist}\big(\phi(n_0,n_0-m,z_{n_0-m}), A_{n_0}\big) \le e^{-T_{n_0,n}}\operatorname{dist}(B_{n_0}, A_{n_0}),$

since $A_{n_0}\subseteq\phi(n_0,n_0-n-j,B_{n_0-n-j})$ and $\phi(n_0,n_0-m,z_{n_0-m})\in B_{n_0}$. Thus

$V\big(n_0,\phi(n_0,n_0-m,z_{n_0-m})\big) \le e^{-T_{n_0,n}}\operatorname{dist}(B_{n_0}, A_{n_0})$

for all $z_{n_0-m}\in D_{n_0-m}$, $m\ge n+N(D,n_0,n)$ and $n\ge0$.

It can be assumed that the mapping $n\mapsto n+N(D,n_0,n)$ is monotonically increasing in $n$ (by taking a larger $N(D,n_0,n)$ if necessary), and hence invertible. Let the inverse of $m = n+N(D,n_0,n)$ be $n = M(m) = M(D,n_0,m)$. Then

$V\big(n_0,\phi(n_0,n_0-m,z_{n_0-m})\big) \le e^{-T_{n_0,M(m)}}\operatorname{dist}(B_{n_0}, A_{n_0})$

for all $m\ge N(D,n_0,0)\ge0$. Usually $N(D,n_0,0) > 0$.
This expression can be modified to hold for all $m\ge0$ by replacing $M(m)$ by an $M^*(m)$ defined for all $m\ge0$ and introducing a constant $K_{D,n_0}\ge1$ to account for the behaviour over the finite time set $0\le m<N(D,n_0,0)$. For all $m\ge0$ this gives

$V\big(n_0,\phi(n_0,n_0-m,z_{n_0-m})\big) \le K_{D,n_0}\, e^{-T_{n_0,M^*(m)}}\operatorname{dist}(B_{n_0}, A_{n_0}).$

Chapter 7 Bifurcations

The classical theory of dynamical bifurcation focuses on autonomous difference equations

$x_{n+1} = g(x_n, \lambda)$ (43)

with a right-hand side $g : \mathbb{R}^d\times\Lambda\to\mathbb{R}^d$ depending on a parameter $\lambda$ from some parameter space $\Lambda$, which is typically a subset of $\mathbb{R}^n$ (cf., e.g., [37] or [21]). A central question is how the stability and multiplicity of invariant sets for (43) change when the parameter $\lambda$ is varied. In the simplest, and most often considered, situation these invariant sets are fixed points or periodic solutions of (43).

Given some parameter value $\lambda^*$, a fixed point $x^* = g(x^*,\lambda^*)$ of (43) is called hyperbolic if the derivative $D_1 g(x^*,\lambda^*)$ has no eigenvalue on the complex unit circle $S^1$. Then it is an easy consequence of the implicit function theorem (cf. [38, p. 365, Theorem 2.1]) that $x^*$ allows a unique continuation $x(\lambda) \equiv g(x(\lambda),\lambda)$ in a neighborhood of $\lambda^*$. In particular, hyperbolicity rules out bifurcations, understood as topological changes in the set $\{x\in\mathbb{R}^d : g(x,\lambda)=x\}$ near $(x^*,\lambda^*)$, or a stability change of $x^*$. On the other hand, eigenvalues on the complex unit circle give rise to various well-understood autonomous bifurcation scenarios. Examples include fold, transcritical or pitchfork bifurcations (eigenvalue $1$), flip bifurcations (eigenvalue $-1$) or the Sacker–Neimark bifurcation (a pair of complex conjugate eigenvalues, for $d\ge2$).

16 Hyperbolicity and simple examples

Even in the autonomous set-up of (43) one easily encounters intrinsically nonautonomous problems, where the classical methods of, for instance, [37, 21] do not apply:
1. Investigate the behaviour of (43) along an entire reference solution $(\chi_n)_{n\in\mathbb{Z}}$ which is not constant or periodic. This is typically done using the (obviously nonautonomous) equation of perturbed motion

$x_{n+1} = g(x_n+\chi_n, \lambda) - g(\chi_n, \lambda).$

2. Replace the constant parameter $\lambda$ in (43) by a sequence $(\lambda_n)_{n\in\mathbb{Z}}$ in $\Lambda$ which varies in time. The resulting parametrically perturbed equation $x_{n+1} = g(x_n, \lambda_n)$ also becomes nonautonomous. This situation is highly relevant from an applied point of view, since parameters in real-world problems are typically subject to random perturbations or an intrinsic background noise.

Both of the above problems fit into the framework of general nonautonomous difference equations

$x_{n+1} = f_n(x_n, \lambda)$ (∆λ)

with a sufficiently smooth right-hand side $f_n : \mathbb{R}^d\times\Lambda\to\mathbb{R}^d$, $n\in\mathbb{Z}$. In addition, suppose that $f_n$ and its derivatives map bounded subsets of $\mathbb{R}^d\times\Lambda$ into bounded sets uniformly in $n\in\mathbb{Z}$.

Generically, nonautonomous equations (∆λ) do not have constant solutions, and the fixed point sequences $x^*_n = f_n(x^*_n, \lambda^*)$ are usually not solutions of (∆λ). This gives rise to the following question: If there are no equilibria, what should bifurcate in a nonautonomous set-up? Before suggesting an answer, a criterion to exclude bifurcations is proposed. For motivational purposes, consider again the autonomous case (43) and the problem of parametric perturbations.

Example 16.1. The autonomous difference equation $x_{n+1} = \tfrac12 x_n + \lambda$ has the unique fixed point $x^*(\lambda) = 2\lambda$ for all $\lambda\in\mathbb{R}$. Replace $\lambda$ by a bounded sequence $(\lambda_n)_{n\in\mathbb{Z}}$ and observe, as in Example 6.1, that the nonautonomous counterpart $x_{n+1} = \tfrac12 x_n + \lambda_n$ has a unique bounded entire solution

$\chi^*_n := \sum_{k=-\infty}^{n-1} \big(\tfrac12\big)^{n-k-1}\lambda_k.$

For the special case $\lambda_n\equiv\lambda$, this solution reduces to the known fixed point $\chi^*_n\equiv2\lambda$. This simple example yields the conjecture that equilibria of autonomous equations persist as bounded entire solutions under parametric perturbations.
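The formula for $\chi^*_n$ in Example 16.1 can be verified directly: truncating the infinite sum after $K$ terms approximates the bounded entire solution up to an error of order $2^{-K}$, and the result satisfies the difference equation. The bounded test sequence $\lambda_n = \sin n$ below is a hypothetical choice for illustration.

```python
import math

# Illustrative check of Example 16.1: the truncated series
#   chi_n = sum_{k=n-K}^{n-1} (1/2)^{n-k-1} lambda_k
# approximates the unique bounded entire solution of
#   x_{n+1} = x_n/2 + lambda_n  up to an error of order 2^{-K}.

def lam(n):
    return math.sin(n)      # a bounded parameter sequence (hypothetical)

def chi(n, K=60):
    return sum(0.5 ** (n - k - 1) * lam(k) for k in range(n - K, n))

# chi satisfies chi_{n+1} = chi_n / 2 + lambda_n up to the truncation error:
residual = abs(chi(1) - (0.5 * chi(0) + lam(0)))
print(residual < 1e-9)      # True
```

For the constant sequence $\lambda_n \equiv \lambda$ the geometric series sums to $2\lambda$, recovering the fixed point of the autonomous equation.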
It will be shown below in Theorem 16.1 (see also [45, Theorem 3.4]) that this conjecture is generically true, in the sense that the fixed point of (43) has to be hyperbolic in order to persist under parametric perturbations.

Example 16.2. The linear difference equation $x_{n+1} = x_n + \lambda_n$ has the forward solution $x_n = x_0 + \sum_{k=0}^{n-1}\lambda_k$, whose boundedness requires the assumption that the real sequence $(\lambda_n)_{n\ge0}$ is summable. Thus, the nonhyperbolic equilibria $x^*$ of $x_{n+1} = x_n$ do not necessarily persist as bounded entire solutions under arbitrary bounded parametric perturbations.

Typical examples of nonautonomous equations having an equilibrium, given by the trivial solution, are equations of perturbed motion. Their variational equation along $(\chi_n)_{n\in\mathbb{Z}}$ is given by $x_{n+1} = D_1 g(\chi_n,\lambda)x_n$, and investigating the behaviour of its trivial solution under variation of $\lambda$ requires an appropriate nonautonomous notion of hyperbolicity.

Suppose that $A_n\in\mathbb{R}^{d\times d}$, $n\in\mathbb{Z}$, is a sequence of invertible matrices, and consider a linear difference equation

$x_{n+1} = A_n x_n$ (44)

with the transition matrix

$\Phi(n,l) := \begin{cases} A_{n-1}\cdots A_l, & l < n, \\ I, & n = l, \\ A_n^{-1}\cdots A_{l-1}^{-1}, & n < l. \end{cases}$

Let $\mathbb{I}$ be a discrete interval and define $\mathbb{I}_0 := \{k\in\mathbb{I} : k+1\in\mathbb{I}\}$. An invariant projector for (44) is a sequence $P_n\in\mathbb{R}^{d\times d}$, $n\in\mathbb{I}$, of projections $P_n = P_n^2$ such that

$P_{n+1}A_n = A_n P_n$ for all $n\in\mathbb{I}_0$.

Definition 16.1. A linear difference equation (44) is said to admit an exponential dichotomy on $\mathbb{I}$ if there exist an invariant projector $P_n$ and real numbers $K\ge0$, $\alpha\in(0,1)$ such that for all $n,l\in\mathbb{I}$ one has

$\|\Phi(n,l)P_l\| \le K\alpha^{n-l}$ if $l\le n$,
$\|\Phi(n,l)[\operatorname{id}-P_l]\| \le K\alpha^{l-n}$ if $n\le l$.

Remark 16.1. An autonomous difference equation $x_{n+1} = Ax_n$ has an exponential dichotomy if and only if the coefficient matrix $A\in\mathbb{R}^{d\times d}$ has no eigenvalues on the complex unit circle.
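In the scalar case $d = 1$ the transition "matrix" $\Phi(n,l)$ is just a product of coefficients, and the dichotomy estimates of Definition 16.1 can be checked by direct computation. The sketch below uses a hypothetical piecewise-constant coefficient sequence whose values all lie in $(0,1)$, so the dichotomy holds on $\mathbb{Z}$ with projector $P_n \equiv 1$, $K = 1$ and $\alpha = 0.75$.

```python
# Illustrative check of Definition 16.1 for a scalar equation x_{k+1} = a_k x_k
# with a_n = 0.75 (n < 0), 0.25 (n >= 0): a hypothetical test sequence.

def a(n):
    return 0.75 if n < 0 else 0.25

def Phi(n, l):
    """Scalar transition coefficient: Phi(n,l) = a_{n-1}...a_l for l < n,
    1 for n = l, and the reciprocal product for n < l."""
    if n >= l:
        p = 1.0
        for k in range(l, n):
            p *= a(k)
        return p
    return 1.0 / Phi(l, n)

K, alpha = 1.0, 0.75
ok = all(abs(Phi(n, l)) <= K * alpha ** (n - l) + 1e-12
         for l in range(-10, 11) for n in range(l, 11))
print(ok)   # True: the first dichotomy estimate holds with P ≡ 1
```

With $P_n \equiv 1$ the second estimate is trivially satisfied, since $\operatorname{id}-P_l = 0$; an expanding sequence (all $a_n > 1$) would instead satisfy the dichotomy with $P_n \equiv 0$.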
In terms of this terminology, an entire solution $(\chi_n)_{n\in\mathbb{Z}}$ of (∆λ) is called hyperbolic if the variational equation

$x_{n+1} = D_1 f_n(\chi_n,\lambda)x_n$ (Vλ)

has an exponential dichotomy on $\mathbb{Z}$. Let $\ell^\infty$ denote the space of bounded sequences in $\mathbb{R}^d$.

Theorem 16.1 (Continuation of bounded entire solutions). If $\chi^* = (\chi^*_n)_{n\in\mathbb{Z}}$ is a bounded entire and hyperbolic solution of (∆λ*), then there exist an open neighborhood $\Lambda_0\subseteq\Lambda$ of $\lambda^*$ and a unique function $\chi : \Lambda_0\to\ell^\infty$ such that

i) $\chi(\lambda^*) = \chi^*$,
ii) each $\chi(\lambda)$ is a bounded entire and hyperbolic solution of (∆λ),
iii) $\chi : \Lambda_0\to\ell^\infty$ is as smooth as the functions $f_n$.

Proof. The proof is based on the idea of formulating the nonautonomous difference equation (∆λ) as an abstract equation $F(\chi,\lambda) = 0$ in the space $\ell^\infty$. This is solved using the implicit mapping theorem, where the invertibility of the Fréchet derivative $D_1F(\chi^*,\lambda^*)$ is characterised by the hyperbolicity assumption on $\chi^*$. For details, see [47, Theorem 2.11]. □

Consequently, in order to deduce sufficient conditions for bifurcations, one must violate the hyperbolicity of $\chi^*$. For this purpose, the following characterisation of an exponential dichotomy is useful.

Theorem 16.2 (Characterization of exponential dichotomies). A variational equation (Vλ) has an exponential dichotomy on $\mathbb{Z}$ if and only if the following conditions are fulfilled:

i) (Vλ) has an exponential dichotomy on $\mathbb{Z}^+$ with projector $P^+_n$, as well as an exponential dichotomy on $\mathbb{Z}^-$ with projector $P^-_n$,
ii) $R(P^+_0)\oplus N(P^-_0) = \mathbb{R}^d$.

Proof. See [10, Lemma 2.4]. □

The subsequent examples illustrate various scenarios that can arise if a condition stated in Theorem 16.2 is violated.

Example 16.3 (Pitchfork bifurcation). Consider the difference equation

$x_{n+1} = f_n(x_n,\lambda), \qquad f_n(x,\lambda) := \frac{\lambda x}{1+|x|},$

from Section 9. It is a prototypical example of a supercritical autonomous pitchfork bifurcation (cf., e.g., [37, pp. 119ff, Sect.
4.4]), where the unique asymptotically stable equilibrium $x^* = 0$ for $\lambda\in(0,1)$ bifurcates into two asymptotically stable equilibria $x_\pm := \pm(\lambda-1)$ for $\lambda>1$.

Along the trivial solution the variational equation $x_{n+1} = \lambda x_n$ becomes nonhyperbolic for $\lambda = 1$. Indeed, criterion i) of Theorem 16.2 is violated, since the variational equation does not admit a dichotomy on $\mathbb{Z}^+$ or on $\mathbb{Z}^-$. This loss of hyperbolicity causes an attractor bifurcation, since for

• $\lambda\in(0,1)$, the set $x^* = 0$ is the global attractor;
• $\lambda>1$, the trivial equilibrium $x^* = 0$ becomes unstable and the symmetric interval $A = [x_-, x_+]$ is the global attractor.

Bifurcations of pullback attractors can be observed as nonautonomous versions of pitchfork bifurcations.

Example 16.4 (Pullback attractor bifurcation). Consider for parameter values $\lambda>0$ the difference equation

$x_{n+1} = \lambda x_n - \begin{cases}\min\big\{a_n x_n^3,\ \tfrac\lambda2 x_n\big\}, & x_n\ge0,\\[2pt] \max\big\{a_n x_n^3,\ \tfrac\lambda2 x_n\big\}, & x_n<0,\end{cases}$

where $(a_n)_{n\in\mathbb{Z}}$ is a sequence which is both bounded and bounded away from zero. Note that in a neighborhood $U$ of $0$, the difference equation is given by $x_{n+1} = \lambda x_n - a_n x_n^3$, and outside of a set $V\supset U$, the difference equation is given by $x_{n+1} = \tfrac\lambda2 x_n$. Both $U$ and $V$ here can be chosen independently of $\lambda$ near $\lambda = 1$. Moreover, for fixed $n\in\mathbb{Z}$, the right-hand side of this equation lies between the functions $x\mapsto\tfrac\lambda2 x$ and $x\mapsto\lambda x$.

It is clear that for $\lambda\in(0,1)$, the global pullback attractor is given by the trivial solution, which follows from the fact that points are contracted at each time step by the factor $\lambda$. For $\lambda>1$, the trivial solution is no longer attractive, but there exists a (nontrivial) pullback attractor for $\lambda\in(1,2)$. This follows from Theorem 10.1, because the constant family $B = \{V : n\in\mathbb{Z}\}$ is pullback absorbing (the right-hand side is given by $x\mapsto\tfrac\lambda2 x$ outside of $V$). At the parameter value $\lambda = 1$, the global pullback attractor changes its dimension.
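The dimension change of the pullback attractor in Example 16.4 can be observed numerically. The sketch below is a hypothetical experiment in the autonomous special case $a_n \equiv 1$: for $\lambda\in(0,1)$ the pullback limit is $0$, while for $\lambda\in(1,2)$ a nontrivial branch $\sqrt{\lambda-1}$ appears.

```python
import math

# Illustrative pullback iteration for x_{n+1} = lam*x - min(x^3, (lam/2)x) (x >= 0),
# i.e. Example 16.4 with the constant sequence a_n = 1.

def f(x, lam):
    if x >= 0:
        return lam * x - min(x ** 3, 0.5 * lam * x)
    return lam * x - max(x ** 3, 0.5 * lam * x)

def pullback(lam, x0=2.0, j=200):
    """phi(0, -j, x0): start at time -j and iterate up to time 0."""
    x = x0
    for _ in range(j):
        x = f(x, lam)
    return x

print(abs(pullback(0.5)) < 1e-6)                     # True: trivial attractor {0}
print(abs(pullback(1.5) - math.sqrt(0.5)) < 1e-6)    # True: branch sqrt(lam - 1)
```

For time-varying $a_n$ the nonzero branch becomes a genuinely time-dependent family of sets, which is exactly the nonautonomous pitchfork scenario discussed next.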
Thus, this difference equation provides an example of a nonautonomous pitchfork bifurcation, which will be treated below in Section 17. While these two examples show how (autonomous) bifurcations can be understood as attractor bifurcations, the following scenario is intrinsically nonautonomous (see [44] for a deeper analysis).

Example 16.5 (Shovel bifurcation). Consider a scalar difference equation

$x_{n+1} = a_n(\lambda)x_n, \qquad a_n(\lambda) := \begin{cases}\tfrac12+\lambda, & n<0,\\ \lambda, & n\ge0,\end{cases}$ (45)

with parameters $\lambda>0$. In order to understand the dynamics of (45), distinguish three cases:

i) $\lambda\in(0,\tfrac12)$: The equation (45) has an exponential dichotomy on $\mathbb{Z}$ with projector $P_n\equiv1$. The uniquely determined bounded entire solution is the trivial one, which is uniformly asymptotically stable.

ii) $\lambda>1$: The equation (45) has an exponential dichotomy on $\mathbb{Z}$ with projector $P_n\equiv0$. Again, $0$ is the unique bounded entire solution, but it is now unstable.

iii) $\lambda\in(\tfrac12,1)$: In this situation, (45) has an exponential dichotomy on $\mathbb{Z}^+$ with projector $P^+_n\equiv1$, as well as an exponential dichotomy on $\mathbb{Z}^-$ with projector $P^-_n\equiv0$. Thus condition ii) in Theorem 16.2 is violated and $0$ is a nonhyperbolic solution. For this parameter regime, every solution of (45) is bounded. Moreover, (45) is asymptotically stable, but not uniformly asymptotically stable on the whole time axis $\mathbb{Z}$.

The parameter values $\lambda\in\{\tfrac12,1\}$ are critical. In both situations, the number of bounded entire solutions of the linear difference equation (45) changes drastically. Furthermore, there is a loss of stability in two steps: from uniformly asymptotically stable to asymptotically stable, and finally to unstable, as $\lambda$ increases through the values $\tfrac12$ and $1$. Hence, both values can be considered as bifurcation values, since the number of bounded entire solutions changes as well as their stability properties.

The next example requires the state space to be at least two-dimensional.

Example 16.6 (Fold solution bifurcation).
Consider the planar equation

x_{n+1} = f_n(x_n, λ) := [[b_n, 0], [0, c_n]] x_n + (0, (x_n¹)² − λ)^T      (46)

with components x_n = (x_n¹, x_n²), depending on a parameter λ ∈ R and asymptotically constant sequences

b_n := { 2,   n < 0,        c_n := { 1/2, n < 0,      (47)
       { 1/2, n ≥ 0,               { 2,   n ≥ 0.

The variational equation for (46) corresponding to the trivial solution and the parameter λ* = 0 reads as

x_{n+1} = D_1 f_n(0, 0) x_n = [[b_n, 0], [0, c_n]] x_n.

It admits an exponential dichotomy on Z+, as well as on Z−, with corresponding invariant projectors P_n^+ ≡ [[1, 0], [0, 0]] and P_n^− ≡ [[0, 0], [0, 1]]. This yields

R(P_0^+) ∩ N(P_0^−) = R (1, 0)^T,   R(P_0^+) + N(P_0^−) = R (1, 0)^T,

and therefore condition ii) of Theorem 16.2 is violated. Hence, the trivial solution to (46) for λ = 0 is not hyperbolic.

Fig. 8 Left (supercritical fold): Initial values η ∈ R² yielding a bounded solution φ_λ(·, 0, η) of (46) for different parameter values λ. Right (cusp): Initial values η ∈ R² yielding a bounded solution φ_λ(·, 0, η) of (49) for different parameter values λ

Let φ_λ(·, 0, η) be the general solution to (46). Its first component φ_λ¹ is

φ_λ¹(n, 0, η) = 2^{−|n|} η_1  for all n ∈ Z,      (48)

while the variation of constants formula (cf. [1, p. 59]) can be used to deduce the asymptotic representation

φ_λ²(n, 0, η) = { 2^n (η_2 + (4/7) η_1² − λ) + O(1),   n → ∞,
                { 2^{−n} (η_2 − (2/7) η_1² + 2λ) + O(1),  n → −∞.

Therefore, the sequence φ_λ(·, 0, η) is bounded if and only if η_2 = −(4/7) η_1² + λ and η_2 = (2/7) η_1² − 2λ hold, i.e.,

η_1² = (7/2) λ,   η_2 = −λ.

From the first relation, one sees that there exist two bounded solutions if λ > 0, the trivial solution is the unique bounded solution for λ = 0, and there are no bounded solutions for λ < 0; see Figure 8 (left) for an illustration. For this reason, λ = 0 can be interpreted as a bifurcation value, since the number of bounded entire solutions increases from 0 to 2 as λ increases through 0.
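The boundedness criterion of Example 16.6 lends itself to a quick numerical confirmation. The sketch below (a minimal Python experiment; the parameter value λ = 0.1 and the step counts are our own illustrative choices) iterates (46) forward and backward in time and compares an initial value on the curve η_1² = (7/2)λ, η_2 = −λ with one off the curve:

```python
import math

# the sequences b_n, c_n from (47)
b = lambda n: 2.0 if n < 0 else 0.5
c = lambda n: 0.5 if n < 0 else 2.0

def forward(eta, lam, steps=30):
    """Iterate (46) forward from x_0 = eta; return the second component."""
    x1, x2 = eta
    for n in range(steps):
        x1, x2 = b(n) * x1, c(n) * x2 + x1**2 - lam
    return x2

def backward(eta, lam, steps=30):
    """Solve (46) backwards from x_0 = eta (the diagonal part is invertible)."""
    x1, x2 = eta
    for n in range(-1, -steps - 1, -1):
        x1 = x1 / b(n)
        x2 = (x2 - x1**2 + lam) / c(n)
    return x2

lam = 0.1
eta_good = (math.sqrt(3.5 * lam), -lam)  # on the curve eta1^2 = (7/2) lam, eta2 = -lam
eta_bad = (math.sqrt(3.5 * lam), 0.0)    # off the curve
print(forward(eta_good, lam), forward(eta_bad, lam), backward(eta_good, lam))
```

The orbit through η on the curve stays bounded in both time directions, while the off-curve initial value is blown up by the factor 2^n in forward time, in agreement with the asymptotic representation above.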
The method of explicit solutions can also be applied to the nonlinear equation

x_{n+1} = f_n(x_n, λ) := [[b_n, 0], [0, c_n]] x_n + (0, (x_n¹)³ − λ)^T.      (49)

However, using the variation of constants formula (cf. [1, p. 59]), it is possible to show that the crucial second component of the general solution φ_λ(·, 0, η) of (49) fulfills

φ_λ²(n, 0, η) = { 2^n (η_2 + (8/15) η_1³ − λ) + O(1),   n → ∞,
                { 2^{−n} (η_2 − (2/15) η_1³ + 2λ) + O(1),  n → −∞.

Since the first component is given in (48), φ_λ(·, 0, η) is bounded if and only if η_2 = −(8/15) η_1³ + λ and η_2 = (2/15) η_1³ − 2λ, which in turn is equivalent to

η_1 = ((9/2) λ)^{1/3},   η_2 = −(7/5) λ.

Hence, these particular initial values η ∈ R² given by the cusp-shaped curve depicted in Figure 8 (right) lead to bounded entire solutions of (49).

17 Attractor bifurcation

An easy example for a bifurcation of a pullback attractor was discussed already in Example 16.4. Now a general bifurcation pattern will be derived, which ensures, under certain conditions on Taylor coefficients, that a pullback attractor changes qualitatively under variation of the parameter. This generalizes the autonomous pitchfork bifurcation pattern. Although the pullback attractor discussed in Example 16.4 is a global attractor, the pitchfork bifurcation only yields results for a local pullback attractor.

Definition 17.1. Consider a process φ on a metric state space X. A φ-invariant family A = {A_n : n ∈ Z} of nonempty compact subsets of X is called a local pullback attractor if there exists an η > 0 such that

lim_{k→∞} dist(φ(n, n − k, B_η(A_{n−k})), A_n) = 0  for all n ∈ Z.

A local pullback attractor is a special case of a pullback attractor w.r.t. a certain basin of attraction, which was introduced in Definition 13.2. Here, the basin of attraction has to be chosen as a neighborhood of the local pullback attractor.

Suppose now that (∆_λ) is a scalar equation (d = 1) with the trivial solution for all parameters λ from an interval Λ ⊆ R.
The transition matrix of the corresponding variational equation x_{n+1} = D_1 f_n(0, λ) x_n is denoted by Φ_λ(n, l) ∈ R. When dealing with attractor bifurcations, the hyperbolicity condition i) in Theorem 16.2 will be violated. This yields a nonautonomous counterpart to the classical pitchfork bifurcation pattern.

Theorem 17.1 (Nonautonomous pitchfork bifurcation). Suppose that f_n(·, λ) : R → R is invertible and of class C⁴ with D_1² f_n(0, λ) = 0 for all n ∈ Z and λ ∈ Λ. Suppose there exists a λ* ∈ R such that the following hypotheses hold.

• Hypothesis on linear part: There exist a K ≥ 1 and functions β_1, β_2 : Λ → (0, ∞) which are either both increasing or both decreasing with lim_{λ→λ*} β_1(λ) = lim_{λ→λ*} β_2(λ) = 1 and

Φ_λ(n, l) ≤ K β_1(λ)^{n−l}  for all l ≤ n,
Φ_λ(n, l) ≤ K β_2(λ)^{n−l}  for all n ≤ l

and all λ ∈ Λ.

• Hypothesis on nonlinearity: Assume that if the functions β_1 and β_2 are increasing, then

−∞ < liminf_{λ→λ*} inf_{n∈Z} D_1³ f_n(0, λ) ≤ limsup_{λ→λ*} sup_{n∈Z} D_1³ f_n(0, λ) < 0,

and otherwise (i.e., if the functions β_1 and β_2 are decreasing), then

0 < liminf_{λ→λ*} inf_{n∈Z} D_1³ f_n(0, λ) ≤ limsup_{λ→λ*} sup_{n∈Z} D_1³ f_n(0, λ) < ∞.

In addition, suppose that the remainder satisfies

lim_{x→0} sup_{λ∈(λ*−x², λ*+x²)} sup_{n≤0} (K x³ / (1 − min{β_1(λ), β_2(λ)^{−1}})) ∫_0^1 (1 − t)³ D_1⁴ f_n(tx, λ) dt = 0,

limsup_{λ→λ*} limsup_{x→0} sup_{n≤0} ∫_0^1 (1 − t)³ D_1⁴ f_n(tx, λ) dt < 3.

Then there exist λ_− < λ* < λ_+ so that the following statements hold:

1) If the functions β_1, β_2 are increasing, the trivial solution is a local pullback attractor for λ ∈ (λ_−, λ*), which bifurcates to a nontrivial local pullback attractor {A_n^λ : n ∈ Z}, λ ∈ (λ*, λ_+), satisfying the limit lim_{λ→λ*} sup_{n≤0} dist(A_n^λ, {0}) = 0.
2) If the functions β_1, β_2 are decreasing, the trivial solution is a local pullback attractor for λ ∈ (λ*, λ_+), which bifurcates to a nontrivial local pullback attractor {A_n^λ : n ∈ Z}, λ ∈ (λ_−, λ*), satisfying the limit lim_{λ→λ*} sup_{n≤0} dist(A_n^λ, {0}) = 0.

For a proof of this theorem and extensions (to both different time domains and repellers), see [51, 53]. The next example, taken from [19], illustrates the above theorem.

Example 17.1. Consider the nonautonomous difference equation

x_{n+1} = λ x_n / (1 + (b_n/λ) x_n^q),      (50)

where q ∈ N and the sequence (b_n)_{n∈Z} is positive and both bounded and bounded away from zero. For q = 1, this difference equation can be transformed into the well-known Beverton–Holt equation, which describes the density of a population in a fluctuating environment. It was shown in [19] that in this case, the system admits a nonautonomous transcritical bifurcation (the bifurcation pattern of which was derived in [51]). For q = 2, a nonautonomous pitchfork bifurcation occurs. The above theorem can be applied, because the Taylor expansion of the right-hand side of (50) reads as λ x_n − b_n x_n^{q+1} + O(x_n^{2q+1}), and the remainder fulfills the conditions of the theorem (see [19] for details). This means that for λ ∈ (0, 1), the trivial solution is a local pullback attractor, which undergoes a transition to a nontrivial local pullback attractor when λ > 1. Note that the extremal solutions of the nontrivial local pullback attractor for λ > 1 are also local pullback attractors, which allows the interpretation of this bifurcation as a bifurcation of locally pullback attractive solutions.

18 Solution bifurcation

In the previous section on attractor bifurcations, the first hyperbolicity condition i) in Theorem 16.2, given by exponential dichotomies on both semiaxes, was violated. The present concept of solution bifurcation is based on the assumption that merely condition ii) of Theorem 16.2 does not hold.
This requires the variational difference equation (V_λ) to be intrinsically nonautonomous. Indeed, if (V_λ) is almost periodic, then an exponential dichotomy on a semiaxis extends to the whole integer axis (cf. [61, Theorem 2]) and the reference solution χ = (χ_n)_{n∈Z} becomes hyperbolic. For this reason, the following bifurcation scenarios cannot occur for periodic or autonomous difference equations.

The crucial and standing assumption is the following:

Hypothesis: The variational equation (V_λ) admits an ED both on Z+ (with projector P_n^+) and on Z− (with projector P_n^−) such that there exist nonzero vectors ξ_1 ∈ R^d, ξ_1' ∈ R^d satisfying

R(P_0^+) ∩ N(P_0^−) = R ξ_1,   (R(P_0^+) + N(P_0^−))^⊥ = R ξ_1'.      (51)

Then a solution bifurcation is understood as follows: Suppose that for a fixed parameter λ* ∈ Λ, the difference equation (∆_{λ*}) has an entire bounded reference solution χ* = χ(λ*). One says that (∆_λ) undergoes a bifurcation at λ = λ* along χ*, or χ* bifurcates at λ*, if there exists a convergent parameter sequence (λ_n)_{n∈N} in Λ with limit λ* so that (∆_{λ_n}) has two distinct entire solutions χ¹_{λ_n}, χ²_{λ_n} ∈ ℓ∞ both satisfying

lim_{n→∞} χ¹_{λ_n} = lim_{n→∞} χ²_{λ_n} = χ*.

The above Hypothesis allows a geometrical insight into the following abstract bifurcation results using invariant fiber bundles, i.e., nonautonomous counterparts to invariant manifolds: Because (V_λ) has an exponential dichotomy on Z+, there exists a stable fiber bundle χ* + W_λ^+ consisting of all solutions to (∆_λ) approaching χ* in forward time. Here, W_λ^+ is locally a graph over the stable vector bundle {R(P_n^+) : n ∈ Z+}. Analogously, the dichotomy on Z− guarantees an unstable fiber bundle χ* + W_λ^− consisting of solutions decaying to χ* in backward time (cf. [47, Corollary 2.23]). Then the bounded entire solutions to (∆_λ) are contained in the intersection (χ* + W_λ^+) ∩ (χ* + W_λ^−). In particular, the intersection of the fibers

S_λ := (χ_0* + W_{λ,0}^+) ∩ (χ_0* + W_{λ,0}^−) ⊆ R^d

yields initial values for bounded entire solutions (see Figure 9).

It can be assumed without loss of generality, using the equation of perturbed motion, that χ* = 0. In addition, suppose that f_n(0, λ) ≡ 0 on Z, which means that (∆_λ) has the trivial solution for all λ ∈ Λ. The corresponding variational equation is x_{n+1} = D_1 f_n(0, λ) x_n with transition matrix Φ_λ(n, l) ∈ R^{d×d}.

Fig. 9 Intersection S_λ ⊆ R^d of the stable fiber bundle φ* + W_λ^+ ⊆ Z+ × R^d with the unstable fiber bundle φ* + W_λ^− ⊆ Z− × R^d at time k = 0 yields two bounded entire solutions φ¹, φ² to (∆_λ), indicated as dotted dashed lines

Theorem 18.1 (Bifurcation from known solutions). Let Λ ⊆ R and suppose f_n is of class C^m, m ≥ 2. If the transversality condition

g_{11} := Σ_{n∈Z} ⟨Φ_{λ*}(0, n+1)' ξ_1', D_1 D_2 f_n(0, λ*) Φ_{λ*}(n, 0) ξ_1⟩ ≠ 0      (52)

is satisfied, then the trivial solution of the difference equation (∆_λ) bifurcates at λ*. In particular, there exist a ρ > 0, open convex neighborhoods U ⊆ ℓ∞ of 0 and Λ_0 ⊆ Λ of λ*, and C^{m−1}-functions ψ : (−ρ, ρ) → U, λ : (−ρ, ρ) → Λ_0 with

1) ψ(0) = 0, λ(0) = λ* and ψ̇(0) = Φ_{λ*}(·, 0) ξ_1,
2) each ψ(s) is a nontrivial solution of (∆_{λ(s)}) homoclinic to 0, i.e., lim_{n→±∞} ψ(s)_n = 0.

Proof. See [46, Theorem 2.14]. ⊔⊓

Corollary 18.1 (Transcritical bifurcation). Under the additional assumption

g_{20} := Σ_{n∈Z} ⟨Φ_{λ*}(0, n+1)' ξ_1', D_1² f_n(0, λ*)[Φ_{λ*}(n, 0) ξ_1]²⟩ ≠ 0,

one has λ̇(0) = −g_{20}/(2 g_{11}), and the following holds locally in U × Λ_0: The difference equation (∆_λ) has a unique nontrivial entire bounded solution ψ(λ) for λ ≠ λ*, and 0 is the unique entire bounded solution of (∆_{λ*}); moreover, ψ(λ) is homoclinic to 0.

Proof. See [46, Corollary 2.16]. ⊔⊓

Fig. 10 Left (transcritical): Initial values η ∈ R² yielding a homoclinic solution φ_λ(·, 0, η) of (53) for different parameter values λ. Right (supercritical pitchfork): Initial values η ∈ R² yielding a homoclinic solution φ_λ(·, 0, η) of (54) for different parameter values λ

Example 18.1. Consider the nonlinear difference equation

x_{n+1} = f_n(x_n, λ) := [[b_n, 0], [0, c_n]] x_n + (0, λ x_n¹ + (x_n¹)²)^T      (53)

depending on a bifurcation parameter λ ∈ R and the sequences b_n, c_n defined in (47). As in Example 16.6, the assumptions hold with λ* = 0 and

g_{11} = 4/3 ≠ 0,   g_{20} = 12/7 ≠ 0.

Hence, Corollary 18.1 can be applied in order to see that the trivial solution of (53) has a transcritical bifurcation at λ = 0. Again, this bifurcation will be described quantitatively. While the first component of the general solution φ_λ(·, 0, η) given by (48) is homoclinic, the second component satisfies

φ_λ²(n, 0, η) = { 2^n (η_2 + (4/7) η_1² + (2λ/3) η_1) + o(1),   n → ∞,
                { 2^{−n} (η_2 − (2/7) η_1² − (2λ/3) η_1) + o(1),  n → −∞.

In conclusion, one sees that φ_λ(·, 0, η) is bounded if and only if η = (0, 0) or

η_1 = −(14/9) λ,   η_2 = −(28/81) λ².

Hence, besides the zero solution, there is a unique nontrivial entire solution passing through the initial point η = (η_1, η_2) at time n = 0 for λ ≠ 0. This means the solution bifurcation pattern sketched in Figure 10 (left) holds.

Corollary 18.2 (Pitchfork bifurcation). For m ≥ 3 and under the additional assumptions

Σ_{n∈Z} ⟨Φ_{λ*}(0, n+1)' ξ_1', D_1² f_n(0, λ*)[Φ_{λ*}(n, 0) ξ_1]²⟩ = 0,
g_{30} := Σ_{n∈Z} ⟨Φ_{λ*}(0, n+1)' ξ_1', D_1³ f_n(0, λ*)[Φ_{λ*}(n, 0) ξ_1]³⟩ ≠ 0,

one has λ̇(0) = 0, λ̈(0) = −g_{30}/(3 g_{11}), and the following holds locally in U × Λ_0:

3) Subcritical case: If g_{30}/g_{11} > 0, then the unique entire bounded solution of (∆_λ) is the trivial one for λ ≥ λ*, and (∆_λ) has exactly two nontrivial entire solutions for λ < λ*; both are homoclinic to 0.
4) Supercritical case: If g_{30}/g_{11} < 0, then the unique entire bounded solution of (∆_λ) is the trivial one for λ ≤ λ*, and (∆_λ) has exactly two nontrivial entire solutions for λ > λ*; both are homoclinic to 0.

Proof. See [46, Corollary 2.16]. ⊔⊓

Example 18.2. Let δ be a fixed nonzero real number and consider the nonlinear difference equation

x_{n+1} = f_n(x_n, λ) := [[b_n, 0], [0, c_n]] x_n + (0, λ x_n¹ + δ (x_n¹)³)^T      (54)

depending on a bifurcation parameter λ ∈ R and the b_n, c_n defined in (47). As in Example 18.1, the assumptions of Corollary 18.2 are fulfilled with λ* = 0. The transversality condition here reads g_{11} = 4/3 ≠ 0. Moreover, D_1² f_n(0, 0) ≡ 0 on Z implies g_{20} = 0, whereas the relation D_1³ f_n(0, 0)ζ³ = (0, 6δ ζ_1³)^T for all n ∈ Z, ζ ∈ R² leads to g_{30} = 4δ ≠ 0. This gives the crucial quotient g_{30}/g_{11} = 3δ. By Corollary 18.2, the trivial solution to (54) undergoes a subcritical (supercritical) pitchfork bifurcation at λ = 0 provided δ > 0 (resp. δ < 0).

As before, one can illustrate this result using the general solution φ_λ(·, 0, η) to (54). The first component is given by (48) and helps to show for the second component that

φ_λ²(n, 0, η) = { 2^n (η_2 + (8δ/15) η_1³ + (2λ/3) η_1) + o(1),   n → ∞,
                { 2^{−n} (η_2 − (2δ/15) η_1³ − (2λ/3) η_1) + o(1),  n → −∞.

This asymptotic representation shows that φ_λ(·, 0, η) is homoclinic to 0 if and only if η = 0 or

η_1² = −(2/δ) λ,   η_2 = (2λ/5) η_1.

Hence, there is a correspondence to the pitchfork solution bifurcation from Corollary 18.2; see Figure 10 (right) for an illustration.

Remarks. In [51, Theorem 5.1] one finds a nonautonomous generalization for transcritical bifurcations.

Chapter 8
Random dynamical systems

Random dynamical systems on a state space X are nonautonomous by the very nature of the driving noise. They can be formulated as skew-product systems with the driving system acting on a probability sample space Ω rather than on a topological or metric parameter space P.
A major difference is that only measurability, and not continuity, w.r.t. the parameter can be assumed, which changes the types of results that can be proved. In particular, the skew-product system does not form an autonomous semidynamical system on the product space Ω × X. Nevertheless, there are many interesting parallels with the theory of deterministic nonautonomous dynamical systems. For further details see Arnold [2] and, for example, also [29], where the temporal discretization of random differential equations is also considered.

19 Random difference equations

Let (Ω, F, P) be a probability space and let {ξ_n, n ∈ Z} be a discrete-time stochastic process taking values in some space Ξ, i.e., a sequence of random variables or, equivalently, F-measurable mappings ξ_n : Ω → Ξ for n ∈ Z. Let (X, d) be a complete metric space and consider a mapping g : Ξ × X → X. Then

x_{n+1}(ω) = g(ξ_n(ω), x_n(ω))  for all n ∈ Z, ω ∈ Ω,      (55)

is a random difference equation on X driven by the stochastic process ξ_n.

Greater generality can be achieved by representing the driving noise process by a metrical (i.e., measure-theoretic) dynamical system θ on some canonical sample space Ω, i.e., the group of F-measurable mappings {θ_n, n ∈ Z} under composition formed by iterating a measurable mapping θ : Ω → Ω and its measurable inverse mapping θ^{−1} : Ω → Ω, i.e., with θ_0 = id_Ω and

θ_{n+1} := θ ∘ θ_n,   θ_{−n−1} := θ^{−1} ∘ θ_{−n}  for all n ∈ N,

where θ_{−1} := θ^{−1}. It is usually assumed that θ generates an ergodic process on Ω.

Let f : Ω × X → X be an F ⊗ B(X)-measurable mapping, where B(X) is the Borel σ-algebra on X. Then, in this context, a random difference equation has the form

x_{n+1}(ω) = f(θ_n(ω), x_n(ω))  for all n ∈ Z, ω ∈ Ω.      (56)

Define recursively a solution mapping ϕ : Z+ × Ω × X → X of the random difference equation (56) by ϕ(0, ω, x) := x and

ϕ(n+1, ω, x) := f(θ_n(ω), ϕ(n, ω, x))  for all n ∈ N, x ∈ X and ω ∈ Ω.
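The recursion defining ϕ can be sketched in a few lines of Python. In this toy realization (the map f, the noise law, and the use of the time index as a stand-in for θ_n(ω) are our own illustrative choices), the driving system is simply the shift on the index of a fixed noise realization:

```python
import random

def xi(n):
    """A fixed noise realization: one deterministic sample value per time index n."""
    return random.Random(n).uniform(0.5, 1.5)

def f(n, x):
    """Right-hand side f(theta_n(omega), x); the index n stands in for theta_n(omega)."""
    return xi(n) * x / (1.0 + abs(x))

def phi(n, k, x):
    """Solution mapping phi(n, theta_k(omega), x): n steps, starting at noise index k."""
    for j in range(k, k + n):
        x = f(j, x)
    return x

# the cocycle property phi(n + m, omega, x) = phi(n, theta_m(omega), phi(m, omega, x))
lhs = phi(7, 0, 2.0)
rhs = phi(4, 3, phi(3, 0, 2.0))
assert abs(lhs - rhs) < 1e-12
```

The final assertion checks exactly the identity stated next: composing m steps from ω with n steps from the shifted sample θ_m(ω) is the same as n + m steps from ω.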
Then, ϕ satisfies the discrete-time cocycle property w.r.t. θ, i.e.,

ϕ(n + m, ω, x) = ϕ(n, θ_m(ω), ϕ(m, ω, x))  for all m, n ∈ Z+, x ∈ X and ω ∈ Ω.

The mapping ϕ is called a cocycle mapping. In terms of Arnold [2], the random difference equation (56) generates a discrete-time random dynamical system (θ, ϕ) on Ω × X with the metric dynamical system θ on the probability space (Ω, F, P) and the cocycle mapping ϕ on the state space X.

Definition 19.1. A (discrete-time) random dynamical system (θ, ϕ) on Ω × X consists of a metrical dynamical system θ on Ω, i.e., a group of measure-preserving mappings θ_n : Ω → Ω, n ∈ Z, such that

i) θ_0 = id_Ω and θ_n ∘ θ_m = θ_{n+m} for all n, m ∈ Z,
ii) the map ω ↦ θ_n(ω) is measurable and invariant w.r.t. P in the sense that θ_n(P) = P for each n ∈ Z,

and a cocycle mapping ϕ : Z+ × Ω × X → X such that

a) ϕ(0, ω, x_0) = x_0 for all x_0 ∈ X and ω ∈ Ω,
b) ϕ(n + m, ω, x_0) = ϕ(n, θ_m(ω), ϕ(m, ω, x_0)) for all n, m ∈ Z+, x_0 ∈ X and ω ∈ Ω,
c) x_0 ↦ ϕ(n, ω, x_0) is continuous for each (n, ω) ∈ Z+ × Ω,
d) ω ↦ ϕ(n, ω, x_0) is F-measurable for all (n, x_0) ∈ Z+ × X.

The notation θ_n(P) = P for the measure-preserving property of θ_n w.r.t. P is just a compact way of writing P(θ_n(A)) = P(A) for all n ∈ Z, A ∈ F.

A systematic treatment of the theory of random dynamical systems, in both continuous and discrete time, is propounded in Arnold [2]. Note that π = (θ, ϕ) has a skew-product structure on Ω × X, but it is not an autonomous semidynamical system on Ω × X since no topological structure is assumed on Ω.

20 Random attractors

Unlike a deterministic skew-product system, a random dynamical system (θ, ϕ) on Ω × X is not an autonomous semidynamical system on Ω × X. Nevertheless, skew-product deterministic systems and random dynamical systems have many analogous properties, and concepts and results for one can often be used with appropriate modifications for the other.
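A toy scalar example illustrates the pathwise pullback construction that underlies random attractors (the affine model x_{n+1} = x_n/2 + ξ_n and the concrete noise realization are our own illustrative choices, not taken from the text):

```python
import random

def xi(n):
    """Fixed noise realization xi_n(omega): one deterministic value per integer index."""
    return random.Random(n).uniform(-1.0, 1.0)

def pullback(n_steps, x0):
    """phi(n, theta_{-n}(omega), x0): start at time -n_steps and iterate up to time 0."""
    x = x0
    for j in range(-n_steps, 0):
        x = 0.5 * x + xi(j)
    return x

# the pathwise pullback limit a_omega = sum_{k >= 1} 2^{-(k-1)} xi_{-k} exists and
# is independent of x0: it is the time-0 fiber of a singleton random attractor
a = sum(0.5 ** (k - 1) * xi(-k) for k in range(1, 60))
assert abs(pullback(60, 10.0) - a) < 1e-9
assert abs(pullback(60, -7.0) - a) < 1e-9
```

Pulling the initial time back to −∞ while keeping the end time fixed produces, pathwise, a limit point depending only on the noise sample, which is the mechanism formalized below.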
The most significant modification concerns measurability, and the nonautonomous sets under consideration are random sets. Let (X, d) be a complete and separable metric space (i.e., a Polish space).

Definition 20.1. A family D = {D_ω, ω ∈ Ω} of nonempty subsets of X is called a random set if the mapping ω ↦ dist(x, D_ω) is F-measurable for all x ∈ X. A random set D is called a random closed set if D_ω is closed for each ω ∈ Ω, and is called a random compact set if D_ω is compact for each ω ∈ Ω.

Random sets are called tempered if their growth w.r.t. the driving system θ is sub-exponential (cf. Definition 15.1).

Definition 20.2. A random set D = {D_ω, ω ∈ Ω} in X is said to be tempered if there exists an x_0 ∈ X such that

D_ω ⊂ {x ∈ X : d(x, x_0) ≤ r(ω)}  for all ω ∈ Ω,

where the random variable r(ω) > 0 is tempered, i.e.,

sup_{n∈Z} {r(θ_n(ω)) e^{−γ|n|}} < ∞  for all ω ∈ Ω, γ > 0.

The collection of all tempered random sets in X will be denoted by D.

A random attractor of a random dynamical system is a random set which is a pullback attractor in the pathwise sense w.r.t. the attracting basin of tempered random sets.

Definition 20.3. A random compact set A = (A_ω)_{ω∈Ω} from D is called a random attractor of a random dynamical system (θ, ϕ) on Ω × X in D if A is ϕ-invariant, i.e.,

ϕ(n, ω, A_ω) = A_{θ_n(ω)}  for all n ∈ Z+, ω ∈ Ω,

and pathwise pullback attracting in D, i.e.,

lim_{n→∞} dist(ϕ(n, θ_{−n}(ω), D_{θ_{−n}(ω)}), A_ω) = 0  for all ω ∈ Ω, D ∈ D.

If the random attractor consists of singleton sets, i.e., A_ω = {Z*(ω)} for some random variable Z* with Z*(ω) ∈ X, then Z̄_n(ω) := Z*(θ_n(ω)) defines a stationary stochastic process on X.

The existence of a random attractor is ensured by that of a pullback absorbing set. The tempered random set B = {B_ω, ω ∈ Ω} in the following theorem is called a pullback absorbing random set.

Theorem 20.1 (Existence of random attractors).
Let (θ, ϕ) be a random dynamical system on Ω × X such that ϕ(n, ω, ·) : X → X is a compact operator for each fixed n > 0 and ω ∈ Ω. If there exist a tempered random set B = {B_ω, ω ∈ Ω} with closed and bounded component sets and, for every tempered random set D = {D_ω, ω ∈ Ω}, an N_{D,ω} ≥ 0 such that

ϕ(n, θ_{−n}(ω), D_{θ_{−n}(ω)}) ⊂ B_ω  for all n ≥ N_{D,ω},      (57)

then the random dynamical system (θ, ϕ) possesses a random pullback attractor A = {A_ω : ω ∈ Ω} with component sets defined by

A_ω = ⋂_{m>0} cl ⋃_{n≥m} ϕ(n, θ_{−n}(ω), B_{θ_{−n}(ω)})  for all ω ∈ Ω.      (58)

The proof of Theorem 20.1 is essentially the same as that of its counterparts for deterministic skew-product systems. The only new feature is that of measurability, i.e., to show that A = {A_ω, ω ∈ Ω} is a random set. This follows from the fact that the set-valued mappings ω ↦ ϕ(n, θ_{−n}(ω), B_{θ_{−n}(ω)}) are measurable for each n ∈ Z+.

Arnold & Schmalfuß [3] showed that a random attractor is also a forward attractor in the weaker sense of convergence in probability, i.e.,

lim_{n→∞} ∫_Ω dist(ϕ(n, ω, D_ω), A_{θ_n(ω)}) P(dω) = 0  for all D ∈ D.

This allows individual sample paths to have large deviations from the attractor, but all converge in this probabilistic sense.

21 Random Markov chains

Discrete-time finite-state Markov chains with a tridiagonal structure are common in biological applications. They have a transition matrix I_N + ∆Q, where I_N is the N × N identity matrix and Q is the tridiagonal N × N matrix

Q = ⎡ −q_1      q_2                                            ⎤
    ⎢  q_1   −(q_2+q_3)   q_4                                  ⎥
    ⎢          ⋱            ⋱             ⋱                    ⎥      (59)
    ⎢         q_{2N−5}   −(q_{2N−4}+q_{2N−3})    q_{2N−2}      ⎥
    ⎣                       q_{2N−3}          −q_{2N−2}        ⎦

where the q_j are positive constants. Such a Markov chain is a first-order linear difference equation

p^{(n+1)} = [I_N + ∆Q] p^{(n)}      (60)

on the probability simplex Σ_N in R^N defined by

Σ_N = { p = (p_1, …, p_N)^T : Σ_{j=1}^N p_j = 1, p_1, …, p_N ∈ [0, 1] }.

The Perron–Frobenius theorem applies to the matrix L_∆ := I_N + ∆Q when ∆ > 0 is chosen sufficiently small.
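This setup is easy to check numerically. The following minimal NumPy sketch (matrix size N = 4, the rates q_i, and ∆ = 0.2 are our own illustrative choices) builds a generator Q of the form (59), verifies that L_∆ = I_N + ∆Q is column-stochastic with nonnegative entries, and finds its Perron eigenvector by power iteration:

```python
import numpy as np

def Q_matrix(q):
    """Tridiagonal generator of the form (59); every column sums to zero."""
    N = len(q) // 2 + 1
    Q = np.zeros((N, N))
    for j in range(N - 1):
        down, up = q[2 * j], q[2 * j + 1]  # q_{2j+1}, q_{2j+2} in the text's numbering
        Q[j, j] -= down                    # transition j -> j+1 at rate q_{2j+1}
        Q[j + 1, j] += down
        Q[j + 1, j + 1] -= up              # transition j+1 -> j at rate q_{2j+2}
        Q[j, j + 1] += up
    return Q

N = 4
q = np.array([0.6, 1.2, 0.8, 1.0, 1.5, 0.9])  # the 2N - 2 positive rates
Delta = 0.2                                   # small enough: Delta < 1/(2 * max(q))
L = np.eye(N) + Delta * Q_matrix(q)

# L is column-stochastic with nonnegative entries, so Perron-Frobenius applies
assert np.allclose(L.sum(axis=0), 1.0) and (L >= 0).all()

# power iteration converges to the probability eigenvector pbar with L pbar = pbar
pbar = np.full(N, 1.0 / N)
for _ in range(500):
    pbar = L @ pbar
print(pbar)
```

With ω-dependent rates q_i(ω), the same construction produces the random transition matrices discussed in the remainder of this section.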
In particular, it has the eigenvalue λ = 1 with a positive eigenvector x̄, which can be normalized (in the ‖·‖_1 norm) to give a probability vector p̄, i.e., [I_N + ∆Q] p̄ = p̄, so Q p̄ = 0. Specifically, the probability vector is given by

p̄_1 = 1/‖x̄‖_1,   p̄_{j+1} = (1/‖x̄‖_1) Π_{i=1}^j (q_{2i−1}/q_{2i})  for all j = 1, …, N−1,

where

‖x̄‖_1 = Σ_{j=1}^N x̄_j = 1 + Σ_{j=1}^{N−1} Π_{i=1}^j (q_{2i−1}/q_{2i}).

The following result is well known.

Theorem 21.1. The probability eigenvector p̄ is an asymptotically stable steady state of the difference equation (60) on the simplex Σ_N.

In a random environment, e.g., with randomly varying food supply, the transition probabilities may be random, i.e., the band entries q_i of the matrix Q may depend on the sample space parameter ω ∈ Ω. Thus, q_i = q_i(ω) for i = 1, 2, …, 2N−2, and these may vary in turn according to some metric dynamical system θ on the probability space (Ω, F, P). The following basic assumption will be used.

Assumption 1. There exist numbers 0 < α ≤ β < ∞ such that the uniform estimates

α ≤ q_i(ω) ≤ β  for all ω ∈ Ω, i = 1, 2, …, 2N−2,      (61)

hold.

Let L be a set of linear operators L_ω : R^N → R^N parametrized by the parameter ω taking values in some set Ω, and let {θ_n, n ∈ Z} be a group of maps of Ω onto itself. The maps L_ω serve as the generator of a linear cocycle F_L(n, ω). Then (θ, F_L) is a random dynamical system on Ω × Σ_N.

Theorem 21.2. Let F_L(n, ω) be the linear cocycle

F_L(n, ω) x = L_{θ_{n−1}(ω)} ⋯ L_{θ_1(ω)} L_{θ_0(ω)} x

with matrices L_ω := I_N + ∆Q(ω), where the tridiagonal matrices Q(ω) are of the form (59) with the entries q_i = q_i(ω) satisfying the uniform estimates (61) in Assumption 1. In addition, suppose that 0 < ∆ < 1/(2β). Then, the simplex Σ_N is positively invariant under F_L(n, ω), i.e., F_L(n, ω)Σ_N ⊆ Σ_N for all ω ∈ Ω. Moreover, for n large enough, the restriction of F_L(n, ω) to the set Σ_N is a uniformly dissipative and uniformly contractive cocycle (w.r.t. the Hilbert metric), which has a random attractor A = {A_ω, ω ∈ Ω} such that each set A_ω, ω ∈ Ω, consists of a single point.

The proof can be found in [30]. It involves positive matrices and the Hilbert projective metric on positive cones in R^N.

Henceforth write A_ω = {a_ω} for the singleton component subsets of the random attractor A. Then the random attractor is an entire random sequence {a_{θ_n(ω)}, n ∈ Z} in Σ_N(γ) ⊂ int Σ_N, where

Σ_N(γ) = { x = (x_1, x_2, …, x_N) : Σ_{i=1}^N x_i = 1, x_1, x_2, …, x_N ≥ γ^{N−1} }

with γ := min{∆α, 1 − 2∆β} > 0. It attracts other iterates of the random Markov chain in the pullback sense. Pullback convergence involves starting at earlier initial times with a fixed end time. It is, in general, not the same as forward convergence in the sense usually understood in dynamical systems, but here it is the same due to the uniform boundedness of the contractive rate w.r.t. ω.

Corollary 21.1. For any norm ‖·‖ on R^N, p^{(0)} ∈ Σ_N and ω ∈ Ω,

‖p^{(n)}(ω) − a_{θ_n(ω)}‖ → 0  as n → ∞.

The random attractor is, in fact, asymptotically Lyapunov stable in the conventional forward sense.

22 Approximating invariant measures

Consider now a compact metric space (X, d). A random difference equation (56) on X driven by the noise process θ generates a random dynamical system (θ, ϕ). It can be reformulated as a difference equation with a triangular or skew-product structure

(ω, x) ↦ F(ω, x) := (θ(ω), f(ω, x)).

An invariant measure µ of F = (θ, ϕ) on Ω × X, defined by µ = F*µ (which is shorthand for an integral expression), can be decomposed as µ(dω, B) = µ_ω(B) P(dω) for all B ∈ B(X), where the measures µ_ω on X are θ-invariant w.r.t. f, i.e.,

µ_{θ(ω)}(B) = µ_ω(f^{−1}(ω, B))  for all B ∈ B(X), ω ∈ Ω.

This decomposition is very important since only the state space X, but not the sample space Ω, can be discretized.
To compute a given invariant measure µ, consider a sequence of finite subsets X_N of X given by

X_N = {x_1^{(N)}, …, x_N^{(N)}} ⊂ X  for N ∈ N

with maximal step size

h_N = sup_{x∈X} dist(x, X_N)

such that h_N → 0 as N → ∞. Then the invariant measure µ will be approximated by a sequence of invariant stochastic vectors associated with random Markov chains describing transitions between the states of the discretized state spaces X_N. These involve random N × N matrices, i.e., measurable mappings P_N : Ω → S_N, where S_N denotes the set of N × N (nonrandom) stochastic matrices, satisfying the property

P_N^n(θ_m(ω)) P_N^m(ω) = P_N^{m+n}(ω)  for all m, n ∈ Z+.      (62)

Recall that a stochastic matrix has non-negative entries with the columns summing to 1.

Consider a random Markov chain {P_N(ω), ω ∈ Ω} and a random probability vector {p_N(ω), ω ∈ Ω} on the deterministic grid X_N. Then

p_{N,n+1}(θ_{n+1}(ω)) = p_{N,n}(θ_n(ω)) P_N(θ_n(ω)),

and an equilibrium probability vector is defined by

p̄_N(θ(ω)) = p̄_N(ω) P_N(ω)  for all ω ∈ Ω.

It can be represented trivially as a random measure µ_{N,ω} on X. The distance between random probability measures will be given by the Prokhorov metric ρ, and the distance between a random Markov chain P : Ω → S_N and the generating mapping f of the random dynamical system is defined by

D(P(ω), f) = Σ_{i,j=1}^N p_{i,j}(ω) dist_{X×X}((x_i^{(N)}, x_j^{(N)}), Gr f(ω, ·)),      (63)

where the distance to the random graph is given by

dist_{X×X}((x, y), Gr f(ω, ·)) = inf_{z∈X} max{d(x, z), d(y, f(ω, z))}  for all x, y ∈ X.

The following necessary and sufficient result holds if θ-semi-invariant rather than θ-invariant families of decomposed probability measures are used.

Definition 22.1. A family of probability measures µ_ω on X is called θ-semi-invariant w.r.t. f if

µ_{θ(ω)}(B) ≤ µ_ω(f^{−1}(ω, B))  for all B ∈ B(X), ω ∈ Ω.

Such θ-semi-invariant families are, in fact, θ-invariant when the mappings x ↦ f(ω, x) are continuous.

Theorem 22.1.
A random probability measure {µ_ω, ω ∈ Ω} is θ-semi-invariant w.r.t. f on X if and only if it is randomly stochastically approachable, i.e., for each N there exist

i) a grid X_N with fineness h_N → 0 as N → ∞,
ii) a random Markov chain {P_N(ω), ω ∈ Ω} on X_N,
iii) a random probability measure {µ_{N,ω}, ω ∈ Ω} on X corresponding to a random equilibrium probability vector {p̄_N(ω), ω ∈ Ω} of {P_N(ω), ω ∈ Ω} on X_N,

with the expected convergences

E D(P_N(ω), f(ω, ·)) → 0,   E ρ(µ_{N,ω}, µ_ω) → 0  as N → ∞.

Proof. See Imkeller & Kloeden [20]. ⊔⊓

The double terminology “random stochastic” may seem to be overkill, but just think of a Markov chain for which the transition probabilities are not fixed, but can vary randomly in time.

References

1. R.P. Agarwal, Difference Equations and Inequalities, Monographs and Textbooks in Pure and Applied Mathematics 228, Marcel Dekker, New York, 2000
2. L. Arnold, Random Dynamical Systems, Monographs in Mathematics, Springer, Berlin etc., 1998
3. L. Arnold & B. Schmalfuß, Lyapunov's second method for random dynamical systems, J. Differential Equations 177 (2001), 235–265
4. J.-P. Aubin & H. Frankowska, Set-Valued Analysis, Systems and Control: Foundations and Applications 2, Birkhäuser, Boston etc., 1990
5. B. Aulbach, The fundamental existence theorem on invariant fiber bundles, J. Difference Equ. Appl. 3 (1998), 501–537
6. B. Aulbach & S. Siegmund, The dichotomy spectrum for noninvertible systems of linear difference equations, J. Difference Equ. Appl. 7(6) (2001), 895–913
7. B. Aulbach & T. Wanner, Invariant foliations and decoupling of non-autonomous difference equations, J. Difference Equ. Appl. 9(5) (2003), 459–472
8. B. Aulbach & T. Wanner, Topological simplification of nonautonomous difference equations, J. Difference Equ. Appl. 12(3–4) (2006), 283–296
9. A. Berger & S. Siegmund, On the gap between random dynamical systems and continuous skew products, J. Dynam. Differential Equations 15 (2003), 237–279
10.
W.-J. Beyn & J.-M. Kleinkauf, The numerical computation of homoclinic orbits for maps, SIAM J. Numer. Anal. 34(3), 1209–1236, 1997 11. N.P. Bhatia & G.P. Szegö, Stability Theory of Dynamical Systems, Springer, Berlin etc., 2002 12. D. Cheban, P.E. Kloeden & B. Schmalfuß, The relationship between pullback, forwards and global attractors of nonautonomous dynamical systems, Nonlinear Dynamics & Systems Theory 2(2) (2002), 9–28 13. H. Crauel, Random point attractors versus random set attractors, J. London Math. Soc. (2) 63 (2001), 413–427 14. C. Dafermos, An invariance principle for compact processes, J. Differ. Equations 9 (1971), 239–252 15. P. Diamond & P.E. Kloeden, Spatial discretization of mappings, J. Computers Math. Appls. 26 (1993), 85–94 16. L. Grüne, Asymptotic Behavior of Dynamical and Control Systems Under Perturbation and Discretization, Lecture Notes in Mathematics 1783, Springer, Berlin etc., 2002 17. L. Grüne & P.E. Kloeden, Discretization, inflation and perturbation of attractors, in Ergodic Theory, Analysis, and Efficient Simulation of Dynamical Systems 399–416. Springer, Berlin etc., 2001 18. J.K. Hale, Asymptotic Behavior of Dissipative Systems, Mathematical Surveys and Monographs 25, AMS, Providence, RI, 1988 19. T. Hüls, A model function for non-autonomous bifurcations of maps, Discrete & Contin. Dyn. Syst. (Series B), 7(2) (2007), 351–363 20. P. Imkeller & P.E. Kloeden, On the computation of invariant measures in random dynamical systems, Stochastics & Dynamics 3 (2003), 247–265 21. G. Iooss, Bifurcation of Maps and Applications, Mathematics Studies 36, NorthHolland, Amsterdam etc., 1979 22. J. Kato, A.A. Martynyuk & A.A. Shestakov, Stability of motion of nonautonomous systems (method of limiting equations), Gordon and Breach, Amsterdam, 1996 23. P.E. Kloeden, Asymptotic invariance and limit sets of general control systems, J. Differential Equations 19 (1975), 91–105 24. P.E. Kloeden, Eventual stability in general control systems, J. 
Differential Equations 19 (1975), 106–124 References 85 25. P.E. Kloeden, Lyapunov functions for cocycle attractors in nonautonomous difference equations, Izvetsiya Akad Nauk Rep Moldovia Mathematika 26 (1998), 32–42 26. P.E. Kloeden, A Lyapunov function for pullback attractors of nonautonomous differential equations, Elect. J. Diff. Eqns., Conference 05 (2000), 91–102 27. P.E. Kloeden, Pullback attractors in nonautonomous difference equations, J. Difference Eqns. Appls. 6 (2000), 33–52 28. P.E. Kloeden, Pullback attractors of nonautonomous semidynamical systems, Stochastics & Dynamics 3(1) (2003), 101–112 29. P.E. Kloeden, H. Keller & B. Schmalfuß, Towards a theory of random numerical dynamics, in Stochastic Dynamics, eds.: H. Crauel & V.M. Gundlach, Springer, 1999, 259–282 30. P.E. Kloeden & V. Kozyakin, Asymptotic behaviour of random tridiagonal Markov chains in biological applications, Discrete & Conts. Dynamical Systems, Series B, (2011, submitted) 31. P.E. Kloeden, C. Pötzsche & M. Rasmussen, Limitations of pullback attractors of processes, J. Difference Equ. Appl. (2011, to appear) 32. P.E. Kloeden & M. Rasmussen, Nonautonomous Dynamical Systems, Mathematical Surveys and Monographs 176, Amer. Math. Soc., Providence, (2011). 33. P.E. Kloeden & B. Schmalfuß, Lyapunov functions and attractors under variable time– step discretization, Discrete & Conts. Dynamical Systems 2 (1996), 163–172 34. P.E. Kloeden & B. Schmalfuß, Nonautonomous systems, cocycle attractors and variable time–step discretization, Numer. Algorithms 14 (1997), 141–152 35. P.E. Kloeden & B. Schmalfuß, Asymptotic behaviour of nonautonomous difference inclusions. Systems & Control Letters 33 (1998), 275–280 36. P.E. Kloeden & S. Siegmund, Bifurcations and continuous transitions of attractors in autonomous and nonautonomous systems, Internat. J. Bifur. Chaos Appl. Sci. Engrg. 15 (2005), 743–762 37. Y.A. 
Kuznetsow, Elements of Applied Bifurcation Theory, Applied Mathematical Sciences 112, Springer, Berlin etc., 3rd edition, 2004 38. S. Lang, Real and Functional Analysis, Graduate Texts in Mathematics 42, Springer, Berlin etc., 1993 39. J.A. Langa, J.C. Robinson & A. Suárez, Stability, instability, and bifurcation phenomena in non-autonomous differential equations, Nonlinearity 15 (2002), 887–903 40. J.P. LaSalle, The Stability of Dynamical Systems, SIAM, Philadelphia, PA, 1976. 41. C. Pötzsche, Stability of center fiber bundles for nonautonomous difference equations. In S. Elaydi, et al, editor, Difference and Differential Equations, volume 42 of Fields Institute Communications, 295–304. AMS, Providence, RI, 2004. 42. C. Pötzsche, Discrete inertial manifolds. Math. Nachr. 281(6) (2008), 847–878 43. C. Pötzsche, A note on the dichotomy spectrum. J. Difference Equ. Appl. 15(10) (2009), 1021–1025, see also the Corrigendum (2011) 44. C. Pötzsche, Nonautonomous bifurcation of bounded solutions II: A shovel bifurcation pattern, Discrete & Conts. Dynamical Systems, Series A 31(3) (2011), 941–973 45. C. Pötzsche, Robustness of hyperbolic solutions under parametric perturbations, J. Difference Equ. Appl. 15(8–9) (2009), 803–819 46. C. Pötzsche, Nonautonomous bifurcation of bounded solutions I: A Lyapunov-Schmidt approach, Discrete & Contin. Dyn. Syst. (Series B) 14(2) (2010), 739–776 47. C. Pötzsche, Nonautonomous continuation of bounded solutions, Commun. Pure Appl. Anal. 10(3) (2011), 937–961 48. C. Pötzsche, Geometric Theory of Discrete Nonautonomous Dynamical Systems, Lecture Notes in Mathematics 2002, Springer, Berlin etc., 2010 49. C. Pötzsche & M. Rasmussen, Taylor approximation of invariant fiber bundles for nonautonomous difference equations. Nonlin. Analysis (TMA) 60(7) (2005), 1303–1330. 50. C. Pötzsche & S. Siegmund, C m -smoothness of invariant fiber bundles. Topological Methods in Nonlinear Analysis 24(1) (2004), 107–146 86 8 Random dynamical systems 51. M. 
Rasmussen, Towards a bifurcation theory for nonautonomous difference equation, J. Difference Equ. Appl. 12(3–4) (2006), 297–312 52. M. Rasmussen. Attractivity and Bifurcation for Nonautonomous Dynamical Systems, Lect. Notes Math. 1907, Springer, Berlin etc., 2007 53. M. Rasmussen, Nonautonomous bifurcation patterns for one-dimensional differential equations, J. Differ. Equations 234 (2007), 267–288 54. M. Rasmussen, An alternative approach to Sacker-Sell spectral theory. J. Difference Equ. Appl. (to appear, 2011) 55. R. Sacker & G. Sell, A spectral theory for linear differential systems. J. Differ. Equations 27 (1978), 320–358 56. G.R. Sell, Topological Dynamics and Ordinary Differential Equations, Van Nostrand Reinhold Co., London, 1971 57. G.R. Sell & Y. You, Dynamics of Evolutionary Equations, Applied Mathematical Sciences 143. Springer, Berlin etc., 2002. 58. K.S. Sibirsky, Introduction to Topological Dynamics, Noordhoff International Publishing, Leiden, 1975 59. S. Siegmund, Normal forms for nonautonomous difference equations. Comput. Math. Appl., 45(6–9) (2003), 1059–1073 60. A. Stuart & A. Humphries, Dynamical Systems and Numerical Analysis. Monographs on Applied and Computational Mathematics. University Press, Cambridge, 1998. 61. V.I. Tkachenko, On the exponential dichotomy of linear difference equations. Ukr. Math. J., 48(10) (1996) 1600–1608 62. T. Yoshizawa, Stability Theory by Liapunov’s Second Method, The Mathematical Society of Japan, Tokyo, 1966
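To make the remark about "random Markov chains" above concrete — a chain whose transition matrix is redrawn at random at every time step — the following minimal Python sketch simulates such a chain on a three-point grid and records its empirical occupation measure, a crude finite-time stand-in for the random equilibrium vector in the theorem of Imkeller & Kloeden [20]. The matrices P0, P1 and the coin-flip switching rule are purely illustrative assumptions, not taken from the notes.

```python
import random

# Two hypothetical row-stochastic transition matrices on a 3-state grid.
# At each step one of them is drawn at random, so the transition
# probabilities themselves vary randomly in time.
P0 = [[0.8, 0.1, 0.1],
      [0.2, 0.6, 0.2],
      [0.1, 0.1, 0.8]]
P1 = [[0.5, 0.3, 0.2],
      [0.3, 0.4, 0.3],
      [0.2, 0.3, 0.5]]

def step(state, P, rng):
    """Sample the next state from row `state` of the stochastic matrix P."""
    u, acc = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        acc += p
        if u < acc:
            return j
    return len(P[state]) - 1  # guard against floating-point rounding

def occupation_measure(n_steps, seed=0):
    """Empirical occupation measure of the randomly switched chain,
    a finite-time approximation of its equilibrium distribution."""
    rng = random.Random(seed)
    counts = [0] * len(P0)
    state = 0
    for _ in range(n_steps):
        P = P0 if rng.random() < 0.5 else P1  # random environment choice
        state = step(state, P, rng)
        counts[state] += 1
    return [c / n_steps for c in counts]

mu = occupation_measure(100_000)
print(mu)  # a probability vector: nonnegative components summing to 1
```

For a fixed realisation of the switching sequence the output stabilises as the number of steps grows, illustrating how a single random equilibrium vector can be approached along one sample path of the random environment.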