UNIVERSITÉ PARIS 13 - Institut Galilée
Laboratoire Analyse, Géométrie et Applications, UMR 7539
N° attribué par la bibliothèque
THÈSE
pour obtenir le grade de
DOCTEUR DE L’UNIVERSITÉ PARIS 13
Discipline: Mathématiques
présentée et soutenue publiquement par:
Eva KASLIK
le 2 juin 2006
Domaines d’attraction et applications
dans la théorie du contrôle
Jury

M. Olivier LAFITTE            Président
M. Alain GRIGIS               Directeur de thèse
M. Ştefan BALINT              Co-Directeur de thèse
M. Constantin CHILĂRESCU      Examinateur
M. Gérard IOOSS               Rapporteur
M. Seenith SIVASUNDARAM       Rapporteur
REMERCIEMENTS
Je tiens tout d’abord à remercier mes directeurs de thèse Ştefan Balint et Alain Grigis. Ils m’ont
proposé un sujet de recherche vraiment intéressant, ils m’ont conseillé et soutenu durant toutes
ces années et ils ont rendu possible l’encadrement de ma thèse en co-tutelle.
J’aimerais exprimer ma reconnaissance à tous les membres du Laboratoire d’Analyse, Géométrie
et Applications de l’Université Paris 13 et du Département de Mathématiques et Informatique
de l’Université de l’Ouest de Timisoara, pour leur accueil chaleureux, me permettant ainsi de
mener mes recherches dans des conditions très agréables.
Je tiens particulièrement à remercier Gérard Iooss et Seenith Sivasundaram pour avoir accepté
de rapporter sur ma thèse et pour leurs conseils et remarques.
Je remercie également Olivier Lafitte et Constantin Chilărescu d’avoir bien voulu faire partie
de mon jury de thèse.
Contents

1  Regions of attraction in the case of autonomous differential equations   29
   1.1  Introduction   29
        1.1.1  Comments on the use of step type signals in automatics   29
        1.1.2  Paths of steady states   30
        1.1.3  Asymptotic stability and regions of attraction   30
        1.1.4  Comments on the use of Lyapunov functions   31
   1.2  Methods for determining the region of attraction in the case of exponential asymptotic stability, using Lyapunov functions   35
        1.2.1  The coefficients of the power series expansion for the optimal Lyapunov function in the diagonalisable case   35
        1.2.2  Determining the region of attraction by the gradual extension of the optimal Lyapunov function's embryo   38
        1.2.3  Properties of Taylor polynomials Vp0 of the optimal Lyapunov function and other method for approximating the domains of attraction   45
   1.3  Methods for determining the region of attraction in the case of non-exponential asymptotic stability, using Lyapunov functions   54
        1.3.1  The P(q) property for flows   54
        1.3.2  The region of attraction in the case of flows with the P(q) property   57
        1.3.3  Center manifold theory   59
        1.3.4  Characterization of the region of attraction in the case of a simple zero eigenvalue   61
        1.3.5  Characterization of the region of attraction in the case of a pair of pure imaginary eigenvalues   63
   1.4  Implementation of Mathematica 5.0   67
   1.5  Control procedures using regions of attraction   71

2  Regions of attraction in the case of discrete semi-dynamical systems   75
   2.1  Introduction   75
        2.1.1  Discrete semi-dynamical systems in R^n   75
        2.1.2  Paths of steady states   76
        2.1.3  Asymptotic stability and regions of attraction   77
        2.1.4  Comments on the use of Lyapunov functions   78
   2.2  Methods for determining the region of attraction with strong asymptotic stability conditions, using Lyapunov functions   81
        2.2.1  Determining the region of attraction by the gradual extension of the optimal Lyapunov function's embryo   81
        2.2.2  Properties of partial sums Vp of the optimal Lyapunov function and other methods for approximating the regions of attraction   85
   2.3  Methods for determining the region of attraction with weak asymptotic stability conditions, using Lyapunov functions   96
        2.3.1  The P(q) property for maps   96
        2.3.2  The region of attraction of maps with the P(q) property   99
        2.3.3  Center manifold theory   101
        2.3.4  Weak asymptotic stability and regions of attraction for codimension 1 singularities   103
   2.4  Implementation of Mathematica 5.0   108
   2.5  Control procedures using regions of attraction   112

3  Control procedure for the flight of the ALFLEX model plane during its final approach and landing phases using domains of attraction   115
   3.1  Introduction   115
   3.2  The mathematical model   117
   3.3  The set of steady states   119
   3.4  Zero roll rate steady states   122
        3.4.1  Sideslip descent flight solutions from S01   123
        3.4.2  Straight descent flight solutions from S02   124
        3.4.3  Symmetric descent flight solutions from S03   125
   3.5  Numerical results   127
        3.5.1  The paths of steady states   127
        3.5.2  Bifurcation analysis along some constant elevator angle contours   127
        3.5.3  Zero roll rate descent flight solutions   133
        3.5.4  The effect of the perturbations and control surface angles δe and δa in the moment of release   135
        3.5.5  The region of attraction of a zero roll rate asymptotically stable steady state and the control technique for the roll rate   138
        3.5.6  Achieving a symmetric descent flight state   142

4  Control procedures for Hopfield-type neural networks using domains of attraction   149
   4.1  Continuous time Hopfield neural networks   149
        4.1.1  Introduction   149
        4.1.2  Steady states   150
        4.1.3  Exponential stability of the steady states   155
        4.1.4  Controllability   158
        4.1.5  Examples   159
   4.2  Discrete time Hopfield-type neural networks   166
        4.2.1  Introduction   166
        4.2.2  Steady states   167
        4.2.3  Exponential stability of the steady states   170
        4.2.4  Controllability   173
        4.2.5  Examples   174
INTRODUCTION
La notion de domaine d’attraction est beaucoup utilisée dans la théorie de la stabilité et du
contrôle. Afin de garantir le comportement stable d’un système dynamique dans une région de
paramètres d’état, il est important de connaître les domaines d’attraction des états d’équilibre
asymptotiquement stables.
Nous donnons plusieurs méthodes pour déterminer de manière approchée le domaine d’attraction,
dans le cas continu d’une part et dans le cas discret d’autre part. C’est l’objet des deux premiers
chapitres. Nous donnons aussi une méthode de contrôle dans le cas de systèmes dépendant de
paramètres. Ensuite dans les chapitres suivants nous appliquons ces méthodes à deux problèmes
appliqués qui se modélisent à l’aide de systèmes autonomes. Il s’agit de la manoeuvrabilité du
véhicule spatial ALFLEX en phase d’atterrissage d’une part. D’autre part nous considérons un
réseau de neurones de type Hopfield qui est modélisé par des équations du même genre.
Voici maintenant un résumé détaillé de chacun des quatre chapitres.
Chapitre 1. Domaines d’attraction dans le cas des systèmes
dynamiques autonomes et analytiques en temps continu
On considère les systèmes dynamiques autonomes et analytiques en temps continu
ẋ = f(x)    (0.1)
où f : R^n → R^n est une fonction R-analytique dans R^n avec f(0) = 0 (x = 0 est un état d'équilibre pour (0.1)).
On note par x(t, x0) la solution du système (0.1) qui vérifie x(0) = x0.
Si l'état d'équilibre x = 0 est asymptotiquement stable, on notera par Da(0) son domaine d'attraction (ou domaine de stabilité asymptotique [GRBG04]). C'est l'ensemble des données initiales x0 pour lesquelles x(t, x0) tend vers 0 quand t → ∞.
Les travaux de Barbashin [Bar51], Barbashin-Krasovskii [BK54] et Zubov ([Zub64, Zub78]),
ont donné les premiers résultats concernant la détermination exacte du domaine Da (0), dans
le cas le plus général, en utilisant des fonctions de Lyapunov qui caractérisent le domaine
d’attraction. D’autres résultats de ce genre ont été obtenus par Knobloch et Kappel [KK74],
dans le cas plus restrictif où les parties réelles des valeurs propres de la matrice D f (0) sont
strictement négatives. Ces théorèmes ont une faible utilisation pratique, car la construction
effective de la fonction de Lyapunov est pratiquement impossible si on ne connaît pas les
solutions du système, donc implicitement, le domaine d’attraction Da (0).
Plus tard, Vanelli et Vidyasagar [VV85] ont établi un résultat concernant l’existence d’une
fonction de Lyapunov maximale, et d’une suite de fonctions de Lyapunov qui peuvent être
utilisées pour approximer Da (0), dans le cas où la fonction f du système (0.1) est R-analytique
et les parties réelles des valeurs propres de la matrice D f (0) sont strictement négatives.
L’algorithme de construction de ces fonctions de Lyapunov est assez complexe, mais ne
nécessite pas la connaissance des solutions du système. Les calculs numériques sont longs.
Les méthodes proposées par Gruyitch [GRBG04] utilisent des fonctions de Lyapunov construites par la méthode des caractéristiques (qui suppose la connaissance des solutions du système).
Les méthodes sont appliquées pour quelques exemples concrets, mais on ne peut pas en déduire
un algorithme général.
La recherche théorique montre que Da (0) et sa frontière sont en général très complexes. Dans
la plupart des cas, ils n’ont pas de représentation explicite élémentaire. Pour cette raison,
on connaît plusieurs méthodes pour approximer Da (0) par des domaines ayant une forme
plus simple. Cette pratique est devenue fondamentale pendant les trente dernières années
[DK71, MSM82, GTV85, CGT97, Tib00, Che01, CGTV01, CTVG01]. Le domaine qui
approxime Da (0) est défini par une fonction de Lyapunov, en général quadratique. Pour une
fonction de Lyapunov donnée, le calcul de l’approximation optimale du Da (0) nécessite la
résolution d’un problème de distance non-convexe, qui n’est pas toujours abordable mais qui,
dans certains cas, donne des résultats satisfaisants.
Le but du premier chapitre est de donner deux méthodes efficaces d’approximation du domaine
d’attraction Da (0) de l’état x = 0 du système (0.1), dans le cas quand la matrice D f (0) est
diagonalisable. On traite d’abord le cas hyperbolique (les valeurs propres de D f (0) sont
strictement dans le demi plan gauche, donc l’état x = 0 est exponentiellement stable), puis
le cas non-hyperbolique (quelques valeurs propres de D f (0) se trouvent sur l’axe imaginaire, et
x = 0 est non-exponentiellement asymptotiquement stable).
Les méthodes proposées sont fondées sur les résultats théoriques obtenus en [Bal85, BBN86,
BNBS87] et notamment sur la construction d’une fonction de Lyapunov optimale V qui a les
propriétés suivantes: son domaine d’analyticité coïncide avec le domaine d’attraction Da (0), elle
est définie positive sur Da(0), et V(x) → ∞ quand x → y, y ∈ ∂Da(0), ou quand ‖x‖ → ∞.
Dans le cas de la stabilité exponentielle, on montre que la fonction de Lyapunov optimale est la
solution unique du problème
⟨∇V(x), f(x)⟩ = -‖x‖²,   V(0) = 0    (0.2)
En utilisant la propriété de la matrice D f (0) d’être diagonalisable, on donne des formules de
récurrence pour le calcul des coefficients du développement de la fonction V au point x = 0.
Donc théoriquement, on peut trouver la fonction de Lyapunov optimale sans connaître les
solutions du système (0.1).
Dans le cas de la stabilité asymptotique non-exponentielle, sous l'hypothèse qu'il existe q ∈ N* et c > 0 tels que
‖x(t, x0)‖ ≤ c / (t + 1)^{1/(2q)}    (0.3)
pour t ≥ 0 et x0 dans un voisinage de x = 0, on montre que la fonction de Lyapunov optimale est la solution unique du système
⟨∇V(x), f(x)⟩ = -‖x‖^{2q+2},   V(0) = 0    (0.4)
On montre que, si D f(0) a une seule valeur propre simple λ = 0 ou une paire de valeurs propres simples ±iω sur l'axe imaginaire, les solutions x(t, x0) de (0.1) ont la propriété (0.3), pour x0 dans un voisinage de x = 0 (asymptotiquement stable). À partir de (0.4), en utilisant la propriété de D f(0) d'être diagonalisable, la théorie des formes normales [GH83, Ver90, Kuz98] et la théorie de la variété centrale, on trouve des formules de récurrence pour le calcul des coefficients du développement de la fonction V en x = 0 dans le cas de la dimension deux.
La base théorique de la première méthode d’approximation du domaine d’attraction proposée
est l’estimation du domaine de convergence D0 du développement de la fonction V en 0 par une
méthode de type Cauchy-Hadamard [Hor85]. On a mentionné plus haut que les coefficients du
développement de la fonction V en 0 peuvent être trouvés. A l’aide de ces coefficients, notés
par A_k (k ∈ N^n), on peut estimer le domaine de convergence D^0 :
D^0 = { x ∈ R^n : lim sup_{m→∞} ( ∑_{|k|=m} |A_k x^k| )^{1/m} < 1 }
On sait que D^0 est une partie du domaine d'analyticité Da(0) de V. Si D^0 est une partie stricte de Da(0), l'algorithme de prolongement des fonctions analytiques permet d'obtenir une nouvelle partie D^1 du domaine d'attraction. Plus précisément, on considère un point x sur la frontière de D^0 tel que la fonction V est bornée dans un voisinage de x. On choisit un point x1 ∈ D^0 dans un petit voisinage de x, on trouve les coefficients du développement de V en x1, et on estime le domaine de convergence D^1 du développement de la fonction V en x1. Le domaine D^0 ∪ D^1 est une nouvelle approximation de Da(0). Si on trouve un point x sur la frontière de D^0 ∪ D^1 tel que la fonction V est bornée dans un voisinage de x, on peut continuer l'algorithme. On s'arrête après m étapes, quand on trouve que la fonction V n'est pas bornée sur la frontière de D^0 ∪ D^1 ∪ ... ∪ D^m. Théoriquement, cet algorithme peut être continué jusqu'à ce que l'on obtienne tout le domaine d'attraction.
Pour la deuxième méthode proposée, on considère les développements limités d'ordre p de la fonction de Lyapunov V en x = 0, que l'on notera V_p^0 (p ≥ 2). On montre que chaque fonction V_p^0 est aussi une fonction de Lyapunov dans un voisinage G_p de 0.
On montre le résultat suivant. Pour chaque p ≥ 2, il existe c > 0 et un ensemble fermé, connexe et compact S ⊂ G_p avec les propriétés suivantes :
1. 0 ∈ Int(S)
2. V_p^0(x) < c si x ∈ Int(S)
3. V_p^0(x) = c si x ∈ ∂S
4. S est invariant par le flot du système (0.1)
5. S ⊂ Da(0)
De plus, pour chaque p ≥ 2 et c > 0 il existe au plus un ensemble S satisfaisant les propriétés ci-dessus. S'il existe, on le notera N_p^c. On montre que, pour un p ≥ 2 donné, la famille (N_p^c)_c est totalement ordonnée et ∪_c N_p^c ⊂ Da(0). Donc la plus grande partie de Da(0) que l'on peut trouver par cette méthode est N_p = ∪_c N_p^c.
Les résultats théoriques sont suivis par quelques exemples concrets d’application de ces méthodes pour l’approximation de Da (0) (e.g. le système de Van der Pol), en utilisant le logiciel
Mathematica 5.0. Le programme écrit en Mathematica 5.0 est donné.
Les résultats concernant l’approximation des domaines d’attraction pour les systèmes dynamiques en temps continu ont été publiés dans [KBB03, KBB05a, Kas05].
Finalement, on obtient un théorème concernant la manoeuvrabilité des systèmes dynamiques
aux paramètres, dans le cas continu (valable dans le cas discret, aussi), le long d’une branche des
états d’équilibre asymptotiquement stables, en utilisant les domaines d’attraction. Ces résultats
on été publiés dans [KBGB05a, KBGB05c].
Plus précisément, on considère le système dynamique aux paramètres
ẋ = f(x, α)    (0.5)
où x = (x1, x2, ..., xn) ∈ R^n sont les paramètres d'état et α = (α1, α2, ..., αm) ∈ R^m sont les paramètres de contrôle. On appelle branche des états d'équilibre de (0.5) une fonction continue φ : Ω ⊂ R^m → R^n satisfaisant
f(φ(α), α) = 0    pour α ∈ Ω    (0.6)
On appelle manoeuvre un changement de paramètres de contrôle de α′ à α″ dans (0.5) et l'on note α′ → α″. On dit que la manoeuvre α′ → α″ est gagnante sur la branche φ : Ω ⊂ R^m → R^n de (0.5) si α′, α″ ∈ Ω et si la solution du système
ẋ = f(x, α″),   x(0) = φ(α′)    (0.7)
tend vers φ(α″) quand t → ∞. C'est-à-dire que φ(α′) est dans le domaine d'attraction de φ(α″).
On montre le résultat suivant : pour deux états d'équilibre φ(α*) et φ(α**) appartenant à une branche d'états d'équilibre asymptotiquement stables φ : Ω ⊂ R^m → R^n, il existe un nombre fini de manoeuvres gagnantes successives
α* → α1 → α2 → ... → αp → α**
transférant le système de φ(α*) à φ(α**).
Chapitre 2. Domaines d’attraction dans le cas des systèmes
demi dynamiques autonomes et analytiques en temps discret
Soit le système demi dynamique en temps discret :
x_{k+1} = f(x_k),   k = 0, 1, 2, ...    (0.8)
où f : Ω → Ω est une fonction R-analytique définie dans un domaine Ω ⊂ R^n, 0 ∈ Ω, avec f(0) = 0, donc x = 0 est un état d'équilibre (point fixe) pour (0.8).
Si l’état d’équilibre x = 0 est asymptotiquement stable, on notera par Da (0) son domaine
d’attraction, l’ensemble des x Î Rn pour lesquels f k (x) ® 0 quand k ® ¥.
La possibilité d’évaluer les domaines d’attraction pour les systèmes demi dynamiques en temps
discret a été peu étudiée. Quelques résultats de [Koc90, LQVY91, LT88, LaS97, LaS86],
montrent que Da (0) (qui n’est pas nécessairement connexe) peut être assez compliqué. On
retrouve en [KP01] une méthode d'approximation pour la composante connexe de 0 dans Da(0) qui utilise une fonction de Lyapunov construite à partir de la matrice D f(0), en supposant que ρ(D f(0)) < 1 (où ρ(D f(0)) est le rayon spectral de D f(0)).
Dans le deuxième chapitre, on essaye de donner des résultats analogues à ceux du premier
chapitre, pour l’évaluation de Da (0), à l’aide d’une fonction de Lyapunov optimale. On traite
d’abord le cas de la stabilité asymptotique forte (Ρ(D f (0)) < 1), puis le cas de la stabilité
asymptotique faible (Ρ(D f (0)) = 1).
Dans le cas de la stabilité asymptotique forte, on montre que si ρ(D f(0)) < 1, le domaine d'attraction Da(0) est ouvert et coïncide avec la région d'analyticité de la fonction de Lyapunov optimale V donnée par :
V(f(x)) - V(x) = -‖x‖²,   V(0) = 0    (0.9)
En effet, on montre que la fonction V est définie par
V(x) = ∑_{k=0}^{∞} ‖f^k(x)‖²   pour tout x ∈ Da(0)    (0.10)
Dans le cas de la stabilité asymptotique faible, sous l'hypothèse qu'il existe c > 0 et q ∈ N* tels que
‖f^k(x)‖ ≤ c / (k + 1)^{1/(2q)}    (0.11)
pour x dans un voisinage de x = 0, on montre que la fonction de Lyapunov optimale est la solution unique du système
V(f(x)) - V(x) = -‖x‖^{2q+2},   V(0) = 0    (0.12)
La fonction V est définie par
V(x) = ∑_{k=0}^{∞} ‖f^k(x)‖^{2q+2}   pour tout x ∈ Da(0)    (0.13)
On étudie la stabilité asymptotique faible pour les singularités de codimension 1, en utilisant la théorie des formes normales et la théorie de la variété centrale. On montre que si la matrice D f(0) a une seule valeur propre simple λ = -1 sur le cercle unité (donc x = 0 est une singularité flip) ou une seule paire de valeurs propres e^{±iθ}, θ ∈ [0, π] \ {0, π/2, 2π/3, π}, sur le cercle unité (donc x = 0 est une singularité Neimark-Sacker), les solutions f^k(x) de (0.8) ont la propriété (0.11), pour x dans un voisinage de x = 0 (asymptotiquement stable). Donc, le résultat précédent pour l'évaluation du domaine d'attraction peut être appliqué dans ces situations.
A partir de la fonction de Lyapunov optimale V , on construit deux techniques d’approximation
de Da (0), qui correspondent à celles présentées dans le chapitre 1, dans le cas continu.
On présente aussi une troisième méthode d'approximation fondée sur le renversement des trajectoires, et on compare les résultats de ces trois méthodes sur des exemples concrets (les calculs numériques sont faits avec Mathematica 5.0). Pour chaque méthode proposée, on montre que la réunion des approximations de Da(0) trouvées coïncide avec Da(0).
Les résultats concernant l’approximation des domaines d’attraction pour les systèmes demi
dynamiques en temps discret ont été publiés dans [KBBB03, BKBG05].
Finalement, on présente le théorème concernant la manoeuvrabilité des systèmes dynamiques
discrets aux paramètres, le long d’une branche des états d’équilibre asymptotiquement stables.
Notamment, on considère le système dynamique discret aux paramètres
x_{k+1} = f(x_k, α),   k = 0, 1, 2, ...    (0.14)
où x = (x1, x2, ..., xn) ∈ R^n sont les paramètres d'état et α = (α1, α2, ..., αm) ∈ R^m sont les paramètres de contrôle. Dans ce cas, on appelle branche des états d'équilibre de (0.14) une fonction continue φ : Ω ⊂ R^m → R^n satisfaisant
f(φ(α), α) = φ(α)    pour α ∈ Ω    (0.15)
On appelle manoeuvre un changement de paramètres de contrôle de α′ à α″ dans (0.14) et l'on note α′ → α″. On dit que la manoeuvre α′ → α″ est gagnante sur la branche φ : Ω ⊂ R^m → R^n de (0.14) si α′, α″ ∈ Ω et si la solution du système
x_{k+1} = f(x_k, α″),   x0 = φ(α′)    (0.16)
tend vers φ(α″) quand k → ∞. C'est-à-dire que φ(α′) est dans le domaine d'attraction de φ(α″).
On montre le même résultat que dans le cas continu : pour deux états d'équilibre φ(α*) et φ(α**) appartenant à une branche d'états d'équilibre asymptotiquement stables φ : Ω ⊂ R^m → R^n, il existe un nombre fini de manoeuvres gagnantes successives
α* → α1 → α2 → ... → αp → α**
transférant le système de φ(α*) à φ(α**).
Chapitre 3. Applications à la phase d’atterrissage du véhicule
rentrant ALFLEX
Dans le troisième chapitre, on fait une analyse d'un certain type de vol du véhicule rentrant ALFLEX (Automatic Landing Flight Experiment). C'est un modèle à échelle réduite de l'avion orbital H-II (HOPE), un aéronef orbital sans pilote, développé par la NASDA (Japon) et testé en Australie en 1997. Il a été construit pour étudier les caractéristiques du vol de ce véhicule pendant les phases d'approche finale et d'atterrissage.
Le type de vol qu’on étudie n’est pas stable et il est possible seulement grace à un système
automatique de contrôle du vol compliqué (SAV). En cas de malfonctionnement des SAV,
le phénomène d’inertial coupling apparaît. Ce phénomène est un effet gyroscopique, qui
intervient pendant les manoeuvres de haut roulis des avions modernes, qui ont leurs masse
concentrée dans leur fuselage. Pour ce type d’avion, des petites perturbations des conditions
initiales ou des petits changements des angles de contrôle peuvent causer des changements
dramatiques en roulis, produisant des avaries graves sur les empennages. Pour cette raison,
il est nécessaire de développer une technique de contrôle qui conduise le véhicule d’un état à
haut roulis à un état stationnaire à roulis zéro. (voir les travaux de N. Goto, T. Kawakita et K.
Matsumoto [GM00, GK04]).
Le modèle mathématique du véhicule ALFLEX utilisé dans ce chapitre est celui proposé dans [GM00, GK04]. Pour ce modèle simplifié du mouvement du véhicule autour de son centre de gravité, l'angle du frein δSB est considéré fixé à une certaine valeur, et la vitesse V et la masse W du véhicule, ainsi que la densité de l'air ρ, sont considérées constantes. De plus, l'angle d'Euler ψ n'est pas inclus dans le modèle.
Ce modèle mathématique est défini par un système de sept équations différentielles ordinaires, autonomes et analytiques. Les variables sont : l'angle d'attaque α, l'angle de glissade latérale β, les vitesses angulaires autour des trois axes p, q et r, et les angles d'Euler φ (angle de roulis) et θ (angle de tangage). Les paramètres de contrôle qui interviennent dans ces équations sont les angles de contrôle δe (angle de l'élévateur) et δa (angle de l'aileron). L'angle de la direction δr est supposé constant (δr = 0°).
Dans ce chapitre, un système de quatre équations algébriques est déterminé, qui définit
l’ensemble des états d’équilibre. Ce système algébrique permet d’établir quelques propriétés
globales de l’ensemble des états d’équilibre, par exemple, que cet ensemble est localement une
sous variété de dimension trois de R7 . On identifie l’ensemble des états d’équilibre au roulis
nul et les valeurs des paramètres de contrôle correspondants.
Dans la partie d’analyse numérique, on présente trois branches des états d’équilibre pour
ALFLEX et on fait une analyse de stabilité le long de ces branches. On trouve des parties asymptotiquement stables seulement pour les deux premières branches et on donne des exemples de manoeuvres successives le long de ces branches.
On souligne le fait que (en accord avec le modèle mathématique) ALFLEX appartient à la classe des aéronefs pour lesquels le roulis ne s'annule pas forcément quand l'aileron est centré (δa = 0°) [Hac78]. On trouve différentes successions de manoeuvres qui amènent le véhicule d'un état d'équilibre à haut roulis à un état d'équilibre asymptotiquement stable à roulis nul.
On trouve une approximation du domaine d'attraction de l'état d'équilibre à roulis nul x̃1 qui correspond aux paramètres δe = -2.2° et δa = -0.68°, en appliquant les deux méthodes présentées dans le chapitre 1. On montre que ce domaine d'attraction est si vaste qu'il inclut tous les états d'équilibre des trois branches trouvées. Ce résultat résout le problème de contrôle du roulis.
L’inconvénient de l’état d’équilibre x̃1 est que même si le roulis p est nul, les autres vitesses
angulaires q et r sont assez grands. Les calculs numériques montrent que les états d’équilibre
avec toutes les vitesses angulaires nulles ne sont pas asymptotiquement stables, mais des points
selle. Donc, si au cours de l’atterrissage, on veut amener le véhicule dans un état d’équilibre de
descente avec p = q = r = 0ë / s, il faut étudier la possibilité du contrôle à l’aide de la variété
stable d’un tel point selle.
Dans la dernière partie du chapitre 3, en utilisant une technique de manoeuvrabilité construite avec des variétés stables, on trouve des manoeuvres successives qui maintiennent le véhicule dans de petits voisinages des variétés stables des états d'équilibre correspondant aux phases de path capture et steady descent, et qui l'amènent très près de ces états. On souligne le fait que les résultats numériques ainsi obtenus sont comparables aux résultats obtenus lors des expériences de 1997 en Australie [ALF97].
Les résultats de ce chapitre ont été publiés dans [KBCB02, KBBB04, KBGB04].
Chapitre 4. Applications aux réseaux de neurones déterministes
Dans le quatrième chapitre, on présente une analyse des réseaux de neurones de type Hopfield (discrets et continus), en appliquant les résultats obtenus dans les deux premiers chapitres.
Le réseau de neurones de type Hopfield en temps continu est décrit par le système non linéaire suivant :
ẋ_i = -a_i x_i + ∑_{j=1}^{n} T_ij g_j(x_j) + I_i,   i = 1, ..., n    (0.17)
où a_i > 0, A = diag(-a_1, -a_2, ..., -a_n), les I_i sont constants et représentent l'entrée (input) externe, T = (T_ij)_{n×n} est une matrice constante nommée matrice d'interconnexion, et g_i : R → R (i = 1, ..., n) représentent les activations neuronales.
Le réseau de neurones Hopfield en temps discret est décrit par :
x_i^{p+1} = b_i x_i^p + ∑_{j=1}^{n} T̄_ij g_j(x_j^p) + Ī_i,   i = 1, ..., n,  p ∈ N    (0.18)
où b_i ∈ (0, 1), les Ī_i représentent l'entrée externe, T̄ = (T̄_ij)_{n×n} est la matrice d'interconnexion, et g_i : R → R (i = 1, ..., n) sont les activations neuronales.
Dans ce chapitre, on suppose que les fonctions g_i sont R-analytiques et g_i(0) = 0, pour i = 1, ..., n. Pour quelques résultats, on utilise aussi d'autres hypothèses sur les fonctions d'activation :
(B) les fonctions d'activation sont bornées : |g_i(s)| ≤ 1 pour s ∈ R, i = 1, ..., n ;
(M) les fonctions d'activation sont croissantes, à dérivées bornées : il existe k_i > 0 tels que 0 < g′_i(s) ≤ k_i pour s ∈ R, i = 1, ..., n.
Les réseaux de neurones de type Hopfield ont été étudiés dans [FT95, TH86]. Des conditions pour l'existence et l'unicité de l'état d'équilibre de (0.17) et (0.18) ont été trouvées dans [CZW04]. La stabilité, notamment la stabilité exponentielle globale, a été traitée dans [Cao04, CT01, CZW04, CG83, DFK91, Din89, FT95, Ura89, YHF99, GHW04, YHH04, YHH05], en utilisant des fonctions de Lyapunov simples, et dans [KS93, LMS91] en utilisant des fonctions de Lyapunov vectorielles.
Pour résoudre des problèmes d’optimization, contrôle neuronal et traitement des signaux, les
reseaux de neurones de type Hopfield doivent presenter un seul état d’équilibre globalement
exponentiellement stable [CZW04]. Mais, si les reseaux de neurones sont utilisés pour
l’analyse des mémoires associatives, on a besoin de plusieurs états d’équilibre localement
CONTENTS
17
exponentiellement stables pour une valeur de l’entrée, parce qu’ils stockent de l’information
et constituent des reseaux de mémoire neuronal distribués et parallèles. Dans ce cas, le but
de l’analyse qualitative est l’étude des états d’équilibre exponentiellement stables (existence,
nombre, domaines d’attraction), pour garantir la capacité de rappel des modèles. Quelques
résultats concernant l’estimation des domaines d’attraction sont présentés dans [Cao04, CT01,
YHF99].
Dans ce chapitre, on donne des résultats pour l’existence d’une ou plusieurs branches d’états
d’équilibre pour (0.17) et (0.18).
On traite d’abord le cas continu. On considère l’ensemble C = {x Î Rn / det(A + T Dg(x)) = 0}
et l’ensemble G = Rn C. Soit {GΑ }Α la famille des composantes connexes et ouverts de G. Si
DΑ est le plus grand rectangle inclut dans GΑ , il existe une branche unique d’états d’équilibre
dans DΑ .
Si l’hypothèse (B) tient, on montre que tous les états d’équilibre du (0.17) sont placés forcément
dans le rectangle D = [-M1 , M1 ] ´ [-M2 , M2] ´ ... ´ [-Mn , Mn] où
n
1
Mi = (|Ii | + â |Ti j |)
ai
j=1
pour i = 1, n
Pour ε ∈ {±1}^n, on définit le rectangle D_ε = J(ε_1) × J(ε_2) × ... × J(ε_n) où J(1) = (1, ∞) et J(-1) = (-∞, -1).
S'il existe α ∈ (0, 1) tel que g_i(s) ≥ α si s ≥ 1 et g_i(s) ≤ -α si s ≤ -1, et si l'entrée I ∈ R^n satisfait
|I_i| < T_ii α - a_i - ∑_{j≠i} |T_ij|    pour tout i = 1, ..., n
alors dans chaque rectangle D_ε, ε ∈ {±1}^n (invariant par le flot du système (0.17)), il existe au moins un état d'équilibre de (0.17) qui correspond à I. De plus, si |g′_i(s)| < a_i / ∑_{j=1}^{n} |T_ji| pour |s| ≥ 1 et i = 1, ..., n, il existe un état d'équilibre unique correspondant à l'entrée I dans le rectangle D_ε, ε ∈ {±1}^n ; il est exponentiellement stable et son domaine d'attraction inclut D_ε.
Des résultats analogues sont obtenus pour les réseaux de neurones de type Hopfield en temps discret.
On montre que les méthodes d'approximation des domaines d'attraction proposées dans le premier chapitre donnent de très bons résultats dans le cas de réseaux de neurones de type Hopfield concrets, proposés dans la littérature [YHF99, MG00]. En utilisant la procédure de contrôle des systèmes dynamiques à paramètres, le long d'une branche d'états d'équilibre asymptotiquement stables, on montre que, dans le cas des réseaux de neurones, on peut transférer une configuration d'états d'équilibre asymptotiquement stables dans une autre configuration d'états d'équilibre asymptotiquement stables, par des manoeuvres successives, et on donne des exemples.
Quelques résultats de ce chapitre ont été publiés dans [KBB05b].
INTRODUCTION
The notion of region (domain) of attraction is widely used in stability theory and control
theory. In order to guarantee the stable behavior of a dynamical system in a region of the
state parameters, it is important to know the regions of attraction of the asymptotically stable
steady states.
We give a few methods for determining and approximating the regions of attraction, in the case
of continuous and discrete autonomous dynamical systems. This is the objective of the first
two chapters. We also give a method of control of systems depending on parameters, using
regions of attraction. Then, in the following chapters, we apply these methods to two applied
problems which are modelled using autonomous dynamical systems. The first problem is that of
the maneuvering of the space vehicle ALFLEX during its landing phase. The second problem
concerns the Hopfield type neural networks which are modelled by the same type of equations.
In what follows, we present a detailed summary of each of the four chapters.
Chapter 1. Regions of attraction in the case of autonomous
differential equations
We consider the following continuous-time autonomous and analytical dynamical system:
ẋ = f(x)    (0.19)
where f : R^n → R^n is an R-analytical function in R^n with f(0) = 0 (x = 0 is a steady state for (0.19)).
We denote by x(t, x0) the solution of system (0.19) which verifies x(0) = x0.
If the steady state x = 0 is asymptotically stable, we denote by Da(0) its region of attraction (or domain of asymptotic stability [GRBG04]). It is the set of all initial states x0 for which x(t, x0) tends to 0 as t → ∞.
The works of Barbashin [Bar51], Barbashin-Krasovskii [BK54] and Zubov ([Zub64, Zub78])
gave the first results concerning the exact determining of the domain Da (0), in the most general
case, using Lyapunov functions which characterize the region of attraction. Other results of this
type have been obtained by Knobloch and Kappel [KK74], in a more restrictive case, when the
real parts of the eigenvalues of the matrix D f (0) are strictly negative. These theorems have a
weak practical applicability, because the construction of the Lyapunov function is practically
impossible if the solutions (and implicitly, the domain of attraction Da (0)) of the system are not
known.
Later, Vanelli and Vidyasagar [VV85] have established a result concerning the existence of
a maximal Lyapunov function, and a sequence of Lyapunov functions which can be used for
approximating Da (0), in the case when the function f of the system (0.19) is R-analytical and
the real parts of the eigenvalues of the matrix D f (0) are strictly negative. The algorithm of
construction of these Lyapunov functions is quite complex, but it is not necessary to know the
solutions of the system. The numerical computations are long.
The methods proposed by Gruyitch [GRBG04] use Lyapunov functions built up by the method
of characteristics (which supposes the knowledge of the solutions of the system). The methods
are applied for some concrete examples, but a general algorithm cannot be determined.
The theoretical research shows that Da (0) and its boundary are generally very complex sets.
In most of the cases, they do not have an explicit elementary representation. For this reason,
the region of attraction Da (0) is approximated by domains which have a simpler form. This
practice has become fundamental in the last thirty years [DK71, MSM82, GTV85, CGT97,
Tib00, Che01, CGTV01, CTVG01]. The domain which approximates Da (0) is defined by a
Lyapunov function, in general quadratic. For a given Lyapunov function, the computation of
the optimal approximation of Da (0) is reduced to solving a non-convex distance problem, which
is not always approachable, but, in some cases, gives satisfying results.
The aim of the first chapter is to give two efficient methods for approximating the region of attraction Da(0) of the steady state x = 0 of the system (0.19), in the case when the matrix D f(0) is diagonalisable. We first treat the hyperbolic case (the eigenvalues of D f(0) are strictly in the left half-plane, so the steady state x = 0 is exponentially stable), and then the non-hyperbolic case (some of the eigenvalues of D f(0) are on the imaginary axis and x = 0 is non-exponentially asymptotically stable).
The proposed methods are based on the theoretical results obtained in [Bal85, BBN86, BNBS87] and mainly on the construction of an optimal Lyapunov function V which has the following properties: its domain of analyticity coincides with the domain of attraction Da(0), it is positive definite on Da(0), and V(x) → ∞ if x → y, y ∈ ∂Da(0), or if ‖x‖ → ∞.
In the case of exponential stability, we show that the optimal Lyapunov function is the unique
solution of the problem
⟨∇V(x), f(x)⟩ = -‖x‖²,   V(0) = 0    (0.20)
Using the fact that the matrix D f (0) is diagonalisable, we give recurrence formulae for the
computation of the coefficients of the power series development of the function V at the point
x = 0. So, theoretically, the optimal Lyapunov function can be found without knowing the
solutions of the system (0.19).
In the case of non-exponential asymptotic stability, under the hypothesis that there exist q ∈ N* and c > 0 such that
‖x(t, x0)‖ ≤ c / (t + 1)^{1/(2q)}    (0.21)
for t ≥ 0 and x0 in a neighborhood of x = 0, we show that the optimal Lyapunov function is the unique solution of the system
⟨∇V(x), f(x)⟩ = -‖x‖^{2q+2},   V(0) = 0    (0.22)
We show that, if the matrix D f(0) has a simple eigenvalue λ = 0 or a simple pair of eigenvalues ±iω on the imaginary axis, the solutions x(t, x0) of (0.19) have the property (0.21), for x0 in a neighborhood of x = 0 (asymptotically stable). Using (0.22) and the property of D f(0) of being diagonalisable, the normal form theory [GH83, Ver90, Kuz98] and the center manifold theory, we find recurrence formulae for the computation of the coefficients of the power series development of the function V at x = 0 in the two-dimensional case.
The theoretical basis of the first method of approximation is the estimation of the domain of convergence D^0 of the development of the function V at 0 by a Cauchy-Hadamard-type method [Hor85]. It has been emphasized before that the coefficients of the power series development of the function V at 0 can be computed. Using these coefficients, denoted by A_k (k ∈ N^n), we can estimate the domain of convergence D^0:
D^0 = { x ∈ R^n : lim sup_{m→∞} ( ∑_{|k|=m} |A_k x^k| )^{1/m} < 1 }
It is known that D^0 is a part of the domain of analyticity Da(0) of V. If D^0 is a strict part of Da(0), the algorithm of prolongation of analytic functions allows us to obtain a new part D^1 of the domain of attraction. More precisely, we consider a point x on the boundary of D^0 such that the function V is bounded in a neighborhood of x. We choose a point x1 ∈ D^0 in a small neighborhood of x, we find the coefficients of the development of V at x1, and we estimate the domain of convergence D^1 of the development of the function V at x1. The domain D^0 ∪ D^1 is a new approximation of Da(0). If we find a point x on the boundary of D^0 ∪ D^1 such that the function V is bounded in a neighborhood of x, we can continue the algorithm. We stop after m steps, when we find that the function V is not bounded on the boundary of D^0 ∪ D^1 ∪ ... ∪ D^m. Theoretically, this algorithm can be continued until the whole domain of attraction is obtained.
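To make the membership test concrete, here is a minimal Python sketch (not part of the thesis, whose programs are written in Mathematica 5.0): it approximates the lim sup criterion above by the largest m-th root over a range of large degrees, using a made-up coefficient dictionary that merely stands in for the coefficients produced by the recurrence formulae.

# Hypothetical sketch of the Cauchy-Hadamard-type test: x is accepted when
# limsup_m ( sum_{|k|=m} |A_k x^k| )^(1/m) < 1, approximated by the largest
# m-th root over a range of large m.  The dictionary A (multi-index -> A_k)
# holds toy coefficients, not ones computed from a concrete system.
import numpy as np
from math import comb

def in_D0(x, A, m_min=10, m_max=40):
    x = np.asarray(x, dtype=float)
    roots = []
    for m in range(m_min, m_max + 1):
        S_m = sum(abs(Ak) * np.prod(np.abs(x) ** np.array(k))
                  for k, Ak in A.items() if sum(k) == m)
        if S_m > 0:
            roots.append(S_m ** (1.0 / m))
    return bool(roots) and max(roots) < 1.0

# Toy series V(x) = sum_m (x1^2 + x2^2)^m, whose domain of convergence is the
# open unit disk; the test accepts the first point and rejects the second.
A = {(2 * i, 2 * (m - i)): comb(m, i) for m in range(1, 21) for i in range(m + 1)}
print(in_D0([0.5, 0.5], A), in_D0([0.8, 0.7], A))   # True False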
For the second method of approximation that we propose, we consider the partial sums of order p of the power series development of the Lyapunov function V at x = 0, denoted by V_p^0 (p ≥ 2). We show that each function V_p^0 is also a Lyapunov function in a neighborhood G_p of 0.
We show the following result. For each p ≥ 2, there exist c > 0 and a closed, connected and compact set S ⊂ G_p with the following properties:
1. 0 ∈ Int(S)
2. V_p^0(x) < c if x ∈ Int(S)
3. V_p^0(x) = c if x ∈ ∂S
4. S is invariant to the flow of the system (0.19)
5. S ⊂ Da(0)
Moreover, for each p ≥ 2 and c > 0 there exists at most one set S which satisfies these properties. If it exists, we denote this set by N_p^c. It is shown that for a given p ≥ 2, the family of sets (N_p^c)_c is totally ordered and ∪_c N_p^c ⊂ Da(0). Thus, the largest part of Da(0) that we can find by this method is N_p = ∪_c N_p^c.
The theoretical results are followed by some examples in which we apply these methods to approximate Da(0) (e.g. for the Van der Pol system), using the software Mathematica 5.0. The program written in Mathematica 5.0 is given.
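In the same spirit, the following Python/SymPy sketch (an illustration, not the thesis program) computes a partial sum V_p of the optimal Lyapunov function for the time-reversed Van der Pol system by matching the coefficients of ⟨∇V, f⟩ = -‖x‖² up to total degree p; the value μ = 1 and the truncation order are arbitrary choices made for the example.

# Hypothetical sketch: partial sum V_p of the optimal Lyapunov function for the
# time-reversed Van der Pol system (stable origin), obtained by solving
# <grad V, f> = -||x||^2 coefficient by coefficient up to total degree p.
import sympy as sp

x, y = sp.symbols('x y', real=True)
mu = 1
f1, f2 = -y, x - mu * (1 - x**2) * y            # time-reversed Van der Pol field

p = 6                                            # truncation order of V
monomials = [x**i * y**(d - i) for d in range(2, p + 1) for i in range(d + 1)]
coeffs = sp.symbols(f'c0:{len(monomials)}')
V = sum(c * m for c, m in zip(coeffs, monomials))

residual = sp.expand(sp.diff(V, x) * f1 + sp.diff(V, y) * f2 + x**2 + y**2)

# Impose the relation up to total degree p (higher-order terms remain because
# the vector field is nonlinear).
eqs = [residual.coeff(x, i).coeff(y, d - i)
       for d in range(2, p + 1) for i in range(d + 1)]
sol = sp.solve(eqs, coeffs, dict=True)[0]
Vp = sp.expand(V.subs(sol))
print(Vp)    # its quadratic part is positive definite, as expected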
The results concerning the approximation of the domain of attraction for continuous time
dynamical systems have been published in [KBB03, KBB05a, Kas05].
Finally, we obtain a theorem concerning the maneuvering of parameter-dependent dynamical
systems, in the continuous case, along a branch of asymptotically stable steady states, using
domains of attraction. These results have been published in [KBGB05a, KBGB05c].
More precisely, we consider the following parameter-dependent dynamical system:
ẋ = f(x, α)    (0.23)
where x = (x1, x2, ..., xn) ∈ R^n are the state parameters and α = (α1, α2, ..., αm) ∈ R^m are the control parameters. We call a branch of steady states of (0.23) a continuous function φ : Ω ⊂ R^m → R^n satisfying
f(φ(α), α) = 0    for α ∈ Ω    (0.24)
We call a maneuver a change of the control parameters from α′ to α″ in (0.23) and we denote it by α′ → α″. We say that a maneuver α′ → α″ is successful along the branch φ : Ω ⊂ R^m → R^n of (0.23) if α′, α″ ∈ Ω and the solution of the system
ẋ = f(x, α″),   x(0) = φ(α′)    (0.25)
tends to φ(α″) as t → ∞. This means that φ(α′) is in the domain of attraction of φ(α″).
We show the following result: for two steady states φ(α*) and φ(α**) belonging to a branch of asymptotically stable steady states φ : Ω ⊂ R^m → R^n, there exists a finite number of successive successful maneuvers
α* → α1 → α2 → ... → αp → α**
which transfer the system from φ(α*) to φ(α**).
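A small numerical illustration of this notion (on an invented scalar system, not one studied in the thesis): for f(x, α) = sin(α - x) the branch is φ(α) = α and the region of attraction of φ(α) is (α - π, α + π), so a single large parameter jump fails while a chain of small maneuvers along the branch succeeds.

# Hypothetical sketch: a maneuver alpha' -> alpha'' is successful when the
# solution of x' = f(x, alpha'') started at phi(alpha') tends to phi(alpha'').
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x, a):
    return np.sin(a - x)                     # toy system with branch phi(a) = a

def maneuver_succeeds(a_from, a_to, t_end=60.0, tol=1e-3):
    sol = solve_ivp(f, (0.0, t_end), [a_from], args=(a_to,), rtol=1e-8)
    return abs(sol.y[0, -1] - a_to) < tol

print(maneuver_succeeds(0.0, 4.0))           # False: phi(0) lies outside Da(phi(4))
steps = np.linspace(0.0, 4.0, 5)             # 0 -> 1 -> 2 -> 3 -> 4
print(all(maneuver_succeeds(a, b) for a, b in zip(steps, steps[1:])))   # True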
Chapter 2. Regions of attraction in the case of discrete semi-dynamical systems
Consider the following discrete semi-dynamical system:
x_{k+1} = f(x_k),   k = 0, 1, 2, ...    (0.26)
where f : Ω → Ω is an R-analytic function defined in a domain Ω ⊂ R^n, 0 ∈ Ω, with f(0) = 0, thus x = 0 is a steady state (fixed point) for (0.26).
If the steady state x = 0 is asymptotically stable, we denote by Da(0) its region of attraction, the set of all points x ∈ R^n for which f^k(x) → 0 as k → ∞.
The possibility of evaluating the regions of attraction for discrete semi-dynamical systems has received less attention than its continuous counterpart. Some results of [Koc90, LQVY91, LT88, LaS97, LaS86] show that Da(0) (which is not necessarily connected) may be quite complicated. In [KP01] we find a method of approximation of the connected component of Da(0) containing 0 which uses a Lyapunov function built up from the matrix D f(0), supposing that ρ(D f(0)) < 1 (where ρ(D f(0)) is the spectral radius of D f(0)).
In the second chapter, we try to give results similar to those of the first chapter for the evaluation of Da(0), using an optimal Lyapunov function. We first treat the case of strong asymptotic stability (ρ(D f(0)) < 1), and then the case of weak asymptotic stability (ρ(D f(0)) = 1).
In the case of strong asymptotic stability, we show that the domain of attraction Da(0) is open and coincides with the region of analyticity of the optimal Lyapunov function V given by:
V(f(x)) - V(x) = -‖x‖²,   V(0) = 0    (0.27)
In fact, we show that the function V is defined by
V(x) = ∑_{k=0}^{∞} ‖f^k(x)‖²    for all x ∈ Da(0)    (0.28)
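The following Python sketch illustrates formula (0.28) on a toy analytic map with ρ(D f(0)) < 1 (the map is an invented example, not one from the thesis): a truncated sum approximates V, and the defining relation (0.27) is checked numerically.

# Hypothetical sketch: truncation V_N(x) = sum_{k<N} ||f^k(x)||^2 of the optimal
# Lyapunov function (0.28), together with a numerical check of (0.27).
import numpy as np

def f(x):
    x1, x2 = x
    return np.array([0.5 * x1 + 0.1 * x2**2, -0.3 * x2 + 0.2 * x1 * x2])

def V(x, N=200):
    x = np.asarray(x, dtype=float)
    total = 0.0
    for _ in range(N):
        total += x @ x                     # add ||f^k(x)||^2
        x = f(x)
    return total

x0 = np.array([0.4, -0.3])
print(V(f(x0)) - V(x0), -(x0 @ x0))        # the two values agree up to truncation error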
In the case of weak asymptotic stability, under the hypothesis that there exist c > 0 and q ∈ N* such that
‖f^k(x)‖ ≤ c / (k + 1)^{1/(2q)}    (0.29)
for x in a neighborhood of x = 0, we show that the optimal Lyapunov function is the unique solution of the system
V(f(x)) - V(x) = -‖x‖^{2q+2},   V(0) = 0    (0.30)
The function V is defined by
V(x) = ∑_{k=0}^{∞} ‖f^k(x)‖^{2q+2}    for all x ∈ Da(0)    (0.31)
We study the weak asymptotic stability of codimension 1 singularities, using the normal form theory and the center manifold theory. We show that if the matrix D f(0) has a simple eigenvalue λ = -1 on the unit circle (so x = 0 is a flip singularity) or a single pair of eigenvalues e^{±iθ}, θ ∈ [0, π] \ {0, π/2, 2π/3, π}, on the unit circle (so x = 0 is a Neimark-Sacker singularity), then the solutions f^k(x) of (0.26) have the property (0.29), for x in a neighborhood of x = 0 (asymptotically stable). So, the previous result for the evaluation of the region of attraction can be applied in these situations.
Using the optimal Lyapunov function V, we construct two techniques of approximation of Da(0), which correspond to those presented in Chapter 1 for the continuous case. We also present a third method of approximation, based on the trajectory reversing method, and we compare the results of the three methods on concrete examples (the numerical computations are done with Mathematica 5.0). For each method, we show that the union of the computed approximations of Da(0) coincides with Da(0).
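As a hedged illustration of the trajectory reversing idea on an invertible toy map (invented for the example, with an explicit inverse): since preimages of points of Da(0) remain in Da(0), pushing a small circle around the fixed point backwards through the inverse map produces curves that lie in Da(0) and move outward, enlarging the estimated region.

# Hypothetical sketch of the trajectory-reversing method for a discrete map.
import numpy as np

a, b = 0.5, 0.3

def f(p):                        # toy map with rho(Df(0)) = sqrt(a*b) < 1
    x, y = p
    return np.array([a * y, b * x + y**2])

def f_inv(p):                    # exact inverse of the toy map
    u, v = p
    y = u / a
    return np.array([(v - y**2) / b, y])

theta = np.linspace(0.0, 2.0 * np.pi, 400)
curve = 0.05 * np.stack([np.cos(theta), np.sin(theta)], axis=1)   # inside Da(0)

curves = [curve]
for _ in range(5):               # each backward iteration gives a larger curve in Da(0)
    curves.append(np.array([f_inv(p) for p in curves[-1]]))
print(curves[-1].min(axis=0), curves[-1].max(axis=0))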
The results concerning the approximation of the regions of attraction for discrete semidynamical systems have been published in [KBBB03, BKBG05].
Finally, we present the theorem concerning the maneuvering of discrete semi-dynamical systems with control, along a branch of asymptotically stable steady states.
More precisely, we consider the following discrete semi-dynamical system with control
x_{k+1} = f(x_k, α),   k = 0, 1, 2, ...    (0.32)
where x = (x1, x2, ..., xn) ∈ R^n are the state parameters and α = (α1, α2, ..., αm) ∈ R^m are the control parameters. In this case, we call a branch of steady states of (0.32) a continuous function φ : Ω ⊂ R^m → R^n satisfying
f(φ(α), α) = φ(α)    for α ∈ Ω    (0.33)
We call a maneuver a change of the control parameters from α′ to α″ in (0.32) and we denote it by α′ → α″. We say that the maneuver α′ → α″ is successful along the branch φ : Ω ⊂ R^m → R^n of (0.32) if α′, α″ ∈ Ω and the solution of the system
x_{k+1} = f(x_k, α″),   x0 = φ(α′)    (0.34)
tends to φ(α″) as k → ∞. This means that φ(α′) is in the region of attraction of φ(α″).
We show the same result as in the continuous case: for two steady states φ(α*) and φ(α**) belonging to a branch of asymptotically stable steady states φ : Ω ⊂ R^m → R^n, there exists a finite number of successive successful maneuvers
α* → α1 → α2 → ... → αp → α**
which transfer the system from φ(α*) to φ(α**).
Chapter 3. Control procedure for the flight of the ALFLEX
model plane during its final approach and landing phases
using domains of attraction
In the third chapter, the vehicle subjected to the analysis is the Automatic Landing Flight
Experiment (ALFLEX ) model plane. This is a reduced scale model of the H-II Orbiting Plane
(HOPE), an unmanned reusable orbiting spacecraft. It has been built in order to study the flight
of the spacecraft during its final approach and landing phases. This flight is made possible only by complicated Automatic Flight Control Systems, designed to perform quick responses to commands. The reason is that in this case, and in general in the case of modern high-speed airplanes (including spinning missiles) designed in such a way that their masses are concentrated in their fuselages, inertial coupling may occur. This phenomenon is a gyroscopic effect, due to which small perturbations or small changes of the control surface angles may lead to dramatic changes in roll rate. For this reason, it is necessary to develop a control technique which leads the vehicle from a high roll rate steady state to a zero roll rate steady state (see the works of N. Goto, T. Kawakita and K. Matsumoto [GM00, GK04]).
The mathematical model of the ALFLEX reentry vehicle used in this chapter is the one proposed in [GM00, GK04]. For this simplified model of the vehicle's motion around its center of gravity, the speed brake angle δSB is considered fixed at a certain value, and the velocity V and the weight W of the vehicle, as well as the air density ρ, are considered constant. Moreover, the Euler angle ψ is not included in the model.
This mathematical model is defined by a system of seven autonomous and analytical ordinary differential equations. The state variables are: the angle of attack α, the sideslip angle β, the angular velocities around the three axes p, q and r, and the Euler angles φ (roll angle) and θ (pitch angle). The control parameters in these equations are δe (elevator angle) and δa (aileron angle). The rudder angle δr is considered constant (δr = 0°).
In this chapter, a system of four algebraic equations is determined, which implicitly defines the
whole set of steady states. This algebraic system permits to establish some global properties of
the set of steady states (for example, that this set is locally a three-dimensional sub-manifold of
R7 ), to identify all the zero roll rate steady states including those which correspond to desired
descent flights, to establish the values of the control surface angles which have to be used and
to clarify the stability of these states.
The numerical analysis of this system shows that there exist three branches of steady states for
ALFLEX and that only two of these branches contain asymptotically stable steady states. We
give examples of successful maneuvers along these branches.
We underline the fact that (according to the mathematical model) ALFLEX belongs to the class of aeroplanes for which the roll rate may not decay to zero even if the aileron is centered (δa = 0°) [Hac78]. We find different sequences of successive maneuvers which bring the vehicle from a high roll rate steady state to an asymptotically stable zero roll rate steady state.
We find an approximation of the region of attraction of the zero roll rate steady state x̃1 which corresponds to the control angles δe = -2.2° and δa = -0.68°, using the two methods proposed
in the first chapter. We show that the region of attraction is so wide that it contains all the steady
states belonging to the three branches that have been found numerically. This result solves the
problem of roll rate control.
The inconvenience of the steady state x̃1 is that even if the roll rate p is zero, the other angular velocities q and r are high. The numerical computations show that all the steady states with zero angular velocities are saddle points. Therefore, in order to bring the vehicle to a steady state with p = q = r = 0°/s during landing, one has to study the possibility of control using the stable manifold of such a saddle point.
In the last part of this chapter, using a technique of maneuvering built up with the aid of stable
manifolds, we find successful maneuvers that maintain the vehicle in small neighborhoods
of the stable manifolds of the steady states which correspond to the path capture and steady
descent phases of the descent flight, and which bring the vehicle very close to these states.
We emphasize that the results that have been obtained are close to those obtained during the 1997 experiments in Australia [ALF97].
The results of this chapter have been published in [KBCB02, KBBB04, KBGB04].
Chapter 4. Control procedures for Hopfield-type neural networks using domains of attraction
In the fourth chapter, we present an analysis of continuous and discrete Hopfield-type neural
networks, applying the results obtained in the first two chapters.
The continuous time Hopfield-type neural network is described by the following nonlinear system:
ẋ_i = -a_i x_i + ∑_{j=1}^{n} T_ij g_j(x_j) + I_i,   i = 1, ..., n    (0.35)
where a_i > 0, A = diag(-a_1, -a_2, ..., -a_n), the I_i are constant and represent the external input, T = (T_ij)_{n×n} is a constant matrix referred to as the interconnection matrix, and g_i : R → R (i = 1, ..., n) represent the activation functions.
The discrete time Hopfield-type neural network is described by:
x_i^{p+1} = b_i x_i^p + ∑_{j=1}^{n} T̄_ij g_j(x_j^p) + Ī_i,   i = 1, ..., n,  p ∈ N    (0.36)
where b_i ∈ (0, 1), the Ī_i represent the external input, T̄ = (T̄_ij)_{n×n} is the interconnection matrix, and g_i : R → R (i = 1, ..., n) are the activation functions.
In this chapter, we suppose that the functions g_i are R-analytical and g_i(0) = 0, for i = 1, ..., n. For some of the results presented in this chapter, we also consider that the following hypotheses hold for the activation functions:
(B) the activation functions are bounded: |g_i(s)| ≤ 1 for s ∈ R, i = 1, ..., n;
(M) the activation functions are strictly increasing with bounded derivatives: there exist k_i > 0 such that 0 < g′_i(s) ≤ k_i for s ∈ R, i = 1, ..., n.
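As a purely illustrative instance of models (0.35) and (0.36) satisfying hypotheses (B) and (M), the following Python sketch uses g_i = tanh (so k_i = 1), with small made-up weights, and simulates both networks until they settle at a steady state; none of the numbers are taken from the thesis.

# Hypothetical sketch of the two Hopfield-type models with g = tanh.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
n = 3
a = np.array([1.0, 1.2, 0.8])            # a_i > 0
b = np.array([0.5, 0.4, 0.6])            # b_i in (0, 1)
T = 0.3 * rng.standard_normal((n, n))    # interconnection matrix
I = np.array([0.1, -0.2, 0.05])          # external input

def continuous_rhs(t, x):                # dx_i/dt = -a_i x_i + sum_j T_ij g(x_j) + I_i
    return -a * x + T @ np.tanh(x) + I

def discrete_step(x):                    # x_i^(p+1) = b_i x_i^p + sum_j T_ij g(x_j^p) + I_i
    return b * x + T @ np.tanh(x) + I

x_ct = solve_ivp(continuous_rhs, (0.0, 50.0), np.zeros(n)).y[:, -1]
x_dt = np.zeros(n)
for _ in range(200):
    x_dt = discrete_step(x_dt)
print(x_ct, x_dt)                        # approximate steady states of (0.35) and (0.36)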
The Hopfield-type neural networks have been studied in [FT95, TH86]. Conditions for the
existence and uniqueness of the steady states of (0.35) and (0.36) have been given in [CZW04].
The stability, especially the global asymptotical stability has been treated in [Cao04, CT01,
CZW04, CG83, DFK91, Din89, FT95, Ura89, YHF99, GHW04, YHH04, YHH05], using
simple Lyapunov functions and in [KS93, LMS91] using vector Lyapunov functions.
To solve problems of optimization, neural control and signal processing, Hopfield-type neural
networks have to be designed to exhibit for an input only one globally exponentially stable
steady state [CZW04]. On the other hand, if neural networks are used to analyze associative
memories, several locally exponentially stable steady states are desired for one input, as they
store information and constitute distributed and parallel neural memory networks. In this
case, the purpose of the qualitative analysis is the study of the locally exponentially stable
steady states (existence, number, regions of attraction) so as to ensure the recall capability
of the models. Some results on the estimation of the local exponential convergence rate
and of the regions of attraction, in the case of Hopfield-type neural networks, are given in
[Cao04, CT01, YHF99].
In this chapter, we give conditions for the existence of one or more branches of steady states for (0.35) and (0.36).
First, we treat the continuous case. We consider the sets C = {x ∈ R^n / det(A + T Dg(x)) = 0} and G = R^n \ C. Let {G_α}_α be the family of connected open components of G. If D_α is the largest rectangle included in G_α, there exists a unique branch of steady states of (0.35) in D_α.
Under hypothesis (B), we show that all the steady states of (0.35) belong to the rectangle D = [-M_1, M_1] × [-M_2, M_2] × ... × [-M_n, M_n] where
M_i = (1/a_i) (|I_i| + ∑_{j=1}^{n} |T_ij|)    for i = 1, ..., n
For ε ∈ {±1}^n we define the rectangle D_ε = J(ε_1) × J(ε_2) × ... × J(ε_n) where J(1) = (1, ∞) and J(-1) = (-∞, -1).
If there exists α ∈ (0, 1) such that g_i(s) ≥ α if s ≥ 1 and g_i(s) ≤ -α if s ≤ -1, and if the input I ∈ R^n satisfies
|I_i| < T_ii α - a_i - ∑_{j≠i} |T_ij|    for all i = 1, ..., n
then in each rectangle D_ε, ε ∈ {±1}^n (invariant to the flow of system (0.35)) there exists at least one steady state of (0.35) which corresponds to the input I. Moreover, if |g′_i(s)| < a_i / ∑_{j=1}^{n} |T_ji| for |s| ≥ 1 and i = 1, ..., n, there exists a unique steady state of (0.35) which corresponds to the input I in the rectangle D_ε, ε ∈ {±1}^n; it is exponentially stable and its domain of attraction includes D_ε.
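Here is a hedged numerical check of these conditions on a small invented network (two neurons, g_i = tanh, for which one may take α = tanh(1)); the data are made up for the example, and the steady state in each orthant rectangle D_ε is then located by a simple fixed-point iteration.

# Hypothetical sketch: verify the sufficient condition and locate one steady
# state of (0.35) in each rectangle D_eps for an illustrative 2-neuron network.
import numpy as np
from itertools import product

a = np.array([0.5, 0.5])
T = np.array([[2.0, 0.3], [0.2, 2.0]])
I = np.array([0.1, -0.1])
alpha = np.tanh(1.0)                       # g = tanh satisfies g(s) >= alpha for s >= 1

off_diag = np.abs(T).sum(axis=1) - np.abs(np.diag(T))
print(np.all(np.abs(I) < np.diag(T) * alpha - a - off_diag))   # True: hypothesis holds

for eps in product([1, -1], repeat=2):     # one steady state per rectangle D_eps
    x = 2.0 * np.array(eps, dtype=float)
    for _ in range(200):
        x = (T @ np.tanh(x) + I) / a       # fixed point of -a_i x_i + sum_j T_ij g(x_j) + I_i = 0
    print(eps, x)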
Some similar results are obtained in the case of discrete time Hopfield-type neural networks.
We show that the methods of approximation of the domains of attraction proposed in the first
chapter give very good results in the case of concrete examples of Hopfield-type neural networks
proposed in the literature [YHF99, MG00]. Using the control procedure of dynamical systems
with control along a branch of asymptotically stable steady states, we show that in the case
of neural networks, we can transfer a configuration of asymptotically stable steady states into
another configuration of asymptotically stable steady states by successful maneuvers, and we
give some examples.
Some of the results of this chapter have been published in [KBB05b].
Chapter 1
Regions of attraction in the case of
autonomous differential equations
1.1 Introduction
1.1.1 Comments on the use of step type signals in automatics
Assume that the evolution of a physical system is governed by the system of differential
equations:
dx/dt = f(x, α)   (1.1)

where x = (x_1, x_2, ..., x_n) ∈ R^n are the state parameters of the system and α = (α_1, α_2, ..., α_m) ∈ R^m are the control parameters. The steady states of system (1.1) are the solutions of the system of algebraic equations:

f(x, α) = 0   (1.2)
For a given α ∈ R^m the system of algebraic equations (1.2) may have one solution, several solutions, or it may happen that it has no solution. This last situation is not significant from the practical point of view. In practice, systems are generally built in such a way that for a given α ∈ R^m, equation (1.2) has a unique solution. This option is due to the fact that in this way, it is possible to realize different well defined steady states by modifying α. More precisely, if the system is in the steady state x̄ which corresponds to α = ᾱ and we wish to transfer the system into the steady state x̂ defined by f(x̂, α̂) = 0, then the solution is to change the value of α from ᾱ to α̂. That means applying to the system (1.1) the input (α̂ - ᾱ)·1(t), where 1(t) is the Heaviside function. This is a possible explanation of the use of "step type signals" in automatics.
A natural question is: if this procedure is applied, is the system transferred to the new steady state x̂ or not?
A positive answer to this question is the following: if x̂ is an asymptotically stable steady state and x̄ is in the region of attraction of x̂, then by the above procedure the system is transferred from the steady state x̄ to the steady state x̂.
This is a possible motivation of the interest in methods for determining and approximating the region of attraction of an asymptotically stable steady state.
Practical engineers know that if ‖α̂ - ᾱ‖ is large then the transfer may not be possible by a single change ᾱ → α̂. But if the change is made in steps, by small modifications of α, then the transfer can be made. The explanation is that, by small successive changes, the system is conducted through the regions of attraction of intermediate asymptotically stable steady states.
1.1.2 Paths of steady states
Definition 1.1. A path of steady states of the system (1.1) is a function φ : Ω ⊂ R^m → R^n satisfying

f(φ(α), α) = 0   for any α ∈ Ω   (1.3)

If the partial derivatives of φ exist and are continuous up to order k, then the path φ is said to be C^k. The path φ is C^0 if φ is just continuous. If φ is R-analytic then the path φ is said to be analytic. If φ(α^0) = x^0 then the path φ passes through (x^0, α^0).
Theorem 1.1. If the function f in the system (1.1) satisfies:

1. there exist x^0 ∈ R^n and α^0 ∈ R^m such that f(x^0, α^0) = 0
2. f is C^1 and the matrix D_x f(x^0, α^0) is non-singular

then there exists a path φ of steady states satisfying the following properties:

a. φ is C^1 and φ(α^0) = x^0;
b. the matrix D_x f(φ(α), α) is non-singular.
Theorem 1.2. The function φ : Ω ⊂ R^m → R^n is a path of steady states of system (1.1) if and only if the function ψ : Ω ⊂ R^m → R^n defined by ψ(α) = 0 for any α ∈ Ω is a path of steady states for the system

dy/dt = g(y, α)   (1.4)

where y ∈ R^n, α ∈ R^m and g(y, α) = f(y + φ(α), α).
1.1.3 Asymptotic stability and regions of attraction
Assume that the function f in the system (1.1) is of class C^1, that φ : Ω ⊂ R^m → R^n is a path of steady states of class C^1 for (1.1), and that the matrix D_x f(φ(α), α) is non-singular for any α ∈ Ω.

Definition 1.2. For a given α ∈ Ω the steady state x = φ(α) of the system (1.1) is stable if for every ε > 0 there exists δ_α = δ_α(ε) > 0 such that ‖x^0 - φ(α)‖ < δ_α implies ‖x_α(t, x^0) - φ(α)‖ < ε for t ≥ 0.

Remark 1.1. In Definition 1.2, x_α(t, x^0) represents the solution of the initial value problem:

dx/dt = f(x, α),   x(0) = x^0   (1.5)

and the following statement holds: the steady state x = φ(α) of the system (1.1) is stable if and only if the steady state y = 0 of the system (1.4) is stable.
Definition 1.3. For a given α ∈ Ω the steady state x = φ(α) of the system (1.1) is attractive if there exists r_α > 0 such that ‖x^0 - φ(α)‖ < r_α implies lim_{t→∞} x_α(t, x^0) = φ(α).

Remark 1.2. The steady state x = φ(α) of the system (1.1) is attractive if and only if the steady state y = 0 of the system (1.4) is attractive.

The definition of attractiveness requires only the existence of an r_α > 0 obeying its condition, regardless of whether r_α is large or small. For engineering purposes it is important to determine, or at least to estimate well, the set of all initial states x^0 for which lim_{t→∞} x_α(t, x^0) = φ(α).

Stability and attraction are in general mutually independent properties. This was well illustrated by Vinograd (Hahn 1967 [94], pp. 191-194). Both properties are often desired, which led to the concept of asymptotic stability.
Definition 1.4. For a given α ∈ Ω the steady state x = φ(α) of the system (1.1) is asymptotically stable if it is both stable and attractive.

Remark 1.3. The steady state x = φ(α) of the system (1.1) is asymptotically stable if and only if the steady state y = 0 of the system (1.4) is asymptotically stable.

Definition 1.5. The region of attraction of the asymptotically stable steady state x = φ(α) of the system (1.1) is defined by:

D_a(φ(α)) = {x^0 ∈ R^n : lim_{t→∞} x_α(t, x^0) = φ(α)}   (1.6)
Theorem 1.3. The region of attraction D_a(φ(α)) is an open and connected neighborhood of x = φ(α) and it is invariant to the flow defined by (1.1).

Remark 1.4. The region of attraction D_a(φ(α)) satisfies:

D_a(φ(α)) = D_a^α(0) + φ(α)   (1.7)

where D_a^α(0) is the region of attraction of the asymptotically stable steady state y = 0 of the system (1.4).
Due to (1.7), determining or estimating D_a(φ(α)) reduces to determining or estimating D_a^α(0).
1.1.4 Comments on the use of Lyapunov functions
Consider the following system of differential equations:

ẋ = f(x)   (1.8)

where f : R^n → R^n is a function of class C^1 on R^n with f(0) = 0 (i.e. x = 0 is a steady state of (1.8)). It is assumed that the steady state x = 0 of the system (1.8) is asymptotically stable, and we denote by D_a(0) its region of attraction.

The results of Barbashin [Bar51], Barbashin-Krasovskii [BK54] and of Zubov ([Zub64], Theorem 19, pp. 52-53, [Zub78]) have probably been the first results concerning the exact computation of D_a(0) using Lyapunov functions.
In our context, the theorem of Zubov is the following:
Theorem 1.4. (Zubov) An invariant and open set S containing the origin and included in the hypersphere B(r) = {x ∈ R^n : ‖x‖ < r}, r > 0, coincides with the region of attraction D_a(0) if and only if there exist two functions V and Ψ with the following properties:

1. the function V is defined and continuous on S, and the function Ψ is defined and continuous on R^n
2. -1 < V(x) < 0 for any x ∈ S \ {0} and Ψ(x) > 0 for any x ∈ R^n \ {0}
3. lim_{x→0} V(x) = 0 and lim_{x→0} Ψ(x) = 0
4. for any γ_2 > 0 small enough, there exist γ_1 > 0 and α_1 > 0 such that V(x) < -γ_1 and Ψ(x) > α_1 for ‖x‖ ≥ γ_2
5. for any y ∈ ∂S, lim_{x→y} V(x) = -1
6. (d/dt) V(x(t; 0; x^0)) = Ψ(x(t; 0; x^0)) [1 + V(x(t; 0; x^0))]
Remark 1.5. Zubov's theorem concerns the computation of D_a(0) in the case when it is bounded. The effective computation of D_a(0) using the functions V and Ψ from Zubov's theorem is not possible, because the function V (once Ψ is chosen) is constructed by the method of characteristics, using the solutions of system (1.8). This implicitly requires knowledge of the region of attraction D_a(0) itself.

Another interesting result concerning the exact computation of D_a(0) using Lyapunov functions is due to Knobloch and Kappel [KK74]. The Knobloch-Kappel theorem is established under the hypothesis that the real parts of the eigenvalues of the matrix Df(0) are negative. In our context, the Knobloch-Kappel theorem is the following:
Theorem 1.5. (Knobloch-Kappel) If the real parts of the eigenvalues of the matrix Df(0) are negative, then for any function ζ : R^n → R with the following properties:

1. ζ is of class C^2 on R^n
2. ζ(0) = 0 and ζ(x) > 0 for any x ≠ 0
3. the function ζ has a positive lower bound on every subset of the set {x : ‖x‖ ≥ ε}, ε > 0

there exists a unique function V of class C^1 on D_a(0) which satisfies

a. ⟨∇V(x), f(x)⟩ = -ζ(x)
b. V(0) = 0

In addition, V satisfies the following conditions:

c. V(x) > 0 for any x ≠ 0
d. V(x) → ∞ for x → y, y ∈ ∂D_a(0), or for ‖x‖ → ∞
Remark 1.6. The effective computation of D_a(0) using the functions V and ζ from the Knobloch-Kappel theorem (at this level of generality) is not possible, because the function V (once ζ is chosen) is constructed by the method of characteristics using the solutions of system (1.8). This implicitly requires knowledge of D_a(0).

Vanelli and Vidyasagar have established in [VV85] a result concerning the existence of a maximal Lyapunov function (which characterizes D_a(0)), and of a sequence of Lyapunov functions which can be used for approximating the region of attraction D_a(0). In the context of our considerations, the Vanelli-Vidyasagar theorem is the following:
Theorem 1.6. (Vanelli-Vidyasagar) An open set S which contains the origin coincides with the domain of asymptotic stability of the asymptotically stable steady state x = 0 if and only if there exist a continuous function V : S → R_+ and a positive definite function Ψ on S with the following properties:

1. V(0) = 0 and V(x) > 0 for any x ∈ S \ {0} (V is positive definite on S)
2. D_r V(x^0) = lim_{t→0^+} [V(x(t; 0, x^0)) - V(x^0)] / t = -Ψ(x^0), for any x^0 ∈ S
3. V(x) → ∞ for x → y, y ∈ ∂D_a(0), or for ‖x‖ → ∞
Remark 1.7. The computation of D_a(0) using the functions V and Ψ from the Vanelli-Vidyasagar theorem is not possible, for the same reason as in the case of the theorems of Zubov and Knobloch-Kappel.
Remark 1.8. Restraining generality, and considering the case of an R-analytic function f for which the real parts of the eigenvalues of the matrix Df(0) are negative, Vanelli and Vidyasagar [VV85] establish a second theorem which provides a sequence of Lyapunov functions, which are not necessarily maximal, but can be used in order to approximate D_a(0). These Lyapunov functions are of the form:

V_m(x) = [r_2(x) + r_3(x) + ... + r_m(x)] / [1 + q_1(x) + q_2(x) + ... + q_m(x)],   m ∈ N   (1.9)

where r_i and q_i are homogeneous polynomials of degree i, constructed using the elements of the matrix Df(0), of a positive definite matrix G, and the nonlinear terms of the expansion of f. The algorithm for the construction of V_m is relatively complex, but does not require knowledge of the solutions of system (1.8).
Very interesting results concerning the exact computation of the domains of attraction (asymptotic stability domains) have been obtained by Gruyitch between 1985 and 1995. These results can be found in [GRBG04], chap. 5. In these results, the function V which characterizes the region of attraction is constructed by the method of characteristics, which uses the solutions of system (1.8). Some illustrative examples are exceptions, because for them V is found in closed form for some concrete functions f, but without a precise, generally applicable rule.

In the same year as Vanelli and Vidyasagar, Balint [Bal85] proved the following theorem:
Theorem 1.7. If the function f is R-analytic and the real parts of the eigenvalues of the matrix Df(0) are negative, then the region of attraction D_a(0) of the asymptotically stable steady state x = 0 coincides with the natural domain of analyticity of the R-analytic function V defined by

⟨∇V(x), f(x)⟩ = -‖x‖²,   V(0) = 0   (1.10)

The function V is strictly positive on D_a(0) \ {0} and V(x) → ∞ for x → y, y ∈ ∂D_a(0), or for ‖x‖ → ∞.
Definition 1.6. The unique R-analytic function V which satisfies (1.10) is called the optimal Lyapunov function [BBN86].

Remark 1.9. The computation of the optimal Lyapunov function V from Theorem 1.7 can be made by computing the coefficients of the power series expansion of V at 0. These coefficients can be computed using only (1.10); it is not necessary to know the solutions of equation (1.8), as in the case of Theorems 1.4-1.6 [BBN86, BNBS87].
1.2 Methods for determining the region of attraction in the case of exponential asymptotic stability, using Lyapunov functions

1.2.1 The coefficients of the power series expansion for the optimal Lyapunov function in the diagonalisable case
The following notations will be used:

• if k = (k_1, k_2, ..., k_n) ∈ N^n then |k| = Σ_{i=1}^n k_i
• if j, k ∈ N^n then j ≤ k if and only if j_i ≤ k_i for any 1 ≤ i ≤ n
• e^i denotes the vector from N^n defined by e^i_p = 0 if i ≠ p and e^i_p = 1 if i = p, for any 1 ≤ i ≤ n
• if x = (x_1, x_2, ..., x_n) ∈ K^n and k = (k_1, k_2, ..., k_n) ∈ N^n then x^k = x_1^{k_1} x_2^{k_2} ... x_n^{k_n}
In the paper [BBN86] the following result is established:

Theorem 1.8. (the diagonal case) In the conditions of Theorem 1.7, if

f_i(x) = λ_i x_i + Σ_{|k|=2}^∞ a^i_k x^k   for i = 1, 2, ..., n   (1.11)

then the coefficients A_k (where k = (k_1, k_2, ..., k_n) ∈ N^n) of the power series expansion

V(x) = Σ_{|k|=2}^∞ A_k x^k   (1.12)

of the optimal Lyapunov function V defined by (1.10) are given by:

A_k = -1/(2λ_i)   if there exists i ∈ {1, 2, ..., n} such that |k| = k_i = 2
A_k = 0   if |k| = 2 and k_i ≤ 1 for any 1 ≤ i ≤ n
A_k = -(1 / Σ_{i=1}^n k_i λ_i) Σ_{|j|=2, j≤k}^{|k|-1} Σ_{i=1}^n (k_i - j_i + 1) a^i_j A_{k-j+e^i}   if |k| ≥ 3   (1.13)

By (1.13)_{1,2} we obtain directly the coefficients of the terms of second degree, and by (1.13)_3 we obtain the coefficients of the terms of degree m ≥ 3 as functions of the coefficients of the terms of degree 2, 3, ..., m-1.
If the functions f_i are polynomials of second degree, then (1.13)_3 becomes

A_k = -(1 / Σ_{i=1}^n k_i λ_i) Σ_{|j|=2, j≤k} Σ_{i=1}^n (k_i - j_i + 1) a^i_j A_{k-j+e^i}   if |k| ≥ 3   (1.14)
which means that the coefficients of the terms of degree m ³ 3 are linear combinations of the
coefficients of the terms of degree m - 1.
If the functions fi are polynomials of third degree, then the coefficients of the terms of degree
m ³ 3 are combinations of the coefficients of the terms of degree m - 1 and m - 2.
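To make the recursion concrete, the following minimal Python sketch (an illustration added to this text, not the Mathematica implementation mentioned later in this chapter) computes the coefficients A_k from (1.13) for a polynomial vector field with diagonal linear part; the helper names and the test data are assumptions of this sketch, and the test uses Example 1.1 as reconstructed below.

```python
# Sketch: coefficients A_k of the optimal Lyapunov function via recursion (1.13),
# for f_i(x) = lam[i]*x_i + sum_k a[i][k]*x^k with Df(0) diagonal.
# 'lam' holds the eigenvalues lambda_i; 'a[i]' maps a multi-index k (|k| >= 2)
# to the coefficient a^i_k.
from itertools import product

def optimal_lyapunov_coefficients(lam, a, p):
    """Return a dict {k: A_k} for all multi-indices k with 2 <= |k| <= p."""
    n = len(lam)
    A = {}
    # degree 2: formulas (1.13)_1 and (1.13)_2
    for k in product(range(3), repeat=n):
        if sum(k) == 2:
            A[k] = -1.0 / (2.0 * lam[k.index(2)]) if 2 in k else 0.0
    # degree >= 3: formula (1.13)_3
    for m in range(3, p + 1):
        for k in (k for k in product(range(m + 1), repeat=n) if sum(k) == m):
            s = 0.0
            for i in range(n):
                for j, aij in a[i].items():
                    idx = tuple(k[q] - j[q] + (1 if q == i else 0) for q in range(n))
                    if min(idx) >= 0 and sum(idx) >= 2:
                        s += (k[i] - j[i] + 1) * aij * A.get(idx, 0.0)
            A[k] = -s / sum(k[i] * lam[i] for i in range(n))
    return A

# Example 1.1 with lambda = 1, a = 0.5, i.e. f1 = -x1, f2 = -x2 + 0.5*x1*x2^2:
A = optimal_lyapunov_coefficients([-1.0, -1.0], [{}, {(1, 2): 0.5}], 6)
# The expansion of x2^2/(2 - 0.5*x1*x2) gives A_(1,3) = 0.125:
print(A[(2, 0)], A[(0, 2)], A[(1, 3)])
```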
Using the above formulae, an explicit optimal Lyapunov function and the region of attraction
Da (0) have been obtained for each of the following systems:
Example 1.1.

ẋ_1 = -λ x_1
ẋ_2 = -λ x_2 + a x_1 x_2²       λ > 0, a ∈ R   (1.15)

The optimal Lyapunov function corresponding to the asymptotically stable steady state x = (0, 0) of this system is

V(x_1, x_2) = x_1²/(2λ) + x_2²/(2λ - a x_1 x_2)   (1.16)

and the region of attraction is

D_a(0, 0) = {x = (x_1, x_2) ∈ R² : a x_1 x_2 < 2λ}   (1.17)

Example 1.2.

ẋ_1 = -λ x_1 + a x_1 x_2
ẋ_2 = -λ x_2       λ > 0, a ∈ R   (1.18)
The optimal Lyapunov function corresponding to the x = (0, 0) asymptotically stable steady
state of this system is
V(x_1, x_2) = x_2²/(2λ) + λ x_1² [e^{2a x_2/λ} - 1 - 2a x_2/λ] / (2a x_2)²   if x_2 ≠ 0
V(x_1, x_2) = x_1²/(2λ)   if x_2 = 0   (1.19)

and the region of attraction is

D_a(0, 0) = R²   (1.20)

Example 1.3.

ẋ_1 = -λ x_1 + ρ_1 x_1² + ρ_2 x_1 x_2
ẋ_2 = -λ x_2 + ρ_1 x_1 x_2 + ρ_2 x_2²       λ > 0, ρ_1, ρ_2 ∈ R   (1.21)
The optimal Lyapunov function corresponding to the x = (0, 0) asymptotically stable steady
state of this system is
V(x_1, x_2) = (x_1² + x_2²) [λ ln(λ/(λ - (ρ_1 x_1 + ρ_2 x_2))) - (ρ_1 x_1 + ρ_2 x_2)] / (ρ_1 x_1 + ρ_2 x_2)²   if ρ_1 x_1 + ρ_2 x_2 ≠ 0
V(x_1, x_2) = (x_1² + x_2²)/(2λ)   if ρ_1 x_1 + ρ_2 x_2 = 0   (1.22)

and the region of attraction is

D_a(0, 0) = {x = (x_1, x_2) ∈ R² : ρ_1 x_1 + ρ_2 x_2 < λ}   (1.23)
Example 1.4.
ẋ_1 = -λ x_1 + ρ x_1³ + ρ x_1 x_2²
ẋ_2 = -λ x_2 + ρ x_1² x_2 + ρ x_2³       λ > 0, ρ ∈ R   (1.24)
The optimal Lyapunov function corresponding to the x = (0, 0) asymptotically stable steady
state of this system is
V(x_1, x_2) = (1/(2ρ)) ln [λ / (λ - ρ(x_1² + x_2²))]   (1.25)

and the region of attraction is

D_a(0, 0) = {x = (x_1, x_2) ∈ R² : λ - ρ(x_1² + x_2²) > 0}   (1.26)
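As a sanity check on these closed forms (and on the reconstruction of the systems above), the defining condition (1.10) can be verified symbolically. The short SymPy sketch below is an illustration added to this text; it is not part of the original computations.

```python
# Sketch: symbolic check that <grad V, f> = -(x1^2 + x2^2) for Examples 1.1 and 1.4.
import sympy as sp

x1, x2, lam, a, rho = sp.symbols('x1 x2 lam a rho', real=True)

def residual(V, f):
    """<grad V, f> + (x1^2 + x2^2); should simplify to 0."""
    return sp.simplify(sp.diff(V, x1) * f[0] + sp.diff(V, x2) * f[1] + x1**2 + x2**2)

# Example 1.1: f = (-lam*x1, -lam*x2 + a*x1*x2**2), V as in (1.16)
f1 = (-lam * x1, -lam * x2 + a * x1 * x2**2)
V1 = x1**2 / (2 * lam) + x2**2 / (2 * lam - a * x1 * x2)
print(residual(V1, f1))   # prints 0

# Example 1.4: f_i = x_i*(-lam + rho*(x1^2 + x2^2)), V as in (1.25)
r2 = x1**2 + x2**2
f4 = (x1 * (-lam + rho * r2), x2 * (-lam + rho * r2))
V4 = sp.log(lam / (lam - rho * r2)) / (2 * rho)
print(residual(V4, f4))   # prints 0
```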
In the same paper [BBN86], the diagonalisable case is also considered and the following result
is established:
Theorem 1.9. (the diagonalisable case) In the conditions of Theorem 1.7, if the matrix Df(0) is diagonalisable, S : C^n → C^n is an isomorphism which reduces Df(0) to the diagonal form S^{-1} Df(0) S = diag(λ_1, λ_2, ..., λ_n), and g = S^{-1} ∘ f ∘ S, then the problem

⟨∇W(z), g(z)⟩ = -‖Sz‖²,   W(0) = 0   (1.27)

has a unique analytic solution W = V ∘ S, where V is the solution of (1.10).
The coefficients B_k (where k = (k_1, k_2, ..., k_n) ∈ N^n) of the power series expansion of W

W(z) = Σ_{|k|=2}^∞ B_k z^k   (1.28)

are given by

B_k = -(1/(2λ_p)) Σ_{i=1}^n s_{ip}²   if |k| = k_p = 2
B_k = -(2/(λ_p + λ_q)) Σ_{i=1}^n s_{ip} s_{iq}   if |k| = 2 and k_p = k_q = 1
B_k = -(1 / Σ_{i=1}^n k_i λ_i) Σ_{|j|=2, j≤k}^{|k|-1} Σ_{i=1}^n (k_i - j_i + 1) b^i_j B_{k-j+e^i}   if |k| ≥ 3   (1.29)

where b^i_j are the coefficients of the expansions of g_i

g_i(z) = λ_i z_i + Σ_{|j|=2}^∞ b^i_j z^j   (1.30)

and S = (s_{ij}).
The domain of convergence of the series (1.28) is

D^0 = {z ∈ C^n : lim_m (Σ_{|k|=m} |B_k z^k|)^{1/m} < 1}   (1.31)

and S(D^0) is a part of the domain of attraction D_a(0) (see [BNBS87]).
Example 1.5.

ẋ_1 = -α x_1 - β x_2 + (x_1² + x_2²)(ρ_1 x_1 + ρ_2 x_2)
ẋ_2 = β x_1 - α x_2 + (x_1² + x_2²)(ρ_1 x_2 - ρ_2 x_1)       α > 0, β > 0, ρ_1, ρ_2 ∈ R   (1.32)

The isomorphism which diagonalises Df(0, 0) = [[-α, -β], [β, -α]] is S = [[-i, i], [1, 1]].

The coefficients of W defined by (1.27) are: B_{k,k} = 2^{2k-1} ρ_1^{k-1} / (k α^k) for k ≥ 1, and B_{k_1,k_2} = 0 if k_1 ≠ k_2.
Hence, if ρ_1 ≠ 0, the function W is given by

W(z_1, z_2) = -(1/(2ρ_1)) ln(1 - 4ρ_1 z_1 z_2 / α)   (1.33)

The optimal Lyapunov function which corresponds to x = (0, 0) is

V(x_1, x_2) = -(1/(2ρ_1)) ln(1 - ρ_1 (x_1² + x_2²)/α)   (1.34)

and the region of attraction is

D_a(0, 0) = {x = (x_1, x_2) ∈ R² : α - ρ_1 (x_1² + x_2²) > 0}   (1.35)

If ρ_1 = 0, the function W is given by

W(z_1, z_2) = (2/α) z_1 z_2   (1.36)

The optimal Lyapunov function which corresponds to x = (0, 0) is

V(x_1, x_2) = (1/(2α)) (x_1² + x_2²)   (1.37)

and the region of attraction is

D_a(0, 0) = R²   (1.38)
1.2.2 Determining the region of attraction by the gradual extension of the
optimal Lyapunov function’s embryo
When the function f is R-analytic, the real parts of the eigenvalues of the matrix Df(0) are negative, and the matrix Df(0) is diagonalisable, the optimal Lyapunov function V can be found theoretically by computing the coefficients of its power series expansion at 0, without knowing the solutions of system (1.8). More precisely, in this way, the embryo V^0 (i.e. the sum of the series) of the function V is found theoretically on the domain of convergence D^0 of the power series expansion. D^0 is a part of D_a(0), and if D^0 is a strict part of D_a(0), then the embryo V^0 can be extended using the extension algorithm for analytic functions:

If D^0 is strictly contained in D_a(0), then there exists a point x ∈ ∂D^0 such that the function V^0 is bounded on a neighborhood of x. Consider a point x^1 ∈ D^0 close to x, and the power series expansion of V^0 at x^1 (the coefficients of this expansion are determined by the derivatives of V^0 at x^1). The domain of convergence D^1 of the series centered at x^1 gives a new part D^1 \ (D^0 ∩ D^1) of the domain of attraction D_a(0). The sum V^1 of the series centered at x^1 is an extension of the function V^0 to D^1 and coincides with V on D^1. At this step, the part D^0 ∪ D^1 of D_a(0) and the restriction of V to D^0 ∪ D^1 are obtained.

If there exists a point x ∈ ∂(D^0 ∪ D^1) such that the function V restricted to D^0 ∪ D^1 is bounded on a neighborhood of x, then the domain D^0 ∪ D^1 is strictly included in the domain of attraction D_a(0). In this case, the procedure described above is repeated at a point x^2 close to x.

The procedure cannot be continued when it is found that on the boundary of the domain D^0 ∪ D^1 ∪ ... ∪ D^m obtained at step m, there are no points having neighborhoods on which the restriction of V to D^0 ∪ D^1 ∪ ... ∪ D^m is bounded. We illustrate this process in the following example:
Example 1.6. Consider the following differential equation:
ẋ = x(x - 1)(x + 2)
(1.39)
x = 0 is an asymptotically stable steady state for this equation. The coefficients of the power series expansion at 0 of the optimal Lyapunov function are computed using (1.13):

A_n = [2^{n-1} + (-1)^n] / (3 n 2^{n-1}),   n ≥ 2   (1.40)

The domain of convergence of the series of V centered at 0 is D^0 = (-1, 1). The embryo V^0 is unbounded at 1 and bounded at -1, as V^0(-1) = (ln 2)/3. Expanding V^0 at -0.9, close to -1, the coefficients of the series centered at -0.9 are:

A'_n = (1/(3n)) [1/(1.9)^n + 2(-1)^n/(1.1)^n],   n ≥ 2   (1.41)

The domain of convergence of the series of V centered at -0.9 is D^1 = (-2, 0.2). So far, we have obtained the part D = D^0 ∪ D^1 = (-2, 1) of the domain of attraction D_a(0). As the function V is unbounded at both ends of this interval, we conclude that D_a(0) = (-2, 1).
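The coefficients (1.40) can also be reproduced directly from the one-dimensional form of the recursion (1.13); the short Python sketch below, added here as an illustration, does this and also shows the ratio A_n/A_{n+1} approaching the radius of convergence 1.

```python
# Sketch: 1-D recursion (1.13) for f(x) = x(x-1)(x+2) = -2x + x^2 + x^3,
# compared against the closed form (1.40).
def coefficients(p):
    lam, a = -2.0, {2: 1.0, 3: 1.0}            # f(x) = lam*x + a2*x^2 + a3*x^3
    A = {2: -1.0 / (2.0 * lam)}
    for k in range(3, p + 1):
        s = sum((k - j + 1) * aj * A[k - j + 1] for j, aj in a.items() if j <= k - 1)
        A[k] = -s / (k * lam)
    return A

A = coefficients(12)
for n in (2, 3, 6, 12):
    closed = (2 ** (n - 1) + (-1) ** n) / (3.0 * n * 2 ** (n - 1))
    print(n, A[n], closed)                      # the two columns agree
print(A[11] / A[12])                            # ratio close to 1 (radius of D^0)
```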
In practice, the following algorithm is used:

1. The isomorphism S : C^n → C^n which reduces Df(0) to the diagonal form is found.

2. The coefficients B_k given by (1.29) of the function W are computed up to a finite degree p = |k|, and the following Taylor polynomial of the embryo W^0 is built:

   W_p^0(z) = Σ_{|k|=2}^p B_k z^k   (1.42)

3. The set

   D_p^0 = {z ∈ C^n : Σ_{|k|=p} |B_k z^k| < 1}   (1.43)

   is considered.

4. The corresponding Taylor polynomial of the embryo V^0 of the optimal Lyapunov function V is built:

   V_p^0 : S(D_p^0) ⊂ R^n → R,   V_p^0 = W_p^0 ∘ S^{-1}   (1.44)

5. The first approximation of D_a(0) is D_{ap}^0 = L[V_p^0] ⊂ S(D_p^0), the domain on which V_p^0 satisfies Lyapunov's conditions:

   ⟨∇V_p^0(x), f(x)⟩ < 0 for any x ∈ L[V_p^0] \ {0}
   V_p^0(x) ≥ 0 for any x ∈ L[V_p^0]

6. A point z^1 ∈ ∂S^{-1}(D_{ap}^0) is chosen such that |W_p^0(z^1)| = min_{z ∈ ∂S^{-1}(D_{ap}^0)} |W_p^0(z)|, and the Taylor polynomial of W^1 at z^1 is built:

   W_p^1(z) = Σ_{|k|=0}^p B_k^1 (z - z^1)^k   (1.45)

   where B_k^1 = (1/k!) ∂^{|k|} W_p^0 / ∂z^k (z^1)   (1.46)

7. The set

   D_p^1 = {z ∈ C^n : Σ_{|k|=p} |B_k^1 (z - z^1)^k| < 1}   (1.47)

   is considered.

8. The following Taylor polynomial of the optimal Lyapunov function V is built:

   V_p^1 : S(D_p^1) ⊂ R^n → R,   V_p^1 = W_p^1 ∘ S^{-1}   (1.48)

9. The set D_{ap}^1 = L[V_p^1] ⊂ S(D_p^1), on which V_p^1 satisfies Lyapunov's conditions, is a new, open and nonempty part of D_a(0).

10. This procedure is continued until an estimate D_{ap}^0 ∪ D_{ap}^1 ∪ ... ∪ D_{ap}^k of D_a(0) is obtained.
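The thesis implements this algorithm in Mathematica 5.0 (see Section 1.4). Purely as a schematic illustration of step 5, the Python fragment below estimates on a grid the set where a given polynomial V satisfies Lyapunov's conditions; the grid-based test and all names are assumptions of this sketch, not the original implementation, and it is exercised with the decoupled system (1.49) discussed below.

```python
# Sketch of step 5: grid estimate of the set where V > 0 and <grad V, f> < 0.
import numpy as np

def lyapunov_mask(V, gradV, f, xs, ys):
    """Boolean mask over the grid xs x ys where Lyapunov's conditions hold."""
    X, Y = np.meshgrid(xs, ys)
    gx, gy = gradV(X, Y)
    fx, fy = f(X, Y)
    return (V(X, Y) > 0) & (gx * fx + gy * fy < 0)

# System (1.49) and the degree-2 Taylor polynomial of its optimal Lyapunov function:
f = lambda x, y: (-x + x**2, -y + y**2)
V = lambda x, y: 0.5 * (x**2 + y**2)
gradV = lambda x, y: (x, y)
xs = ys = np.linspace(-1.5, 1.5, 301)
mask = lyapunov_mask(V, gradV, f, xs, ys)
print(mask.mean())   # fraction of grid points where both conditions hold
```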
In what follows, we present some examples of systems of two or three differential equations for which the region of attraction of the null solution can easily be computed. We apply the technique presented above to these examples, and we show how the true regions of attraction are gradually approximated. These examples are meant to illustrate the procedure presented above [KBB03].
The computations were made using Mathematica 5.0, Wolfram Research. In our figures, the thick black line represents the true boundary of the domain of attraction, the dark grey set denotes the first estimate D_{ap}^0, while the further estimates D_{ap}^k are colored in lighter shades of grey.
Example 1.7. The following decoupled system is considered:

ẋ_1 = -x_1 + x_1²
ẋ_2 = -x_2 + x_2²   (1.49)

The region of attraction of the (0, 0) solution is

D_a(0, 0) = {(x_1, x_2) ∈ R² : x_1 < 1 and x_2 < 1}

The optimal Lyapunov function for this system is:

V(x_1, x_2) = -x_1 - ln(1 - x_1) - x_2 - ln(1 - x_2)   for x_1 < 1 and x_2 < 1

The order of approximation is p = 300 and the gradual approximation of D_a(0, 0) is presented in Figures 1.1.1-1.1.2.
41
1.5
1
1
0
0.5
-1
0
-2
-0.5
-1
-3
-1.5
-4
-2
-4
-3
-1
-2
0
-2
1
-1.5
-1
-0.5
0
0.5
1
1.5
Figure 1.1.1: The estimate D0ap ÇD1ap ÇD2ap ÇD3ap Figure 1.1.2: The estimate of the Da (0, 0) after
of Da (0, 0) for system (1.49)
2 steps, in a point close to ¶Da (0, 0) for system
(1.49)
Example 1.8. Consider the following system of differential equations:

ẋ_1 = -x_1 [4 - (x_1 - 1)² - x_2²]
ẋ_2 = -x_2 [4 - (x_1 - 1)² - x_2²]   (1.50)

The region of attraction of the null solution is the interior of the circle of radius 2 centered at (1, 0):

D_a(0, 0) = {(x_1, x_2) ∈ R² : (x_1 - 1)² + x_2² < 4}

The order of approximation is p = 200 and the gradual approximation of D_a(0, 0) is presented in Figures 1.2.1-1.2.2.
Figure 1.2.1: D_{ap}^0 for system (1.50)
Figure 1.2.2: D_{ap}^0 ∪ D_{ap}^1 ∪ D_{ap}^2 for system (1.50)
Example 1.9. The following system is considered:

ẋ_1 = -x_1 + x_1 x_2
ẋ_2 = -x_2 + x_1 x_2   (1.51)

The boundary of D_a(0, 0) for the null solution of this system is:

∂D_a(0, 0) = {(x_1, x_2) ∈ R*²_+ : x_2 e^{x_1} - x_1 e^{x_2} = 0, x_1 ≠ x_2}

The order of approximation is p = 200 and the gradual approximation of D_a(0, 0) is presented in Figures 1.3.1-1.3.2.
Figure 1.3.1: D_{ap}^0 for system (1.51)
Figure 1.3.2: D_{ap}^0 ∪ D_{ap}^1 ∪ D_{ap}^2 ∪ D_{ap}^3 ∪ D_{ap}^4 for system (1.51)
Example 1.10. The following system of three differential equations is considered:

ẋ_1 = -x_1 (1 - x_1² - x_2² + x_3²)
ẋ_2 = -x_2 (1 - x_1² - x_2² + x_3²)
ẋ_3 = -x_3 (1 - x_1² - x_2² + x_3²)   (1.52)

The boundary of D_a(0, 0, 0) is:

∂D_a(0, 0, 0) = {(x_1, x_2, x_3) ∈ R³ : 1 - x_1² - x_2² + x_3² = 0}

The order of approximation is p = 100 and the gradual approximation of D_a(0, 0, 0) is presented in Figures 1.4.1-1.4.2.
Figure 1.4.1: D_{ap}^0 for system (1.52)
Figure 1.4.2: D_{ap}^0 ∪ D_{ap}^1 for system (1.52)
In what follows, some systems of differential equations are presented for which the region of attraction is not known. For these examples, we apply the technique presented above and compare the results with those obtained in [GTV85, CTVG01, MSM82].
Example 1.11. In [CTVG01], the following example is considered:

ẋ_1 = -2x_1 - 3x_2 + x_1² - x_2² + x_1 x_2
ẋ_2 = x_1   (1.53)
The order of approximation is set to p = 100. In Figures 1.5.1-1.5.4 the black ellipse represents the boundary of the estimate of D_a(0, 0) obtained in [CTVG01]. Figure 1.5.1 presents the first estimate D_{ap}^0 of D_a(0, 0), compared to the estimate given in [CTVG01]. In Figure 1.5.2, a second estimate D_{ap}^1 is also shown. In Figure 1.5.3, three third estimates D_{ap}^2 are shown, obtained for three different points close to the boundary of the second estimate. We observe that the estimate from Figure 1.5.3 covers the one presented in [CTVG01]. Figure 1.5.4 presents an estimate of D_a(0, 0) obtained after 4 steps.
Figure 1.5.1: The estimate of D_a(0, 0) obtained after 1 step for system (1.53)
Figure 1.5.2: The estimate of D_a(0, 0) obtained after 2 steps for system (1.53)
Figure 1.5.3: The estimate of D_a(0, 0) after 3 steps, for three different points close to the boundary of D_{ap}^1, for system (1.53)
Figure 1.5.4: The estimate of D_a(0, 0) obtained after 4 steps for system (1.53)
Example 1.12. In [MSM82], the following system is considered:

ẋ_1 = -2x_1 (9 - x_1²) + 0.1(x_1 - x_2)
ẋ_2 = -2x_2 (1 - x_2²) + 0.1 x_1 x_2   (1.54)
Figure 1.6.1: The estimate of D_a(0, 0) obtained after 1 step for system (1.54)
Figure 1.6.2: The estimate of D_a(0, 0) obtained after 6 steps for system (1.54)
The order of approximation is p = 200. Figures 1.6.1-1.6.2 present the estimate of D_a(0, 0) obtained after one and six steps, respectively. Compared to other estimates obtained by the "trajectory reversing method" [GTV85], our estimate is larger.
Example 1.13. In [CTVG01], the following example is considered:

ẋ_1 = -2x_1 - x_2 - x_1³ + x_1⁴ x_2 + x_2⁵
ẋ_2 = x_1   (1.55)
The order of approximation is p = 300. Figure 1.7.1 shows the estimate of D_a(0, 0) obtained after one step. This first estimate is almost the same as the estimate of D_a(0, 0) given in [CTVG01]. The estimate of D_a(0, 0) obtained after two steps is shown in Figure 1.7.2. In these figures, the thick black line represents a numerical approximation of an unstable periodic solution of system (1.55), which is the boundary of the region of attraction of (0, 0).
Figure 1.7.1: The estimate D_{ap}^0 for system (1.55)
Figure 1.7.2: The estimate D_{ap}^0 ∪ D_{ap}^1 for system (1.55)
Example 1.14. In [CTVG01], the following system of three differential equations is considered:

ẋ_1 = x_2
ẋ_2 = x_3
ẋ_3 = -3x_1 - 3x_2 - 2x_3 + x_1³ + x_2³ + x_3³   (1.56)
The order of approximation is p = 100 and the gradual approximation of D_a(0, 0, 0) is presented in Figures 1.8.1-1.8.2.

Figure 1.8.1: The estimate D_{ap}^0 for system (1.56)
Figure 1.8.2: The estimate D_{ap}^0 ∪ D_{ap}^1 for system (1.56)
1.2.3 Properties of the Taylor polynomials V_p^0 of the optimal Lyapunov function and another method for approximating the domains of attraction
For r > 0, we denote by B(r) = {x ∈ R^n : ‖x‖ < r} the hypersphere of radius r.

Theorem 1.10. For any p ≥ 2, there exists r_p > 0 such that for any x ∈ B(r_p) \ {0} one has:

1. V_p^0(x) > 0
2. ⟨∇V_p^0(x), f(x)⟩ < 0
Proof. First, we prove that for p = 2 the function V_2^0 has the properties 1. and 2. For this, write the function f as:

f(x) = Ax + g(x),   with A = Df(0)   (1.57)

and the equation

⟨∇V(x), f(x)⟩ = -‖x‖²   (1.58)

as

⟨∇V_2^0(x), Ax⟩ + ⟨∇(V - V_2^0)(x), Ax + g(x)⟩ + ⟨∇V_2^0(x), g(x)⟩ = -‖x‖²   (1.59)

Equating the terms of second degree, we obtain:

⟨∇V_2^0(x), Ax⟩ = -‖x‖²   (1.60)

As V_2^0(0) = 0, it results that:

V_2^0(x) = ∫_0^∞ ‖e^{At} x‖² dt   (1.61)

This shows that V_2^0(x) > 0 for any x ∈ R^n \ {0}.
On the other hand, one has:

⟨∇V_2^0(x), f(x)⟩ = ⟨∇V_2^0(x), Ax⟩ + ⟨∇V_2^0(x), g(x)⟩ = -‖x‖² + ⟨∇V_2^0(x), g(x)⟩ = -‖x‖² [1 - ⟨∇V_2^0(x), g(x)⟩/‖x‖²]   (1.62)

As lim_{‖x‖→0} ⟨∇V_2^0(x), g(x)⟩/‖x‖² = 0, there exists r_2 > 0 such that for any x ∈ B(r_2) \ {0} we have |⟨∇V_2^0(x), g(x)⟩/‖x‖²| < 1/2. Therefore, for any x ∈ B(r_2) \ {0}, we get that:

⟨∇V_2^0(x), f(x)⟩ ≤ -(1/2)‖x‖²   (1.63)
We now show that for any p > 2 the function V_p^0 satisfies conditions 1. and 2. Write the function V_p^0 as

V_p^0(x) = V_2^0(x) [1 + (V_p^0(x) - V_2^0(x))/V_2^0(x)],   x ≠ 0   (1.64)

As lim_{‖x‖→0} (V_p^0(x) - V_2^0(x))/V_2^0(x) = 0, there exists r_p^1 such that for any x ∈ B(r_p^1) \ {0} we have |(V_p^0(x) - V_2^0(x))/V_2^0(x)| < 1/2. Therefore, for any x ∈ B(r_p^1) \ {0}, we have:

V_p^0(x) ≥ (1/2) V_2^0(x) > 0   (1.65)

thus V_p^0 satisfies condition 1.
On the other hand, we have:

⟨∇V_p^0(x), f(x)⟩ = ⟨∇V_2^0(x), Ax⟩ [1 + (⟨∇(V_p^0 - V_2^0)(x), f(x)⟩ + ⟨∇V_2^0(x), g(x)⟩) / ⟨∇V_2^0(x), Ax⟩]
= -‖x‖² [1 - (⟨∇(V_p^0 - V_2^0)(x), f(x)⟩ + ⟨∇V_2^0(x), g(x)⟩)/‖x‖²]   (1.66)

As lim_{‖x‖→0} (⟨∇(V_p^0 - V_2^0)(x), f(x)⟩ + ⟨∇V_2^0(x), g(x)⟩)/‖x‖² = 0, there exists r_p^2 such that for any x ∈ B(r_p^2) \ {0} we have |(⟨∇(V_p^0 - V_2^0)(x), f(x)⟩ + ⟨∇V_2^0(x), g(x)⟩)/‖x‖²| < 1/2. Therefore, for any x ∈ B(r_p^2) \ {0}, we have:

⟨∇V_p^0(x), f(x)⟩ ≤ -(1/2)‖x‖²   (1.67)

Therefore, for any x ∈ B(r_p) \ {0}, where r_p = min{r_p^1, r_p^2}, the function V_p^0 satisfies conditions 1. and 2.
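The quadratic part V_2^0 in (1.61) is the quadratic form x^T P x, where P solves the Lyapunov matrix equation A^T P + P A = -I. The thesis does not state this as code; the following short sketch, added here as an illustration, checks the identity numerically with SciPy's Lyapunov solver for an arbitrary Hurwitz matrix.

```python
# Sketch: V_2^0(x) = x^T P x with A^T P + P A = -I, compared with the integral (1.61).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, expm

A = np.array([[-1.0, 2.0], [0.0, -3.0]])          # any matrix with eigenvalues in the left half-plane
P = solve_continuous_lyapunov(A.T, -np.eye(2))    # solves A^T P + P A = -I

x = np.array([0.7, -0.4])
V2_quadratic = x @ P @ x

# Crude numerical evaluation of the integral in (1.61):
ts = np.linspace(0.0, 20.0, 2001)
V2_integral = np.trapz([np.linalg.norm(expm(A * t) @ x) ** 2 for t in ts], ts)
print(V2_quadratic, V2_integral)                   # the two values agree closely
```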
Corollary 1.1. For any p ≥ 2, there exists a maximal domain G_p ⊂ R^n such that 0 ∈ G_p and, for any x ∈ G_p \ {0}, the function V_p^0 verifies 1. and 2. from Theorem 1.10. In other words, for any p ≥ 2 the function V_p^0 is a Lyapunov function for (1.8) on the maximal domain G_p.

Remark 1.10. Theorem 1.10 shows that the Taylor polynomials of degree p ≥ 2 associated to V at 0 are Lyapunov functions on G_p. This sequence of Lyapunov functions is different from the one provided by Vanelli and Vidyasagar in [VV85].
Theorem 1.11. For any p ≥ 2, there exist c > 0 and a closed and connected set S of points of R^n with the following properties:

1. 0 ∈ Int(S)
2. V_p^0(x) < c for any x ∈ Int(S)
3. V_p^0(x) = c for any x ∈ ∂S
4. S is compact and included in the set G_p.
Proof. Let p ≥ 2 and let r_p > 0 be as determined in Theorem 1.10. Let c = min_{‖x‖=r_p} V_p^0(x) and S' = {x ∈ B(r_p) : V_p^0(x) < c}. It is obvious that c > 0 and that there exists x* with ‖x*‖ = r_p such that V_p^0(x*) = c. The set S' is open, 0 ∈ S' and S' ⊂ B(r_p) ⊂ G_p.

We prove that V_p^0(x) = c for any x ∈ ∂S'. Let x̄ ∈ ∂S'. Thus, ‖x̄‖ ≤ r_p and there exists a sequence x^k ∈ S' such that x^k → x̄ as k → ∞. As V_p^0(x^k) < c, we have that V_p^0(x̄) = lim_{k→∞} V_p^0(x^k) ≤ c. The case ‖x̄‖ = r_p and V_p^0(x̄) < c is impossible, because c = min_{‖x‖=r_p} V_p^0(x). The case ‖x̄‖ < r_p and V_p^0(x̄) < c is also impossible, because this would mean that x̄ belongs to the set S', and not to its boundary. Therefore, for any x̄ ∈ ∂S' we have V_p^0(x̄) = c.

If the set S' is not connected (see Example 1.15 below), then we denote by S'' its connected component which contains the origin, and we set S = cl(S''). It is obvious that S is connected (being the closure of the open connected set S''), 0 ∈ Int(S) = S'', and for any x ∈ Int(S) = S'' we have V_p^0(x) < c. Moreover, as ∂S = ∂S'', we have V_p^0(x) = c for any x ∈ ∂S. As S'' is bounded, its closure S is also bounded, thus S is compact. As S'' ⊂ B(r_p) ⊂ G_p, we have S = cl(S'') ⊂ cl(B(r_p)) ⊂ G_p. Therefore, S possesses the properties 1-4.
Lemma 1.1. Let p ≥ 2 and c > 0. If a closed and connected set S satisfies 1-4 from Theorem 1.11, then for any x^0 ∈ S the solution x(t; 0, x^0) of system (1.8) starting from x^0 is defined on [0, ∞) and belongs to Int(S) for any t > 0.

Proof. Let x^0 ∈ S. We denote by [0, β_{x^0}) the right maximal interval of existence of the solution x(t; 0, x^0) of system (1.8) with initial state x^0.

First, if x^0 ∈ Int(S) \ {0}, we show that x(t; 0, x^0) ∈ Int(S) for all t ∈ [0, β_{x^0}). Suppose the contrary, i.e. there exists T ∈ (0, β_{x^0}) such that x(t; 0, x^0) ∈ Int(S) for t ∈ [0, T) and x(T; 0, x^0) ∈ ∂S (i.e. V_p^0(x(T; 0, x^0)) = c). As x(t; 0, x^0) ∈ G_p \ {0} for t ∈ [0, T), V_p^0(x(t; 0, x^0)) is strictly decreasing, and it follows that V_p^0(x(t; 0, x^0)) < V_p^0(x^0) < c for t ∈ (0, T). Therefore V_p^0(x(T; 0, x^0)) < c, which contradicts the supposition x(T; 0, x^0) ∈ ∂S. Thus, x(t; 0, x^0) ∈ Int(S) for all t ∈ [0, β_{x^0}). (It is clear that for x^0 = 0 the solution x(t; 0, 0) = 0 ∈ Int(S) for all t ≥ 0.)

If x^0 ∈ ∂S, we show that x(t; 0, x^0) ∈ Int(S) for all t ∈ (0, β_{x^0}). As the compact set S is a subset of the domain G_p, the continuity of x(t; 0, x^0) provides the existence of T_{x^0} > 0 such that x(t; 0, x^0) ∈ G_p \ {0} for any t ∈ [0, T_{x^0}] ⊂ [0, β_{x^0}). Therefore V_p^0(x(t; 0, x^0)) is strictly decreasing on [0, T_{x^0}], and it follows that V_p^0(x(t; 0, x^0)) < V_p^0(x^0) = c for any t ∈ (0, T_{x^0}]. This means that x(t; 0, x^0) ∈ Int(S) for any t ∈ (0, T_{x^0}]. The first part of the proof guarantees that x(t; 0, x^0) ∈ Int(S) for all t ∈ [T_{x^0}, β_{x^0}), therefore for all t ∈ (0, β_{x^0}).

In conclusion, for any x^0 ∈ S we have that x(t; 0, x^0) ∈ Int(S) for all t ∈ (0, β_{x^0}).
As, for any x^0 ∈ S, the solution x(t; 0, x^0), defined on [0, β_{x^0}), belongs to the compact set S, we obtain that β_{x^0} = ∞, so the solution x(t; 0, x^0) is defined on [0, ∞) for each x^0 ∈ S. Moreover, x(t; 0, x^0) ∈ Int(S) for all t > 0.
Remark 1.11. Lemma 1.1 states that a closed and connected set S satisfying 1-4 from Theorem 1.11 is positively invariant to the flow of system (1.8).

Theorem 1.12. (LaSalle-type theorem) Let p ≥ 2 and c > 0. If a closed and connected set S satisfies 1-4 from Theorem 1.11, then S is a part of the domain of attraction D_a(0).
Proof. Let x^0 ∈ S \ {0}. In order to prove that lim_{t→∞} x(t; 0, x^0) = 0, it is sufficient to prove that lim_{k→∞} x(t_k; 0, x^0) = 0 for any sequence t_k → ∞.

Consider t_k → ∞. The terms of the sequence x(t_k; 0, x^0) belong to the compact set S. Thus, there exists a convergent subsequence x(t_{k_j}; 0, x^0) → y^0 ∈ S.
It can be shown that

V_p^0(x(t; 0, x^0)) ≥ V_p^0(y^0)   for all t ≥ 0   (1.68)

Indeed, observe that x(t_{k_j}; 0, x^0) → y^0 and V_p^0 is strictly decreasing along the trajectories. It follows that V_p^0(x(t_{k_j}; 0, x^0)) ≥ V_p^0(y^0) for any k_j. On the other hand, for any t ≥ 0 there exists k_j such that t_{k_j} ≥ t. Therefore V_p^0(x(t; 0, x^0)) ≥ V_p^0(x(t_{k_j}; 0, x^0)) ≥ V_p^0(y^0) for any t ≥ 0.

We now show that y^0 = 0. Suppose the contrary, i.e. y^0 ≠ 0. Inequality (1.68) becomes

V_p^0(x(t; 0, x^0)) ≥ V_p^0(y^0) > 0   for all t ≥ 0   (1.69)

As V_p^0(x(s; 0, y^0)) is strictly decreasing on [0, ∞), we find that

V_p^0(x(s; 0, y^0)) < V_p^0(y^0)   for all s > 0   (1.70)

For s̄ > 0, there exists a neighborhood U_{x(s̄;0,y^0)} ⊂ S of x(s̄; 0, y^0) such that for any x ∈ U_{x(s̄;0,y^0)} we have 0 < V_p^0(x) < V_p^0(y^0). On the other hand, for the neighborhood U_{x(s̄;0,y^0)} there exists a neighborhood U_{y^0} ⊂ S of y^0 such that x(s̄; 0, y) ∈ U_{x(s̄;0,y^0)} for any y ∈ U_{y^0}. Therefore:

V_p^0(x(s̄; 0, y)) < V_p^0(y^0)   for all y ∈ U_{y^0}   (1.71)

As x(t_{k_j}; 0, x^0) → y^0, there exists k_{j̄} such that x(t_{k_j}; 0, x^0) ∈ U_{y^0} for any k_j ≥ k_{j̄}. Taking y = x(t_{k_j}; 0, x^0) in (1.71), it results that

V_p^0(x(s̄ + t_{k_j}; 0, x^0)) = V_p^0(x(s̄; 0, x(t_{k_j}; 0, x^0))) < V_p^0(y^0)   for k_j ≥ k_{j̄}   (1.72)

which contradicts (1.69). This means that y^0 = 0 and, consequently, every convergent subsequence of x(t_k; 0, x^0) converges to 0. This implies that the sequence x(t_k; 0, x^0) converges to 0 for any t_k → ∞, thus lim_{t→∞} x(t; 0, x^0) = 0, and x^0 ∈ D_a(0).
Therefore, the set S is contained in the domain of attraction D_a(0).
Corollary 1.2. For a given p ≥ 2 and c > 0 there exists at most one closed and connected set satisfying 1-4 from Theorem 1.11.

Proof. Suppose the contrary, i.e. for some p ≥ 2 and c > 0 there exist two different closed and connected sets S_1 and S_2 satisfying 1-4 from Theorem 1.11. Assume for example that there exists x^0 ∈ S_1 \ S_2. Due to Theorem 1.12, S_1 ⊂ D_a(0) and therefore lim_{t→∞} x(t; 0, x^0) = 0. As x^0 ∉ S_2, and S_2 is a closed and connected neighborhood of 0, there exists T > 0 such that x(T; 0, x^0) ∈ ∂S_2. Therefore V_p^0(x(T; 0, x^0)) = c, which contradicts Lemma 1.1. Consequently, we must have S_1 ⊆ S_2. By the same reasoning, we must have S_2 ⊆ S_1. Finally, S_1 = S_2.
Remark 1.12. If for a p ≥ 2 and a c > 0 there exists a closed and connected set satisfying 1-4 from Theorem 1.11, then it is unique and it will be denoted by N_p^c. According to Theorem 1.11, for any p ≥ 2 there exists c > 0 such that the set N_p^c exists.

Corollary 1.3. Any set N_p^c is included in the domain of attraction D_a(0).
Lemma 1.2. If for a p ≥ 2 and a c > 0 the set N_p^c exists, then for any c' ∈ (0, c] the set N_p^{c'} exists and coincides with the set {x ∈ N_p^c : V_p^0(x) ≤ c'}.

Proof. Let c' ∈ (0, c]. It is obvious that if N_p^{c'} exists then it is included in the set {x ∈ N_p^c : V_p^0(x) ≤ c'}. On the other hand, let x^0 ∈ N_p^c be such that V_p^0(x^0) ≤ c'. We know that V_p^0(x(t; 0, x^0)) < V_p^0(x^0) ≤ c' for any t > 0. Theorem 1.12 provides that N_p^c ⊂ D_a(0) and therefore x^0 is connected to 0 through the continuous trajectory x(t; 0, x^0), along which V_p^0 takes values below c'. It follows that N_p^{c'} exists and x^0 ∈ N_p^{c'}.
Theorem 1.13. If for a p ≥ 2 and a c > 0 the set N_p^c exists, then for any c' ∈ (0, c) the set N_p^{c'} exists and N_p^{c'} ⊂ N_p^c. Moreover, for any c_1, c_2 ∈ (0, c) we have N_p^{c_1} ⊂ N_p^{c_2} if and only if c_1 < c_2.

Proof. Lemma 1.2 provides that for any c' ∈ (0, c) the set N_p^{c'} = {x ∈ N_p^c : V_p^0(x) ≤ c'} exists. It is obvious that N_p^{c'} ⊂ N_p^c.
Let us show that for any c_1, c_2 ∈ (0, c) we have N_p^{c_1} ⊂ N_p^{c_2} if and only if c_1 < c_2.

Assume first that N_p^{c_1} ⊂ N_p^{c_2} and show that c_1 < c_2. Suppose the contrary, i.e. N_p^{c_1} ⊂ N_p^{c_2} and c_1 ≥ c_2. Let x^0 ∈ ∂N_p^{c_2} ⊂ G_p. Then V_p^0(x^0) = c_2 and, as x^0 ∈ G_p, we get that

V_p^0(x(t; 0, x^0)) ≤ V_p^0(x^0) = c_2   for any t ≥ 0   (1.73)

Theorem 1.12 provides that x^0 ∈ ∂N_p^{c_2} ⊂ D_a(0), therefore lim_{t→∞} x(t; 0, x^0) = 0. As N_p^{c_1} and N_p^{c_2} are connected neighborhoods of 0 and N_p^{c_1} ⊂ N_p^{c_2}, there exists T ≥ 0 such that x(T; 0, x^0) ∈ ∂N_p^{c_1}. This means that V_p^0(x(T; 0, x^0)) = c_1 ≥ c_2. On the other hand, relation (1.73) provides that V_p^0(x(T; 0, x^0)) ≤ c_2, and therefore c_1 = c_2. As N_p^{c_1} is strictly included in N_p^{c_2}, there exists x̄ ∈ ∂N_p^{c_1} (i.e. V_p^0(x̄) = c_1 = c_2) such that x̄ ∈ Int(N_p^{c_2}). This contradicts property 2 from Theorem 1.11 concerning N_p^{c_2}. In conclusion, c_1 < c_2.

Let us suppose now that c_1 < c_2 and let x^0 ∈ N_p^{c_1} \ {0}. As x^0 ∈ N_p^{c_1} ⊂ D_a(0), we have that lim_{t→∞} x(t; 0, x^0) = 0, so x^0 is connected to 0 through the continuous trajectory x(t; 0, x^0). Moreover, as x^0 ∈ N_p^{c_1} \ {0}, we have V_p^0(x^0) ≤ c_1 < c_2. This means that x^0 ∈ N_p^{c_2}, therefore N_p^{c_1} ⊆ N_p^{c_2}. The inclusion is strict, because N_p^{c_1} = N_p^{c_2} would mean ∂N_p^{c_1} = ∂N_p^{c_2}, i.e. c_1 = c_2, which contradicts c_1 < c_2.
Corollary 1.4. For a given p ≥ 2, the family of all sets N_p^c is totally ordered and ∪_c N_p^c is included in D_a(0). Therefore, for a given p ≥ 2, the largest part of D_a(0) which can be found by this method is ∪_c N_p^c.
For a p ≥ 2, let R_p = {r > 0 : B(r) ⊂ G_p}. For r ∈ R_p we denote c_p^r = inf_{‖x‖=r} V_p^0(x).

Corollary 1.5. For any r ∈ R_p the set N_p^{c_p^r} exists and N_p^{c_p^r} ⊆ B(r).

Corollary 1.6. For a p ≥ 2 and r', r'' ∈ R_p, r' < r'', if V_p^0 is radially increasing on B(r'') then c_p^{r'} < c_p^{r''}.

Remark 1.13. In some cases, it can be shown that the function V_p^0 is radially increasing on G_p:

a. V_2^0 is radially increasing on R^n;
b. if n = 1, then for any p ≥ 2, V_p^0 is radially increasing on G_p.
This result is not true in general, as shown by the following example:

Example 1.15. Consider the following system of differential equations:

ẋ_1 = -x_1 + x_1 x_2
ẋ_2 = -x_2 - x_1 x_2   (1.74)

for which (0, 0) is an asymptotically stable steady state. For p = 3 the Lyapunov function V_3^0(x_1, x_2) is given by:

V_3^0(x_1, x_2) = (1/2)(x_1² + x_2²) + (1/3)(x_1 x_2² - x_2 x_1²)   (1.75)

Consider the point (3√5, √5) ∈ ∂G_3 and let g : [0, 1) → R be defined by g(λ) = V_3^0(3√5 λ, √5 λ). The function g is increasing on [0, √5/3] and decreasing on (√5/3, 1); therefore the Lyapunov function V_3^0 is not radially increasing in the direction (3√5, √5). In conclusion, V_3^0 is not radially increasing on G_3.

Moreover, for c = 0.32 the set N_3^c exists, but the set {x = (x_1, x_2) ∈ G_3 : V_3^0(x_1, x_2) ≤ c} is not connected. The reason is that the point (x̄_1, x̄_2) = (123/8, 41/24) ∈ ∂G_3, with V_3^0(x̄_1, x̄_2) = 0, has a nonempty neighborhood U such that V_3^0(x_1, x_2) ≤ c for any (x_1, x_2) ∈ G_3 ∩ U, and (G_3 ∩ U) ∩ N_3^c = ∅.
Theorem 1.14. For any p ≥ 2 there exists ρ_p > 0 such that V_p^0 is radially increasing on B(ρ_p).

Proof. It can easily be verified, using relation (1.61), that V_2^0 is radially increasing on R^n. This provides that for any x ∈ R^n \ {0} the function g_2^x : R_+ → R_+ defined by g_2^x(λ) = V_2^0(λx) is strictly increasing on R_+, therefore (d/dλ) g_2^x(λ) > 0 on R*_+, i.e.

⟨∇V_2^0(λx), x⟩ > 0   for any λ > 0 and x ∈ R^n \ {0}   (1.76)

Let p > 2, x ∈ R^n \ {0} and g_p^x : R_+ → R_+ defined by g_p^x(λ) = V_p^0(λx). One has:

(d/dλ) g_p^x(λ) = ⟨∇V_p^0(λx), x⟩ = ⟨∇V_2^0(λx), x⟩ + ⟨∇(V_p^0 - V_2^0)(λx), x⟩
= ⟨∇V_2^0(λx), x⟩ (1 + ⟨∇(V_p^0 - V_2^0)(λx), x⟩ / ⟨∇V_2^0(λx), x⟩)   (1.77)

As lim_{x→0} ⟨∇(V_p^0 - V_2^0)(λx), x⟩ / ⟨∇V_2^0(λx), x⟩ = 0, there exists ρ_p > 0 such that |⟨∇(V_p^0 - V_2^0)(λx), x⟩ / ⟨∇V_2^0(λx), x⟩| ≤ 1/2 for any x ∈ B(ρ_p) \ {0}. Relation (1.77) provides that for any x ∈ B(ρ_p) \ {0} we have:

(d/dλ) g_p^x(λ) ≥ (1/2) ⟨∇V_2^0(λx), x⟩ > 0   for any λ > 0   (1.78)

Therefore, for any x ∈ B(ρ_p) \ {0}, the function g_p^x is strictly increasing on R_+, i.e. V_p^0 is radially increasing on B(ρ_p).
Theorem 1.15. Let p ≥ 2 and c > 0 be such that the set N_p^c exists. If for any c' ≤ c the sets N_p^{c'} have the star property, i.e. for any x ∈ N_p^{c'} and for any λ ∈ [0, 1) one has λx ∈ Int(N_p^{c'}), then V_p^0 is radially increasing on N_p^c.

Proof. Let x^0 ∈ ∂N_p^c and 0 < λ_1 < λ_2 ≤ 1. We have to show that V_p^0(λ_1 x^0) < V_p^0(λ_2 x^0). Denote c_1 = V_p^0(λ_1 x^0) > 0, c_2 = V_p^0(λ_2 x^0) > 0 and suppose the contrary, i.e. c_1 ≥ c_2. Theorem 1.13 provides that N_p^{c_2} ⊆ N_p^{c_1}. Lemma 1.2 guarantees that λ_2 x^0 ∈ ∂N_p^{c_2}. As N_p^{c_2} has the star property, for λ = λ_1/λ_2 ∈ (0, 1) we have that λ(λ_2 x^0) = λ_1 x^0 ∈ Int(N_p^{c_2}), so c_1 = V_p^0(λ_1 x^0) < c_2, which contradicts the supposition c_1 ≥ c_2. Therefore, V_p^0 is radially increasing on N_p^c.
Remark 1.14.

a. For any x ∈ D^0, there exists p_x ≥ 2 such that x ∈ G_p for any p ≥ p_x;
b. if n = 1, there exists p_0 ≥ 2 such that D^0 ⊂ G_p for any p ≥ p_0;
c. if there exists r > 0 such that B(r) ⊂ G_p for any p ≥ 2, then there exists p_0 ≥ 2 such that D^0 ⊂ G_{p_0}.
Example 1.16. For system (1.51), for any p ≥ 2 and c > 0 the set D^0 \ N_p^c is non-empty.
Example 1.17. We consider the following Van der Pol system of differential equations [KBB05a]:

ẋ_1 = -x_2
ẋ_2 = x_1 - x_2 + x_1² x_2   (1.79)

The (0, 0) steady state of (1.79) is asymptotically stable. The boundary of the domain of attraction of (0, 0) is an unstable periodic solution of (1.79).
Figure 1.9.1: The sets N_{20}^{c_{20}}, G_{20} and D_a(0, 0) for system (1.79)
Figure 1.9.2: The sets N_{50}^{c_{50}}, G_{50} and D_a(0, 0) for system (1.79)

For p = 20 we have computed that the largest value c > 0 for which the set N_p^c exists is c_{20} = 8.8466. For p = 50, the largest value c > 0 for which the set N_p^c exists is c_{50} = 13.887. In Figures 1.9.1-1.9.2, the thick black curve represents the boundary of D_a(0, 0), the thin black curve represents the boundary of G_p, and the gray surface represents the set N_p^{c_p}. The set N_{50}^{c_{50}} approximates the domain of attraction of (0, 0) very well.
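The largest admissible c was computed in Mathematica. Purely to illustrate the idea (and under the simplifying assumption that V_p^0 and the set G_p can be evaluated on a grid), a minimal sketch of such an estimate is given below; it is not the procedure actually used in the thesis.

```python
# Sketch: estimate the largest c such that the sublevel set {V <= c} containing 0
# stays inside the region G where the Lyapunov conditions hold, via a grid search.
import numpy as np

def largest_admissible_c(V, in_G, xs, ys):
    X, Y = np.meshgrid(xs, ys)
    vals = V(X, Y)
    outside = ~in_G(X, Y)
    # smallest value of V attained outside G bounds the admissible level c
    return vals[outside].min() if outside.any() else np.inf

# Toy illustration with V = (x^2 + y^2)/2 and G = {x < 1, y < 1} (system (1.49)):
V = lambda x, y: 0.5 * (x**2 + y**2)
in_G = lambda x, y: (x < 1) & (y < 1)
xs = ys = np.linspace(-3, 3, 601)
print(largest_admissible_c(V, in_G, xs, ys))   # about 0.5
```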
Example 1.18. Let us see the results obtained by this method in the case of the examples considered in the previous section. In the following figures, the thick black line represents the true boundary of the region of attraction (in the cases when it is known), the thin black line represents the estimate D_{ap}^0 of the region of attraction presented in the previous section, while the gray set represents N_p^{c_p}.
Figure 1.10: The set N_p^{c_p} for (1.49), with p = 100, c_p = 4.18
Figure 1.11: The set N_p^{c_p} for (1.50), with p = 100, c_p = 1.08

We observe that for the systems (1.49) and (1.50), the sets N_p^{c_p} shown in Figures 1.10-1.11 are almost identical to the first estimates obtained in the previous section using the gradual extension technique.
Figure 1.12: The set N_p^{c_p} for (1.51), with p = 50, c_p = 2.9
Figure 1.13: The set N_p^{c_p} for (1.52), with p = 30, c_p = 1.65
Figure 1.14: The set N_p^{c_p} for (1.53), with p = 50, c_p = 6.8
Figure 1.15: The set N_p^{c_p} for (1.54), with p = 50, c_p = 70
Figure 1.16: The set N_p^{c_p} for (1.55), with p = 100, c_p = 2.4
Figure 1.17: The set N_p^{c_p} for (1.56), with p = 4, c_p = 1.37
1.3 Methods for determining the region of attraction in the case of non-exponential asymptotic stability, using Lyapunov functions

Consider the following dynamical system:

ẋ = f(x)   (1.80)

where f : R^n → R^n is an R-analytic function on R^n such that f(0) = 0 and Df(0) has at least one eigenvalue on the imaginary axis, i.e. we cannot have exponential asymptotic stability for x = 0. In this section, we study the estimation of the region of attraction of the asymptotically stable steady state x = 0 under this hypothesis.
1.3.1 The P(q) property for flows
Definition 1.7. Let q ∈ N*. We say that the flow of a system ẋ = f(x) (where f is of class C^1 on a neighborhood of x = 0) has the P(q) property if there exist δ > 0 and c > 0 such that, for any ‖x^0‖ < δ, there exists T = T(x^0) ≥ 0 such that

‖x(t, x^0)‖ ≤ c / (t+1)^{1/(2q)}   for all t ≥ T   (1.81)

Actually, the idea of defining the P(q) property for flows, which will be useful in the construction of an optimal Lyapunov function for system (1.80), comes from the following result of Zubov:

Proposition 1.1. (see [Zub64, Zub78]) If the steady state x = 0 of system (1.80) is asymptotically stable, then there exist δ > 0 and a strictly decreasing continuous function L : R_+ → R*_+ with lim_{t→∞} L(t) = 0 such that the flow of (1.80) has the following property:

‖x(t, x^0)‖ ≤ L(t)   for all t ≥ 0 and ‖x^0‖ ≤ δ   (1.82)
Proof. As x = 0 is asymptotically stable, there exists δ > 0 such that for any ‖x^0‖ ≤ δ we have ‖x(t, x^0)‖ < 1 for any t ≥ 0 and ‖x(t, x^0)‖ → 0 when t → ∞.
Let L̄(t) = sup_{‖x^0‖≤δ} ‖x(t, x^0)‖. It is clear that 0 ≤ L̄(t) ≤ 1 for any t ≥ 0 and L̄(t) → 0 when t → ∞.
Let L(t) be a strictly decreasing, continuous function defined on R_+ with L(t) → 0 when t → ∞ and such that L̄(t) ≤ L(t) for any t ≥ 0. The function L verifies all the conditions required by the proposition.
Next, we prove two lemmas that will be useful for the results to come:

Lemma 1.3. Let U, V ⊂ R^n be two neighborhoods of the origin and f : U → R^n a function of class C^1 with f(0) = 0. Consider a C^1-diffeomorphism Φ : V → U, Φ(V) = U, Φ(y) = O(‖y‖), such that the change of coordinates x = Φ(y) transforms the system ẋ = f(x) into a system ẏ = g(y), with g : V → R^n of class C^1. Suppose that the flow of ẏ = g(y) has the P(q) property for some q ∈ N*. Then the flow of ẋ = f(x) has the P(q) property as well.
Proof. As Φ(y) = O(‖y‖), there exist M > 0 and δ > 0 such that B(δ) ⊂ V and ‖Φ(y)‖ ≤ M‖y‖ for any ‖y‖ < δ.
As the flow of ẏ = g(y) has the P(q) property, there exist δ' > 0 and c > 0 such that for any ‖y^0‖ < δ' there exists T = T(y^0) > 0 such that

‖y(t, y^0)‖ ≤ c / (t+1)^{1/(2q)}   for all t ≥ T

As Φ^{-1} is continuous, there exists δ'' > 0 such that for any ‖x‖ < δ'' one has ‖Φ^{-1}(x)‖ < δ'.
Let ‖x^0‖ < δ''. Then y^0 = Φ^{-1}(x^0) lies in the ball B(δ'). Hence, there exists T = T(x^0) > 0 such that

‖y(t, y^0)‖ ≤ c / (t+1)^{1/(2q)}   for all t ≥ T

hence ‖y(t, y^0)‖ → 0 as t → ∞. This implies that there exists T' ≥ T such that ‖y(t, y^0)‖ < δ for any t ≥ T'. Therefore:

‖x(t, x^0)‖ = ‖Φ(y(t, y^0))‖ ≤ M‖y(t, y^0)‖ ≤ Mc / (t+1)^{1/(2q)}   for all t ≥ T'

meaning that the flow of the system ẋ = f(x) has the P(q) property.
Lemma 1.4. Consider the following autonomous differential equation:

ẋ = f(x)   (1.83)

where f : [-δ, δ] → R is a function of class C^{p+1}, p ∈ N*, with f^{(k)}(0) = 0 for k = 0, ..., p-1 and f^{(p)}(0) ≠ 0. The following hold:

i. There exists a continuous function g : [-δ, δ] → R with g(0) = 1 such that

f(x) = a_p x^p g(x)   for any x ∈ [-δ, δ]   (1.84)

where a_p = f^{(p)}(0)/p! ≠ 0.

ii. The null solution of (1.83) is asymptotically stable if and only if p is odd and f^{(p)}(0) < 0.

iii. If the null solution of (1.83) is asymptotically stable, then the flow of the system (1.83) has the P(q) property, where p = 2q + 1.
Proof. i. Let g : [-δ, δ] → R be given by

g(x) = f(x)/(a_p x^p) if x ≠ 0,   g(x) = 1 if x = 0

Taylor's formula of order p for f at x = 0 shows that g is continuous.

ii. Necessity. Suppose that the steady state x = 0 of (1.83) is asymptotically stable. There exists δ' ∈ (0, δ) such that for any |x^0| < δ' we have x(t; x^0) → 0 when t → ∞.
Let us prove that for any |x| < δ', x ≠ 0, we have x f(x) < 0.
Suppose the contrary, i.e. that there exists x^0 ∈ (-δ', δ') \ {0} such that x^0 f(x^0) ≥ 0. The case x^0 f(x^0) = 0 is not possible, as there are no steady states contained in D_a(0) \ {0}. Therefore x^0 f(x^0) > 0. Let φ(t) = x(t; x^0)². We have that φ'(t) = 2x(t; x^0) ẋ(t; x^0) = 2x(t; x^0) f(x(t; x^0)). As x(t, x^0) ∈ D_a(0) for any t ≥ 0, there is no T > 0 such that x(T; x^0) f(x(T; x^0)) = 0, therefore φ'(t) > 0 for any t ≥ 0. We obtain that φ(t) = x(t; x^0)² is strictly increasing on [0, ∞), which contradicts the fact that x(t; x^0) → 0 when t → ∞.
Therefore, x f(x) < 0 for any |x| < δ', x ≠ 0. It results that a_p x^{p+1} g(x) < 0 for any |x| < δ', x ≠ 0. As g(0) = 1, there exists δ'' ≤ δ' such that g(x) > 0 for any |x| < δ''. We obtain that a_p x^{p+1} < 0 for any |x| < δ'', x ≠ 0. This is possible only if p is odd (p = 2q + 1) and a_p < 0.

Sufficiency. Suppose that p is odd and a_p < 0. As g is continuous and g(0) = 1, there exists δ' ∈ (0, δ) such that g(x) > 1/2 for any |x| < δ'.
For |x^0| < δ' we show that |x(t, x^0)| < δ' for any t ≥ 0. Suppose the contrary, i.e. that there exists T > 0 such that |x(t, x^0)| < δ' for any t ∈ [0, T) and |x(T; x^0)| = δ'. As |x(t, x^0)| < δ' for any t ∈ [0, T), it results that (d/dt) x(t; x^0)² = 2a_p x(t; x^0)^{p+1} g(x(t; x^0)) < 0 for any t ∈ [0, T), therefore the function x(t; x^0)² is strictly decreasing on [0, T). We obtain that (δ')² = x(T, x^0)² < x(0, x^0)² = (x^0)², thus δ' < |x^0|, a contradiction.
Therefore, for any |x^0| < δ', we have |x(t, x^0)| < δ' for any t ≥ 0 and the function x(t, x^0)² is strictly decreasing on [0, ∞).
Consider M = lim_{t→∞} x(t, x^0)² and let us show that M = 0. Suppose the contrary, i.e. M > 0. Then x(t, x^0)² > M for any t ≥ 0, and

(d/dt) x(t, x^0)² = 2a_p x(t, x^0)^{p+1} g(x(t, x^0)) ≤ a_p M^{(p+1)/2}

It results that x(t, x^0)² → -∞ as t → ∞, which is absurd. Therefore x(t, x^0) → 0 for any |x^0| < δ', and x = 0 is asymptotically stable.

iii. Suppose that x = 0 is asymptotically stable for (1.83). Then p is odd (p = 2q + 1) and a_p < 0.
We have shown (see ii. Sufficiency) that there exists δ' ∈ (0, δ) such that g(x) > 1/2 for any |x| < δ', and |x(t, x^0)| < δ' for any |x^0| < δ' and t ≥ 0.
Let x^0 ∈ (-δ', δ') \ {0}. We know that x(t) = x(t, x^0) verifies

ẋ = a_p x^{2q+1} g(x)   (1.85)

and therefore

(1/a_p) · ẋ(s)/x(s)^{2q+1} = g(x(s)) > 1/2   for all s ≥ 0   (1.86)

Integrating this inequality on the interval [0, t], we obtain

(1/a_p) ∫_0^t ẋ(s)/x(s)^{2q+1} ds ≥ t/2   (1.87)

and finally

x(t, x^0) ≤ [x_0^{-2q} - a_p q t]^{-1/(2q)} ≤ c / (t+1)^{1/(2q)}   for all t ≥ 0   (1.88)

where c = max(δ', (-a_p q)^{-1/(2q)}).
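As a quick numerical illustration of this bound (added here, not part of the thesis): for ẋ = -x³ we have q = 1, a_p = -1 and g ≡ 1, the solution is known exactly, x(t) = x_0/√(1 + 2x_0² t), and (1.88) can be checked directly.

```python
# Sketch: check |x(t)| <= [x0^{-2q} - a_p*q*t]^{-1/(2q)} for xdot = -x^3 (q = 1, a_p = -1).
import numpy as np

x0, q, ap = 0.8, 1, -1.0
t = np.linspace(0.0, 50.0, 2001)
x_exact = x0 / np.sqrt(1.0 + 2.0 * x0**2 * t)            # exact solution of xdot = -x^3
bound = (x0 ** (-2 * q) - ap * q * t) ** (-1.0 / (2 * q))  # right-hand side of (1.88)
print(bool(np.all(np.abs(x_exact) <= bound + 1e-12)))      # True
```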
1.3.2 The region of attraction in the case of flows with the P(q) property
Theorem 1.16. If there exists q ∈ N* such that the flow defined by the system (1.80) has the P(q) property, then the region of attraction D_a(0) coincides with the natural domain of analyticity of the R-analytic function V defined by

⟨∇V(x), f(x)⟩ = -‖x‖^{2q+2},   V(0) = 0   (1.89)

The function V is strictly positive on D_a(0) \ {0} and V(x) → ∞ for x → y, y ∈ ∂D_a(0), or for ‖x‖ → ∞.
Proof. First, we prove that there exists a unique analytic function V which satisfies (1.89). Consider two analytic functions V_1 and V_2 satisfying (1.89) and let V = V_1 - V_2. The function V is analytic on D_a(0) and satisfies ⟨∇V(x), f(x)⟩ = 0 on D_a(0). Therefore, V is constant along the solutions of the system. Let x^0 ∈ D_a(0). We have that V(x(t, x^0)) = V(x^0), and for t → ∞ we obtain V(x^0) = V(0) = 0 for any x^0 ∈ D_a(0). Thus V = 0 on D_a(0) and V_1 = V_2.

As the flow defined by (1.80) has the P(q) property, there exist δ > 0 and c > 0 such that

‖x(t, x̄)‖ ≤ c / (t+1)^{1/(2q)}   for all ‖x̄‖ < δ and t ≥ T = T(x̄)   (1.90)

Let x^0 ∈ D_a(0) and define V at the point x^0 by the formula

V(x^0) = ∫_0^{+∞} ‖x(t, x^0)‖^{2q+2} dt   (1.91)

Let us prove that this definition is correct, i.e. that the integral ∫_0^{+∞} ‖x(t, x^0)‖^{2q+2} dt is convergent.
As x^0 ∈ D_a(0), there exists T' = T'(x^0) such that ‖x(T', x^0)‖ < δ. From the P(q) property we obtain that there exists T = T(x^0) > T' such that

‖x(t, x^0)‖ ≤ c / (t+1)^{1/(2q)}   for all t ≥ T   (1.92)

As the integral ∫_T^{+∞} (c/(t+1))^{1 + 1/q} dt is convergent, it results that the integral ∫_0^{+∞} ‖x(t, x^0)‖^{2q+2} dt is also convergent for any x^0 ∈ D_a(0), and consequently the function V is correctly defined.

Now, we prove that V satisfies (1.89). Let x^0 ∈ D_a(0) and consider x(t, x^0) the solution of (1.80) with the initial condition x(0) = x^0. For any τ ≥ 0 we have that x(τ, x^0) ∈ D_a(0) and

V(x(τ, x^0)) = ∫_0^{+∞} ‖x(t, x(τ, x^0))‖^{2q+2} dt = ∫_0^{+∞} ‖x(t + τ, x^0)‖^{2q+2} dt = ∫_τ^{+∞} ‖x(s, x^0)‖^{2q+2} ds

Therefore, differentiating this equality with respect to τ, we have

(d/dτ) V(x(τ, x^0)) = (d/dτ) ∫_τ^{+∞} ‖x(s, x^0)‖^{2q+2} ds = -‖x(τ, x^0)‖^{2q+2}

On the other hand, we have:

(d/dτ) V(x(τ, x^0)) = ⟨∇V(x(τ, x^0)), ẋ(τ, x^0)⟩ = ⟨∇V(x(τ, x^0)), f(x(τ, x^0))⟩

Hence, from the last two relations, we get that:

⟨∇V(x(τ, x^0)), f(x(τ, x^0))⟩ = -‖x(τ, x^0)‖^{2q+2}   for all τ ≥ 0

For τ = 0 we obtain:

⟨∇V(x^0), f(x^0)⟩ = -‖x^0‖^{2q+2}

and therefore V satisfies (1.89).

By Lyapunov's theorem concerning the analytic case, it results that V is analytic. It is clear that V(x^0) ≠ 0 for any x^0 ∈ D_a(0) \ {0}: supposing the contrary, it would result that x(t, x^0) = 0 for any t ≥ 0, and therefore x^0 = 0, which is absurd.

Let y ∈ ∂D_a(0). There exists r > 0 such that ‖x(t, y)‖ > r for any t ≥ 0. Let M > 0 and T_M = 2^{2q+2} M / r^{2q+2}. Due to the continuous dependence on the initial conditions, there exists δ_M > 0 such that ‖x(t, x̄)‖ > r/2 for any ‖x̄ - y‖ < δ_M and t ∈ [0, T_M]. For any x̄ ∈ D_a(0) with ‖x̄ - y‖ < δ_M we have

V(x̄) = ∫_0^{+∞} ‖x(t, x̄)‖^{2q+2} dt ≥ ∫_0^{T_M} ‖x(t, x̄)‖^{2q+2} dt ≥ M   (1.93)

Therefore, for any M > 0 there exists δ_M > 0 such that for any x̄ ∈ D_a(0) with ‖x̄ - y‖ < δ_M we have V(x̄) ≥ M. In conclusion, lim_{x→y} V(x) = ∞ for any y ∈ ∂D_a(0).
In a similar way, it can be proved that lim_{‖x‖→∞} V(x) = ∞.
Remark 1.15. Suppose that the dimension of the system (1.80) is n = 1, i.e. we consider the following autonomous differential equation:

ẋ = f(x)   (1.94)

where f : R → R is an R-analytic function with f(0) = 0 and f'(0) = 0.
Lemma 1.4 provides that the null solution of (1.94) is asymptotically stable if and only if the expansion of f at 0 is:

f(x) = Σ_{k=2q+1}^∞ a_k x^k   (1.95)

with q ∈ N* and a_{2q+1} = f^{(2q+1)}(0)/(2q+1)! < 0.
Lemma 1.4 also provides that if the null solution of (1.94) is asymptotically stable, then the flow of the system (1.94) has the P(q) property, therefore the result of Theorem 1.16 holds.
Consider the expansion at 0 of the Lyapunov function V provided by Theorem 1.16 (which verifies V'(x) f(x) = -x^{2q+2} and V(0) = 0):

V(x) = Σ_{k=2}^∞ A_k x^k   (1.96)
59
The coefficients of the expansion (1.96) of V in 0 are given by the following relations:
A2 = Ak = -
1
2a2q+1
(1.97)
k-1
1
ka2q+1
â iAi a2q+1+k-i
for k ³ 3
(1.98)
i=2
Example 1.19. Consider the following differential equation
ẋ = -x3 - x6
(1.99)
Using the above formulae (with q = 1) for the coefficients of the power series expansion of the
optimal Lyapunov function which corresponds to the asymptotically stable steady state x = 0,
k
for any k ³ 0. Hence, the optimal
one finds that A3k = A3k+1 = 0 for any k ³ 1 and A3k+2 = (-1)
3k+2
Lyapunov function is:
Π
1
2x - 1 1
1
V (x) = 0 + 0 arctan 0 - ln(1 + x) + ln(1 - x + x2 )
3
6
6 3
3
3
(1.100)
and the region of attraction is Da (0) = (-1, ¥).
1.3.3 Center manifold theory
System (1.80) can be put into the following form:
; ẏ = Asy + F s (x, y)
ẋ = Ac x + F c (x, y)
(1.101)
where
1. the matrix Ac Î Mm (R) has all the eigenvalues on imaginary axis, m ³ 1
2. the matrix As Î Mn-m (R) has all the eigenvalues in the left half-plane.
3. the function F c : Rn ® Rm is R-analytic and F c (x, y) = O(ü(x, y)ü2 )
4. the function F s : Rn ® Rn-m is R-analytic and F s (x, y) = O(ü(x, y)ü2 )
We denote by F the R-analytic function defined by
(x, y) # F(x, y) = C
and by A = DF(0, 0) = C
Ac x + F c (x, y)
G
As y + F s (x, y)
F : Rn ® Rn
(1.102)
Ac 0
G. For the system (1.101), (x, y) = (0, 0) is a steady state.
0 As
Theorem 1.17. (see [Car81, Kuz98, Wig03])
(a) Center manifold theorem for analytic flows. Let be p Î N. There exists a locally defined
m-dimensional C p manifold W c (0, 0), called local center manifold of (x, y) = (0, 0), which is
tangent to the center subspace T c at (x, y) = (0, 0) and invariant to the flow of (1.101).
60
It can be represented locally as W c (0, 0) = {(x, h(x)) : üxü < ∆}, where the function h : B(∆) Ì
Rm ® Rn-m is of class C p, h(x) = O(üxü2 ).
(b) Stability. The equilibrium state (x, y) = (0, 0) of (1.101) is stable / asymptotically stable /
unstable if and only if the equilibrium state x = 0 is stable / asymptotically stable / unstable for
the restriction of (1.101) to the center manifold W c (0, 0):
ẋ = Ac x + F c (x, h(x))
(1.103)
If x = 0 is stable for (1.103) then there exist some constants r > 0 and Γ > 0 such that for any
solution (x(t), y(t)) of (1.101) with ü(x(0), y(0))ü < r there exists a solution xc (t) of (1.103) and a
constant M > 0 such that
üx(t) - xc (t)ü £ Me-Γt
üy(t) - h(xc (t))ü £ Me-Γt
"t ³ 0
"t ³ 0
(1.104)
(1.105)
To be able to apply Theorem 1.16 in order to compute the region of attraction Da (0, 0) of the
asymptotically stable null solution of (1.101), we need to know if the flow of the system (1.101)
has the P(q) property. The next corollary shows that in order to check if the system (1.101) has
the P(q) property, it is enough to verify that the flow of the restriction of (1.101) to one of its
center manifolds has the P(q) property.
Corollary 1.7. Suppose that there exists q Î N* such that the flow of the system on the center
manifold (1.103) has the P(q) property and that the null solution of (1.103) is stable. Then the
flow of the system (1.101) has the P(q) property as well.
Proof. Let be h : B(∆) Ì Rm ® Rn-m the function provided by Theorem 1.17(a). As
h(x) = O(üxü2 ), there exist M ¢ > 0 and ∆¢ Î (0, ∆) such that üh(x)ü < M ¢ üxü2 for üxü < ∆¢ .
The P(q) property of the flow of (1.103), provides that there exist ∆¢¢ Î (0, ∆¢) and c > 0 such
that, for any üx̄ü < ∆¢¢ , there exists T = T (x̄) ³ 0 such that
c
üxc (t, x̄)ü £ 0
2q
t+1
"t ³ T
where xc (t, x̄) denotes the solution of (1.103) with xc (0) = x̄. It is obvious that the above
inequality implies that x = 0 is asymptotically stable for (1.103), hence, the null solution of
(1.101) is also asymptotically stable.
Let be r > 0 and Γ > 0 given by Theorem 1.17(b). Let be z0 = (x0 , y0 ) belonging to the region
of attraction of the null solution of (1.101) such that üz0 ü < r. Denote by z(t, z0) = (x(t), y(t)) the
solution of (1.101) with z(0) = z0 . Therefore, there exists a solution xc (t) of (1.103) and M > 0
such that
üx(t) - xc (t)ü £ Me-Γt
üy(t) - h(xc (t))ü £ Me-Γt
"t ³ 0
"t ³ 0
As z0 is in the region of attraction of the null solution of (1.101), it is obvious that x(t) ® 0 as
t ® ¥. Hence, the first of the above inequalities provides that xc (t) ® 0 as t ® ¥. Therefore,
61
there exists T > 0 such that üxc (t)ü < ∆¢¢ and üh(xc (t))ü £ M ¢ üxc (t)ü2 for any t ³ T . By the P(q)
property of the flow of (1.103) we obtain that there exists T ¢ = T ¢ ³ T such that
üx(t)ü £
c
+ Me-Γt
0
2q
t+1
c2
üy(t)ü £ M ¢ 0q
+ Me-Γt
t +1
"t ³ T ¢
"t ³ T ¢
It is obvious that there exists T ¢¢ ³ T ¢ such that Me-Γt < (1 + t)-1 for t ³ T ¢¢ . Therefore, we
have that
üx(t)ü £ c(t + 1)
1
- 2q
+ (1 + t)-1 £ (1 + c)(t + 1)
üy(t)ü £ M ¢ c2 (t + 1)
We finally obtain that
- q1
1
- 2q
+ (1 + t)-1 £ (M ¢ c2 + 1)(t + 1)
c¢
üz(t, z0)ü £ 0
2q
t+1
"t ³ T ¢¢
1
- 2q
"t ³ T ¢¢
"t ³ T ¢¢
1
where c = (c + 1)2 + (M ¢ c2 + 1)2 . Choosing ∆¢¢¢ > 0 such that B(∆¢¢¢ ) Ì Da (0, 0) È B(r) we
obtain that the flow of (1.101) has the P(q) property (with the constants ∆¢¢¢ > 0 and c¢ > 0).
¢
1.3.4 Characterization of the region of attraction in the case of a simple
zero eigenvalue
Suppose that the matrix A of (1.101) has a simple eigenvalue Λ1 = 0 and no other eigenvalues
on the imaginary axis (i.e. Ac = 0 and m = 1). This is the case of fold or cusp critical points
[GH83, Kuz98].
We will see under which conditions (0, 0) is asymptotically stable, and whether it is possible to
apply the previous theoretical results in order to evaluate its domain of attraction.
The equation restricted to the one-dimensional center manifold provided by Theorem 1.17 is:
ẋ = F c (x, h(x)) = a px p + O(|x| p+1 )
(1.106)
where the function h can be considered of sufficient finite smoothness in some neighborhood
(-∆, ∆) of the origin. Actually, according to [Aul92], a one-dimensional center manifold of an
analytic system of differential equations is C¥ smooth. In equation (1.106), we have considered
that ak = k!1 dxd k (F c (x, h(x)))|x=0 = 0 for k = 0, p - 1 and a p = p!1 dxd p (F c (x, h(x)))|x=0 ¹ 0. The case
F c (x, h(x)) = 0, for all |x| < ∆ can be excluded, as this would mean that x = 0 is only stable and
not asymptotically stable for (1.106).
Lemma 1.4 provides the following result:
Proposition 1.2. The null solution of (1.106) is asymptotically stable if and only if the first
non-zero coefficient of the Taylor expansion of F c (x, h(x)) at x = 0 is negative and of odd order.
More, if the null solution of (1.106) is asymptotically stable then the flow of (1.106) has the P(q)
property.
Therefore, based on Theorem 1.16 and Corollary 1.7, the following result holds:
62
Corollary 1.8. If the matrix A of (1.101) has a simple zero eigenvalue and (0, 0) is asymptotically stable, then the domain of attraction Da (0, 0) coincides with the natural domain of
analyticity of the unique R-analytical function V which verifies
XÑV (x, y), F(x, y)\ = -ü(x, y)ü2q+2
V (0, 0) = 0
(1.107)
where 2q + 1 is the order of the first non-zero coefficient of the Taylor expansion of the right
hand side of the restriction of (1.101) to the one-dimensional center manifold W c (0, 0).
The coefficients for the optimal Lyapunov function in the two-dimensional case.
Consider the following system of differential equations:
1
; ẏ = -Λy
+ f2 (x, y)
ẋ = f (x, y)
(1.108)
where f1 and f2 are R-analytic functions of O(ü(x, y)ü2 ) and Λ > 0. Let be the power series
development of fi , i = 1, 2
¥
fi (x, y) = â aik1 k2 xk1 yk2
(1.109)
|k|=2
The restriction of (1.108) to the one-dimensional center manifold W c (0, 0) (here, at least of class
C4 ) has the form
a1 Λ + a11,1 a22,0 3
x + O(x4 )
(1.110)
ẋ = a12,0 x2 + 3,0
Λ
Lemma 1.4 and Theorem 1.17 provide that if the steady state (0, 0) of (1.108) is asymptotically
stable then a12,0 = 0. In the followings, we will assume that indeed, a12,0 = 0.
Proposition 1.3. If a13,0 Λ + a11,1 a22,0 < 0 then the steady state (0, 0) of (1.108) is asymptotically
stable.
If k2 ¹ 0, let be
R(k1 , k2 ) =
1
k2 Λ
Ú [(k1 - j1 + 1)a1j1 j2 Ak1 - j1 +1,k2 - j2 + (k2 - j2 + 1)a2j1 j2 Ak1 - j1 ,k2 - j2 +1 ]
|k|-1
| j|=2, j£k
For m ³ 3, let be E(m) = Ú [(m - k + 3)a1k,0 Am-k+3,0 + a2k,0 Am-k+2,1 ]
m+1
k=2
The coefficients Ak1 k2 of the power series development of the optimal Lyapunov function defined
by (1.107) (with q = 1) are found in the following order:
A0,2 = A1,1 = 0 A2,0 = - 2(a1
Λ
1
2
3,0 Λ+a1,1 a2,0 )
Ak1 ,k2 = R(k1 , k2 )
A0,4 =
1
4Λ
for |k| = 3, k2 ¹ 0
+ R(0, 4) A1,3 = R(1, 3) A2,2 =
1
Λ
+ R(2, 2) A3,1 = R(3, 1)
A3,0 is found from the equation E(3) = 0.
For m ³ 4: ;
Ak1 ,k2 = R(k1 , k2 ) for |k| = m + 1, k2 ¹ 0
Am,0 is found from the equation E(m) = 0
63
Example 1.20. Let be the following system:
x˙1 = x31 (x21 + 4x22 - 1)
; x˙ = x (x2 + 4x2 - 1)
2
2 1
2
(1.111)
The (0, 0) steady state of this system is asymptotically stable, provided by Proposition 1.3. Its
domain of attraction is
Da (0, 0) = {x = (x1 , x2 ) Î R2 : x21 + 4x22 < 1}
(1.112)
We compute the coefficients of the optimal Lyapunov function using the formulae given above.
By the same methods of approximation of the region of attraction as in the case of exponential
asymptotic stability, we find the approximations from Figures 1.18.1-1.18.2.
0.4
0.4
0.2
0.2
0
0
-0.2
-0.2
-0.4
-0.4
-1
-0.5
0
0.5
1
-1
-0.5
0
0.5
1
c50
Figure 1.18.1: D0ap with p = 200 and Da (0, 0) Figure 1.18.2: N50 with c50 = 0.08 and Da (0, 0)
for system (1.111)
for system (1.111)
1.3.5 Characterization of the region of attraction in the case of a pair of
pure imaginary eigenvalues
Suppose that the matrix A of (1.101) has a pair of pure imaginary eigenvalues ±iΩ, with Ω > 0
and no other eigenvalues on the imaginary axis (i.e. m = 2). This case corresponds to the Hopf
and Bautin (or degenerate Hopf) critical points [GH83, Kuz98].
In this case, one can assume that (eventually after a suitable transformation) the matrix Ac has
0 -Ω
the form Ac = K
O = W0 . Thus, the restriction of (1.101) to the two-dimensional center
Ω 0
manifold W c (0, 0) has the form
ẋ = W0 x + F c (x, h(x))
(1.113)
where the function h can be considered of sufficient finite smoothness on some neighborhood
of the origin, according to Theorem 1.17.
The normal form theory [GH83, Ver90, ETB+ 87] provides that, in some neighborhood of the
origin, there exists an polynomial change of variables x = u + Ψ(u), Ψ(u) = O(üuü2 ), which
transforms (1.113) into
p-1
u̇ = W0u + â üuü2k Wk u + O(üuü2p )
k=1
(1.114)
64
where Wk = K
ak -bk
O Î M2 (R). The coefficients ak (called Lyapunov coefficients) and bk
bk ak
can be expressed by means of the coefficients of the expansion in (0, 0) of the right hand side of
(1.113):
2p-1
k1 k2
A1
c
F (x, h(x)) = â K 2k1 k2 O x1 x2 + O(üxü2p )
(1.115)
Ak1 k2
|k|=2
For example, one obtains the first Lyapunov coefficient:
A11,1 (A12,0 + A10,2 ) - A21,1 (A22,0 + A20,2 ) + 2(A10,2A20,2 - A12,0 A22,0 )
+
8Ω
3A13,0 + A11,2 + A22,1 + 3A20,3
+
8
a1 =
(1.116)
From equation (1.114) it results that, in some neighborhood of the origin, one has:
p-1
d
üuü = â ak üuü2k+1 + O(üuü2p)
dt
k=1
(1.117)
Proposition 1.4. The null solution of (1.113) is asymptotically stable if and only if the first
non-zero Lyapunov coefficient from (1.114) is negative. If the null solution of (1.113) is
asymptotically stable and aq is the first non-zero Lyapunov coefficient, then the flow of system
(1.113) has the P(q) property.
Proof. The null solution of (1.113) is asymptotically stable if and only if the null solution of
(1.114) is asymptotically stable, which, based on (1.117) is equivalent to the first non-zero
Lyapunov coefficient being negative.
Suppose that the null solution of (1.113) is asymptotically stable and that aq < 0 is the first
non-zero Lyapunov coefficient from (1.114). Based on Lemma 1.4 and relation (1.117), the
flow of (1.114) has the P(q) property.
The system (1.114) has been obtained from (1.113) by the polynomial change of coordinates
x = u + Ψ(u), Ψ(u) = O(üuü2 ). Lemma 1.3 provides that the flow of system (1.113) has the P(q)
property.
Therefore, based on Theorem 1.16 and Corollary 1.7, the following result holds:
Corollary 1.9. If the matrix A of (1.101) has a pair of pure imaginary eigenvalues and (0, 0) is
asymptotically stable, then the domain of attraction Da (0, 0) coincides with the natural domain
of analyticity of the unique R-analytical function V which verifies
XÑV (x, y), F(x, y)\ = -ü(x, y)ü2q+2
V (0, 0) = 0
(1.118)
where q is the order of the first non-zero Lyapunov coefficient corresponding to the restriction
of (1.101) to the two-dimensional center manifold W c (0, 0).
The coefficients of the optimal Lyapunov function in the two-dimensional case.
Consider the following system of two differential equations:
ẋ = W0 x + g(x)
(1.119)
65
where x = (x1 , x2 )T and g : R2 ® R2 is an R-analytic function of O(üxü2 ). Suppose that the first
Lyapunov coefficient corresponding to (1.119) is a1 < 0. Therefore, the null solution of (1.119)
is asymptotically stable and, according to Corollary 1.9, its region of attraction is the natural
domain of analyticity of the optimal Lyapunov function V defined by
XÑV (x), W0x + g(x)\ = -üxü4
V (0, 0) = 0
Making the change of variables x = Sz in (1.120), with S = K
(1.120)
-i i
O, and denoting
1 1
W (z) = V (Sz) we obtain
XÑW (z), S-1W0 Sz + S-1 g(Sz)\ = -üSzü4
W (0, 0) = 0
(1.121)
Let be D = S-1 W0 S = diag(-iΩ, iΩ) and f : C2 ® C2 defined by f (z) = S-1 g(Sz)
( f (z) = O(üzü2 )). Equation (1.121) becomes
XÑW (z), Dz + f (z)\ = -üSzü4
W (0, 0) = 0
(1.122)
Consider the expansion of f in (0, 0)T :
f (z) = â K
¥
|k|=2
k1 k2
Ck11 k2
z1 z2
O
2
Ck1 k2
(1.123)
and the expansion of W in (0, 0)T :
¥
W (z) = â Bk1 k2 z1 z2
k1 k2
(1.124)
|k|=2
Knowing that a1 < 0, one can prove that
1
2
2
2
1
1
Α = Ω(C2,1
+ C1,2
) + i(C0,2
C1,1
- C2,0
C1,1
)¹0
Proposition 1.5. Let be
R(k1 , k2 ) =
i
(k2 -k1 )Ω
Ú [(k1 - j1 + 1)C1j1 j2 Bk1 - j1 +1,k2 - j2 + (k2 - j2 + 1)C2j1 j2 Bk1 - j1 ,k2 - j2 +1 ] if k1 ¹ k2
|k|-1
| j|=2, j£k
and
1
2
E(m) = Ú B j1 j2 ( j1Cmj1 +2,m- j2 +1 + j2Cm- j1 +1,m- j2 +2 ) if m ³ 2
2m+1
| j|=2
where we consider that Cm1 1 m2 = Cm2 1 m2 = 0 if m1 < 0 or m2 < 0.
The coefficients Bk1 k2 from (1.124) are found in the following order:
-16Ω
Α
B2,0 = B0,2 = 0
B1,1 =
Bk1 ,k2 = R(k1 , k2 )
for |k| = 3
ì
ï
Bk1 ,k2 = R(k1 , k2 ) for |k| = 2m, k1 ¹ k2
ï
ï
ï
Bk1 ,k2 = R(k1 , k2 ) for |k| = 2m + 1
For m ³ 2: í
ï
ï
ï
ï B is the solution of the equation E(m) = 0
î m,m
The coefficient of Bmm in the linear equation E(m) = 0 is kΑ
¹ 0, therefore, the solution of E(m)
Ω
exists.
66
Example 1.21. Consider the advertising diffusion model given in [Fei95]:
; x˙1 = x x2 - x1 2
2
1 2
2
x˙ = Α(1 - x x2 + Βx2 - Β)
with Α > 0, Β < 1
(1.125)
For any Α > 0 and Β < 1, the only steady state of this system is x = (1, 1). Making the
translation (y1 , y2 ) = (x1 , x2 ) - (1, 1) in system (1.125), one obtains
y˙1 = Α[-y1 + (Β - 2)y2 - 2y1 y2 - y22 - y1 y22 ]
; y˙ = y + y + 2y y + y2 + y y2
2
1
2
1 2
2
1 2
(1.126)
For any Β < 1,
0 at the value of the parameter Α = 1, a Hopf bifurcation occurs at (y1 , y2 ) = (0, 0).
Denote Ω = 1 - Β > 0.
Let be Α = 1. By the transformation z = SΩ y, SΩ = K
Ω -1
O, the system (1.126) becomes
0 1
; z˙1 = Ωz +2 2Ωz z - z2 + Ωz z2 - z3
2
1
1 2
2
1 2
2
z˙ = -Ωz
(1.127)
For any Ω > 0, one finds that the first Lyapunov coefficient is a1 = - 81 < 0, therefore, the steady
state z = (0, 0) of system (1.127) is asymptotically stable.
For Ω = 12 , we find the coefficients of the optimal Lyapunov function for the z = (0, 0) steady
state of (1.127) using the formulae given above. By the same methods of approximation of
the region of attraction as in the case of exponential asymptotic stability, we find the estimates
shown in Figures 1.19.1-1.19.2. In these figures, the thick black line represents the boundary of
the region of attraction.
0.6
0.6
0.4
0.4
0.2
0.2
0
0
-0.2
-0.2
-0.4
-0.4
-0.4
-0.2
0
0.2
0.4
0.6
-0.4
-0.2
c50
0
0.2
0.4
0.6
Figure 1.19.1: D0ap with p = 50 and Da (0, 0) for Figure 1.19.2: N50 with c50 = 0.2 and Da (0, 0)
for system (1.127) with Ω = 12
system (1.127) with Ω = 12
67
1.4 Implementation of Mathematica 5.0
This section includes a program written in Mathematica 5.0, which computes the Taylor
polynomial Vp0 , p = 50 of the optimal Lyapunov function, for the system (1.55). It is shown how
the first estimate D0ap of the region of attraction is obtained. The constant c p is also computed
cp
and the set Np is plotted.
The following entry gives the two dimensional system:
In[1]:= f[x1_, x2_] := {x2, -2 * x1 - x2 - x1ˆ3 + x1 * x2ˆ4 + x2ˆ5};
The maximal order of the polynomials in the system is precised:
In[2]:= ord = 5;
The following line builds the jacobian in (0, 0):
In[3]:= A = Transpose[{Derivative[1, 0][f][0, 0], Derivative[0, 1][f][0, 0]}];
The diagonalisation of the jacobian:
In[4]:= {S, J} = SetAccuracy[JordanDecomposition[A], 100];
T = Inverse[S];
Building the transformed system and the coefficients:
In[5]:= g[z1_, z2_] := T.f[(S.{z1, z2})[[1]], (S.{z1, z2})[[2]]];
P1 = Exponent[g[z1, z2][[1]], z1];
P2 = Exponent[g[z1, z2][[1]], z2];
Q1 = Exponent[g[z1, z2][[2]], z1];
Q2 = Exponent[g[z1, z2][[2]], z2];
b1[m_, n_] :=
Which[m £ P1 ß n £ P2, CoefficientList[g[z1, z2][[1]], {z1, z2}][[m + 1]]
[[n + 1]], True, 0];
b2[m_, n_] :=
Which[m £ Q1 ß n £ Q2, CoefficientList[g[z1, z2][[2]], {z1, z2}][[m + 1]]
[[n + 1]], True, 0];
Initializing the coefficients of the optimal Lyapunov function for the transformed system and
defining the function which gives the recurrence formula for the coefficients:
68
In[6]:= B1,0 = 0;
B0,1 = 0;
B1,1 = (-2/(J[[1]][[1]] + J[[2]][[2]]))*
(S[[1]][[1]] * S[[1]][[2]] + S[[2]][[1]] * S[[2]][[2]]);
B2,0 = (-1/(2 * J[[1]][[1]])) * (S[[1]][[1]]ˆ2 + S[[2]][[1]]ˆ2);
B0,2 = (-1/(2 * J[[2]][[2]])) * (S[[1]][[2]]ˆ2 + S[[2]][[2]]ˆ2);
R[j1_, j2_] := (-1/(j1 * J[[1]][[1]] + j2 * J[[2]][[2]]))*
Min[j1+j2-1,ord]
â
p=2
K
Min[p,j1]
â
((j1 - k + 1) * b1[k, p - k] * Bj1-k+1,j2-p+k +
k=Max[0,p-j2]
(j2 - p + k + 1) * b2[k, p - k] * Bj1-k,j2-p+k+1 )O;
The following steps provide the method for computing the optimal Lyapunov function:
In[7]:= ordin = 50;
Finding the coefficients of the transformed optimal Lyapunov function:
In[8]:= For[m = 3, m £ ordin, m + +, For[k = 0, k £ m, k + +, Bk,m-k = R[k, m - k]]];
m = .; k = .;
Applying the Cauchy Hadamard formula to approximate the region of convergence of the
optimal Lyapunov function:
p
In[9]:= g0[p_, z1_, z2_] := â(Abs[Bj,p-j * (z1)ˆ(j) * (z2)ˆ(p - j)]);
j=0
g0trans[p_, x1_, x2_] := g0[p, (T.{x1, x2})[[1]], (T.{x1, x2})[[2]]];
Defining the Taylor polynomials of the transformed optimal Lyapunov function and the optimal
Lyapunov function and their derivatives:
p
k
k=2
j=0
In[10]:= W[p_, z1_, z2_] := â K â(Bj,k-j * (z1)ˆ(j) * (z2)ˆ(k - j))O;
Wtrans[p_, x1_, x2_] := W[p, (T.{x1, x2})[[1]], (T.{x1, x2})[[2]]];
V[p_, x1_, x2_] :=
Expand[PolynomialReduce[Wtrans[p, x1, x2], ä, {x1, x2}][[1]][[1]]/(-ä)];
der[p_, x1_, x2_] := ¶x1 V[p, x1, x2] * f[x1, x2][[1]]+
¶x2 V[p, x1, x2] * f[x1, x2][[2]];
The plots of the approximate of the region of convergence of the optimal Lyapunov function
S(D0p), of the domain on which the derivative of Vp0 is negative and of the domain on which Vp0
is positive:
69
In[11]:= m1 = 1.2;
m2 = 1.2;
p0[p_] := ContourPlot[Evaluate[g0trans[p, x1, x2]], {x1, -m1, m1},
{x2, -m2, m2}, Contours ® {1}, PlotRange ® All, Axes ® False,
Frame ® True, ContourStyle- > {GrayLevel[0.4]}, PlotPoints ® 200,
ColorFunction ® (GrayLevel[1 - (1 - #) * 0.6]&)];
pNEG[p_] := ContourPlot[Evaluate[der[p, x1, x2]], {x1, -m1, m1},
{x2, -m2, m2}, Contours ® {0}, PlotRange ® All, Axes ® False,
Frame ® True, ContourShading ® False, PlotPoints ® 200,
ContourStyle ® Dashing[{0.01, 0.01}]];
pPOZ[p_] := ContourPlot[Evaluate[V[p, x1, x2]], {x1, -m1, m1},
{x2, -m2, m2}, Contours ® {0}, PlotRange ® All, Axes ® False,
Frame ® True, ContourShading ® False, PlotPoints ® 200,
ContourStyle ® {Dashing[{0.01, 0.01}], GrayLevel[0.6]}];
The following entry shows these three plots, their intersection being the first approximate D0ap:
In[12]:= p = 50;
Show[p0[p], pNEG[p], pPOZ[p]]
p = .;
1
0.5
0
-0.5
-1
-1
-0.5
0
0.5
1
The plot of the set Npc :
In[13]:= pCN[p_, y_] := ContourPlot[Evaluate[V[p, x1, x2]], {x1, -m1, m1},
{x2, -m2, m2}, Contours ® {y}, PlotRange ® All, Axes ® False,
Frame ® True, ContourStyle- > {GrayLevel[0.4]}, PlotPoints ® 200,
ColorFunction ® (GrayLevel[1 - (1 - #) * 0.6]&)];
Finding c p for p = 50:
70
In[14]:= p = 50;
c = 0.1;
dermax =
NMaximize[{der[p, x, y], V[p, x, y] == c}, {{x, -m1, m1}, {y, -m2, m2}},
Method ® "DifferentialEvolution"][[1]];
While[dermax £ 0,
c = c + 0.1;
dermax =
NMaximize[{der[p, x, y], V[p, x, y] == c}, {{x, -m1, m1}, {y, -m2, m2}},
Method ® "DifferentialEvolution"][[1]]];
c = c - 0.1;
dermax =
NMaximize[{der[p, x, y], V[p, x, y] == c}, {{x, -m1, m1}, {y, -m2, m2}},
Method ® "DifferentialEvolution"][[1]];
While[dermax £ 0,
c = c + 0.01;
dermax =
NMaximize[{der[p, x, y], V[p, x, y] == c}, {{x, -m1, m1}, {y, -m2, m2}},
Method ® "DifferentialEvolution"][[1]]];
cp = c - 0.01
Out[14]= 1.6
cp
Showing the plot of the set Np together with the plots of the sets where the derivative of the
Taylor polynomial Vp0 is negative and where Vp0 is positive:
In[15]:= Show[pCN[p, cp], pNEG[p], pPOZ[p]]
1
0.5
0
-0.5
-1
-1
-0.5
0
0.5
1
71
1.5 Control procedures using regions of attraction
In this section, it will be shown that for continuous dynamical systems with control, if two
steady states belong to an analytic path of asymptotically stable steady states, then there exists
a finite number of values of the control parameters such that, giving successively, at adequate
moments, these values for the control parameters, the steady states are gradually transferred one
in the other [KBGB05a, Bal85].
Let be the nonlinear continuous dynamical system with control defined by (1.1). Suppose that
the function f is R-analytic.
Definition 1.8. A change of control parameters from Α¢ to Α¢¢ in (1.1) is called maneuver and
is denoted by Α¢ ® Α¢¢ . The maneuver Α¢ ® Α¢¢ is successful on the path of steady states
j : W Ì Rm ® Rn of system (1.1) if Α¢ , Α¢¢ Î W and the solution of the initial value problem:
ẋ = f (x, Α¢¢ )
x(0) = j(Α¢)
(1.128)
tends to j(Α¢¢ ) when t ® ¥.
Theorem 1.18. Let be j : W Ì Rm ® Rn an R-analytic path of exponentially asymptotically
stable steady states of (1.1). There exist an open set G Ì Rn ´Rm and a non-negative R-analytic
function V defined on G satisfying the following conditions:
a. G É G = {(j(Α), Α)/ Α Î W}
b.
XÑxV (x, Α), f (x, Α)\ = -üx - j(Α)ü2
; V (j(Α), Α) = 0
(1.129)
c. For any Α Î W, Da (j(Α)) is the natural domain of analyticity of the function x # V (x, Α)
d. V (x, Α) ® +¥ for x ® y, y Î ¶Da (j(Α)) or for üxü ® ¥
Proof. Let be G = Ç (DA(j(Α)) ´ {Α}) and V : G ® R+ defined by
ΑÎW
V (x0 , Α) = à
¥
üxΑ (t, x0) - j(Α)ü2 dt
(1.130)
0
where xΑ (t, x0) is the solution of (1.1) which satisfies x(0) = x0 .
The set G and the function V (x, Α) satisfy the conditions a-d (see Theorem 1.7).
Corollary 1.10. If j : W Ì Rm ® Rn is an R-analytic path of asymptotically stable steady states
of (1.1) then for any Α Î W there is an open neighborhood UΑ of Α and an open neighborhood
Uj(Α) of j(Α) such that:
1. j(Α¢ ) Î Uj(Α) , for any Α¢ Î UΑ
2. Uj(Α) Ì Da (j(Α¢)) for any Α¢ Î UΑ
72
Proof. For Α Î W and x Î Da (j(Α)), the function V (x, Α) from Theorem 1.18 is considered.
The real and non-negative function V is defined on the open set G = Ç (Da (j(Α)) ´ {Α}), it is
continuous and equal to zero on the set G = {(j(Α), Α)/ Α Î W} Ì G.
ΑÎW
As V is continuous and it is equal to zero in (j(Α), Α) Î G, there is an open neighborhood G¢ of
(j(Α), Α) such that for any (x¢ , Α¢ ) Î G¢ , the inequality V (x¢ , Α¢ ) < 1 holds. Let be UΑ an open
neighborhood of Α and Uj(Α) of j(Α) such that Uj(Α) ´ UΑ Ì G¢ . As the function j is continuous,
it can be admitted that for any Α¢ Î UΑ , we have j(Α¢ ) Î Uj(Α) (contrarily, the neighborhood UΑ
can be replaced with a smaller neighborhood UΑ¢ Ì UΑ , for which we have j(Α¢ ) Î Uj(Α) , for any
Α¢ Î UΑ¢ ).
Thus, for any (x¢ , Α¢ ) Î Uj(Α) ´ UΑ , we have V (x¢ , Α¢ ) < 1. This means that for any x¢ Î Uj(Α)
and any Α¢ Î UΑ , we have that x¢ Î Da (j(Α¢ )). Thus, Uj(Α) Ì Da (j(Α¢)), for any Α¢ Î UΑ .
Remark 1.16. Corollary 1.10 states that for any Α¢ Î UΑ , both maneuvers Α ® Α¢ and Α¢ ® Α
are successful on the path j.
Theorem 1.19. For two steady states j(Αø ) and j(Αøø) belonging to the R-analytic path
j : W Ì Rm ® Rn of exponentially asymptotically stable steady states of (1.1), there exist a
finite number of values of the control parameters Α1 , Α2 , ..., Α p Î W such that all the maneuvers
Αø ® Α1 ® Α2 ® ... ® Α p ® Αøø
(1.131)
are successful on the path j.
Proof. Let be P Ì W a polygonal line which joins Αø and Αøø . For any Α Î P we consider the
neighborhoods UΑ and Uj(Α) given by Corollary 1.10.
The family of neighborhoods {UΑ}ΑÎP is a covering with open sets of the compact polygonal line
P. From this covering we can subtract a finite covering of P, i.e., there exist Ᾱ1 , Ᾱ2 , ..., Ᾱq Î P
q
such that P Ì Ç UᾹk . More, it can be assumed that Αø Î UᾹ1 and Αøø Î UᾹq and that the
k=1
intersections UᾹk È P are open and connected sets in P, and
(UᾹk È P) È (UᾹk+2 È P) = Æ for any k = 1, 2, ..., q - 2.
Taking into account Remark 1.16, as Αø Î UᾹ1 and Αøø Î UᾹq , it comes naturally that the
maneuvers Αø ® Ᾱ1 and Ᾱq ® Αøø are successful on the path j.
We still have to prove that each maneuver Ᾱk ® Ᾱk+1 is successful for any k = 1, 2, ..., q - 1.
If Ᾱk Î UᾹk+1 , Remark 1.16 provides that the maneuver Ᾱk ® Ᾱk+1 is successful on the path j.
If Ᾱk Î/ UᾹk+1 , a point Ᾱk,k+1 Î (UᾹk È P) È (UᾹk+1 È P) is considered. Remark 1.16 provides that
both maneuvers Ᾱk ® Ᾱk,k+1 and Ᾱk,k+1 ® Ᾱk+1 are successful on the path j.
Thus, eventually considering supplementary control parameters Ᾱk,k+1 between Ᾱk and Ᾱk+1 , we
come to find (after changing the notation and re-numbering) a finite sequence Α1 , Α2, ..., Α p Î W
such that all the maneuvers
Αø ® Α1 ® Α2 ® ... ® Α p ® Αøø
are successful on the path j.
73
Remark 1.17. Theorem 1.19 states that two steady states belonging to an analytic path j of
asymptotically stable steady states can be transferred one in the other using a finite number
of successful maneuvers. In fact, the transfer is made through the regions of attraction of the
states j(Α1 ), ..., j(Αn), j(Αøø).
Example 1.22. Consider the dynamical system with control:
ẋ = (x - Α)(x - Α - 1)(x - Α + 1)
(1.132)
where x Î R1 is the state parameter and Α Î R is the control parameter.
There are three analytic paths of steady states for (1.132): j1 (Α) = Α, j2(Α) = Α - 1 and
j3 (Α) = Α + 1; all paths are defined for Α Î R. The path j1 is an analytic path of asymptotically
stable steady states while j2 and j3 are analytic paths of unstable steady states.
For Α Î R, the domain of attraction of the asymptotically stable steady state j1 (Α) = Α is the
interval (Α - 1, Α + 1).
For Αø = -1 and Αøø = 1, let’s consider the asymptotically stable steady states j1 (Αø ) = -1
and j1 (Αøø ) = 1. The maneuver Α : Αø = -1 ® 1 = Αøø is not successful, because
j1 (Αø ) = -1 Î/ Da (j1 (Αøø )) = (0, 2). Though, a finite number of maneuvers can be found,
which transfer the steady state j1 (Αø ) = -1 to the steady state j1 (Αøø ) = 1, for example:
Α : Αø = -1 ® -0.5 ® 0 ® 0.5 ® 1 = Αøø
(1.133)
74
Chapter 2
Regions of attraction in the case of discrete
semi-dynamical systems
2.1 Introduction
2.1.1 Discrete semi-dynamical systems in Rn
Definition 2.1. A function Ψ : N ´ Rn ® Rn , continuous with respect to x Î Rn is called discrete
semi-dynamical system in Rn if it satisfies the following properties:
1. Ψ(0, x) = x for any x Î Rn
2. Ψ(k, Ψ(l, x)) = Ψ(k + l, x) for any k, l Î N and x Î Rn
Proposition 2.1. If Ψ : N ´ Rn ® Rn is a discrete semi-dynamical system in Rn then there exists
a continuous function f : Rn ® Rn such that
Ψ(k, x) = f k (x)
"k Î N, x Î Rn
(2.1)
where f 0 = idRn and f k+1 = f ë f k for any k Î N.
Proof. The function f (x) = Ψ(1, x) satisfies the demanded relation.
Proposition 2.2. Let be f : Rn ® Rn a continuous function. Then the function Ψ : N ´ Rn ® Rn
defined by
x
for k = 0
Ψ(k, x) = ; k
(2.2)
f (x) for k ³ 1
is a discrete semi-dynamical system in Rn (where f k+1 = f ë f k for any k Î N).
Proof. It can be easily verified that the function Ψ defined above satisfies the conditions from
Definition 2.1.
The previous propositions show that a discrete semi-dynamical system Ψ : N ´ Rn ® Rn is
defined by a continuous function f : Rn ® Rn with Ψ(k, x) = f k (x). For this reason, a discrete
semi-dynamical system is traditionally denoted as follows:
xk+1 = f (xk )
75
"k Î N
(2.3)
76
Definition 2.2. Let be W Ì Rn and f : W ® W a continuous function. The system
xk+1 = f (xk )
"k Î N
(2.4)
is a discrete semi-dynamical system in W.
2.1.2 Paths of steady states
Consider the following discrete semi-dynamical system with parameters:
xk+1 = f (xk , Α)
"k Î N
(2.5)
where f : W ´ D ® W is continuous. The state parameters of the system are x = (x1 , x2 ...xn ) Î
W Ì Rn and Α = (Α1 , Α2 ...Αm ) Î D Ì Rm are the control parameters. The steady states of system
(2.5) are the solutions of the system of algebraic equations:
f (x, Α) = x
(2.6)
Definition 2.3. A path of steady states of the system (2.5) is a function j : D¢ Ì D ® W
satisfying
f (j(Α), Α) = j(Α),
for any Α Î D¢
(2.7)
If the partial derivatives of j exist and are continuous until order k then the path j is said to
be Ck . The path j is C0 if j is just continuous. If j is R-analytic then the path j is said to be
analytical. If j(Α0) = x0 then the path j passes through (x0 , Α0).
In the followings, Ρ(A) will denote the spectral radius of the square matrix A.
Theorem 2.1. If the function f in the system (2.5) satisfies:
1. there exists x0 Î W and Α0 Î D such that f (x0 , Α0 ) = x0
2. f is of class C1 and Ρ(Dx f (x0 , Α0)) < 1
then there exists a path j of steady states satisfying the following properties:
a. j is C1 and j(Α0) = x0 ;
b. Ρ(Dx f (j(Α), Α)) < 1 for any Α Î D¢ .
Theorem 2.2. The function j : D¢ Ì D ® W is a path of steady states of system (2.5) if and
only if the function Ψ : D¢ Ì D ® (W - j(Α)) defined by Ψ(Α) = 0 for any Α Î D¢ is a path of
steady states for the system
yk+1 = g(yk , Α)
"k Î N
(2.8)
where the function g : (W - j(Α)) ´ D ® (W - j(Α)) is defined by g(y, Α) = f (y + j(Α), Α) - j(Α).
77
2.1.3 Asymptotic stability and regions of attraction
Assume that the function f in the system (2.5) is of class C1 , j : D¢ Ì D ® W is a path of steady
states of class C1 for (2.5).
Definition 2.4. For a given Α Î D¢ the steady state x = j(Α) of the system (2.5) is stable if for
every ¶ > 0 there exists ∆Α = ∆Α (¶) > 0 such that üx - j(Α)ü < ∆Α implies üxΑ (k, x) - j(Α)ü < ¶
for k ³ 0.
Remark 2.1. In Definition 2.4, xΑ (k, x) is given by:
xk+1 = f (xk , Α)
x0 = x
(2.9)
The following statement holds: the steady state x = j(Α) of the system (2.5) is stable if and only
if the steady state y = 0 of the system (2.8) is stable.
Definition 2.5. For a given Α Î D¢ the steady state x = j(Α) of the system (2.5) is attractive if
there exists rΑ > 0 such that üx - j(Α)ü < rΑ implies lim xΑ (k, x) = j(Α).
k®¥
Remark 2.2. The steady state x = j(Α) of the system (2.5) is attractive if and only if the steady
state y = 0 of the system (2.8) is attractive.
The definition of attractiveness requires only the existence of rΑ > 0 satisfying its condition
independent of whether rΑ is large or small. It is important to derive, or at least to estimate well
the set of all initial states x for which lim xΑ (k, x) = j(Α).
k®¥
Stability and attraction are in general mutually independent properties. Both properties are often
desired, which leads to the concept of asymptotic stability.
Definition 2.6. For a given Α Î D¢ the steady state x = j(Α) of the system (2.5) is asymptotically
stable if it is both stable and attractive.
Proposition 2.3. (see [Ela96]) If Ρ(Dx f (j(Α), Α)) < 1 then the steady state x = j(Α) of the
system (2.5) is asymptotically stable.
Remark 2.3. The condition Ρ(Dx f (j(Α), Α)) < 1 is only a sufficient condition for the asymptotic
stability of the steady state x = j(Α) of the system (2.5).
Definition 2.7. Suppose that for a given Α, xΑ is an asymptotically stable steady state for (2.5).
If Ρ(Dx f (xΑ , Α)) < 1 then xΑ is called strongly asymptotically stable. Otherwise, it is called
weakly asymptotically stable.
Remark 2.4. The steady state x = j(Α) of the system (2.5) is asymptotically stable if and only
if the steady state y = 0 of the system (2.8) is asymptotically stable.
Definition 2.8. The region of attraction of the asymptotically stable steady state x = j(Α) of the
system (2.5) is defined by:
Da (j(Α)) = {x Î W/ lim xΑ (k, x) = j(Α)}
k®¥
(2.10)
Proposition 2.4. The region of attraction Da (j(Α)) is an open neighborhood of x = j(Α) and it
is invariant to the flow defined by (2.5).
78
Remark 2.5. The region of attraction of an asymptotically stable steady state of a discrete semidynamical system is not necessarily connected (which is the case for continuous dynamical
systems). This fact is shown by the following example.
Example 2.1. Consider the discrete semi-dynamical system defined by the function function
f : R ® R given by f (x) = 21 x - 14 x2 + 12 x3 + 14 x4 . The region of attraction of the asymptotically
stable steady state x = 0 is Da (0) = (-2.79, -2.46) Ç (-1, 1) which is not connected.
Remark 2.6. The region of attraction Da (j(Α)) satisfies:
Da (j(Α)) = DΑa (0) + j(Α)
(2.11)
where DΑa (0) is the region of attraction of the asymptotically stable steady state y = 0 of the
system (2.8).
Due to (2.11) the computation or estimation of Da (j(Α)) reduces to the computation or estimation of DΑa (0).
2.1.4 Comments on the use of Lyapunov functions
In the followings, we will consider the following discrete semi-dynamical system:
xk+1 = f (xk )
"k Î N
(2.12)
where f : W Ì Rn ® W is an R-analytic function on W 0 with f (0) = 0 (i.e. x = 0 is a steady
state of (2.12)). It is assumed that the steady state x = 0 of the system (2.12) is asymptotically
stable and let be Da (0) its region of attraction.
Theoretical research shows that the region of attraction of an asymptotically stable steady state
of a discrete semi-dynamical system and its boundary are complicated sets [Koc90, LQVY91,
LT88, LaS97, LaS86]. In most cases, they do not admit an explicit elementary representation.
Different procedures are used for the approximation of the Da (0) with domains having a simpler
shape. For example, in the case of the Theorem 4.20 pg. 170 [KP01] the domain which
approximates Da (0) is defined by the Lyapunov function V (x) = Ú üD f (0)k xü2 built using
¥
the matrix D f (0), under the assumption Ρ(D f (0)) < 1.
k=0
In [KBBB03], a method of computing a Lyapunov function V is presented in the case when the
matrix D f (0) is a contraction, i.e. üD f (0)ü < 1. The Lyapunov function V is built using
the whole nonlinear system, not only the matrix D f (0). The function V is defined on the
whole region of attraction Da (0), and more, Da (0) is the natural domain of analyticity of V .
In [KBGB05b], this result is extended for the more general case when Ρ(D f (0)) < 1. This last
result is the following:
Theorem 2.3. If the function f satisfies Ρ(D f (0)) < 1 then the region of attraction Da (0)
coincides with the natural domain of analyticity of the unique solution V of the iterative first
order functional equation
V ( f (x)) - V (x) = -üxü2
(2.13)
; V (0) = 0
The function V is strictly positive on Da (0) {0} and V (x) ® ¥ for x ® y, y Î ¶Da (0) or for
üxü ® ¥.
79
The function V is given by
¥
V (x) = â ü f k (x)ü2
for any x Î Da (0)
(2.14)
k=0
Proof. Let be A = D f (0) and r = Ρ(D f (0)).
We will prove that the series Ú ü f k (x)ü2 is convergent for any x Î Da (0).
¥
k=0
We can decompose the function f as follows:
f (x) = Ax + h(x)
(2.15)
where h : W ® Rn is an R-analytical function which satisfies h(0) = 0.
As r < 1, there exists c > 0 such that
üAk xü < crk üxü
Let be c̄ = max{1, c} and ¶ =
exists ∆ > 0 such that
1-r
2c̄
"x Î W, k Î N
(2.16)
> 0. As h is continuous and h(0) = 0, it follows that there
üh(x)ü < ¶üxü
"x Î B(0, ∆).
(2.17)
Let be x Î Da (0) and xk = f k (x). This provides the existence of kx Î N such that xk Î B(0, ∆),
for any k ³ kx . Let be yk = xk+kx . This sequence satisfies yk Î B(0, ∆) for any k Î N and it also
satisfies yk+1 = f (yk ).
The formula of variation of constants gives:
k-1
yk = Ak y0 + â Ak-i-1 h(yi )
"k Î Nø
(2.18)
i=l
Relations (2.16) and (2.18) provide
k-1
üyk ü £ crk üy0 ü + â crk-i-1 üh(yi )ü
"k Î Nø
(2.19)
i=0
and using (2.17) and c £ c̄, the following inequality results:
k-1
üyk ü £ c̄r üy0 ü + â c̄rk-i-1 ¶üyi ü
k
"k Î Nø
(2.20)
i=0
Relation (2.20) can be written as
k-1
r üyk ü £ c̄üy0 ü + â c̄r-1 ¶(r-i üyi ü)
-k
"k Î Nø
(2.21)
i=0
Gronwall’s inequality for the discrete case provides
k-1
r üyk ü £ c̄üy0 ü ä(1 + c̄r-1 ¶)
-k
i=0
"k Î Nø
(2.22)
80
thus
üyk ü £ rk c̄üy0 ü(1 + c̄r-1 ¶)k = c̄üy0 ü(r + c̄¶)k = c̄üy0 ü(
Denoting
r+1
2
r+1 k
)
2
"k Î Nø
(2.23)
= Α < 1, the relation (2.23) gives
üxk ü £ c̄üxkx üΑk
"k ³ kx
(2.24)
Thus, for any x Î Da (0) there exists kx ³ 0 such that
ü f k (x)ü £ c̄ü f kx (x)üΑk
"k ³ kx
(2.25)
which assures that the series Ú ü f k (x)ü2 is convergent for any x Î Da (0).
¥
k=0
Let be V = V (x) the function defined by
¥
V (x) = â ü f k (x)ü2
"x Î Da (0)
(2.26)
k=0
The above function defined on Da (0) is analytical, strictly positive on Da (0) {0} and satisfies
(2.13). In order to show that the function V defined by (2.26) is the unique function which
satisfies (2.13) we consider V ¢ = V ¢ (x) satisfying (2.13) and we denote by V ¢¢ the difference
V ¢¢ = V - V ¢ . It is easy to see that V ¢¢ ( f (x)) - V ¢¢ (x) = 0, for any x Î Da (0). Therefore, we have
V ¢¢ (x) = V ¢¢ ( f k (x)) for any x Î Da (0) and any k Î N. It follows that V ¢¢ (x) = lim V ¢¢ ( f k (x)) = 0
k®¥
for any x Î Da (0). In other words, V (x) = V ¢ (x), for any x Î Da (0), so V defined by (2.26) is
the unique function which satisfies (2.13).
x®y
In order to show that V (x) ¥ for any y Î ¶Da (0) we consider y Î ¶Da (0) and M > 0 such
that ü f k (y)ü > M, for any k Î N. For an arbitrary positive number N > 0 we consider the first
2N
M
k
0
natural number k1 which satisfies k1 ³ M
for any
2 + 1. Let be r1 > 0 such that ü f (x)ü ³
2
k = 1, 2, .., k1 and x Î B(y, r1). For any x Î B(y, r1 )ÈDa (0) we have Ú ü f k (x)ü2 > N. Therefore,
k1
k=0
x®y
V (x) ¥.
üxü®¥
In the same way it can be proved that V (x) ¥.
Definition 2.9. The unique R-analytical function V which satisfies (2.13) is called optimal
Lyapunov function.
81
2.2 Methods for determining the region of attraction with
strong asymptotic stability conditions, using Lyapunov
functions
2.2.1 Determining the region of attraction by the gradual extension of the
optimal Lyapunov function’s embryo
When the function f is R-analytic and Ρ(D f (0)) < 1, the optimal Lyapunov function V can be
found theoretically using (2.14). More precisely, in this way, the embryo V 0 (i.e. the sum of the
series) of the function V is found theoretically on the domain of convergence D0 of the power
series expansion. If D0 is a strict part of Da (0), then the embryo V 0 can be prolonged using the
algorithm of prolongation of analytic functions:
If D0 is strictly contained in Da (0), then there exists a point x Î ¶D0 such that the function V
is bounded on a neighborhood of x. Let be a point x1 Î D0 close to x and the power series
expansion of V in x1 . The domain of convergence D1 of the series centered in x1 gives a new
part D1 (D0 È D1 ) of the region of attraction Da (0). The sum V 1 of the series centered in x1 is a
prolongation of the function V 0 to D1 and coincides with V on D1 . At this step, the part D0 Ç D1
of Da (0) and the restriction of V to D0 Ç D1 are obtained.
If there exists a point x Î ¶(D0 Ç D1 ) such that the function V |D0 Ç D1 is bounded on a
neighborhood of x, then the domain D0 Ç D1 is strictly included in the region of attraction
Da (0). In this case, the procedure described above is repeated, in a point x2 close to x.
The procedure cannot be continued when it is found that on the boundary of the domain
D0 Ç D1 Ç ... Ç Dm obtained at step m, there are no points having neighborhoods on which
V |D0 Ç D1 Ç ... Ç Dm is bounded.
The procedure described above gives an open and connected estimate D = D0 Ç D1 Ç ... Ç Dm
of the region of attraction Da (0).
In the followings, if D is a set contained in W, we will denote by f -k (D) the pre-image of D by
f k , i.e.
f -k (D) = {x Î W/ f k (x) Î D}
(2.27)
Note that, for any k Î N, the set f -k (D) is also an estimate of Da (0), which is not necessarily
connected.
We illustrate this procedure by the following examples:
k
Example 2.2. Let be f : R ® R defined by f (x) = x2 . Due to the equality f k (x) = x2 the
region of attraction of the strongly asymptotically stable steady state x = 0 is Da (0) = (-1, 1).
The optimal Lyapunov function is V (x) = Ú x2 . The domain of convergence of the series is
¥
k+1
k=0
D0 = (-1, 1) which coincides with Da (0).
x
Example 2.3. Let be f : W = (-¥, 1) ® W defined by f (x) = e+(1-e)x
. Due to the equality
x
k
f (x) = ek +(1-ek )x the region of attraction of the asymptotically stable steady state x = 0 is
Da (0) = (-¥, 1). The power series expansion of the Lyapunov function V (x) = Ú | f k (x)|2 in 0
¥
k=0
82
is
¥
¥
m=2
k=0
V (x) = â(m - 1) â e-2k (1 - e-k )m-2 xm
(2.28)
The radius of convergence of the series (2.28) is
3
¥
r0 = lim
(m - 1) â e-2k (1 - e-k )m-2 = 1
m
m®¥
(2.29)
k=0
therefore the domain of convergence of the series (2.28) is D0 = (-1, 1) Ì Da (0). More,
V (x) ® ¥ as x ® 1 and V (-1) < ¥. The radius of convergence of the power series expansion
of V in -1 is
3
¥
r-1 = lim
m®¥
m
ek (ek - 1)m-2 [(m - 3)ek + 2]
=1
â
k
m+2
(2e
1)
k=1
(2.30)
therefore, the domain of convergence of the power series development of V in -1 is D1 = (-2, 0)
which gives a new part of Da (0).
In practice, the following algorithm has to be used:
1. The coefficients Bk , k = (k1 , k2 ...kn ) Î Nn , of the optimal Lyapunov function V given by
2.14 are computed up to a finite degree P = |k|, and the following Taylor polynomial of
the embryo V 0 is built:
p
VP0 (x)
= â Bk xk
(2.31)
| j|=2
Contrary to the case of continuous dynamical systems (see Chapter 1), in general, we
do not have a formula for the computation of the coefficients Bk of the power series
development of the optimal Lyapunov function V . Therefore, we have to approximate the
optimal Lyapunov function V by the finite sum Vp(x) = Ú ü f k (x)ü2 , for a given p Î N, and
p
k=0
p
the coefficients Bk of the series of V by the coefficients Bk of the series of Vp. The results
obtained by this method are highly dependent on the accuracy of this approximation.
2
2. The set
D0P
n
= {x Î R /
p
â |Bk xk | < 1}
(2.32)
|k|=p
is considered.
3. The first approximation of Da (0) is D0aP = L[VP0 ] Ì D0P , the domain on which VP0 satisfies
Lyapunov’s conditions:
; VP0 ( f (x)) - V 0 (x)\ < 0 for any x Î L[VP0 ] {0}
P
P
P
V 0 (x) ³ 0
for any x Î L[V 0 ]
(2.33)
4. Let be x1 Î ¶D0aP such that |VP0 (x1 )| = min |VP0 (x)| and the Taylor polynomial of V 1 in x1 :
xζD0aP
P
VP1 (x) = â B1k (x - x1 )k
|k|=0
where B1k =
¶V0P 1
(x )
¶xk
(2.34)
83
2
5. The set:
D1P
n
= {x Î R /
P
â |B1k (x - x1 )k | < 1}
(2.35)
|k|=P
is considered.
6. The set D1aP = L[VP1 ] Ì D1P , on which VP1 satisfies Lyapunov’s conditions is a new, open
and nonempty part of Da (0).
7. This procedure is continued until an estimate D0aP Ç D1aP Ç ... Ç DkaP of Da (0) is obtained.
In the followings, some examples will be presented. These examples are meant to illustrate the
procedure presented above [KBBB03, KBGB05b].
The computations were made using Mathematica 5.0. In our figures, the thick black line
represents the true boundary of the region of attraction (if it is known), the dark grey set denotes
the first estimate D0aP , while the further estimates DkaP are colored in lighter shades of grey. The
black points represent the steady states of the system.
Example 2.4. Let be the following discrete semi-dynamical system in R:
1
xk+1 = xk - x2k + 2x3k - 4x4k
2
kÎN
(2.36)
There are two steady states: x = 0 (strongly asymptotically stable) and x = -0.271845
(unstable). It can be proved (by the staircase method) that the region of attraction of x = 0
is Da (0) = (-0.271845, 0.653564).
We have applied the algorithm described above to obtain an estimate of the Da (0). For P = 256
we obtained:
1. After the first step, we obtained D0aP = (-0.27184, 0.27184).
2. After the second step applied in x1 = 0.2718, we obtained D1aP = (0.01345, 0.53015).
3. After the third step applied in x2 = 0.5, we obtained D2aP = (0.38378, 0.61622).
4. After the forth step applied in x3 = 0.61, we obtained D3aP = (0.59785, 0.622175).
Therefore, the estimate of Da (0) obtained after four steps is the interval (-0.27184, 0.622175).
Example 2.5. Let be the following discrete semi-dynamical system
xk+1 = 12 xk (1 + x2k + 2y2k )
; y = 1 y (1 + x2 + 2y2 )
k+1
k
k
2 k
kÎN
(2.37)
There exists an infinity of steady states for this system: (0, 0) (strongly asymptotically stable, as
üD f (0)ü = 21 ) and all the points (x, y) belonging to the ellipsis x2 + 2y2 = 1 (all unstable). The
region of attraction of (0, 0) is Da (0, 0) = {(x, y) Î R2 : x2 + 2y2 < 1}.
For P = 162, we obtain the estimate D0aP of the region of attraction presented in Figure 2.1,
which is a good approximation of Da (0, 0).
84
1
1
0.75
0.5
0.5
0.25
0
0
-0.25
-0.5
-0.5
-0.75
-1
-1
-0.5
0
1
0.5
-20
-10
0
10
20
Figure 2.1: The estimate D0aP , P = 162 of Figure 2.2: The estimate D0aP , P = 72 of
Da (0, 0) after 1 step for system (2.37)
Da (0, 0) after 1 step for system (2.38)
Example 2.6. The following discrete semi-dynamical system is considered:
; yk+1 = yk3
k+1
k
x
= x yk + yk
kÎN
(2.38)
We have that Ρ(D f (0)) = 0, therefore (0, 0) is a strongly asymptotically stable steady state of the
system (2.38). It can be proved theoretically that its region of attraction is Da (0, 0) = R´(-1, 1).
For P = 72, we obtain the estimate D0aP of the region of attraction presented in Figure 2.2, which
is a good approximation of Da (0, 0).
Example 2.7. Discrete predator-prey system. We consider the discrete semi-dynamical system:
; yk+1 = 1 xk y
k+1
b k k
x
= ax (1 - xk ) - xk yk
1
with a = , b = 1, k Î N
2
(2.39)
The steady states of this system are: (0, 0) (strongly asymptotically stable), (-1, 0) and (1, -1)
(both unstable).
For P = 32, we obtain the estimate D0aP of the region of attraction presented in Figure 2.3.
Example 2.8. The following discrete semi-dynamical system is considered:
; yk+1 = - 21 yk + xk yk
k+1
k k
2 k
x
= -1x + x y
kÎN
(2.40)
The steady states of system (2.40) are: (0, 0) (strongly asymptotically stable, üD f (0)ü = 12 ) and
( 23 , 32 ) (unstable).
For P = 64, after two steps, we obtain the estimate D0aP Ç D1aP of the region of attraction
presented in Figure 2.4.
85
3
20
2
10
1
0
0
-1
-10
-2
-20
-3
-1
-0.5
0
0.5
1
-3
-2
-1
0
1
2
3
Figure 2.3: The estimate D0aP , P = 32 of Figure 2.4: The estimate D0aP Ç D1aP , P = 64 of
Da (0, 0) after 1 step for system (2.39)
Da (0, 0) after 2 steps for system (2.40)
2.2.2 Properties of partial sums Vp of the optimal Lyapunov function and
other methods for approximating the regions of attraction
In the followings, we will consider the partial sums of the series of the optimal Lyapunov
function given by (2.14):
p
Vp(x) = â ü f k (x)ü2
for any x Î W
(2.41)
k=0
Case I: the matrix A = D f (0) is a contraction, i.e. üAü < 1
The function f can be written as
f (x) = Ax + g(x)
for any x Î W
(2.42)
= 0.
where A = D f (0) and g : W ® Rn is an R-analytic function such that g(0) = 0 and lim üg(x)ü
üxü
x®0
Proposition 2.5. If üAü < 1, then there exists r > 0 such that B(r) = B(0, r) Ì W and
ü f (x)ü < üxü for any x Î B(r) {0}.
Proof. Due to the fact that lim üg(x)ü
= 0 there exists r > 0 such that B(r) Ì W and
üxü
x®0
üg(x)ü < (1 - üAü)üxü
for any x Î B(r) {0}
(2.43)
Let be x Î B(r) {0}. Inequality (2.43) provides that
ü f (x)ü = üAx + g(x)ü £ üAüüxü + üg(x)ü < (üAü + 1 - üAü)üxü = üxü
therefore, ü f (x)ü < üxü.
(2.44)
Definition 2.10. Let be R > 0 the largest number such that B(R) Ì W and ü f (x)ü < üxü for any
x Î B(R) {0}.
If for any r > 0 we have that B(r) Ì W and ü f (x)ü < üxü for any x Î B(r) {0}, then R = +¥
and B(R) = W = Rn .
86
Lemma 2.1.
a. B(R) is invariant to the flow of system (2.12).
b. For any x Î B(R), the sequence (ü f k (x)ü)kÎN is decreasing.
c. For any p ³ 0 and x Î B(R) {0}, DVp(x) = Vp( f (x)) - Vp(x) < 0.
Proof. a. If x = 0, then f k (0) = 0, for any k Î N. For x Î B(R) {0}, we have ü f (x)ü < üxü,
which implies that f (x) Î B(R), i.e. B(R) is invariant to the flow of system (2.12).
b. By induction, it results that for x Î B(R) we have f k (x) Î B(R) and ü f k+1 (x)ü £ ü f k (x)ü,
which means that the sequence (ü f k (x)ü)kÎN is decreasing.
c. In particular, for p ³ 0 and x Î B(R), we have ü f p+1 (x)ü £ ü f (x)ü < üxü and therefore,
DVp(x) = ü f p+1 (x)ü2 - üxü2 < 0.
Corollary 2.1. For any p ³ 0, there exists a maximal domain G p Ì W such that 0 Î G p and for
x Î G p {0}, the (positive definite) function Vp verifies DVp(x) < 0. In other words, for any p ³ 0
the function Vp defined by (2.41) is a Lyapunov function for (2.12) on G p. More, B(R) Ì G p for
any p ³ 0.
Theorem 2.4. B(R) is an invariant set included in the region of attraction Da (0).
Proof. Let be x Î B(R) {0}. We have to prove that lim f k (x) = 0.
k®¥
The sequence ( f k (x))kÎN is bounded: f k (x) belongs to B(R). Let be ( f k j (x)) jÎN a convergent
subsequence and let be lim f k j (x) = y0 . It is clear that y0 Î B(R).
j®¥
It can be shown that
ü f k (x)ü ³ üy0 ü
kj
for any k Î N
(2.45)
kj
For this, observe first that f (x) ® y0 and (ü f (x)ü)kÎN is decreasing (Lemma 2.1). These imply
that ü f k j (x)ü ³ üy0 ü for any k j . On the other hand, for any k Î N, there exists k j Î N such that
k j ³ k. Therefore ü f k (x)ü ³ ü f k j (x)ü ³ üy0 ü.
We show now that y0 = 0. Suppose the contrary, i.e. y0 ¹ 0.
Inequality (2.45) becomes
ü f k (x)ü ³ üy0 ü > 0
for any k Î N
(2.46)
Since y0 Î B(R), we have that ü f (y0 )ü < üy0 ü.
Therefore, there exists a neighborhood U f (y0 ) Ì B(R) of f (y0 ) such that for any z Î U f (y0 ) we
have üzü < üy0 ü. On the other hand, for the neighborhood U f (y0 ) there exists a neighborhood
Uy0 Ì B(R) of y0 such that for any y Î Uy0 , we have f (y) Î U f (y0 ) . Therefore:
ü f (y)ü < üy0 ü
for any y Î Uy0
(2.47)
As f k j (x) ® y0 , there exists j̄ such that f k j (x) Î Uy0 , for any j ³ j̄. Making y = f k j (x) in (2.47),
it results that
ü f k j +1 (x)ü = ü f ( f k j (x))ü < üy0 ü
for j ³ j̄
(2.48)
which contradicts (2.46). This means that y0 = 0, consequently, every convergent subsequence
of ( f k (x))kÎN converges to 0. This provides that the sequence ( f k (x))kÎN is convergent to 0, and
x Î Da (0).
Therefore, the ball B(R) is contained in the region of attraction of Da (0).
87
For p ³ 0 and c > 0 let be Npc the set defined by
Npc = {x Î W : Vp(x) < c}
(2.49)
If c = +¥, then Npc = W.
Theorem 2.5. Let be p ³ 0. For any c Î (0, (p + 1)R2 ], the set Npc is included in the region of
attraction Da (0).
Proof. Let be c Î (0, (p+1)R2] and x Î Npc . Then Vp(x) = Ú ü f k (x)ü2 < c £ (p+1)R2 , therefore,
p
there exists k Î {0, 1, .., p} such that ü f k (x)ü2 < R2 . It results that f k (x) Î B(R) Ì Da (0),
therefore, x Î Da (0).
k=0
¢
¢¢
Remark 2.7. It is obvious that for p ³ 0 and 0 < c¢ < c¢¢ one has Npc Ì Npc . Therefore,
cp
for a given p ³ 0, the set Np , where c p = (p + 1)R2 , is the largest part in the family Npc ,
c Î (0, (p + 1)R2) which is a part of Da (0). In the followings, we will use the notation Np instead
cp
of Np . Shortly, Np = {x Î W : Vp(x) < (p + 1)R2 } is a part of Da (0). Let’s note that N0 = B(R).
Remark 2.8. If R = +¥ (i.e. W = Rn and ü f (x)ü < üxü, for any x Î R {0}), then Np = Rn for
any p ³ 0 and Da (0) = Rn .
Theorem 2.6. For the sets (Np) pÎN , the following properties hold:
a. For any p ³ 0, one has Np Ì Np+1 ;
b. For any p ³ 0 the set Np is invariant to f ;
c. For any x Î Da (0) there exists px ³ 0 such that x Î Npx .
Proof. a. Let be p ³ 0 and x Î Np. Then Vp(x) = Ú ü f k (x)ü2 < (p + 1)R2 , therefore,
p
there exists k Î {0, 1, .., p} such that ü f k (x)ü2 < R2 . It results that f k (x) Î B(R) and
therefore f m (x) Î B(R), for any m ³ k. For m = p + 1 we obtain ü f p+1 (x)ü < R, hence
Vp+1 (x) = Vp(x) + ü f p+1 (x)ü2 < (p + 1)R2 + R2 = (p + 2)R2 . Therefore, x Î Np+1 .
k=0
b. Let be x Î Np. If üxü < R then ü f m (x)ü < R for any m ³ 0 (by means of Lemma 2.1). This
implies that Vp( f (x)) = Ú ü f k ( f (x))ü2 = Ú ü f k (x)ü2 < (p + 1)R2 , meaning that f (x) Î Np.
p
p+1
k=0
k=1
Let’s suppose that üxü ³ R. As x Î Np, we have that Vp(x) = Ú ü f k (x)ü2 < (p + 1)R2, therefore,
p
there exists k Î {0, 1, .., p} such that ü f k (x)ü < R. It results that f k (x) Î B(R) and therefore
f m (x) Î B(R), for any m ³ k. For m = p + 1 we obtain ü f p+1 (x)ü < R. This implies that
k=0
Vp( f (x)) = Vp(x) + ü f p+1 (x)ü2 - üxü2 < (p + 1)R2 + R2 - R2 = (p + 1)R2
(2.50)
therefore f (x) Î Np.
c. Suppose the contrary, i.e. there exist x Î Da (0) such that for any p ³ 0, x Î/ Np. Therefore,
Vp(x) ³ (p + 1)R2 for any p ³ 0. Passing to the limit for p ® ¥ in this inequality, provides that
V (x) = ¥, absurd. In conclusion, there exists px ³ 0 such that x Î Npx .
88
For p ³ 0 let be M p = f -p(B(R)) = {x Î W : f p(x) Î B(R)}, the pre-image of B(R) by f p.
Theorem 2.7. The following properties hold:
a. M p Ì Da (0) for any p ³ 0;
b. For any p ³ 0, M p is invariant to f ;
c. M p Ì M p+1 for any p ³ 0;
d. For any x Î Da (0) there exists px ³ 0 such that x Î M px .
Proof. a. As M p = f -p(B(R)) and B(R) Ì Da (0) (see Theorem 2.4) it is clear that M p Ì Da (0).
b. and c. follow easily by induction, using Lemma 2.1.
d. x Î Da (0) provides that f p(x) ® 0 as p ® ¥. Therefore, there exists px Î N such that
f p(x) Î B(R), for any p ³ px . This provides that x Î M p for any p ³ px .
Both sequences of sets (M p) pÎN and (Np) pÎN are increasing, and are made up of estimates of
Da (0). From the practical point of view, it is important to know which sequence converges more
quickly. The next theorem provides that the sequence (M p) pÎN converges more quickly than
(Np) pÎN , meaning that for p ³ 0, the set M p is a larger estimate of Da (0) than Np.
Theorem 2.8. For any p ³ 0 one has Np Ì M p.
Proof. Let be p ³ 0 and x Î Np. We have that Vp(x) = Ú ü f k (x)ü2 < (p + 1)R2 , therefore, there
p
exists k Î {0, 1, .., p} such that ü f k (x)ü < R. This implies that f m (x) Î B(R), for any m ³ k. For
m = p we obtain f p(x) Î B(R), meaning that x Î M p.
k=0
Example 2.9. In the case of system (2.37), as üD f (0, 0)ü = 12 , we compute the largest number
R > 0 such that ü f (x)ü < üxü for any x Î B(R) {0}, and we find R = 0.707.
For p = 1, 4 we find the Np sets shown in Figure 2.5.1, parts of Da (0, 0) (Np Ì Np+1 , for p ³ 0).
In Figure 2.5.1, the thick-contoured ellipsis represents the boundary of Da (0, 0).
In Figure 2.5.2, the sets M p are represented, for p = 1, 7 (M p Ì M p+1 , for p ³ 0). Note that M7
approximates with a good accuracy the region of attraction.
Example 2.10. Discrete predator-prey system. In the case of system (2.39), we have
üD f (0, 0)ü = 12 , and the largest number R > 0 such that ü f (x)ü < üxü for any x Î B(R) {0} is
R = 0.648.
Figure 2.6.1 presents the Np sets for p = 0, 5, parts of Da (0, 0) (Np Ì Np+1 , for p ³ 0). The
black points in Figure 2.6.1 represent the steady states of the system.
In Figure 2.6.2, the sets M p are represented, for p = 1, 7 (M p Ì M p+1 , for p ³ 0). Note that
the boundary of M7 approaches very much the steady states (-1, 0) and (1, -1), which suggests
that M7 is a good approximation of Da (0, 0).
89
1
1
0.75
0.75
0.5
0.5
0.25
0.25
0
0
-0.25
-0.25
-0.5
-0.5
-0.75
-0.75
-1
-0.5
0
1
0.5
-1
-0.5
0
1
0.5
Figure 2.5.1: The sets Np, p = 1, 4 and ¶Da (0, 0) Figure 2.5.2: The sets M p, p = 1, 7 for (2.37)
for (2.37)
1.5
10
1
5
0.5
0
0
-0.5
-5
-1
-10
-1.5
-1
-0.5
0
0.5
1
1.5
Figure 2.6.1: The sets Np, p = 0, 5 for (2.39)
-4
-2
0
2
4
Figure 2.6.2: The sets M p, p = 1, 7 for (2.39)
1.5
4
1
2
0.5
0
0
-0.5
-2
-1
-4
-1.5
-1.5
-1
-0.5
0
0.5
1
1.5
Figure 2.7.1: The sets Np, p = 1, 5 for (2.40)
-4
-2
0
2
4
Figure 2.7.2: The sets M p, p = 1, 7 for (2.40)
90
Example 2.11. In the case of system (2.40), we have üD f (0, 0)ü = 12 , and the largest number
R > 0 such that ü f (x)ü < üxü for any x Î B(R) {0} is R = 0.707.
Figure 2.7.1 presents the Np sets for p = 1, 5, parts of Da (0, 0) (Np Ì Np+1 , for p ³ 0). The
black points represent the steady states of the system.
In Figure 2.7.2, the sets M p are represented, for p = 1, 7 (M p Ì M p+1 , for p ³ 0). The set M7 is
a good approximation of Da (0, 0).
Case II: the matrix A = Df(0) is a convergent non-contractive matrix, i.e. ρ(A) < 1 ≤ ‖A‖
Proposition 2.6. If ρ(A) < 1 ≤ ‖A‖, then there exists p̃ ≥ 2 such that ‖A^p‖ < 1 for p ≥ p̃, and there exists r_p̃ > 0 such that B(r_p̃) ⊂ Ω and ‖f^p(x)‖ < ‖x‖ for any p ∈ {p̃, p̃+1, .., 2p̃-1} and x ∈ B(r_p̃)\{0}.
Proof. We have that ρ(A) < 1 if and only if lim_{p→∞} A^p = 0 (see [HJ85]), which gives (together with ‖A‖ ≥ 1) that there exists p̃ ≥ 2 such that ‖A^p‖ < 1 for any p ≥ p̃. Let p̃ ≥ 2 be fixed with this property.
The variation-of-constants formula gives, for any p:
f^p(x) = A^p x + Σ_{k=0}^{p-1} A^{p-k-1} g(f^k(x))   for all x ∈ Ω and p ∈ N*   (2.51)
Since lim_{x→0} ‖g(f^k(x))‖/‖x‖ = 0 for any k ∈ N, there exists r_p̃ > 0 such that for any p ∈ {p̃, p̃+1, .., 2p̃-1} the following inequality holds:
Σ_{k=0}^{p-1} ‖A^{p-k-1}‖ ‖g(f^k(x))‖ < (1 - ‖A^p‖)‖x‖   for x ∈ B(r_p̃)\{0}   (2.52)
Let x ∈ B(r_p̃)\{0} and p ∈ {p̃, p̃+1, .., 2p̃-1}. Using (2.51) and (2.52) we have
‖f^p(x)‖ = ‖A^p x + Σ_{k=0}^{p-1} A^{p-k-1} g(f^k(x))‖ ≤ ‖A^p‖‖x‖ + Σ_{k=0}^{p-1} ‖A^{p-k-1}‖ ‖g(f^k(x))‖ < (‖A^p‖ + 1 - ‖A^p‖)‖x‖ = ‖x‖   (2.53)
Therefore, ‖f^p(x)‖ < ‖x‖ for p ∈ {p̃, p̃+1, .., 2p̃-1} and x ∈ B(r_p̃)\{0}.
Definition 2.11. Let p̃ ≥ 2 be the smallest number such that ‖A^p‖ < 1 for any p ≥ p̃ (see Proposition 2.6). Let R̃ > 0 be the largest number such that B(R̃) ⊂ Ω and ‖f^p(x)‖ < ‖x‖ for p ∈ {p̃, p̃+1, .., 2p̃-1} and x ∈ B(R̃)\{0}.
If for any r > 0 we have that B(r) ⊂ Ω and ‖f^p(x)‖ < ‖x‖ for any p ∈ {p̃, p̃+1, .., 2p̃-1} and x ∈ B(r)\{0}, then R̃ = +∞ and B(R̃) = Ω = R^n.
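The two quantities p̃ and R̃ can be approximated numerically. A minimal Python sketch, assuming a planar map f and using a grid/sampling search (the sample counts and the search range r_max are arbitrary choices, and the result is a heuristic estimate, not a certified bound):

import numpy as np

def smallest_contraction_power(A, p_max=100):
    # smallest p with ||A^p|| < 1 (spectral norm); assumes rho(A) < 1
    Ap = np.eye(A.shape[0])
    for p in range(1, p_max + 1):
        Ap = Ap @ A
        if np.linalg.norm(Ap, 2) < 1:
            return p
    raise RuntimeError("no contraction power found up to p_max")

def estimate_R_tilde(f, p_tilde, r_max=2.0, n_r=400, n_dir=360):
    # Scan circles of increasing radius; R~ is approximated by the largest radius
    # below the first circle on which ||f^p(x)|| < ||x|| fails for some
    # p in {p_tilde, ..., 2 p_tilde - 1}.
    angles = np.linspace(0.0, 2 * np.pi, n_dir, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    radii = np.linspace(r_max / n_r, r_max, n_r)
    R = 0.0
    for r in radii:
        for d in dirs:
            x = r * d
            z = x.copy()
            for p in range(1, 2 * p_tilde):
                z = f(z)
                if p >= p_tilde and np.linalg.norm(z) >= r:
                    return R        # property fails on this circle
        R = r
    return R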
Lemma 2.2.
a. For any x ∈ B(R̃) and p ∈ {p̃, p̃+1, .., 2p̃-1}, the sequence (‖f^{kp}(x)‖)_{k∈N} is decreasing.
b. For any p ≥ p̃ and x ∈ B(R̃)\{0}, ‖f^p(x)‖ < ‖x‖.
c. For any p ≥ p̃ and x ∈ B(R̃)\{0}, ΔV_p(x) = V_p(f(x)) - V_p(x) < 0, where V_p is defined by (2.41).
Proof. a. If x = 0, then f^p(0) = 0 for any p ≥ 0.
Let x ∈ B(R̃)\{0}. We know that ‖f^p(x)‖ < ‖x‖ for any p ∈ {p̃, p̃+1, .., 2p̃-1}. It follows that f^p(x) ∈ B(R̃) for any p ∈ {p̃, p̃+1, .., 2p̃-1}. This implies that for any k ∈ N* we have ‖f^{kp}(x)‖ < ‖x‖ and ‖f^{(k+1)p}(x)‖ ≤ ‖f^{kp}(x)‖, meaning that the sequence (‖f^{kp}(x)‖)_{k∈N} is decreasing.
b. Let x ∈ B(R̃)\{0}. The inequality ‖f^p(x)‖ < ‖x‖ is true for any p ∈ {p̃, p̃+1, .., 2p̃-1}.
Let p ≥ 2p̃. There exist q ∈ N* and p' ∈ {p̃, p̃+1, .., 2p̃-1} such that p = qp̃ + p'. Using a., we have that f^{p'}(x) ∈ B(R̃) and ‖f^{qp̃}(y)‖ ≤ ‖y‖ for any y ∈ B(R̃), therefore
‖f^p(x)‖ = ‖f^{qp̃}(f^{p'}(x))‖ ≤ ‖f^{p'}(x)‖ < ‖x‖   (2.54)
c. results directly from b.
Corollary 2.2. For any p ≥ p̃, there exists a maximal domain G_p ⊂ Ω such that 0 ∈ G_p and for any x ∈ G_p\{0}, the (positive definite) function V_p verifies ΔV_p(x) < 0. In other words, for any p ≥ p̃ the function V_p is a Lyapunov function for (2.12) on G_p. Moreover, B(R̃) ⊂ G_p for any p ≥ p̃.
Lemma 2.3. For any k ≥ p̃ there exists q_k ∈ N such that
‖f^{(q_k+3)p̃}(x)‖ ≤ ‖f^k(x)‖ ≤ ‖f^{q_k p̃}(x)‖   for any x ∈ B(R̃)   (2.55)
Proof. Let k ≥ p̃. There exist a unique q_k ∈ N and a unique p_k ∈ {p̃, p̃+1, .., 2p̃-1} such that k = q_k p̃ + p_k. Lemma 2.2 gives that for any x ∈ B(R̃) we have f^{q_k p̃}(x) ∈ B(R̃) and ‖f^{p_k}(x)‖ ≤ ‖x‖. It follows that
‖f^k(x)‖ = ‖f^{p_k}(f^{q_k p̃}(x))‖ ≤ ‖f^{q_k p̃}(x)‖   for any x ∈ B(R̃)   (2.56)
On the other hand, we have (q_k+3)p̃ = k + (3p̃ - p_k). As (3p̃ - p_k) ∈ {p̃+1, p̃+2, .., 2p̃} and k ≥ p̃, Lemma 2.2 gives that for any x ∈ B(R̃) we have f^k(x) ∈ B(R̃) and ‖f^{3p̃-p_k}(x)‖ ≤ ‖x‖. Therefore
‖f^{(q_k+3)p̃}(x)‖ = ‖f^{3p̃-p_k}(f^k(x))‖ ≤ ‖f^k(x)‖   for any x ∈ B(R̃)   (2.57)
Combining the two inequalities, we get that
‖f^{(q_k+3)p̃}(x)‖ ≤ ‖f^k(x)‖ ≤ ‖f^{q_k p̃}(x)‖   for any x ∈ B(R̃)   (2.58)
which concludes the proof.
Theorem 2.9. B(R̃) is included in the region of attraction D_a(0).
Proof. Let x ∈ B(R̃)\{0}. We have to prove that lim_{k→∞} f^k(x) = 0.
The sequence (f^k(x))_{k∈N} is bounded (see Lemma 2.2). Let (f^{k_j}(x))_{j∈N} be a convergent subsequence and let lim_{j→∞} f^{k_j}(x) = y_0.
We suppose, without loss of generality, that k_j ≥ p̃ for any j ∈ N. Lemma 2.3 gives that for any j ∈ N there exists q_j ∈ N such that
‖f^{(q_j+3)p̃}(x)‖ ≤ ‖f^{k_j}(x)‖ ≤ ‖f^{q_j p̃}(x)‖   (2.59)
As (‖f^{q_j p̃}(x)‖)_{j∈N} and (‖f^{(q_j+3)p̃}(x)‖)_{j∈N} are subsequences of the convergent sequence (‖f^{qp̃}(x)‖)_{q∈N} (decreasing, according to Lemma 2.2), they are convergent. The double inequality (2.59) gives that lim_{j→∞} ‖f^{q_j p̃}(x)‖ = ‖y_0‖. Therefore, lim_{q→∞} ‖f^{qp̃}(x)‖ = ‖y_0‖.
It can be shown that
‖f^k(x)‖ ≥ ‖y_0‖   for any k ≥ p̃   (2.60)
Indeed, lim_{q→∞} ‖f^{qp̃}(x)‖ = ‖y_0‖ and (‖f^{qp̃}(x)‖)_{q∈N} is decreasing (Lemma 2.2), which implies that ‖f^{qp̃}(x)‖ ≥ ‖y_0‖ for any q ∈ N. On the other hand, Lemma 2.3 gives that for any k ≥ p̃ there exists q_k such that ‖f^{(q_k+3)p̃}(x)‖ ≤ ‖f^k(x)‖. Therefore, ‖f^k(x)‖ ≥ ‖f^{(q_k+3)p̃}(x)‖ ≥ ‖y_0‖ for any k ≥ p̃.
We show now that y_0 = 0. Suppose the contrary, i.e. y_0 ≠ 0. Inequality (2.60) becomes
‖f^k(x)‖ ≥ ‖y_0‖ > 0   for any k ≥ p̃   (2.61)
By Lemma 2.2, we have ‖f^{p̃}(y_0)‖ < ‖y_0‖.
There exists a neighborhood U_{f^{p̃}(y_0)} ⊂ B(R̃) of f^{p̃}(y_0) such that for any z ∈ U_{f^{p̃}(y_0)} we have ‖z‖ < ‖y_0‖. On the other hand, for the neighborhood U_{f^{p̃}(y_0)} there exists a neighborhood U_{y_0} ⊂ B(R̃) of y_0 such that for any y ∈ U_{y_0} we have f^{p̃}(y) ∈ U_{f^{p̃}(y_0)}. Therefore:
‖f^{p̃}(y)‖ < ‖y_0‖   for any y ∈ U_{y_0}   (2.62)
As f^{k_j}(x) → y_0, there exists j̄ such that f^{k_j}(x) ∈ U_{y_0} for any j ≥ j̄. Taking y = f^{k_j}(x) in (2.62), it follows that
‖f^{k_j+p̃}(x)‖ = ‖f^{p̃}(f^{k_j}(x))‖ < ‖y_0‖   for j ≥ j̄   (2.63)
which contradicts (2.61). This means that y_0 = 0; consequently, every convergent subsequence of (f^k(x))_{k∈N} converges to 0. This shows that the sequence (f^k(x))_{k∈N} converges to 0, and x ∈ D_a(0).
Therefore, the ball B(R̃) is contained in the region of attraction D_a(0).
Theorem 2.10. Let p ≥ 0. For any c ∈ (0, (p+1)R̃²], the set N_p^c is included in the region of attraction D_a(0).
Proof. Let c ∈ (0, (p+1)R̃²] and x ∈ N_p^c. Then V_p(x) = Σ_{k=0}^{p} ‖f^k(x)‖² < c ≤ (p+1)R̃², therefore there exists k ∈ {0, 1, .., p} such that ‖f^k(x)‖ < R̃. It follows that f^k(x) ∈ B(R̃) ⊂ D_a(0), therefore x ∈ D_a(0).
Remark 2.9. It is obvious that for p ≥ 0 and 0 < c' < c'' one has N_p^{c'} ⊂ N_p^{c''}. Therefore, for a given p ≥ 0, the largest part of D_a(0) which can be found by this method is N_p^{c̃_p}, where c̃_p = (p+1)R̃². In what follows, we will use the notation Ñ_p instead of N_p^{c̃_p}. Shortly, Ñ_p = {x ∈ Ω : V_p(x) < (p+1)R̃²} is a part of D_a(0). Let us note that Ñ_0 = B(R̃).
Remark 2.10. If R̃ = +∞ (i.e. Ω = R^n and ‖f^p(x)‖ < ‖x‖ for any p ∈ {p̃, p̃+1, .., 2p̃-1} and x ∈ R^n\{0}), then Ñ_p = R^n for any p ≥ 0 and D_a(0) = R^n.
Theorem 2.11. For any x ∈ D_a(0) there exists p_x ≥ 0 such that x ∈ Ñ_{p_x}.
Proof. Let x ∈ D_a(0). Suppose the contrary, i.e. x ∉ Ñ_p for any p ≥ 0. Therefore, V_p(x) ≥ (p+1)R̃² for any p ≥ 0. Passing to the limit as p → ∞ in this inequality gives V(x) = ∞, a contradiction. In conclusion, there exists p_x ≥ 0 such that x ∈ Ñ_{p_x}.
Open question: Is the sequence of sets (Ñ_p)_{p≥p̃} increasing?
For p ≥ 0, let M̃_p = f^{-p}(B(R̃)) = {x ∈ Ω : f^p(x) ∈ B(R̃)} be the pre-image of B(R̃) under f^p.
Theorem 2.12. For the sets (M̃_p)_{p∈N} the following properties hold:
a. M̃_p ⊂ D_a(0) for any p ≥ 0;
b. M̃_{kp} ⊂ M̃_{(k+1)p} for any k ∈ N and p ∈ {p̃, p̃+1, .., 2p̃-1};
c. For any x ∈ D_a(0) there exists p_x ≥ 0 such that x ∈ M̃_{p_x}.
Proof. a. As M̃_p = f^{-p}(B(R̃)) and B(R̃) ⊂ D_a(0) (see Theorem 2.9), it is clear that M̃_p ⊂ D_a(0).
b. follows easily by induction, using Lemma 2.2.
c. x ∈ D_a(0) implies that f^p(x) → 0 as p → ∞. Therefore, there exists p_x ≥ 0 such that f^p(x) ∈ B(R̃) for any p ≥ p_x. This shows that x ∈ M̃_p for any p ≥ p_x.
Remark 2.11. The sequence of sets (M̃_p)_{p∈N} is in general not increasing (see the Van der Pol system below).
Both sequences of sets (M̃_p)_{p∈N} and (Ñ_p)_{p∈N} consist of estimates of D_a(0). From the practical point of view, it would be important to know which of the sets M̃_p or Ñ_p is a larger estimate of D_a(0) for a fixed p ≥ p̃. Such a result could not be established, but the following theorem holds:
Theorem 2.13. For any p ≥ 0 one has Ñ_p ⊂ M̃_{p+p̃}.
Proof. Let p ≥ 0 and x ∈ Ñ_p. We have V_p(x) = Σ_{k=0}^{p} ‖f^k(x)‖² < (p+1)R̃², therefore there exists k ∈ {0, 1, .., p} such that ‖f^k(x)‖ < R̃. This implies that f^{k+m}(x) ∈ B(R̃) for any m ≥ p̃. For m = p - k + p̃ we obtain f^{p+p̃}(x) ∈ B(R̃), meaning that x ∈ M̃_{p+p̃}.
Example 2.12. In the case of system (2.38), as ‖Df(0,0)‖ = 1, we compute the smallest integer p̃ such that ‖(Df(0,0))^p‖ < 1 for any p ≥ p̃, and we find p̃ = 2.
The largest number R̃ > 0 such that ‖f^p(x)‖ < ‖x‖ for p ∈ {p̃, p̃+1, .., 2p̃-1} = {2, 3} and x ∈ B(R̃)\{0} is R̃ = 0.748.
For p = 1,…,7 we find the Ñ_p sets shown in Figure 2.8.1, parts of D_a(0,0) (Ñ_p ⊂ Ñ_{p+1} for p ≥ 1).
In Figure 2.8.2, the sets M̃_p are represented for p = 1,…,7 (M̃_p ⊂ M̃_{p+1} for p ≥ 0). Note that M̃_7 approximates the region of attraction with good accuracy.
[Figure 2.8.1: The sets Ñ_p, p = 1,…,7, and ∂D_a(0,0) for (2.38). Figure 2.8.2: The sets M̃_p, p = 1,…,7, for (2.38).]
Remarks on a general method for finding a sphere included in the region of attraction
Let us consider the discrete semi-dynamical system (2.12), where the function f : Ω ⊂ R^n → R^n is R-analytic and f(0) = 0. We suppose that the steady state x = 0 is asymptotically stable.
Proposition 2.7. Let p ≥ 2. The region of attraction D_a(0) of the asymptotically stable steady state x = 0 of the system (2.12) coincides with the region of attraction D_a^p(0) of the asymptotically stable steady state y = 0 of the discrete semi-dynamical system
y_{k+1} = f^p(y_k)   ∀k ∈ N   (2.64)
Proof. We first prove that the steady state y = 0 of (2.64) is asymptotically stable.
As x = 0 is stable for (2.12), for any ε > 0 there exists δ > 0 such that for any ‖x‖ < δ we have ‖f^k(x)‖ < ε for any k ≥ 0. Therefore, for any ‖y‖ < δ we have ‖f^{kp}(y)‖ < ε for any k ≥ 0, which means that y = 0 is stable for (2.64).
As x = 0 is attractive for (2.12), there exists r > 0 such that lim_{k→∞} f^k(x) = 0 for any ‖x‖ < r. Therefore, lim_{k→∞} f^{kp}(y) = 0 for any ‖y‖ < r, which means that y = 0 is attractive for (2.64).
In order to prove that D_a(0) ⊂ D_a^p(0), let x ∈ D_a(0). It follows that lim_{k→∞} f^k(x) = 0, therefore lim_{k→∞} f^{kp}(x) = 0, which means that x ∈ D_a^p(0).
For the inverse inclusion, let x ∈ D_a^p(0), hence lim_{k→∞} f^{kp}(x) = 0. Therefore, there exists k_0 ≥ 0 such that ‖f^{k_0 p}(x)‖ < r. It follows that f^{k_0 p}(x) ∈ D_a(0), hence x ∈ D_a(0).
Theorem 2.14. If there exist p̄ ≥ 1 and R̄ > 0 such that ‖f^{p̄}(x)‖ < ‖x‖ for any x ∈ B(R̄)\{0}, then B(R̄) is included in the region of attraction D_a(0).
Proof. Direct consequence of Propositions 2.7 and 2.4.
Remark 2.12. If ρ(A) < 1, there exists p̄ ≥ 1 such that ‖A^{p̄}‖ < 1. Therefore, there exists R̄ > 0 such that ‖f^{p̄}(x)‖ < ‖x‖ for any x ∈ B(R̄)\{0} (see Proposition 2.6), and due to Theorem 2.14 we have that B(R̄) ⊂ D_a(0).
Defining N_p = {x ∈ Ω : V_p(x) < (p+1)R̄²}, where V_p(x) = Σ_{k=0}^{p} ‖f^k(x)‖², and M_p = f^{-p}(B(R̄)) = {x ∈ Ω : f^p(x) ∈ B(R̄)}, it follows that N_p ⊂ D_a(0) and M_p ⊂ D_a(0) for any p ≥ 0.
Example 2.13. Discrete Van der Pol system. Consider the following discrete semi-dynamical system, obtained from the continuous Van der Pol system:
x_{k+1} = x_k - y_k
y_{k+1} = x_k + (1 - a)y_k + a x_k² y_k
with a = 2, k ∈ N   (2.65)
The only steady state of this system is (0,0), which is asymptotically stable. There are many periodic points for this system; the periodic points of order 2,…,5 are represented in Figure 2.9.1 by the black points.
We have that ‖Df(0,0)‖ = 2 but ρ(Df(0,0)) = 0. We find p̄ = 2.
The largest number R̄ > 0 such that ‖f^{p̄}(x)‖ < ‖x‖ for x ∈ B(R̄)\{0} is R̄ = 0.411.
For p = 1,…,5, the connected components of the N_p sets which contain (0,0) are shown in Figure 2.9.1. We have that N_1 ⊂ N_2 ⊂ N_3 ⊂ N_4 ⊂ N_5.
In Figure 2.9.2, the sets M_p are represented for p = 1,…,7. Note that the inclusions M_p ⊂ M_{p+1} do not hold. Remark also that, since the periodic points of the system are very close to the boundary of the set M_7, the set M_7 is a good approximation of the region of attraction.
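The low-order periodic points mentioned above can be located numerically. A possible Python sketch (the grid of starting guesses, the search box and the tolerances are arbitrary choices; the root list also contains the fixed point (0,0) itself):

import numpy as np
from scipy.optimize import fsolve

A_PARAM = 2.0

def f(z):
    # the map of system (2.65)
    x, y = z
    return np.array([x - y, x + (1.0 - A_PARAM) * y + A_PARAM * x**2 * y])

def iterate(z, m):
    for _ in range(m):
        z = f(z)
    return z

def periodic_points(m, box=1.2, n_seeds=15, tol=1e-9):
    # crude search for solutions of f^m(z) = z: run fsolve from a grid of seeds
    # and keep the distinct converged roots
    found = []
    seeds = np.linspace(-box, box, n_seeds)
    for x0 in seeds:
        for y0 in seeds:
            root, info, ier, _ = fsolve(lambda z: iterate(z, m) - z,
                                        [x0, y0], full_output=True)
            if ier != 1 or np.linalg.norm(iterate(root, m) - root) > tol:
                continue
            if all(np.linalg.norm(root - p) > 1e-6 for p in found):
                found.append(root)
    return found

for m in range(2, 6):
    print(m, len(periodic_points(m)))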
[Figure 2.9.1: The sets N_p, p = 1,…,5, for (2.65). Figure 2.9.2: The sets M_p, p = 1,…,7, for (2.65).]
2.3 Methods for determining the region of attraction with
weak asymptotic stability conditions, using Lyapunov
functions
Consider the following discrete semi-dynamical system:
x_{k+1} = f(x_k)   ∀k ∈ N   (2.66)
where f : R^n → R^n is an R-analytic function on R^n with f(0) = 0 and ρ(Df(0)) = 1. Suppose that x = 0 is a weakly asymptotically stable steady state of (2.66). In this section, we study the computation of the region of attraction of x = 0 under this hypothesis.
2.3.1 The P(q) property for maps
Definition 2.12. Let q ∈ N*. We say that a map f : B(δ) ⊂ R^n → B(δ) has the P(q) property if and only if there exist δ' ∈ (0, δ) and c > 0 such that, for any ‖x‖ < δ', there exists k_x ≥ 0 such that
‖f^k(x)‖ ≤ c / (k+1)^{1/(2q)}   ∀k ≥ k_x   (2.67)
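The P(q) decay rate can be inspected numerically for a concrete map. A small Python sketch, using the hypothetical scalar map f(x) = x - x³ (not one of the thesis examples) for which q = 1 is expected; the iteration counts are arbitrary:

def f(x):
    # sample scalar map with f'(0) = 1 and leading nonlinear term -x^3
    return x - x**3

def decay_constant(x0, q, n_iter=5000, burn_in=50):
    # empirical check of the bound |f^k(x0)| <= c / (k+1)^(1/(2q)):
    # returns the largest value of |f^k(x0)| * (k+1)^(1/(2q)) after a burn-in,
    # which should stay bounded if the P(q) property holds along this orbit
    x, worst = x0, 0.0
    for k in range(n_iter):
        if k >= burn_in:
            worst = max(worst, abs(x) * (k + 1) ** (1.0 / (2 * q)))
        x = f(x)
    return worst

print(decay_constant(0.4, q=1))   # stays bounded
print(decay_constant(0.4, q=2))   # also bounded (a weaker requirement)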
In what follows, we prove four lemmas which will be useful for the results to come.
Lemma 2.4. Let U, V ⊂ R^n be two neighborhoods of the origin, f : U → U a continuous map with f(0) = 0, and Φ : V → U a homeomorphism with Φ(V) = U and Φ(y) = O(‖y‖). If the map g = Φ^{-1} ∘ f ∘ Φ has the P(q) property for some q ∈ N*, then the map f has the P(q) property as well.
Proof. As Φ(y) = O(‖y‖), there exist M > 0 and δ > 0 such that B(δ) ⊂ V and ‖Φ(y)‖ ≤ M‖y‖ for any ‖y‖ < δ.
As the map g has the P(q) property, there exist δ' > 0 and c > 0 such that for any ‖y‖ < δ' there exists k_y > 0 such that
‖g^k(y)‖ ≤ c / (k+1)^{1/(2q)}   ∀k ≥ k_y
As Φ^{-1} is continuous, there exists δ'' > 0 such that for any ‖x‖ < δ'' one has ‖Φ^{-1}(x)‖ < δ'.
Let ‖x‖ < δ''. Then y = Φ^{-1}(x) lies in the ball B(δ'). Hence, there exists k_x > 0 such that
‖g^k(y)‖ ≤ c / (k+1)^{1/(2q)}   ∀k ≥ k_x
hence ‖g^k(y)‖ → 0 as k → ∞. This implies that there exists k_x' ≥ k_x such that ‖g^k(y)‖ < δ for any k ≥ k_x'. Therefore:
‖f^k(x)‖ = ‖Φ(g^k(y))‖ ≤ M‖g^k(y)‖ ≤ Mc / (k+1)^{1/(2q)}   ∀k ≥ k_x'
meaning that the map f has the P(q) property.
Lemma 2.5. Let h_q : (0,1) → (0,1) be defined by h_q(y) = y(1 - y^{2q}) and let β_q = (1/(2q+1))^{1/(2q)} be its maximum point. Then
h_q^k(β_q) ≤ (1/(2q+k+1))^{1/(2q)}   ∀k ∈ N   (2.68)
where h_q^k denotes the k-fold composition of the function h_q.
Proof. The function h_q is strictly increasing on (0, β_q) and (0, β_q) is invariant under h_q. By mathematical induction, one can show that h_q^k(β_q) ≤ (1/(2q+k+1))^{1/(2q)} for any k ∈ N.
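Inequality (2.68) is easy to confirm numerically. A quick Python sanity check (the iteration count 2000 is arbitrary):

def h(y, q):
    return y * (1.0 - y ** (2 * q))

for q in (1, 2, 3):
    beta = (1.0 / (2 * q + 1)) ** (1.0 / (2 * q))   # maximum point of h_q
    y = beta
    for k in range(2000):
        bound = (1.0 / (2 * q + k + 1)) ** (1.0 / (2 * q))
        assert y <= bound + 1e-15, (q, k)           # inequality (2.68)
        y = h(y, q)
print("inequality (2.68) verified numerically")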
Lemma 2.6. Consider the following discrete semi-dynamical system:
x_{k+1} = f(x_k)   ∀k ∈ N   (2.69)
where f : [-δ, δ] → [-δ, δ] is a function of class C^{p+1}, p ≥ 2, with f'(0) = 1, f(0) = f^{(k)}(0) = 0 for k = 2,…,p-1, and f^{(p)}(0) ≠ 0. The following hold:
i. There exists a continuous function g : [-δ, δ] → R with g(0) = 1 such that
f(x) = x + a_p x^p g(x)   for any x ∈ [-δ, δ]   (2.70)
where a_p = f^{(p)}(0)/p! ≠ 0.
ii. The null solution of (2.69) is asymptotically stable if and only if p is odd and f^{(p)}(0) < 0.
iii. If the null solution of (2.69) is asymptotically stable, then the map f has the P(q) property, where p = 2q+1.
Proof. i. Let g : [-δ, δ] → R be given by
g(x) = (f(x) - x)/(a_p x^p)   if x ≠ 0,   g(0) = 1.
Taylor's formula of order p for f at x = 0 shows that g is continuous.
ii. Necessity. Suppose that the steady state x = 0 of (2.69) is asymptotically stable and suppose that p is even. As g is continuous and g(0) = 1, there exists δ > 0 such that a_p² x^p g(x) ≥ 0 for any |x| < δ. Therefore, a_p f(x) ≥ a_p x for any |x| < δ. As x = 0 is asymptotically stable, there exists δ' < δ such that for any |x| < δ' and k ∈ N one has |f^k(x)| < δ and lim_{k→∞} f^k(x) = 0. Using the relation above, one obtains that a_p f^k(x) ≥ a_p x for any |x| < δ' and k ∈ N. Passing to the limit as k → ∞ in this inequality, one gets that a_p x ≤ 0 for any |x| < δ', which is absurd.
Therefore, p must be odd, and we write p = 2q+1 and a_p = a_{2q+1}. Consider the function V(x) = x². The function V verifies
V(f(x)) - V(x) = 2a_{2q+1} x^{2q+2} + O(|x|^{2q+3})   (2.71)
Hence, x = 0 is unstable if a_{2q+1} > 0 and asymptotically stable if a_{2q+1} < 0.
Sufficiency. If p = 2q+1 and a_p = a_{2q+1} < 0, then the function V(x) = x² satisfies
V(f(x)) - V(x) = 2a_{2q+1} x^{2q+2} + O(|x|^{2q+3})   (2.72)
which is negative definite in a neighborhood of the origin, therefore x = 0 is asymptotically stable.
iii. Suppose that the null solution of (2.69) is asymptotically stable and let α = -a_{2q+1} > 0.
Let us introduce the map f̃ defined by f̃(y) = c_q^{-1} f(c_q y), where c_q = (2/α)^{1/(2q)}. One can show that f̃(y) = y - 2y^{2q+1} g̃(y), where g̃ is defined by g̃(y) = g(c_q y), and therefore it is continuous and satisfies g̃(0) = 1. It follows that there exists δ' ∈ (0, β_q) ∩ (0, c_q^{-1}δ) such that |g̃(y) - 1| < 1/2 for any |y| < δ' (we recall that β_q = (1/(2q+1))^{1/(2q)} is the maximum point of the function h_q considered in Lemma 2.5). If |y| < δ', it follows that 1/2 < g̃(y) < 3/2, therefore y^{2q} < 2y^{2q} g̃(y) < 3y^{2q} < 3(δ')^{2q} < 3β_q^{2q} = 3/(2q+1) ≤ 1. We obtain that 0 < 1 - 2y^{2q} g̃(y) < 1 - y^{2q} for any |y| < δ'. Hence, we have that
|f̃(y)| = |y|(1 - 2y^{2q} g̃(y)) < |y|(1 - y^{2q}) = h_q(|y|) < h_q(β_q)   ∀ 0 < |y| < δ'   (2.73)
It also follows that the interval (-δ', δ') is invariant under f̃. Using the fact that the function h_q is strictly increasing on the interval (0, β_q), one can prove by mathematical induction that
|f̃^k(y)| < h_q^k(|y|) < h_q^k(β_q)   ∀ 0 < |y| < δ' and k ∈ N*   (2.74)
As f̃^k(y) = c_q^{-1} f^k(c_q y), it follows that f^k(x) = c_q f̃^k(c_q^{-1} x) for any k ∈ N. Choosing δ'' = δ' c_q < δ and using Lemma 2.5, one obtains
|f^k(x)| < c_q h_q^k(β_q) ≤ c_q / (2q+k+1)^{1/(2q)} ≤ c_q / (k+1)^{1/(2q)}   ∀|x| < δ'' and k ∈ N   (2.75)
Lemma 2.7. Consider the following discrete semi-dynamical system:
x_{k+1} = f(x_k)   ∀k ∈ N   (2.76)
where f : [-δ, δ] → [-δ, δ] is smooth with f(0) = 0 and f'(0) = -1 (hence (f∘f)'(0) = 1).
i. The null solution of (2.76) is asymptotically stable if and only if y = 0 is asymptotically stable for
y_{k+1} = f²(y_k)   ∀k ∈ N   (2.77)
where f² = f∘f.
ii. The map f has the P(q) property if and only if f² has the P(q) property.
Proof. i. The necessity is obvious; it remains to prove the sufficiency. Suppose that y = 0 is asymptotically stable for (2.77). Then for any ε > 0 there exists δ > 0 such that for any |y| < δ and k ∈ N one has |f^{2k}(y)| < ε and lim_{k→∞} f^{2k}(y) = 0. As f is continuous, there exists δ' < δ such that for any |y| < δ' one has |f(y)| < δ.
Therefore, for any |x| < δ' and k ∈ N one has, on the one hand,
|f^{2k}(x)| < ε and lim_{k→∞} f^{2k}(x) = 0   (2.78)
and on the other hand
|f^{2k+1}(x)| < ε and lim_{k→∞} f^{2k+1}(x) = 0   (2.79)
This proves that for any |x| < δ' and k ∈ N one has |f^k(x)| < ε and lim_{k→∞} f^k(x) = 0, therefore x = 0 is asymptotically stable for (2.76).
ii. Again, the necessity is obvious; it remains to prove the sufficiency. As the map f² has the P(q) property, there exist c > 0 and δ' ∈ (0, δ) such that
|f^{2k}(x)| < c / (k+1)^{1/(2q)}   ∀|x| < δ' and k ≥ k_0 = k_0(x)   (2.80)
As f is continuous, there exists δ̄ ≤ δ' such that for any |x| < δ̄ one has |f(x)| < δ'. Therefore, for any |x| < δ̄ one has
|f^{2k}(x)| < c / (k+1)^{1/(2q)} = c·2^{1/(2q)} / (2k+2)^{1/(2q)} < c·2^{1/(2q)} / (2k+1)^{1/(2q)}   ∀k ≥ k_0   (2.81)
and
|f^{2k+1}(x)| = |f^{2k}(f(x))| < c / (k+1)^{1/(2q)} = c·2^{1/(2q)} / (2k+2)^{1/(2q)}   ∀|x| < δ̄ and k ≥ k_0   (2.82)
Choosing c̄ = c·2^{1/(2q)}, one gets
|f^k(x)| < c̄ / (k+1)^{1/(2q)}   for any |x| < δ̄ and all sufficiently large k   (2.83)
which shows that f has the P(q) property.
2.3.2 The region of attraction of maps with the P(q) property
Theorem 2.15. If the null solution of (2.66) is weakly asymptotically stable and if there exists q ∈ N* such that f has the P(q) property, then the region of attraction D_a(0) of the zero solution of equation (2.66) coincides with the natural domain of analyticity of the R-analytic function V defined by
V(f(x)) - V(x) = -‖x‖^{2q+2},   V(0) = 0   (2.84)
The function V is strictly positive on D_a(0)\{0} and V(x) → ∞ as x → y, y ∈ ∂D_a(0), or as ‖x‖ → ∞.
The function V is given by
V(x) = Σ_{k=0}^{∞} ‖f^k(x)‖^{2q+2}   for any x ∈ D_a(0)   (2.85)
Proof. First, we prove that there exists a unique analytic function V which satisfies (2.84). Assume that there exist two analytic functions V_1 and V_2 satisfying (2.84) and let V = V_1 - V_2. Then V is analytic on D_a(0) and satisfies V(f(x)) = V(x) on D_a(0). Therefore, V(f^k(x)) = V(x) for any x ∈ D_a(0) and any k ∈ N. Letting k → ∞, it follows that V = 0 on D_a(0), therefore V_1 = V_2.
As the map f has the P(q) property, there exist δ > 0 and c > 0 such that, for any ‖x‖ < δ, there exists k_x ≥ 0 such that
‖f^k(x)‖ ≤ c / (k+1)^{1/(2q)}   ∀k ≥ k_x   (2.86)
Let us define V at the point x ∈ D_a(0) by the formula
V(x) = Σ_{k=0}^{∞} ‖f^k(x)‖^{2q+2}   (2.87)
Let us prove that this definition is correct, i.e. that the series Σ_{k=0}^{∞} ‖f^k(x)‖^{2q+2} is convergent.
Let x_0 ∈ D_a(0). Then there exists k_1 ∈ N such that ‖f^{k_1}(x_0)‖ < δ. From the P(q) property of the map f, it follows that there exists k_0 ≥ k_1 such that
‖f^k(x_0)‖ ≤ c / (k+1)^{1/(2q)}   ∀k ≥ k_0   (2.88)
As the series Σ_{k=k_0}^{∞} (1/(k+1))^{1+1/q} is convergent, we obtain that Σ_{k=0}^{∞} ‖f^k(x_0)‖^{2q+2} is convergent, therefore the function V is correctly defined.
The function V is R-analytic, positive and satisfies (2.84).
The function V is R-analytic, positive and satisfies (2.84).
In order to show that lim V (x) = ¥ for any y Î ¶Da (0), we consider y Î ¶Da (0) and r > 0 such
x®y
that ü f (y)ü > r, for any k Î N. Let be M > 0 and kM = [ 2r2q+2M ] Î N ([a] denotes the integer part
of a). There exists ∆M > 0 such that ü f k (x)ü ³ 2r for any k Î {0, 1, .., kM } and any x Î B(y, ∆M ).
For any x Î Da (0) with üx - yü < ∆M we have
2q+2
k
¥
V (x) = â ü f (x)ü
k
kM
2q+2
³ â ü f (x)ü
k=0
k
k=0
2q+2
r2q+2
³ (kM + 1) 2q+2 ³ M
2
(2.89)
Therefore, for any M > 0 there exists ∆M > 0 such that for any x Î Da (0) with üx - yü < ∆M , we
have V (x) ³ M. In conclusion, lim V (x) = ¥ for any y Î ¶Da (0).
x®y
In a similar way, it can be proved that lim V (x) = ¥ for üxü ® ¥.
x®y
Remark 2.13. Suppose that the dimension of the system (2.66) is n = 1, i.e. we consider the following discrete semi-dynamical system:
x_{k+1} = f(x_k)   ∀k ∈ N   (2.90)
where f : R → R is an R-analytic function on R with f(0) = 0 and |f'(0)| = 1.
Lemmas 2.6 and 2.7 show that if the null solution of (2.90) is asymptotically stable, then the map f has the P(q) property for some q ∈ N*. Therefore, the result of Theorem 2.15 holds. Lemmas 2.6 and 2.7 also provide necessary and sufficient conditions for the asymptotic stability of the null solution of (2.90).
Example 2.14. Consider the following discrete semi-dynamical system in R:
x_{k+1} = x_k / √(1 + x_k²)   ∀k ∈ N   (2.91)
We consider the function f : R → R defined by f(x) = x/√(1+x²). We have f'(0) = 1, f^{(2)}(0) = 0 and f^{(3)}(0) = -3 < 0. Lemma 2.6 shows that the null solution of (2.91) is asymptotically stable and that the map f has the P(1) property. One can prove by mathematical induction that f^k(x) = x/√(1+kx²) for any k ∈ N, therefore the steady state x = 0 is globally asymptotically stable.
In order to construct the optimal Lyapunov function, we use relation (2.85) with q = 1:
V(x) = Σ_{k=0}^{∞} f^k(x)^4 = Σ_{k=0}^{∞} x^4/(1+kx²)² = ψ_1(1/x²)   (2.92)
where ψ_1 is the trigamma function, ψ_1(z) = d²/dz² ln Γ(z). As the natural domain of analyticity of the function V is R, it follows that D_a(0) = R.
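The closed form (2.92) can be checked numerically against a truncation of the series. A short Python sketch (the truncation at 200000 terms and the sample points are arbitrary choices):

import numpy as np
from scipy.special import polygamma

def V_partial(x, terms=200000):
    k = np.arange(terms)
    return np.sum(x**4 / (1.0 + k * x**2) ** 2)   # truncated series (2.92)

for x in (0.3, 1.0, 2.5):
    series = V_partial(x)
    closed = polygamma(1, 1.0 / x**2)             # trigamma psi_1(1/x^2)
    print(x, series, closed)

The two columns agree up to the truncation error of the series.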
Remark 2.14. Suppose that the function f of (2.66) has the P(q) property. If there exist p̄ ≥ 1 and R̄ > 0 such that ‖f^{p̄}(x)‖ < ‖x‖ for any x ∈ B(R̄)\{0}, then, due to Theorem 2.14, we have that B(R̄) ⊂ D_a(0).
Defining
N_p = {x ∈ R^n : V_p(x) < (p+1)R̄^{2q+2}}   where V_p(x) = Σ_{k=0}^{p} ‖f^k(x)‖^{2q+2}
and
M_p = f^{-p}(B(R̄)) = {x ∈ R^n : f^p(x) ∈ B(R̄)}
where M_p is the pre-image of the set B(R̄) under the function f^p, it follows that N_p ⊂ D_a(0) and M_p ⊂ D_a(0) for any p ≥ 0.
2.3.3 Center manifold theory
In what follows, consider the following discrete semi-dynamical system:
x_{k+1} = A^c x_k + F^c(x_k, y_k)
y_{k+1} = A^s y_k + F^s(x_k, y_k)   (2.93)
where
1. the matrix A^c ∈ M_m(R) has all its eigenvalues on the unit circle, m ≥ 1;
2. the matrix A^s ∈ M_{n-m}(R) has all its eigenvalues inside the unit circle;
3. the function F^c : R^n → R^m is R-analytic and F^c(x, y) = O(‖(x, y)‖²);
4. the function F^s : R^n → R^{n-m} is R-analytic and F^s(x, y) = O(‖(x, y)‖²).
We denote by F : R^n → R^n the R-analytic function defined by
(x, y) ↦ F(x, y) = (A^c x + F^c(x, y), A^s y + F^s(x, y))   (2.94)
In what follows: A = DF(0,0) = diag(A^c, A^s), B = D²F(0,0) and C = D³F(0,0).
The system (2.93) has the steady state (x, y) = (0, 0).
Theorem 2.16. (see [Car81, Kuz98, Wig03])
(a) Center manifold theorem for analytic maps. Let p ∈ N. There exists a locally defined m-dimensional C^p manifold W^c(0,0), called a local center manifold of (x, y) = (0, 0), which is tangent to the center subspace T^c at (x, y) = (0, 0) and invariant with respect to (2.93). It can be represented locally as W^c(0,0) = {(x, h(x)) : ‖x‖ < δ}, where h : B(δ) ⊂ R^m → R^{n-m} is a C^p function with h(x) = O(‖x‖²).
(b) The steady state (x, y) = (0, 0) of (2.93) is stable / asymptotically stable / unstable if and only if the steady state x = 0 is stable / asymptotically stable / unstable for the restriction of (2.93) to the center manifold W^c(0,0):
x_{k+1} = A^c x_k + F^c(x_k, h(x_k))   ∀k ∈ N   (2.95)
If x = 0 is stable for (2.95), there exist constants r > 0 and β ∈ (0, 1) such that for any solution (x_k, y_k) of (2.93) with ‖(x_0, y_0)‖ < r there exist a solution x^c_k of (2.95) and a constant M > 0 such that
‖x_k - x^c_k‖ ≤ Mβ^k   ∀k ≥ 0   (2.96)
‖y_k - h(x^c_k)‖ ≤ Mβ^k   ∀k ≥ 0   (2.97)
To be able to apply Theorem 2.15 in order to compute the region of attraction D_a(0,0) of the asymptotically stable null solution of (2.93), we need to know whether the map F of the system (2.93) has the P(q) property. The next corollary shows that, in order to check this, it is enough to verify that the map of the restriction of (2.93) to one of its center manifolds has the P(q) property.
Corollary 2.3. Suppose that there exists q ∈ N* such that the map x ↦ A^c x + F^c(x, h(x)) of (2.95) has the P(q) property and that the null solution of (2.95) is stable. Then the map (x, y) ↦ F(x, y) of (2.93) has the P(q) property as well.
Proof. Let h : B(δ) ⊂ R^m → R^{n-m} be the function provided by Theorem 2.16(a). As h(x) = O(‖x‖²), there exist M' > 0 and δ' ∈ (0, δ) such that ‖h(x)‖ < M'‖x‖² for ‖x‖ < δ'.
The P(q) property of the map x ↦ A^c x + F^c(x, h(x)) gives that there exist δ'' ∈ (0, δ') and c > 0 such that, for any ‖x‖ < δ'', there exists k_x ≥ 0 such that the solution x^c_k of (2.95) with x^c_0 = x satisfies
‖x^c_k‖ ≤ c / (k+1)^{1/(2q)}   ∀k ≥ k_x
The above P(q) property implies the asymptotic stability of the null solution of (2.95), and hence the asymptotic stability of the null solution of (2.93). Let r > 0 and β ∈ (0, 1) be the constants given by Theorem 2.16(b).
Let z = (x, y) belong to the region of attraction of the null solution of (2.93), with ‖z‖ < r. Denote by z_k = (x_k, y_k) the solution of (2.93) with z_0 = z. According to Theorem 2.16(b), there exist a solution x^c_k of (2.95) and M > 0 such that
‖x_k - x^c_k‖ ≤ Mβ^k   ∀k ≥ 0
‖y_k - h(x^c_k)‖ ≤ Mβ^k   ∀k ≥ 0
As z is in the region of attraction of the null solution of (2.93), it is obvious that x_k → 0 as k → ∞. Hence, the first of the above inequalities gives that x^c_k → 0 as k → ∞. Therefore, there exists k_0 > 0 such that ‖x^c_k‖ < δ'' and ‖h(x^c_k)‖ ≤ M'‖x^c_k‖² for any k ≥ k_0. By the P(q) property of the map x ↦ A^c x + F^c(x, h(x)), we obtain that there exists k_1 ≥ k_0 such that
‖x_k‖ ≤ c / (k+1)^{1/(2q)} + Mβ^k   ∀k ≥ k_1
‖y_k‖ ≤ M'c² / (k+1)^{1/q} + Mβ^k   ∀k ≥ k_1
It is obvious that there exists k_2 ≥ k_1 such that Mβ^k ≤ (1+k)^{-1} for k ≥ k_2. Therefore, we have that
‖x_k‖ ≤ c(k+1)^{-1/(2q)} + (1+k)^{-1} ≤ (1+c)(k+1)^{-1/(2q)}   ∀k ≥ k_2
‖y_k‖ ≤ M'c²(k+1)^{-1/q} + (1+k)^{-1} ≤ (M'c²+1)(k+1)^{-1/q}   ∀k ≥ k_2
We finally obtain that
‖z_k‖ ≤ c' / (k+1)^{1/(2q)}   ∀k ≥ k_2
where c' = √((c+1)² + (M'c²+1)²). Choosing δ''' > 0 such that B(δ''') ⊂ D_a(0,0) ∩ B(r), we obtain that the map F of (2.93) has the P(q) property (with the constants δ''' > 0 and c' > 0).
2.3.4 Weak asymptotic stability and regions of attraction for codimension 1 singularities
In what follows, we consider that the steady state (x, y) = (0, 0) of (2.93) is a codimension 1 singularity: a fold, flip or Neimark-Sacker point [GH83, Kuz98]. We will see under which conditions (0, 0) is weakly asymptotically stable, and whether it is possible to apply the previous theoretical results in order to evaluate its region of attraction.
Fold
The steady state (x, y) = (0, 0) of (2.93) is a fold if the Jacobian matrix A has a simple eigenvalue λ_1 = 1 and no other eigenvalues on the unit circle (i.e. m = 1 and A^c = 1), and if
a = (1/2)⟨z_1, B(z_2, z_2)⟩ ≠ 0   (2.98)
where Az_2 = z_2, A^T z_1 = z_1 and ⟨z_1, z_2⟩ = 1. We recall that B = D²F(0,0).
In this case, the restriction of (2.93) to the one-dimensional W^c(0,0) (which is considered, in this case, at least of class C³ in a neighborhood of the origin) has the form
x_{k+1} = x_k + ax_k² + O(x_k³)   ∀k ∈ N   (2.99)
As a ≠ 0, Lemma 2.6 shows that in this case 0 is not asymptotically stable for (2.99). From Theorem 2.16 it follows that (0, 0) is not weakly asymptotically stable for (2.93).
Flip
The steady state (x, y) = (0, 0) of (2.93) is a flip if the Jacobian matrix A has a simple eigenvalue λ_1 = -1 and no other eigenvalues on the unit circle (i.e. m = 1 and A^c = -1), and if
b = (1/6)⟨z_1, C(z_2, z_2, z_2) + 3B(z_2, (I - A)^{-1}B(z_2, z_2))⟩ ≠ 0   (2.100)
where Az_2 = -z_2, A^T z_1 = -z_1 and ⟨z_1, z_2⟩ = 1. We recall that B = D²F(0,0).
In this case, the restriction of (2.93) to the one-dimensional W^c(0,0) (which is considered, in this case, at least of class C⁴ in a neighborhood of the origin) can be transformed, by a polynomial transformation x = u + ψ(u), ψ(u) = O(u²), into the normal form
u_{k+1} = -u_k + bu_k³ + O(u_k⁴)   ∀k ∈ N   (2.101)
Proposition 2.8. If the steady state (x, y) = (0, 0) of (2.93) is a flip, it is weakly asymptotically stable if and only if b > 0, where b is given by (2.100). Moreover, if b > 0, then the map of the restriction of (2.93) to the center manifold W^c(0,0) has the P(1) property.
Proof. The function f(u) = -u + bu³ + O(u⁴) from the right-hand side of (2.101) satisfies
f²(u) = u - 2bu³ + O(u⁴)   (2.102)
As b ≠ 0, Lemmas 2.6 and 2.7 show that u = 0 is asymptotically stable for (2.101) if and only if b > 0. Using Theorem 2.16(b), one obtains that the flip (x, y) = (0, 0) of (2.93) is asymptotically stable if and only if b > 0.
Lemmas 2.6 and 2.7 also show that if b > 0, then the map f has the P(1) property. By Lemma 2.4, it follows that the map of the restriction of (2.93) to the center manifold has the P(1) property as well.
Therefore, based on Corollary 2.3 and Theorem 2.15, we obtain:
Corollary 2.4. If the steady state (x, y) = (0, 0) of (2.93) is an asymptotically stable flip, then its region of attraction D_a(0,0) coincides with the natural domain of analyticity of the unique positive definite R-analytic function V which verifies
V(F(x, y)) - V(x, y) = -‖(x, y)‖⁴,   V(0, 0) = 0   (2.103)
The function V is given by
V(x, y) = Σ_{k=0}^{∞} ‖F^k(x, y)‖⁴   for any (x, y) ∈ D_a(0,0)   (2.104)
Remark 2.15. Suppose that the steady state (x, y) = (0, 0) of (2.93) is an asymptotically stable flip. If there exist p̄ ≥ 1 and R̄ > 0 such that ‖F^{p̄}(x, y)‖ < ‖(x, y)‖ for any (x, y) ∈ B(R̄)\{(0,0)}, then, due to Theorem 2.14, we have that B(R̄) ⊂ D_a(0,0).
Defining
N_p = {(x, y) ∈ R^n : V_p(x, y) < (p+1)R̄⁴}   where V_p(x, y) = Σ_{k=0}^{p} ‖F^k(x, y)‖⁴
and
M_p = F^{-p}(B(R̄)) = {(x, y) ∈ R^n : F^p(x, y) ∈ B(R̄)}
it follows that N_p ⊂ D_a(0,0) and M_p ⊂ D_a(0,0) for any p ≥ 0.
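For a flip point, these estimates can be evaluated numerically exactly as in the contractive case, only with fourth powers in the Lyapunov sums. A minimal Python sketch, using the decoupled map of Example 2.15 below as a test case (the values p̄ = 2 and R̄ = 0.839 are the ones reported in that example; the test point is arbitrary):

import numpy as np

def F(z):
    x, y = z
    return np.array([-x + x**2, y**2])        # the decoupled flip map (2.105)

R_BAR, P_BAR = 0.839, 2                        # values reported in Example 2.15

def in_Np(z0, p, R=R_BAR):
    z = np.array(z0, float)
    Vp = (z @ z) ** 2                          # ||F^0(z)||^4
    for _ in range(p):
        z = F(z)
        Vp += (z @ z) ** 2                     # add ||F^k(z)||^4
    return Vp < (p + 1) * R**4

def in_Mp(z0, p, R=R_BAR):
    z = np.array(z0, float)
    for _ in range(p):
        z = F(z)
    return z @ z < R**2                        # F^p(z0) in B(R)

print(in_Np((0.5, 0.5), 4), in_Mp((0.5, 0.5), 7))   # a point near the origin passes both tests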
Example 2.15. Let us consider the decoupled discrete semi-dynamical system in R²:
x_{k+1} = -x_k + x_k²
y_{k+1} = y_k²   k ∈ N   (2.105)
The steady state (0, 0) of this system is a weakly asymptotically stable flip and its region of attraction is D_a(0,0) = (-1, 2) × (-1, 1). The other steady states of the system are represented in Figure 2.10.1 by the black points.
For p̄ = 2 there exists R̄ = 0.839 such that ‖F^{p̄}(x, y)‖ < ‖(x, y)‖ for any (x, y) ∈ B(R̄)\{(0,0)}.
For p = 1,…,4, the N_p sets are shown in Figure 2.10.1. In Figure 2.10.2, the sets M_p are represented for p = 1,…,7. The set M_7 is a good approximation of the region of attraction.
[Figure 2.10.1: The sets N_p, p = 1,…,4, for (2.105). Figure 2.10.2: The sets M_p, p = 1,…,7, for (2.105).]
Neimark-Sacker
The steady state (x, y) = (0, 0) of (2.93) is a Neimark-Sacker singularity if the Jacobian matrix A has a pair of eigenvalues λ_{1,2} = e^{±iθ} with θ ∈ [0, π]\{0, π/2, 2π/3, π} and no other eigenvalues on the unit circle (i.e. m = 2 and the eigenvalues of A^c are e^{±iθ}), and if
c = (1/2)e^{-iθ}⟨z_1, C(z_2, z_2, z̄_2) + 2B(z_2, (I - A)^{-1}B(z_2, z̄_2)) + B(z̄_2, (e^{2iθ}I - A)^{-1}B(z_2, z_2))⟩   (2.106)
satisfies Re(c) ≠ 0, where Az_2 = e^{iθ}z_2, A^T z_1 = e^{-iθ}z_1 and ⟨z_1, z_2⟩ = 1. We recall that B = D²F(0,0) and C = D³F(0,0).
In this case, the restriction of (2.93) to the two-dimensional W^c(0,0) (considered at least of class C⁴ in a neighborhood of the origin) can be transformed, by a polynomial transformation x = u + ψ(u), ψ(u) = O(‖u‖²), into the normal form written in the complex coordinate z = u_1 + iu_2, u = (u_1, u_2)^T:
z_{k+1} = e^{iθ}z_k + ce^{iθ}|z_k|²z_k + O(|z_k|⁴)   ∀k ∈ N   (2.107)
In what follows, let f(z) = e^{iθ}z + ce^{iθ}|z|²z + O(|z|⁴) be the map from the right-hand side of (2.107), defined in a neighborhood of the origin (in the complex plane).
Proposition 2.9. If the steady state (x, y) = (0, 0) of (2.93) is a Neimark-Sacker singularity, it is weakly asymptotically stable if and only if Re(c) < 0, where c is given by (2.106). If Re(c) < 0, then the map of the restriction of (2.93) to the center manifold has the P(1) property.
Proof. One has
|f(z)|² = |z|² + 2Re(c)|z|⁴ + O(|z|⁵)   ∀|z| < δ
Consider the positive definite function V(z) = |z|². We have that V(f(z)) - V(z) = 2Re(c)|z|⁴ + O(|z|⁵); therefore, the steady state z = 0 of (2.107) is asymptotically stable if Re(c) < 0 and unstable if Re(c) > 0. Based on Theorem 2.16(b), it follows that the Neimark-Sacker point (x, y) = (0, 0) of (2.93) is weakly asymptotically stable if and only if Re(c) < 0.
Suppose that Re(c) < 0. We prove that the map f has the P(1) property. It is clear that there exist δ_1 ∈ (0, δ) and M > 0 such that
|f(z)|² ≤ |z|² + 2Re(c)|z|⁴ + M|z|⁵   ∀|z| < δ_1
Let δ_2 ∈ (0, min(δ_1, -2Re(c)/M)) and α = -2Re(c) - Mδ_2 > 0. Based on the above inequality, one has
|f(z)|² ≤ |z|² + 2Re(c)|z|⁴ + Mδ_2|z|⁴ = |z|² - α|z|⁴   ∀|z| < δ_2
Consider the function g : [0, 1/(2α)] → [0, 1/(4α)] given by g(x) = x - αx². The function g is strictly increasing.
Let δ_3 = min(δ_2, 1/√(2α)). Based on the above inequality, we have that B(δ_3) is invariant under f and:
|f(z)|² ≤ g(|z|²)   ∀|z| < δ_3
By mathematical induction, using the fact that g is increasing on [0, 1/(2α)], one can prove that
|f^k(z)|² ≤ α^{-1}/(k+1)   ∀|z| < δ_3 and k ≥ 0
which means that f has the P(1) property.
By Lemma 2.4, it follows that the map of the restriction of (2.93) to the center manifold has the P(1) property as well.
Therefore, based on Theorem 2.15 and Corollary 2.3, the following result holds:
Corollary 2.5. If the steady state (x, y) = (0, 0) of (2.93) is an asymptotically stable Neimark-Sacker singularity, then its region of attraction D_a(0,0) coincides with the natural domain of analyticity of the unique R-analytic function V which verifies
V(F(x, y)) - V(x, y) = -‖(x, y)‖⁴,   V(0, 0) = 0   (2.108)
The function V is given by
V(x, y) = Σ_{k=0}^{∞} ‖F^k(x, y)‖⁴   for any (x, y) ∈ D_a(0,0)   (2.109)
Remark 2.16. The sets Np and M p defined in Remark 2.15 can be used in order to obtain
estimates of the region of attraction Da (0, 0).
Example 2.16. Delayed logistic model. Consider the following discrete semi-dynamical system [GH83]:
x_{k+1} = y_k
y_{k+1} = μy_k(1 - x_k)   μ ∈ R, k ∈ N   (2.110)
For μ = 2, the eigenvalues of the Jacobian matrix corresponding to the steady state (1/2, 1/2) are e^{±iθ} with θ = π/3. Moreover, computations show that Re(c) = -1/2 < 0. Therefore, the steady state (1/2, 1/2) of this system is a weakly asymptotically stable Neimark-Sacker singularity. The steady state (0, 0) of the system is unstable.
Making the change of variables (u, v)^T = S(x, y)^T, where
S : (x, y) ↦ ( (-(x - 1/2) + 2(y - 1/2))/√3 , x - 1/2 )
the system becomes
(u_{k+1}, v_{k+1})^T = M(e^{iθ})(u_k, v_k)^T - (2u_k v_k + (2/√3)v_k², 0)^T   (2.111)
For system (2.111), (0, 0) is a weakly asymptotically stable Neimark-Sacker singularity. Let us denote by
F : (u, v) ↦ M(e^{iθ})(u, v)^T - (2uv + (2/√3)v², 0)^T
We find that for p̄ = 6 there exists R̄ = 0.346 such that ‖F^{p̄}(u, v)‖ < ‖(u, v)‖ for any (u, v) ∈ B(R̄)\{(0,0)}.
It is obvious that if E is a part of the region of attraction of the steady state (0, 0) of (2.111), then S^{-1}(E) is a part of the region of attraction of the steady state (1/2, 1/2) of (2.110).
For p = 1,…,4, the sets S^{-1}(N_p) are shown in Figure 2.11.1. In Figure 2.11.2, the sets S^{-1}(M_p) are represented for p = 1,…,7.
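The reported pair p̄ = 6, R̄ = 0.346 can be tested numerically. A Python sketch, assuming M(e^{iθ}) is the planar rotation by θ = π/3 (the number of random sample points and the seed are arbitrary; this only samples the ball, it is not a proof):

import numpy as np

theta = np.pi / 3.0
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def F(w):
    u, v = w
    return M @ w - np.array([2.0 * u * v + 2.0 / np.sqrt(3.0) * v**2, 0.0])

def check(p_bar=6, R_bar=0.346, n=200000, seed=0):
    # sample points of B(R_bar)\{0} and test ||F^p_bar(w)|| < ||w||
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-R_bar, R_bar, size=(n, 2))
    pts = pts[np.linalg.norm(pts, axis=1) < R_bar]
    worst = 0.0
    for w in pts:
        z = w.copy()
        for _ in range(p_bar):
            z = F(z)
        worst = max(worst, np.linalg.norm(z) / np.linalg.norm(w))
    return worst   # should be < 1 if the reported R_bar is valid

print(check())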
[Figure 2.11.1: The sets S^{-1}(N_p), p = 1,…,4, for (2.110). Figure 2.11.2: The sets S^{-1}(M_p), p = 1,…,7, for (2.110).]
2.4 Implementation of Mathematica 5.0
This section presents a program written in Mathematica 5.0 for the system (2.38). It shows how to obtain the first estimate D_{aP}^0, P = 72, of the region of attraction, as well as the sets Ñ_p and M̃_p, for p = 1,…,7.
The following entry gives the two dimensional discrete semi-dynamical system:
In[16]:= f[x_, y_] := {x * y + y, yˆ3};
fvec[z_] := f[z[[1]], z[[2]]];
The following line builds the Jacobian at (0, 0):
In[17]:= A = Transpose[{Derivative[1, 0][f][0, 0], Derivative[0, 1][f][0, 0]}];
The norm of the Jacobian A is computed:
In[18]:= NMaximize[{Norm[A.{x, y}], xˆ2 + yˆ2 == 1}, {x, y}][[1]]
Out[18]= 1.
as well as the absolute values of its eigenvalues:
In[19]:= N[Abs[Eigenvalues[A]]]
Out[19]= {0., 0.}
Computing the smallest power tildep for which the matrix A^tildep is a contraction:
In[20]:= matnormA[m_] :=
NMaximize[{Norm[MatrixPower[A, m].{x, y}], x^2 + y^2 == 1}, {x, y}][[1]];
tildep = 1;
While[matnormA[tildep] >= 1, tildep++];
tildep
Out[20]= 2
Building the optimal Lyapunov function:
In[21]:= V[p_, x_, y_] :=
Expand[Sum[Nest[fvec, {x, y}, m][[1]]^2 + Nest[fvec, {x, y}, m][[2]]^2, {m, 0, p}]];
Finding the coefficients of the optimal Lyapunov function:
In[22]:= ord = 10;
P1 = Exponent[V[ord, x, y], x];
P2 = Exponent[V[ord, x, y], y];
B[m_, n_] :=
Which[m <= P1 && n <= P2, CoefficientList[V[ord, x, y], {x, y}][[m + 1]][[n + 1]],
True, 0];
Applying the Cauchy-Hadamard formula in order to approximate the region of convergence of the optimal Lyapunov function:
In[23]:= g0[p_, x_, y_] := Sum[Abs[B[j, p - j] * x^j * y^(p - j)], {j, 0, p}];
Defining the Taylor polynomials of the optimal Lyapunov function and the operator Δ:
In[24]:= W[p_, x_, y_] := Sum[Sum[B[j, k - j] * x^j * y^(k - j), {j, 0, k}], {k, 2, p}];
dif[p_, x_, y_] := W[p, f[x, y][[1]], f[x, y][[2]]] - W[p, x, y];
The plots of the estimate of the region of convergence of the optimal Lyapunov function D_P^0, of the domain on which ΔV_P^0 is negative and of the domain on which V_P^0 is positive:
In[25]:= m1 = 20;
m2 = 1.2;
p0[p_] := ContourPlot[Evaluate[g0[p, x1, x2]], {x1, -m1, m1},
{x2, -m2, m2}, Contours -> {1}, PlotRange -> All, Axes -> False,
Frame -> True, ContourStyle -> {GrayLevel[0.4]}, PlotPoints -> 200,
ColorFunction -> (GrayLevel[1 - (1 - #) * 0.6] &)];
pNEG[p_] := ContourPlot[Evaluate[dif[p, x1, x2]], {x1, -m1, m1},
{x2, -m2, m2}, Contours -> {0}, PlotRange -> All, Axes -> False,
Frame -> True, ContourShading -> False, PlotPoints -> 200,
ContourStyle -> Dashing[{0.01, 0.01}]];
pPOZ[p_] := ContourPlot[Evaluate[W[p, x1, x2]], {x1, -m1, m1},
{x2, -m2, m2}, Contours -> {0}, PlotRange -> All, Axes -> False,
Frame -> True, ContourShading -> False, PlotPoints -> 200,
ContourStyle -> {Dashing[{0.1, 0.1}], GrayLevel[0.6]}];
The following entry shows these three plots, their intersection being the first estimate D_{aP}^0:
In[26]:= P = 72;
Show[p0[P], pNEG[P], pPOZ[P]]
Finding tildeR:
In[27]:= For[k = tildep, k <= 2 * tildep - 1, k++,
rk[k] = NMinimize[
{x^2 + y^2,
Nest[fvec, {x, y}, k][[1]]^2 + Nest[fvec, {x, y}, k][[2]]^2 - x^2 - y^2 == 0 && x^2 + y^2 >= 0.01},
{{x, -m1, m1}, {y, -m2, m2}},
Method -> "DifferentialEvolution"][[1]]^(1/2)];
R = Min[Table[rk[k], {k, tildep, 2 * tildep - 1}]]
Out[27]= 0.748983
The plot of the set Ñp:
In[28]:= pN[p_] := ContourPlot[Evaluate[V[p, x, y]], {x, -2, 2}, {y, -m2, m2},
Contours -> {(p + 1) * R^2}, Axes -> False, ContourStyle -> Hue[(p - 1)/10],
Frame -> True, ContourShading -> False, PlotPoints -> 200];
Showing the plot of the sets Ñp, p = 1,…,6:
In[29]:= Show[Table[pN[p], {p, 1, 6}]]
The plot of the set M̃ p:
In[30]:= pM[p_] :=
ContourPlot[
Evaluate[Nest[fvec, {x, y}, p][[1]]^2 + Nest[fvec, {x, y}, p][[2]]^2],
{x, -m1, m1}, {y, -m2, m2}, Contours -> {R^2}, Axes -> False,
Frame -> True, ContourStyle -> Hue[(p - 1)/10], ContourShading -> False,
PlotPoints -> 500];
Showing the plot of the sets M̃p, p = 1,…,7:
In[31]:= Show[Table[pM[p], {p, 1, 7}]]
2.5 Control procedures using regions of attraction
In this section, it is shown that for discrete semi-dynamical systems with control, if two steady states belong to an analytic path of asymptotically stable steady states, then there exists a finite number of values of the control parameters such that, by giving the control parameters these values successively, at appropriate moments, one steady state is gradually transferred into the other [KBGB05a].
Consider the nonlinear discrete semi-dynamical system with control defined by (2.5), and suppose that the function f is R-analytic.
Definition 2.13. A change of the control parameters from α' to α'' in (2.5) is called a maneuver and is denoted by α' → α''. The maneuver α' → α'' is successful on the path of steady states φ : D' ⊂ D → Ω of system (2.5) if α', α'' ∈ D' and the solution of
x_{k+1} = f(x_k, α''),   ∀k ∈ N,   x_0 = φ(α')   (2.112)
tends to φ(α'') as k → ∞.
Theorem 2.17. Let φ : D' ⊂ D → Ω be an R-analytic path of strongly asymptotically stable steady states of (2.5). There exist an open set G ⊂ Ω × D and a non-negative R-analytic function V defined on G satisfying the following conditions:
a. G ⊃ Γ = {(φ(α), α) : α ∈ D'}
b. V(φ(α), α) = 0 and V(f(x, α), α) - V(x, α) = -‖x - φ(α)‖²   (2.113)
c. For any α ∈ D', D_a(φ(α)) is the natural domain of analyticity of x ↦ V(x, α)
d. V(x, α) → +∞ for x → y, y ∈ ∂D_a(φ(α)), or for ‖x‖ → ∞
Proof. Let G = ∪_{α∈D'} (D_a(φ(α)) × {α}) and let V : G → R_+ be defined by
V(x, α) = Σ_{k=0}^{∞} ‖x_α(k, x) - φ(α)‖²   (2.114)
where x_α(k, x) is the solution of (2.5) which satisfies x_α(0, x) = x.
The set G and the function V(x, α) satisfy conditions a-d (see Theorem 2.3).
Corollary 2.6. If φ : D' ⊂ D → Ω is an R-analytic path of asymptotically stable steady states of (2.5), then for any α ∈ D' there are an open neighborhood U_α of α and an open neighborhood U_{φ(α)} of φ(α) such that:
1. φ(α') ∈ U_{φ(α)} for any α' ∈ U_α
2. U_{φ(α)} ⊂ D_a(φ(α')) for any α' ∈ U_α
Proof. For α ∈ D' and x ∈ D_a(φ(α)), the function V(x, α) from Theorem 2.17 is considered. The real, non-negative function V is defined on the open set G = ∪_{α∈D'} (D_a(φ(α)) × {α}); it is continuous and equal to zero on the set Γ = {(φ(α), α) : α ∈ D'} ⊂ G.
As V is continuous and equal to zero at (φ(α), α) ∈ Γ, there is an open neighborhood G' of (φ(α), α) such that for any (x', α') ∈ G' the inequality V(x', α') < 1 holds. Let U_α be an open neighborhood of α and U_{φ(α)} one of φ(α) such that U_{φ(α)} × U_α ⊂ G'. As the function φ is continuous, it can be assumed that for any α' ∈ U_α we have φ(α') ∈ U_{φ(α)} (otherwise, the neighborhood U_α can be replaced by a smaller neighborhood U'_α ⊂ U_α for which φ(α') ∈ U_{φ(α)} for any α' ∈ U'_α).
Thus, for any (x', α') ∈ U_{φ(α)} × U_α we have V(x', α') < 1. This means that for any x' ∈ U_{φ(α)} and any α' ∈ U_α we have x' ∈ D_a(φ(α')). Thus, U_{φ(α)} ⊂ D_a(φ(α')) for any α' ∈ U_α.
Remark 2.17. Corollary 2.6 states that for any α' ∈ U_α, both maneuvers α → α' and α' → α are successful on the path φ.
Theorem 2.18. For two steady states φ(α*) and φ(α**) belonging to the R-analytic path φ : D' ⊂ D → Ω of strongly asymptotically stable steady states of (2.5), there exist a finite number of values of the control parameters α_1, α_2, ..., α_p ∈ D' such that all the maneuvers
α* → α_1 → α_2 → ... → α_p → α**   (2.115)
are successful on the path φ.
Proof. Let P ⊂ D' be a polygonal line joining α* and α**. For any α ∈ P we consider the neighborhoods U_α and U_{φ(α)} given by Corollary 2.6.
The family of neighborhoods {U_α}_{α∈P} is an open covering of the compact polygonal line P. From this covering we can extract a finite covering of P, i.e. there exist ᾱ_1, ᾱ_2, ..., ᾱ_q ∈ P such that P ⊂ ∪_{k=1}^{q} U_{ᾱ_k}. Moreover, it can be assumed that α* ∈ U_{ᾱ_1} and α** ∈ U_{ᾱ_q}, that the intersections U_{ᾱ_k} ∩ P are open and connected sets in P, and that
(U_{ᾱ_k} ∩ P) ∩ (U_{ᾱ_{k+2}} ∩ P) = ∅ for any k = 1, 2, ..., q-2.
Taking into account Remark 2.17, as α* ∈ U_{ᾱ_1} and α** ∈ U_{ᾱ_q}, the maneuvers α* → ᾱ_1 and ᾱ_q → α** are successful on the path φ.
It remains to prove that each maneuver ᾱ_k → ᾱ_{k+1} is successful, for any k = 1, 2, ..., q-1. If ᾱ_k ∈ U_{ᾱ_{k+1}}, Remark 2.17 gives that the maneuver ᾱ_k → ᾱ_{k+1} is successful on the path φ. If ᾱ_k ∉ U_{ᾱ_{k+1}}, a point ᾱ_{k,k+1} ∈ (U_{ᾱ_k} ∩ P) ∩ (U_{ᾱ_{k+1}} ∩ P) is considered. Remark 2.17 gives that both maneuvers ᾱ_k → ᾱ_{k,k+1} and ᾱ_{k,k+1} → ᾱ_{k+1} are successful on the path φ.
Thus, possibly inserting control parameters ᾱ_{k,k+1} between ᾱ_k and ᾱ_{k+1}, we obtain (after changing the notation and re-numbering) a finite sequence α_1, α_2, ..., α_p ∈ D' such that all the maneuvers
α* → α_1 → α_2 → ... → α_p → α**
are successful on the path φ.
Remark 2.18. Theorem 2.18 states that two steady states belonging to an analytic path φ of asymptotically stable steady states can be transferred one into the other using a finite number of successful maneuvers. In fact, the transfer is made through the regions of attraction of the states φ(α_1), ..., φ(α_p), φ(α**).
Example 2.17. Consider the discrete semi-dynamical system with control:
x_k = (x_0 - α)^{3^k} + α,   k ∈ N   (2.116)
There are three analytic paths of steady states: φ_1(α) = α, φ_2(α) = α - 1 and φ_3(α) = α + 1, defined for α ∈ R. The path φ_1 is an analytic path of asymptotically stable steady states, while φ_2 and φ_3 are analytic paths of unstable steady states.
For any α ∈ R, the region of attraction of the asymptotically stable steady state φ_1(α) is D_a(φ_1(α)) = (α - 1, α + 1).
For α* = 0 and α** = 2, let us consider the asymptotically stable steady states φ_1(α*) = 0 and φ_1(α**) = 2. The maneuver α* = 0 → 2 = α** is not successful, because φ_1(α*) = 0 ∉ D_a(φ_1(α**)) = (1, 3). However, a finite number of maneuvers can be found which transfer the steady state φ_1(α*) = 0 into the steady state φ_1(α**) = 2, for example:
α* = 0 → 0.7 → 1.4 → 2 = α**   (2.117)
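The maneuver chain (2.117) can be simulated directly. A minimal Python sketch, assuming the one-step map x_{k+1} = (x_k - α)³ + α whose iterates give (2.116); the dwell time of 30 steps per maneuver and the blow-up threshold are arbitrary choices:

def step(x, alpha):
    # one-step map whose iterates give (2.116)
    return (x - alpha) ** 3 + alpha

def run_maneuvers(x0, alphas, dwell=30, blow_up=1e6):
    # hold each control value for `dwell` steps before switching to the next one
    x = x0
    for alpha in alphas:
        for _ in range(dwell):
            x = step(x, alpha)
            if abs(x) > blow_up:          # the state has left every region of attraction
                return x
    return x

print(run_maneuvers(0.0, [0.7, 1.4, 2.0]))   # chain (2.117): converges to 2
print(run_maneuvers(0.0, [2.0]))             # direct maneuver 0 -> 2: the iterates blow up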
Chapter 3
Control procedure for the flight of the
ALFLEX model plane during its final
approach and landing phases using
domains of attraction
3.1 Introduction
The vehicle analyzed in this chapter is the Automatic Landing Flight Experiment (ALFLEX) model plane. This is a reduced-scale model of the H-II Orbiting Plane (HOPE), an unmanned reusable orbiting spacecraft. It has been built in order to study the flight of the spacecraft during its final approach and landing phases. This flight is made possible by complicated Automatic Flight Control Systems, designed to respond quickly to commands. The reason is that in this case, and in general for modern high-speed airplanes (including spinning missiles) designed in such a way that their masses are concentrated in their fuselages, inertial coupling may occur. This phenomenon is a gyroscopic effect, due to which small perturbations or small changes of the control surface angles may lead to dramatic changes in roll rate.
Landing experiments were conducted in Australia in 1997, where the model plane was released from a helicopter at a height of 1500 m. According to the flight data given in [ALF97], the landing flight can be divided into four phases: the glide "path capture" phase, the steady descent phase, the pre-flare and shallow glide slope phase, and the final flare and ground roll phase.
During the "path capture" phase, strong variations of the state parameters occur; the pitch angle θ declines from 0° to -50°. In the steady descent phase, the vehicle establishes itself in a θ > -30° steady glide. During this phase, the elevator angle δ_e is higher than 3° and the vehicle flies at constant dynamic pressure towards the runway. Approaching the runway threshold, it executes a 0.5G pre-flare and a final flare before touching down. The flight of ALFLEX during its final approach and landing phases is controlled by an adequate and complex variation of the control surface angles: the aileron and rudder angles δ_a and δ_r vary around 0°, while the elevator angle δ_e varies around 3°.
In [GM00, GK04], a simplified mathematical model of the motion around the center of gravity of the ALFLEX vehicle is presented (the speed brake angle δ_SB is fixed at a certain value, the
forward velocity V, the weight W of the vehicle and the air density ρ are constant, and the yaw angle ψ is not included in the model). This model is defined by a system of seven ordinary differential equations. The authors use this model to determine the steady states (equilibrium states, trim points) corresponding to different combinations of the control angles δ_e and δ_a (while δ_r = 0°), including those which correspond to δ_a = δ_r = 0° and δ_e = 3°. For nine different fixed values of δ_e, changing δ_a in the interval -20° … 20°, nine steady-state contours are found using the continuation method. Stability analysis is then undertaken along these contours, by means of Lyapunov's first method. The authors call a "level flight" a steady state for which the roll rate is equal to zero. It is pointed out that for δ_e larger than -2° there are no stable "level flights". Therefore, if the elevator angle is fixed at such a value, the roll rate has to be stabilized. For this reason, the authors seek control techniques which can bring the vehicle to a stable "level flight". It is suggested that only those "level flights" which are asymptotically stable can be achieved. Successive changes of the aileron and elevator angles are found which lead the vehicle from a high roll rate steady state to an asymptotically stable "level flight".
In this chapter, using the same mathematical model as in [GM00, GK04] but different mathematical tools, a system of four algebraic equations is determined which implicitly defines the whole set of steady states. This system of four algebraic equations makes it possible to establish some global properties of the set of steady states (when numerical data are not taken into account), to identify all the zero roll rate steady states, including those which correspond to desired descent flights, to establish the values of the control surface angles which have to be used, and to clarify the stability of these states.
Steady states corresponding to different combinations of the values of the control angles δ_e and δ_a (while δ_r = 0°) are found. For δ_e ∈ [-7°, 7°] and δ_a ∈ [-10°, 10°], two-dimensional continuous surfaces of steady states in the seven-dimensional phase space are computed. These surfaces are called paths of steady states. A stability analysis of the steady states belonging to each path is undertaken, and the asymptotically stable as well as the unstable parts of each path are found. Bifurcation analysis along some constant elevator angle contours is undertaken.
Zero roll rate descent flight solutions are identified. The effect of perturbations and of the change of control surface angles at the moment of release is evaluated. It is shown that some unstable steady states are very sensitive to perturbations and to the change of the control angles. Some maneuvers proposed in [GK04] along and between the paths of steady states are presented. It is emphasized that, according to the mathematical model, ALFLEX belongs to the category of planes for which the roll rate may not decay to zero even after the aileron is centered (i.e. δ_a = 0°) [Hac78]. A control technique for the roll rate, based on the evaluation of the region of attraction of a zero roll rate asymptotically stable steady state, is presented.
Using the region of attraction of the asymptotically stable zero roll rate steady state x̃_1, corresponding to the control angles δ_e = -2.2°, δ_a = -0.68° and δ_r = 0°, it is shown that any high roll rate state can be transferred to x̃_1 by a single maneuver. This result is based on the evaluation of the region of attraction of x̃_1 by a method described in Chapter 1.
Finally, in the framework of the simplified mathematical model, a technique for the control of the "path capture" and "steady descent" flight phases of the ALFLEX reentry vehicle (during its final approach and landing flight) is presented. The technique consists of successive, prescribed and quick changes of the values of the aileron and elevator angles, and the steering of the state parameters of the vehicle along the stable manifolds of the saddle points corresponding to the "path capture" and "steady descent" flight phases, towards these steady states. The obtained results are compared with those reported in the experimental flight data.
3.2 The mathematical model
According to [GM00, GK04], the assumptions for the ALFLEX model plane are:
i. Forward velocity V, weight W and air density ρ are constant.
ii. Angle of attack α and sideslip angle β are small.
iii. The initial six degrees of freedom equations of motion of a rigid airplane with respect to an xyz body-axis system (where xz is the plane of symmetry) are equivalent to the following five degrees of freedom equations:
β̇ = p sin α - r cos α + ŷ + (g/V) sin φ cos θ   (3.1)
α̇ = -pβ + q + ẑ + (g/V) cos φ cos θ   (3.2)
ṗ - (I_xz/I_x) ṙ = i_1(-qr + i_a pq + l̂)   (3.3)
q̇ = i_2(rp + i_b(-p² + r²) + m̂)   (3.4)
ṙ - (I_xz/I_z) ṗ = i_3(-pq - i_c qr + n̂)   (3.5)
iv. The Euler angles, roll angle φ and pitch angle θ, are determined by the kinematic relations:
φ̇ = p + q sin φ tan θ + r cos φ tan θ   (3.6)
θ̇ = q cos φ - r sin φ   (3.7)
v. In (3.1)-(3.7) we have:
i_1 = (I_z - I_y)/I_x,   i_2 = (I_z - I_x)/I_y,   i_3 = (I_y - I_x)/I_z,
i_a = I_xz/(i_1 I_x),   i_b = I_xz/(i_2 I_y),   i_c = I_xz/(i_3 I_z),
ŷ = g Y_a/(WV),   ẑ = g Z_a/(WV),   l̂ = L/(i_1 I_x),   m̂ = M/(i_2 I_y),   n̂ = N/(i_3 I_z)
where
Ix , Iy , Iz : moments of inertia about the x-,y- and z-axis, respectively
Ixz : product of inertia
Ya , Za : aerodynamic forces
L, M, N: aerodynamic moments about the center of gravity
p, q, r: angular velocities about the x-, y- and z-axis, respectively
g: gravitational acceleration.
vi. Aerodynamic forces and moments in (3.1)-(3.7) are linearly related to the motion variables and the control surface angles (δ_e elevator angle, δ_a aileron angle, δ_r rudder angle):
ŷ = ŷ_β β + ŷ_r r + ŷ_δr δ_r
ẑ = ẑ_0 + ẑ_α(α - α_0) + ẑ_δe(δ_e - δ_e0)
l̂ = l̂_β β + l̂_p p + l̂_r r + l̂_δa δ_a + l̂_δr δ_r
m̂ = m̂_α(α - α_0) + m̂_α̇ α̇ + m̂_q q + m̂_δe(δ_e - δ_e0)
n̂ = n̂_β β + n̂_p p + n̂_r r + n̂_δa δ_a + n̂_δr δ_r   (3.8)
The numerical data are:
W = 760g, g = 9.81, V = 73.84, ρ = 1.156, S = 9.45, Q = 3.154, B = 3.295,
I_x = 407, I_y = 1366, I_z = 1634, I_xz = 10.4,
α_0 = 8.18°, θ_0 = -9.16°, δ_e0 = 3°,
C_L = 0.2387, C_Lα = 2.016, C_Lδe = 0.6355,
C_D = 0.0745, C_Dα = 0.2714, C_Dδe = 0.1019,
C_yβ = -0.6849, C_yr = 0, C_yδr = 0.1907,
C_lβ = -0.1774, C_lp = -0.007, C_lr = 0.004, C_lδa = 0.1488, C_lδr = 0.0788,
C_mα = -0.0134, C_mα̇ = 0, C_mq = -0.0474, C_mδe = -0.2152,
C_nβ = -0.0657, C_np = 0.0032, C_nr = -0.006, C_nδa = -0.0266, C_nδr = -0.099
A = ρV²S/2,   C_z0 = -(W cos θ_0)/A,
C_zα = -C_Lα cos α_0 + C_L sin α_0 - C_D cos α_0 - C_Dα sin α_0,
C_zδe = -C_Lδe cos α_0 - C_Dδe sin α_0,
ŷ_β = gAC_yβ/(WV),   ŷ_r = gAC_yr/(WV),   ŷ_δr = gAC_yδr/(WV),
ẑ_0 = gAC_z0/(WV),   ẑ_α = gAC_zα/(WV),   ẑ_δe = gAC_zδe/(WV),
l̂_β = ABC_lβ/(i_1 I_x),   l̂_p = ABC_lp/(i_1 I_x),   l̂_r = ABC_lr/(i_1 I_x),   l̂_δa = ABC_lδa/(i_1 I_x),   l̂_δr = ABC_lδr/(i_1 I_x),
m̂_α = AQC_mα/(i_2 I_y),   m̂_α̇ = AQC_mα̇/(i_2 I_y),   m̂_q = AQC_mq/(i_2 I_y),   m̂_δe = AQC_mδe/(i_2 I_y),
n̂_β = ABC_nβ/(i_3 I_z),   n̂_p = ABC_np/(i_3 I_z),   n̂_r = ABC_nr/(i_3 I_z),   n̂_δa = ABC_nδa/(i_3 I_z),   n̂_δr = ABC_nδr/(i_3 I_z)
The equations of motion (3.1)-(3.7) can be represented in the form:
ẋ = F(x, δ)   (3.9)
where:
x = [ β α p q r φ θ ]^T   (3.10)
δ = [ δ_e δ_a δ_r ]^T   (3.11)
119
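A minimal sketch of the right-hand side F(x, δ) of (3.9), assembled from (3.1)-(3.8), is given below. It is an illustrative reconstruction, not the thesis' own code: it assumes the coefficients of the previous sketch are collected in a dictionary `c` (extended with Ix, Iz, Ixz and δe0), the m̂α̇ α̇ term of (3.8) is dropped because Cmα̇ = 0 in the numerical data, and equations (3.3) and (3.5) are solved as a 2×2 linear system for (ṗ, ṙ).

```python
import math

def alflex_rhs(x, delta, c):
    """Right-hand side F(x, delta) of (3.1)-(3.7); x = (beta, alpha, p, q, r, phi, theta),
    delta = (de, da, dr). All angles and rates are assumed to be in radians (rad/s)."""
    beta, alpha, p, q, r, phi, theta = x
    de, da, dr = delta
    # aerodynamic forces and moments, relation (3.8) (m_adot term omitted, Cm_adot = 0)
    yh = c['y_b']*beta + c['y_r']*r + c['y_dr']*dr
    zh = c['z_0'] + c['z_a']*(alpha - c['alpha0']) + c['z_de']*(de - c['de0'])
    lh = c['l_b']*beta + c['l_p']*p + c['l_r']*r + c['l_da']*da + c['l_dr']*dr
    mh = c['m_a']*(alpha - c['alpha0']) + c['m_q']*q + c['m_de']*(de - c['de0'])
    nh = c['n_b']*beta + c['n_p']*p + c['n_r']*r + c['n_da']*da + c['n_dr']*dr
    gV = c['g'] / c['V']
    beta_dot  = p*math.sin(alpha) - r*math.cos(alpha) + yh + gV*math.sin(phi)*math.cos(theta)
    alpha_dot = -p*beta + q + zh + gV*math.cos(phi)*math.cos(theta)
    # (3.3) and (3.5) are coupled through Ixz: solve the 2x2 system for (p_dot, r_dot)
    a11, a12 = 1.0, -c['Ixz']/c['Ix']
    a21, a22 = -c['Ixz']/c['Iz'], 1.0
    b1 = c['i1']*(-q*r + c['ia']*p*q + lh)
    b2 = c['i3']*(-p*q - c['ic']*q*r + nh)
    det = a11*a22 - a12*a21
    p_dot = (b1*a22 - a12*b2) / det
    r_dot = (a11*b2 - a21*b1) / det
    q_dot = c['i2']*(r*p + c['ib']*(-p**2 + r**2) + mh)
    phi_dot   = p + q*math.sin(phi)*math.tan(theta) + r*math.cos(phi)*math.tan(theta)
    theta_dot = q*math.cos(phi) - r*math.sin(phi)
    return [beta_dot, alpha_dot, p_dot, q_dot, r_dot, phi_dot, theta_dot]
```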
3.3 The set of steady states
Definition 3.1. The set E of the steady states of the system (3.1)-(3.7) is the set of the real solutions
x ∈ X = R^5 × (−π, π] × (−π/2, π/2) of the equation
F(x, δ) = 0   (3.12)
obtained for different combinations of the values of the control angles δ in a given region D.
Definition 3.2. The set S is the set of all the real solutions
x = [β α p q r φ θ]^T ∈ X = R^5 × (−π, π] × (−π/2, π/2)
of the following algebraic system:
D1 β + D3 p + D5 r + D0(p sin α − r cos α + (g/V) sin φ cos θ) + ŷδr q[p(n̂δa ia + l̂δa) + r(l̂δa ic − n̂δa)] = 0
D2(α − α0) + D4 q + m̂δe(−pβ + ẑ0 + (g/V) cos φ cos θ) − ẑδe[rp + ib(−p² + r²)] = 0
p + q sin φ tan θ + r cos φ tan θ = 0
q cos φ − r sin φ = 0   (3.13)
where the constants Di, i = 0, 5 are given by
D0 = l̂δa n̂δr − l̂δr n̂δa
D1 = ŷβ D0 + ŷδr(l̂β n̂δa − l̂δa n̂β)
D2 = ẑα m̂δe − ẑδe m̂α
D3 = ŷδr(l̂p n̂δa − l̂δa n̂p)
D4 = m̂δe − ẑδe m̂q
D5 = ŷr D0 + ŷδr(l̂r n̂δa − l̂δa n̂r)
Hypothesis 1. ŷδr, ẑδe, l̂δa, l̂δr, m̂δe, n̂δa, n̂δr ≠ 0
Proposition 3.1. Under Hypothesis 1, the set of steady states E is included in the set S of the
solutions of the system (3.13). The solution x = [β α p q r φ θ]^T ∈ S is obtained for
the following "control surface angles":
δe = δe0 − (1/m̂δe)[pr + ib(−p² + r²) + m̂α(α − α0) + m̂q q]
δa = −(1/l̂δa)(−qr + ia pq + l̂β β + l̂p p + l̂r r + l̂δr δr)
δr = −(1/ŷδr)(p sin α − r cos α + ŷβ β + ŷr r + (g/V) sin φ cos θ)   (3.14)
Proof. According to (3.8), the first five equations of system (3.12) represent a linear system
with three unknowns (δe, δa, δr), and can be written in the form
A[δe δa δr]^T = f(x)   (3.15)
where
A = [ 0     0     ŷδr
      ẑδe   0     0
      0     l̂δa   l̂δr
      m̂δe   0     0
      0     n̂δa   n̂δr ]
As rank(A) = 3 (for example, the third-order minor of A formed by the first three rows of A is
different from 0 under Hypothesis 1), the condition of compatibility of the system (3.15) is
rank(Ā) = 3, where Ā is the extended matrix of (3.15). The first two equations of the algebraic
system (3.13) are obtained from this compatibility condition.
In the last two equations of (3.12), the control angles (δe, δa, δr) are not present, so these
equations provide the last two equations of the algebraic system (3.13).
For the solution x = [β α p q r φ θ]^T ∈ S, the control surface angles (δe, δa, δr) can
be found by solving the system (3.15).
Remark 3.1. If for a solution x = [β α p q r φ θ]^T ∈ S the corresponding control
angles (δe, δa, δr) have physical meaning, i.e. they belong to D, then the solution x is a steady
state.
The equations (3.13) can be written in the form:
G(x) = 0,   where G : X → R^4   (3.16)
Remark 3.2. (a) The function G from (3.16) is continuous on X, thus the set S = G⁻¹({0}) is
a closed set in R^7.
(b) The set S is unbounded. For example, the unbounded set
{x = [ −D0 g/(D1 V)   α0 − (1/D2)(D4 q + m̂δe ẑ0)   0   q   0   π/2   0 ]^T : q ∈ R}
is included in S.
Hypothesis 2. D1 D2 ≠ 0
Proposition 3.2. (a) Under Hypotheses 1-2, if D0 = 0 then for any x ∈ X, the rank of the
matrix ∂G/∂x(x) is equal to 4.
(b) Under Hypotheses 1-2, if D0 ≠ 0 then for any
x ∈ Ω = {[β α p q r φ θ]^T ∈ X : |p| √(p² + r²) < |K|}
the rank of the matrix ∂G/∂x(x) is equal to 4, where K = D1 D2/(D0 m̂δe).
Proof. Suppose that there exists x ∈ X such that rank(∂G/∂x(x)) < 4. Then all the minors of
order 4 of the matrix ∂G/∂x(x) are equal to zero. This condition provides 35 equations which have
to be fulfilled by x. One of these equations is
D0 m̂δe p(p cos α + r sin α) + D1 D2 = 0   (3.17)
In case (a), as D0 = 0 and Hypothesis 2 holds, equation (3.17) cannot be satisfied, thus
rank(∂G/∂x(x)) = 4 for any x ∈ X.
In case (b), as D0 ≠ 0 and Hypotheses 1-2 hold, equation (3.17) provides p ≠ 0. Equation (3.17)
can be written in the form
sin(α + γ(p, r)) = −K/(p √(p² + r²))   (3.18)
where sin γ(p, r) = p/√(p² + r²) and cos γ(p, r) = r/√(p² + r²). Equation (3.18) is incompatible if
|K/(p √(p² + r²))| > 1, i.e. for any x ∈ Ω.
Proposition 3.3. (a) Under Hypotheses 1-2, if D0 = 0 then the set S is a 3-dimensional submanifold of R^7.
(b) Under Hypotheses 1-2, if D0 ≠ 0 then the set S ∩ Ω is a 3-dimensional submanifold of R^7.
Proof. (a) By Proposition 3.2, rank(∂G/∂x(x)) = 4 for any x ∈ S, thus the rank theorem
provides that S is a 3-dimensional submanifold of R^7.
(b) By Proposition 3.2, rank(∂G/∂x(x)) = 4 for any x ∈ S ∩ Ω, thus the rank theorem
provides that S ∩ Ω is a 3-dimensional submanifold of R^7.
3.4 Zero roll rate steady states
From the point of view of the final approach and landing phase, zero roll rate descent flights of
the ALFLEX reentry vehicle are important [GK04], [GM00], [KBB03], [KBCB02].
Definition 3.3. A solution x = [β α p q r φ θ]^T ∈ S of the system (3.13) is a zero
roll rate solution if p = 0.
Proposition 3.4. Under Hypotheses 1-2, the set S0 of zero roll rate solutions is the union of
the following three disjoint sets:
S0^1 = {x = [∓β̄  α(q)  0  q  0  ±π/2  0]^T ∈ R^7 : q ∈ R}   (3.19)
where
β̄ = −D0 g/(D1 V)
α(q) = α0 − (1/D2)(D4 q + m̂δe ẑ0)
S0^2 = {x = [β(r,φ)  α(r,φ)  0  q(r,φ)  r  φ  0]^T ∈ R^7 : r ∈ R, φ ∈ (−π, π] \ {±π/2}}   (3.20)
where
β(r,φ) = −(1/D1)[D5 r − D0 r cos α(r,φ) + D0 (g/V) sin φ + ŷδr(l̂δa ic − n̂δa) q(r,φ) r]
α(r,φ) = α0 − (1/D2)[D4 q(r,φ) + m̂δe(ẑ0 + (g/V) cos φ) − ẑδe ib r²]
q(r,φ) = r tan φ
S0^3 = {x = [β(φ,θ)  α(φ,θ)  0  0  0  φ  θ]^T ∈ R^7 : φ ∈ (−π, π], θ ∈ (−π/2, π/2) \ {0}}   (3.21)
where
β(φ,θ) = −(D0/D1)(g/V) sin φ cos θ
α(φ,θ) = α0 − (m̂δe/D2)(ẑ0 + (g/V) cos φ cos θ)
and they are obtained for the following combinations of control surface angles:
(1) x ∈ S0^1 is obtained for:
δe = δe0 + (1/D2)[m̂α ẑ0 + (m̂α − m̂q ẑα) q]
δa = ±((l̂β n̂δr − l̂δr n̂β)/D1)(g/V)
δr = ∓((l̂β n̂δa − l̂δa n̂β)/D1)(g/V)   (3.22)
(2) x ∈ S0^2 is obtained for:
δe = δe0 − (1/m̂δe)[ib r² + m̂α(α(r,φ) − α0) + m̂q q(r,φ)]
δa = −(1/l̂δa)(−qr + l̂β β(r,φ) + l̂r r + l̂δr δr)
δr = −(1/ŷδr)(−r cos α(r,φ) + ŷβ β(r,φ) + ŷr r + (g/V) sin φ)   (3.23)
(3) x ∈ S0^3 is obtained for:
δe = δe0 + (m̂α/D2)(ẑ0 + (g/V) cos φ cos θ)
δa = ((l̂β n̂δr − l̂δr n̂β)/D1)(g/V) sin φ cos θ
δr = −((l̂β n̂δa − l̂δa n̂β)/D1)(g/V) sin φ cos θ   (3.24)
3.4.1 Sideslip descent flight solutions from S0^1
Remark 3.3. If x is a zero roll rate steady state solution from S0^1 for which q = 0 then:
β = ∓β̄ = ±D0 g/(D1 V),  α = α0 − (m̂δe/D2) ẑ0,  r = 0,  φ = ±π/2,  θ = 0,
δe = δe0 + (m̂α/D2) ẑ0,  δa = ±((l̂β n̂δr − l̂δr n̂β)/D1)(g/V),  δr = ∓((l̂β n̂δa − l̂δa n̂β)/D1)(g/V).
Definition 3.4. A zero roll rate solution from S0^1 for which q = 0 will be called a sideslip flight
solution (SS).
Remark 3.4. A sideslip flight solution x_SS has the form:
x_SS = [β_SS  α_SS  0  0  0  φ_SS = ±π/2  0]^T   (3.25)
where
α_SS = α_SS(V) = α0 − (m̂δe/D2) ẑ0 = α0 + (K/V²) cos θ0
β_SS = β_SS(V) = ±(D0/D1)(g/V)   (3.26)
with
K = 2WCmδe / (ρS(Czα Cmδe − Cmα Czδe)) ≠ 0   (3.27)
Thus, for a value of the velocity V > 0, there are two sideslip flight solutions (due to the
symmetry):
x_SS^V = [β_SS^V = D0 g/(D1 V)   α_SS^V = α0 + (K/V²) cos θ0   0  0  0  π/2  0]^T   (3.28)
x̄_SS^V = [β̄_SS^V = −D0 g/(D1 V)   α_SS^V = α0 + (K/V²) cos θ0   0  0  0  −π/2  0]^T   (3.29)
(3.29)
The sideslip flight solution xVSS corresponds to the following combination of control surface
angles:
∆a = ∆Va =
l̂Β n̂∆a - l̂∆a n̂Β g
l̂Β n̂∆r - l̂∆r n̂Β g
m̂ g
, ∆r = ∆Vr = , ∆e = ∆Ve = ∆e0 - Α cos Θ0
D1
V
D1
V
D2 V
(3.30)
The sideslip flight solution x̄VSS corresponds to the following combination of control surface
angles:
∆a = -∆Va = -
l̂Β n̂∆r - l̂∆r n̂Β g
l̂Β n̂∆a - l̂∆an̂Β g
m̂ g
, ∆r = -∆Vr =
, ∆e = ∆Ve = ∆e0 - Α cos Θ0 (3.31)
D1
V
D1
V
D2 V
Definition 3.5. A sideslip flight solution x_SS will be called:
• sideslip descent flight solution (SSD) if φ_SS = π/2 and β_SS < 0, or if φ_SS = −π/2 and β_SS > 0;
• sideslip ascending flight solution (SSA) if φ_SS = π/2 and β_SS > 0, or if φ_SS = −π/2 and β_SS < 0;
• sideslip level flight solution (SSL) if β_SS = 0.
Remark 3.5. For a V > 0, the sideslip flight solutions x_SS^V and x̄_SS^V are sideslip descent flight
solutions (SSDs) if and only if D0 g/(D1 V) < 0.
3.4.2 Straight descent flight solutions from S0^2
Remark 3.6. If x is a zero roll rate steady state solution from S0^2 for which φ = 0 and r = 0
then: q = 0, β = 0, α = α0 − (m̂δe/D2)(ẑ0 + g/V), δa = δr = 0 and δe = δe0 + (m̂α/D2)(ẑ0 + g/V).
Definition 3.6. A zero roll rate solution from S0^2 for which φ = 0 and r = 0 will be called a
straight flight solution (ST).
Remark 3.7. A straight flight solution x_ST has the form:
x_ST = [0  α_ST  0  0  0  0  0]^T   (3.32)
where
α_ST = α_ST(V) = α0 − (m̂δe/D2)(ẑ0 + g/V) = α0 − (K/V²)(1 − cos θ0)   (3.33)
Thus, for a value V > 0, there is a unique straight flight solution:
x_ST^V = [0   α_ST^V = α0 − (K/V²)(1 − cos θ0)   0  0  0  0  0]^T   (3.34)
The straight flight solution x_ST^V corresponds to the following combination of control surface
angles:
δa = δr = 0,  δe = δe^V = δe0 + (m̂α/D2)(ẑ0 + g/V)   (3.35)
It is clear that α_ST^V → α0 and δe^V → δe0 as V → ∞.
Definition 3.7. A straight flight solution xST will be called:
• straight descent flight solution (STD) if ΑST > 0;
• straight level flight solution (STL) if ΑST = 0;
• straight ascending flight solution (STA) if ΑST < 0.
The nature of the straight flight solution x_ST^V given by (3.34) depends on the signs of α0 and
K(1 − cos θ0) and on the velocity V (see Table 3.1).
K < 0 (θ0 ≠ 0):
  α0 < 0: x_ST^V is STD for V < V0, STL for V = V0, STA for V > V0
  α0 = 0: x_ST^V is STD for any V > 0
  α0 > 0: x_ST^V is STD for any V > 0
θ0 = 0:
  α0 < 0: x_ST^V is STA for any V > 0
  α0 = 0: x_ST^V is STL for any V > 0
  α0 > 0: x_ST^V is STD for any V > 0
K > 0 (θ0 ≠ 0):
  α0 < 0: x_ST^V is STA for any V > 0
  α0 = 0: x_ST^V is STA for any V > 0
  α0 > 0: x_ST^V is STA for V < V0, STL for V = V0, STD for V > V0
Table 3.1: The nature of the ST flight solutions belonging to the path x_ST, where V0 = |K(1 − cos θ0)/α0|^{1/2}
3.4.3 Symmetric descent flight solutions from S0^3
Remark 3.8. If x is a zero roll rate solution from S0^3 for which φ = 0 then: δa = δr = 0,
δe = δe0 + (m̂α/D2)(ẑ0 + (g/V) cos θ), β = 0 and α = α0 − (m̂δe/D2)(ẑ0 + (g/V) cos θ).
Definition 3.8. A zero roll rate solution from S0^3 obtained for φ = 0 will be called a symmetric
flight solution (SM).
Remark 3.9. A symmetric flight solution x_SM has the form
x_SM = [0  α_SM  0  0  0  0  θ_SM]^T   (3.36)
with θ_SM ≠ 0, and the following relation between α_SM and θ_SM holds:
α_SM = α0 − (m̂δe/D2)(ẑ0 + (g/V) cos θ_SM) = α0 − (K/V²)(cos θ_SM − cos θ0)   (3.37)
The symmetric flight solution x_SM is obtained for the following combination of control surface
angles:
δa = δr = 0,  δe = δe0 + (m̂α/D2)(ẑ0 + (g/V) cos θ_SM) = δe0 + (Cmα K/(Cmδe V²))(cos θ_SM − cos θ0)   (3.38)
Thus, for a value V > 0, there are two one-dimensional paths of symmetric flight solutions
depending on δe:
x_SM^V : D_δe^V → X,    x̄_SM^V : D_δe^V → X
where
x_SM^V(δe) = [0  α_SM(δe)  0  0  0  0  θ_SM^V(δe)]^T   (3.39)
x̄_SM^V(δe) = [0  α_SM(δe)  0  0  0  0  −θ_SM^V(δe)]^T   (3.40)
D_δe^V = {δe ∈ R : (δe, 0, 0) ∈ D,  0 < cos θ0 + (Cmδe V²/(Cmα K))(δe − δe0) < 1}
and
α_SM(δe) = α0 − (Cmδe/Cmα)(δe − δe0)
θ_SM^V(δe) = arccos(cos θ0 + (Cmδe V²/(Cmα K))(δe − δe0)) ∈ (0, π/2)   (3.41)
Definition 3.9. A symmetric flight solution xSM will be called:
• symmetric descent flight solution (SMD) if ΘSM - ΑSM < 0;
• symmetric level flight solution (SML) if ΘSM - ΑSM = 0;
• symmetric ascending flight solution (SMA) if ΘSM - ΑSM > 0.
Remark 3.10. The symmetric descent flight solutions belonging to the path x_SM^V correspond to
those values of δe ∈ D_δe^V for which the following inequality holds:
−α0 + (Cmδe/Cmα)(δe − δe0) + arccos(cos θ0 + (Cmδe V²/(Cmα K))(δe − δe0)) < 0   (3.42)
The symmetric descent flight solutions belonging to the path x̄_SM^V correspond to those values of
δe ∈ D_δe^V for which the following inequality holds:
−α0 + (Cmδe/Cmα)(δe − δe0) − arccos(cos θ0 + (Cmδe V²/(Cmα K))(δe − δe0)) < 0   (3.43)
Remark 3.11. The three-dimensional submanifold of X
X_αqθ = {x = [0 α 0 q 0 0 θ]^T ∈ X}   (3.44)
is invariant to the flow of the dynamical system (3.1)-(3.7) if δa = δr = 0. For δa = δr = 0, the
dynamical system (3.1)-(3.7) on the submanifold X_αqθ can be written as:
α̇ = q + ẑ0 + ẑα(α − α0) + ẑδe(δe − δe0) + (g/V) cos θ   (3.45)
q̇ = i2(m̂α(α − α0) + m̂q q + m̂δe(δe − δe0))   (3.46)
θ̇ = q   (3.47)
The above system is called the longitudinal decoupled system. The matrix of the linearized
longitudinal decoupled system in (α, q, θ) is
A(θ) = [ ẑα       1        −(g/V) sin θ
         i2 m̂α    i2 m̂q    0
         0        1        0 ]   (3.48)
If ẑα, m̂q, m̂α < 0 and i2 > 0 (the case of the numerical data), then the real parts of the
eigenvalues of the matrix A(θ) have the following signs:
• θ < 0: 2 eigenvalues with negative real parts and 1 negative eigenvalue
• θ = 0: 2 eigenvalues with negative real parts and 1 zero eigenvalue
• θ > 0: 2 eigenvalues with negative real parts and 1 positive eigenvalue
The steady state x = [0 α 0 q 0 0 θ]^T ∈ X is called longitudinally asymptotically
stable / stable / unstable if and only if [α q θ]^T is an asymptotically stable / stable / unstable
steady state of the longitudinal decoupled system (thus, if and only if θ < 0).
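The sign pattern stated above can be checked numerically. The sketch below is illustrative (the coefficient values are rounded ones carrying the signs of the numerical data, not exact thesis values): it builds A(θ) from (3.48) and prints its eigenvalues for a negative, zero and positive pitch angle.

```python
import numpy as np

def longitudinal_matrix(theta, z_a, m_a, m_q, i2, g=9.81, V=73.84):
    """Linearization (3.48) of the longitudinal decoupled system (3.45)-(3.47)."""
    return np.array([[z_a,      1.0,      -(g / V) * np.sin(theta)],
                     [i2 * m_a, i2 * m_q,  0.0],
                     [0.0,      1.0,       0.0]])

# z_a, m_a, m_q < 0 and i2 > 0, as in the numerical data (values here are rough)
for theta_deg in (-30.0, 0.0, 30.0):
    A_theta = longitudinal_matrix(np.radians(theta_deg),
                                  z_a=-1.10, m_a=-1.03, m_q=-3.63, i2=0.898)
    print(theta_deg, np.linalg.eigvals(A_theta))
```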
3.5 Numerical results
3.5.1 The paths of steady states
For δe = 3 deg and δa = 0 deg, the following three steady states have been determined:

notation      β(°)       α(°)     p(°/s)     q(°/s)    r(°/s)     φ(°)       θ(°)
x_0^1        −2.99929   22.537    95.9772   15.2602    42.195    739.883   −64.9437
x_0^2         2.99929   22.537   −95.9772   15.2602   −42.195   −739.883   −64.9437
x_{0,SM}^3    0          8.18      0         0          0         0         −9.16
For each of these steady states the conditions of Theorem 1.1 are fulfilled. Consequently, there
are three paths of steady states.
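As an illustration of how such steady states can be located, one may solve F(x, δ) = 0 with a standard root finder, restarting from different initial guesses. The sketch below is hypothetical: it assumes the `alflex_rhs` function and coefficient dictionary `c` sketched in Section 3.2, and which steady state it converges to depends on the chosen guess.

```python
import math
import numpy as np
from scipy.optimize import fsolve

# fixed control angles (delta_e, delta_a, delta_r) = (3 deg, 0, 0)
delta = (math.radians(3.0), 0.0, 0.0)
# initial guess near the "trivial" steady state of the table above
guess = np.radians([0.0, 8.0, 0.0, 0.0, 0.0, 0.0, -9.0])
x_star = fsolve(lambda x: np.array(alflex_rhs(x, delta, c)), guess)
print(np.degrees(x_star))   # compare with the steady states listed above
```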
For these paths, the computed domains of variation for the control parameters δe and δa are
shown in Figs. 3.1.0, 3.2.0 and 3.3.0 respectively, while the computed steady state parameters
corresponding to the paths are plotted versus (δe, δa) in Figs. 3.1.1-3.1.5, 3.2.1-3.2.5 and
3.3.1-3.3.5. For (δe, δa) in the white spaces in Figs. 3.1.0, 3.2.0 and 3.3.0 we could not find steady
states that can be connected continuously with the steady states belonging to the corresponding
paths. In these figures, the asymptotically stable steady states are shown in black and the
unstable steady states in gray.
The ranges of variation of the roll rate p for the three paths are:
Path 1 (P1): [−75.9114 deg/s, 219.549 deg/s]
Path 2 (P2): [−219.549 deg/s, 75.9114 deg/s]
Path 3 (P3): [−137.942 deg/s, 137.942 deg/s]
The zero roll-rate steady states are obtained for (δa, δe) belonging to the contours represented
with thin black horizontal lines in Figs. 3.1.0, 3.2.0 and 3.3.0.
3.5.2 Bifurcation analysis along some constant elevator angle contours
The components p versus ∆a of the contours of steady states belonging to the three paths
corresponding to ∆e = -6ë , -5ë , -4ë , -1ë , 2ë , 3ë , 4ë , 5ë , 7ë and ∆a in the range [-10ë , 10ë] are
represented in the Figures 3.4.1-3.4.9. In all the figures, the thick parts of the paths represent
asymptotically stable steady states while the thin parts of the paths represent unstable steady
states.
Fig. 3.4.1 shows that for ∆e = -6ë there are two different steady contours located on two
different paths. The steady states belonging to the contour located on P1 are asymptotically
stable for ∆a Î [-10ë , 0ë ] and unstable for ∆a Î [0ë , 10ë ]. The steady states belonging to
the contour located on P2 are unstable for ∆a Î [-10ë , 0ë ] and asymptotically stable for
∆a Î [0ë , 10ë]. For ∆a = 0ë a bifurcation occurs on both contours.
Fig. 3.4.2 shows that for ∆e = -5ë there are two different steady contours located on the first two
paths respectively. Concerning the stability of the steady states in this case, Fig. 3.4.2 shows
that the steady states which belong to the contour located on P1 are asymptotically stable for
∆a Î [-10ë , 0ë ] and unstable for ∆a Î [0ë , 10ë ]; the steady states which belong to the contour
Figure 3.1.0: The domain of variation for the control parameters δe and δa for P1
Figure 3.1.1: Angle of attack α(°) versus δe and δa for P1
Figure 3.1.2: Sideslip angle β(°) versus δe and δa for P1
Figure 3.1.3: Roll rate p(°/s) versus δe and δa for P1
Figure 3.1.4: Pitch rate q(°/s) versus δe and δa for P1
Figure 3.1.5: Yaw rate r(°/s) versus δe and δa for P1
Figure 3.2.0: The domain of variation for the control parameters δe and δa for P2
Figure 3.2.1: Angle of attack α(°) versus δe and δa for P2
Figure 3.2.2: Sideslip angle β(°) versus δe and δa for P2
Figure 3.2.3: Roll rate p(°/s) versus δe and δa for P2
Figure 3.2.4: Pitch rate q(°/s) versus δe and δa for P2
Figure 3.2.5: Yaw rate r(°/s) versus δe and δa for P2
Figure 3.3.0: The domain of variation for the control parameters δe and δa for P3
Figure 3.3.1: Angle of attack α(°) versus δe and δa for P3
Figure 3.3.2: Sideslip angle β(°) versus δe and δa for P3
Figure 3.3.3: Roll rate p(°/s) versus δe and δa for P3
Figure 3.3.4: Pitch rate q(°/s) versus δe and δa for P3
Figure 3.3.5: Yaw rate r(°/s) versus δe and δa for P3
located on P2 are unstable for δa ∈ [−10°, 0°] and asymptotically stable for δa ∈ [0°, 10°]. In
this case, for δa = 0° a bifurcation occurs on both contours.
Fig. 3.4.3 shows that for δe = −4° the situation is similar to the cases δe = −6°, −5°. The
difference is that the bifurcation on the contour belonging to P1 occurs at δa = −0.1° while on
P2 it occurs at δa = 0.1°.
Fig. 3.4.4 shows that for ∆e = -1ë there are three different contours of steady states located
on P1, P2 and P3. The contour located on the P1 is defined for ∆a Î [-10ë , -0.4ë ] and all the
steady states of this contour are asymptotically stable. The contour located on P2 is defined
for ∆a Î [0.4ë , 10ë ] and all the steady states of this contour are asymptotically stable. The
contour located on P3 is defined for ∆a Î [-0.6ë , 0.6ë ] and all the steady states of this contour
are unstable.
Fig. 3.4.5 shows that for δe = 2° there are three contours of steady states located on P1, P2 and
P3. The contour located on P1 is defined for δa ∈ [−10°, 2.3°] and all the steady states on this
contour are asymptotically stable. The contour located on P2 is defined for δa ∈ [−2.3°, 10°]
and all the steady states on this contour are asymptotically stable. The contour located on P3 is
defined for δa ∈ [−3.1°, 3.1°] and all the steady states on this contour are unstable.
Fig. 3.4.6 shows that for ∆e = 3ë there are three contours of steady states located on P1, P2
and P3. The contour located on P1 is defined for ∆a Î [-10ë , 5.4ë ] and for ∆a Î [-10ë , 3.3ë ]
the steady states belonging to this contour are asymptotically stable while for ∆a Î [3.4ë , 5.4ë ]
they are unstable. The contour located on P2 is defined for ∆a Î [-5.4ë , 10ë ] and for ∆a Î
[-5.4ë , -3.4ë ] the steady states belonging to this contour are unstable while for ∆a Î [-3.4ë , 10ë ]
they are asymptotically stable. On both contours, a bifurcation occurs for ∆a = 3.3ë and
∆a = -3.3ë , respectively. The contour located on P3 is defined for ∆a Î [-10ë , 10ë ] and all
the steady states on this contour are unstable.
Fig. 3.4.7 shows that for ∆e = 4ë there are three contours of steady states located on P1, P2 and
P3. The contour located on P1 is defined for ∆a Î [-10ë , 10ë] and for ∆a Î [-10ë , 4.4ë ] the steady
states belonging to this contour are asymptotically stable while for ∆a Î [4.5ë , 10ë ] they are
unstable. The contour located on P2 is defined for ∆a Î [-10ë , 10ë ] and for ∆a Î [-10ë , -4.5ë ]
the steady states belonging to this contour are unstable while for ∆a Î [-4.4ë , 10ë ] they are
asymptotically stable. On both contours, a bifurcation occurs for ∆a = 4.4ë and ∆a = -4.4ë ,
respectively. The contour located on P3 is defined for ∆a Î [-10ë , 10ë ] and all the steady states
of this contour are unstable.
Fig. 3.4.8 shows that for ∆e = 5ë there are three contours of steady states located on P1, P2 and
P3. All the three contours are defined for ∆a Î [-10ë , 10ë]. The steady states of the contour
located on P1 are asymptotically stable for ∆a Î [-10ë , 2.1ë ] and unstable for ∆a Î [2.2ë , 10ë ].
The steady states of the contour located on P2 are unstable for ∆a Î [-10ë , -2.2ë ] while for
∆a Î [-2.1ë , 10ë] they are asymptotically stable. On both of these contours, a bifurcation occurs
(for ∆a = 2.2ë and ∆a = -2.2ë respectively). The steady states of the contour located on P3 are
unstable.
Fig. 3.4.9 shows that for ∆e = 7ë there are three contours of steady states located on P1, P2
and P3. All the three contours are defined for ∆a Î [-10ë , 10ë ]. The steady states of the contour
located on P1 are asymptotically stable for ∆a Î [-10ë , -1.7ë ] and unstable for ∆a Î [-1.6ë , 10ë ].
The steady states of the contour located on P2 are unstable for ∆a Î [-10ë , 1.6ë ] while for
∆a Î [1.7ë , 10ë ] they are asymptotically stable. On both of these contours, a bifurcation occurs
(for ∆a = -1.6ë and ∆a = 1.6ë respectively). The contour located on P3 is unstable.
Figure 3.4.1: p versus ∆a for ∆e = -6ë
Figure 3.4.2: p versus ∆a for ∆e = -5ë
Figure 3.4.3: p versus ∆a for ∆e = -4ë
Figure 3.4.4: p versus ∆a for ∆e = -1ë
Figure 3.4.5: p versus ∆a for ∆e = 2ë
Figure 3.4.6: p versus ∆a for ∆e = 3ë
Figure 3.4.7: p versus ∆a for the contours obtained for ∆e = 4ë
Figure 3.4.8: p versus ∆a for ∆e = 5ë
Figure 3.4.9: p versus ∆a for the contours obtained for ∆e = 7ë
3.5.3 Zero roll rate descent flight solutions
We have computed the combinations of control parameters δe and δa for which a zero roll rate
asymptotically stable steady state is obtained, and we have found that this kind of steady state
can be obtained for δe ∈ [−7°, −2.2°] and for
• δa ∈ [−0.8247950230200113°, −0.6881379481613939°] for P1 (Fig. 3.5.1)
• δa ∈ [0.6881379481613939°, 0.8247950230200113°] for P2 (Fig. 3.5.2).
Figure 3.5.1: p versus δa for δe ∈ [−7°, −2.2°] on P1
Figure 3.5.2: p versus δa for δe ∈ [−7°, −2.2°] on P2
In this section, zero roll rate descent flight solutions and contours of this kind of solutions
will be determined, taking into consideration the numerical data. A stability analysis will be
undertaken for these flight solutions.
We consider D = [−7°, 7°] × [−10°, 10°] × [−10°, 10°].
The computed value of the constant K given by (3.27) is K = −671.138.
Sideslip descent flight solutions
For the value of the velocity V = 73.84, the sideslip flight solutions x_SS^V and x̄_SS^V are SSD flight
solutions:
x_SS^V = [β_SS^V = −15.8727°   α_SS^V = 1.2173°   0  0  0  π/2  0]^T   (3.49)
corresponds to (δe, δa, δr) = (3.4335°, 28.5666°, −18.2092°) ∉ D.
x̄_SS^V = [β̄_SS^V = 15.8727°   α_SS^V = 1.2173°   0  0  0  −π/2  0]^T   (3.50)
corresponds to (δe, δa, δr) = (3.4335°, −28.5666°, 18.2092°) ∉ D.
Thus, these SSD flight solutions are not steady states.
Straight descent flight solutions
As K = −671.138 < 0 and α0 = 8.18 > 0, the straight flight solution x_ST^V is a STD flight
solution:
x_ST^V = [0   α_ST^V = 8.2699°   0  0  0  0  0]^T   (3.51)
corresponds to (δe, δa, δr) = (2.9944°, 0°, 0°) ∈ D.
The STD flight solution x_ST^V is unstable, as the Jacobian matrix has 2 eigenvalues with positive
real part and one zero eigenvalue. However, it is "longitudinally stable", as [α_ST^V 0 0]^T is a
stable steady state for the longitudinal decoupled dynamical system. The flight state x_ST^V is a
longitudinal bifurcation point.
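The values of K, α_SS^V and α_ST^V reported here can be cross-checked directly from (3.27), (3.26) and (3.33). The following short sketch is my own check (angles handled in radians and converted to degrees for display), using the numerical data of Section 3.2.

```python
import math

rho, S, V, W = 1.156, 9.45, 73.84, 760 * 9.81
alpha0, theta0 = math.radians(8.18), math.radians(-9.16)
Cmde, Cma = -0.2152, -0.0134
Cza = (-2.016*math.cos(alpha0) + 0.2387*math.sin(alpha0)
       - 0.0745*math.cos(alpha0) - 0.2714*math.sin(alpha0))
Czde = -0.6355*math.cos(alpha0) - 0.1019*math.sin(alpha0)

K = 2*W*Cmde / (rho*S*(Cza*Cmde - Cma*Czde))             # (3.27): about -671.1
alpha_SS = alpha0 + (K / V**2) * math.cos(theta0)        # about 1.22 deg, cf. (3.49)
alpha_ST = alpha0 - (K / V**2) * (1 - math.cos(theta0))  # about 8.27 deg, cf. (3.51)
print(K, math.degrees(alpha_SS), math.degrees(alpha_ST))
```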
Symmetric descent flight solutions
The domain of definition of the two contours x_SM^V and x̄_SM^V given by (3.39) and (3.40) is
D_δe^V = (2.9944°, 3.4335°).
The path x_SM^V contains SMD, SML and SMA flight solutions:
• for δe ∈ (2.9944°, 2.9988°), x_SM^V(δe) is a SMD flight solution
• for δe = δe,SML^V = 2.9988°, x_SM^V(δe) is a SML flight solution
• for δe ∈ (2.9988°, 3.4335°), x_SM^V(δe) is a SMA flight solution
For the flight solutions of the contour xVSM , the values of the angle of attack and pitch angle
are plotted versus ∆e in the Figures 3.6.1 and 3.6.2. The gray color represents the SMA flight
solutions while the black color represents the SMD flight solutions.
Figure 3.6.1: α versus δe for the contour x_SM^V
Figure 3.6.2: θ versus δe for the contour x_SM^V
The contour x̄VSM contains only SMD flight solutions. For the flight solutions of the path x̄VSM , the
values of the angle of attack and pitch angle are plotted versus ∆e in the Figures 3.7.1 and 3.7.2.
The gray color represents the SMD flight solutions.
Figure 3.7.1: α versus δe for the contour x̄_SM^V
Figure 3.7.2: θ versus δe for the contour x̄_SM^V
The contours x_SM^V and x̄_SM^V are connected by the STD flight solution x_ST^V (which is a longitudinal
bifurcation point). However, the physical sense of the unstable path of symmetric flight solutions
x_SM^V is questionable.
135
The "trivial" [GK04] steady state [ 0 Α0 0 0 0 0 Θ0 ]T with Α0 = 8.18ë and Θ0 = -9.16ë
corresponding to the values of control surface angles ∆e = 3ë and ∆a = ∆r = 0ë , belongs to the
path x̄VSM and is a SMD flight solution.
The contour x̄VSM is included in the path P3.
Each SMD flight solution belonging to the contour xVSM is unstable, as the Jacobian matrix
has at least 1 eigenvalue with positive real part and it is "longitudinally unstable" as well, as
[ ΑVSM 0 ΘVSM ]T is an unstable steady state for the longitudinal decoupled dynamical system.
Each SMD flight solution belonging to the contour x̄VSM is unstable, as the Jacobian matrix has
at least 1 eigenvalue with positive real part. But, it is "longitudinally asymptotically stable",
as [ ΑVSM 0 -ΘVSM ]T is an asymptotically stable steady state for the longitudinal decoupled
dynamical system.
Any maneuver of the type
δe^1 → δe^2,   δe^1, δe^2 ∈ D_δe^V,   δa = δr = 0   (3.52)
is successful along the contour x̄_SM^V and transfers the state x̄_SM^V(δe^1) to the state x̄_SM^V(δe^2). The
explanation of this fact resides in the longitudinal asymptotic stability of each SMD flight
solution which belongs to x̄_SM^V.
On the other hand, successful maneuvers along the symmetric descent part of the contour x_SM^V
cannot be found, as any maneuver of the type
δe^1 → δe^2,   δe^1, δe^2 ∈ (2.9944°, 2.9988°) ⊂ D_δe^V,   δa = δr = 0   (3.53)
with the starting state x_SM^V(δe^1) produces a transfer to the state x̄_SM^V(δe^2) (thus, a jump phenomenon
takes place between the two contours). This fact can be explained by the longitudinal instability
of each SMD flight solution which belongs to x_SM^V and the longitudinal asymptotic stability of
each SMD flight solution which belongs to x̄_SM^V.
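Since these maneuvers keep δa = δr = 0, the jump phenomenon can be explored on the longitudinal decoupled system (3.45)-(3.47) alone. The sketch below is illustrative: it assumes the coefficient dictionary `c` of Section 3.2, and the particular δe values and integration horizons are my own choices.

```python
import math
import numpy as np
from scipy.integrate import solve_ivp

def longitudinal_rhs(t, y, de, c):
    """Longitudinal decoupled system (3.45)-(3.47); y = (alpha, q, theta), angles in radians."""
    alpha, q, theta = y
    alpha_dot = (q + c['z_0'] + c['z_a']*(alpha - c['alpha0'])
                 + c['z_de']*(de - c['de0']) + c['g']/c['V']*math.cos(theta))
    q_dot = c['i2']*(c['m_a']*(alpha - c['alpha0']) + c['m_q']*q + c['m_de']*(de - c['de0']))
    return [alpha_dot, q_dot, q]

# maneuver de1 -> de2 with da = dr = 0: integrate with de1, then restart with de2
de1, de2 = math.radians(3.05), math.radians(3.10)
y0 = [math.radians(7.3), 0.0, math.radians(-30.0)]
leg1 = solve_ivp(longitudinal_rhs, (0.0, 200.0), y0, args=(de1, c), rtol=1e-8)
leg2 = solve_ivp(longitudinal_rhs, (0.0, 200.0), leg1.y[:, -1], args=(de2, c), rtol=1e-8)
print(np.degrees(leg2.y[:, -1]))   # final (alpha, q, theta): which contour was reached?
```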
3.5.4 The effect of perturbations of the state and of the control surface angles δe and δa
at the moment of release
According to the flight data [ALF97], in the phase of approaching the moment of release, all
the state parameters of the vehicle are equal to zero, but the values of the control surface angles
are δa = δr = 0° and δe = 3°. We remark that the state of the vehicle in this phase
is not a steady state corresponding to some values of the control parameters. The following
problem arises naturally: if this state is not perturbed at the instant of release, what will be
the evolution of the state parameters after the release? Moreover, it is natural to admit that, at the
instant of release, uncontrolled perturbations of the state parameters occur. This means that
at the instant of release, the state of the vehicle x is "unknown". The problem of predicting
the evolution of the state parameters becomes even more complicated under this hypothesis.
Finally, the last question we want to answer in this section concerns the evolution of the state
parameters in the case when the control parameters are also perturbed at the instant of release.
A. If the state of the vehicle at the instant of release is x = [0 0 0 0 0 0 0]^T and the
control angles are δa = δr = 0° and δe = 3°, then, integrating the system, the vehicle's state parameters
evolve towards the longitudinally asymptotically stable symmetric flight state x_{0,SM}^3. This means
that x belongs to the 5-dimensional stable manifold (in the 7-dimensional phase space) of x_{0,SM}^3.
The angle of attack α and pitch angle θ evolve as shown in Figs. 3.8.1 and 3.8.2, while the other
state parameters (β, p, q, r and φ) remain constantly equal to zero.
Figure 3.8.1: A.: variation of α versus time
Figure 3.8.2: A.: variation of θ versus time
B. If, at the instant of release, the roll rate p of the vehicle is slightly perturbed, i.e. x =
[0 0 Δp 0 0 0 0]^T with Δp ≥ 10⁻¹³ °/s or Δp ≤ −10⁻¹³ °/s, and the control angles
are δa = δr = 0° and δe = 3°, the integration of the system shows that the state of the vehicle
evolves towards one of the asymptotically stable steady states x_0^1 or x_0^2 (see Figs. 3.9.1 and
3.9.2).
Figure 3.9.1: B. Δp = 10⁻¹⁰ °/s: variation of p versus time
Figure 3.9.2: B. Δp = −10⁻⁶ °/s: variation of p versus time
Something similar happens if the vehicle experiences small perturbations of the state parameters
β, r or φ.
If any perturbation of α, q and θ occurs at the instant of release, i.e. x = [0 Δα 0 Δq 0 0 Δθ]^T,
the vehicle's state evolves towards the symmetric flight x_{0,SM}^3 (see Figs. 3.9.3 and 3.9.4).
Figure 3.9.3: B. Δq = 100 °/s, Δα = −70°, Δθ = −30°: variation of α versus time
Figure 3.9.4: B. Δq = 100 °/s, Δα = −70°, Δθ = −30°: variation of q versus time
C. If, at the instant of release, the state of the vehicle is x = [0 0 0 0 0 0 0]^T, but
some perturbations of the control angles occur, then three cases can be distinguished:
Malfunction of elevator: δe changes from 3° to 2°, while δa remains equal to 0°. In this case,
the roll rate does not change (it remains equal to zero), but the angle of attack α and pitch
rate q begin oscillating in time with large amplitude. The state of the vehicle does not
stabilize in time. If we bring the elevator back to δe = 3°, the vehicle is brought
back to the symmetric flight x_{0,SM}^3 (see Figs. 3.10.1 and 3.10.2).
Figure 3.10.1: C. Malfunction of elevator: variation of α versus time
Figure 3.10.2: C. Malfunction of elevator: variation of q versus time
Malfunction of ailerons: δa switches from 0° to −1° and the elevator is steady at δe = 3°.
In this case, the vehicle's state is brought to a 95 °/s roll rate steady state from P1. We
emphasize that bringing the aileron back to 0° does not solve the problem, as in this
case the vehicle moves to the steady state x_0^1 of P1, corresponding to (δe, δa) = (3°, 0°),
with a roll rate of 96 °/s (see Fig. 3.10.3).
Malfunction of ailerons and elevator: both control surface angles δe and δa change, to 4° and
−1° respectively. In this case, the vehicle's state evolves towards a 104 °/s roll rate
asymptotically stable steady state of P1 (see Fig. 3.10.4).
Figure 3.10.3: C. Malfunction of ailerons: variation of p versus time
Figure 3.10.4: C. Malfunction of ailerons and elevator: variation of p versus time
In order to find a way to control the roll rate of the vehicle from the initial moment of time,
two control procedures have been described in [GK04, GM00], which bring the vehicle from
the initial symmetric flight state x_SM^3 to low (but not zero) roll rate steady states. These control
procedures consist of the following successive maneuvers:
δr = 0 and (δe, δa): (3°, 0°) → (2°, 5°) → (−6°, 5°) → (−6°, 0°)   (3.54)
δr = 0 and (δe, δa): (3°, 0°) → (5°, 5°) → (−5°, 5°) → (−5°, 0°)   (3.55)
The evolution of the roll rate due to these successive maneuvers is presented in Figures 3.11.2 and
3.12.2. These examples show that in the case of ALFLEX the roll rate may not decay to zero
even after the aileron is centered, a phenomenon described in [Hac78].
These successive maneuvers are successful because:
Figure 3.11.1: p versus δa for the contours corresponding to the maneuvers (3.54)
Figure 3.11.2: Evolution of p due to the maneuvers (3.54)
Figure 3.12.1: p versus δa for the contours corresponding to the maneuvers (3.55)
Figure 3.12.2: Evolution of p due to the maneuvers (3.55)
• The state x_SM^3 belongs to the domains of attraction of the asymptotically stable steady
states of P2 corresponding to the control parameters (δe, δa, δr) = (2°, 5°, 0°) and
(δe, δa, δr) = (5°, 5°, 0°), respectively.
• The second and third maneuvers of (3.54) and (3.55) are successful along the
asymptotically stable part of the path P2, providing an illustration of Theorem 1.19.
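To make the successive-maneuver experiments (3.54)-(3.55) concrete, one can integrate the full system with each pair of control angles in turn, restarting from the state reached at the end of the previous leg, as in the sketch below. The helpers `alflex_rhs` and `c` are the hypothetical ones sketched in Section 3.2, and the hold time per leg is my own choice.

```python
import math
import numpy as np
from scipy.integrate import solve_ivp

schedule = [(3.0, 0.0), (2.0, 5.0), (-6.0, 5.0), (-6.0, 0.0)]   # degrees, maneuvers (3.54)
x = np.radians([0.0, 8.18, 0.0, 0.0, 0.0, 0.0, -9.16])          # start near x_SM^3
for de_deg, da_deg in schedule:
    delta = (math.radians(de_deg), math.radians(da_deg), 0.0)
    sol = solve_ivp(lambda t, y: alflex_rhs(y, delta, c), (0.0, 60.0), x, rtol=1e-8)
    x = sol.y[:, -1]
print(np.degrees(x))   # the final roll rate x[2] can be compared with Fig. 3.11.2
```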
3.5.5 The region of attraction of a zero roll rate asymptotically stable
steady state and the control technique for the roll rate
For (δe, δa, δr) = (−2.2°, −0.6881379481613939°, 0°), the following asymptotically stable zero
roll rate steady state of P1 has been obtained:
x̃1 = [β̃ = −0.4424   α̃ = 23.166   p̃ = 0   q̃ = 19.374   r̃ = 7.8487   φ̃ = 67.946   θ̃ = 0]^T   (3.56)
The region of attraction Da(x̃1) of this state has been estimated using the methods described
in Chapter 1. The estimate D^0_{a,7} obtained by the first method is represented in Figures
3.13.1-3.13.6. The estimate N6 (c6 = 0.8) obtained by the second method is plotted in Figures
3.14.1-3.14.4. The over 90,000 computed steady states of the three paths belong to the estimates
D^0_{a,7} and N6 of Da(x̃1). This means that any computed steady state can be transferred to the zero
roll rate steady state x̃1 by switching the control angles to δe = −2.2° and δa = −0.6881°.
Thus, the successive maneuvers (3.54) and (3.55) can be replaced by the single maneuver
(δe, δa, δr): (3°, 0°, 0°) → (−2.2°, −0.6881°, 0°), which transfers the vehicle's state from x_{0,SM}^3 to
the zero roll rate asymptotically stable steady state x̃1 (see Fig. 3.15).
In the previous section, it has been pointed out that some perturbations of the initial state
parameters (see point B.) or some malfunctions of the control surface angles (see point C.) may
Figure 3.13.1: The intersection of the estimate D^0_{a,7} of the region of attraction of x̃1 with the manifold β = β̃, r = r̃, φ = φ̃, θ = θ̃
Figure 3.13.2: The intersection of the estimate D^0_{a,7} of the region of attraction of x̃1 with the manifold β = β̃, q = q̃, φ = φ̃, θ = θ̃
Figure 3.13.3: The intersection of the estimate D^0_{a,7} of the region of attraction of x̃1 with the manifold q = q̃, r = r̃, φ = φ̃, θ = θ̃
Figure 3.13.4: The intersection of the estimate D^0_{a,7} of the region of attraction of x̃1 with the manifold α = α̃, r = r̃, φ = φ̃, θ = θ̃
Figure 3.13.5: The intersection of the estimate D^0_{a,7} of the region of attraction of x̃1 with the manifold α = α̃, q = q̃, φ = φ̃, θ = θ̃
Figure 3.13.6: The intersection of the estimate D^0_{a,7} of the region of attraction of x̃1 with the manifold β = β̃, α = α̃, φ = φ̃, θ = θ̃
Figure 3.14.1: The intersection of the estimate N6 of the region of attraction of x̃1 with the manifold α = α̃, q = q̃, r = r̃, φ = φ̃, θ = θ̃
Figure 3.14.2: The intersection of the estimate N6 of the region of attraction of x̃1 with the manifold β = β̃, α = α̃, r = r̃, φ = φ̃, θ = θ̃
Figure 3.14.3: The intersection of the estimate N6 of the region of attraction of x̃1 with the manifold β = β̃, q = q̃, r = r̃, φ = φ̃, θ = θ̃
Figure 3.14.4: The intersection of the estimate N6 of the region of attraction of x̃1 with the manifold β = β̃, α = α̃, q = q̃, φ = φ̃, θ = θ̃
Figure 3.15: Evolution of p due to the maneuver (δe, δa): (3°, 0°) → (−2.2°, −0.6881°)
lead to dramatic changes in roll rate. In some of these cases, the vehicle's state is transferred
to the asymptotically stable high roll rate steady states x_0^1, x_0^2, or to the 104 °/s roll rate steady
state of P1 corresponding to the control angles δe = 4° and δa = −1°. It will be shown that
the vehicle can be transferred from these high roll rate states to the zero roll rate asymptotically
stable steady state x̃1. Computations show that:
• x_0^1 belongs to the region of attraction of x̃1. The evolution of the roll rate due to the maneuver
(δe, δa): (3°, 0°) → (−2.2°, −0.6881379481°)   (3.57)
is shown in Fig. 3.16.1.
• x_0^2 belongs to the region of attraction of x̃1. The evolution of the roll rate due to the maneuver
(δe, δa): (3°, 0°) → (−2.2°, −0.6881379481°)   (3.58)
is shown in Fig. 3.16.2.
• The 104 °/s roll rate steady state of P1, corresponding to the control angles δe = 4° and
δa = −1°, belongs to the region of attraction of x̃1. The evolution of the roll rate due to the
maneuver
(δe, δa): (4°, −1°) → (−2.2°, −0.6881379481°)   (3.59)
is shown in Fig. 3.16.3.
Figure 3.16.1: Evolution of p due to the maneuver (3.57) with the starting point x_0^1
Figure 3.16.2: Evolution of p due to the maneuver (3.58) with the starting point x_0^2
Figure 3.16.3: Evolution of p due to the maneuver (3.59)
3.5.6 Achieving a symmetric descent flight state
In the previous section, it has been shown that the roll rate of the ALFLEX model plane can be
brought to zero using the region of attraction of the steady state x̃1 . This section seeks an answer
to the following question: is it possible to transfer the vehicle from an arbitrary steady state to
a symmetric descent (SMD) flight state? It has been shown that all SMD states of ALFLEX are
saddle points, those belonging to the contour x̄VSM being longitudinally asymptotically stable.
In this section, a technique for the control of the "path capture" and "steady descent" flight
phases of the ALFLEX reentry vehicle (during its final approach and landing flight) is presented.
Successive, prescribed and quick changes of ∆e and ∆a will be found, which maintain the vehicle
in the neighborhoods of the stable manifolds of the steady states corresponding to the flight
phases of "path capture" and "steady descent", and lead the vehicle’s state close to these steady
states. It is emphasized that, although the mathematical model has certain limits, the obtained
results (except the time scale) are comparable to those reported in the experimental flight data
[ALF97].
The results presented in this section have the following theoretical basis.
Consider the system of nonlinear autonomous ordinary differential equations with control:
ẋ = F(x, δ)   (3.60)
where F : Ω × D ⊂ R^n × R^m → R^n is of class C¹(Ω × D) and δ ∈ D are the control parameters. We
suppose that there exist δ_i ∈ D, i = 0, 1, 2 and x_i(δ_i) ∈ Ω, i = 0, 1, 2, such that F(x_i(δ_i), δ_i) = 0
for i = 0, 1, 2.
Theorem 3.1. If the following conditions hold:
i. the steady state x_0(δ_0) is a saddle point, with the stable manifold W^s(x_0(δ_0));
ii. x_1(δ_1) and x_2(δ_2) are asymptotically stable steady states;
iii. x_1(δ_1) ∈ Da(x_2(δ_2));
iv. the solution x(t; 0, x_1(δ_1); δ_2) of the initial value problem
ẋ = F(x, δ_2),   x(0) = x_1(δ_1)   (3.61)
intersects W^s(x_0(δ_0));
then any x ∈ Da(x_1(δ_1)) can be transferred into any small neighborhood of the saddle point
x_0(δ_0).
Proof. Consider the neighborhood B(x_0(δ_0), ε) of the saddle point x_0(δ_0) and let x̄ ∈ Da(x_1(δ_1)).
Let T1 be the moment when the solution x(t; 0, x_1(δ_1); δ_2) intersects W^s(x_0(δ_0)), and let
x^s = x(T1; 0, x_1(δ_1); δ_2) ∈ W^s(x_0(δ_0)).
For any t ≥ 0 we have x(t; 0, x^s; δ_0) ∈ W^s(x_0(δ_0)) and lim_{t→∞} x(t; 0, x^s; δ_0) = x_0(δ_0). This
means that there exists T2(ε) > 0 such that
‖x(t; 0, x^s; δ_0) − x_0(δ_0)‖ < ε/2,  for any t ≥ T2(ε)   (3.62)
The theorem of continuous dependence of the solutions on the initial states provides that there
exists δ(ε) > 0 such that for any z ∈ B(x^s, δ(ε)), the following inequality holds:
‖x(t; 0, z; δ_0) − x(t; 0, x^s; δ_0)‖ < ε/2,  for any t ≤ 2T2(ε)   (3.63)
From (3.62)-(3.63) we get:
‖x(t; 0, z; δ_0) − x_0(δ_0)‖ < ε,  for any t ∈ [T2(ε), 2T2(ε)] and z ∈ B(x^s, δ(ε))   (3.64)
(i.e. x(t; 0, z; δ_0) ∈ B(x_0(δ_0), ε) for any t ∈ [T2(ε), 2T2(ε)] and z ∈ B(x^s, δ(ε))).
The theorem of continuous dependence of the solutions on the initial states provides that there
exists ρ(ε) > 0 such that for any y ∈ B(x_1(δ_1), ρ(ε)) the following inequality holds:
‖x(t; 0, y; δ_2) − x(t; 0, x_1(δ_1); δ_2)‖ < δ(ε),  for any t ≤ T1   (3.65)
Taking t = T1 in (3.65) gives
x(T1; 0, y; δ_2) ∈ B(x^s, δ(ε))  for any  y ∈ B(x_1(δ_1), ρ(ε))   (3.66)
As lim_{t→∞} x(t; 0, x̄; δ_1) = x_1(δ_1) for any x̄ ∈ Da(x_1(δ_1)), there exists T0 = T0(x̄, ε) > 0 such that
x(t; 0, x̄; δ_1) ∈ B(x_1(δ_1), ρ(ε)) for t ≥ T0.
Let ȳ = x(T0; 0, x̄; δ_1) ∈ B(x_1(δ_1), ρ(ε)). Using (3.66), we find that z̄ = x(T1; 0, ȳ; δ_2) ∈
B(x^s, δ(ε)).
From (3.64), it follows that the control which transfers the state from x̄ into the neighborhood
B(x_0(δ_0), ε) of the saddle point x_0(δ_0) is:
x̄ --(δ = δ_1)--> ȳ = x(T0; 0, x̄; δ_1) --(δ_1 → δ_2)--> z̄ = x(T1; 0, ȳ; δ_2) --(δ_2 → δ_0)--> x(T; 0, z̄; δ_0) ∈ B(x_0(δ_0), ε),  where T ∈ [T2(ε), 2T2(ε)]   (3.67)
Remark 3.12. Theorem 3.1 provides a control technique for the transfer of any state belonging
to the region of attraction of the asymptotically stable steady state x_1(δ_1) to any neighborhood
of the saddle point x_0(δ_0) (relation (3.65)). This technique uses a second asymptotically
stable steady state x_2(δ_2), whose region of attraction includes x_1(δ_1), and the property that
the trajectory from x_1(δ_1) to x_2(δ_2) intersects the stable manifold of x_0(δ_0). The exact moment
of this intersection is practically impossible to compute, but it can be approximated. If at this
estimated moment of time the control parameters are changed to δ_0, the state parameters will
evolve towards x_0(δ_0), along the stable manifold of x_0(δ_0).
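In practice, the control of Theorem 3.1 reduces to three parameter holds with prescribed switching times. The following sketch makes this explicit; it is a schematic implementation under the theorem's hypotheses, in which the right-hand side `rhs(x, delta)` and the switching times are assumed to be supplied by the user (e.g. estimated as described above).

```python
import numpy as np
from scipy.integrate import solve_ivp

def transfer_to_saddle(rhs, x_bar, d1, d2, d0, T0, T1, T):
    """Sketch of the control (3.67): hold delta_1 for T0 (reach a neighborhood of x_1(delta_1)),
    then delta_2 for T1 (reach the estimated crossing of W^s(x_0(delta_0))), then delta_0
    for a time T in [T2(eps), 2 T2(eps)]."""
    x = np.asarray(x_bar, dtype=float)
    for delta, horizon in ((d1, T0), (d2, T1), (d0, T)):
        sol = solve_ivp(lambda t, y: rhs(y, delta), (0.0, horizon), x, rtol=1e-8)
        x = sol.y[:, -1]
    return x   # should lie in B(x_0(delta_0), eps) if the hypotheses of Theorem 3.1 hold
```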
In the control procedure described in this section, some of the numerically computed steady
states, presented in Table 3.2, will be used.
According to the algorithm presented in [Per91], a 2nd-order approximation of the 6-dimensional
stable manifold of x_{2,SM}^3 has been found. The implicit equation of this 2nd-order approximation
of the stable manifold is given by (3.68). The coefficients are given with 5-digit precision:
S2(x_{2,SM}^3): −6.1033β − 0.0598p + 3.3969r − 0.2780φ − 0.7023βα − 0.3524βq − 0.5076βθ
− 0.0216α² − 0.5724αp + 0.0079αq + 2.3073αr − 0.1303αφ + 0.0381αθ − 0.3856pq − 0.0019pθ
+ 0.0158q² + 1.5925qr − 0.0906qφ + 0.1267qθ − 0.1525rθ + 0.0114φθ + 0.2537θ² = 0.255366   (3.68)
notation    x̃1         x̃̃1         x̃̃2         x_{1,SM}^3      x_{2,SM}^3
path        P1          P1          P2          P3              P3
δe(°)       −2.2        −2.2        −2.2        3.0532          3.2
δa(°)       −0.6881     −0.7        0.7         0               0
β(°)        −0.4424     −0.7027     0.7027      0               0
α(°)        23.166      23.923      23.923      7.3256          4.9681
p(°/s)      0           7.5543      −7.5543     0               0
q(°/s)      19.374      19.546      19.546      0               0
r(°/s)      7.8487      10.566      −10.566     0               0
φ(°)        67.946      61.605      −61.605     0               0
θ(°)        0           −18.777     −18.777     −30             −57.871
stability   as. stab.   as. stab.   as. stab.   saddle,         saddle,
                                                5-dim W^s_loc   6-dim W^s_loc
Table 3.2: Some of the computed steady states (5-digit precision)
The stable subspace of x_{2,SM}^3 is given by:
E^s(x_{2,SM}^3): p = 102.0027β + 56.7711r + 4.6461φ   (3.69)
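In the control procedure below, the crossing of the approximated stable manifold is detected by monitoring the residual of (3.68) along the trajectory. A direct transcription of (3.68) into a residual function is sketched here; this helper is my own, and it assumes the same state variables and units as in (3.68).

```python
def S2_residual(x):
    """Left-hand side of (3.68) minus 0.255366 for x = (beta, alpha, p, q, r, phi, theta).
    A sign change of this residual along a trajectory indicates a crossing of the 2nd-order
    approximation of W^s_loc(x_{2,SM}^3)."""
    beta, alpha, p, q, r, phi, theta = x
    val = (-6.1033*beta - 0.0598*p + 3.3969*r - 0.2780*phi - 0.7023*beta*alpha
           - 0.3524*beta*q - 0.5076*beta*theta - 0.0216*alpha**2 - 0.5724*alpha*p
           + 0.0079*alpha*q + 2.3073*alpha*r - 0.1303*alpha*phi + 0.0381*alpha*theta
           - 0.3856*p*q - 0.0019*p*theta + 0.0158*q**2 + 1.5925*q*r - 0.0906*q*phi
           + 0.1267*q*theta - 0.1525*r*theta + 0.0114*phi*theta + 0.2537*theta**2)
    return val - 0.255366
```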
According to the experimental flight data [ALF97], the "path capture" flight phase begins with
a flight of 3.7 s during which the pitch angle θ changes quickly from 0° to −50°. This flight is
followed by a short symmetric flight of 1.9 s during which θ ≈ −50°, after which θ grows from
−50° to −30° (3.8 s). This is the end of the "path capture" flight phase, and the vehicle reaches
the flight phase of "steady descent", which is a symmetric flight with θ = −30°. It will be shown
that all these components of the "path capture" phase can be realized in the framework of the
considered mathematical model, with adequate successive changes of the control angles δe and
δa.
The procedure for reproducing the "path capture" and "steady descent" flight phases in the
mathematical model of ALFLEX is the following:
x --(M1)--> x̃1 --(M2)--> x̃̃1 --(M3)--> x_{2,SM}^3 --(M4)--> x_{1,SM}^3   (3.70)
M1 Due to the fact that, at the instant of release from the helicopter, uncontrolled perturbations
may appear, the vehicle's state x at this moment is unknown. It is reasonable to admit that
the state x is one of the 90,000 computed steady states or, more generally, that it belongs to the
region of attraction of the asymptotically stable zero roll rate steady state x̃1. For this reason,
the first maneuver is that, at the instant of release, the control angles are fixed to δe = −2.2°,
δa = −0.6881°. Due to this maneuver, the vehicle is transferred from x to x̃1. In this new state
of the vehicle, the pitch angle θ is still equal to zero.
M2 The second maneuver (δe, δa): (−2.2°, −0.6881...°) → (−2.2°, −0.7°) along P1 transfers
the vehicle ALFLEX from x̃1 to x̃̃1. Due to this maneuver, the flight parameters evolve in
accordance with the first part of the experimental "path capture" flight phase, i.e. the pitch
angle θ changes from 0° to −18°.
For example, let x be the asymptotically stable steady state of P1 corresponding to δe = 7°
and δa = −10°, with a roll rate of p = 99.2 °/s and pitch angle θ = −37.6°.
The variations of roll rate p and pitch angle θ versus time during the steps M1 and M2 are shown in Figs.
3.17.1-3.17.2. The long transfer period (500 s) is due to the fact that an extreme case has been
considered (with a very high roll rate) and that a good stabilization in the state x̃̃1 was required.
Figure 3.17.1: The variation of p vs. time for the steps M1 and M2
Figure 3.17.2: The variation of θ vs. time for the steps M1 and M2
M3 In this step of the procedure, the vehicle will be transferred from the state x̃̃1 to the
symmetric flight x_{2,SM}^3, which has a pitch angle of −57°. In this transfer, the steady state x̃̃2
is also involved. The idea is to bring the state parameters of the vehicle close to the 6-dimensional
stable manifold W^s_loc(x_{2,SM}^3) of the SMD saddle point x_{2,SM}^3 by the maneuver (δe, δa):
(−2.2°, −0.7°) → (−2.2°, 0.7°), and, at the moment when the distance is sufficiently small, to
make the maneuver (δe, δa): (−2.2°, 0.7°) → (3.2°, 0°) in order to force the state parameters of
the vehicle to evolve towards the symmetric flight x_{2,SM}^3. When the state parameters recede from
the stable manifold of x_{2,SM}^3, the maneuver (δe, δa): (3.2°, 0°) → (−2.2°, −0.7°) is made, in order
to force the state parameters of the vehicle to evolve towards x̃̃1. More precisely, in this step
we have to execute three types of maneuvers, which follow each other in the order presented in
Table 3.3, composing a stadium of the step.
type   maneuver                                      waiting time after the maneuver has been undertaken
1.     (δe, δa): (−2.2°, −0.7°) → (−2.2°, 0.7°)      T1
2.     (δe, δa): (−2.2°, 0.7°) → (3.2°, 0°)          T2
3.     (δe, δa): (3.2°, 0°) → (−2.2°, −0.7°)         T3
Table 3.3: Maneuver types for M3
The maneuver of type 1 takes the vehicle towards the steady state x̃̃2. In this evolution, the
trajectory of the state parameters intersects the stable manifold W^s_loc(x_{2,SM}^3). Using E^s(x_{2,SM}^3) and
S2(x_{2,SM}^3), we can estimate the moment of time when the trajectory intersects W^s_loc(x_{2,SM}^3): T1
seconds after the maneuver has been undertaken. If at this instant the maneuver of type 2 is
applied, the vehicle is led along the stable manifold W^s_loc(x_{2,SM}^3) towards x_{2,SM}^3 for T2 seconds.
After this period, the state parameters recede from W^s_loc(x_{2,SM}^3), due to the fact that the exact
moment T1 of the intersection has only been estimated. For this reason, after the period T2 we
make the maneuver of type 3, in order to lead the state parameters of the vehicle towards x̃̃1. After
a very short period of T3 seconds, we begin a new stadium.
The step M3 is built up of four stadia and 12 maneuvers. The periods of waiting time after
each maneuver in each stadium are presented in Table 3.4.
stadium   T1^(i) (s)              T2^(i) (s)   T3^(i) (s)
(1)       0.122176344847711       11           1
(2)       0.419675154935004       12           0.5
(3)       0.3076460516793457      9            0
(4)       0.0026325111602317      9            −
Table 3.4: The periods of waiting time after each maneuver has been undertaken
The variations of the control parameters δe and δa during the step M3 are presented in Figs.
3.18.1-3.18.2. The evolution of the state parameters due to these maneuvers is presented in
Figs. 3.19.1-3.19.7.
Figure 3.18.1: The variation of δe vs. time for the step M3
Figure 3.18.2: The variation of δa vs. time for the step M3
Figure 3.19.1: The variation of β vs. time during the step M3
Figure 3.19.2: The variation of α vs. time during the step M3
Figure 3.19.3: The variation of p vs. time during the step M3
Figure 3.19.4: The variation of q vs. time during the step M3
Figure 3.19.5: The variation of r vs. time during the step M3
Figure 3.19.6: The variation of φ vs. time during the step M3
Figure 3.19.7: The variation of θ vs. time during the step M3
Remarks
1. In each stadium, during the period T2, when (δe, δa) = (3.2°, 0°), the trajectory of the
vehicle follows the stable manifold of x_{2,SM}^3, slowly approaching the steady state x_{2,SM}^3. At
the end of the fourth stadium, the state parameters reach the steady state x_{2,SM}^3.
2. The maneuvers of type 1 and 3 are needed because it is almost impossible to find the exact
moment when the trajectory of the state parameters intersects the stable manifold of x_{2,SM}^3.
M4 By the maneuver (δe, δa): (3.2°, 0°) → (3.0532°, 0°), the state parameters of the vehicle are
transferred to the symmetric flight x_{1,SM}^3. This maneuver is successful because the symmetric
flight x_{2,SM}^3 belongs to the 5-dimensional stable manifold of the symmetric flight x_{1,SM}^3. Due to
this maneuver, the pitch angle θ grows from −57° to −30°, and the vehicle arrives at the flight
phase of "steady descent".
The variations of the angle of attack α and pitch angle θ during this step are shown in Figs.
3.20.1-3.20.2. The step M4 takes place along the contour of symmetric flights of P3, thus the
other state parameters (β, p, q, r, φ) remain constantly equal to zero.
Figure 3.20.1: The variation of α during the step M4
Figure 3.20.2: The variation of θ during the step M4
Conclusions
1. In the framework of a simplified mathematical model, there exist 15 quick changes of
the control parameters ∆e and ∆a which lead the state parameters of the ALFLEX reentry
vehicle from a quasi-unknown state, in which it is at the moment of release, to the "steady
descent" flight state parameters.
2. The amplitudes of the variations of ∆e and ∆a are comparable to those reported in the
experimental flight data.
3. During the "path capture" and "steady descent" flight phases, Α, Β and Θ exhibit similar
perturbations as those reported in the experimental flight data.
4. The duration of the computed "path capture" and "steady descent" flight phases is much
longer than that reported in the experimental flight data. This may be due to the following:
– The initial state of the vehicle is quasi-unknown and, because of that, an extreme case
of an initial state with a very high roll rate was considered in the computation.
– The succession of changes of the control angles δe and δa presented in this section
may not be the only succession which controls the flight. Moreover, in this control
technique the rudder angle δr is kept equal to zero.
Chapter 4
Control procedures for Hopfield-type
neural networks using domains of
attraction
4.1 Continuous time Hopfield neural networks
4.1.1 Introduction
Consider the Hopfield-type neural network defined by the following system of nonlinear
differential equations [YHF99]:
ẋ_i = −a_i x_i + Σ_{j=1}^{n} T_ij g_j(x_j) + I_i,   i = 1, n   (4.1)
where a_i > 0, I_i are constants denoting the external input, T = (T_ij)_{n×n} is a constant matrix
referred to as the interconnection matrix, and g_i : R → R (i = 1, n) represent the neuron input-output
activations. In this chapter, unless otherwise mentioned, it is assumed that the functions g_i
are R-analytic and g_i(0) = 0, for i = 1, n.
For some of the results presented in this chapter, the following hypotheses will be used:
(B) The activation functions are bounded. Without loss of generality, we may suppose
that
|g_i(s)| ≤ 1 for any s ∈ R, i = 1, n
(if this is not so, one can consider the activation functions g_i / sup_{s∈R} |g_i(s)| and replace the matrix T
by the matrix (T_ij sup_{s∈R} |g_j(s)|)_{n×n}).
(M) The activation functions are increasing and have bounded derivatives. More precisely,
there exist k_i > 0 such that 0 < g_i′(s) ≤ k_i for any s ∈ R, i = 1, n. We denote
K = diag(k_1, k_2, ..., k_n).
The system (4.1) can be written in the matrix form:
ẋ = Ax + T g(x) + I   (4.2)
where x = (x_1, x_2, ..., x_n)^T ∈ R^n, A = diag(−a_1, ..., −a_n) ∈ M_{n×n}, I = (I_1, ..., I_n)^T ∈ R^n and
g : R^n → R^n is given by g(x) = (g_1(x_1), g_2(x_2), ..., g_n(x_n))^T.
Let f : R^n × R^n → R^n be the function given by
f(x, I) = Ax + T g(x) + I
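A minimal numerical illustration of (4.2) is sketched below, with tanh activations (which satisfy hypotheses (B) and (M) with k_i = 1); the particular a_i, T and I are illustrative values of my own, not taken from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

a = np.array([1.0, 1.2])
T = np.array([[0.5, -1.0],
              [2.0,  0.3]])
I = np.array([0.1, -0.2])
g = np.tanh

def hopfield_rhs(t, x):
    # x_dot = A x + T g(x) + I with A = diag(-a)
    return -a * x + T @ g(x) + I

sol = solve_ivp(hopfield_rhs, (0.0, 50.0), np.array([0.5, -0.5]), rtol=1e-9)
x_inf = sol.y[:, -1]
print(x_inf, -a * x_inf + T @ g(x_inf) + I)   # residual of the steady state equation (4.3)
```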
Neural networks like (4.1) have been considered in [FT95, TH86]. Existence and uniqueness
of the steady state of Hopfield-type neural networks have been studied in [CZW04] and
the references therein. Stability, especially global exponential stability properties of neural
networks of this kind have been studied in [Cao04, CT01, CZW04, CG83, DFK91, Din89,
FT95, Ura89, YHF99] using single Lyapunov functions and in [KS93, LMS91] using vector
Lyapunov functions.
To solve problems of optimization, neural control and signal processing, Hopfield-type neural
networks have to be designed to exhibit for an input only one globally exponentially stable
steady state [CZW04]. On the other hand, if neural networks are used to analyze associative
memories, several locally exponentially stable steady states are desired for one input, as they
store information and constitute distributed and parallel neural memory networks. In this case,
the purpose of the qualitative analysis is the study of the locally exponentially stable steady
states (existence, number, regions of attraction) so as to ensure the recall capability of the
models.
Some results on the estimation of the local exponential convergence rate and of the regions of
attraction, in the case of Hopfield-type neural networks, are given in [Cao04, CT01, YHF99].
Our aim is to analyze, in the case of Hopfield-type neural networks, the following problems:
Which state x ∈ R^n can be a steady state? When does the system (4.1) have one or several
steady states for a given input I? How do the steady states of (4.1) depend on the input? What
are the conditions for the global or local exponential stability of the steady states? How can
the region of attraction of a steady state be found or estimated? When and how can a configuration of
steady states which corresponds to an external input I′ be transferred into another configuration
of steady states corresponding to another external input I″?
The answers to these questions can play an important role in the design and maneuvering of
Hopfield-type neural networks.
Some of the results of this section have been published in [KBB05b].
4.1.2 Steady states
Definition 4.1. A steady state x = (x1 , x2 , ..., xn )T of (4.2) corresponding to the external input
I = (I1 , I2 , ..., In)T is a solution of the equation:
Ax + T g(x) + I = 0
(4.3)
For a given external input vector I = (I_1, I_2, ..., I_n)^T ∈ R^n the system (4.3) may have one solution,
several solutions, or it may happen that it has no solutions. On the other hand, the following
statement holds:
Theorem 4.1. For any state x = (x_1, x_2, ..., x_n)^T ∈ R^n there exists a unique external input
I = (I_1, I_2, ..., I_n)^T ∈ R^n such that x is a steady state of (4.1) corresponding to the input I.
Proof. For a given state x = (x1 , x2 , ..., xn)T Î Rn the external input vector is given by the
formula
I = -Ax - T g(x)
Remark 4.1. The "external input function" I : Rn ® Rn defined by
I(x) = -Ax - T g(x)
is an R-analytic function.
The set I defined by
I = {I ∈ Rn : ∃ x ∈ Rn such that I = −Ax − T g(x)}
is the collection of those inputs I for which the system (4.3) has at least one solution. If I = Rn then for any input I ∈ Rn the system (4.3) has at least one solution. If I is strictly included in Rn then there exist input vectors I for which system (4.3) has no solution.
Let I⁰ = (I⁰_1, I⁰_2, ..., I⁰_n)^T ∈ I be an external input and x⁰ = (x⁰_1, x⁰_2, ..., x⁰_n)^T ∈ Rn a steady state corresponding to this input, i.e. I(x⁰) = I⁰.
Theorem 4.2. If the matrix A + T Dg(x0 ) is non-singular then there exists a unique maximal
domain U0 Ì Rn (i.e. U0 is a maximal open and connected set) containing x0 , a unique maximal
domain V0 Ì Rn containing I 0 and a unique bijective R-analytic function j : V0 ® U0 having
the following properties:
i. j(I 0 ) = x0 ;
ii. -Aj(I) - T g(j(I)) = I for any I Î V0 ;
iii. the matrix A + T Dg(x) is non-singular on U0 .
Proof. Direct consequence of the implicit function theorem and the continuous dependence of
det(A + T Dg(x)) on x.
Remark 4.2. The function j given by Theorem 4.2 is an analytic path of steady states for (4.2).
(see Subsection 1.1.2)
Theorem 4.3. If D is a rectangle in Rn , i.e. for i = 1, n there exist Αi , Βi Î R, Αi < Βi such that
D = (Α1 , Β1) ´ (Α2 , Β2) ´ ... ´ (Αn , Βn) and det(A + T Dg(x)) ¹ 0 for any x Î D then the function
I|D (the restriction of the external input function to D) is injective.
Proof. Let x′, x″ ∈ D, x′ ≠ x″. For every i = 1, n there exists c_i ∈ (x′_i, x″_i) such that g_i(x′_i) − g_i(x″_i) = g′_i(c_i)(x′_i − x″_i). Therefore, we have
I(x′) − I(x″) = −A(x′ − x″) − T(g(x′) − g(x″)) = −(A + T Dg(c))(x′ − x″)   (4.4)
where c = (c_1, c_2, ..., c_n)^T ∈ (x′_1, x″_1) × (x′_2, x″_2) × ... × (x′_n, x″_n) ⊂ D. Hence, the matrix A + T Dg(c) is non-singular, which guarantees that I(x′) ≠ I(x″). Thus, I|_D is injective.
Corollary 4.1. Let D ⊂ Rn be a rectangle such that det(A + T Dg(x)) ≠ 0 for any x ∈ D. Then for any I ∈ I(D) the system (4.2) has a unique steady state in D.
Corollary 4.2. Let D ⊂ Rn be a rectangle. If g′_i(s) > 0 for any s ∈ R and i = 1, n and
T_ii − a_i / g′_i(x_i) + ∑_{j=1, j≠i}^{n} |T_ji| < 0   for all i = 1, n and all x ∈ D,
then for any I ∈ I(D) the system (4.2) has a unique steady state in D. If I_i > 0 for any i = 1, n then the coordinates of the steady state are positive.
In [CZW04], it has been shown that, under certain conditions (similar to those from Corollary 4.2), for a given input vector I ∈ Rn the neural network defined by (4.2) has a unique steady state in Rn.
Theorem 4.4. Under hypothesis (B), for any input vector I ∈ Rn the following statements hold:
i. There exists at least one steady state of (4.2) (corresponding to I) in the rectangle D = [−M_1, M_1] × [−M_2, M_2] × ... × [−M_n, M_n] of Rn, where
M_i = (1/a_i)(|I_i| + ∑_{j=1}^{n} |T_ij|)   for any i = 1, n   (4.5)
ii. Every steady state of (4.2), corresponding to I, belongs to the rectangle D defined above.
iii. If in addition det(A + T Dg(x)) ≠ 0 for any x ∈ D then the system (4.2) has a unique steady state, corresponding to I, and it belongs to D.
Proof. The steady states of (4.2) corresponding to I are given by equation (4.3), which is equivalent to
x = −A^{-1}(I + T g(x))   (4.6)
Let h : Rn → Rn be the function defined by h(x) = −A^{-1}(I + T g(x)). One has
|h_i(x)| = |−(1/a_i)(I_i + ∑_{j=1}^{n} T_ij g_j(x_j))| ≤ (1/a_i)(|I_i| + ∑_{j=1}^{n} |T_ij|) = M_i   for all x ∈ Rn, i = 1, n   (4.7)
Therefore, h(Rn) ⊂ D, which proves ii. Moreover, one gets that h(D) ⊂ D, and as h is a continuous function, Brouwer's fixed point theorem guarantees the existence of at least one steady state of (4.2) corresponding to I in D, so statement i holds. Statement iii follows directly from Corollary 4.1.
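The proof above suggests a simple numerical procedure, sketched below (this is an illustration added here, not part of the original argument; the parameters are hypothetical). One computes the bounds M_i from (4.5) and iterates the map h(x) = −A^{-1}(I + T g(x)); if the iteration converges, its limit is a steady state, and it necessarily lies in the rectangle D.

import numpy as np

a = np.array([1.0, 1.0])                  # a_i > 0 (hypothetical)
T = np.array([[0.5, 0.3], [-0.2, 0.4]])   # interconnection matrix (hypothetical)
I = np.array([0.1, -0.2])                 # external input

# Rectangle D = prod [-M_i, M_i] from (4.5), with |g_j| <= 1
M = (np.abs(I) + np.abs(T).sum(axis=1)) / a
print("bounds M_i:", M)

# Fixed-point iteration of h(x) = -A^{-1}(I + T g(x)) = (I + T tanh(x)) / a
x = np.zeros(2)
for _ in range(200):
    x = (I + T @ np.tanh(x)) / a
print("approximate steady state:", x, "lies in D:", bool(np.all(np.abs(x) <= M)))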
Let C be the set defined by
C = {x ∈ Rn : det(A + T Dg(x)) = 0}
and let G = Rn \ C. The set G is open and for any x ∈ G we have det(A + T Dg(x)) ≠ 0.
Let {G_α}_α be the set of the open connected components of G, i.e. for any α the set G_α ≠ ∅ is open and connected, ∪_α G_α = G and G_α′ ∩ G_α″ = ∅ if α′ ≠ α″.
Theorem 4.5. If G_α is a rectangle in Rn then there exists a unique R-analytic function φ_α : H_α → G_α having the following properties:
i. H_α = I(G_α), where I(x) = −Ax − T g(x) for any x ∈ Rn;
ii. −Aφ_α(I) − T g(φ_α(I)) = I for any I ∈ H_α;
iii. the matrix Dφ_α(I) is non-singular on H_α.
Proof. According to Theorem 4.3, I|_{G_α} is injective. Consider H_α = I(G_α) and remark that I|_{G_α} is an R-analytic bijection. Now we can consider φ_α = (I|_{G_α})^{-1}, which satisfies ii. and iii.
Remark 4.3. For every Α, the function jΑ given by Theorem 4.5 is an analytic path of steady
states of (4.2). (see Subsection 1.1.2)
Remark 4.4. If the set GΑ is not a rectangle in Rn , consider DΑ the largest rectangle included
in GΑ . For the rectangle DΑ the statements of Theorem 4.5 are fulfilled, i.e. there exists a unique
analytic path of steady states jΑ : I(DΑ ) ® DΑ .
It has been shown that Rn can be decomposed as Rn = (∪_α G_α) ∪ C, and sufficient conditions have been found assuring that for an input vector I in a certain set H_α ⊂ Rn there exists a unique steady state in the largest rectangle D_α included in G_α. This is the general situation, in which several paths of steady states exist. This kind of result can be important in the design of Hopfield-type neural networks used to analyze associative memories.
For every ε ∈ {±1}^n we define the rectangle D_ε = J(ε_1) × J(ε_2) × ... × J(ε_n), where J(1) = (1, ∞) and J(−1) = (−∞, −1).
The following theorem holds (for non-analytic activation functions):
Theorem 4.6. In addition to hypothesis (B), suppose that the functions g_i, i = 1, n satisfy
g_i(s) = 1 if s ≥ 1 and g_i(s) = −1 if s ≤ −1   (4.8)
If the external input I ∈ Rn satisfies
|I_i| < T_ii − a_i − ∑_{j≠i} |T_ij|   for all i = 1, n   (4.9)
then in every rectangle D_ε, ε ∈ {±1}^n, there exists a unique steady state of (4.2) corresponding to I.
Proof. Let I ∈ Rn be an external input which satisfies (4.9) and let ε ∈ {±1}^n. We consider x^{I,ε} = −A^{-1}(Tε + I) ∈ Rn and we prove that x^{I,ε} ∈ D_ε. Indeed, for any i = 1, n one has:
ε_i x^{I,ε}_i = (ε_i/a_i)(∑_{j=1}^{n} T_ij ε_j + I_i) = (1/a_i)(T_ii + ∑_{j≠i} T_ij ε_i ε_j + ε_i I_i) ≥ (1/a_i)(T_ii − ∑_{j≠i} |T_ij| − |I_i|) > 1
and therefore x^{I,ε}_i ∈ J(ε_i) for any i = 1, n, thus x^{I,ε} ∈ D_ε. It is easy to see that x^{I,ε} is a steady state of (4.2) corresponding to the input I in the rectangle D_ε and that there are no other steady states of (4.2) in D_ε.
The following theorem holds:
Theorem 4.7. In addition to hypothesis (B), suppose that there exists α ∈ (0, 1) such that the functions g_i, i = 1, n satisfy
g_i(s) ≥ α if s ≥ 1 and g_i(s) ≤ −α if s ≤ −1   (4.10)
If the external input I ∈ Rn satisfies
|I_i| < T_ii α − a_i − ∑_{j≠i} |T_ij|   for all i = 1, n   (4.11)
then the following statements hold:
i. In every rectangle D_ε, ε ∈ {±1}^n, there exists at least one steady state of (4.2) corresponding to the input I.
ii. Every D_ε, ε ∈ {±1}^n, is invariant to the flow of system (4.2).
Proof. Let I be an input satisfying (4.11) and ε ∈ {±1}^n.
i. Consider the function h : Rn → D defined by h(x) = −A^{-1}(I + T g(x)) and the rectangle D given in Theorem 4.4. For x ∈ D_ε we have ε_i x_i ≥ 1 for any i = 1, n and therefore
ε_i h_i(x) = (ε_i/a_i)(T_ii g_i(x_i) + ∑_{j≠i} T_ij g_j(x_j) + I_i) ≥ (1/a_i)(T_ii α − ∑_{j≠i} |T_ij| − |I_i|) > 1
This means that h_i(x) ∈ J(ε_i) for any i = 1, n and therefore h(x) ∈ D_ε. We have just proved that h(D_ε) ⊂ D_ε ∩ D, and Brouwer's fixed point theorem guarantees the existence of at least one steady state of (4.2) corresponding to the input I in D_ε ∩ D.
ii. Let x⁰ ∈ D_ε. Suppose that there exist t_0 ≥ 0 and i ∈ {1, ..., n} such that x_i(t_0) = ε_i, where x_i(t) = x_i(t; x⁰, I). Consider y_i(t) = ε_i(−a_i x_i(t) + ∑_{j=1}^{n} T_ij g_j(x_j(t)) + I_i). Based on (4.11) and hypothesis (B), we have that
y_i(t_0) = ε_i(−a_i ε_i + T_ii g_i(ε_i) + ∑_{j≠i} T_ij g_j(x_j(t_0)) + I_i) ≥ −a_i + T_ii α − ∑_{j≠i} |T_ij| − |I_i| > 0
Therefore, there exists t_1 > t_0 such that y_i(t) > 0 for any t ∈ [t_0, t_1]. This implies that ε_i ẋ_i(t) = y_i(t) > 0 for any t ∈ [t_0, t_1] and therefore the function ε_i x_i is strictly increasing on [t_0, t_1]. Hence ε_i x_i(t) > ε_i x_i(t_0) = ε_i² = 1 for any t ∈ (t_0, t_1]. This means that x_i(t) ∈ J(ε_i) for any t ∈ (t_0, t_1]. It follows that the solution x(t; x⁰, I), x⁰ ∈ D_ε, remains in D_ε for any t ≥ 0.
Remark 4.5. According to Theorem 4.4, if there exists an input I satisfying
|I_i| ≤ a_i − ∑_{j=1}^{n} |T_ij|   for all i = 1, n
then there exists at least one steady state of (4.2) corresponding to I belonging to the rectangle [−1, 1]^n, and there are no other steady states corresponding to I outside this rectangle. The existence of such an input implies that a_i > |T_ii| for any i = 1, n.
On the other hand, Theorems 4.6 and 4.7 guarantee that if there exist α ∈ (0, 1] and an input I satisfying
|I_i| < T_ii α − a_i − ∑_{j≠i} |T_ij|   for all i = 1, n
then there exist 2^n steady states corresponding to I outside the rectangle [−1, 1]^n (one in every rectangle D_ε). The existence of such an input implies that a_i < T_ii α for any i = 1, n.
It is easy to see that the two conditions above exclude each other.
4.1.3 Exponential stability of the steady states
The following theorem gives sufficient conditions for global exponential stability of the steady
state of a Hopfield-type neural network:
Theorem 4.8. (see [YHF99] Thm. 4) If hypothesis (M) holds and there exist constants α_i > 0 (i = 1, n) such that the matrix M K^{-1} A + (1/2)(MT + (MT)^T) (where M = diag(α_1, α_2, ..., α_n)) is negative definite, then for any external input I ∈ Rn the neural network (4.2) has a unique steady state x ∈ Rn and this steady state is globally exponentially stable.
Remark 4.6. If the conditions of Theorem 4.8 hold, then I = Rn, C = ∅, G = Rn and the system (4.2) has a unique analytic path of globally exponentially stable steady states.
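The negative-definiteness condition of Theorem 4.8 is straightforward to test numerically for a trial choice of the constants α_i. The following minimal sketch (an illustration with hypothetical parameters, not part of the original text) builds the test matrix and checks its eigenvalues.

import numpy as np

a = np.array([2.0, 2.5])                  # a_i (hypothetical)
k = np.array([1.0, 1.0])                  # bounds k_i on g_i' from hypothesis (M)
T = np.array([[0.4, -0.3], [0.2, 0.5]])   # interconnection matrix (hypothetical)
alpha = np.array([1.0, 1.0])              # trial constants alpha_i

A = np.diag(-a)
K = np.diag(k)
M = np.diag(alpha)

# Test matrix M K^{-1} A + (1/2)(MT + (MT)^T); it is symmetric by construction.
S = M @ np.linalg.inv(K) @ A + 0.5 * (M @ T + (M @ T).T)
eigs = np.linalg.eigvalsh(S)
print("eigenvalues of the test matrix:", eigs)
print("global exponential stability guaranteed:", bool(np.all(eigs < 0)))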
Concerning the local exponential stability of the steady states of a Hopfield-type neural network,
the following results hold:
Theorem 4.9. If for an external input I Î Rn the state x Î Rn is a steady state of (4.2) and the
real parts of the eigenvalues of the matrix A + T Dg(x) are negative, then the steady state x is
locally exponentially stable.
Corollary 4.3. Let D ⊂ Rn be a rectangle. If the real parts of the eigenvalues of the matrix A + T Dg(x) are negative for any x ∈ D, then for any I ∈ I(D) the system (4.2) has a unique locally exponentially stable steady state, which lies in D.
Theorem 4.10. Suppose that the conditions of Theorem 4.6 are satisfied. Let I ∈ Rn be an input satisfying (4.9) and ε ∈ {±1}^n. The steady state x^{I,ε} of (4.2) corresponding to I and belonging to D_ε is exponentially stable, and its region of attraction includes the closure of D_ε.
Proof. The exponential stability of x^{I,ε} results from the fact that on D_ε the system (4.2) is a nonhomogeneous linear system and the Jacobi matrix of (4.2) at x^{I,ε} is the negative definite matrix A.
Let us show that D_ε ⊂ D_a(x^{I,ε}). For this purpose, let x⁰ ∈ D_ε and y(t) = x(t; x⁰, I) − x^{I,ε}. The function y(t) satisfies the equation ẏ = Ay at least on an interval [0, t_0], t_0 > 0. Hence, one gets that x(t; x⁰, I) = x^{I,ε} + e^{At}(x⁰ − x^{I,ε}) for any t ∈ [0, t_0]. We obtain that the solution x(t; x⁰, I) remains in the rectangle of Rn determined by the points x⁰ and x^{I,ε}, therefore it remains in the rectangle D_ε, for any t ≥ 0. Moreover, x(t; x⁰, I) → x^{I,ε} as t → ∞, therefore x⁰ ∈ D_a(x^{I,ε}).
We will finally show that ∂D_ε ⊂ D_a(x^{I,ε}). For this, let x⁰ ∈ ∂D_ε. We consider the case x⁰ = (ε_1, ε_2, ..., ε_p, x⁰_{p+1}, ..., x⁰_n)^T, where p ∈ {1, 2, ..., n} and x⁰_i ∈ J(ε_i) for any i = p+1, n. There exists t_0 > 0 such that x_i(t; x⁰, I) ∈ J(ε_i) for any t ∈ [0, t_0] and any i = p+1, n.
For i = 1, p let y_i(t) = ε_i(−a_i x_i(t) + ∑_{j=1}^{n} T_ij g_j(x_j(t)) + I_i). Based on (4.8), (4.9) and hypothesis (B), we have that
y_i(0) = ε_i(−a_i ε_i + T_ii ε_i + ∑_{j≠i} T_ij g_j(x⁰_j) + I_i) ≥ −a_i + T_ii − ∑_{j≠i} |T_ij| − |I_i| > 0
Therefore, there exists t_1 ∈ (0, t_0] such that y_i(t) > 0 for any t ∈ [0, t_1] and any i = 1, p. This implies that ε_i ẋ_i(t) = y_i(t) > 0 for any t ∈ [0, t_1]. Hence, the function ε_i x_i is strictly increasing on [0, t_1]. It follows that ε_i x_i(t) > ε_i x_i(0) = ε_i² = 1 for any t ∈ (0, t_1] and i = 1, p. This means that x_i(t) ∈ J(ε_i) for any t ∈ (0, t_1] and i = 1, p.
Thus, the solution x(t; x⁰, I) enters the rectangle D_ε ⊂ D_a(x^{I,ε}), so x⁰ ∈ D_a(x^{I,ε}).
Theorem 4.11. Assume that the conditions of Theorem 4.7 are fulfilled and consider an external input I ∈ Rn satisfying (4.11). If |g′_i(s)| < a_i / ∑_{j=1}^{n} |T_ji| for any |s| ≥ 1 and i = 1, n, then the steady state of (4.2) corresponding to the input I which lies in the rectangle D_ε, ε ∈ {±1}^n, is unique, it is exponentially stable and its region of attraction includes D_ε.
Proof. Consider β_i such that |g′_i(s)| ≤ β_i < a_i / ∑_{j=1}^{n} |T_ji| for any |s| ≥ 1 and i = 1, n. We will first show that the steady state of (4.2) corresponding to the input I which lies in D_ε is unique. Suppose the contrary, i.e. there exist x, y ∈ D_ε, x ≠ y, such that x = h(x) and y = h(y), where the function h is defined by h(z) = −A^{-1}(I + T g(z)). One has:
a_i |x_i − y_i| = |∑_{j=1}^{n} T_ij (g_j(x_j) − g_j(y_j))| ≤ ∑_{j=1}^{n} |T_ij| β_j |x_j − y_j|   for all i = 1, n
Therefore,
∑_{i=1}^{n} a_i |x_i − y_i| ≤ ∑_{i=1}^{n} ∑_{j=1}^{n} |T_ij| β_j |x_j − y_j| < ∑_{j=1}^{n} a_j |x_j − y_j|
which is absurd. Therefore, there exists a unique steady state of (4.2) corresponding to the input I which lies in the rectangle D_ε. It will be denoted by x^{I,ε}.
Let us prove that x^{I,ε} is exponentially stable and that its region of attraction includes D_ε. Let x⁰ ∈ D_ε. From Theorem 4.7 we get that x(t; x⁰, I) ∈ D_ε for any t ≥ 0. Consider the function V : R_+ → R_+ defined by
V(t) = ∑_{i=1}^{n} |x_i(t; x⁰, I) − x^{I,ε}_i|   for all t ≥ 0
The function V is differentiable on (0, ∞) and its derivative satisfies
V′(t) = ∑_{i=1}^{n} sgn(x_i(t) − x^{I,ε}_i) ẋ_i(t)
= ∑_{i=1}^{n} sgn(x_i(t) − x^{I,ε}_i)(−a_i x_i(t) + ∑_{j=1}^{n} T_ij g_j(x_j(t)) + I_i)
= ∑_{i=1}^{n} sgn(x_i(t) − x^{I,ε}_i)(−a_i (x_i(t) − x^{I,ε}_i) + ∑_{j=1}^{n} T_ij (g_j(x_j(t)) − g_j(x^{I,ε}_j)))
= −∑_{i=1}^{n} a_i |x_i(t) − x^{I,ε}_i| + ∑_{i=1}^{n} sgn(x_i(t) − x^{I,ε}_i) ∑_{j=1}^{n} T_ij (g_j(x_j(t)) − g_j(x^{I,ε}_j))
≤ −∑_{i=1}^{n} a_i |x_i(t) − x^{I,ε}_i| + ∑_{i=1}^{n} ∑_{j=1}^{n} |T_ij| β_j |x_j(t) − x^{I,ε}_j|
= −∑_{i=1}^{n} (a_i − β_i ∑_{j=1}^{n} |T_ji|) |x_i(t) − x^{I,ε}_i| ≤ −kV(t)
where k = min_{i=1,n} (a_i − β_i ∑_{j=1}^{n} |T_ji|) > 0. Therefore, we have V(t) ≤ e^{−kt} V(0). Hence V(t) → 0 exponentially as t → ∞. This means that x(t; x⁰, I) → x^{I,ε} as t → ∞. Thus x^{I,ε} is exponentially stable and its region of attraction includes D_ε.
Remark 4.7. The conditions from Theorems 4.7 and 4.11 are not too restrictive. Indeed, in addition to hypothesis (B), the activation functions g_i are usually chosen to satisfy the following conditions: g_i(s) → 1 as s → ∞, g_i(s) → −1 as s → −∞ and g′_i(s) → 0 as s → ±∞. Hence, for α ∈ (0, 1) there exists M > 0 such that for any i = 1, n one has:
• g_i(s) ≥ α if s ≥ M, g_i(s) ≤ −α if s ≤ −M
• |g′_i(s)| < a_i / ∑_{j=1}^{n} |T_ji| for |s| ≥ M.
If M ≤ 1, then the conditions from Theorem 4.7 hold for the activation functions g_i.
If M > 1, consider the change of coordinates y = (1/M)x in the system (4.2). System (4.2) becomes:
ẏ = Ay + (1/M) T g(My) + (1/M) I   (4.12)
which describes a neural network having the activation functions g̃_i(y) = (1/M) g_i(My) and the external input Ĩ = (1/M) I. The functions g̃_i satisfy conditions similar to those from Theorem 4.7:
• g̃_i(s) ≥ α̃ if s ≥ 1 and g̃_i(s) ≤ −α̃ if s ≤ −1, where α̃ = α/M ∈ (0, 1)
• |g̃′_i(s)| < a_i / ∑_{j=1}^{n} |T_ji| for |s| ≥ 1
Therefore, Theorems 4.7 and 4.11 can be applied to the system (4.12). The results obtained can be transposed to system (4.2). In this manner, we obtain:
Corollary 4.4. In addition to hypothesis (B), suppose that there exist α ∈ (0, 1) and M > 0 such that the functions g_i, i = 1, n satisfy
• g_i(s) ≥ α if s ≥ M, g_i(s) ≤ −α if s ≤ −M
• |g′_i(s)| < a_i / ∑_{j=1}^{n} |T_ji| for |s| ≥ M.
Let ε ∈ {±1}^n and consider the rectangle MD_ε = MJ(ε_1) × MJ(ε_2) × ... × MJ(ε_n), where MJ(1) = (M, ∞) and MJ(−1) = (−∞, −M).
For an input I ∈ Rn satisfying
|I_i| < T_ii α − M(a_i + ∑_{j≠i} |T_ij|)   for all i = 1, n   (4.13)
there exists a unique steady state of (4.2) corresponding to I which lies in the rectangle MD_ε; it is exponentially stable and its region of attraction includes MD_ε.
Theorem 4.12. (see [YHF99] Cor.) Under hypothesis (M), if the state x ∈ Rn is a steady state of (4.2) corresponding to an input I ∈ Rn and the matrix Dg(x)^{-1} A + (1/2)(T + T^T) is negative definite, then the steady state x is locally exponentially stable.
Other conditions for the global or local exponential stability of a steady state of a Hopfield
neural network and the estimation of its region of attraction are given in [Cao04, CT01].
The following theorem is a characterization of the region of attraction of a locally exponentially
stable steady state using the optimal Lyapunov function defined in Chapter 1.
Theorem 4.13. If for an external input I* ∈ Rn the state x* ∈ Rn is a steady state of (4.2) and the real parts of the eigenvalues of the matrix A + T Dg(x*) are negative, then the region of attraction D_a(x*) of x* coincides with the natural domain of analyticity of the R-analytic function V defined by
⟨∇V(x), f(x, I*)⟩ = −‖x − x*‖²,   V(x*) = 0   (4.14)
The function V is strictly positive on D_a(x*) \ {x*} and V(x) → ∞ as x → y, y ∈ ∂D_a(x*), or as ‖x‖ → ∞.
Remark 4.8. In the conditions of Theorem 4.13, the region of attraction D_a(x*) satisfies D_a(x*) = D_a(0) + x*, where D_a(0) is the region of attraction of the steady state y = 0 of the system
ẏ = Ay + T h(y)   (4.15)
where h : Rn → Rn is defined by h(y) = g(x* + y) − g(x*). Therefore, in order to find the region of attraction D_a(x*) it is sufficient to find the region of attraction D_a(0) of the zero solution of (4.15). The methods of approximation of the region of attraction described in Chapter 1 can be successfully applied.
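Relation (4.14) also implies that, along a trajectory converging to x*, d/dt V(x(t)) = −‖x(t) − x*‖², so for x⁰ in the region of attraction V(x⁰) = ∫_0^∞ ‖x(t; x⁰, I*) − x*‖² dt. The minimal Python sketch below (an illustration added here, not the method of the thesis; it uses the network (4.19) of Example 4.2 below with I* = 0, x* = (ln 4, ln 4)^T, and truncates the integral at a finite horizon) evaluates this integral numerically; large values signal that x⁰ is close to the boundary ∂D_a(x*).

import numpy as np
from scipy.integrate import solve_ivp

c = 17 * np.log(4) / 15
xstar = np.array([np.log(4), np.log(4)])

def f(t, x):
    # vector field of (4.19) with I = (0, 0)
    return np.array([-x[0] + c * np.tanh(x[1]),
                     -x[1] + c * np.tanh(x[0])])

def V_estimate(x0, horizon=200.0):
    """Truncated value of V(x0) = int_0^horizon ||x(t) - x*||^2 dt."""
    aug = lambda t, z: np.concatenate((f(t, z[:2]), [np.sum((z[:2] - xstar) ** 2)]))
    sol = solve_ivp(aug, (0.0, horizon), np.concatenate((x0, [0.0])), rtol=1e-8)
    return sol.y[2, -1]

print(V_estimate(np.array([1.0, 1.0])))   # moderate value: well inside D_a(x*)
print(V_estimate(np.array([0.1, 0.1])))   # larger value: close to the unstable state (0,0)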
4.1.4 Controllability
Definition 4.2. A change at a certain moment of the external input from I′ to I″ is called a maneuver and it is denoted by I′ → I″. The maneuver I′ → I″ made at t = t_0 is successful on the path φ_α : H_α = I(D_α) → D_α if I′, I″ ∈ H_α and if the solution of the initial value problem
ẋ = Ax + T g(x) + I″,   x(t_0) = φ_α(I′)   (4.16)
tends to φ_α(I″) as t → ∞.
The system (4.2) is controllable along a path of steady states if any two steady states belonging to the path can be transferred one into the other by a finite number of successive maneuvers.
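A maneuver can be tested numerically by integrating (4.16): start at the steady state corresponding to the old input and integrate the system with the new input; the maneuver is successful if the trajectory converges to the steady state corresponding to the new input. A minimal sketch (illustrative only; it uses a scalar network of the form studied in Example 4.1 below, with hypothetical values a = 1, T = 2) follows.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

a, T = 1.0, 2.0                      # hypothetical values with T > a (several paths)

def steady_state(I, bracket):
    """Steady state in a given bracket: root of -a*x + T*tanh(x) + I = 0."""
    return brentq(lambda x: -a * x + T * np.tanh(x) + I, *bracket)

I_old, I_new = 0.8, 0.9              # two inputs on the same (positive) path
x_old = steady_state(I_old, (0.5, 10.0))
x_new = steady_state(I_new, (0.5, 10.0))

# Maneuver I_old -> I_new: integrate with the new input, starting from x_old.
sol = solve_ivp(lambda t, x: -a * x + T * np.tanh(x) + I_new,
                (0.0, 60.0), [x_old], rtol=1e-9)
print("final state:", sol.y[0, -1], "target:", x_new)   # successful if they agree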
Remark 4.9. If the conditions of Theorem 4.8 hold then the system (4.2) has a unique analytic path of globally exponentially stable steady states φ : Rn → Rn and, consequently, any maneuver I′ → I″ is successful on the path φ; therefore, (4.2) is controllable.
If the steady states jΑ (I) of the path jΑ are only locally exponentially stable, then it may happen
that some maneuvers are not successful along this path. In such cases, it is appropriate to use
the following result:
Theorem 4.14. For two steady states φ_α(I*) and φ_α(I**) belonging to the R-analytic path φ_α of locally exponentially stable steady states of (4.2), there exists a finite number of values of the external input I¹, I², ..., I^p ∈ H_α such that all the maneuvers
I* → I¹ → I² → ... → I^p → I**   (4.17)
are successful on the path φ_α.
Remark 4.10. Theorem 4.14 states that the system (4.2) is controllable along an analytic path φ_α of locally exponentially stable steady states. In fact, the transfer from a steady state φ_α(I*) to a steady state φ_α(I**) is made through the regions of attraction of the states φ_α(I¹), φ_α(I²), ..., φ_α(I^p), φ_α(I**).
Remark 4.11. If ∩_{α∈Γ} H_α ≠ ∅ for a certain set Γ of indexes α and the paths φ_α : ∩_{α∈Γ} H_α → D_α (α ∈ Γ) consist of locally exponentially stable steady states, then for two configurations of steady states {φ_α(I*)}_{α∈Γ} and {φ_α(I**)}_{α∈Γ}, where I*, I** ∈ ∩_{α∈Γ} H_α, there exists a finite number of external input vectors I¹, I², ..., I^p ∈ ∩_{α∈Γ} H_α such that the maneuvers
I* → I¹ → I² → ... → I^p → I**
transfer the configuration {φ_α(I*)}_{α∈Γ} into the configuration {φ_α(I**)}_{α∈Γ}.
4.1.5 Examples
Example 4.1. Consider the one-dimensional Hopfield-type neural network:
ẋ = −ax + T tanh x + I,   x ∈ R   (4.18)
where a > 0, T and I are constants. The steady states of (4.18) corresponding to an external
input I are the solutions of the equation
-ax + T tanh x + I = 0
For a given state x Î R the external input I(x) for which x is a steady state of the system (4.18)
is given by
I(x) = ax - T tanh x
1. If T < a then I′(x) > 0 for x ∈ R and lim_{x→−∞} I(x) = −∞, lim_{x→∞} I(x) = ∞. It follows that I = R and for any I⁰ ∈ R there exists a unique x⁰ ∈ R such that −ax⁰ + T tanh x⁰ + I⁰ = 0. In this case, C = ∅, G = R and there exists a unique analytic path of globally exponentially stable steady states φ : R → R.
2. If T = a then I′(x) > 0 for any x ∈ R \ {0} and I′(0) = 0. We also have lim_{x→−∞} I(x) = −∞ and lim_{x→∞} I(x) = ∞. It follows that I = R and for any I⁰ ∈ R there exists a unique x⁰ ∈ R such that −ax⁰ + T tanh x⁰ + I⁰ = 0. In this case, C = {0}, G = R \ {0} and there exist two analytic paths of steady states φ_− : (−∞, 0) → (−∞, 0) and φ_+ : (0, ∞) → (0, ∞).
3. If T > a then for x_1 = −arctanh √((T−a)/T) and x_2 = arctanh √((T−a)/T) we have I′(x_1) = I′(x_2) = 0. For x ∈ (−∞, x_1) ∪ (x_2, +∞) we have I′(x) > 0 and for x ∈ (x_1, x_2) we have I′(x) < 0. We also have I(x_1) > 0, I(x_2) < 0, lim_{x→−∞} I(x) = −∞ and lim_{x→∞} I(x) = ∞. It follows that I = R and the steady states corresponding to an external input I ∈ I are as follows:
• if I ∈ (−∞, I(x_2)) then there exists a unique globally exponentially stable steady state x′ ∈ (−∞, x*_1) corresponding to I (where x*_1 < x_1 verifies I(x*_1) = I(x_2)).
• if I = I(x_2) then there exist two steady states x′ = x*_1 and x″ = x_2 corresponding to I (where x*_1 < x_1 verifies I(x*_1) = I(x_2)). The steady state x′ is locally exponentially stable and its region of attraction is the interval (−∞, x_2), while the second steady state x″ is unstable.
• if I ∈ (I(x_2), I(x_1)) then there exist three steady states x′, x″, x‴ ∈ (x*_1, x*_2), x′ < x″ < x‴, corresponding to I (where x*_1 < x_1 verifies I(x*_1) = I(x_2) and x*_2 > x_2 verifies I(x*_2) = I(x_1)). The steady states x′ and x‴ are locally exponentially stable and their regions of attraction are (−∞, x″) and (x″, ∞) respectively. The steady state x″ is unstable.
• if I = I(x_1) then there exist two steady states x′ = x*_2 and x″ = x_1 corresponding to I (where x*_2 > x_2 verifies I(x*_2) = I(x_1)). The steady state x′ is locally exponentially stable with the region of attraction (x_1, ∞). The second steady state x″ is unstable.
• if I ∈ (I(x_1), ∞) then there exists a unique globally exponentially stable steady state x′ ∈ (x*_2, ∞) corresponding to I (where x*_2 > x_2 verifies I(x*_2) = I(x_1)).
In this case, C = {x_1, x_2}, G = R \ C = (−∞, x_1) ∪ (x_1, x_2) ∪ (x_2, ∞) and there exist three analytic paths of steady states φ_1 : (−∞, I(x_1)) → (−∞, x_1), φ_2 : (I(x_2), I(x_1)) → (x_1, x_2) and φ_3 : (I(x_2), ∞) → (x_2, ∞).
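The case distinction above is easy to reproduce numerically. The sketch below (an added illustration with the hypothetical values a = 1, T = 2, which fall in case 3) computes the critical points, the corresponding critical input values, and the three steady states together with their stability for an input lying between I(x_2) and I(x_1).

import numpy as np
from scipy.optimize import brentq

a, T = 1.0, 2.0                                     # T > a: case 3
x2 = np.arctanh(np.sqrt((T - a) / T))               # critical points: x1 = -x2, x2
x1 = -x2
I_of = lambda x: a * x - T * np.tanh(x)             # external input function I(x)
print("I(x1) =", I_of(x1), " I(x2) =", I_of(x2))

I0 = 0.2                                            # input in (I(x2), I(x1))
roots = [brentq(lambda x: -a * x + T * np.tanh(x) + I0, *br)
         for br in [(-6.0, x1), (x1, x2), (x2, 6.0)]]
for r in roots:
    stable = -a + T / np.cosh(r) ** 2 < 0           # sign of the linearization
    print(f"steady state {r:+.4f}: {'stable' if stable else 'unstable'}")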
Example 4.2. Consider the following Hopfield-type neural network [YHF99]:
ẋ_1 = −x_1 + (17 ln 4 / 15) tanh x_2 + I_1
ẋ_2 = −x_2 + (17 ln 4 / 15) tanh x_1 + I_2   (4.19)
It is easy to see that for (I_1, I_2) = (0, 0) the system (4.19) has three steady states: (0, 0)^T, which is unstable, and (ln 4, ln 4)^T and (−ln 4, −ln 4)^T, which are locally exponentially stable. The external input function is I : R2 → R2 defined by
I(x_1, x_2) = (x_1 − (17 ln 4 / 15) tanh x_2, x_2 − (17 ln 4 / 15) tanh x_1)^T
The Jacobi matrix of the system is
( −1                          (17 ln 4)/(15 cosh² x_2) )
( (17 ln 4)/(15 cosh² x_1)    −1                       )
which is non-singular if and only if
cosh x_1 cosh x_2 ≠ 17 ln 4 / 15
It follows that the set G has two open connected components:
G_− = {x = (x_1, x_2)^T ∈ R2 : cosh x_1 cosh x_2 < 17 ln 4 / 15}
G_+ = {x = (x_1, x_2)^T ∈ R2 : cosh x_1 cosh x_2 > 17 ln 4 / 15}
All the steady states belonging to the set G_− are unstable, as the eigenvalues of the Jacobi matrix at a point x = (x_1, x_2)^T ∈ G_− are ±(17 ln 4)/(15 cosh x_1 cosh x_2) − 1, one of them being positive and the other negative.
For the other connected component G_+ there exist at least two paths of steady states in G_+, one of them containing (ln 4, ln 4)^T and the other containing (−ln 4, −ln 4)^T. All the steady states belonging to G_+ are locally exponentially stable.
We use the method of approximation of the regions of attraction proposed in Chapter 1. For this example, we use the Taylor polynomial of order 6 of the optimal Lyapunov function. The estimates N_6 of the regions of attraction of the steady states (ln 4, ln 4)^T and (−ln 4, −ln 4)^T obtained by this method are presented in Figure 4.1.1. In both cases, we obtain c_6 = 8. In this figure, we have also represented the estimate of the region of attraction of (ln 4, ln 4)^T obtained in [YHF99] (the small square centered at (ln 4, ln 4)^T), which is much smaller than our estimate.
Figure 4.1.1: Estimates of the regions of attraction of (ln 4, ln 4)^T and (−ln 4, −ln 4)^T
Figure 4.1.2: Estimates of the regions of attraction of x^i, i = 1, 4
Let us analyze some characteristics of the steady states of the neural network (4.19) which correspond to external inputs of the form I = (I_1, I_2)^T with I_1 = I_2. One can prove that the steady states which correspond to a given input (I_1, I_1)^T are of the form (x_1, x_1)^T. It is obvious that to any steady state (x_1, x_1)^T from the first bisector there corresponds an input (I_1, I_1)^T where I_1(x_1) = x_1 − (17 ln 4 / 15) tanh x_1.
In Figure 4.1.2 we have represented four steady states which belong to the first bisector: x^1 = (1, 1)^T, for which I_1^1 = −0.196566; x^2 = (ln 4, ln 4)^T, which corresponds to I_1^2 = 0; x^3 = (3, 3)^T, for which I_1^3 = 1.43664; and x^4 = (4, 4)^T, which corresponds to I_1^4 = 2.42992. All these steady states belong to G_+, therefore they are locally exponentially stable. In Figure 4.1.2 the estimates N_6 of the regions of attraction of each steady state x^i, i = 1, 4, are also presented. Computations have provided that for x^1 we have c_6 = 2.8, for x^3 we have c_6 = 10.8 and for x^4 we obtain c_6 = 32.4.
One can see that x^1 is in the estimate of the region of attraction of x^4, therefore the maneuver I : (I_1^1, I_1^1)^T → (I_1^4, I_1^4)^T is successful and transfers the neural network from the steady state x^1 to the steady state x^4 directly. On the other hand, x^4 does not belong to the estimate of the region of attraction of x^1, therefore we cannot conclude that the direct maneuver I : (I_1^4, I_1^4)^T → (I_1^1, I_1^1)^T is successful. However, we observe that x^4 ∈ D_a(x^3), x^3 ∈ D_a(x^2) and x^2 ∈ D_a(x^1), hence the neural network can be transferred from x^4 to x^1 by the following successive maneuvers:
I : (I_1^4, I_1^4)^T → (I_1^3, I_1^3)^T → (I_1^2, I_1^2)^T → (I_1^1, I_1^1)^T
Example 4.3. Consider the following neural network:
ẋ_1 = −a_1 x_1 + b_1 g(x_1) + b_2 g(x_2) + I_1
ẋ_2 = −a_2 x_2 + b_2 g(x_1) + b_1 g(x_2) + I_2   (4.20)
where g : R → (−1, 1), g(s) = (2/π) arctan((π/2)s). Let α = g(1) > 0.63 and β = g′(1) > 0.28. One can check that g(s) ≥ α if s ≥ 1 and g(s) ≤ −α if s ≤ −1. Moreover, 0 < g′(s) ≤ β for any |s| ≥ 1.
Based on Theorems 4.7 and 4.11, it can be proved that if
β(|b_1| + |b_2|) < a_i < αb_1 − |b_2|,   i = 1, 2
then for any input I = (I_1, I_2) satisfying
|I_i| < αb_1 − |b_2| − a_i,   i = 1, 2
there exists a unique steady state x^{I,ε} in every rectangle D_ε; it is locally exponentially stable and its region of attraction includes D_ε.
For b_1 = 1000, b_2 = −0.5 and a_1 = a_2 = αb_1 − |b_2| − 300, it follows that for any input I such that |I_i| < 300, i = 1, 2, in every rectangle D_ε there exists a unique steady state x^{I,ε}, which is exponentially stable and whose region of attraction includes D_ε. Let S_ε = {x^{I,ε} : |I_i| < 300, i = 1, 2} ⊂ D_ε. In Figure 4.2, the gray rectangles represent the four sets S_ε.
The four spirals in Figure 4.2 represent the steady states corresponding to the inputs I_u = (20u cos u, 20u sin u) with u ∈ [0, 4π].
Figure 4.2: The sets S_ε for (4.20)
Example 4.4. Consider the following two-dimensional decoupled Hopfield-type neural network:
ẋ_1 = −ax_1 + T tanh x_1 + I_1
ẋ_2 = −x_2 + (x_2 − sin x_2) + I_2   (4.21)
where a > 0 and T ∈ R. It is easy to see that I = R × [−1, 1]. The external input function I : R2 → I is given by
I(x_1, x_2) = (ax_1 − T tanh x_1, sin x_2)^T
The Jacobi matrix of the system is
diag(−a + T(1 − tanh² x_1), −cos x_2)
and it is non-singular if
(−a + T(1 − tanh² x_1)) cos x_2 ≠ 0
It follows that the set C is given by:
C = R × (2Z + 1)π/2, if T < a
C = (R × (2Z + 1)π/2) ∪ ({0} × R), if T = a
C = (R × (2Z + 1)π/2) ∪ ({±arctanh √((T−a)/T)} × R), if T > a
Therefore, the set G = R2 \ C, on which the Jacobi matrix of the system is non-singular, is given by:
G = R × ∪_{k∈Z} (kπ − π/2, kπ + π/2), if T < a
G = (R \ {0}) × ∪_{k∈Z} (kπ − π/2, kπ + π/2), if T = a
G = (R \ {±arctanh √((T−a)/T)}) × ∪_{k∈Z} (kπ − π/2, kπ + π/2), if T > a
1. If T < a, the open connected components of the set G are G_k = R × (kπ − π/2, kπ + π/2) for k ∈ Z. According to Theorem 4.5, for each k ∈ Z there exists a unique R-analytic path of steady states φ_k : H_k → G_k where H_k = I(G_k) = R × (−1, 1). The function φ_k is given by φ_k(I_1, I_2) = (g^{-1}(I_1), (−1)^k arcsin I_2 + kπ)^T, where g : R → R defined by g(y) = ay − T tanh y is invertible. Therefore, in this case, we have an infinity of paths of steady states.
Concerning the stability of the steady states which belong to the path φ_k, the following statement holds: if k is odd then the steady state is unstable, and if k is even then the steady state is locally exponentially stable. In other words, the paths φ_{2k+1} contain only unstable steady states, while the paths φ_{2k} contain only locally exponentially stable steady states and D_a(φ_{2k}(I)) = R × (arcsin I_2 + (2k − 1)π, arcsin I_2 + (2k + 1)π) for any I ∈ R × (−1, 1).
Concerning the transfer of a steady state x* = (x*_1, x*_2)^T, which corresponds to an input I* = (I*_1, I*_2)^T, to a steady state x** = (x**_1, x**_2)^T, which corresponds to an input I** = (I**_1, I**_2)^T, the following statements hold:
• if both steady states x* and x** belong to G_{2k} then the transfer can be made by the single maneuver I* → I**.
• if x* ∈ G_{2k±1} and x** ∈ G_{2k} then there exists I¹ = (I¹_1, I¹_2)^T such that x* is transferred to x** by the maneuvers I* → I¹ → I**.
• if x* ∈ G_{2k} and x** ∈ G_{2l} with k ≠ l then there is no way to transfer x* to x**, because any maneuver would transfer x* to a steady state which lies in G_{2k}.
2. If T = a, the open connected components of the set G are G^+_k = (0, ∞) × (kπ − π/2, kπ + π/2) and G^−_k = (−∞, 0) × (kπ − π/2, kπ + π/2) for k ∈ Z. According to Theorem 4.5, the R-analytic paths of steady states are φ^+_k : H^+_k → G^+_k, where H^+_k = I(G^+_k) = (0, ∞) × (−1, 1), and φ^−_k : H^−_k → G^−_k, where H^−_k = I(G^−_k) = (−∞, 0) × (−1, 1), for any k ∈ Z. The functions φ^+_k and φ^−_k are given by φ^+_k(I_1, I_2) = (s_+(I_1), (−1)^k arcsin I_2 + kπ)^T and φ^−_k(I_1, I_2) = (s_−(I_1), (−1)^k arcsin I_2 + kπ)^T, where s_+(I_1) and s_−(I_1) are the positive and the negative roots of the equation I_1 = ay − T tanh y.
Concerning the stability of the steady states which belong to the paths φ^±_k, the following statement holds: if k is odd then the steady state is unstable, and if k is even then the steady state is locally exponentially stable. In other words, the paths φ^±_{2k+1} contain only unstable steady states, while the paths φ^±_{2k} contain only locally exponentially stable steady states and D_a(φ^+_{2k}(I)) = (0, ∞) × (arcsin I_2 + (2k − 1)π, arcsin I_2 + (2k + 1)π) for any I ∈ (0, ∞) × (−1, 1), and D_a(φ^−_{2k}(I)) = (−∞, 0) × (arcsin I_2 + (2k − 1)π, arcsin I_2 + (2k + 1)π) for any I ∈ (−∞, 0) × (−1, 1).
Concerning the transfer of a steady state x*, which corresponds to an input I*, to a steady state x**, which corresponds to an input I**, the following statements hold:
• if x*, x** ∈ G^+_{2k} or x*, x** ∈ G^−_{2k} then the transfer can be made by the single maneuver I* → I**.
• if x* ∈ G^+_{2k±1} and x** ∈ G^+_{2k} (or x* ∈ G^−_{2k±1} and x** ∈ G^−_{2k}) then there exists I¹ = (I¹_1, I¹_2)^T such that x* is transferred to x** by the maneuvers I* → I¹ → I**.
• if x* ∈ G^+_{2k} and x** ∈ G^+_{2l} (or x* ∈ G^−_{2k} and x** ∈ G^−_{2l}) with k ≠ l then there is no way to transfer x* into x**, because any maneuver would transfer x* to a steady state which lies in G^+_{2k} (respectively G^−_{2k}).
• if x* ∈ G^+_k and x** ∈ G^−_l then there is no way to transfer x* to x**.
3. If T > a, the open connected components of the set G are:
G^−_k = (−∞, −arctanh √((T−a)/T)) × (kπ − π/2, kπ + π/2)
G^0_k = (−arctanh √((T−a)/T), arctanh √((T−a)/T)) × (kπ − π/2, kπ + π/2)
G^+_k = (arctanh √((T−a)/T), ∞) × (kπ − π/2, kπ + π/2)
According to Theorem 4.5, the R-analytic paths of steady states are:
φ^−_k : H^−_k → G^−_k with H^−_k = I(G^−_k) = (−∞, b) × (−1, 1)
φ^0_k : H^0_k → G^0_k with H^0_k = I(G^0_k) = (−b, b) × (−1, 1)
φ^+_k : H^+_k → G^+_k with H^+_k = I(G^+_k) = (−b, ∞) × (−1, 1)
where b = √(T(T − a)) − a·arctanh √((T−a)/T).
The functions φ^−_k, φ^0_k and φ^+_k are given by
φ^−_k(I_1, I_2) = (s_−(I_1), (−1)^k arcsin I_2 + kπ)^T
φ^0_k(I_1, I_2) = (s_0(I_1), (−1)^k arcsin I_2 + kπ)^T
φ^+_k(I_1, I_2) = (s_+(I_1), (−1)^k arcsin I_2 + kπ)^T
where s_−(I_1) < s_0(I_1) < s_+(I_1) are the roots of the equation I_1 = ay − T tanh y.
All the steady states of the paths φ^0_k are unstable. Concerning the stability of the steady states which belong to the paths φ^±_k, the following statement holds: if k is odd then the steady state is unstable, and if k is even then the steady state is locally exponentially stable. We have that
D_a(φ^−_{2k}(I)) = (−∞, −arctanh √((T−a)/T)) × (arcsin I_2 + (2k − 1)π, arcsin I_2 + (2k + 1)π)
D_a(φ^0_{2k}(I)) = (−arctanh √((T−a)/T), arctanh √((T−a)/T)) × (arcsin I_2 + (2k − 1)π, arcsin I_2 + (2k + 1)π)
D_a(φ^+_{2k}(I)) = (arctanh √((T−a)/T), ∞) × (arcsin I_2 + (2k − 1)π, arcsin I_2 + (2k + 1)π)
Concerning the transfer of a steady state x*, which corresponds to an input I*, to a steady state x**, which corresponds to an input I**, the following statements hold:
• if x*, x** ∈ G^+_{2k} or x*, x** ∈ G^−_{2k} then the transfer can be made by the single maneuver I* → I**.
• if x* ∈ G^+_{2k±1} and x** ∈ G^+_{2k} (or x* ∈ G^−_{2k±1} and x** ∈ G^−_{2k}) then there exists I¹ = (I¹_1, I¹_2)^T such that x* is transferred to x** by the maneuvers I* → I¹ → I**.
• if x* ∈ G^+_{2k} and x** ∈ G^+_{2l} (or x* ∈ G^−_{2k} and x** ∈ G^−_{2l}) with k ≠ l then there is no way to transfer x* into x**, because any maneuver would transfer x* to a steady state which lies in G^+_{2k} (respectively G^−_{2k}).
• if x* ∈ G^+_k ∪ G^0_k and x** ∈ G^−_l (or x* ∈ G^−_k ∪ G^0_k and x** ∈ G^+_l) then there is no way to transfer x* to x**.
4.2 Discrete time Hopfield-type neural networks
4.2.1 Introduction
In [MG00] a semi-discretization technique has been presented for obtaining discrete-time neural
networks, starting from the continuous time Hopfield-type neural network (4.1). The result is
the following discrete semi-dynamical system:
x_i^{p+1} = e^{−a_i h} x_i^p + ((1 − e^{−a_i h})/a_i)(∑_{j=1}^{n} T_ij g_j(x_j^p) + I_i),   i = 1, n, p ∈ N   (4.22)
where h > 0 denotes the uniform discretization step size. It has been established that for any h > 0 the discrete time model (4.22) faithfully preserves the characteristics of (4.1), i.e. the steady states and their stability properties.
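A minimal Python sketch of the update rule (4.22) is given below (an added illustration only; it uses the network (4.19) of Example 4.2 with an illustrative step size, and simply iterates the map from a chosen initial state).

import numpy as np

a = np.array([1.0, 1.0])
T = np.array([[0.0, 17 * np.log(4) / 15],
              [17 * np.log(4) / 15, 0.0]])       # interconnections of (4.19)
I = np.array([0.0, 0.0])
h = 0.2                                          # discretization step size

def step(x):
    """One step of the semi-discretized system (4.22)."""
    return np.exp(-a * h) * x + (1 - np.exp(-a * h)) / a * (T @ np.tanh(x) + I)

x = np.array([0.5, 0.3])
for _ in range(500):
    x = step(x)
print("approximate steady state:", x)            # close to (ln 4, ln 4)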
In this section, we will consider a more general class of discrete time Hopfield-type neural
networks (which includes (4.22)), defined by the following discrete semi-dynamical system:
x_i^{p+1} = b_i x_i^p + ∑_{j=1}^{n} T̄_ij g_j(x_j^p) + Ī_i,   i = 1, n, p ∈ N   (4.23)
where bi Î (0, 1), Īi denotes the external input, T̄ = (T̄i j )n´n is the interconnection matrix,
gi : R ® R (i = 1, n) represent the neuron input-output activations. If it is not mentioned
otherwise, it is assumed that the functions gi are R-analytic and gi (0) = 0, for i = 1, n. For
some of the results presented in this section, the hypothesis (B) and (M) will also be used for
the activation functions gi , i = 1, n:
(B) The activation functions are bounded:
|gi (s)| £ 1
for any s Î R, i = 1, n
(M) The activation functions are increasing and have bounded derivatives: there exist ki > 0
such that 0 < g¢i (s) £ ki for any s Î R, i = 1, n.
Some results on the exponential stability and estimation of the region of attraction of a steady
state of system (4.23) have been obtained in [GHW04, YHH04, YHH05] and the references
therein.
The system (4.23) can be written in the matrix form:
x^{p+1} = Bx^p + T̄ g(x^p) + Ī   (4.24)
where x = (x1 , x2 , ..., xn)T Î Rn , B = diag(b1 , ..., bn ) Î Mn´n , Ī = (Ī1 , ..., Īn)T Î Rn and
g : Rn ® Rn is given by g(x) = (g1 (x1 ), g2 (x2 ), ..., gn(xn ))T .
Let f : Rn × Rn → Rn be the function given by
f(x, Ī) = Bx + T̄ g(x) + Ī
4.2.2 Steady states
Definition 4.3. A steady state x = (x1 , x2 , ..., xn )T of (4.24) corresponding to the external input
Ī = (Ī1 , Ī2 , ..., Īn)T is a solution of the equation:
Bx + T̄ g(x) + Ī = x   (4.25)
Concerning the existence and uniqueness of steady states, similar results hold as in the
continuous case:
Theorem 4.15. For any state x = (x1 , x2 , ..., xn)T Î Rn there exists a unique external input
Ī = (Ī1 , Ī2 , ..., Īn)T Î Rn such that x is a steady state of (4.23) corresponding to the input Ī.
Proof. For a given state x = (x1 , x2 , ..., xn)T Î Rn the external input vector is given by the
formula
Ī = (Id - B)x - T̄ g(x)
where Id Î Mn´n is the identity matrix.
Remark 4.12. The "external input function" Ī : Rn ® Rn defined by
Ī(x) = (Id - B)x - T̄ g(x)
is an R-analytic function.
The set I defined by
I = {Ī ∈ Rn : ∃ x ∈ Rn such that Ī = (Id − B)x − T̄ g(x)}
is the collection of those inputs Ī for which the system (4.25) has at least one solution. If I = Rn then for any input Ī ∈ Rn the system (4.25) has at least one solution. If I is strictly included in Rn then there exist input vectors Ī for which system (4.25) has no solution.
The proofs of the following theorems are similar to those from the continuous case and will be
omitted.
Let be an external input Ī 0 = (Ī10, Ī20 , ..., Īn0)T Î I and x0 = (x01 , x02 , ..., x0n )T Î Rn a steady state
corresponding to this input, i.e. Ī(x0 ) = Ī 0.
Theorem 4.16. If the matrix B-Id + T̄ Dg(x0 ) is non-singular then there exists a unique maximal
domain U0 Ì Rn (i.e. U0 is a maximal open and connected set) containing x0 , a unique maximal
domain V0 Ì Rn containing Ī 0 and a unique bijective R-analytic function j : V0 ® U0 having
the following properties:
i. j(Ī 0 ) = x0 ;
ii. (Id - B)j(Ī) - T̄ g(j(Ī)) = Ī for any Ī Î V0 ;
iii. the matrix B - Id + T̄ Dg(x) is non-singular on U0 .
Remark 4.13. The function j given by Theorem 4.16 is an analytic path of steady states for
(4.24). (see Subsection 2.1.2)
Theorem 4.17. If D is a rectangle in Rn , i.e. for i = 1, n there exist Αi , Βi Î R, Αi < Βi such
that D = (Α1 , Β1) ´ (Α2 , Β2) ´ ... ´ (Αn , Βn ) and det(B - Id + T̄ Dg(x)) ¹ 0 for any x Î D then the
function Ī|D (the restriction of the external input function to D) is injective.
Corollary 4.5. Let be a rectangle D Ì Rn such that det(B - Id + T̄ Dg(x)) ¹ 0 for any x Î D.
Then for any Ī Î Ī(D) the system (4.24) has a unique steady state in D.
Corollary 4.6. Let D ⊂ Rn be a rectangle. If g′_i(s) > 0 for any s ∈ R and i = 1, n and
T̄_ii − (1 − b_i) / g′_i(x_i) + ∑_{j=1, j≠i}^{n} |T̄_ji| < 0   for all i = 1, n and all x ∈ D,
then for any Ī ∈ Ī(D) the system (4.24) has a unique steady state in D. If Ī_i > 0 for any i = 1, n then the coordinates of the steady state are positive.
Theorem 4.18. Under hypothesis (B), for any input vector Ī ∈ Rn the following statements hold:
i. There exists at least one steady state of (4.24) (corresponding to Ī) in the rectangle D = [−M_1, M_1] × [−M_2, M_2] × ... × [−M_n, M_n] of Rn, where
M_i = (1/(1 − b_i))(|Ī_i| + ∑_{j=1}^{n} |T̄_ij|)   for any i = 1, n   (4.26)
ii. Every steady state of (4.24), corresponding to Ī, belongs to the rectangle D defined above.
iii. If in addition det(B − Id + T̄ Dg(x)) ≠ 0 for any x ∈ D then the system (4.24) has a unique steady state, corresponding to Ī, and it belongs to D.
Let C be the set defined by
C = {x ∈ Rn : det(B − Id + T̄ Dg(x)) = 0}
and let G = Rn \ C. The set G is open and for any x ∈ G we have det(B − Id + T̄ Dg(x)) ≠ 0.
Let {G_α}_α be the set of the open connected components of G, i.e. for any α the set G_α ≠ ∅ is open and connected, ∪_α G_α = G and G_α′ ∩ G_α″ = ∅ if α′ ≠ α″.
Theorem 4.19. If GΑ is a rectangle in Rn then there exists a unique R-analytic function
jΑ : HΑ ® GΑ having the following properties:
i. HΑ = Ī(GΑ ) where Ī(x) = (Id - B)x - T̄ g(x) for any x Î Rn ;
ii. (Id - B)jΑ (Ī) - T̄ g(jΑ (Ī)) = Ī for any Ī Î HΑ ;
iii. the matrix DjΑ (Ī) is non-singular on HΑ .
Remark 4.14. Every function jΑ given by Theorem 4.19 is an analytic path of steady states of
(4.24). (see Subsection 2.1.2)
Remark 4.15. If the set GΑ is not a rectangle in Rn , consider DΑ the largest rectangle included
in GΑ . For the rectangle DΑ the statements of Theorem 4.19 are fulfilled, i.e. there exists a
unique analytic path of steady states jΑ : Ī(DΑ ) ® DΑ .
It has been shown that Rn can be decomposed as Rn = (∪_α G_α) ∪ C, and sufficient conditions have been found assuring that for an input vector Ī in a certain set H_α ⊂ Rn there exists a unique steady state in the largest rectangle D_α included in G_α.
The following theorem holds (for non-analytic activation functions):
Theorem 4.20. In addition to hypothesis (B), suppose that the functions g_i, i = 1, n satisfy
g_i(s) = 1 if s ≥ 1 and g_i(s) = −1 if s ≤ −1   (4.27)
If an input Ī ∈ Rn satisfies
|Ī_i| < T̄_ii + b_i − 1 − ∑_{j≠i} |T̄_ij|   for all i = 1, n   (4.28)
then
i. in every rectangle D_ε, ε ∈ {±1}^n, there exists a unique steady state of (4.24) corresponding to Ī.
ii. every D_ε, ε ∈ {±1}^n, is invariant to the map x ↦ f(x, Ī).
Proof. Let Ī be an input satisfying (4.28) and ε ∈ {±1}^n.
i. Similarly to the proof of Theorem 4.6, it can be proved that the unique steady state of (4.24) corresponding to Ī which lies in D_ε is x^{Ī,ε} = (Id − B)^{-1}(T̄ε + Ī).
ii. Let x ∈ D_ε. One has to prove that f(x, Ī) ∈ D_ε. Using (4.27), (4.28) and hypothesis (B), for any i = 1, n it results that:
ε_i f_i(x, Ī) = ε_i(b_i x_i + T̄_ii g_i(x_i) + ∑_{j≠i} T̄_ij g_j(x_j) + Ī_i) ≥ b_i + T̄_ii − ∑_{j≠i} |T̄_ij| − |Ī_i| > 1
Therefore, f(x, Ī) ∈ D_ε.
The following theorem holds:
Theorem 4.21. In addition to hypothesis (B), suppose that there exists α ∈ (0, 1) such that the functions g_i, i = 1, n satisfy
g_i(s) ≥ α if s ≥ 1 and g_i(s) ≤ −α if s ≤ −1   (4.29)
For any input Ī ∈ Rn satisfying
|Ī_i| < T̄_ii α + b_i − 1 − ∑_{j≠i} |T̄_ij|   for all i = 1, n   (4.30)
the following statements hold:
i. In every rectangle D_ε, ε ∈ {±1}^n, there exists at least one steady state of (4.24) corresponding to the input Ī.
ii. Every D_ε, ε ∈ {±1}^n, is invariant to the map x ↦ f(x, Ī).
Proof. Let Ī be an input satisfying (4.30) and ε ∈ {±1}^n.
i. Similar to the proof of Theorem 4.7(i) (one has to replace a_i by 1 − b_i).
ii. Let x ∈ D_ε. One has to prove that f(x, Ī) ∈ D_ε. Using (4.29), (4.30) and hypothesis (B), for any i = 1, n it results that:
ε_i f_i(x, Ī) = ε_i(b_i x_i + T̄_ii g_i(x_i) + ∑_{j≠i} T̄_ij g_j(x_j) + Ī_i) ≥ b_i + T̄_ii α − ∑_{j≠i} |T̄_ij| − |Ī_i| > 1
Therefore, f(x, Ī) ∈ D_ε.
Remark 4.16. Theorem 4.18 states that if there exists an input Ī satisfying
|Ī_i| ≤ 1 − b_i − ∑_{j=1}^{n} |T̄_ij|   for all i = 1, n
then there exists at least one steady state of (4.24) corresponding to Ī belonging to the rectangle [−1, 1]^n, and there are no other steady states corresponding to Ī outside this rectangle. The existence of such an input implies that 1 − b_i > |T̄_ii| for any i = 1, n.
On the other hand, Theorems 4.20 and 4.21 guarantee that if there exist α ∈ (0, 1] and an input Ī satisfying
|Ī_i| < T̄_ii α + b_i − 1 − ∑_{j≠i} |T̄_ij|   for all i = 1, n
then there exist 2^n steady states corresponding to Ī outside the rectangle [−1, 1]^n (one in every rectangle D_ε). The existence of such an input implies that 1 − b_i < T̄_ii α for any i = 1, n.
It is easy to see that the two conditions above exclude each other.
4.2.3 Exponential stability of the steady states
The following result on the global exponential stability of a steady state of (4.24) has been proved in [GHW04]:
Theorem 4.22. Assume that the activation functions g_i, i = 1, n are Lipschitz continuous with Lipschitz constants q_i > 0. Define the matrix Q = (Q_ij)_{n×n} with Q_ij = |T̄_ij q_j| / (1 − b_i). If one of the following conditions holds:
i. ρ(Q) < 1
ii. ∑_{i=1}^{n} ∑_{j=1}^{n} Q²_ij < 1
iii. ρ(Q^T Q) < 1
then the system (4.24) has a unique steady state which is globally exponentially stable.
The following theorems give sufficient conditions for the local exponential stability of the steady
state:
Theorem 4.23. If for an external input Ī ∈ Rn the state x ∈ Rn is a steady state of (4.24) and ρ(B + T̄ Dg(x)) < 1, then the steady state x is locally exponentially stable.
Corollary 4.7. Let D ⊂ Rn be a rectangle. If ρ(B + T̄ Dg(x)) < 1 for any x ∈ D then for any Ī ∈ Ī(D) the system (4.24) has a unique locally exponentially stable steady state which lies in D.
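Once a steady state has been located, the spectral-radius condition of Theorem 4.23 can be checked directly. The sketch below (an added illustration with hypothetical parameters and tanh activations, for which the Jacobi matrix B + T̄ Dg(x) has the closed form used in the code) first locates a steady state by iterating the map itself and then tests the condition.

import numpy as np

b = np.array([0.5, 0.6])                         # b_i in (0, 1) (hypothetical)
Tbar = np.array([[0.3, -0.2], [0.1, 0.25]])      # interconnection matrix (hypothetical)
Ibar = np.array([0.05, -0.1])

# Locate a steady state of (4.24) by iterating the map itself.
x = np.zeros(2)
for _ in range(1000):
    x = b * x + Tbar @ np.tanh(x) + Ibar

# Jacobi matrix B + Tbar * Dg(x) with g = tanh, so g'(s) = 1 - tanh(s)^2.
J = np.diag(b) + Tbar * (1 - np.tanh(x) ** 2)    # column j scaled by g_j'(x_j)
rho = max(abs(np.linalg.eigvals(J)))
print("steady state:", x, "spectral radius:", rho, "locally exp. stable:", rho < 1)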
Theorem 4.24. Suppose that the conditions of Theorem 4.20 are satisfied. Let Ī ∈ Rn be an input satisfying (4.28) and ε ∈ {±1}^n. The steady state x^{Ī,ε} of (4.24) corresponding to Ī and belonging to D_ε is exponentially stable, and its region of attraction includes D_ε.
Proof. The exponential stability of x^{Ī,ε} results from the fact that on D_ε the system (4.24) is a nonhomogeneous linear system and the Jacobi matrix of (4.24) at x^{Ī,ε} is the matrix B, with ρ(B) < 1.
Let us prove that D_ε ⊂ D_a(x^{Ī,ε}). Let x ∈ D_ε. Then f(x, Ī) − x^{Ī,ε} = B(x − x^{Ī,ε}), and since D_ε is invariant to the map x ↦ f(x, Ī), it results that f^p(x, Ī) − x^{Ī,ε} = B^p(x − x^{Ī,ε}) for any p ∈ N. Therefore f^p(x, Ī) → x^{Ī,ε} as p → ∞, so x ∈ D_a(x^{Ī,ε}).
Theorem 4.25. Suppose that the conditions of Theorem 4.21 are fulfilled. Let Ī ∈ Rn be an input satisfying (4.30) and ε ∈ {±1}^n. If |g′_i(s)| < (1 − b_i) / ∑_{j=1}^{n} |T̄_ji| for any |s| ≥ 1 and i = 1, n, then the steady state of (4.24) corresponding to the input Ī which lies in the rectangle D_ε is unique, it is exponentially stable and its region of attraction includes D_ε.
Proof. Suppose that |g′_i(s)| ≤ β_i < (1 − b_i) / ∑_{j=1}^{n} |T̄_ji| for any |s| ≥ 1 and i = 1, n. We will first show that the steady state of (4.24) corresponding to the input Ī which lies in D_ε is unique. Suppose the contrary, i.e. there exist two steady states x, y ∈ D_ε, x ≠ y, of (4.24). One has:
(1 − b_i)|x_i − y_i| = |∑_{j=1}^{n} T̄_ij (g_j(x_j) − g_j(y_j))| ≤ ∑_{j=1}^{n} |T̄_ij| β_j |x_j − y_j|   for all i = 1, n
Therefore,
∑_{i=1}^{n} (1 − b_i)|x_i − y_i| ≤ ∑_{i=1}^{n} ∑_{j=1}^{n} |T̄_ij| β_j |x_j − y_j| < ∑_{j=1}^{n} (1 − b_j)|x_j − y_j|
which is absurd. Therefore, there exists a unique steady state of (4.24) corresponding to the input Ī which lies in the rectangle D_ε. It will be denoted by x^{Ī,ε}.
Let us prove that x^{Ī,ε} is exponentially stable and that its region of attraction includes D_ε. Consider the function V : Rn → R_+ defined by
V(x) = ∑_{i=1}^{n} |x_i − x^{Ī,ε}_i|   for all x ∈ Rn
On D_ε, the function V satisfies:
V(f(x, Ī)) = ∑_{i=1}^{n} |f_i(x, Ī) − x^{Ī,ε}_i| = ∑_{i=1}^{n} |b_i(x_i − x^{Ī,ε}_i) + ∑_{j=1}^{n} T̄_ij (g_j(x_j) − g_j(x^{Ī,ε}_j))|
≤ ∑_{i=1}^{n} b_i |x_i − x^{Ī,ε}_i| + ∑_{i=1}^{n} ∑_{j=1}^{n} |T̄_ij| |g_j(x_j) − g_j(x^{Ī,ε}_j)|
≤ ∑_{i=1}^{n} b_i |x_i − x^{Ī,ε}_i| + ∑_{i=1}^{n} ∑_{j=1}^{n} |T̄_ij| β_j |x_j − x^{Ī,ε}_j|
= ∑_{j=1}^{n} (b_j + β_j ∑_{i=1}^{n} |T̄_ij|) |x_j − x^{Ī,ε}_j| ≤ kV(x)
where k = max_{i=1,n} (b_i + β_i ∑_{j=1}^{n} |T̄_ji|) ∈ (0, 1). From Theorem 4.21(ii) we have that V(f^p(x, Ī)) ≤ k^p V(x) for any p ∈ N. Hence V(f^p(x, Ī)) → 0 exponentially as p → ∞. This means that f^p(x, Ī) → x^{Ī,ε} as p → ∞. Thus x^{Ī,ε} is exponentially stable and its region of attraction includes D_ε.
Corollary 4.8. In addition to hypothesis (B), suppose that there exist α ∈ (0, 1) and M > 0 such that the functions g_i, i = 1, n satisfy
• g_i(s) ≥ α if s ≥ M, g_i(s) ≤ −α if s ≤ −M
• |g′_i(s)| < (1 − b_i) / ∑_{j=1}^{n} |T̄_ji| for |s| ≥ M.
Let ε ∈ {±1}^n and consider the rectangle MD_ε = MJ(ε_1) × MJ(ε_2) × ... × MJ(ε_n), where MJ(1) = (M, ∞) and MJ(−1) = (−∞, −M).
If an input Ī ∈ Rn satisfies
|Ī_i| < T̄_ii α − M(1 − b_i + ∑_{j≠i} |T̄_ij|)   for all i = 1, n   (4.31)
then there exists a unique steady state of (4.24) corresponding to Ī which lies in the rectangle MD_ε; it is exponentially stable and its region of attraction includes MD_ε.
The following theorem is a characterization of the region of attraction of a locally exponentially stable steady state using the optimal Lyapunov function defined in Chapter 2.
Theorem 4.26. If for an external input Ī* ∈ Rn the state x* ∈ Rn is a steady state of (4.24) and ρ(B + T̄ Dg(x*)) < 1, then the region of attraction D_a(x*) of x* coincides with the natural domain of analyticity of the R-analytic function V defined by
V(f(x, Ī*)) − V(x) = −‖x − x*‖²,   V(x*) = 0   (4.32)
The function V is strictly positive on D_a(x*) \ {x*} and V(x) → ∞ as x → y, y ∈ ∂D_a(x*), or as ‖x‖ → ∞.
Remark 4.17. In the conditions of Theorem 4.26, the region of attraction D_a(x*) satisfies D_a(x*) = D_a(0) + x*, where D_a(0) is the region of attraction of the steady state y = 0 of the system
y^{p+1} = By^p + T̄ h(y^p),   p ∈ N   (4.33)
where h : Rn → Rn is defined by h(y) = g(x* + y) − g(x*). Therefore, in order to find the region of attraction D_a(x*) it is sufficient to find the region of attraction D_a(0) of the zero solution of (4.33). The methods of approximation of the region of attraction described in Chapter 2 can be applied.
4.2.4 Controllability
Definition 4.4. A change at a certain moment of the external input from Ī′ to Ī″ is called a maneuver and it is denoted by Ī′ → Ī″. The maneuver Ī′ → Ī″ made at t = t_0 is successful on the path φ_α : H_α = Ī(D_α) → D_α if Ī′, Ī″ ∈ H_α and if the solution of the initial value problem
x^{p+1} = Bx^p + T̄ g(x^p) + Ī″,   x⁰ = φ_α(Ī′)   (4.34)
tends to φ_α(Ī″) as p → ∞.
The system (4.24) is controllable along a path of steady states if any two steady states belonging to the path can be transferred one into the other by a finite number of successive maneuvers.
If the steady states φ_α(Ī) of the path φ_α are only locally exponentially stable, then it may happen that some maneuvers are not successful along this path. In such cases, it is appropriate to use the following result:
Theorem 4.27. For two steady states φ_α(Ī*) and φ_α(Ī**) belonging to the R-analytic path φ_α of locally exponentially stable steady states of (4.24), there exists a finite number of values of the external input Ī¹, Ī², ..., Ī^p ∈ H_α such that all the maneuvers
Ī* → Ī¹ → Ī² → ... → Ī^p → Ī**   (4.35)
are successful on the path φ_α.
Remark 4.18. Theorem 4.27 states that the system (4.24) is controllable along an analytic path φ_α of locally exponentially stable steady states. In fact, the transfer from a steady state φ_α(Ī*) to a steady state φ_α(Ī**) is made through the regions of attraction of the states φ_α(Ī¹), φ_α(Ī²), ..., φ_α(Ī^p), φ_α(Ī**).
Remark 4.19. If ∩_{α∈Γ} H_α ≠ ∅ for a certain set Γ of indexes α and the paths φ_α : ∩_{α∈Γ} H_α → D_α (α ∈ Γ) consist of locally exponentially stable steady states, then for two configurations of steady states {φ_α(Ī*)}_{α∈Γ} and {φ_α(Ī**)}_{α∈Γ}, where Ī*, Ī** ∈ ∩_{α∈Γ} H_α, there exists a finite number of external input vectors Ī¹, Ī², ..., Ī^p ∈ ∩_{α∈Γ} H_α such that the maneuvers
Ī* → Ī¹ → Ī² → ... → Ī^p → Ī**
transfer the configuration {φ_α(Ī*)}_{α∈Γ} into the configuration {φ_α(Ī**)}_{α∈Γ}.
4.2.5 Examples
Example 4.5. Consider the discrete semi-dynamical system obtained from the system (4.19) by the semi-discretization technique given in [MG00]:
x_1^{p+1} = e^{−h} x_1^p + (1 − e^{−h})((17 ln 4 / 15) tanh x_2^p + Ī_1)
x_2^{p+1} = e^{−h} x_2^p + (1 − e^{−h})((17 ln 4 / 15) tanh x_1^p + Ī_2),   h > 0   (4.36)
For any h > 0 and any input Ī ∈ R2, the steady states of (4.36) coincide with the steady states of (4.19), and they have the same stability properties.
For h = 0.2, we estimate the region of attraction of the asymptotically stable steady state
(ln 4, ln 4)T corresponding to Ī = (0, 0)T by the methods described in Chapter 2. The estimates
Np and M p, for p = 1, 9, are presented in Figures 4.3.1-4.3.2.
Figure 4.3.1: Estimates N_p, p = 1, 9, of the region of attraction of (ln 4, ln 4)^T
Figure 4.3.2: Estimates M_p, p = 1, 9, of the region of attraction of (ln 4, ln 4)^T
Consider the asymptotically stable steady state (4, 4)^T corresponding to the input Ī = (2.42992, 2.42992)^T. One can see that the maneuver Ī : (2.42992, 2.42992)^T → (0, 0)^T is successful and transfers the steady state (4, 4)^T to the steady state (ln 4, ln 4)^T, as (4, 4)^T is in the estimate of the region of attraction of (ln 4, ln 4)^T.
Example 4.6. We consider a discrete Hopfield-type neural network with the non-monotone activation function f(x) = tanh(5x) tanh(10x² − 1):
x_1^{p+1} = 0.5x_1^p + 20 f(x_1^p) − f(x_2^p) + Ī_1
x_2^{p+1} = 0.5x_2^p − f(x_1^p) + 20 f(x_2^p) + Ī_2   (4.37)
It has been shown [Mor96] that in some cases the absolute capacity of an associative neural network can be improved by using non-monotone activation functions instead of the usual sigmoid ones.
The conditions of Theorems 4.21 and 4.25 are satisfied (with α = f(1) ∈ (0, 1)). Therefore, for any input Ī = (Ī_1, Ī_2)^T such that |Ī_i| < 18.4982, i = 1, 2, there exists a unique exponentially stable steady state x^{Ī,ε} in each rectangle D_ε, ε ∈ {±1}², and D_ε ⊂ D_a(x^{Ī,ε}).
In Figure 4.4, the gray rectangles represent the sets S_ε = {x^{Ī,ε} : |Ī_i| < 18.4982, i = 1, 2} ⊂ D_ε. The red points represent the four steady states x^{Ī,ε} corresponding to the input Ī = (0, 0)^T, namely (38, 38)^T, (−42, 42)^T, (42, −42)^T and (−38, −38)^T. The blue points represent the four steady states x^{Ī,ε} corresponding to the input Ī = (10, 10)^T, namely (58, 58)^T, (−22, 62)^T, (62, −22)^T and (−18, −18)^T. The maneuver Ī : (0, 0)^T → (10, 10)^T transfers the configuration of steady states {(38, 38)^T, (−42, 42)^T, (42, −42)^T, (−38, −38)^T} into the configuration of steady states {(58, 58)^T, (−22, 62)^T, (62, −22)^T, (−18, −18)^T}.
Figure 4.4: The sets S_ε for (4.37) and the maneuver Ī : (0, 0)^T → (10, 10)^T
Bibliography
[ALF97]
Experimental flight data alflex. Technical report, National Aerospace Laboratory
- Japan, http:// www.nal.go.jp/ flight/ eng/ Museum/ ALFLEX/ alflex.pdf, 1997.
[Aul92]
B. Aulbach. One-dimensional center manifolds are c¥ . Results in Mathematics,
21(1-2):3–11, 1992.
[Bal85]
St. Balint. Considerations concerning the maneuvering of some physical systems.
An. Univ. Timisoara, seria St. Mat., XXIII:8–16, 1985.
[Bar51]
E.A. Barbashin. The method of sections in the theory of dynamical systems.
Matem. Sb., 29, 1951.
[BBN86]
St. Balint, A. Balint, and V. Negru. The optimal lyapunov function in diagonalizable case. An. Univ. Timisoara, seria St. Mat., XXIV:1–7, 1986.
[BK54]
E.A. Barbashin and N.N. Krasovskii. On the existence of lyapunov functions in
the case of asymptotic stability in the whole. Prikle. Kat. Mekh., XVIII:345–350,
1954.
[BKBG05] St. Balint, E. Kaslik, A.M. Balint, and A. Grigis. Methods of determination
and approximation of domains of attraction in the case of autonomous discrete
dynamical systems. Advances in Difference Equations, (4):–, 2005.
[BNBS87]
St. Balint, V. Negru, A. Balint, and T. Simiantu. An approach of the region of
attraction by the region of convergence of the series of the optimal lyapunov
function. An. Univ. Timisoara, seria St. Mat., XXV:15–30, 1987.
[Cao04]
J. Cao. Estimation on domain of attraction and convergence rate of hopfield
continuous feedback neural networks. Physics Letters A, 325(5–6):370–374,
2004.
[Car81]
J. Carr. Applications of center manifold theory. Springer-Verlag, 1981.
[CG83]
M.A. Cohen and S. Grossberg. Absolute stability of global pattern formation and
parallel memory storage by competitive neural networks. IEEE Transactions on
Systems, Man, and Cybernetics, 13(5):815–826, 1983.
[CGT97]
G. Chesi, R. Genesio, and A. Tesi. Optimal ellipsoidal stability domain estimates
for odd polynomial systems. In Proc. of 36th IEEE Conference on Decision and
Control, San Diego, California, pages 3528–3529, 1997.
[CGTV01]
G. Chesi, A. Garulli, A. Tesi, and A. Vicino. Lmi-based techniques for convexification of distance problems. In Proc. of 40th IEEE Conference on Decision and
Control, Orlando, Florida, 2001.
[Che01]
G. Chesi. New convexification techniques and their applications to the analysis
of dynamical systems and vision problems. Ph. D. Thesis, Università di Bologna,
Italy, 2001.
[CT01]
J. Cao and Q. Tao. Estimation on domain of attraction and convergence rate of
hopfield continuous feedback neural networks. Journal of Computer and Systems
Sciences, 62:528–534, 2001.
[CTVG01]
G. Chesi, A. Tesi, A. Vicino, and R. Genesio. An lmi approach to constrained
optimization with homogeneous forms. System and Control Letters, 42:11–19,
2001.
[CZW04]
S. Chen, Q. Zhang, and C. Wang. Existence and stability of equilibria of
continuous time hopfield neural network. Journal of Computational and Applied
Mathematics, 169(1):117–125, 2004.
[DFK91]
A. Denbo, O. Farotimi, and T. Kailath. Higher order absolute stable neural
networks. IEEE Transactions on Circuits and Systems, 38:57–65, 1991.
[Din89]
N.J. Dinopoulos. A study of the asymptotic behavior of neural networks. IEEE
Transactions on Circuits and Systems, 36:863–867, 1989.
[DK71]
E.J. Davison and E.M. Kurak. A computational method for determining quadratic
lyapunov functions for nonlinear systems. Automatica, 7:627–636, 1971.
[Ela96]
Saber N. Elaydi. An Introduction to Difference Equations. Springer-Verlag, New
York, 1996.
[ETB+ 87]
C. Elphick, E. Tirapegui, M.E. Brachet, P. Coullet, and G. Iooss. A simple global
characterization of normal forms of singular vector fields. Physica D, 29:95–127,
1987.
[Fei95]
G. Feichtinger. Hopf bifurcation in an advertising diffusion model. Journal of
Economic Behavior and Organisation, 17(3):401–411, 1995.
[FT95]
M. Forti and A. Tesi. New conditions for global stability of neural networks with
applications to linear and quadratic programming problems. IEEE Transactions
on Circuits and Systems, 42(7):345–366, 1995.
[GH83]
J. Guckenheimer and P. Holmes. Nonlinear oscilations, dynamical systems and
bifurcations of vector fields. Springer-Verlag, New York, 1983.
[GHW04]
S. Guo, L. Huang, and L. Wang. Exponential stability of discrete-time hopfield
neural networks. Computers and Mathematics with Applications, 47:1249–1256,
2004.
[GK04]
N. Goto and T. Kawakita. Bifurcation analysis for the inertial coupling problem of
a reentry vehicle. In Advances in Dynamics and Control, pages 45–57. Chapman
& Hall / CRC Press Company, UK, 2004.
BIBLIOGRAPHY
[GM00]
179
N. Goto and K. Matsumoto. Bifurcation analysis for the control of a reentry vehicle. In Proceedings of the 3rd International Conference on Nonlinear Problems in
Aviation and Aerospace, Daytona Beach, Florida, USA, pages 167–175. European
Conference Publishers, 2000.
[GRBG04] L. Gruyitch, J-P. Richard, P. Borne, and J-C. Gentina. Stability domains.
Nonlinear systems in aviation, aerospace, aeronautics, astronautics. Chapman&Hall/CRC, 2004.
[GTV85]
R. Genesio, M. Tartaglia, and A. Vicino. On the estimation of the asymptotic
stability regions: state of the art and new proposals. IEEE Trans. Automatic
Control, A.C. 30:747–755, 1985.
[Hac78]
T. Hacker. Constant-control rolling maneuver. Journal of Guidance and Control,
1(5), 1978.
[HJ85]
R. Horn and C. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[Hor85]
L. Hormander. An Introduction to Complex Analysis in Several Variables. D. Van
Nostrand Company, Inc., Princeton, New Jersey, 1985.
[Kas05]
E. Kaslik. Methods for the determination and approximation of the regions of
attraction in the case of non-exponential asymptotic stability. Nonlinear Studies,
12(3):199–214, 2005.
[KBB03]
E. Kaslik, A.M. Balint, and St. Balint. Gradual approximation of the domain
of attraction by gradual extension of the "embryo" of the transformed optimal
lyapunov function. Nonlinear Studies, 10(1):67–78, 2003.
[KBB05a]
E. Kaslik, A.M. Balint, and St. Balint. Methods of determination and approximation of domains of attraction. Nonlinear Analysis: Theory, Methods and Applications, 60(4):703–717, 2005.
[KBB05b]
E. Kaslik, L. Braescu, and St. Balint. On the controllability of the continuous-time
hopfield-type neural networks. In Proceedings of the 7th International Symposium
on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC 2005),
volume Workshop: Natural Computing and Applications. IEEE Computer Society
Press, 2005.
[KBBB03]
E. Kaslik, A.M. Balint, S. Birauas, and St. Balint. Approximation of the domain of
attraction of an asymptotically stable fixed point of a first order analytical system
of difference equations. Nonlinear Studies, 10(2):1–12, 2003.
[KBBB04]
E. Kaslik, A.M. Balint, S. Birauas, and St. Balint. On the controlability of the roll
rate of the alflex reentry vehicle. Nonlinear Studies, 11(4):543–564, 2004.
[KBCB02]
E. Kaslik, A.M. Balint, C. Chilarescu, and St. Balint. The control of rolling
maneuver. Nonlinear Studies, 9(4):331–360, 2002.
[KBGB04] E. Kaslik, A.M. Balint, A. Grigis, and St. Balint. The controlability of the "path
capture" and "steady descent" flight of alflex. Nonlinear Studies, 11(4):674–690,
2004.
BIBLIOGRAPHY
180
[KBGB05a] E. Kaslik, A.M. Balint, A. Grigis, and St. Balint. Control procedures using
domains of attraction. Nonlinear Analysis: Theory, Methods and Applications,
63(5-7):e2397–e2407, 2005.
[KBGB05b] E. Kaslik, A.M. Balint, A. Grigis, and St. Balint. An extension of the characterization of the domain of attraction of an asymptotically stable fixed point in the
case of a nonlinear discrete dynamical system. In S. Sivasundaram, editor, Proceedings of the 5th International Conference on Nonlinear Problems in Aviation
and Aerospace (ICNPAA 2004). European Conference Publications, 2005.
[KBGB05c] E. Kaslik, A.M. Balint, A. Grigis, and St. Balint. On the controllability of some
steady states in the case of nonlinear discrete dynamical systems with control.
Nonlinear Studies, 12(1):1–11, 2005.
[KK74]
H.W. Knobloch and F. Kappel.
Teubner, Stuttgart, 1974.
Gewohnliche Differentialgleichungen. B.G.
[Koc90]
H. Kocak. Differential and Difference Equations Through Computer Experiments.
Springer-Verlag, New York, 1990.
[KP01]
W.G. Kelley and A.C. Peterson. Differene equations. Academic Press, 2001.
[KS93]
S. Köksal and S. Sivasundaram. Stability properties of the hopfield-type neural
networks. Dynamics and Stability of Systems, 8(3):181–187, 1993.
[Kuz98]
Yu. A. Kuznetsov. Elements of applied bifurcation theory. Springer-Verlag, 1998.
[LaS86]
J. LaSalle. The Stability and Control of Discrete Processes. Springer-Verlag, New
York, 1986.
[LaS97]
J. LaSalle. Stablity theory for difference equations, maa studies in mathematics
14. In J. Hale, editor, Studies in Ordinary Differntial Equations, pages 1–31.
Taylor and Francis Science Publishers, UK, 1997.
[LMS91]
V. Lakshmikantham, V.M. Matrosov, and S. Sivasundaram. Vector lyapunov
functions and stability analysis of nonlinear systems. In Mathematics and its
Applications, volume 63. Kluwer Academic Publishers Group, Dordrecht, The
Netherlands, 1991.
[LQVY91] G. Ladas, C. Qian, P. Vlahos, and J. Yan. Stability of solutions of linear
nonautonomous difference equations. Applied Analysis, 41:183–191, 1991.
[LT88]
V. Lakshmikantham and D. Trigiante. Theory of Difference Equations: Numerical
Methods and Aplications. Academic Press, New York, 1988.
[MG00]
S. Mohamad and K. Gopalsamy. Dynamics of a class of discrete-time neural
networks and their continuous-time counterparts. Mathematics and Computers in
Simulation, 53(1-2):1–39, 2000.
[Mor96]
M. Morita. Memory and learning of sequential patterns by nonmonotone neural
networks. Neural Networks, 9(8):1477–1489, 1996.
BIBLIOGRAPHY
181
[MSM82]
A.N. Michel, N.R. Sarabudla, and R.K. Miller. Stability analysis of complex
dynamical systems, some computation methods. Circ. Syst. Signal Processing,
1:561–573, 1982.
[Per91]
L. Perko. Differential Equations and Dynamical Systems. Springer-Verlag, New
York, 1991.
[TH86]
D.W. Tank and J.J. Hopfield. Simple neural optimization networks: an a/d converter, signal decision circuit and a linear programming circuit. IEEE Transactions
on Circuits and Systems, 33:533–541, 1986.
[Tib00]
B. Tibken. Estimation of the domain of attraction for polynomial systems via lmis.
In Proc. of 39th IEEE Conference on Decision and Control, Sydney, Australia,
2000.
[Ura89]
K. Urakama. Global stability of some class of neural networks. Transactions of
the IEICE, E72:863–867, 1989.
[Ver90]
F. Verhulst. Nonlinear differential equations and dynamical systems. SpringerVerlag, New York, 1990.
[VV85]
A. Vanelli and M. Vidyasagar. Maximal lyapunov functions and domains of
attraction for autonomous nonlinear systems. Automatica, 21(1):69–80, 1985.
[Wig03]
S. Wiggins. Introduction to Applied Nonlinear Dynamical Systems and Chaos.
Springer-Verlag, 2003.
[YHF99]
Zhang Yi, P.A. Heng, and Ada W.C. Fu. Estimate of exponential convergence
rate and exponential stability for neural networks. IEEE Transactions on Neural
Networks, 10(6):1487–1493, November 1999.
[YHH04]
Z. Yuan, D. Hu, and L. Huang. Stability and bifurcation analysis on a discretetime system of two neurons. Applied Mathematical Letters, 17:1239–1245, 2004.
[YHH05]
Z. Yuan, D. Hu, and L. Huang. Stability and bifurcation analysis on a discrete-time
neural network. Journal of Computational and Applied Mathematicsa, 177:89–
100, 2005.
[Zub64]
V.I. Zubov. Methods of A.M. Lyapunov and their applications. Leningrad Gos.
University, Leningrad, 1964.
[Zub78]
V.I. Zubov. Théorie de la commande. Editions Mir, Moscou, 1978.