Contents
1. Motivation for Nonlinear Control
2. The Tracking Problem
1. Feedback Linearization
3. Adaptive Control
4. Robust Control
1. Sliding mode
2. High-gain
3. High-frequency
5. Learning Control
6. The Tracking Problem, Revisited Using the Desired Trajectory
1. Feedback Linearization
2. Adaptive Control
7. Filtered tracking error r(t) for second-order systems
8. Introduction to Observers
9. Observers + Controllers
10. Filter Based Control
1. Filter + Adaptive Control
11. Summary
12. Homework Problems
1. A1
2. A2
3. A3
4. A4 – Design observer, observer + controller, control based on filter
Nonlinear Control
• Consider the following problem:
  ẋ = f(x, u),  x ∈ Rⁿ
  y = h(x),  u ∈ Rⁿ
  Find u = r(x) (state feedback) or u = φ(y) (output feedback)
  so that the closed-loop system ẋ = f(x, r(x)) or ẋ = f(x, φ(y)) exhibits the
  desired stability and performance characteristics:
  – stability: x is bounded and goes to 0
  – performance: how x approaches the setpoint
• Why do we use nonlinear control:
  – Tracking: regulate the state to a setpoint or trajectory
  – Ensure the desired stability properties
  – Ensure the appropriate transients
  – Reduce the sensitivity to plant parameters
Applications and Areas of Interest
Mechanical Systems
Electrical/Computer Systems
• Electric Motors
• Magnetic Bearings
• Visual Servoing
• Structure from Motion
Nonlinear Control
and Estimation
• Textile and Paper Handling
• Overhead Cranes
• Flexible Beams and Cables
• MEMS Gyros
Chemical Systems
• Bioreactors
• Tumor Modeling
Robotics
• Position/Force Control
• Redundant and Dual Robots
Mobile Platforms
• Path Planning
• Fault Detection
• UUV, UAV, and UGV
• Teleoperation and Haptics
• Satellites & Aircraft
Automotive Systems
• Steer-By-Wire
• Thermal Management
• Hydraulic Actuators
• Spark Ignition
• CVT
The Mathematical Problem
Typical Electromechanical System Model:
  Electrical Dynamics:  ẏ = g(x, y, u)   (control input u)
  Mechanical Dynamics:  ẋ = f(x, y)
Classical Control Solution:
  [Block diagram: the nonlinear electrical and mechanical dynamics ẏ = g(x, y, u)
  and ẋ = f(x, y) are approximated by linear blocks gLinear and fLinear, and a
  Linear Controller closes the loop from the state x back to the input u.]
Obstacles to Increased Performance:
  – System Model often contains Hard Nonlinearities
  – Parameters in the Model are usually Unknown
  – Actuator Dynamics cannot be Neglected
  – System States are Difficult or Costly to Measure
The Mathematical Solution or Approach
  Advanced Nonlinear Control Design Techniques + Realtime Hardware/Software
  (a Mechatronics-Based Solution) → New Control Solutions
Nonlinear Lyapunov-Based Techniques Provide:
  – Controllers Designed for the Full-Order Nonlinear Models
  – Adaptive Update Laws for On-line Estimation of Unknown Parameters
  – Observers or Filters for State Measurement Replacement
  – Analysis that Predicts System Performance by Providing Envelopes for the
    Transient Response
  [Block diagrams: a Nonlinear Controller paired with a Nonlinear Parameter
  Estimator for the dynamics ẏ = g(x, y, u), ẋ = f(x, y); a Nonlinear Controller
  paired with a Nonlinear Observer; and a plot of transient performance
  envelopes bounding the response x(t).]
Nonlinear Control vs. Linear Control
• Why not always use a linear controller?
  – It just may not work.
Ex:  ẋ = x + u³,  x ∈ R
When u = 0, the equilibrium point x = 0 is unstable.
Choose u = −kx. Then
  ẋ = x − k³x³.
We see that the system can't be made asymptotically stable at x = 0 by such a
linear feedback (the unstable linear part ẋ = x is unchanged near the origin).
On the other hand, a nonlinear feedback does exist:
  u(x) = −(kx)^(1/3)
Then ẋ = x − kx = (1 − k)x,
which is asymptotically stable if k > 1.
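The two feedback laws can be compared in a quick simulation (a sketch, not from the slides: the plant ẋ = x + u³ and the gain condition k > 1 are from the example above; the initial condition, gain value, and Euler integration are assumptions):

```python
import math

def simulate(feedback, x0=0.5, dt=1e-3, t_end=8.0):
    """Euler-integrate xdot = x + u(x)**3 and return the final state."""
    x = x0
    for _ in range(int(t_end / dt)):
        u = feedback(x)
        x += dt * (x + u ** 3)
    return x

k = 2.0  # k > 1 is required for the nonlinear law

# Linear feedback u = -k*x: closed loop xdot = x - k^3 x^3 (origin still unstable)
x_lin = simulate(lambda x: -k * x)

# Nonlinear feedback u = -(k*x)^(1/3): closed loop xdot = (1 - k) x
x_nl = simulate(lambda x: -math.copysign(abs(k * x) ** (1.0 / 3.0), x))

print(x_lin, x_nl)
```

With the linear law the state settles on a spurious nonzero equilibrium, while the cube-root law drives it to the origin.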
Example
• Even if a linear feedback exists, a nonlinear one may be better.
Ex:  ÿ = u
Output Feedback (OFB):  u = −k₁y + v
  ÿ + k₁y = v
  For v = 0 the closed loop is an undamped oscillator: the state circles in the
  (y, ẏ) phase plane and never settles.
Full-State Feedback (FSFB):  u = −k₂ẏ − k₁y + v
  ÿ + k₂ẏ + k₁y = v
  For v = 0 the damping term k₂ẏ makes the trajectories spiral to the origin.
Example (continued)
Let's use a nonlinear controller. To design it, consider the same system in the
form:
  ẋ₁ = x₂
  ẋ₂ = u = −kx₁
If k = +1: the phase portrait is a family of circles (a center).
If k = −1: the phase portrait is a saddle; its stable trajectories lie on the
line x₂ = −x₁, which is exponentially stable.
Why is that especially interesting? If we could get onto that line, then the
system converges to the origin.
Both systems have interesting properties — can we combine the best features of
each into a single control?
Example (continued)
Switch k from +1 to −1 appropriately and obtain a variable structure system:
  k = +1  if x₁s > 0
  k = −1  if x₁s < 0
  where s = x₁ + x₂
[Phase portrait: circular arcs (k = +1) and saddle trajectories (k = −1) are
patched together so that every trajectory reaches the sliding line s = 0.]
On the sliding line x₁ + x₂ = 0 the dynamics reduce to ẋ₁ = −x₁.
We have created a new trajectory: the system is insensitive to disturbances in
the sliding regime → variable structure control.
Example (continued)
  ẋ₁ = x₂
  ẋ₂ = −kx₁,  with k = +1 if x₁s > 0, k = −1 if x₁s < 0, where s = x₁ + x₂
[Phase portraits of the switched system: trajectories first rotate onto the
sliding line and then slide along it to the origin.]
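The switching logic above can be simulated directly (a sketch: the dynamics, switching law, and s = x₁ + x₂ are from the slides; the initial condition and step size are assumptions):

```python
def vss_step(x1, x2, dt):
    """One Euler step of x1' = x2, x2' = -k*x1 under the switching law."""
    s = x1 + x2                       # sliding variable
    k = 1.0 if x1 * s > 0 else -1.0   # variable structure gain
    return x1 + dt * x2, x2 - dt * k * x1

x1, x2, dt = 1.0, 0.0, 1e-4
for _ in range(int(20.0 / dt)):
    x1, x2 = vss_step(x1, x2, dt)
print(x1, x2)
```

The trajectory reaches the line s = 0 on a circular arc and then chatters along it while ẋ₁ = −x₁ takes the state to the origin.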
The Tracking Problem
Consider the system:
  ẋ = f(x) + u
We need to accomplish two control objectives:
1) Control Objective – make x → x_d (x_d is a desired trajectory), assuming
   x_d, ẋ_d ∈ L∞.
2) Hidden Control Objective – keep everything bounded (i.e., x, ẋ, u ∈ L∞).
We need to make some assumptions first:
1) x is measurable.
2) If x ∈ L∞, then f(x) ∈ L∞.
3) ẋ = f(x) + u has a solution.
4) x(0) ∈ L∞.
The Tracking Problem (continued)
Let the tracking error, e, be defined as
  e = x_d − x
  ė = ẋ_d − ẋ
Now we can substitute for ẋ:
  ė = ẋ_d − f(x) − u
Letting ė = −ke, we get the control
  u = ẋ_d − f(x) + ke
(the feedforward terms ẋ_d − f(x) are feedback linearization with exact model
knowledge; ke is the feedback term).
Now, solve the differential equation:
  e(t) = e(0) exp(−kt)
Finally, ensure all signals are bounded:
  x_d, ẋ_d ∈ L∞ (by assumption), e ∈ L∞ ⇒ x ∈ L∞ ⇒ f(x) ∈ L∞ ⇒ u ∈ L∞ ⇒ ẋ ∈ L∞
All signals are bounded!
Example – Exact Model Knowledge
[Figure: a mass with control input u(t), a nonlinear damper term bx³, and a
disturbance a sin(t); a and b are constants, assumed known.]
• Dynamics:  ẋ = bx³ + a sin(t) + u
• Tracking Control Objective:  e = x_d − x; drive e(t) to zero.
• Open-Loop Error System:
  ė = ẋ_d − ẋ = ẋ_d − bx³ − a sin(t) − u
• Controller (feedforward + feedback):
  u = ẋ_d − bx³ − a sin(t) + ke
• Closed-Loop Error System:  ė = −ke
• Solution:  e(t) = e(0) exp(−kt)  →  exponential stability.
Example – Exact Model Knowledge (a different perspective on the control design)
• Open-Loop Error System:
  ė = ẋ_d − ẋ = ẋ_d − bx³ − a sin(t) − u
• Lyapunov Function:
  V = ½e²;  V̇ = eė = e(ẋ_d − bx³ − a sin(t) − u)
• Control Design (feedforward + feedback; a, b assumed known constants):
  u = ẋ_d − bx³ − a sin(t) + ke
• Closed-Loop Error System:  V̇ = −ke²
• Solution:  e(t) = e(0) exp(−kt)  →  exponential stability.
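The exact-model-knowledge design can be checked numerically (a sketch: the plant, controller, and error definition are from the example; the values a = 1, b = 0.5, k = 5, the trajectory x_d = sin t, and the Euler integration are assumptions):

```python
import math

a, b, k = 1.0, 0.5, 5.0          # assumed plant parameters and gain
x, t, dt = 2.0, 0.0, 1e-3
for _ in range(int(10.0 / dt)):
    xd, xd_dot = math.sin(t), math.cos(t)
    e = xd - x
    # feedforward (exact model knowledge) + feedback
    u = xd_dot - b * x**3 - a * math.sin(t) + k * e
    x += dt * (b * x**3 + a * math.sin(t) + u)   # plant
    t += dt
err = abs(math.sin(t) - x)
print(err)  # tracking error after 10 s
```

Because the model terms cancel exactly, the closed loop is ė = −ke and the error decays exponentially.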
Adaptive Control
Consider a linearly parameterizable function (the unknown constants can be
factored out):
  f(x) = W(x)θ,  for example f(x) = θ₁x² + θ₂x³ sin(x)
where the regression matrix W(x) is known and θ ∈ Rᵖ is an unknown constant
vector. By Assumption 2, if x ∈ L∞ then both f(x) and W(x) are bounded.
Let
  ė = ẋ_d − W(x)θ − u                                            (1)
and take the control (a feedforward term based on an estimate of the
parameters, yet to be designed, plus feedback)
  u = ẋ_d − W(x)θ̂ + ke                                           (2)
Let the parameter estimation error θ̃ be defined as
  θ̃ = θ − θ̂
Now, combining (1) and (2), we get
  ė = −ke − W(x)θ̃
Adaptive Control (continued)
Choose the Lyapunov candidate
  V = ½e² + ½θ̃ᵀΓ⁻¹θ̃ = ½zᵀ diag(1, Γ⁻¹) z,  where z = [e, θ̃ᵀ]ᵀ
Q: Why is this a good candidate?
A: It is lower bounded (not necessarily by zero), radially unbounded in z
("V explodes" as e and θ̃ "explode"), and positive definite in z.
Lyapunov-like lemma: if
1) V ≥ 0,
2) V̇ ≤ −g(t), where g(t) ≥ 0, and
3) ġ(t) ∈ L∞ (if ġ(t) is bounded, then g(t) is uniformly continuous),
then lim_{t→∞} g(t) = 0.
We will use this lemma by getting e (and θ̃) into g and satisfying the
conditions on g.
Note: detailed in de Queiroz.
Adaptive Control (continued)
With our candidate Lyapunov function
  V = ½e² + ½θ̃ᵀΓ⁻¹θ̃
taking the derivative along the closed-loop error system gives
  V̇ = eė + θ̃ᵀΓ⁻¹θ̃̇
Since θ is constant, θ̃̇ = −θ̂̇, so
  V̇ = e(−ke − W(x)θ̃) − θ̃ᵀΓ⁻¹θ̂̇
  V̇ = −ke² − θ̃ᵀ(Wᵀ(x)e + Γ⁻¹θ̂̇)    ← design θ̂̇ to help the Lyapunov analysis
Letting the update law be θ̂̇ = −ΓWᵀ(x)e, we finally get
  V̇ = −ke²
(We didn't get a −θ̃² term here, so the analysis is more complicated.)
Therefore V ∈ L∞, so e, θ̃, θ̂ ∈ L∞ ⇒ x ∈ L∞ ⇒ u ∈ L∞: all signals are bounded!
For this problem
  g(t) = ke²  and  ġ(t) = 2keė ∈ L∞
so g(t) → 0, and that means
  e → 0  ⇒  x → x_d
So our closed-loop system is
  ė = −ke − W(x)θ̃  and  θ̃̇ = ΓWᵀ(x)e
We now have a dynamic control (the control has dynamics), compared to state
feedback, which is a static control.
Q: So, does θ̃ → 0?
A: Not necessarily! (We can't claim to identify the parameters.)
Example – Unknown Model Parameters
(a, b are unknown constants)
• Open-Loop Error System:
  ė = ẋ_d − ẋ = ẋ_d − bx³ − a sin(t) − u
• Control Design:
  u = ẋ_d − b̂(t)x³ − â(t) sin(t) + ke
Same controller as before, but â(t) and b̂(t) are now functions of time. How do
we adjust â(t) and b̂(t)? Use the Lyapunov stability analysis to develop an
adaptive control design tool for compensation of parametric uncertainty.
• Closed-Loop Error System (in terms of the parameter errors):
  ė = −ke − b̃(t)x³ − ã(t) sin(t)
  where ã(t) = a − â(t) and b̃(t) = b − b̂(t)
At this point, we have not fully developed the controller, since â(t) and b̂(t)
are yet to be determined.
Example – Unknown Model Parameters
Fundamental Theorem (effects of each condition):
i)  If V(t) ≥ 0 and
ii) V̇(t) ≤ 0, then V(t) is bounded and V(t) approaches a constant;
iii) if, in addition, V̈(t) is bounded (so V̇(t) is uniformly continuous), then
     lim_{t→∞} V̇(t) = 0.
• Non-Negative Function:  V(t) = ½e² + ½ã² + ½b̃²   (satisfies condition i)
• Time Derivative of V(t):  V̇(t) = eė − ãâ̇ − b̃b̂̇
Substitute the closed-loop error dynamics ė = −ke − b̃(t)x³ − ã(t) sin(t), then
design â̇(t) and b̂̇(t) so that conditions ii) and iii) hold and conclude
  lim_{t→∞} e(t) = 0.
Example – Unknown Model Parameters
• Substitute the Error System:
  V̇(t) = −ke² − b̃(t)(ex³ + b̂̇) − ã(t)(e sin(t) + â̇)
How do we select â̇(t) and b̂̇(t) such that V̇(t) ≤ 0?
• Update Law Design:
  b̂̇ = −ex³,  â̇ = −e sin(t)
• Substitute in the Update Laws:
  V̇(t) = −ke² ≤ 0  ⇒  V(t) ≥ 0 and V̇(t) ≤ 0
  ⇒ (Fundamental Theorem) V(t) is bounded ⇒ all signals are bounded
  ⇒ (Fundamental Theorem) lim_{t→∞} V̇(t) = 0 ⇒ lim_{t→∞} e = 0
  → control objective achieved.
• Resulting control (feedforward + feedback) — the control structure is
  derived from the stability analysis:
  u = ẋ_d + x³ ∫₀ᵗ e x³ dτ + sin(t) ∫₀ᵗ e sin(τ) dτ + ke
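The complete adaptive design can be exercised in simulation (a sketch: the plant, controller, and update laws are from the slides; the true values a = 1, b = 0.5 — hidden from the controller — the gains, and x_d = sin t are assumptions):

```python
import math

a_true, b_true = 1.0, 0.5        # unknown to the controller
k, dt = 5.0, 1e-3
x, a_hat, b_hat, t = 0.0, 0.0, 0.0, 0.0
for _ in range(int(30.0 / dt)):
    xd, xd_dot = math.sin(t), math.cos(t)
    e = xd - x
    u = xd_dot - b_hat * x**3 - a_hat * math.sin(t) + k * e
    # update laws derived from the Lyapunov analysis
    a_hat_dot = -e * math.sin(t)
    b_hat_dot = -e * x**3
    x += dt * (b_true * x**3 + a_true * math.sin(t) + u)   # plant
    a_hat += dt * a_hat_dot
    b_hat += dt * b_hat_dot
    t += dt
err = abs(math.sin(t) - x)
print(err)  # tracking error after 30 s
```

Tracking is recovered without knowing a or b; whether â, b̂ converge to the true values depends on excitation, as the slides note.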
How Can We Use the Adaptive Controller?
Design adaptive control to track a desired trajectory while compensating for
unknown, constant parameters (parametric uncertainty):
  ẋ = f(x, θ) + u,  with  u = h₁(x, θ̂),  θ̂̇ = h₂(x)
Adaptive control with backstepping in cascaded subsystems, to track a desired
trajectory while compensating for unknown, constant parameters:
  ẏ = f₂(y, θ₂) + u
  ẋ = f(x, θ) + y
Backstepping – the intermediate controller is adaptive:
  y_d = h₁(x, θ̂),  θ̂̇ = h₂(x)
How Can We Use the Adaptive Controller? (continued)
Adaptive control with backstepping in cascaded subsystems, to track a desired
trajectory while compensating for unknown, constant parameters:
  ẏ = f₂(y, θ₂) + u
  ẋ = f(x, θ) + y
Backstepping – intermediate and input controllers are both adaptive:
  y_d = h₁(x, θ̂), θ̂̇ = h₂(x)  and  u = h₃(x, y, θ̂₂), θ̂̇₂ = h₄(x, y)
Backstepping – only the input controller is adaptive:
  u = h₃(x, y, θ̂₂),  θ̂̇₂ = h₄(x, y)
What about the case where the input is multiplied by an unknown parameter —
can we still design adaptive control to track a desired trajectory while
compensating for unknown, constant parameters?
  ẋ = f(x, θ) + θ₂u
(Homework A.2-2)
Robust Control
Recall the system defined by the following:
  ẋ = f(x) + u
  e = x_d − x
  ė = ẋ_d − f(x) − u
We can make several assumptions about the system:
1) x_d, ẋ_d ∈ L∞
2) If x ∈ L∞, then f(x) ∈ L∞.
3) Objective: x → x_d and all signals are bounded.
4) f(x) is linearly parameterizable (i.e., f(x) = W(x)θ).
   → Used for Adaptive control ONLY.
5) |f(x)| ≤ ρ(x), where f(x) is the unknown dynamics and ρ(x) is a known
   bounding function — a restriction on the structure of the bound, but not on
   the uncertainty itself.
   → We use this assumption for Robust (Sliding Mode) control ONLY!
Robust Control (continued)
Feedback and feedforward motivated by:  ė = ẋ_d − f(x) − u,  |f(x)| ≤ ρ(x)
Now, let the control be
  u = ke + ẋ_d + V_R
where V_R is an auxiliary robust control term that we can choose. Consider the
three following functions, where ε > 0:
  V_R1 = ρ(x) sgn(e) = ρ(x) e/|e|   Sliding mode — in reality we can't
                                     implement this because it is not defined
                                     at e = 0; practically (as in MATLAB),
                                     V_R2 and V_R3 are a response to fix this
                                     mathematically.
  V_R2 = (1/ε) ρ²(x) e               Robust, high gain.
  V_R3 = ρ²(x) e / (ρ(x)|e| + ε)     Robust, high frequency — if ε is small,
                                     it looks like the sliding-mode
                                     (high-frequency) controller.
We will consider each V_R separately.
Robust (Sliding Mode) Control
Let's try the first function:
  ė = −ke − f(x) − V_R1
(Strictly, this discontinuous differential equation will not have a classical
solution.)
Now, take a Lyapunov candidate
  V = ½e²
  V̇ = eė = e(−ke − f(x) − V_R1)
Use Assumption 5 here, |f(x)| ≤ ρ(x), to upper bound the −ef(x) term (a "more
positive" term means a less negative bound on V̇):
  V̇ ≤ −ke² + |e|ρ(x) − eV_R1
Substitute the proposed control V_R1 = ρ(x)e/|e| (note it does not exist at
e = 0 — exactly where we are trying to drive the error system):
  V̇ ≤ −ke² + |e|ρ(x) − ρ(x)e²/|e| = −ke² + |e|ρ(x) − |e|ρ(x) = −ke²
Since e² = 2V:
  V̇ ≤ −2kV  ⇔  V̇ + 2kV ≤ 0  ⇔  V̇ + 2kV = −s(t), where s(t) ≥ 0
(differential inequality → differential equality → solve).
Robust (Sliding Mode) Control (continued)
Solving the differential equation (exp is the base of the natural logarithm),
we get
  V(t) = V(0) exp(−2kt) − exp(−2kt) ∫₀ᵗ exp(2kτ) s(τ) dτ
The integral term is always non-negative, so
  V(t) ≤ V(0) exp(−2kt)
  ½e²(t) ≤ ½e²(0) exp(−2kt)
  |e(t)| ≤ |e(0)| exp(−kt)     (upper bound)
So, the system is globally exponentially stable, and all signals are bounded!
Robust (High-Gain) Control
Recall, we proposed u = ke + ẋ_d + V_R2. Now, let's try it with V_R2 and the
same Lyapunov function (same basic proof as V_R1):
  ė = −ke − f(x) − V_R2
  V̇ ≤ −ke² + |e|ρ(x) − eV_R2                 (new robust control term)
  V̇ ≤ −ke² + |e|ρ(x) − (1/ε)ρ²(x)e²
  V̇ ≤ −ke² + |e|ρ(x)[1 − (1/ε)|e|ρ(x)]
This is the high-gain controller. Reminder of the assumption:
5) |f(x)| ≤ ρ(x), ρ(x) ≥ 0 (unknown dynamics, known bounding function).
Robust (High-Gain) Control (continued)
Lyapunov analysis continued. As a reminder, we started with V = ½e², and now we
have
  V̇ ≤ −ke² + |e|ρ(x)[1 − (1/ε)|e|ρ(x)]
This is what the new robust control term accomplished. Is it useful?
Case 1: if |e|ρ(x) ≥ ε:
  (1/ε)|e|ρ(x) ≥ 1 ⇒ 1 − (1/ε)|e|ρ(x) ≤ 0 ⇒ |e|ρ(x)[1 − (1/ε)|e|ρ(x)] ≤ 0
  ⇒ V̇ ≤ −ke²
Case 2: if |e|ρ(x) < ε:
  0 < 1 − (1/ε)|e|ρ(x) ≤ 1 ⇒ |e|ρ(x)[1 − (1/ε)|e|ρ(x)] ≤ |e|ρ(x) < ε
  ⇒ V̇ ≤ −ke² + ε
In both cases
  V̇ ≤ −ke² + ε  ⇒  V̇ ≤ −2kV + ε  ⇒  V̇ + 2kV = −s(t) + ε, where s(t) ≥ 0.
Solve the differential equation (as before).
Robust (High-Gain) Control (continued)
Solving the differential equation yields
  V(t) = V(0) exp(−2kt) − exp(−2kt) ∫₀ᵗ exp(2kτ)s(τ)dτ + ε exp(−2kt) ∫₀ᵗ exp(2kτ)dτ
Discarding the negative term,
  V(t) ≤ V(0) exp(−2kt) + (ε/2k)[1 − exp(−2kt)]
so V(t) is ultimately less than a constant:
  ½e²(t) ≤ ½e²(0) exp(−2kt) + (ε/2k)[1 − exp(−2kt)]
  |e(t)| ≤ sqrt( e²(0) exp(−2kt) + (ε/k)[1 − exp(−2kt)] )
e(t) will go to a ball of size sqrt(ε/k).
The system is Globally Uniformly Ultimately Bounded (GUUB), and all signals are
bounded. Signal chasing: e, x_d are bounded ⇒ x bounded ⇒ u is bounded.
We can make ε small to reduce the size of the ball, but the trade-off is that
the control term V_R2 = (1/ε)ρ²e becomes large.
Robust (High-Frequency) Control
(High-frequency control ⇒ chattering.)
Using the third function, V_R3, we obtain similar results (same basic proof as
V_R2):
  V̇ ≤ −ke² + |e|ρ − ρ²e²/(ρ|e| + ε)
  V̇ ≤ −ke² + [ |e|ρ(ρ|e| + ε) − ρ²e² ] / (ρ|e| + ε)
  V̇ ≤ −ke² + ε ρ|e| / (ρ|e| + ε)
The last fraction ρ|e|/(ρ|e| + ε) is upper bounded by 1, so
  V̇ ≤ −ke² + ε
This is what the new robust control term accomplished. As you can see, the
solution to this inequality is the same as for V_R2: the same basic GUUB
result. Note "Global": it doesn't depend on the initial time or state.
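The three robust terms can be compared numerically (a sketch: the control structure and the three V_R choices are from the slides; the "unknown" f(x) = 2 sin x + 0.5x, its assumed bound ρ(x) = 2 + 0.5|x|, the gains, and x_d = sin t are illustrative assumptions):

```python
import math

def track(VR, k=2.0, dt=1e-4, T=10.0):
    """Simulate xdot = f(x) + u with the robust law u = xd_dot + k*e + VR."""
    f = lambda x: 2.0 * math.sin(x) + 0.5 * x   # "unknown" dynamics
    rho = lambda x: 2.0 + 0.5 * abs(x)          # known bound: |f(x)| <= rho(x)
    x, t = 3.0, 0.0
    for _ in range(int(T / dt)):
        xd, xd_dot = math.sin(t), math.cos(t)
        e = xd - x
        u = xd_dot + k * e + VR(e, rho(x))
        x += dt * (f(x) + u)
        t += dt
    return abs(math.sin(t) - x)   # final tracking error

eps = 0.01
sliding   = track(lambda e, r: r * math.copysign(1.0, e) if e else 0.0)  # V_R1
high_gain = track(lambda e, r: r * r * e / eps)                          # V_R2
high_freq = track(lambda e, r: r * r * e / (r * abs(e) + eps))           # V_R3
print(sliding, high_gain, high_freq)
```

V_R1 chatters at the integration rate, while V_R2 and V_R3 leave a small residual error inside the GUUB ball of size sqrt(ε/k).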
Learning Control
We need to compensate for an unknown periodic disturbance. Let's take another
look at the system from the previous control:
  ẋ = f(x) + u,  e = x_d − x,  ė = ẋ_d − f(x) − u
For each control type, we attempt to make different assumptions. Those
assumptions eventually help us in the proof of stability and boundedness of
the system. For adaptive control, we made the assumption that f(x) was
linearly parameterizable (f(x) = W(x)θ). For the Robust (Sliding Mode)
control, we made the assumption that f(x) was unknown but could be bounded by
some known function, |f(x)| ≤ ρ(x). For learning control, we make the
assumption that f(x(t)) is periodic along the trajectory:
  f(x(t)) = f(x(t − T))
Let d(t) ≜ −f(x(t)) ("d" for disturbance), which leaves us with
  ė = ẋ_d + d(t) − u
We also know, via our assumption, that
  d(t) = d(t − T)
Learning Control (continued)
Now, take the control to be
  u = ẋ_d + ke + d̂
so that
  ė = −ke + (d − d̂) = −ke + d̃,  where d̃ ≜ d − d̂ (the closeness of d̂ to d).
Our task is to design d̂. So, let's try the saturated learning update
  d̂(t) = sat_β(d̂(t − T)) + k_d e,
  where sat_β(x) = x for |x| ≤ β and sat_β(x) = β sgn(x) for |x| > β.
(Without the saturation we can't prove that d̂ is bounded.)
We make the assumption that the magnitude of the disturbance is bounded by a
known constant β:
  |d(t)| ≤ β     (β is not actually used in the control)
So, since β is an upper bound, we can say
  d(t) = sat_β(d(t)) = sat_β(d(t − T))
and, substituting the proposed d̂,
  d̃(t) = sat_β(d(t − T)) − sat_β(d̂(t − T)) − k_d e   ← use this in the
stability proof!
Learning Control (continued)
We choose the following Lyapunov candidate to investigate stability:
  V = ½e² + (1/2k_d) ∫_{t−T}^{t} [sat_β(d(τ)) − sat_β(d̂(τ))]² dτ
V ≥ 0 — can you prove this?
Taking the derivative, using Leibniz's rule for the derivative of an integral,
  d/dx ∫_{u(x)}^{v(x)} f(τ)dτ = f(v(x))·dv/dx − f(u(x))·du/dx,
gives
  V̇ = e(−ke + d̃(t)) + (1/2k_d)[sat_β(d(t)) − sat_β(d̂(t))]²
                      − (1/2k_d)[sat_β(d(t−T)) − sat_β(d̂(t−T))]²
From the d̃ expression derived on the previous slide,
  sat_β(d(t−T)) − sat_β(d̂(t−T)) = d̃ + k_d e
so the last term is −(1/2k_d)(d̃ + k_d e)². Expanding
  (d̃ + k_d e)² = d̃² + 2k_d e d̃ + k_d² e²
the cross term cancels the ed̃ term from eė, leaving
  V̇ = −ke² + (1/2k_d)[sat_β(d(t)) − sat_β(d̂(t))]² − (1/2k_d)d̃² − (k_d/2)e²
Also, by the definition of the bound, sat_β(d(t)) = d(t).
Learning Control (continued)
Math note: (x − y)² ≥ [sat_β(x) − sat_β(y)]². Applying it,
  (1/2k_d)[sat_β(d(t)) − sat_β(d̂(t))]² − (1/2k_d)d̃² ≤ 0
and −(k_d/2)e² ≤ 0, so
  V̇ ≤ −ke²
So V̇ ≤ −g(t) for g(t) = ke² ≥ 0, and
  ġ(t) = 2keė ∈ L∞
So g(t) → 0, and that means
  V ∈ L∞ ⇒ e ∈ L∞ ⇒ x ∈ L∞ ⇒ d̂, e, x, u ∈ L∞, and e → 0.
The system is Globally Asymptotically Stable (GAS), and all signals are
bounded.
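The learning update can be implemented with a one-period buffer of d̂ values (a sketch: the control and saturated update law are from the slides; the disturbance, period T = 2, gains, and β = 3 are illustrative assumptions):

```python
import math

T, dt = 2.0, 1e-3
N = int(T / dt)                        # samples per period
k, kd, beta = 10.0, 5.0, 3.0
sat = lambda v: max(-beta, min(beta, v))

dist = lambda t: 2.0 * math.sin(math.pi * t) + 0.5   # unknown, period T = 2
d_hat = [0.0] * N                      # one period of the learning estimate
x, t = 0.0, 0.0
for i in range(15 * N):                # run for 15 periods
    xd, xd_dot = math.sin(t), math.cos(t)
    e = xd - x
    j = i % N                          # d_hat[j] still holds the value from t - T
    d_hat[j] = sat(d_hat[j]) + kd * e  # d_hat(t) = sat(d_hat(t-T)) + kd*e
    u = xd_dot + k * e + d_hat[j]
    x += dt * (-dist(t) + u)           # plant: xdot = -d(t) + u, d periodic
    t += dt
err = abs(math.sin(t) - x)
print(err)
```

Each pass over the period shrinks the repeating part of the error, so the tracking error decays from one period to the next even though d(t) is never modeled.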
The Tracking Problem, Revisited Using the Desired Trajectory
Feedback Linearization
We want to build a tracking controller for the following system:
  ẋ = f(x) + u
  e = x_d − x  ⇒  x = x_d − e
  ė = ẋ_d − f(x) − u
where our control is
  u = ke + V_aux + ẋ_d
which yields
  ė = −ke − f(x) − V_aux
For this problem, we assume f(x) ∈ C¹ (once differentiable — this assumption is
needed in the analysis but not required to implement the control). Adding and
subtracting f(x_d):
  ė = −ke − f(x_d) − f̃ − V_aux,  where f̃ ≜ f(x) − f(x_d)
(so if e = 0 then f̃ = 0). We also assume |f̃| ≤ |e| ρ(x_d, e), where ρ is
non-decreasing and ρ ≥ 0, and that f(x) is known. So, choosing
  V_aux = −f(x_d)
we get
  ė = −ke − f̃
Mean Value Theorem (for a scalar function)
For some c between x and x_d,
  f̃ = f(x) − f(x_d) = f′(c)(x − x_d) = −f′(c)e,  c ∈ (x, x_d)
so
  |f̃| ≤ |e| ρ(x_d, e)
and if e = 0 then f̃ = 0.
The Tracking Problem, Revisited (continued)
Let's see what f̃ does to the system. Let our Lyapunov candidate be
  V = ½e²,  so 2V = e²
  V̇ = eė = −ke² − ef̃
  V̇ ≤ −ke² + e² ρ(|e|)
(we drop the x_d argument of ρ just to clarify the next steps — it is still
there). Let k = 1 + k_n; then we have
  V̇ ≤ −e² − e²[k_n − ρ(|e|)]
  V̇ ≤ −e²                if k_n ≥ ρ(|e|)
  V̇ ≤ −2V                if k_n ≥ ρ(√(2V))
  V(t) ≤ V(0) exp(−2t)   if k_n ≥ ρ(√(2V(t)))
  V(t) ≤ V(0) exp(−2t)   if k_n ≥ ρ(√(2V(0)))
The last step holds because k_n ≥ ρ(√(2V(0))) ≥ ρ(√(2V(t))), which is true
because ρ is non-decreasing and V is non-increasing.
The Tracking Problem, Revisited (continued)
Now we can write (the region is adjustable — not a fixed local region):
  ½e²(t) ≤ ½e²(0) exp(−2t)   if k_n ≥ ρ(|e(0)|)
  |e(t)| ≤ |e(0)| exp(−t)    if k_n ≥ ρ(|e(0)|)
So, we have semi-global exponential tracking! It is semi-global (instead of
just local) because we can, in theory, set k_n as high as we want. Also, as
long as the assumptions are met, all signals will remain bounded.
Design alternatives:
  V_aux = −f(x)   → GES
  V_aux = −f(x_d) → SGES (this may work better in an experiment if there is
                    noise on x, since f(x_d) could be pre-computed)
The Tracking Problem, Revisited (continued)
Adaptive Control
What if we assumed f(x) was linearly parameterizable (i.e., f(x) = W(x)θ)?
Then we get
  f(x_d) = W(x_d)θ
  ė = −ke − W(x_d)θ − f̃ − V_aux
Letting V_aux = −W(x_d)θ̂ makes ė = −ke − W(x_d)θ̃ − f̃, and we use the update
law
  θ̂̇ = −ΓWᵀ(x_d)e,  so that  θ̃̇ = ΓWᵀ(x_d)e
The Tracking Problem, Revisited (continued)
If we let our Lyapunov function be
  V = ½e² + ½θ̃ᵀΓ⁻¹θ̃
we get, using |f̃| ≤ |e|ρ(|e|), k = 1 + k_n, and the cancellation
−eW(x_d)θ̃ − θ̃ᵀΓ⁻¹θ̂̇ = 0 (from the design of the adaptation law):
  V̇ ≤ −e² − e²[k_n − ρ(|e|)]
  V̇ ≤ −e²   if k_n ≥ ρ(|e|)
  V̇ ≤ −e²   if k_n ≥ ρ(√(2V(0)))
Be careful! We can't plug in 2V for e² here. Why? V depends on θ̃ too, so we
cannot conclude V̇ ≤ −2V (no exponential rate).
Finally, we can show
  V̇ ≤ −g(t), where g(t) = e² ≥ 0
  ġ(t) = 2eė ∈ L∞, so lim_{t→∞} e(t) = 0
We have semi-global asymptotic tracking.
Design alternatives:
  θ̂̇ = −ΓWᵀ(x)e   → GAS
  θ̂̇ = −ΓWᵀ(x_d)e → SGAS (this may work better in an experiment if there is
                    noise on x, since W(x_d) could be pre-computed)
Continuous Asymptotic Tracking
Scalar system:  ẋ = φ + u, with φ an unknown constant.
Let u = −x − φ̂, so that ẋ = −x + φ̃, where φ̃ = φ − φ̂.
If we let
  V = ½x² + ½φ̃²
  V̇ = xẋ + φ̃φ̃̇ = −x² + xφ̃ − φ̃φ̂̇
(having used ẋ = −x + φ̃ and φ̃̇ = −φ̂̇), then letting φ̂̇ = x gives
  V̇ = −x²
Now try the new approach: take
  V = ½x² + P
so that
  V̇ = −x² + xφ̃ + Ṗ
We don't know P explicitly, so V is unknown; but choosing Ṗ = −φ̃x gives
V̇ = −x², with
  P(t) = −∫_{t₀}^{t} φ̃(τ) x(τ) dτ + a
for some constant of integration a.
Continuous Asymptotic Tracking (continued)
If you knew that φ̃̇ = −x (i.e., φ̂̇ = x with φ constant), then
  P(t) = −∫_{t₀}^{t} φ̃(τ) x(τ) dτ + a = ∫_{t₀}^{t} d/dτ [½φ̃²(τ)] dτ + a
       = ½φ̃²(t) − ½φ̃²(t₀) + a
Choosing the constant of integration a = ½φ̃²(t₀) gives P(t) = ½φ̃²(t) ≥ 0 and
V̇ = −x², recovering the original Lyapunov function.
This solution is not unique, even though we found it two different ways.
Continuous Asymptotic Tracking (continued)
General problem: the control is multiplied by an unknown function that we
can't invert. Consider the scalar system
  ẋ = f(x) + g(x)u,  where f and g are unknown
(if g(x) passes through zero then we can't control the system; here, g(x) > 0).
We want x(t) → x_d(t). We can rewrite the system as
  m(x)ẋ = f̄(x) + u,  where m(x) ≜ g(x)⁻¹ and f̄(x) ≜ g(x)⁻¹f(x)
(here, m(x) > 0). We make the following assumptions:
A1) x_d ∈ C³
A2) m(x), ∂m/∂x, ∂²m/∂x² ∈ L∞ and f̄(x), ∂f̄/∂x, ∂²f̄/∂x² ∈ L∞,
    as long as x ∈ L∞
A3) m(x_d), ∂m/∂x|_{x_d}, ∂²m/∂x²|_{x_d} ∈ L∞ and
    f̄(x_d), ∂f̄/∂x|_{x_d}, ∂²f̄/∂x²|_{x_d} ∈ L∞
Continuous Asymptotic Tracking (continued)
Let our control be (a proportional + integral structure)
  u = (k_s + 1)e(t) − (k_s + 1)e(t₀) + ∫_{t₀}^{t} [(k_s + 1)α e(τ) + β sgn(e(τ))] dτ
where k_s, α, and β are positive constants. We see that u(t₀) = 0. Here the
error variable is defined as e = x_d − x.
(Note: this controller is continuous; only its time derivative is piecewise
continuous.)
Let's define a new variable r (the filtered tracking error) as
  r = ė + αe
Taking the derivative of u gives
  u̇(t) = (k_s + 1)ė(t) + (k_s + 1)αe(t) + β sgn(e(t)) = (k_s + 1)r(t) + β sgn(e(t))
It can be shown that if r → 0 then e → 0, and if r ∈ L∞ then e, ė ∈ L∞. Why is
this true? From linear systems: ė + αe = r, so
  E(s) = R(s)/(s + α)
is a stable, strictly proper transfer function: if r(t) → 0 then e(t) → 0, and
then ė(t) = r − αe → 0 as well.
Continuous Asymptotic Tracking (continued)
Now, from our original system (with e = x_d − x), we can write
  m(x)ė = m(x)ẋ_d − f̄(x) − u
Differentiating (so that u̇ appears) and using r = ė + αe, we can proceed as
  m(x)ë + ṁ(x)ė = m(x)ẍ_d + ṁ(x)ẋ_d − ḟ̄(x) − u̇
  m(x)ṙ = m(x)(ẍ_d + αė) + ṁ(x)ẋ − ḟ̄ − u̇
which we rearrange as
  m(x)ṙ = −½ṁ(x)r − e + N(x, ẋ, t) − u̇
where
  N(x, ẋ, t) ≜ m(x)(ẍ_d + αė) + ṁ(x)(½r + ẋ) + e − ḟ̄(x)
Why the ±½ṁr and ±e terms? They are motivated by the analysis: −½ṁr will
cancel the ½ṁr² term that comes from differentiating ½m(x)r², and −e will
cancel a cross term.
Substituting for u̇ gives the closed-loop dynamics to analyze:
  m(x)ṙ = −½ṁ(x)r − e + N(·) − (k_s + 1)r − β sgn(e)
Continuous Asymptotic Tracking (continued)
Cancelation by
The second term that results from the derivative of the Lyapunov function
term introduced
is canceled by the term introduced in previous slide.
in previous slide.
Let's study the stability of our control using the following Lyapunov candidate:
Solve
V 1 e 2 1 m( x)r 2 Vnew
2
2
r e e
V e( e r ) r 1 m( x) r e (k s 1)r r N () sgn(e) 1 m( x) r 2 Vnew
for e
2
2
V e 2 r 2 r N () sgn(e) k s r Vnew Crucial step: N d ( xd , xd , t )
V e 2 r 2 r N d () sgn(e) r ( N () N d () k s r ) Vnew
Let us define a new variable L, as follows:
L(t ) r N d sgn( e)
Nd
small if x xd
N ( x , x , t ) | x xd , x x d
We assume that N N N d can be bounded as follows
e
z
,
z
r
where, (), is a non-decreasing, positive, scalar function
N () z
So, due to the above assumptions N d , N d L .
Nd always bounded
47
Continuous Asymptotic Tracking (continued)
Let V_new ≜ ζ_b − ∫_{t₀}^{t} L(τ) dτ (ζ_b is a positive constant); then
V̇_new = −L(t), which cancels the r[N_d − β sgn(e)] term in V̇. (We still have
to show that V_new ≥ 0 — that is the challenging part.)
Substituting these definitions into the equation for V̇ and using the bound on
Ñ, we get
  V̇ ≤ −αe² − r² + |r|ρ(‖z‖)‖z‖ − k_s r²
Complete the square on the last two terms:
  −k_s r² + |r|ρ(‖z‖)‖z‖ = −k_s [ |r| − ρ(‖z‖)‖z‖/(2k_s) ]² + ρ²(‖z‖)‖z‖²/(4k_s)
(add/subtract the squared term, then find an upper bound by throwing away the
negative term), so
  V̇ ≤ −λ₃‖z‖² + ρ²(‖z‖)‖z‖²/(4k_s),  where λ₃ ≜ min{α, 1}
  V̇ ≤ −[λ₃ − ρ²(‖z‖)/(4k_s)]‖z‖²
Continuous Asymptotic Tracking (continued)
We can also write V in quadratic form. With y ≜ [e, r, √V_new]ᵀ,
  V = ½e² + ½m(x)r² + V_new
satisfies
  λ₁‖y‖² ≤ V ≤ λ₂(x)‖y‖²
where, from the eigenvalues of the diagonal weighting,
  λ₁ ≜ ½ min{1, m̲}  and  λ₂(x) ≜ max{½m̄(x), 1}
(m̲ and m̄(x) being the lower and upper bounds of m(x)).
We then have (continuing from the previous slide)
  V̇ ≤ −[λ₃ − ρ²(‖z‖)/(4k_s)]‖z‖²
so V̇ ≤ −γ‖z‖² for some γ > 0, provided the gain condition
  k_s > ρ²(‖z‖)/(4λ₃)
holds along the trajectory. Since ‖z‖ ≤ ‖y‖ ≤ √(V(t)/λ₁) ≤ √(V(t₀)/λ₁) and ρ
is non-decreasing, it suffices to check the condition at the initial time:
  k_s > ρ²(√(V(t₀)/λ₁))/(4λ₃)
Continuous Asymptotic Tracking (continued)
So, we have semi-global asymptotic tracking! How do you know? Remember our
lemma involving V̇ ≤ −g(t). Recall our Lyapunov candidate
  V = ½e² + ½m(x)r² + V_new
  V̇ ≤ −(negative definite terms) − L(t) + L(t) = −(negative definite terms)
since L(t) = r(t)[N_d(t) − β sgn(e(t))] and V̇_new = −L(t). With V ≥ 0 and
V̇ ≤ −γ‖z‖², the lemma gives asymptotic stability (e, r → 0).
Why not follow this procedure all the time? Because it is difficult to show
that V_new is lower bounded by zero (i.e., that the integral never exceeds
ζ_b).
Continuous Asymptotic Tracking (continued)
So our result is only valid if V_new ≥ 0, i.e., if
  ∫_{t₀}^{t} L(τ) dτ = ∫_{t₀}^{t} [ė(τ) + αe(τ)][N_d(τ) − β sgn(e(τ))] dτ ≤ ζ_b
Remember L = r[N_d − β sgn(e)]. We now show that if β is selected as
  β > |N_d(t)| + (1/α)|Ṅ_d(t)|
(this condition is actually developed on the next slide), then
  ∫_{t₀}^{t} L(τ) dτ ≤ ζ_b
which ensures that V_new is non-negative, with
  ζ_b ≜ β|e(t₀)| − e(t₀)N_d(t₀) ≥ 0.
Continuous Asymptotic Tracking (continued)
Working with just the integral, split L = (ė + αe)(N_d − β sgn(e)):
  ∫_{t₀}^{t} L dτ = ∫_{t₀}^{t} (de/dτ)[N_d(τ) − β sgn(e(τ))] dτ
                  + α ∫_{t₀}^{t} e(τ)[N_d(τ) − β sgn(e(τ))] dτ
Integrate the first integral by parts, noting that
  (d/dτ)|e| = ė sgn(e)   (since ½ d(e²)/dτ / |e| = eė/|e|)
which gives
  ∫_{t₀}^{t} L dτ = e(τ)N_d(τ)|_{t₀}^{t} − ∫_{t₀}^{t} e(τ)(dN_d/dτ) dτ − β|e(τ)| |_{t₀}^{t}
                  + α ∫_{t₀}^{t} e(τ)[N_d(τ) − β sgn(e(τ))] dτ
Grouping terms,
  ∫_{t₀}^{t} L dτ = e(t)N_d(t) − β|e(t)| + β|e(t₀)| − e(t₀)N_d(t₀)
                  + α ∫_{t₀}^{t} e(τ)[N_d(τ) + (1/α)(dN_d/dτ) − β sgn(e(τ))] dτ
Because of the condition on β (β > |N_d| + (1/α)|Ṅ_d|), the integrand is
always non-positive, and e(t)N_d(t) − β|e(t)| ≤ 0. So, we have
  ∫_{t₀}^{t} L(τ) dτ ≤ β|e(t₀)| − e(t₀)N_d(t₀) = ζ_b
Thus V_new ≥ 0. Done!
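The whole design can be exercised on a first-order plant with unknown dynamics (a sketch: the control law is from the slides; the plant f(x) = sin x + 0.3x with m(x) = 1, the trajectory x_d = sin t, and gains chosen to satisfy the gain conditions are assumptions):

```python
import math

ks, alpha, beta = 20.0, 2.0, 5.0
f = lambda x: math.sin(x) + 0.3 * x      # unknown to the controller
x, t, dt = 1.5, 0.0, 1e-4
e0 = math.sin(0.0) - x                   # e(t0)
integral = 0.0
for _ in range(int(15.0 / dt)):
    e = math.sin(t) - x                  # only e is measured (no e_dot needed)
    u = (ks + 1) * (e - e0) + integral
    # running integral of (ks+1)*alpha*e + beta*sgn(e)
    integral += dt * ((ks + 1) * alpha * e + beta * math.copysign(1.0, e))
    x += dt * (f(x) + u)                 # plant: m(x) = 1
    t += dt
err = abs(math.sin(t) - x)
print(err)
```

Note that the implemented u is continuous and uses only position-level measurements; the sgn(e) term lives inside the integral, which is the point of the design.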
Feedback Linearization for Second-Order Systems
Consider the system (the general dynamic equation for an n-link robot)
  M(q)q̈ + V_m(q, q̇)q̇ + G(q) + F(q̇) = τ
where M(q) is positive definite and symmetric, and the skew-symmetry property
holds:
  xᵀ[½Ṁ(q) − V_m(q, q̇)]x = 0
Define e = q_d − q. We could rewrite the system as
  M(q)q̈ + V_m(q, q̇)q̇ + N(q, q̇) = τ,  where N(q, q̇) ≜ G(q) + F(q̇)
so that
  Më = Mq̈_d + V_m q̇ + N − τ
If we know everything about the system (the model), we can write
  τ = M(q̈_d + k_v ė + k_p e) + V_m q̇ + N  ⇒  ë + k_v ė + k_p e = 0
What if we try estimates instead?
  τ = M̂(q̈_d + k_v ė + k_p e) + V̂_m q̇ + N̂
  Më = −M̂(k_v ė + k_p e) + (V_m − V̂_m)q̇ + (N − N̂) + (M − M̂)q̈_d
Feedback Linearization Problem (continued)
Continuing from the previous slide (multiplying through by M⁻¹):
  ë + k_v ė + k_p e = (I − M⁻¹M̂)(k_v ė + k_p e) + M⁻¹(M̃q̈_d + Ṽ_m q̇ + Ñ)
where M̃ = M − M̂, Ṽ_m = V_m − V̂_m, and Ñ = N − N̂. So
  ë + k_v ė + k_p e = f(M̃, Ṽ_m, Ñ) = f(e, ė, q_d, q̇_d, q̈_d, q, q̇)
Not good. Why? The uncertainty drives the otherwise linear error dynamics, so
we cannot claim stability from the nominal linear part.
Let's try something else. Define the filtered tracking error
  r = ė + αe  ⇒  ṙ = ë + αė
(From linear systems: ė + αe = r, E(s) = R(s)/(s + α); if r → 0 then e → 0,
and then ė → 0 as well.)
Multiplying through by M gives
  Mṙ = Më + αMė = M(q̈_d + αė) + V_m q̇ + N − τ
  Mṙ = −V_m r + M(q̈_d + αė) + V_m(q̇_d + αe) + N − τ
  Mṙ = −V_m r + Y(q, q̇, q_d, q̇_d, q̈_d)θ − τ
Design your control, letting
  τ = Yθ̂ + kr  and  θ̂̇ = ΓYᵀr
Now, we can write the closed loop as
  Mṙ = −V_m r − kr + Yθ̃,  where θ̃ = θ − θ̂ and θ̃̇ = −ΓYᵀr.
Feedback Linearization Problem (continued)
Our Lyapunov candidate can be selected to be
  V = ½rᵀMr + ½θ̃ᵀΓ⁻¹θ̃
which gives
  V̇ = rᵀMṙ + ½rᵀṀr + θ̃ᵀΓ⁻¹θ̃̇
  V̇ = rᵀ(−V_m r − kr + Yθ̃) + ½rᵀṀr − θ̃ᵀYᵀr
  V̇ = −rᵀkr = −g(t)    (recall the skew-symmetry property rᵀ(½Ṁ − V_m)r = 0)
So, all signals are bounded, and r → 0 (due to our stability lemma), hence
e, ė → 0. Notice that this way did not feedback linearize the system like the
previous one.
Feedback Linearization Problem (continued)
Example – simple case: scalar state, exact model knowledge.
  ẍ = f(x, ẋ) + u
  e = x_d − x,  ė = ẋ_d − ẋ,  ë = ẍ_d − f(x, ẋ) − u
Converting the 2nd-order problem into a 1st-order problem:
  r = ė + αe
  ṙ = ë + αė = ẍ_d − f(x, ẋ) − u + αė
Our Lyapunov candidate can be selected to be
  V = ½r²
which gives
  V̇ = rṙ = r[ẍ_d − f(x, ẋ) − u + αė]   ← opportunity to design the control u(t)
Design
  u = ẍ_d − f(x, ẋ) + αė + r
Then
  V̇ = −r²
V is PD and V̇ is ND, so r → 0; since r(t) → 0 then e(t) → 0 (from linear
systems: ė + αe = r, E(s) = R(s)/(s + α)), and since e, r → 0, ė → 0 as well.
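The filtered-error design for the scalar second-order case can be verified directly (a sketch: the error definitions and control are from the example; the known f(x, ẋ) = −x³ − 0.5ẋ, x_d = sin t, and α = 2 are assumptions):

```python
import math

alpha = 2.0
f = lambda x, v: -x**3 - 0.5 * v         # known nonlinear dynamics
x, v, t, dt = 1.5, 0.0, 0.0, 1e-3
for _ in range(int(10.0 / dt)):
    xd, xd_d, xd_dd = math.sin(t), math.cos(t), -math.sin(t)
    e, edot = xd - x, xd_d - v
    r = edot + alpha * e                  # filtered tracking error
    u = xd_dd - f(x, v) + alpha * edot + r   # gives rdot = -r
    v += dt * (f(x, v) + u)               # plant
    x += dt * v
    t += dt
err = abs(math.sin(t) - x)
print(err)
```

The change of variables reduces the second-order tracking problem to the first-order equation ṙ = −r, and e then follows through the stable filter ė + αe = r.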
Previous Problem Using a Robust Approach
For the previous system, we want to apply a robust control. Write
  Mṙ = −V_m r + W̄
  W̄ ≜ M(q̈_d + αė) + V_m(q̇_d + αe) + N(q, q̇)
We keep the assumptions that M(q) is p.d. symmetric and xᵀ(½Ṁ − V_m)x = 0,
and assume a known bound ‖W̄‖ ≤ ρ.
Let our control be
  τ = kr + V_R,  where we choose from
  V_R1 = ρ sgn(r),  V_R2 = (1/ε)ρ²r,  or  V_R3 = ρ²r/(ρ‖r‖ + ε)
So, our system can be written
  Mṙ = −kr − V_m r + W̄ − V_R
Choose the Lyapunov candidate to be
  V = ½rᵀMr
Taking the derivative (and using skew-symmetry) gives
  V̇ = −rᵀkr + rᵀ(W̄ − V_R)
Previous Problem Using a Robust Approach (continued)
Continuing from the previous slide (e.g., with V_R3):
  V̇ ≤ −λ_min{k}‖r‖² + ‖r‖ρ − ρ²‖r‖²/(ρ‖r‖ + ε)
  V̇ ≤ −λ_min{k}‖r‖² + ε
Since M is p.d. symmetric, we can write
  ½m₁‖r‖² ≤ V = ½rᵀM(q)r ≤ ½m₂‖r‖²     (m₁, m₂ are constants)
where the assumption m₁‖x‖² ≤ xᵀM(q)x ≤ m₂‖x‖² was used. Let
  γ ≜ 2λ_min{k}/m₂
which leads to
  V̇ ≤ −γV + ε
  V(t) ≤ V(0) exp(−γt) + (ε/γ)[1 − exp(−γt)]
Therefore, the system is GUUB.
On a practical note, high gains cause noise to corrupt actual experiments.
Observers
Recall the mechatronics-based solution: advanced nonlinear control design
techniques + realtime hardware/software → new control solutions. Nonlinear
Lyapunov-based techniques provide observers or filters for state measurement
replacement.
[Block diagram: a Nonlinear Observer supplies the state estimate x̂ to a
Nonlinear Controller for the system ẏ = g(x, y, u), ẋ = f(x, y).]
Observers Alone
[Block diagram: given the system ẏ = g(x, y, u), ẋ = f(x, y), a Nonlinear
Observer alone produces an estimate of the state x from the available
measurements.]
Observers
Given the system ẍ = f(x, ẋ) + g(x)u, we have so far assumed that all states
could be measured and used in feedback (full-state feedback, FSFB).
Ex: motor with robotic load:  Jθ̈ + Bθ̇ + N sin(θ) = τ
Standard approach: measure θ and θ̇ to control θ.
Could we reduce cost or improve reliability if we didn't need to measure θ̇?
Example: if the angle θ is measured with an encoder, then the velocity must be
estimated, e.g. using a backwards difference of the measured position. The
backwards difference may yield a noisy estimate of the actual velocity.
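The noise amplification of a backwards difference is easy to demonstrate (a sketch: encoder resolution, sample rate, and the sinusoidal motion are assumed values, not from the slides):

```python
import math

dt = 1e-3                           # sample period
res = 2 * math.pi / 4096            # encoder resolution: 4096 counts/rev
quant = lambda q: round(q / res) * res

worst = 0.0
prev = quant(math.sin(0.0))         # position theta(t) = sin(t)
for i in range(1, 1001):
    t = i * dt
    q = quant(math.sin(t))          # measured (quantized) position
    v_est = (q - prev) / dt         # backwards-difference velocity estimate
    worst = max(worst, abs(v_est - math.cos(t)))  # vs. true velocity
    prev = q
print(worst)
```

A one-count quantization error becomes an error of size (resolution)/dt in the velocity estimate, which here is on the order of the velocity signal itself — the motivation for a model-based observer.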
Observers (continued)
The solution for linear systems was to design an observer for the
unmeasurable states. Consider the linear plant
  ẋ = Ax + Bu
  y = Cx
A full-state feedback control would look like u_fsfb = −kx, where a formula
gives k based on the plant parameters.
Specify a Luenberger Observer as
  x̂̇ = Ax̂ + Bu + Lỹ
  ŷ = Cx̂
where ỹ = y − ŷ, and a formula gives L based on the plant parameters.
Modifying the above control, an observer-based feedback control would use the
state estimate x̂ and look like u_o = −kx̂ (for the plant + the observer). The
separation principle (linear systems ONLY) says that u_o for the plant + the
observer works just like u_fsfb for the plant: in a linear system, we can
design the Observer and the Controller separately.
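A minimal worked example of the Luenberger observer on a double integrator (a sketch: the plant, gains, and pole locations are assumptions; only position is measured, and the control uses the estimate):

```python
# Plant: x1' = x2, x2' = u; y = x1 (position measured, velocity estimated).
K = [2.0, 3.0]        # state-feedback gains: closed-loop poles at -1, -2
L_gain = [4.0, 4.0]   # observer gains: estimation-error poles at -2, -2

x1, x2 = 1.0, -0.5    # true state
xh1, xh2 = 0.0, 0.0   # Luenberger estimate
dt = 1e-3
for _ in range(int(10.0 / dt)):
    u = -(K[0] * xh1 + K[1] * xh2)   # control uses the ESTIMATE only
    y_err = x1 - xh1                 # y~ = y - y^
    x1, x2 = x1 + dt * x2, x2 + dt * u           # plant
    xh1, xh2 = (xh1 + dt * (xh2 + L_gain[0] * y_err),    # observer copies the
                xh2 + dt * (u + L_gain[1] * y_err))      # model + L*y~ feedback
est_err = abs(x1 - xh1) + abs(x2 - xh2)
print(est_err, abs(x1))
```

Both the estimation error and the regulated state go to zero, exactly as the separation principle promises for linear systems.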
Observers (continued)
What about a nonlinear system? Consider the system
  ẋ = f(x) + g(x)u   (nonlinear)
  y = h(x)           (not all of x appears in y, so you will want an observer!)
The goal is an observer/controller pair of the form
  x̂̇ = Ω₁(x̂, u, y),  u = Ω₂(x̂)
where Ω₁(·) and Ω₂(·) are designed. You could try (using the linear systems
approach)
  x̂̇ = f(x̂) + g(x̂)u + Lỹ,  ŷ = h(x̂),  u = −kx̂
but it is difficult to prove a stability result. Note what this means:
  if x̃ = x − x̂, then u = −kx̂ = −kx + kx̃
This estimation error term could destabilize the system (Kokotovic peaking).
In a nonlinear system, we may not be able to design the Observer and the
Controller separately: we can't assume the separation principle holds for
nonlinear systems.
Observers (continued)
Let's try to develop an observer for the scalar (x ∈ R¹), second-order
nonlinear system of the form
  ẍ = f(x, ẋ) + u   (this will not be a general result)
The nonlinear system above can be represented by two cases (note: if we knew
x and ẋ, then we would know f(x, ẋ)):
Case 1) f(·) is known, but unmeasurable, e.g. f(x, ẋ) = x²ẋ⁴.
Case 2) f(·) is uncertain and unmeasurable, e.g. f(x, ẋ) = ax²ẋ⁴, where a is
        unknown.
We will address Case 1 with an observer (Case 2 is more difficult).
For Case 1, we can estimate ẋ with:
a) An open-loop observer:  x̂̈ = f(x, x̂̇) + u
   (no feedback; other possible approaches include a Kalman or particle filter
   as an estimator), or
b) A closed-loop observer, whose error dynamics contain
   f(x, ẋ) − f(x, x̂̇) plus feedback terms, where x̃ = x − x̂.
We now seek to design a closed-loop observer.
Observers (continued)
Start with the estimation error x̃ = x − x̂; then x̃̇ = ẋ − x̂̇ and x̃̈ = ẍ − x̂̈.
Substituting the system dynamics,
  x̃̈ = f(x, ẋ) + u − x̂̈
We need x̃, described by these dynamics, to go to zero; this seems similar to
our previous use of Lyapunov functions to design the controllers. We can see
a hint of what the observer should do (via x̂̈) to make the estimation error
dynamics go to zero:
1) Cancel f(x, ẋ) + u
2) Add feedback (stabilizing) terms
A filtered estimation error (a change of variables) that transforms the
second-order problem into a first-order problem can be defined as
  s = x̃̇ + x̃
This linear relation can be transformed (Laplace transform) into
  X̃(s) = S(s)/(s + 1): if s(t) → 0 then x̃(t) → 0, and then x̃̇(t) → 0.
Observers (continued)
Motivated by the use of the filtered tracking error (and a lot of trial and error),
let's apply the change of variables. Substitution from the system dynamics yields:
    \dot{s} = \ddot{\tilde{x}} + \dot{\tilde{x}} = \ddot{x} - \ddot{\hat{x}} + \dot{\tilde{x}} = f(x,\dot{x}) + u - \ddot{\hat{x}} + \dot{\tilde{x}}
Anticipating the Lyapunov analysis, propose an observer
    \ddot{\hat{x}} = f(x,\dot{\hat{x}}) + u + k_{01}\dot{\tilde{x}} + k_{02}\tilde{x}
Mathematically, this may make \tilde{x} go to zero, but it includes \dot{\tilde{x}}, which depends
on \dot{x}, the quantity we are trying to estimate! There is a solution that we will see later.
Observers (continued)
Substitute the observer into the error dynamics:
    \ddot{\tilde{x}} = \ddot{x} - \ddot{\hat{x}} = -k_{01}\dot{\tilde{x}} - k_{02}\tilde{x} + f(x,\dot{x}) - f(x,\dot{\hat{x}}) = -k_{01}\dot{\tilde{x}} - k_{02}\tilde{x} + \tilde{f}
Note this can be arranged as a linear system driven by \tilde{f}, so we should be able to
pick k_{01} and k_{02} to make \tilde{x} go to zero (if \tilde{f} = 0).
Now substitute \ddot{\tilde{x}} into the s-dynamics. Choosing k_{01} = k_{02} = k + 1:
    \dot{s} = \ddot{\tilde{x}} + \dot{\tilde{x}} = -(k+1)\dot{\tilde{x}} - (k+1)\tilde{x} + \tilde{f} + \dot{\tilde{x}}
           = -k\dot{\tilde{x}} - k\tilde{x} - \tilde{x} + \tilde{f}
           = -ks - \tilde{x} + \tilde{f}
where the last step just substitutes the filter relation \dot{\tilde{x}} = s - \tilde{x}.
Observers (continued)
Consider the Lyapunov candidate:
    V = \frac{1}{2}\tilde{x}^2 + \frac{1}{2}s^2 = \frac{1}{2}z^T z,   where z = [\tilde{x},\ s]^T
    \dot{V} = \tilde{x}(-\tilde{x} + s) + s(-ks - \tilde{x} + \tilde{f})
            = -\tilde{x}^2 - ks^2 + s\tilde{f}
(the cross terms cancel; note we used \dot{\tilde{x}} = s - \tilde{x}; done if \tilde{f} = 0).
Assume |\tilde{f}| \le \zeta_1|\tilde{x}| + \zeta_2|s|. Then
    \dot{V} \le -\tilde{x}^2 - ks^2 + \zeta_1|\tilde{x}||s| + \zeta_2 s^2
We can use the property (for any positive constant \epsilon)
    |x||y| \le \frac{\epsilon}{2}x^2 + \frac{1}{2\epsilon}y^2,   e.g. with \epsilon = 1:   |x||y| \le \frac{1}{2}x^2 + \frac{1}{2}y^2
which allows us to write
    \dot{V} \le -\left(1 - \frac{\zeta_1}{2}\right)\tilde{x}^2 - \left(k - \zeta_2 - \frac{\zeta_1}{2}\right)s^2
If k is selected large enough (and \zeta_1 < 2), \dot{V} is negative definite, so \tilde{x} and s \to 0!
All signals bounded! (Can you show this?) Here we assume that x, \dot{x} \in \mathcal{L}_\infty.
Observers (continued)
Clean-up: remember we introduced \ddot{\hat{x}} to make \tilde{x} go to zero, but it included \dot{\tilde{x}},
the quantity we are trying to estimate. We need to fix that now!
Start with the original observer (not implementable because of \dot{\tilde{x}}):
    \ddot{\hat{x}} = f(x,\dot{\hat{x}}) + u + k_{01}\dot{\tilde{x}} + k_{02}\tilde{x}
and introduce a new variable p. Rewriting as two first-order equations gives an
implementable, closed-loop observer:
    \dot{\hat{x}} = p + k_{01}\tilde{x}
    \dot{p} = f(x,\dot{\hat{x}}) + u + k_{02}\tilde{x}
All signals on the right-hand sides are measurable (or generated by the observer itself)!
This is a trick to make the observer implementable, i.e. it can be applied using
measurable quantities. To see how it works, differentiate \dot{\hat{x}} = p + k_{01}\tilde{x}:
    \ddot{\hat{x}} = \dot{p} + k_{01}\dot{\tilde{x}}
The term k_{01}\dot{\tilde{x}} we needed to stabilize the observation error dynamics (not
measurable) appears automatically; terms that we don't want to differentiate go in \dot{p}.
Observers (continued)
Example: Design an observer to estimate \dot{x} in the open-loop (u = 0) system
    \ddot{x} = -x - \dot{x} + u
(x is measurable but \dot{x} is not).
Define \tilde{x} = x - \hat{x} and s = \dot{\tilde{x}} + \tilde{x} (similar to the filtered tracking error r); then
    \dot{s} = \ddot{\tilde{x}} + \dot{\tilde{x}} = \ddot{x} - \ddot{\hat{x}} + \dot{\tilde{x}}
Propose V = \frac{1}{2}\tilde{x}^2 + \frac{1}{2}s^2:
    \dot{V} = \tilde{x}\dot{\tilde{x}} + s\dot{s} = \tilde{x}(s - \tilde{x}) + s(\ddot{x} - \ddot{\hat{x}} + \dot{\tilde{x}})
Substitute the open-loop system (\ddot{x} with u = 0):
    \dot{V} = -\tilde{x}^2 + s\tilde{x} + s(-x - \dot{x} - \ddot{\hat{x}} + \dot{\tilde{x}})
We would like to have only -\tilde{x}^2 and -s^2 in \dot{V}; design \ddot{\hat{x}} to make this happen
(cancel, stabilize, cancel the cross term):
    \ddot{\hat{x}} = -x - \dot{x} + \dot{\tilde{x}} + \tilde{x} + s
Since -\dot{x} + \dot{\tilde{x}} = -\dot{\hat{x}} and s = \dot{\tilde{x}} + \tilde{x}, this is \ddot{\hat{x}} = -x - \dot{\hat{x}} + \dot{\tilde{x}} + 2\tilde{x},
implemented as the closed-loop observer
    \dot{\hat{x}} = p + \tilde{x},   \dot{p} = -x - \dot{\hat{x}} + 2\tilde{x}
which yields \dot{V} = -\tilde{x}^2 - s^2.
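The observer in this example can be simulated directly. A minimal sketch (Euler integration; initial conditions are arbitrary choices of mine):

```python
import numpy as np

# Plant: xddot = -x - xdot (u = 0); implementable closed-loop observer from above:
#   xhat_dot = p + xtilde,   p_dot = -x - xhat_dot + 2*xtilde
dt, T = 1e-4, 10.0
x, xd = 1.0, 0.5        # true position and (unmeasured) velocity
xhat, p = 0.0, 0.0      # observer states
for _ in range(int(T / dt)):
    xt = x - xhat                      # measurable estimation error
    xhat_dot = p + xt                  # velocity estimate
    p += dt * (-x - xhat_dot + 2.0 * xt)
    xhat += dt * xhat_dot
    xdd = -x - xd                      # plant dynamics
    x += dt * xd
    xd += dt * xdd
pos_err = abs(x - xhat)
vel_err = abs(xd - (p + (x - xhat)))
```

The estimation error obeys \ddot{\tilde{x}} + 2\dot{\tilde{x}} + 2\tilde{x} = 0 here, so both errors decay exponentially.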
Observers (continued)
What kind of terms can we put in f(x,\dot{x}) and cancel directly with the observer?
For the open-loop observer, the analysis leads to
    \dot{V} = -\tilde{x}^2 + s\tilde{x} + s\big( f(x,\dot{x}) - \ddot{\hat{x}} + \dot{\tilde{x}} \big)
Two-part implementation of the observer:
    \dot{\hat{x}} = p + (terms that get differentiated to make \ddot{\hat{x}})
    \dot{p} = (terms that don't get differentiated to make \ddot{\hat{x}})
Split f(x,\dot{x}) = f_1(x) + f_2(x,\dot{x}): put f_1(x) in \dot{p}, and put f_2(x,\dot{x}) in \dot{\hat{x}}
through its time integral. The implementable observer is
    \dot{\hat{x}} = p + \int f_2(x,\dot{x})\,dt + k_{01}\tilde{x},   \dot{p} = f_1(x) + k_{02}\tilde{x}
Basically, we need to be able to find \int f_2(x,\dot{x})\,dt as a known function of x.
Examples of favorable terms:
    f_2(x,\dot{x}) = \dot{x},\ x\dot{x}   \Rightarrow   \int f_2(x,\dot{x})\,dt = x,\ \tfrac{1}{2}x^2
Examples of unfavorable terms:
    f_2(x,\dot{x}) = \dot{x}^2   \Rightarrow   \int f_2(x,\dot{x})\,dt = ?
Observers (continued)
Example: Design an observer to estimate \dot{x} in the open-loop (u = 0) system
    \ddot{x} = -\dot{x}^2 + u
(x is measurable but \dot{x} is not).
Define \tilde{x} = x - \hat{x} and s = \dot{\tilde{x}} + \tilde{x} (similar to the filtered tracking error r); then
    \dot{s} = \ddot{\tilde{x}} + \dot{\tilde{x}} = \ddot{x} - \ddot{\hat{x}} + \dot{\tilde{x}}
Propose V = \frac{1}{2}\tilde{x}^2 + \frac{1}{2}s^2:
    \dot{V} = \tilde{x}\dot{\tilde{x}} + s\dot{s} = -\tilde{x}^2 + s\tilde{x} + s(\ddot{x} - \ddot{\hat{x}} + \dot{\tilde{x}})
Substitute the open-loop system (\ddot{x} with u = 0):
    \dot{V} = -\tilde{x}^2 + s\tilde{x} + s(-\dot{x}^2 - \ddot{\hat{x}} + \dot{\tilde{x}})
We would like to have only -\tilde{x}^2 and -s^2 in \dot{V}, but now we must use the estimate
\dot{\hat{x}} in place of \dot{x} (cancel, stabilize, cancel the cross term):
    \ddot{\hat{x}} = -\dot{\hat{x}}^2 + \dot{\tilde{x}} + \tilde{x} + s
implemented as the closed-loop observer
    \dot{\hat{x}} = p + \tilde{x},   \dot{p} = -\dot{\hat{x}}^2 + 2\tilde{x}
This yields
    \dot{V} = -\tilde{x}^2 - s^2 - s(\dot{x}^2 - \dot{\hat{x}}^2)
We can't cancel the term with \dot{x}^2.
Observers (continued)
Example (cont.): We are left with
    \dot{V} = -\tilde{x}^2 - s^2 - s(\dot{x}^2 - \dot{\hat{x}}^2)
Handle the residual \dot{x}^2 - \dot{\hat{x}}^2 with the Mean Value Theorem: for f(y) = y^2 there
is a c between \dot{x} and \dot{\hat{x}} such that
    \dot{x}^2 - \dot{\hat{x}}^2 = f'(c)(\dot{x} - \dot{\hat{x}}),   and since f(y) is a known function, f'(c) = 2c
Apply norms and rearrange (triangle inequality, using \dot{\tilde{x}} = s - \tilde{x}):
    |\dot{x}^2 - \dot{\hat{x}}^2| = |2c|\,|\dot{\tilde{x}}| \le 2|c|\big( |s| + |\tilde{x}| \big)
which is exactly the form |\tilde{f}| \le \zeta_1|\tilde{x}| + \zeta_2|s| assumed before, so
    \dot{V} \le -\tilde{x}^2 - ks^2 + \zeta_1|\tilde{x}||s| + \zeta_2 s^2
and the earlier analysis applies.
Combining Observers & Controllers (continued)
Tool for Lyapunov analysis: "nonlinear damping".
How big can zy - kz^2 be? (assume k is a positive gain)
    zy - kz^2 \le |z||y| - k|z|^2 = |z|\big( |y| - k|z| \big)
If k|z| \ge |y|, then |z|(|y| - k|z|) \le 0.
If k|z| < |y|, then |z| < |y|/k and
    |z|\big( |y| - k|z| \big) \le |z||y| \le \frac{|y|}{k}|y| = \frac{y^2}{k}
Thus we have the upper bound
    zy - kz^2 \le \frac{y^2}{k}
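The nonlinear damping bound zy - kz^2 <= y^2/k can be spot-checked numerically. A quick sketch (the sampling ranges and gains are arbitrary):

```python
import numpy as np

# Check zy - k*z**2 <= y**2/k over random samples and several gains k > 0.
# Completing the square shows the gap is -k*(z - y/(2k))**2 - 3*y**2/(4k) <= 0.
rng = np.random.default_rng(0)
z = rng.uniform(-10.0, 10.0, 100000)
y = rng.uniform(-10.0, 10.0, 100000)
max_gap = max(np.max(z * y - k * z**2 - y**2 / k) for k in (0.5, 1.0, 5.0))
```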
Observers (continued)
Modification to the previous observer design: use the estimate in place of x and \dot{x}
inside f. Motivated by the filtered tracking error, the s-dynamics are
    \dot{s} = \ddot{\tilde{x}} + \dot{\tilde{x}} = f(x,\dot{x}) + u - \ddot{\hat{x}} + \dot{\tilde{x}}
Anticipating the Lyapunov analysis, propose the observer
    \ddot{\hat{x}} = \hat{f}(\hat{x},\dot{\hat{x}}) + u + k_{01}\dot{\tilde{x}} + k_{02}\tilde{x}
(implemented with the p-trick as before).
Observers (continued)
We are still considering f(\cdot) to be a known function, but we want to distinguish the
fact that we are using an estimate of an unmeasurable quantity. Suppose we redefine
    \hat{f}(\cdot) = \hat{f}(\hat{x},\dot{\hat{x}})   (it now depends on \hat{x}, \dot{\hat{x}} instead of x, \dot{x})
and let \tilde{f} = f(x,\dot{x}) - \hat{f}(\hat{x},\dot{\hat{x}}). If f \in C^1, then we can use the Mean Value
Theorem. In one variable: f(x) - f(\hat{x}) = f'(c)(x - \hat{x}) for some c between \hat{x} and x,
so |f(x) - f(\hat{x})| \le |f'(c)|\,|\tilde{x}|, and if \tilde{x} \to 0 then the difference \to 0. Here it lets us state
    |\tilde{f}| \le \rho(x,\dot{x},\hat{x},\dot{\hat{x}})\,\left\| [\tilde{x},\ \dot{\tilde{x}}]^T \right\|
Using \dot{\tilde{x}} = s - \tilde{x}, we can then write
    |\tilde{f}| \le \rho_1(\tilde{x},s)\,\left\| [\tilde{x},\ s]^T \right\|
Observers (continued)
For the observer problem, remember we found that \dot{V} = -\tilde{x}^2 - ks^2 + s\tilde{f}, so now
    \dot{V} \le -\tilde{x}^2 - ks^2 + |s|\,\rho_1(\tilde{x},s)\,\|z\|,   z = [\tilde{x},\ s]^T
Let k = k_n + 1. Then
    \dot{V} \le -\|z\|^2 + \big( N|s| - k_n s^2 \big),   where N = \rho_1(\|z\|)\,\|z\|
General approach to find an upper bound for N|s| - k_n s^2 (nonlinear damping).
There are two different cases:
Case 1) k_n|s| \ge N:   N|s| - k_n s^2 \le 0
Case 2) k_n|s| < N:   |s| < N/k_n, so N|s| - k_n s^2 \le N|s| \le \frac{N^2}{k_n}
Either way,
    N|s| - k_n s^2 \le \frac{N^2}{k_n} = \frac{\rho_1^2(\|z\|)\,\|z\|^2}{k_n}
Observers (continued)
So,
    \dot{V} \le -\left(1 - \frac{\rho_1^2(\|z\|)}{k_n}\right)\|z\|^2 \le -\beta\|z\|^2   if k_n \ge \rho_1^2(\|z\|) + \beta
where \beta is a positive constant. Recall that V = \frac{1}{2}\|z\|^2, i.e. \|z\| = \sqrt{2V}. So we can write
    \dot{V} \le -2\beta V   if k_n \ge \rho_1^2(\sqrt{2V}) + \beta
Since V is nonincreasing once the gain condition holds, it suffices to impose it at t = 0:
    V(t) \le V(0)\exp(-2\beta t)   if k_n \ge \rho_1^2\big(\sqrt{2V(0)}\big) + \beta
    \|z(t)\| \le \|z(0)\|\exp(-\beta t)   for k_n \ge \rho_1^2(\|z(0)\|) + \beta
This gives us a semi-global exponential result! Why not global? The gain k_n must be
selected large relative to the initial condition \|z(0)\|.
Observer + Controller
[Block diagram: the plant \dot{y} = g(x, y, u), \dot{x} = f(x, y), in closed loop with a
nonlinear controller that is fed state estimates from a nonlinear observer.]
Combining Observers & Controllers
Can we develop a combined observer/controller for the previous system?
    \ddot{x} = f(x,\dot{x}) + u
In the observer alone, we assumed f(x,\dot{x}) \in \mathcal{L}_\infty if x, \dot{x} \in \mathcal{L}_\infty, but we couldn't measure \dot{x}.
Our control objective is to force x \to x_d when only x is measurable.
The observer/controller is more complex since all signals must be shown to be bounded.
We can choose from two different error systems:
Case 1)  e_1 = x_d - \hat{x},   \dot{e}_1 = \dot{x}_d - \dot{\hat{x}}
Case 2)  e = x_d - x,   \dot{e} = \dot{x}_d - \dot{x}
Let's use Case 1, since e_1 is measurable. With the closed-loop observer
    \dot{\hat{x}} = p + k_{01}\tilde{x},   \dot{p} = \hat{f}(\cdot) + k_{02}\tilde{x} + u
this gives us (injecting a new term p_d, with \tilde{p} = p_d - p)
    \dot{e}_1 = \dot{x}_d - p - k_{01}\tilde{x} = \dot{x}_d - k_{01}\tilde{x} - p_d + \tilde{p}
Combining Observers & Controllers
We are using p_d to facilitate the stability analysis (seen later). Now, given
    \dot{e}_1 = \dot{x}_d - k_{01}\tilde{x} - p_d + \tilde{p}
we can see what would make \dot{e}_1 "nice": if we had \dot{e}_1 = -k_{c1}e_1 + \ldots, then the
Lyapunov term V = \ldots + \frac{1}{2}e_1^2 would contribute \dot{V} = \ldots - k_{c1}e_1^2.
Letting p_d = \dot{x}_d - k_{01}\tilde{x} + k_{c1}e_1 gives
    \dot{e}_1 = -k_{c1}e_1 + \tilde{p}
where \tilde{p} is an "interconnection term".
In this step we have enhanced the role of the observer. Recall that e_1 is our tracking
error. We will see that the observer will act to promote the stability of e_1.
Combining Observers & Controllers (continued)
Recognizing that p = p_d - \tilde{p}, we can write (using the closed-loop observer)
    \dot{\tilde{p}} = \dot{p}_d - \dot{p} = \ddot{x}_d - k_{01}\dot{\tilde{x}} + k_{c1}\dot{e}_1 - \hat{f}(\cdot) - k_{02}\tilde{x} - u
Collect the measurable part as W_1 = \ddot{x}_d + k_{c1}\dot{e}_1 - \hat{f}(\cdot) - k_{02}\tilde{x} (note that
\dot{e}_1 = \dot{x}_d - p - k_{01}\tilde{x} is measurable); the term -k_{01}\dot{\tilde{x}} is not. We can design the
control as
    u = W_1 + k_{c2}\tilde{p} + e_1 + V_{aux}
(feedback plus an "interconnection buster"; V_{aux} is a control input designed during
the stability proof). This yields
    \dot{\tilde{p}} = -k_{c2}\tilde{p} - e_1 - k_{01}\dot{\tilde{x}} - V_{aux}
Taking the controller Lyapunov function V_c = \frac{1}{2}e_1^2 + \frac{1}{2}\tilde{p}^2:
    \dot{V}_c = -k_{c1}e_1^2 + e_1\tilde{p} - k_{c2}\tilde{p}^2 - e_1\tilde{p} - \tilde{p}\big( k_{01}\dot{\tilde{x}} + V_{aux} \big)
             = -k_{c1}e_1^2 - k_{c2}\tilde{p}^2 - \tilde{p}\big( k_{01}\dot{\tilde{x}} + V_{aux} \big)
and the observer Lyapunov function V_o = \frac{1}{2}\tilde{x}^2 + \frac{1}{2}s^2 with \dot{V}_o = -\tilde{x}^2 - ks^2 + s\tilde{f},
the combined Lyapunov function can be written
    V = V_o + V_c = \frac{1}{2}\tilde{x}^2 + \frac{1}{2}s^2 + \frac{1}{2}e_1^2 + \frac{1}{2}\tilde{p}^2
Combining Observers & Controllers (continued)
Now, we can write the derivative of the combined Lyapunov function as
    \dot{V} = -\tilde{x}^2 - ks^2 - k_{c1}e_1^2 - k_{c2}\tilde{p}^2 + s\tilde{f} - \tilde{p}k_{01}\dot{\tilde{x}} - \tilde{p}V_{aux}
The first four terms are good (negative definite); s\tilde{f} and \tilde{p}k_{01}\dot{\tilde{x}} are bad
(unmeasurable mismatch); \tilde{p}V_{aux} is the injected term.
Using the definition of \dot{\tilde{x}}:   \tilde{p}k_{01}\dot{\tilde{x}} = \tilde{p}k_{01}(s - \tilde{x}),
and letting V_{aux} = V_{aux1} + k_{01}\tilde{x} + k_{n1}k_{01}^2\tilde{p} (V_{aux1} will be designed later) gives
    s\tilde{f} - \tilde{p}k_{01}(s - \tilde{x}) - \tilde{p}V_{aux} = s\tilde{f} - \tilde{p}V_{aux1} - k_{01}\tilde{p}s - k_{n1}k_{01}^2\tilde{p}^2
Nonlinear damping on one term:
    |k_{01}\tilde{p}s| - k_{n1}k_{01}^2\tilde{p}^2 \le \frac{s^2}{k_{n1}}
(if k_{n1}k_{01}|\tilde{p}| \ge |s| the left side is \le 0; if k_{n1}k_{01}|\tilde{p}| < |s|, then k_{01}|\tilde{p}| < |s|/k_{n1}).
So, we can write
    \dot{V} \le -\tilde{x}^2 - \left(k - \frac{1}{k_{n1}}\right)s^2 - k_{c1}e_1^2 - k_{c2}\tilde{p}^2 + s\tilde{f} - \tilde{p}V_{aux1}
Done if \tilde{f} = 0. Why not use nonlinear damping on \tilde{f} directly? Because \tilde{f}
depends on the unmeasurable \dot{x}; we bound it next.
Combining Observers & Controllers (continued)
Recall that \tilde{f} = f(x,\dot{x}) - \hat{f}(\hat{x},\dot{\hat{x}}). Let's assume f \in C^1; then
    |\tilde{f}| \le \rho(x,\dot{x},\hat{x},\dot{\hat{x}})\,\|\bar{z}\|,   \bar{z} = [\tilde{x},\ s]^T
It can be shown (rewriting the arguments step by step in terms of x_d, \dot{x}_d and the
states that appear in the Lyapunov function) that
    \rho(x,\dot{x},\hat{x},\dot{\hat{x}}) \le \rho_6(e_1,\tilde{p},\tilde{x},s)
Then we can write
    \dot{V} \le -\tilde{x}^2 - \left(k - \frac{1}{k_{n1}}\right)s^2 - k_{c1}e_1^2 - k_{c2}\tilde{p}^2 + |s|\,\rho_6(e_1,\tilde{p},\tilde{x},s)\,\|z\|
where we let V_{aux1} = 0 (it turns out that we don't need it).
Combining Observers & Controllers (continued)
From the previous slide, with all four states combined into z = [\tilde{x},\ s,\ e_1,\ \tilde{p}]^T:
    \dot{V} \le -\tilde{x}^2 - \left(k - \frac{1}{k_{n1}}\right)s^2 - k_{c1}e_1^2 - k_{c2}\tilde{p}^2 + |s|\,\rho_6(\|z\|)\,\|z\|
If we let k_{c1}, k_{c2} \ge 1 and k - \frac{1}{k_{n1}} \ge k_F + 1, we can write
    \dot{V} \le -\|z\|^2 + \big( |s|\,\rho_6(\|z\|)\,\|z\| - k_F s^2 \big)
(see the nonlinear damping argument):
If k_F|s| \ge \rho_6(\|z\|)\,\|z\|, then |s|\,\rho_6(\|z\|)\,\|z\| - k_F s^2 \le 0.
If k_F|s| < \rho_6(\|z\|)\,\|z\|, then |s| < \rho_6(\|z\|)\,\|z\|/k_F, which gives
    |s|\,\rho_6(\|z\|)\,\|z\| - k_F s^2 \le \frac{\rho_6^2(\|z\|)\,\|z\|^2}{k_F}
Combining Observers & Controllers (continued)
From the previous slide:
    \dot{V} \le -\left(1 - \frac{\rho_6^2(\|z\|)}{k_F}\right)\|z\|^2 \le -\beta\|z\|^2   if k_F \ge \rho_6^2(\|z\|) + \beta
Remembering that V = \frac{1}{2}\|z\|^2, we can say
    \dot{V} \le -2\beta V,   V(t) \le V(0)\exp(-2\beta t)   if k_F \ge \rho_6^2\big(\sqrt{2V(0)}\big) + \beta
Now, we can write
    \|z(t)\| \le \|z(0)\|\exp(-\beta t)   if k_F \ge \rho_6^2(\|z(0)\|) + \beta
Semi-global exponential tracking!
Combining Observers & Controllers (continued)
Remember what goes to zero:
    z = [\tilde{x},\ s,\ e_1,\ \tilde{p}]^T,   \tilde{x} = x - \hat{x},   s = \dot{\tilde{x}} + \tilde{x},   e_1 = x_d - \hat{x},   \tilde{p} = p_d - p
Finally, we have semi-global exponential stability, and we can say
    \hat{x} \to x,   \hat{x} \to x_d,   \dot{\hat{x}} \to \dot{x},   \dot{\hat{x}} \to \dot{x}_d   (occurs exponentially fast!)
so x \to x_d and \dot{x} \to \dot{x}_d.
Recall that you can't measure \rho_6(\cdot), which came from knowing f(x,\dot{x}) \in C^1. Using
the Mean Value Theorem, the fact that something is C^1 tells us that
    h(x) - h(\hat{x}) = \rho(\hat{x},\tilde{x})\,\tilde{x}
For example, x^2 - \hat{x}^2 = (x + \hat{x})\,\tilde{x}.
What is bad about the observer approach? You need to know the function f(\cdot).
Filter Based Control
Assuming velocity is not measurable in a second-order system, build a "stunt double"
for velocity. This solves the same general problem as the estimator, but in a different
way. Assume we have the same system:
    \ddot{x} = f(x,\dot{x}) + u
where only x is measurable and the structure of f(x,\dot{x}) is uncertain. The goal is to
make the tracking error e go to zero without knowing f(\cdot) or \dot{x}.
We will assume that
    |f(x,\dot{x})| \le \rho(x,\dot{x})
where \rho(\cdot) is a positive scalar function. Why couldn't we use this bound in the
control (if we know \rho)? It depends on \dot{x} (which we don't know)!
Example:
    f(x,\dot{x}) = a\,\dot{x}^2\cos(x)   \Rightarrow   |f(x,\dot{x})| \le \bar{a}\,\dot{x}^2 = \rho(\dot{x})
The inequality is true, but \rho(\cdot) depends on \dot{x}. In the analysis we only use the
fact that \rho(\cdot) exists.
Filtering Control (continued)
Let's define the following (\eta is similar to the filtered tracking error r that we used
earlier, but is not measurable):
    e = x_d - x   \Rightarrow   \ddot{e} = \ddot{x}_d - f(x,\dot{x}) - u
(Why the 2nd derivative? We need it for the control.) We will need e and \dot{e}, but \dot{e} is
not measurable. So we come up with another variable, a filter output e_f, to help us
with the \dot{e} problem, and define
    \eta = \dot{e} + e - e_f
We now have three error systems:
error system 1)  \dot{\eta} = \ddot{e} + \dot{e} - \dot{e}_f = \ddot{x}_d - f(x,\dot{x}) - u + \dot{e} - \dot{e}_f
error system 2)  \dot{e}_f = -e_f + k\eta - e   (this is the filter design)
error system 3)  \dot{e} = \eta - e + e_f   (from the definition of \eta(t))
Our next step is to develop a Lyapunov candidate:
    V = \frac{1}{2}e^2 + \frac{1}{2}e_f^2 + \frac{1}{2}\eta^2
Since \eta is not measurable (due to the fact that \dot{e} is not measurable), we cannot use
it in the control, and the filter above cannot be implemented in this form since it
contains \eta. Later we show that e_f is measurable.
Filtering Control (continued)
Taking the derivative of our Lyapunov candidate (substituting \dot{e} and \dot{e}_f) gives
    \dot{V} = e(\eta - e + e_f) + e_f(-e_f + k\eta - e) + \eta\big( \ddot{x}_d - f(x,\dot{x}) - u + \dot{e} - \dot{e}_f \big)
The part \ddot{x}_d - f(x,\dot{x}) - u is the only part contributed by the system; everything
else comes from the filter design and the error definitions. Collecting terms:
    \dot{V} = -e^2 - e_f^2 - (k-1)\eta^2 + \eta\big( \ddot{x}_d - f(x,\dot{x}) - u + (k+2)e_f + e \big)
Is e_f measurable? Substituting \eta = \dot{e} + e - e_f into the filter gives
    \dot{e}_f = -e_f + k\eta - e = -(k+1)e_f + k\dot{e} + (k-1)e
which contains \dot{e}, so it can't be implemented in this form. Let's develop a new
variable p, where
    e_f = p + \mathrm{function}_2,   \dot{p} = \mathrm{function}_1
so that e_f can be computed without \dot{e}. Differentiating e_f = p + \mathrm{function}_2 and
matching terms shows that
    \mathrm{function}_1 = -(k+1)e_f + (k-1)e,   \mathrm{function}_2 = ke
Filtering Control (continued)
Aside: a high-pass filter H(s) = \frac{s}{s+a} has |H(j\omega)| \approx \frac{\omega}{a} for \omega \ll a,
i.e. H(s) \approx \frac{s}{a}, which means the filter acts as a differentiator over a certain
range of frequencies.
From the previous slide:
    \dot{e}_f = -(k+1)e_f + k\dot{e} + (k-1)e
Laplace transform:
    sE_f = -(k+1)E_f + ksE + (k-1)E   \Rightarrow   E_f = \frac{ks + (k-1)}{s + (k+1)}\,E
The filter output e_f acts as a (scaled) differentiator of e over a certain range of
frequencies.
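The claim that e_f can be generated from e alone (via the auxiliary variable p) can be checked numerically. A minimal sketch, assuming the filter form \dot{e}_f = -(k+1)e_f + k\dot{e} + (k-1)e reconstructed above; the test signal e(t) = sin(t) and gain are arbitrary:

```python
import numpy as np

# Reference filter (uses edot):  ef' = -(k+1)*ef + k*edot + (k-1)*e
# p-form (uses only e):          ef = p + k*e,   p' = -(k+1)*ef + (k-1)*e
k, dt, T = 5.0, 1e-4, 5.0
t = 0.0
p = -k * np.sin(t)      # chosen so ef(0) = 0
ef_ref = 0.0            # reference: integrate the edot-based form directly
for _ in range(int(T / dt)):
    e, edot = np.sin(t), np.cos(t)   # edot is used only to build the reference
    ef = p + k * e
    p += dt * (-(k + 1.0) * ef + (k - 1.0) * e)
    ef_ref += dt * (-(k + 1.0) * ef_ref + k * edot + (k - 1.0) * e)
    t += dt
mismatch = abs((p + k * np.sin(t)) - ef_ref)
```

Both signals satisfy the same differential equation, so they agree up to integration error.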
Filtering Control (continued)
Now we know
    \dot{p} = -(k+1)e_f + (k-1)e,   e_f = p + ke
So e_f is measurable, which leaves f(x,\dot{x}) and \eta as the unmeasurable variables in
    \dot{V} = -e^2 - e_f^2 - (k-1)\eta^2 + \eta\big( \ddot{x}_d - f(x,\dot{x}) - u + (k+2)e_f + e \big)
We design the control:
    u = \ddot{x}_d + (k+2)e_f + e,   and let k = k_n + 2
Now,
    \dot{V} = -e^2 - e_f^2 - \eta^2 - k_n\eta^2 - \eta f(x,\dot{x})
We will use the k_n\eta^2 feedback term to deal with the unknown, unmeasurable
function f(\cdot). Also, we define
    \tilde{f} = f(x_d,\dot{x}_d) - f(x,\dot{x}),   f_d = f(x_d,\dot{x}_d)   (note f_d is bounded a priori)
By the Mean Value Theorem, since f \in C^1 (a new approach to bounding \tilde{f}):
    |\tilde{f}| \le \rho_2(e,\dot{e})\,\left\| [e,\ \dot{e}]^T \right\|
Filtering Control (continued)
From the previous slide, |\tilde{f}| \le \rho_2(e,\dot{e})\,\|[e,\ \dot{e}]^T\|. Let's come up with a new
variable z = [e,\ e_f,\ \eta]^T. Because \dot{e} = \eta - e + e_f, we know
    |\tilde{f}| \le \rho_3(\|z\|)\,\|z\|
Our Lyapunov function V = \frac{1}{2}e^2 + \frac{1}{2}e_f^2 + \frac{1}{2}\eta^2 = \frac{1}{2}z^T z has derivative
(adding and subtracting f_d = f(x_d,\dot{x}_d) and using the definition of \tilde{f})
    \dot{V} = -z^T z - k_n\eta^2 - \eta(f_d - \tilde{f}) \le -z^T z - k_n\eta^2 + |\eta|\,\rho_3(\|z\|)\,\|z\| + |\eta|\,\zeta_d
where |f_d| \le \zeta_d. Letting k_n = k_{n1} + k_{n2} and applying nonlinear damping to each
term allows us to write
    \dot{V} \le -z^T z + \frac{\rho_3^2(\|z\|)\,\|z\|^2}{k_{n1}} + \frac{\zeta_d^2}{k_{n2}}
Filtering Control (continued)
As seen on the previous slide,
    \dot{V} \le -z^T z + \frac{\rho_3^2(\|z\|)\,\|z\|^2}{k_{n1}} + \frac{\zeta_d^2}{k_{n2}}
(nonlinear damping on one term: given a|x||y| - ab\,x^2 with a, b > 0, if ab|x| \ge |y|
the expression is \le 0, and otherwise it is \le a\,y^2/b). So we can write
    \dot{V} \le -\left(1 - \frac{\rho_3^2(\|z\|)}{k_{n1}}\right)\|z\|^2 + \epsilon \le -\beta\|z\|^2 + \epsilon,   \epsilon = \frac{\zeta_d^2}{k_{n2}}
if k_{n1} \ge \rho_3^2(\|z\|) + \beta. Noting that V = \frac{1}{2}z^T z, this yields
    \dot{V} \le -2\beta V + \epsilon(t),   \epsilon(t) \ge 0
The proof to follow shows Semi-Global Uniformly Ultimately Bounded (SGUUB) tracking.
Filtering Control (continued)
Continuing from the previous slide:
    \dot{V} \le -2\beta V + \epsilon(t),   \epsilon(t) \ge 0
Solving the differential inequality gives
    V(t) \le \exp(-2\beta t)\,V(0) + \exp(-2\beta t)\int_0^t \epsilon(\tau)\exp(2\beta\tau)\,d\tau
         \le \exp(-2\beta t)\,V(0) + \frac{\epsilon}{2\beta}\big( 1 - \exp(-2\beta t) \big)
using the constant bound \epsilon(\tau) \le \epsilon.
Filtering Control (continued)
Continuing from the previous slide:
    V(t) \le \exp(-2\beta t)\,V(0) + \frac{\epsilon}{2\beta}\big( 1 - \exp(-2\beta t) \big)   if k_{n1} \ge \rho_3^2(\|z\|) + \beta
So V(t) is bounded such that V(t) \le V(0) + \frac{\epsilon}{2\beta}; choosing
    k_{n1} \ge \rho_3^2\!\left( \sqrt{2V(0) + \epsilon/\beta} \right) + \beta
keeps the gain condition valid for all time. We can then write
    \|z(t)\|^2 \le \|z(0)\|^2\exp(-2\beta t) + \frac{\epsilon}{\beta}\big( 1 - \exp(-2\beta t) \big)
So we have semi-global uniform ultimate boundedness. We can easily show that all
signals are bounded. We don't show that e goes to zero; we only show that it can be
made smaller by choice of gains. Specifically, decrease \epsilon = \zeta_d^2/k_{n2} by increasing k_{n2}.
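The decay-plus-offset bound used here can be illustrated numerically. A sketch with arbitrary \beta, \epsilon values, simulating the worst case \dot{V} = -2\beta V + \epsilon:

```python
import numpy as np

# Worst case Vdot = -2*beta*V + eps; closed form:
#   V(t) = V0*exp(-2*beta*t) + (eps/(2*beta))*(1 - exp(-2*beta*t))
beta, eps, V0, dt, T = 1.0, 0.5, 3.0, 1e-4, 8.0
V = V0
for _ in range(int(T / dt)):
    V += dt * (-2.0 * beta * V + eps)
closed_form = V0 * np.exp(-2 * beta * T) + (eps / (2 * beta)) * (1 - np.exp(-2 * beta * T))
ultimate_bound = eps / (2 * beta)   # V settles near eps/(2*beta), not at zero
```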
Adaptive Approach
Reconsider the previous system and error systems:
    \ddot{x} = f(x,\dot{x}) + u,   \ddot{e} = \ddot{x}_d - f(x,\dot{x}) - u
    \eta = \dot{e} + e - e_f,   \dot{e}_f = -e_f + k\eta - e,   \dot{e} = \eta - e + e_f
    \dot{\eta} = \ddot{x}_d - f(x,\dot{x}) - u + \dot{e} - \dot{e}_f
Let
    u = \ddot{x}_d + (k+2)e_f + e + u_{ff}
where u_{ff} is a feedforward term, which was not included in our previous control.
This gives the same closed-loop terms as before, plus -\eta u_{ff} in the Lyapunov derivative.
Adaptive Approach (continued)
Consider the Lyapunov candidate
    V = \frac{1}{2}\|z\|^2,   z = [e,\ e_f,\ \eta]^T
which gives
    \dot{V} = -e^2 - e_f^2 - (k-1)\eta^2 - \eta\big( f(x,\dot{x}) + u_{ff} \big)
Assume f(x,\dot{x}) = W(x,\dot{x})\theta   (linearly parameterizable, LP).
We now write
    -\eta\big( f(x,\dot{x}) + u_{ff} \big) = -\eta\big( f(x,\dot{x}) - f(x_d,\dot{x}_d) \big) - \eta\big( f(x_d,\dot{x}_d) + u_{ff} \big)
Recall that \tilde{f} = f(x,\dot{x}) - f(x_d,\dot{x}_d). If we can show |\tilde{f}| \le \rho(\cdot)\,\|z\|, and we let
    u_{ff} = -W(x_d,\dot{x}_d)\hat{\theta}
then
    -\eta\big( f(x,\dot{x}) + u_{ff} \big) = -\eta\tilde{f} - \eta W(x_d,\dot{x}_d)\tilde{\theta},   where \tilde{\theta} = \theta - \hat{\theta}
Adaptive Approach (continued)
Now, consider the Lyapunov candidate:
    V = \frac{1}{2}z^T z + \frac{1}{2}\tilde{\theta}^T\Gamma^{-1}\tilde{\theta},   z = [e,\ e_f,\ \eta]^T,   \tilde{\theta} = \theta - \hat{\theta}
    \dot{V} = -e^2 - e_f^2 - (k-1)\eta^2 - \eta\tilde{f} - \eta W(x_d,\dot{x}_d)\tilde{\theta} - \tilde{\theta}^T\Gamma^{-1}\dot{\hat{\theta}}
Our system can now be written
    \ddot{x} = W(x,\dot{x})\theta + u,   where we assume W(x,\dot{x}) \in C^1
    u = \ddot{x}_d + (k+2)e_f + e - W(x_d,\dot{x}_d)\hat{\theta}
We know that |\tilde{f}| \le \rho(x_d,\dot{x}_d,\|z\|)\,\|z\| is true since
    \tilde{f} = \big( W(x,\dot{x}) - W(x_d,\dot{x}_d) \big)\theta   and   W(x,\dot{x}) \in C^1
As before, e_f is implemented with the variable p:
    \dot{p} = -(k+1)e_f + (k-1)e,   e_f = p + ke
Adaptive Approach (continued)
If we let k = k_n + 2, then
    \dot{V} \le -z^T z - k_n\eta^2 + |\eta|\,\rho(\cdot)\,\|z\| - \eta W(x_d,\dot{x}_d)\tilde{\theta} - \tilde{\theta}^T\Gamma^{-1}\dot{\hat{\theta}}
The update law we would like, \dot{\hat{\theta}} = -\Gamma W^T(x_d,\dot{x}_d)\eta, is NOT measurable
(it contains \eta). We address this below. With it, nonlinear damping gives
    \dot{V} \le -\|z\|^2 + \frac{\rho^2(\cdot)\,\|z\|^2}{k_n}
To implement the update law we use integration by parts. Since \eta = \dot{e} + e - e_f,
    \hat{\theta}(t) = -\Gamma\int_0^t W(x_d,\dot{x}_d)\big( \dot{e} + e - e_f \big)\,d\tau   (\tau is just a dummy variable)
where e and e_f are measurable. The only unmeasurable part is
    L_1 = \int_0^t W(x_d,\dot{x}_d)\,\frac{de}{d\tau}\,d\tau = W(x_d,\dot{x}_d)\,e\,\Big|_0^t - \int_0^t \dot{W}(x_d,\dot{x}_d)\,e\,d\tau
Adaptive Approach (continued)
As seen on the previous slide:
    L_1 = W(x_d,\dot{x}_d)\,e\,\Big|_0^t - \int_0^t \dot{W}(x_d,\dot{x}_d)\,e\,d\tau
Finally, since \dot{W}(x_d,\dot{x}_d) = \frac{\partial W}{\partial(x_d,\dot{x}_d)}\,(\dot{x}_d,\ddot{x}_d) depends only on the
desired trajectory,
    L_1 = W(x_d,\dot{x}_d)\,e - W\big( x_d(0),\dot{x}_d(0) \big)\,e(0) - \int_0^t \dot{W}(x_d,\dot{x}_d)\,e\,d\tau
is measurable. The adaptive update law can now be implemented, and then we can say
    \dot{V} \le -\beta\|z\|^2   if k_n \ge \rho^2\big(\sqrt{2V(0)}\big) + \beta
Our result is semi-global asymptotic. Why is it not exponential? V has more terms in
it than just z (the parameter error \tilde{\theta} does not converge to zero).
Adaptive Approach (continued)
As seen on the previous slide, \dot{V} \le -\beta\|z\|^2 for k_n large enough.
It can be shown that z, \dot{z} \in \mathcal{L}_\infty (i.e. e, \dot{e}, e_f, \dot{e}_f, \eta, \dot{\eta}, \tilde{\theta}, \dot{\hat{\theta}} \in \mathcal{L}_\infty).
Why do we care if \dot{z} \in \mathcal{L}_\infty? Together with z \in \mathcal{L}_2, it gives (Barbalat's lemma)
\lim_{t\to\infty} z(t) = 0. Remember, z has e, e_f, and \eta in it, so they go to zero also.
This has been an example of output feedback adaptive control. It gave us semi-global
asymptotic tracking.
Why didn't we use an observer (we used a filter)? We don't have exact model
knowledge (there is uncertainty in the model)!
Variable Structure Observer
Consider the system:
    \ddot{x} = h(x,\dot{x}) + G(x,\dot{x})u,   where we observe \dot{x} with only measurements of x.
We also make the assumption that x, \dot{x}, \ddot{x}, u, \dot{u}, h(x,\dot{x}), G(x,\dot{x}) \in \mathcal{L}_\infty, where
h(x,\dot{x}), G(x,\dot{x}) \in C^1 and are uncertain. Why do we make the assumption about
boundedness? We want to build \dot{\hat{x}}, so we want to ensure that \dot{\hat{x}} \to \dot{x}.
For our problem, we define \tilde{x} = x - \hat{x} and \dot{\tilde{x}} = \dot{x} - \dot{\hat{x}}.
Let \dot{\hat{x}} = p + k_o\tilde{x}, where \dot{p} = k_1\,\mathrm{sgn}(\tilde{x}) + k_2\tilde{x}. Then the observer is
    \ddot{\hat{x}} = k_1\,\mathrm{sgn}(\tilde{x}) + k_2\tilde{x} + k_o\dot{\tilde{x}}
and the observation error system is
    \ddot{\tilde{x}} = h(x,\dot{x}) + G(x,\dot{x})u - k_1\,\mathrm{sgn}(\tilde{x}) - k_2\tilde{x} - k_o\dot{\tilde{x}}
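A scalar numerical sketch of this observer (gain choices are mine; the plant h(x, \dot{x}) = -x - 0.5\dot{x} with u = 0 is a stand-in with bounded trajectories, so the dominance condition on k_1 holds; Euler integration makes the sgn term chatter slightly):

```python
import numpy as np

# Observer:  xhat' = p + k0*xtilde,   p' = k1*sgn(xtilde) + k2*xtilde
k0, k1, k2 = 6.0, 5.0, 5.0   # k2 = k0 - 1, per the r-dynamics construction
dt, T = 1e-4, 10.0
x, xd = 1.0, 0.0             # true states (velocity unmeasured)
xhat, p = 0.0, 0.0           # observer states
for _ in range(int(T / dt)):
    xt = x - xhat
    xhat_dot = p + k0 * xt
    p += dt * (k1 * np.sign(xt) + k2 * xt)
    xhat += dt * xhat_dot
    xdd = -x - 0.5 * xd      # plant: xddot = h(x, xdot), u = 0
    x += dt * xd
    xd += dt * xdd
pos_err = abs(x - xhat)
vel_err = abs(xd - (p + k0 * (x - xhat)))
```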
Variable Structure Observer (continued)
Let's create a new variable r = \dot{\tilde{x}} + \tilde{x}, so
    \dot{r} = h(x,\dot{x}) + G(x,\dot{x})u - k_1\,\mathrm{sgn}(\tilde{x}) - k_2\tilde{x} - (k_o - I)\dot{\tilde{x}}
Let k_2 = k_o - I (so the \tilde{x} and \dot{\tilde{x}} terms combine into r). Now we can write
    \dot{r} = N_o(x,\dot{x},t) - k_1\,\mathrm{sgn}(\tilde{x}) - k_2 r,   where N_o = h(x,\dot{x}) + G(x,\dot{x})u
We can let our Lyapunov function be
    V_o = \frac{1}{2}r^T r + P_o(t),   P_o(t) = b_o - \int_{t_0}^{t} L_o(\tau)\,d\tau,   L_o(t) = r^T\big( N_o - k_1\,\mathrm{sgn}(\tilde{x}) \big)
(we must prove that P_o(t) \ge 0). So we can now write
    \dot{V}_o = r^T\big( N_o - k_2 r - k_1\,\mathrm{sgn}(\tilde{x}) \big) - r^T\big( N_o - k_1\,\mathrm{sgn}(\tilde{x}) \big)
Variable Structure Observer (continued)
From the previous slide:
    \dot{V}_o = r^T\big( N_o - k_2 r - k_1\,\mathrm{sgn}(\tilde{x}) \big) - r^T\big( N_o - k_1\,\mathrm{sgn}(\tilde{x}) \big) = -r^T k_2 r
Using the Rayleigh-Ritz theorem lets us write
    \dot{V}_o \le -\lambda_{min}\{k_2\}\,\|r\|^2
So V_o \ge 0 and \dot{V}_o \le -g(t), where g(t) = \lambda_{min}\{k_2\}\,r^T r \ge 0. Since
\dot{g}(t) = 2\lambda_{min}\{k_2\}\,r^T\dot{r} \in \mathcal{L}_\infty, we get \lim_{t\to\infty} g(t) = 0.
Therefore r, \dot{r} \in \mathcal{L}_\infty and r \to 0, hence \tilde{x}, \dot{\tilde{x}} \to 0!
But we must show that P_o(t) \ge 0, which requires
    k_{1i} > |N_{oi}| + |\dot{N}_{oi}|,   where i denotes the i-th component for vectors
Variable Structure Observer (continued)
So our task is to prove that
    \int_{t_0}^{t} L_o(\tau)\,d\tau \le b_o
Useful math notes:
    \int y\dot{y}\,d\tau = \tfrac{1}{2}y^2\,\Big|,   \frac{d}{dt}|y| = \dot{y}\,\mathrm{sgn}(y),   \int \dot{y}\,d\tau = y\,\Big|
Let M = \int_{t_0}^{t} L_o\,d\tau and substitute r = \dot{\tilde{x}} + \tilde{x}:
    M = \int_{t_0}^{t} \frac{d\tilde{x}^T}{d\tau}\big( N_o - k_1\,\mathrm{sgn}(\tilde{x}) \big)\,d\tau + \int_{t_0}^{t} \tilde{x}^T\big( N_o - k_1\,\mathrm{sgn}(\tilde{x}) \big)\,d\tau
Integrate the first integral by parts:
    \int_{t_0}^{t} \frac{d\tilde{x}^T}{d\tau}\,N_o\,d\tau = \tilde{x}^T N_o\,\Big|_{t_0}^{t} - \int_{t_0}^{t} \tilde{x}^T\frac{dN_o}{d\tau}\,d\tau
    \int_{t_0}^{t} \frac{d\tilde{x}^T}{d\tau}\,k_1\,\mathrm{sgn}(\tilde{x})\,d\tau = \sum_{i=1}^{n} k_{1i}|\tilde{x}_i(t)| - \sum_{i=1}^{n} k_{1i}|\tilde{x}_i(t_0)|
Variable Structure Observer (continued)
Continuing from the previous slide:
    M = \int_{t_0}^{t} \tilde{x}^T\left( N_o - \frac{dN_o}{d\tau} - k_1\,\mathrm{sgn}(\tilde{x}) \right)d\tau
        + \tilde{x}^T(t)N_o(t) - \tilde{x}^T(t_0)N_o(t_0) - \sum_{i=1}^{n} k_{1i}|\tilde{x}_i(t)| + \sum_{i=1}^{n} k_{1i}|\tilde{x}_i(t_0)|
Since \tilde{x}^T N_o \le \sum_i |\tilde{x}_i|\,|N_{oi}| and k_{1i} > |N_{oi}| + |\dot{N}_{oi}|, the integrand and the
boundary terms at time t are nonpositive, which gives
    M \le \sum_{i=1}^{n} k_{1i}|\tilde{x}_i(t_0)| + \big| \tilde{x}^T(t_0)N_o(t_0) \big|
So, if we define b_o = \sum_{i=1}^{n} k_{1i}|\tilde{x}_i(t_0)| + |\tilde{x}^T(t_0)N_o(t_0)| \ge M, then P_o \ge 0.
Notice that u is not designed in this observer; so we can't exploit it for a controller!
Filtering Control, Revisited
Let's consider the following system:
    M(x)\ddot{x} + f(x,\dot{x}) = u,   x is measurable
Assumptions:
    M(x), f(x,\dot{x}) \in C^2
    M(x), \dot{M}(x) \in \mathcal{L}_\infty if x, \dot{x} \in \mathcal{L}_\infty
    f(x,\dot{x}), \dot{f}(x,\dot{x}) \in \mathcal{L}_\infty if x, \dot{x}, \ddot{x} \in \mathcal{L}_\infty
    M_1 \le M(x) \le M_2(x)   (upper and lower bounded)
Let e = x_d - x, and let the control be
    u = k_1\,\mathrm{sgn}(e + e_f) + (k_2 + 1)r_f + e
Let our error system be defined by three equations (crafted to make the analysis work):
error system 1)  \dot{e} = -e + r_f + \eta
error system 2)  \dot{r}_f = -r_f + (k_2 + 1)\eta - e + e_f
error system 3)  \dot{e}_f = -e_f - r_f
Where did \eta come from? We invented it.
Filtering Control, Revisited (continued)
We define the implementable filter
    r_f = p + (k_2 + 1)e,   \dot{p} = -(k_2 + 2)r_f + k_2 e + e_f
(differentiating r_f = p + (k_2+1)e and substituting the error systems recovers
\dot{r}_f = -r_f + (k_2+1)\eta - e + e_f, so r_f is computable without \dot{e}).
The variable \eta is designed so that, after multiplying through by M(x),
    M(x)\dot{\eta} = M(x)\big( \ddot{x}_d + 2r_f - e_f \big) - k_2 M(x)\eta + f(x,\dot{x}) - u
               = -k_2 M(x)\eta + N(x,\dot{x},t) + M(x)(2r_f - e_f) - u
where N = M(x)\ddot{x}_d + f(x,\dot{x}). Then, if we add and subtract
N_d = N(x,\dot{x},t)\big|_{x = x_d,\ \dot{x} = \dot{x}_d} (which is bounded a priori), we get
    M(x)\dot{\eta} = -k_2 M(x)\eta + \tilde{N} + N_d - u - \tfrac{1}{2}\dot{M}(x)\eta
where \tilde{N} = N - N_d + M(x)(2r_f - e_f) + \tfrac{1}{2}\dot{M}(x)\eta. We can now put in our control:
    M(x)\dot{\eta} = -k_2 M(x)\eta + \tilde{N} + N_d - k_1\,\mathrm{sgn}(e + e_f) - (k_2 + 1)r_f - e - \tfrac{1}{2}\dot{M}(x)\eta
Filtering Control, Revisited (continued)
As seen on the previous slide:
    \tilde{N} = N - N_d + M(x)(2r_f - e_f) + \tfrac{1}{2}\dot{M}(x)\eta
We can show
    |\tilde{N}| \le \rho(\|z\|)\,\|z\|,   z = [e,\ e_f,\ r_f,\ \eta]^T
Our next step is to use the Lyapunov function
    V = \frac{1}{2}M(x)\eta^2 + \frac{1}{2}e_f^2 + \frac{1}{2}r_f^2 + \frac{1}{2}e^2
where taking the derivative yields
    \dot{V} = \frac{1}{2}\dot{M}(x)\eta^2 + M(x)\eta\dot{\eta} + e_f\dot{e}_f + r_f\dot{r}_f + e\dot{e}
Filtering Control, Revisited (continued)
Continuing from the previous slide: substituting the error systems and the \eta-dynamics,
the cross terms and the \tfrac{1}{2}\dot{M}\eta^2 terms cancel (that is what the error systems
were crafted for), leaving
    \dot{V} \le -e^2 - e_f^2 - r_f^2 - k_2 M(x)\eta^2 + |\eta|\,\rho(\|z\|)\,\|z\| + \eta\big( N_d - k_1\,\mathrm{sgn}(e + e_f) \big)
where M_1 \le M(x) \le M_2(x). Let k_2 = \frac{1}{M_1}(k_n + 1). Then we can write
    \dot{V} \le -\|z\|^2 + \big( |\eta|\,\rho(\|z\|)\,\|z\| - k_n\eta^2 \big) + \eta\big( N_d - k_1\,\mathrm{sgn}(e + e_f) \big)
and nonlinear damping gives
    \dot{V} \le -\left( 1 - \frac{\rho^2(\|z\|)}{k_n} \right)\|z\|^2 + \eta\big( N_d - k_1\,\mathrm{sgn}(e + e_f) \big)
Filtering Control, Revisited (continued)
From the previous slide:
    \dot{V} \le -\left( 1 - \frac{\rho^2(\|z\|)}{k_n} \right)\|z\|^2 + L(t),   L(t) = \eta\big( N_d - k_1\,\mathrm{sgn}(e + e_f) \big)
Keep in mind that
    \frac{1}{2}\min\{M_1, 1\}\,\|z\|^2 \le V \le \frac{1}{2}\max\{M_2(x), 1\}\,\|z\|^2
Let V_{new} = V + P(t), where P(t) = b - \int_0^t L(\tau)\,d\tau and b is a constant chosen so
that P(t) \ge 0. We have \dot{V}_{new} = \dot{V} - L(t); so
    \dot{V}_{new} \le -\left( 1 - \frac{\rho^2(\|z\|)}{k_n} \right)\|z\|^2
Filtering Control, Revisited (continued)
We have, with \phi = e + e_f (note \dot{\phi} + \phi = \dot{e} + e + \dot{e}_f + e_f = \eta from the error systems),
    P(t) = b - \int_0^t \big( \dot{\phi} + \phi \big)^T\big( N_d - k_1\,\mathrm{sgn}(\phi) \big)\,d\tau
We've done this integration-by-parts argument before; the work is done! We know from
previous results that P \ge 0 if
    k_1 > |N_d| + |\dot{N}_d|,   with b = k_1|\phi(t_0)| + |\phi(t_0)|\,|N_d(t_0)|
Now we have to complete the proof:
    \frac{1}{2}\min\{M_1, 1\}\,\|y\|^2 \le V_{new} \le \frac{1}{2}\max\{M_2, 1\}\,\|y\|^2,   where y = [z^T,\ \sqrt{P}]^T
We can then say
    \dot{V}_{new} \le -\beta\|z\|^2   for k_n \ge \rho^2(\|z\|) + \beta
Filtering Control, Revisited (continued)
Continuing from the previous slide (imposing the gain condition at the initial condition):
    \dot{V}_{new} \le -\beta\|z\|^2   for k_n \ge \rho^2\big( \sqrt{2V_{new}(0)/\min\{M_1,1\}} \big) + \beta
So V_{new} \ge 0 and \dot{V}_{new} \le -g(t), where g(t) = \beta z^T z \ge 0. Since \dot{g}(t) = 2\beta z^T\dot{z} \in \mathcal{L}_\infty,
we know \lim_{t\to\infty} g(t) = 0.
Therefore z \to 0, i.e. e, \dot{e}, e_f, r_f, \eta \to 0.
Summary
Control Design Framework:
V = special function of everything we want to go to zero:
  - State error (from zero equilibrium)
  - Tracking error
  - Filtered tracking error (r): a trick to convert a 2nd-order system into a
    1st-order system (can be used with other controls)
  - Parameter estimation error
  - State estimation error
\dot{V} = derivative of the special function
       = (dynamics of everything we want to go to zero) + control input
Approaches covered:
  - Feedback linearization (simplest case, uses exact model knowledge)
  - Adaptive control
  - Observer
  - Filter
Summary
System: \ddot{x} = f(x) + u
Let the tracking error e be defined as
    e = x_d - x,   \dot{e} = \dot{x}_d - \dot{x},   \ddot{e} = \ddot{x}_d - f(x) - u
1) Control objective: make x \to x_d (x_d is a desired trajectory), assuming x_d, \dot{x}_d, \ddot{x}_d \in \mathcal{L}_\infty.
2) Hidden control objective: keep everything bounded (i.e., x, \dot{x}, u \in \mathcal{L}_\infty).
Design a controller based on the tracking error dynamics.
Note that if x_d = constant (an equilibrium point) and u = 0, then the system reduces to
\ddot{x} = f(x), and the basic Lyapunov stability analysis tools (Chapter 3) can be used.
Homework A.0
1. For the example on Slide 9, simulate the system in Simulink. Be sure to plot the
evolution of the states.
    \dot{x}_1 = x_2,   \dot{x}_2 = -k x_1,   k = \begin{cases} 1, & x_1 s > 0 \\ -1, & x_1 s < 0 \end{cases},   where s = x_1 + x_2
Homework A.1
1. Design a controller for the following system so that q tracks q_d = \cos(t):
    \dot{q} = aq + u
where a is a known constant. Simulate the system for a = 1; plot the state and control.
(Known structure and known parameters -> exact model knowledge control.)
2. Design a controller for the following system so that q tracks q_d = \cos(t):
    \dot{q} = aq + u
where a is an unknown constant. Simulate the system for a = 1; plot the state, control,
and the parameter estimate.
(Known structure but unknown parameters -> adaptive.)
3. Design a robust controller for the following system so that q tracks q_d = \cos(t):
    \dot{q} = aq + u
where a is an unknown constant, but you do know |a| < \bar{a}; then |aq| < \bar{a}(q^2 + 1).
Simulate the system for a = 1 and \bar{a} = 3; plot the state and control, comparing the
V_{R1}, V_{R2}, V_{R3} controllers.
(Partially known structure (unknown component) -> robust.)
4. Design a learning controller for the following system so that q tracks q_d = \cos(t):
    \dot{q} = aq + u
where a is an unknown constant. Simulate the system for a = 1. Plot the state q(t),
tracking error, and control signal u(t).
(Partially known structure (unknown component), repetitive task -> learning.)
Homework A.1-1 (sol)
1. Design a controller for \dot{q} = aq + u so that q tracks q_d = \cos(t), with a known.
Simulate for a = 1; plot the state and control.
    e = q_d - q = \cos(t) - q
    \dot{e} = -\sin(t) - \dot{q} = -\sin(t) - aq - u
Design u = -\sin(t) - aq + ke. Closed-loop system:
    \dot{q} = -\sin(t) + k\cos(t) - kq,   \dot{e} = -ke
    V = \tfrac{1}{2}e^2,   \dot{V} = e\dot{e} = -ke^2
V is PD and radially unbounded, \dot{V} is ND  =>  e \to 0.
e \to 0, q \to q_d; \cos(t) bounded  =>  q is bounded.
e \to 0, q bounded, \sin(t) bounded  =>  u is bounded.
q bounded, u bounded  =>  \dot{q} is bounded.
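The homework asks for a Simulink simulation; the same check can be sketched in a few lines of Python (Euler integration, k = 1):

```python
import numpy as np

# Exact-model-knowledge design for qdot = a*q + u:
#   u = -sin(t) - a*q + k*e,  e = cos(t) - q  =>  edot = -k*e
a, k, dt, T = 1.0, 1.0, 1e-4, 10.0
q, t = 0.0, 0.0
for _ in range(int(T / dt)):
    e = np.cos(t) - q
    u = -np.sin(t) - a * q + k * e
    q += dt * (a * q + u)
    t += dt
final_err = abs(np.cos(t) - q)   # e(t) = e(0)*exp(-k*t), so this is tiny
```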
Homework A.1-1 (sol)
[Plots: exact model knowledge controller with k = 1 and k = 10; top panels show q
tracking q_d over 0 to 10 s, bottom panels show the control u.]
Homework A.1-2 (sol)
1. Exact model control (closed loop, for comparison):
    \dot{q} = aq - aq - \sin(t) + k\cos(t) - kq   (exactly cancels aq)
Homework A.1-2 (sol)
2. Design a controller for \dot{q} = aq + u so that q tracks q_d = \cos(t), with a an unknown
constant. Simulate for a = 1; plot the state, control, and the parameter estimate.
    e = q_d - q = \cos(t) - q
    \dot{e} = -\sin(t) - \dot{q} = -\sin(t) - aq - u = -\sin(t) - W\theta - u
where W = [q] and \theta = [a].
    V = \tfrac{1}{2}e^2 + \tfrac{1}{2}\gamma^{-1}\tilde{\theta}^2,   \tilde{\theta} = \theta - \hat{\theta}
    \dot{V} = e\dot{e} - \gamma^{-1}\tilde{\theta}\dot{\hat{\theta}} = e\big( -\sin(t) - W\theta - u \big) - \gamma^{-1}\tilde{\theta}\dot{\hat{\theta}}
Design u = -\sin(t) - W\hat{\theta} + ke:
    \dot{V} = -ke^2 - eW\tilde{\theta} - \gamma^{-1}\tilde{\theta}\dot{\hat{\theta}}
Design \dot{\hat{\theta}} = -\gamma W e = -\gamma q e, i.e. \hat{\theta} = -\gamma\int_0^t q\big( \cos(\tau) - q \big)\,d\tau:
    \dot{V} = -ke^2
V is PD and radially unbounded, \dot{V} is NSD  =>  e and \tilde{\theta} are bounded.
e bounded, q = \cos(t) - e  =>  q is bounded.
Closed-loop error system: \dot{e} = -ke - W(q)\tilde{\theta}; e, \tilde{\theta}, q bounded  =>  \dot{e} is bounded.
\ddot{V} = -2ke\dot{e} and e, \dot{e} bounded  =>  \ddot{V} bounded  =>  \dot{V} uniformly continuous  =>
(Barbalat)  \dot{V} \to 0  =>  e \to 0.
e \to 0, q bounded, \hat{\theta} bounded  =>  u is bounded.
q bounded, u bounded  =>  \dot{q} is bounded.
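This design can also be sketched numerically (Euler integration; the gains k = \gamma = 5 and the horizon are my choices, and e only converges asymptotically, so the final-error check is loose):

```python
import numpy as np

# Adaptive tracking for qdot = a*q + u with a unknown (true a = 1):
#   u = -sin(t) - q*ahat + k*e,   ahat' = -gamma*q*e
a, k, gamma, dt, T = 1.0, 5.0, 5.0, 1e-4, 30.0
q, ahat, t = 0.0, 0.0, 0.0
for _ in range(int(T / dt)):
    e = np.cos(t) - q
    u = -np.sin(t) - q * ahat + k * e
    ahat += dt * (-gamma * q * e)   # gradient update law from the V-analysis
    q += dt * (a * q + u)
    t += dt
final_err = abs(np.cos(t) - q)
```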
Homework A.1-2 (sol)
[Plots: adaptive controller with k = 1 and k = 10; top panels show q tracking q_d over
0 to 10 s, bottom panels show the parameter estimate \hat{a} settling near a = 1.]
Homework A.1-2 (sol)
1. Exact model control:
    \dot{q} = aq - aq - \sin(t) + k\cos(t) - kq   (exactly cancels aq)
2. Adaptive closed-loop system:
    \dot{q} = aq - q\hat{\theta} - \sin(t) + k\cos(t) - kq,   \hat{\theta} = -\gamma\int_0^t q\big( \cos(\tau) - q \big)\,d\tau
(the feedforward -q\hat{\theta} learns to cancel aq)
Homework A.1-3 (sol)
3. Design a robust controller for \dot{q} = aq + u so that q tracks q_d = \cos(t), where a is
unknown but |a| < \bar{a}, so |aq| < \bar{a}(q^2 + 1) = \rho. Simulate for a = 1 and \bar{a} = 3;
plot the state and control. V_{R1} (sliding mode) controller:
    e = q_d - q = \cos(t) - q
    \dot{e} = -\sin(t) - \dot{q} = -\sin(t) - aq - u
    V = \tfrac{1}{2}e^2,   \dot{V} = e\dot{e} = e\big( -\sin(t) - aq - u \big)
Design u = -\sin(t) + \rho\,\frac{e}{|e|} + ke, where \rho = \bar{a}(q^2 + 1):
    \dot{V} = -ke^2 - aqe - \rho|e| \le -ke^2 + \big( \bar{a}|q| - \bar{a}(q^2 + 1) \big)|e| \le -ke^2
(the extra term is \le 0 by definition of the bounding function).
V is PD and radially unbounded, \dot{V} is ND  =>  e \to 0.
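A numerical sketch of the sliding-mode design (Euler integration; the sgn term chatters at the step-size level):

```python
import numpy as np

# Sliding-mode robust control for qdot = a*q + u, a unknown, |a| <= abar:
#   u = -sin(t) + rho*sgn(e) + k*e,   rho = abar*(q**2 + 1)
a, abar, k, dt, T = 1.0, 3.0, 1.0, 1e-4, 10.0
q, t = 0.0, 0.0
for _ in range(int(T / dt)):
    e = np.cos(t) - q
    rho = abar * (q * q + 1.0)
    u = -np.sin(t) + rho * np.sign(e) + k * e
    q += dt * (a * q + u)
    t += dt
final_err = abs(np.cos(t) - q)   # small up to discretization chatter
```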
Homework A.1-3 (sol)
[Plots: robust V_{R1} (sliding mode) controller with k = 1; q tracks q_d, and the
control u chatters due to the sgn term.]
Homework A.1-2 (sol)
1. Exact model control:
    \dot{q} = aq - aq - \sin(t) + k\cos(t) - kq   (exactly cancels aq)
2. Adaptive closed-loop system:
    \dot{q} = aq - q\hat{\theta} - \sin(t) + k\cos(t) - kq,   \hat{\theta} = -\gamma\int_0^t q\big( \cos(\tau) - q \big)\,d\tau
3.a Robust - Sliding Mode:
    \dot{q} = aq + \bar{a}(q^2 + 1)\,\mathrm{sgn}\big( \cos(t) - q \big) - \sin(t) + k\cos(t) - kq
(compensation for unknown aq)
Homework A.1-3 (sol)
Design a robust controller (V_{R2}, high gain) for \dot{q} = aq + u with |a| < \bar{a} and
|aq| < \bar{a}(q^2 + 1) = \rho. Simulate for a = 1 and \bar{a} = 3; plot the state and control.
    e = \cos(t) - q,   \dot{e} = -\sin(t) - aq - u
    V = \tfrac{1}{2}e^2,   \dot{V} = e\big( -\sin(t) - aq - u \big)
Design u = -\sin(t) + \frac{1}{\epsilon}\rho^2 e + ke, where \rho = \bar{a}(q^2 + 1):
    \dot{V} = -ke^2 - aqe - \frac{1}{\epsilon}\rho^2 e^2 \le -ke^2 + \rho|e| - \frac{1}{\epsilon}\rho^2 e^2
            = -ke^2 + \rho|e|\left( 1 - \frac{\rho|e|}{\epsilon} \right)
If \rho|e| \ge \epsilon, then \dot{V} \le -ke^2.
If \rho|e| < \epsilon, then \dot{V} \le -ke^2 + \epsilon \le -2kV + \epsilon.
Follow the derivation in the notes to show the system is Globally Uniformly
Ultimately Bounded (GUUB), and all signals are bounded.
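The effect of \epsilon on the ultimate bound can be sketched numerically (Euler integration; \epsilon values follow the plots in these notes):

```python
import numpy as np

# High-gain robust control: u = -sin(t) + (rho**2/eps)*e + k*e, rho = abar*(q**2+1).
# The tracking error settles into a band that shrinks as eps decreases.
def run(eps, a=1.0, abar=3.0, k=1.0, dt=1e-4, T=10.0):
    q, t = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = np.cos(t) - q
        rho = abar * (q * q + 1.0)
        u = -np.sin(t) + (rho * rho / eps) * e + k * e
        q += dt * (a * q + u)
        t += dt
    return abs(np.cos(t) - q)

err_big, err_small = run(2.0), run(0.1)
```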
Homework A.1-3 (sol)
[Plots: robust V_{R2} controller with k = 1, \epsilon = 2; q tracks q_d within a visible band.
Note that the analysis only guaranteed ultimately bounded tracking error.]
Homework A.1-3 (sol)
[Plots: robust V_{R2} controller with k = 1, \epsilon = 0.1; the tracking band shrinks at the
cost of larger initial control effort.]
Homework A.1-2 (sol)
1. Exact model control:
    \dot{q} = aq - aq - \sin(t) + k\cos(t) - kq   (exactly cancels aq)
2. Adaptive closed-loop system:
    \dot{q} = aq - q\hat{\theta} - \sin(t) + k\cos(t) - kq,   \hat{\theta} = -\gamma\int_0^t q\big( \cos(\tau) - q \big)\,d\tau
3.a Robust - Sliding Mode:
    \dot{q} = aq + \bar{a}(q^2 + 1)\,\mathrm{sgn}\big( \cos(t) - q \big) - \sin(t) + k\cos(t) - kq
3.b Robust - High Gain:
    \dot{q} = aq + \frac{1}{\epsilon}\big( \bar{a}(q^2 + 1) \big)^2\big( \cos(t) - q \big) - \sin(t) + k\cos(t) - kq
(compensation for unknown aq)
Homework A.1-3 (sol)
Design a robust controller for the following system so that q tracks qd = cos(t):
q̇ = aq + u
where a is an unknown constant, but you do know |a| < ā; then |aq| < ā√(q² + 1).
Simulate the system for a = 1 and a = 3; plot the state and the control (VR3 controller).
e = qd − q = cos(t) − q;   ė = −sin(t) − q̇ = −sin(t) − aq − u
V = ½e²
V̇ = eė = e(−sin(t) − aq − u)
design u = −sin(t) + ρ²e/(ρ|e| + ε) + ke, where ρ = ā√(q² + 1)
V̇ = −ke² − e·aq − ρ²e²/(ρ|e| + ε)
  ≤ −ke² + ρ|e| − ρ²e²/(ρ|e| + ε)
  = −ke² + ρ|e| (ρ|e| + ε − ρ|e|) / (ρ|e| + ε)
  = −ke² + ε ρ|e| / (ρ|e| + ε)
  ≤ −ke² + ε
follow the derivation from the notes (GUUB, and all signals bounded).
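The same Euler harness can exercise the smooth VR3 term for both ε values used on the plot slides (ā = 3, k = 1, and the step size are my choices):

```python
import math

def simulate_vr3(a, eps, abar=3.0, k=1.0, T=10.0, dt=1e-3):
    """Forward-Euler simulation of q' = a*q + u with the smooth VR3 robust term."""
    q, t = 0.0, 0.0
    for _ in range(int(T / dt)):
        e = math.cos(t) - q
        rho = abar * math.sqrt(q * q + 1.0)     # bounding function: |a*q| <= rho
        u = -math.sin(t) + (rho * rho * e) / (rho * abs(e) + eps) + k * e
        q += dt * (a * q + u)
        t += dt
    return math.cos(t) - q

for eps in (0.5, 0.05):
    for a in (1.0, 3.0):
        print(f"eps = {eps}, a = {a}: final error = {simulate_vr3(a, eps):+.4f}")
```

Unlike sliding mode, the control here is continuous, so no chattering appears even though the ultimate bound shrinks with ε.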
Homework A.1-3 (sol)
[Figure: Robust VR3, ε = 0.5: q and qd (top) and the control u (bottom) vs. time.]
Homework A.1-3 (sol)
[Figure: Robust VR3, ε = 0.05: q and qd (top) and the control u (bottom) vs. time.]
Homework A.1-2 (sol)
(Closed-loop systems 1, 2, 3.a, and 3.b as on the earlier summary slide.)
3.c Robust, High Frequency:
q̇ = aq + ā²(q² + 1)(cos(t) − q) / (ā√(q² + 1)|cos(t) − q| + ε) − sin(t) + k cos(t) − kq          (compensation for the unknown aq)
Homework A.1-2 (sol)
4. Design a learning controller for the following system so that q tracks qd = cos(t):
q̇ = aq + u
where a is an unknown constant. Simulate the system for a = 1.
Plot the state, the tracking error, and the control.
One advantage of the repetitive learning scheme is that the requirement that
the robot return to exactly the same initial condition after each learning
trial is replaced by the less restrictive requirement that the desired
trajectory of the robot be periodic.
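A minimal repetitive-learning sketch follows. The specific learning law is an assumption on my part (one standard form: u(t) = û(t) + k e(t) with the pointwise update û(t) = û(t − T) + k_L e(t), period T = 2π); the gains k = k_L = 5 mirror the k = kd = 5 used in the slide's simulation:

```python
import math

def simulate_learning(a=1.0, k=5.0, kL=5.0, periods=10, n=2000):
    """Repetitive learning control for q' = a*q + u, qd = cos(t), period T = 2*pi."""
    T = 2.0 * math.pi
    dt = T / n
    u_hat = [0.0] * n            # learned feedforward, stored over one period
    q, t = 0.0, 0.0
    peak = []                    # max |e| within each period
    for _ in range(periods):
        emax = 0.0
        for i in range(n):
            e = math.cos(t) - q
            u_hat[i] += kL * e           # learning update at the same phase
            u = u_hat[i] + k * e         # learned feedforward + feedback
            q += dt * (a * q + u)
            t += dt
            emax = max(emax, abs(e))
        peak.append(emax)
    return peak

peaks = simulate_learning()
print("peak |e| per period:", [f"{p:.4f}" for p in peaks])
```

The peak error per period shrinks as û converges toward the ideal periodic feedforward, without requiring any reset of q between trials.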
Homework A.1-2 (sol)
[Figure: simulation block diagram of the system with the learning controller.]
[Figure: q and qd vs. time under learning control, a = 1, k = kd = 5.]
Homework A.2
1. In preparation for designing an adaptive controller,
write a linear parameterization for the following system:
q̇ = aq + aq² + b sin(q) + (1/c)q³ + d cos(q²) + e²q + abq + u
where a, b, c, d, e are unknown constants.
2. Design an adaptive tracking controller for the following system:
q̇ = q + au
where a is an unknown constant.
3. Use backstepping to design an adaptive controller for the following system:
q̇₁ = aq₁² + q₂
q̇₂ = u
where a is an unknown constant.
Homework A.2-1 (sol)
1. In preparation for designing an adaptive controller,
write a linear parameterization for the following system:
q̇ = aq + aq² + b sin(q) + (1/c)q³ + d cos(q²) + l²q + abq + u
where a, b, c, d, l are unknown constants (the constant "e" of the problem statement is renamed "l" here to avoid confusion with the tracking error).
Linear parameterization of the system:
q̇ = [(q + q²)  sin(q)  q³  cos(q²)  q  q] [a  b  1/c  d  l²  ab]ᵀ + u = W(q)θ + u
Note: we will adapt for "1/c", not for "c"; for "l²", not for "l"; and for "ab" in addition to "a" and "b"
individually.
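The factorization can be checked numerically by comparing the raw right-hand side against W(q)θ + u at arbitrary points (the test constants below are arbitrary choices):

```python
import math

def rhs(q, u, a, b, c, d, l):
    """Raw right-hand side: aq + aq^2 + b*sin(q) + (1/c)q^3 + d*cos(q^2) + l^2*q + ab*q + u."""
    return (a * q + a * q**2 + b * math.sin(q) + (1.0 / c) * q**3
            + d * math.cos(q**2) + l**2 * q + a * b * q + u)

def W_times_theta(q, u, a, b, c, d, l):
    """Linear parameterization W(q)*theta + u with theta = [a, b, 1/c, d, l^2, ab]."""
    W = [q + q**2, math.sin(q), q**3, math.cos(q**2), q, q]
    theta = [a, b, 1.0 / c, d, l**2, a * b]
    return sum(wi * ti for wi, ti in zip(W, theta)) + u

params = (0.7, -1.3, 2.0, 0.4, 1.1)            # arbitrary a, b, c, d, l
for q in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(rhs(q, 0.3, *params) - W_times_theta(q, 0.3, *params)) < 1e-12
print("W(q)*theta + u matches the dynamics at all test points")
```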
Homework A.2-2 (sol)
2. Design an adaptive tracking controller for the following system:
q̇ = q + au
where a is an unknown constant.
This looks harmless, but note that anything we put in u gets multiplied by "a".
We can't include 1/a in u, since a is unknown.
Rewrite as:
(1/a)q̇ = (1/a)q + u = Wθ + u
(1/a)ė = (1/a)q̇d − (1/a)q̇ = (1/a)(q̇d − q) − u = W₁θ − u,  where W₁ = [q̇d − q] and θ = 1/a
V = (1/(2a))e² + ½θ̃²,  where θ̃ = θ − θ̂  (take a > 0, so that V is positive definite)
V̇ = (1/a)eė + θ̃θ̃̇ = (1/a)eė − θ̃θ̂̇
substitute the e-dynamics:  V̇ = e(W₁θ − u) − θ̃θ̂̇
design u = W₁θ̂ + ke
V̇ = −ke² + eW₁θ̃ − θ̃θ̂̇ = −ke² + θ̃(W₁e − θ̂̇)
design θ̂̇ = W₁e = (q̇d − q)e
V̇ = −ke²
V is PD and radially unbounded, V̇ is NSD ⇒ e and θ̃ are bounded.
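The design above can be simulated directly (a minimal sketch; the true value a = 2, consistent with the a > 0 assumption in V, and the gains/step size are my own choices):

```python
import math

def simulate_adaptive(a=2.0, k=2.0, T=30.0, dt=1e-3):
    """Adaptive tracking for q' = q + a*u, qd = cos(t), with u = W1*theta_hat + k*e."""
    q, theta_hat, t = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = math.cos(t) - q
        W1 = -math.sin(t) - q          # W1 = qd' - q
        u = W1 * theta_hat + k * e     # certainty-equivalence control
        theta_hat += dt * W1 * e       # update law: theta_hat' = W1*e
        q += dt * (q + a * u)
        t += dt
    return math.cos(t) - q, theta_hat

e_final, theta_hat = simulate_adaptive()
print(f"final error = {e_final:+.4f}, theta_hat = {theta_hat:.3f} (theta = 1/a = 0.5)")
```

The tracking error converges even though θ̂ need not converge to 1/a unless the regressor is persistently exciting.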
Homework A.2-3 (sol)
3. Use backstepping to design an adaptive tracking controller for the following system:
q̇₁ = aq₁² + q₂
q̇₂ = u
where a is an unknown constant.
Tracking in the upper subsystem:
e₁ = q₁d − q₁
ė₁ = q̇₁d − q̇₁ = q̇₁d − aq₁² − q₂
Introduce the embedded control:
ė₁ = q̇₁d − aq₁² − η₂ − q₂d,  where η₂ = q₂ − q₂d
Design the adaptive "control input" q₂d:
V₁ = ½e₁² + ½ã²,  where ã = a − â
V₁̇ = e₁ė₁ − ãâ̇ = e₁(q̇₁d − aq₁² − η₂ − q₂d) − ãâ̇
Design q₂d = q̇₁d − âq₁² + k₁e₁:
V₁̇ = −k₁e₁² − e₁η₂ − e₁ãq₁² − ãâ̇ = −k₁e₁² − e₁η₂ − ã(e₁q₁² + â̇)
Design â̇ = −e₁q₁²:
V₁̇ = −k₁e₁² − e₁η₂
Homework A.2-3 (sol)
η₂ = q₂ − q₂d
η̇₂ = q̇₂ − q̇₂d = u − q̈₁d + â̇q₁² + 2âq₁q̇₁ − k₁ė₁
   = u − q̈₁d − e₁q₁⁴ + 2âq₁(aq₁² + q₂) − k₁(q̇₁d − aq₁² − q₂)
V₂ = V₁ + ½η₂²
V₂̇ = V₁̇ + η₂η̇₂ = −k₁e₁² − e₁η₂ + η₂η̇₂
Design u = q̈₁d + e₁q₁⁴ − 2âq₁q₂ + k₁(q̇₁d − q₂) + e₁ − η₂ + u_aux:
η̇₂ = 2aâq₁³ + ak₁q₁² + e₁ − η₂ + u_aux
V₂̇ = −k₁e₁² − η₂² + η₂(2aâq₁³ + ak₁q₁² + u_aux)
What if we just use the â that we already designed?
Try u_aux = −(2âq₁³ + k₁q₁²)â:
V₂̇ = −k₁e₁² − η₂² + (2âq₁³ + k₁q₁²)η₂(a − â) = −k₁e₁² − η₂² + (2âq₁³ + k₁q₁²)η₂ã
This is a problem, because we can't deal with the ã term: the update law for â has already been used up in V₁.
Homework A.2-3 (sol)
What if we repeat our previous adaptation approach?
V₃ = V₂ + ½ã₂²,  where ã₂ = a − â₂
V₃̇ = V₂̇ − ã₂â̇₂ = −k₁e₁² − η₂² + η₂(2aâq₁³ + ak₁q₁²) + η₂u_aux − ã₂â̇₂
Design u_aux = −(2âq₁³ + k₁q₁²)â₂:
V₃̇ = −k₁e₁² − η₂² + (2âq₁³ + k₁q₁²)η₂ã₂ − ã₂â̇₂
Design â̇₂ = (2âq₁³ + k₁q₁²)η₂:
V₃̇ = −k₁e₁² − η₂²
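The full two-estimate backstepping design can be exercised numerically. A minimal sketch (the desired trajectory q₁d = sin(t), the true a = 1, the gain k₁ = 2, and the step size are my own choices, since the problem leaves them open):

```python
import math

def simulate_backstepping(a=1.0, k1=2.0, T=30.0, dt=1e-4):
    """Adaptive backstepping for q1' = a*q1^2 + q2, q2' = u, tracking q1d = sin(t)."""
    q1, q2, a_hat, a_hat2, t = 0.0, 0.0, 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        q1d, q1d_dot, q1d_ddot = math.sin(t), math.cos(t), -math.sin(t)
        e1 = q1d - q1
        q2d = q1d_dot - a_hat * q1**2 + k1 * e1      # embedded (virtual) control
        eta2 = q2 - q2d
        u_aux = -(2.0 * a_hat * q1**3 + k1 * q1**2) * a_hat2
        u = (q1d_ddot + e1 * q1**4 - 2.0 * a_hat * q1 * q2
             + k1 * (q1d_dot - q2) + e1 - eta2 + u_aux)
        # adaptation laws from the Lyapunov analysis
        a_hat += dt * (-e1 * q1**2)
        a_hat2 += dt * (2.0 * a_hat * q1**3 + k1 * q1**2) * eta2
        q1 += dt * (a * q1**2 + q2)
        q2 += dt * u
        t += dt
    return math.sin(t) - q1

print(f"final tracking error e1 = {simulate_backstepping():+.5f}")
```

Note that two separate estimates of the single constant a appear, exactly as in the derivation above.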
Homework A.3
[The problem statements (1 through 4, plus a fifth part) and the solutions A.3-1 through A.3-5 were rendered as images on these slides and are not recoverable from the extracted text.]
Homework A.4
1. Design an observer to estimate ẋ in the open-loop system:
ẍ = 2cos(x²) + u
(x is measurable, but ẋ is not).
2. Design an observer to estimate ẋ and a tracking controller for x in the system:
ẍ = 2cos(x²) + u
(x is measurable, but ẋ is not).
3. Design a filter and a tracking controller for x in the system:
ẍ = 2cos(x²) + u
(x is measurable, but ẋ is not).
Homework A.4-1 (sol)
1. Design an observer to estimate ẋ in the open-loop system (u = 0):
ẍ = 2cos(x²) + u
(x is measurable, but ẋ is not).
Define:
x̃ = x − x̂
s = x̃̇ + x̃  (similar to the filtered tracking error r); then ṡ = x̃̈ + x̃̇ = ẍ − x̂̈ + x̃̇
propose V = ½x̃² + ½s²
V̇ = x̃x̃̇ + sṡ = x̃x̃̇ + s(ẍ − x̂̈ + x̃̇)
rearrange the definition of s:  x̃̇ = s − x̃
V̇ = x̃(s − x̃) + s(ẍ − x̂̈ + x̃̇) = −x̃² + sx̃ + s(ẍ − x̂̈ + x̃̇)
substitute the open-loop system (ẍ with u = 0):
V̇ = −x̃² + sx̃ + s(2cos(x²) − x̂̈ + x̃̇)
we would like to have only −x̃² and −s² in V̇; design x̂̈ to make this happen:
x̂̈ = 2cos(x²) + x̃̇ + x̃ + s
(2cos(x²) and x̃̇ cancel, s stabilizes, x̃ cancels the cross term sx̃)
V̇ = −x̃² − s²
Homework A.4-1 (sol)
1. (cont'd) Design an observer to estimate ẋ in the open-loop system:
ẍ = 2cos(x²) + u
(x is measurable, but ẋ is not).
But the designed estimate
x̂̈ = 2cos(x²) + x̃̇ + x̃ + s
has velocity measurements in it (both x̃̇ and s contain ẋ)!
V = ½x̃² + ½s²,  V̇ = −x̃² − s²
V is PD, V̇ is ND ⇒ x̃, s → 0 ⇒ x̂ → x, and since x̃̇ = s − x̃ → 0, also x̂̇ → ẋ;
the observer is bounded if x, ẋ are bounded.
Rewrite the observer by replacing s = x̃̇ + x̃ and regrouping:
x̂̈ = 2cos(x²) + 2x̃ + 2x̃̇
Two-part implementation of the filter:
x̂̇ = p + (terms that get differentiated to make x̂̈): put 2x̃ in x̂̇
ṗ = (terms that don't get differentiated to make x̂̈): put 2cos(x²) + 2x̃ in ṗ
Implementable observer:
x̂̇ = p + 2x̃
ṗ = 2cos(x²) + 2x̃
Prove that it works:
x̂̈ = ṗ + 2x̃̇ = 2cos(x²) + 2x̃ + 2x̃̇  ✓
This is a simple example because there is no ẋ term in the system dynamics.
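The implementable observer can be verified in simulation (a minimal sketch; initial conditions, horizon, and error thresholds are my own choices):

```python
import math

def simulate_observer(T=6.0, dt=1e-4):
    """Observer xhat' = p + 2*xt, p' = 2*cos(x^2) + 2*xt for x'' = 2*cos(x^2) (u = 0)."""
    x, v = 1.0, 0.0          # true state (v = x', unmeasured)
    xhat, p = 0.0, 0.0       # observer states
    for _ in range(int(T / dt)):
        xt = x - xhat                        # measurable estimation error
        f = 2.0 * math.cos(x * x)            # known nonlinearity (uses measured x)
        xhat += dt * (p + 2.0 * xt)
        p += dt * (f + 2.0 * xt)
        x += dt * v
        v += dt * f
    v_hat = p + 2.0 * (x - xhat)             # velocity estimate = xhat'
    return x - xhat, v - v_hat

ex, ev = simulate_observer()
print(f"position error = {ex:+.2e}, velocity error = {ev:+.2e}")
```

Because the 2cos(x²) terms cancel exactly between plant and observer, the estimation-error dynamics are linear and decay exponentially, regardless of what the true trajectory does.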
Homework A.4-2 (sol)
2. Design an observer to estimate ẋ and a tracking controller for x in the system:
ẍ = 2cos(x²) + u
(x is measurable, but ẋ is not).
Define:
x̃ = x − x̂
s = x̃̇ + x̃  (similar to the filtered tracking error r); then ṡ = x̃̈ + x̃̇ = ẍ − x̂̈ + x̃̇
e = xd − x
Follow the same approach as in the previous problem, now with u ≠ 0:
V_O = ½x̃² + ½s²
V̇_O = x̃x̃̇ + sṡ = −x̃² + sx̃ + s(2cos(x²) + u − x̂̈ + x̃̇)
Implementable observer:
x̂̇ = p + 2x̃
ṗ = 2cos(x²) + 2x̃ + u
V̇_O = −x̃² − s²
Homework A.4-2 (sol)
2. (cont'd) With the observer above, V_O = ½x̃² + ½s² gives V̇_O = −x̃² − s².
Control design:
e₁ = xd − x̂,  ė₁ = ẋd − x̂̇ = ẋd − p − 2x̃  (note that this is a measurable signal)
Substitute from the filter equation (the implementable form of the observer, x̂̇ = p + 2x̃) and inject pd:
Define p̃ = pd − p  ⇒  ṗ̃ = ṗd − ṗ
ė₁ = ẋd − 2x̃ − pd + p̃
Propose:
V = V_O + ½e₁² + ½p̃² = ½x̃² + ½s² + ½e₁² + ½p̃²
V̇ = V̇_O + e₁ė₁ + p̃ṗ̃ = −x̃² − s² + e₁(ẋd − 2x̃ − pd + p̃) + p̃ṗ̃
Reminder of the implementable observer from the previous slide:  ṗ = 2cos(x²) + 2x̃ + u
Homework A.4-2 (sol)
V̇ = −x̃² − s² + e₁(ẋd − 2x̃ − pd + p̃) + p̃ṗ̃
Design pd = ẋd − 2x̃ + e₁  (stabilize e₁; the e₁p̃ cross term remains):
V̇ = −x̃² − s² − e₁² + e₁p̃ + p̃ṗ̃ = −x̃² − s² − e₁² + e₁p̃ + p̃(ṗd − ṗ)
Differentiate pd:  ṗd = ẍd − 2x̃̇ + ė₁;  substitute it and ṗ from the observer:
V̇ = −x̃² − s² − e₁² + e₁p̃ + p̃(ẍd − 2x̃̇ + ė₁ − 2cos(x²) − 2x̃ − u)
Replace x̃̇ with s − x̃, since we have s in the Lyapunov function but not x̃̇ (both are unmeasurable):
V̇ = −x̃² − s² − e₁² + e₁p̃ + p̃(ẍd + ė₁ − 2cos(x²) − 2s − u)
Design part of u:
u = ẍd + ė₁ − 2cos(x²) + e₁ + p̃ + u₁
(cancel the measurable terms and the e₁p̃ cross term, stabilize p̃; u₁ will handle the −2p̃s term)
V̇ = −x̃² − s² − e₁² − p̃² − 2p̃s − p̃u₁
Homework A.4-2 (sol)
From the previous slide:  V̇ = −x̃² − s² − e₁² − p̃² − 2p̃s − p̃u₁
Design u₁ = 4k_n1 p̃  (damping against the unmeasurable s):
V̇ = −x̃² − s² − e₁² − p̃² − 2p̃s − 4k_n1 p̃²
Worst case:  −2p̃s − 4k_n1 p̃² ≤ 2|p̃||s| − 4k_n1 p̃² ≤ s²/(4k_n1)  (complete the square)
Drop the extra negative term to find a new upper bound:
V̇ ≤ −x̃² − s² − e₁² − p̃² + s²/(4k_n1)
V̇ ≤ −x̃² − (1 − 1/(4k_n1)) s² − e₁² − p̃²
Choose k_n1 ≥ 1  ⇒  GES tracking (x̃, s, e₁, p̃ → 0 exponentially, so x̂ → x and x → xd).
This is a simple example because there is no ẋ term in the system dynamics.
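The combined observer + controller can be simulated end to end. A minimal sketch (the slides leave the desired trajectory generic, so xd = sin(t) is my choice, as are k_n1 = 1, the initial conditions, and the thresholds):

```python
import math

def simulate_obs_ctrl(kn1=1.0, T=10.0, dt=1e-4):
    """Observer-based tracking of xd = sin(t) for x'' = 2*cos(x^2) + u (x measured, x' not)."""
    x, v = 0.5, 0.0          # true state (v never used by the controller)
    xhat, p = 0.0, 0.0       # observer states
    t = 0.0
    for _ in range(int(T / dt)):
        xd, xd_dot, xd_ddot = math.sin(t), math.cos(t), -math.sin(t)
        xt = x - xhat
        f = 2.0 * math.cos(x * x)
        e1 = xd - xhat
        e1_dot = xd_dot - p - 2.0 * xt            # measurable form of e1'
        pd = xd_dot - 2.0 * xt + e1
        pt = pd - p                                # p-tilde
        u = xd_ddot + e1_dot - f + e1 + pt + 4.0 * kn1 * pt
        xhat += dt * (p + 2.0 * xt)
        p += dt * (f + 2.0 * xt + u)
        x += dt * v
        v += dt * (f + u)
        t += dt
    return math.sin(t) - x, x - xhat

e, ex = simulate_obs_ctrl()
print(f"tracking error = {e:+.2e}, estimation error = {ex:+.2e}")
```

Both the tracking error xd − x and the estimation error x − x̂ decay exponentially, as the GES analysis predicts.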
Homework A.4-3 (sol)
3. Design a filter and a tracking controller for x in the system:
ẍ = 2cos(x²) + u   (x is measurable, but ẋ is not).
Define:
e = xd − x,  ė = ẋd − ẋ,  ë = ẍd − ẍ
η = ė + e − e_f  ⇒  η̇ = ë + ė − ė_f
ė_f = −e_f + kη − e   (the filter; chosen so the cross terms cancel below, with its implementability checked on the next slide)
Propose:  V = ½e² + ½e_f² + ½η²
V̇ = eė + e_f ė_f + ηη̇
  = e(η − e + e_f) + e_f(−e_f + kη − e) + η(ë + ė − ė_f)
  = −e² − e_f² + eη + ke_f η + η(ë + (1 − k)η + 2e_f)
with ë = ẍd − ẍ = ẍd − 2cos(x²) − u:
V̇ = −e² − e_f² − (k − 1)η² + η(e + (k + 2)e_f + ẍd − 2cos(x²) − u)
Homework A.4-3 (sol)
Assume for now that e_f is available:
V̇ = −e² − e_f² − (k − 1)η² + η(ẍd − 2cos(x²) − u + (k + 2)e_f + e)
Design u = ẍd − 2cos(x²) + (k + 2)e_f + e  (cancel everything in the η bracket):
V̇ = −e² − e_f² − (k − 1)η²
Choose k > 1  ⇒  GES tracking.
This is a simple example because there is no ẋ term in the system dynamics.
Is e_f measurable? (We have to ask, since we defined ė_f = −e_f + kη − e.)
ė_f = −e_f + k(ė + e − e_f) − e = kė − (k + 1)e_f + (k − 1)e   (contains the unmeasurable ė)
Two-part implementation of the filter:
e_f = p + (terms that get differentiated to make ė_f): put ke in e_f
ṗ = (terms that don't get differentiated to make ė_f)
e_f = p + ke
ṗ = −(k + 1)e_f + (k − 1)e
Check:  ė_f = ṗ + kė = kė − (k + 1)e_f + (k − 1)e  ✓
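The filter-based controller uses only x, e, and the filter state p, which makes the simulation especially simple (a minimal sketch; xd = sin(t), k = 2, and the initial conditions are my own choices):

```python
import math

def simulate_filter_ctrl(k=2.0, T=10.0, dt=1e-4):
    """Filter-based tracking of xd = sin(t) for x'' = 2*cos(x^2) + u, using only x."""
    x, v = 0.5, 0.0          # true state (v never used by the controller)
    p = 0.0                  # filter state
    t = 0.0
    for _ in range(int(T / dt)):
        xd, xd_ddot = math.sin(t), -math.sin(t)
        e = xd - x
        ef = p + k * e                           # filter output, no velocity needed
        f = 2.0 * math.cos(x * x)
        u = xd_ddot - f + (k + 2.0) * ef + e
        p += dt * (-(k + 1.0) * ef + (k - 1.0) * e)
        x += dt * v
        v += dt * (f + u)
        t += dt
    return math.sin(t) - x

print(f"final tracking error = {simulate_filter_ctrl():+.2e}")
```

Compared with the observer + controller of A.4-2, this design carries a single auxiliary state p instead of a full state estimate, at the cost of only guaranteeing tracking (no explicit velocity estimate is produced).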