Robotics 2
Iterative Learning for Gravity Compensation
Prof. Alessandro De Luca
Control goal
- regulation of arbitrary equilibrium configurations in the presence of gravity
  - without explicit knowledge of the robot dynamic coefficients (nor of the structure of the gravity term)
  - without the need of "high" position gains
- based on an iterative control scheme that uses
  1. PD control on the joint position error + a constant feedforward term
  2. iterative update of the feedforward term at successive steady-state conditions
- derive sufficient conditions for the global convergence of the iterative scheme with zero final error
Preliminaries
- robot dynamic model
    M(q) q̈ + S(q, q̇) q̇ + g(q) = u
  with ANY factorization S(q, q̇) q̇ of the centrifugal/Coriolis terms
- available bound on the gradient of the gravity term
    ‖∂g(q)/∂q‖ ≤ α,  for all q
- regulation attempted with a joint-based PD law (without gravity cancellation nor compensation)
    u = KP (qd − q) − KD q̇,  KP > 0, KD > 0
- at steady state (q = q̄, q̇ = 0), there is a non-zero residual error ē = qd − q̄ satisfying
    KP ē = g(q̄)
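The residual error of a pure PD law can be seen on a hypothetical single-link pendulum; all numerical values below are illustrative assumptions, not taken from the slides. At rest, KP ē = g(q̄), so the error cannot vanish wherever the gravity torque does not:

```python
import math

# Hypothetical 1-DOF pendulum (assumed parameters): I*qdd = u - m*g0*l*sin(q)
m, l, g0, I = 1.0, 1.0, 9.81, 1.0
KP, KD = 25.0, 10.0           # PD gains (assumed values)
qd = math.pi / 2              # desired configuration

q, dq, dt = 0.0, 0.0, 1e-3
for _ in range(int(10.0 / dt)):          # simulate 10 s, enough to settle
    u = KP * (qd - q) - KD * dq          # pure PD, no gravity compensation
    ddq = (u - m * g0 * l * math.sin(q)) / I
    dq += ddq * dt                       # semi-implicit Euler step
    q += dq * dt

e_bar = qd - q                           # non-zero residual error
# steady-state balance: KP * e_bar = m*g0*l*sin(q_bar)
print(e_bar, KP * e_bar - m * g0 * l * math.sin(q))
```

The printed balance residual is (numerically) zero while the position error is not, which is exactly the steady-state condition KP ē = g(q̄).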
Iterative control scheme
- control law at the i-th iteration (for i = 1, 2, …), with a constant compensation term u_{i−1} (feedforward)
    u = KP (qd − q) − KD q̇ + u_{i−1}
  q0 is the initial robot configuration; u0 = 0 is the easiest initialization of the feedforward term
- at the steady state of the i-th iteration (q = q_i, q̇ = 0), one has
    KP (qd − q_i) + u_{i−1} = g(q_i)
- update law of the compensation term (for the next iteration)
    u_i = u_{i−1} + KP (qd − q_i)   [for implementation]
    u_i = g(q_i)                    [for analysis]
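The scheme can be sketched on the same kind of hypothetical single-link pendulum (all parameters are illustrative assumptions, not from the slides): settle under PD + constant feedforward, then apply the update law u_i = u_{i−1} + KP (qd − q_i):

```python
import math

# Hypothetical 1-DOF pendulum under the iterative scheme (assumed parameters)
m, l, g0, I = 1.0, 1.0, 9.81, 1.0       # gravity term g(q) = m*g0*l*sin(q)
alpha = m * g0 * l                       # bound on |dg/dq|
KP, KD = 25.0, 10.0                      # chosen so that KP > 2*alpha = 19.62
qd = math.pi / 2

def settle(q, u_ff, T=8.0, dt=1e-3):
    """Run PD + constant feedforward u_ff until an (approximate) steady state."""
    dq = 0.0
    for _ in range(int(T / dt)):
        u = KP * (qd - q) - KD * dq + u_ff
        dq += (u - m * g0 * l * math.sin(q)) / I * dt
        q += dq * dt
    return q

q, u_ff = 0.0, 0.0                       # q0 = 0, u0 = 0
for _ in range(6):                       # a few iterations suffice
    q = settle(q, u_ff)                  # steady state q_i of iteration i
    u_ff += KP * (qd - q)                # update law: u_i = u_{i-1} + KP*(qd - q_i)

print(qd - q, u_ff)   # error -> 0, u_ff -> g(qd) = m*g0*l*sin(qd)
```

After a handful of iterations the feedforward has learned the gravity torque at qd and the position error vanishes.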
Convergence analysis
Theorem: the two conditions
  (a) λmin(KP) > α
  (b) ρ = α / (λmin(KP) − α) < 1
guarantee that the sequence {q0, q1, …} converges to qd from any initial state (q0, q̇0), i.e., globally
- condition (a) is sufficient for the global asymptotic stability of the desired equilibrium state when using u = KP (qd − q) − KD q̇ + g(qd) with a known gravity term
- the additional sufficient condition (b) guarantees the convergence of the iterative scheme, yielding q_i → qd and u_i → g(qd)
Proof
- let e_i = qd − q_i be the error at the end of the i-th iteration; based on the update law, it is
    u_i = g(q_i)
  and thus
    ‖u_i − u_{i−1}‖ = ‖g(q_i) − g(q_{i−1})‖ ≤ α ‖q_i − q_{i−1}‖ ≤ α (‖e_i‖ + ‖e_{i−1}‖)
- on the other hand, from the update law one has
    ‖u_i − u_{i−1}‖ = ‖KP (qd − q_i)‖ ≥ λmin(KP) ‖e_i‖
- combining the two relations, we have
    λmin(KP) ‖e_i‖ ≤ α (‖e_i‖ + ‖e_{i−1}‖)
  or
    ‖e_i‖ ≤ ρ ‖e_{i−1}‖,  with ρ = α / (λmin(KP) − α)
Proof (cont)
- condition (b), i.e., ρ < 1, guarantees that the error sequence {e0, e1, e2, …} is generated by a contraction mapping, so that ‖e_i‖ → 0, with asymptotic convergence from any initial state
  ⇒ the robot progressively approaches the desired configuration through successive steady-state conditions
- KP and KD affect each transient phase
- ρ drives the convergence rate of the intermediate steady states to the final one
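For a diagonal gain matrix the contraction factor is immediate to evaluate; the numbers below (gravity-gradient bound α and gains) are illustrative assumptions:

```python
# Contraction factor rho = alpha / (lambda_min(KP) - alpha) for a diagonal KP
# (illustrative numbers: alpha is an assumed gravity-gradient bound)
alpha = 9.81
KP_diag = [25.0, 22.0, 30.0]             # diagonal proportional gains
lam_min = min(KP_diag)                   # lambda_min of a diagonal matrix

rho = alpha / (lam_min - alpha)
print(rho)                               # rho < 1  <=>  lambda_min(KP) > 2*alpha
```

Note that requiring ρ < 1 with a positive denominator is the same as λmin(KP) > 2α, the combined condition discussed in the remarks.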
Remarks
- combining conditions (a) and (b) in the control law leads to the sufficient condition on the proportional gain
    λmin(KP) > 2α
- for a diagonal KP, this condition implies a (positive) lower bound on the single diagonal elements of the matrix
- again, it is a sufficient condition: the scheme may converge even if this condition is violated …
- the scheme can be interpreted as using an integral term which
  - is updated only at a discrete sequence of time instants
  - has guaranteed convergence and global asymptotic stability
Numerical results
- 3R robot with uniform links, moving in the vertical plane
- with saturations of the actuating torques
- three cases, starting from the downward position:
  I:   qd = (π/2, 0, 0)
  II:  qd = (3π/4, 0, 0)
  III: qd = (3π/4, 0, 0) as before, but with reduced gains
Case I: qd = (π/2, 0, 0)
figures: joint position errors (zero after 3 updates); control torques
Case II: qd = (3π/4, 0, 0)
figures: joint position errors (zero after 5 updates); control torques
Case III: qd = (3π/4, 0, 0), reduced gains
figures: joint position errors (limit cycles, no convergence!); control torques
Final comments
- only a few iterations are needed for obtaining convergence, learning the correct gravity compensation at the desired qd
- on the sufficiency of the condition on the P gain:
  - even if the condition is violated, convergence is still obtained in the first two cases; in the third case, a limit motion cycle takes place between two equilibrium configurations that are both incorrect
  - this shows how 'distant' sufficiency is from necessity
- the analysis can be refined to get lower bounds on the KPi (diagonal case) that are smaller, but still sufficient for convergence
  - intuition says that lower values of KPi should be sufficient for the distal joints
- in practice, the update of the feedforward term occurs when the robot is close enough to a steady state (joint velocities and position variations are below suitable thresholds)
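The practical update trigger mentioned in the last comment can be sketched as a simple detector; the thresholds, window length, and class/function names below are all illustrative assumptions:

```python
from collections import deque

# Practical steady-state detector: allow the feedforward update only when joint
# velocities and recent position variations stay below thresholds.
# Threshold values and window length are illustrative assumptions.
EPS_VEL, EPS_POS, WINDOW = 1e-3, 1e-3, 50

class SteadyStateDetector:
    def __init__(self):
        self.recent = deque(maxlen=WINDOW)   # recent joint position samples

    def at_steady_state(self, q, dq):
        self.recent.append(q)
        if len(self.recent) < WINDOW:        # wait until the window is full
            return False
        still = all(abs(v) < EPS_VEL for v in dq)
        # largest position variation per joint over the window
        span = max(max(p) - min(p) for p in zip(*self.recent))
        return still and span < EPS_POS

det = SteadyStateDetector()
# feed a settling trajectory: positions converge, velocities vanish
flags = [det.at_steady_state([1.0 + 0.5 * 0.8 ** k], [0.5 * 0.8 ** k])
         for k in range(200)]
print(flags[-1])
```

Early samples are rejected (window not yet full, motion still large); once the trajectory has settled, the detector fires and the feedforward update of the iterative scheme can be applied.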
Control experiments with flexible robots (without gravity)
- video: rest-to-rest maneuver in a given motion time for a single flexible link (PD + feedforward)
- video: end-effector trajectory tracking for FlexArm, a planar 2R robot with a flexible forearm
Extension to flexible robots
- the same iterative learning control approach has been extended to position regulation in robots with flexible joints and/or links under gravity
  - at the motor/joint level
  - at the Cartesian level (end-effector tip position, beyond flexibility), using a double iterative scheme
- experimentally validated on the DIS two-link FlexArm robot, with the supporting base tilted by approximately 6° from the horizontal plane (inclusion of gravity)
Experimental results for tip regulation
- motion task: (0, 0) → (90°, 0)
- zero final deflection reached after 3 iterations of the double iterative scheme
  (De Luca, Panzieri: Int. J. of Adaptive Control and Signal Processing, 1996)
- figures: first link position; tip angle w.r.t. the first link; second link deflection