Grasping Unknown Objects with a Humanoid Robot

Visual Perception and Robotic Manipulation
Springer Tracts in Advanced Robotics
Chapter 6
Hybrid Position-Based Visual Servoing
Geoffrey Taylor
Lindsay Kleeman
Intelligent Robotics Research Centre (IRRC)
Department of Electrical and Computer Systems Engineering
Monash University, Australia
Overview
• Motivation for hybrid visual servoing
• Visual measurements and online calibration
• Kinematic measurements
• Implementation of controller and IEKF
• Experimental comparison of hybrid visual servoing with existing techniques
Motivation
• Manipulation tasks for a humanoid robot are characterized by:
– Autonomous planning from internal models
– Arbitrarily large initial pose error
– Background clutter and occluding obstacles
– Cheap sensors → camera model errors
– Light, compliant limbs → kinematic calibration errors
[Figure: Metalman, the upper-torso humanoid hand-eye system]
Visual Servoing
• Image-based visual servoing (IBVS):
– Robust to calibration errors if target image known
– Depth of target must be estimated
– Large pose error can cause unpredictable trajectory
• Position-based visual servoing (PBVS):
– Allows 3D trajectory planning
– Sensitive to calibration errors
– End-effector may leave field of view
• Linear approximations (affine cameras, etc.)
• Deng et al. (2002) suggest there is little difference between visual servoing schemes
Conventional PBVS
• Endpoint open-loop (EOL):
– Controller observes only the target
– End-effector pose estimated using kinematic model
and calibrated hand-eye transformation
– Not affected by occlusion of the end-effector
• Endpoint closed-loop (ECL):
– Controller observes both target and end-effector
– Less sensitive to kinematic calibration errors but
fails when the end-effector is obscured
– Accuracy depends on camera model and 3D pose
reconstruction method
Proposed Scheme
• Hybrid position-based visual servoing using
fusion of visual and kinematic measurements:
– Visual measurements provide accurate positioning
– Kinematic measurements provide robustness to
occlusions and clutter
– End-effector pose is estimated from the fused measurements using an Iterated Extended Kalman Filter (IEKF)
– Additional state variables included for on-line
calibration of camera and kinematic models
• Hybrid PBVS has the benefits of both EOL and
ECL control and the deficiencies of neither.
Coordinate Frames
[Figure: coordinate frames observed under Hybrid, EOL, and ECL control]
PBVS Controller
• Conventional approach (Hutchinson et al, 1999).
• Control error (pose error):
G
H O (W H E E H G ) 1 W H O
• ${}^W H_E$ is estimated by visual/kinematic fusion in the IEKF.
• Proportional velocity control signal:
  ${}^G \Omega = k_1 \, {}^G \theta_O \, {}^G A_O$
  ${}^G V = k_2 \, {}^G T_O - {}^G \Omega \times {}^G T_O$
Implementation
Visual Measurements
• Gripper tracked using active LED features, represented by an internal 3D point model
[Figure: the 3D gripper model points $G_i$ project through the camera centres onto the left/right image planes as measurements $g_i$]
• IEKF measurement model:
  ${}^{L,R}\hat{g}_i = {}^{L,R}P \, {}^C H_W \, {}^W \hat{H}_E \, {}^E G_i$
Camera Model Errors
• In a practical system, the stereo baseline ($2b$) and verge angle may not be known precisely.
[Figure: stereo geometry with left/right camera centres and image planes; a miscalibrated baseline ($2b^*$ vs. $2b$) or verge angle turns the true reconstruction into a scaled or affine reconstruction]
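A toy numeric check of the scale effect, assuming an idealised rectified stereo pair where depth follows $Z = 2bf/d$ for disparity $d$; all values are illustrative:

```python
# True baseline 0.10 m, assumed (miscalibrated) baseline 0.12 m.
f, true_2b, assumed_2b = 500.0, 0.10, 0.12    # focal length in pixels
Z_true = 1.0                                   # true depth of a point (m)
d = f * true_2b / Z_true                       # observed disparity (pixels)
Z_est = f * assumed_2b / d                     # reconstructed depth
print(Z_est / Z_true)                          # 1.2 == assumed_2b / true_2b
```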
Camera Model Errors
• How does scale error affect pose estimation?
• Consider the case of translation only, by $T_E$:
– Predicted measurements:
  ${}^{L,R}\hat{g}_i = {}^{L,R}P \, {}^C H_W \, {}^W \hat{H}_E \, ({}^E G_i + \hat{T}_E)$
– Actual measurements:
  ${}^{L,R} g_i = {}^{L,R}P \, {}^C H_W \, {}^W \hat{H}_E \, K_1 ({}^E G_i + T_E)$
– Relationship between actual and estimated pose:
  $\hat{T}_E = f(K_1, b, G_i, T_E) \, K_1 T_E$
• The estimated pose for different objects at the same position, with the same scale error, is different!
Camera Model Errors
• Scale error will cause non-convergence of PBVS!
• Although the estimated gripper and object frames
align, the actual frames are not aligned.
Visual Measurements
• To remove model errors, the scale term is estimated by the IEKF using a modified measurement equation (a sketch follows below):
  ${}^{L,R}\hat{g}_i = {}^{L,R}P \, {}^C H_W \, {}^W \hat{H}_E \, (\hat{K}_1 \, {}^E G_i)$
• The scale estimate requires four observed points, with at least one in each stereo field.
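A sketch of the modified prediction, assuming the scale term $\hat{K}_1$ is a scalar applied to the 3D part of each homogeneous model point (the chapter may represent $K_1$ differently):

```python
import numpy as np

def predict_led_scaled(P, C_H_W, W_H_E_hat, K1_hat, E_G_i):
    """g_hat_i = P @ C_H_W @ W_H_E_hat @ (K1_hat * E_G_i), in pixels."""
    G = E_G_i.copy()
    G[:3] *= K1_hat                     # scale the 3D coordinates only
    p = P @ C_H_W @ W_H_E_hat @ G       # homogeneous pixel
    return p[:2] / p[2]
```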
Kinematic Model
• Kinematic measurement from the PUMA is ${}^B H_E$
• Measurement prediction (for the IEKF):
  ${}^B \hat{H}_E = {}^B H_W \, {}^W \hat{H}_E$
• The hand-eye transformation ${}^B H_W$ is treated as a dynamic bias and estimated in the IEKF
• Estimating ${}^B H_W$ requires visual estimation of ${}^W H_E$, and ${}^B H_W$ is therefore dropped from the state vector if the gripper is obscured.
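A one-line sketch of this prediction, with 4×4 homogeneous transforms as numpy arrays (names follow the slide notation):

```python
import numpy as np

def predict_kinematic(B_H_W, W_H_E_hat):
    """Predicted kinematic measurement: B_H_E_hat = B_H_W @ W_H_E_hat."""
    return B_H_W @ W_H_E_hat
```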
Kalman Filter
• Kalman filter state vector (position, velocity, calibration parameters):
  $x(k) = ({}^W p_E(k), \; {}^W r_E(k), \; {}^B p_W(k), \; K_1(k))^T$
• Measurement vector (visual + kinematic):
  $y(k) = ({}^L g_0(k), \; {}^R g_0(k), \; \ldots, \; {}^B p_E(k))^T$
• Dynamic models:
– Constant velocity model for pose
– Static model for calibration parameters
• Initial state from kinematic measurements.
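A minimal sketch of such a prediction step, assuming the first state block holds pose, the second holds velocity, and the calibration states stay constant; dimensions and the process noise Q are illustrative:

```python
import numpy as np

def predict_state(x, P_cov, dt, n_pose, Q):
    """Constant-velocity prediction: pose += dt * velocity; rest static."""
    F = np.eye(len(x))
    F[:n_pose, n_pose:2 * n_pose] = dt * np.eye(n_pose)  # pose <- velocity
    x_pred = F @ x
    P_pred = F @ P_cov @ F.T + Q        # propagate state covariance
    return x_pred, P_pred
```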
Constraints
• Three points are required for visual pose recovery
• Stereo measurements are required for scale estimation
• LED association requires multiple observed LEDs
• Estimation of ${}^B H_W$ requires visual observations
• Use a hierarchy of estimators, where $n_{L,R}$ is the number of points observed in the left/right image (see the sketch after this list):
– $n_{L,R} < 3$: EOL control, no estimation of $K_1$ or ${}^B H_W$
– $n_L \ge 3$ xor $n_R \ge 3$: hybrid control, no $K_1$ estimation
– $n_{L,R} \ge 3$: hybrid control (visual + kinematic)
• Excluded state variables are discarded by setting the corresponding rows and columns of the Jacobian to zero
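A sketch of the selection logic, assuming n_left and n_right count the LEDs currently observed in each image (names are illustrative):

```python
def choose_estimator(n_left, n_right):
    """Pick an estimator tier from the number of LEDs seen per camera."""
    if n_left < 3 and n_right < 3:
        return "EOL"            # kinematics only; K1 and B_H_W frozen
    if (n_left >= 3) != (n_right >= 3):
        return "hybrid-mono"    # visual + kinematic fusion; K1 frozen
    return "hybrid-stereo"      # full state estimated, including K1
```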
LED Measurement
• LED centroids are measured using a red colour filter
• Measured and model LEDs are associated using a global matching procedure (a sketch follows below).
[Figure: observed LEDs vs. predicted LEDs in the image]
• Robust global matching requires ≥ 3 LEDs.
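A sketch of one possible global matching step, brute-forcing the assignment of observed centroids to predicted model LEDs that minimises total image distance; the chapter's actual procedure may differ:

```python
from itertools import permutations
import numpy as np

def associate(observed, predicted):
    """observed: (m, 2) centroids; predicted: (n, 2) projections, m <= n.
    Returns a tuple best where best[i] is the index of the model LED
    matched to observation i (globally minimal total pixel distance)."""
    best, best_cost = None, np.inf
    for perm in permutations(range(len(predicted)), len(observed)):
        cost = sum(np.linalg.norm(observed[i] - predicted[j])
                   for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best
```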
Experimental Results
• Positioning experiment:
– Align the midpoint between thumb and forefinger with coloured marker A
– Align thumb and forefinger on the line between A and B
• Accuracy evaluation:
– Translation error: distance between the thumb/forefinger midpoint and A
– Orientation error: angle between the line joining thumb/forefinger and the line joining A/B
Positioning Accuracy
[Figure: hybrid controller, initial pose (right camera only)]
[Figure: hybrid controller, final pose (right camera only)]
Positioning Accuracy
[Figure: ECL controller, final pose (right camera only)]
[Figure: EOL controller, final pose (right camera only)]
Positioning Accuracy
• Accuracy was measured over 5 trials per controller.
Tracking Robustness
[Figure: initial pose, gripper outside the field of view (ECL control)]
[Figure: gripper enters the field of view (hybrid control, stereo)]
[Figure: final pose, gripper obscured (hybrid control, mono)]
Tracking Robustness
[Plot: translational component of pose error for EOL, hybrid stereo, and hybrid mono control]
[Plot: estimated scale (camera calibration parameter) for EOL, hybrid stereo, and hybrid mono control]
Baseline Error
• Error introduced in the calibrated baseline:
– Baseline scaled by factors between 0.7 and 1.5
• Hybrid PBVS performance in presence of error:
Verge Error
• Error introduced in the calibrated verge angle:
– Offsets between –6 and +8 degrees
• Hybrid PBVS performance in presence of error:
Servoing Task
Conclusions
• We have proposed a hybrid PBVS scheme to solve problems in real-world tasks:
– Kinematic measurements overcome occlusions
– Visual measurements improve accuracy and overcome calibration errors
• Experimental results verify the increased accuracy and robustness compared to conventional methods.