Eye in hand: Towards GPU accelerated online grasp planning based on pointclouds from in-hand sensor
Andreas Hermann, Felix Mauch, Sebastian Klemm, Arne Roennau
Presented by Beatrice Liang
Overview and Motivation
• Use in-hand depth cameras + GPU-based collision detection algorithms for grasp planning on the fly
• Targets anthropomorphic multi-fingered hands with complex kinematics and geometries
• Service robots have multifunctional hands
 Multiple joints
 Numerous degrees of freedom
Schunk SVH Hand with PMD Nano Depth Sensor
Robotic Hands
• Generally have 5 to 20 active degrees of freedom to perform grasps
• SCHUNK SVH hand
 20 DOF actuated by 9 motors
Grasp Planning
• Simulate contact between fingers and grasped object to find appropriate joint angles
• Databases to store precomputed grasps for known objects
• Define grasps for geometric primitives and fit primitives to visible parts of the target object
• Fit a set of spheres into detected objects
• Estimate the backside of objects by mirroring the visible part at the shadow edge (sketched below)
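As a toy illustration of the mirroring approach above (the plane placement is simplified relative to the "shadow edge" construction, and all names and values are mine, not from the paper):

```cuda
// Backside estimation by mirroring: each visible surface point is
// reflected through a plane placed at the object's centroid,
// perpendicular to the viewing direction.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Reflect p through the plane with unit normal n passing through c.
Vec3 mirror(Vec3 p, Vec3 c, Vec3 n)
{
    float d = (c.x - p.x) * n.x + (c.y - p.y) * n.y + (c.z - p.z) * n.z;
    return { p.x + 2 * d * n.x, p.y + 2 * d * n.y, p.z + 2 * d * n.z };
}

int main()
{
    // A point 10 cm short of the centroid along the view axis lands
    // 10 cm behind it after mirroring.
    Vec3 q = mirror({0.1f, 0.0f, 0.5f}, {0.0f, 0.0f, 0.6f}, {0.0f, 0.0f, 1.0f});
    printf("%.2f %.2f %.2f\n", q.x, q.y, q.z);  // 0.10 0.00 0.70
    return 0;
}
```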
Hand-Eye Calibration
• Sense-Plan-Act cycle
 Object is perceived
 Grasp is planned
 Grasp is executed without sensory input
• Visual servoing
• Haptic grasping
Proposed Method
• Visual exploration via in-hand camera to generate object models
• Highly parallelized algorithms
• Individual finger-specific motion planning
• Compatible with further tactile or force-based refinement
• Does not require a mesh-based surface representation
Implementation
GPU-based Collision Detection with GPU-Voxels
• Voxel-based collision detection
• GPU octrees, voxelmaps, and voxellists
• Models consist of dense pointclouds
• Volumetric representations of motions
• Voxels can be processed independently of each other (see the sketch below)
Pinch-Grasp Swept Volume
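A minimal sketch of the voxel-wise parallelism, assuming dense boolean occupancy grids; this is illustrative CUDA with invented names, not the actual GPU-Voxels API:

```cuda
// Voxel-wise collision check between two dense occupancy grids of
// equal size: one thread per voxel, colliding voxels are counted.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

__global__ void collideKernel(const uint8_t* env, const uint8_t* swept,
                              size_t numVoxels, unsigned int* collisions)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numVoxels) return;
    // Each voxel is independent, so no synchronization is needed
    // beyond the atomic counter update.
    if (env[i] && swept[i]) {
        atomicAdd(collisions, 1u);
    }
}

int main()
{
    const size_t n = 256 * 256 * 256;   // example map resolution
    uint8_t *dEnv, *dSwept;
    unsigned int *dCount, hCount = 0;
    cudaMalloc(&dEnv, n);
    cudaMalloc(&dSwept, n);
    cudaMalloc(&dCount, sizeof(unsigned int));
    cudaMemset(dEnv, 0, n);
    cudaMemset(dSwept, 0, n);
    cudaMemset(dCount, 0, sizeof(unsigned int));

    collideKernel<<<static_cast<unsigned int>((n + 255) / 256), 256>>>(
        dEnv, dSwept, n, dCount);
    cudaMemcpy(&hCount, dCount, sizeof(unsigned int), cudaMemcpyDeviceToHost);
    printf("colliding voxels: %u\n", hCount);

    cudaFree(dEnv); cudaFree(dSwept); cudaFree(dCount);
    return 0;
}
```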
3D Data Acquisition
Offline Grasp Rendering
• Generate swept volumes for every supported grasp
• Grasps are defined by the joints' start/end angles and by the ratio of their coupling
• For each grasp there are
 N = 5 animated DOF
 K = 250 IDs (limited by memory restrictions)
• Identifiable intervals per finger motion: $m = \lfloor K / N \rfloor = 50$
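Assuming the K IDs are simply partitioned evenly across the N animated fingers (my reading of the slide, not confirmed against the paper), the bookkeeping could look like:

```cuda
// Hypothetical ID layout (an assumption, not the paper's exact scheme):
// K = 250 IDs split across N = 5 fingers gives 50 intervals per finger.
#include <cstdio>

constexpr int N = 5;             // animated DOF (fingers)
constexpr int K = 250;           // usable swept-volume IDs
constexpr int INTERVALS = K / N; // identifiable intervals per finger

// Encode (finger, interval) as a single swept-volume ID.
int encodeId(int finger, int interval) { return finger * INTERVALS + interval; }

// Decode an ID back into finger index and closing interval.
void decodeId(int id, int& finger, int& interval)
{
    finger = id / INTERVALS;
    interval = id % INTERVALS;
}

int main()
{
    int f, s;
    decodeId(encodeId(3, 17), f, s);
    printf("finger %d, interval %d\n", f, s); // finger 3, interval 17
    return 0;
}
```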
Sensor Data Processing
• Used exact extrinsic calibration
• Accumulate measurements in a probabilistic 3D octree
• Avoided surface reconstruction (the algorithm does not require mesh representations)
• Stitched output fed to tabletop segmentation algorithms
• Produce pointcloud representation of the object's surface
• Suppress grasps in unknown regions
• Transformations into virtual workspace
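A generic sketch of probabilistic measurement accumulation via log-odds updates (the paper uses GPU-Voxels' probabilistic octree; this dense-array version with invented names only illustrates the per-voxel update rule, and free-space updates along rays are omitted):

```cuda
// Each depth measurement adds positive log-odds evidence to the voxel
// it hits; the probability is recovered from the accumulated log-odds.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

__global__ void integrateHits(float* logOdds, const unsigned int* hitIdx,
                              size_t numHits, float lHit)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numHits) return;
    // Several measurements may hit the same voxel, hence the atomic add.
    atomicAdd(&logOdds[hitIdx[i]], lHit);
}

// Convert accumulated log-odds back to an occupancy probability.
float probability(float l) { return 1.0f - 1.0f / (1.0f + std::exp(l)); }

int main()
{
    const size_t numVoxels = 1 << 20;
    unsigned int hHits[3] = {42u, 42u, 4711u};   // example voxel indices
    float* dMap; unsigned int* dHits;
    cudaMalloc(&dMap, numVoxels * sizeof(float));
    cudaMemset(dMap, 0, numVoxels * sizeof(float));
    cudaMalloc(&dHits, sizeof(hHits));
    cudaMemcpy(dHits, hHits, sizeof(hHits), cudaMemcpyHostToDevice);

    integrateHits<<<1, 256>>>(dMap, dHits, 3, 0.85f);

    float l;
    cudaMemcpy(&l, dMap + 42, sizeof(float), cudaMemcpyDeviceToHost);
    printf("p(occupied) = %.3f\n", probability(l));  // two hits on voxel 42
    cudaFree(dMap); cudaFree(dHits);
    return 0;
}
```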
Optimization Problem
• Input dimensions (formalized below):
 Geometrical transformation between object & hand (6 DOF)
 Joint angles of N fingers
 Object geometry
 Hand geometry
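A hedged formalization of these input dimensions; the notation is mine, not verbatim from the paper:

```latex
% The planner optimizes the 6-DOF pose of the object relative to the
% hand; the N finger angles follow from the precomputed closing motion.
\begin{align*}
  x &= (t, q) \in \mathbb{R}^3 \times SO(3)
      && \text{relative object pose (6 DOF)}\\
  g(x) &= (\theta_1, \dots, \theta_N)
      && \text{finger angles at first contact}\\
  x^\ast &= \operatorname*{arg\,max}_{x} \; r\bigl(x, g(x)\bigr)
      && \text{pose maximizing the grasp reward } r
\end{align*}
```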
Grasp Planning Reward Function
• Reward function: [equation on the original slide; not recoverable]
• Grasps: [definition on the original slide; not recoverable]
Hybrid Particle Swarm Optimization (PSO)
• Particle describes translation and rotation of the object in relation to the hand
• Optimization problem: [equation on the original slide; not recoverable]
• Grasp function: [equation on the original slide; not recoverable]
• Hybrid optimization approach (standard PSO update sketched below)
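The slide's equations are not recoverable, but the underlying PSO step is the textbook update:

```latex
% Particle i carries a candidate pose x_i and velocity v_i; p_i is its
% personal best, p_g the swarm's global best, and r_1, r_2 ~ U(0,1).
\begin{align*}
  v_i &\leftarrow \omega\, v_i
        + c_1 r_1 \,(p_i - x_i)
        + c_2 r_2 \,(p_g - x_i)\\
  x_i &\leftarrow x_i + v_i
\end{align*}
```

How the hybrid refinement modifies this update is not detailed on the slide.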
Grasp Model Processing
1. Choose whether precision or power grasps should be planned
2. Evaluate grasp (see the sketch below)
 Object pointcloud is transformed into the stretched-out hand
 Pull the object out of the hand until there are no more collisions
 Intersect the precalculated swept volume with the object
3. Angle of fingers at first collision: $\theta_i = \theta_i^{\mathrm{start}} + \frac{\mathit{id}_i}{m}\bigl(\theta_i^{\mathrm{end}} - \theta_i^{\mathrm{start}}\bigr)$, where $\mathit{id}_i$ is the lowest colliding interval ID of finger $i$ and $m$ the number of intervals per finger
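Steps 2 and 3 amount to a min-reduction over colliding swept-volume IDs; a sketch under the same assumed ID layout as above (dense voxel arrays, a 256-bit ID mask per voxel, and all array names hypothetical):

```cuda
// For each object voxel, every swept-volume ID present there lowers
// the first-contact interval of the finger that owns the ID.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

constexpr int N_FINGERS = 5;
constexpr int IDS_PER_FINGER = 50;

struct BitVoxel { uint32_t bits[8]; };  // 256-bit ID mask per voxel

__global__ void firstContact(const uint8_t* object, const BitVoxel* swept,
                             size_t numVoxels, int* minInterval)
{
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numVoxels || !object[i]) return;
    for (int id = 0; id < N_FINGERS * IDS_PER_FINGER; ++id) {
        if (swept[i].bits[id / 32] & (1u << (id % 32))) {
            atomicMin(&minInterval[id / IDS_PER_FINGER], id % IDS_PER_FINGER);
        }
    }
}

int main()
{
    const size_t n = 1;                 // single-voxel toy example
    uint8_t hObj = 1;                   // object occupies the voxel
    BitVoxel hSwept = {};               // finger 2 sweeps it at interval 7
    int id = 2 * IDS_PER_FINGER + 7;
    hSwept.bits[id / 32] |= 1u << (id % 32);
    int hMin[N_FINGERS] = {IDS_PER_FINGER, IDS_PER_FINGER, IDS_PER_FINGER,
                           IDS_PER_FINGER, IDS_PER_FINGER};

    uint8_t* dObj; BitVoxel* dSwept; int* dMin;
    cudaMalloc(&dObj, n); cudaMalloc(&dSwept, sizeof(BitVoxel));
    cudaMalloc(&dMin, sizeof(hMin));
    cudaMemcpy(dObj, &hObj, n, cudaMemcpyHostToDevice);
    cudaMemcpy(dSwept, &hSwept, sizeof(BitVoxel), cudaMemcpyHostToDevice);
    cudaMemcpy(dMin, hMin, sizeof(hMin), cudaMemcpyHostToDevice);

    firstContact<<<1, 32>>>(dObj, dSwept, n, dMin);
    cudaMemcpy(hMin, dMin, sizeof(hMin), cudaMemcpyDeviceToHost);
    printf("finger 2 first contact at interval %d\n", hMin[2]);  // 7
    return 0;
}
```

The contact angle of finger $i$ then follows from the interpolation in step 3, using minInterval[i] as $\mathit{id}_i$.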
Evaluation