
CS 326A: Motion Planning
Probabilistic Roadmaps
The complexity of the robot's free space is overwhelming
- The cost of computing an exact representation of the configuration space of a multi-joint articulated object is often prohibitive
- But very fast algorithms exist that can check whether an articulated object at a given configuration collides with obstacles
- Basic idea of Probabilistic Roadmaps (PRMs): compute a very simplified representation of the free space by sampling configurations at random using some probability measure
Initial Idea: Potential Field + Random Walk
- Attract some points toward their goal
- Repel other points away from obstacles
- Use collision checking to detect collisions
- Escape local minima by performing random walks
But there are many pathological cases …
Illustration of a Bad Potential "Landscape"
[Figure: potential U plotted against configuration q, with the global minimum marked]
Probabilistic Roadmap (PRM)
[Figure: a configuration space partitioned into forbidden space and free/feasible space]
Construction steps:
1. Configurations are sampled by picking coordinates at random
2. Sampled configurations are tested for collision
3. The collision-free configurations are retained as milestones
4. Each milestone is linked by straight paths to its nearest neighbors
5. The collision-free links are retained as local paths to form the PRM
6. The start configuration s and the goal configuration g are included as milestones
7. The PRM is searched for a path from s to g
Multi- vs. Single-Query PRMs
- Multi-query roadmaps
  - Pre-compute the roadmap
  - Re-use the roadmap for answering queries
- Single-query roadmaps
  - Compute a roadmap from scratch for each new query
Procedure BasicPRM(s, g, N)
1. Initialize the roadmap R with two nodes, s and g
2. Repeat:
   a. Sample a configuration q from C with probability measure π  [the sampling strategy]
   b. If q ∈ F, then add q as a new node of R
   c. For some nodes v in R such that v ≠ q do:
      if path(q,v) ⊂ F, then add (q,v) as a new edge of R
   until s and g are in the same connected component of R, or R contains N+2 nodes
3. If s and g are in the same connected component of R, then return a path between them
4. Else return NoPath
   (This answer may occasionally be incorrect)
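To make the procedure concrete, here is a minimal Python sketch of BasicPRM for a point robot in the unit square. The circular obstacles, the fixed-resolution segment check, and the k-nearest-neighbor linking rule are assumptions made for the sketch, not part of the slides, and the final graph search that would extract the actual path is omitted for brevity.

```python
import math
import random

# Illustrative workspace: a point robot in the unit square, with circular
# obstacles given as (center_x, center_y, radius). Assumptions for the sketch.
OBSTACLES = [(0.5, 0.5, 0.2), (0.2, 0.8, 0.1)]

def in_free_space(q):
    """q is collision-free if it lies inside no obstacle disk."""
    x, y = q
    return all((x - cx) ** 2 + (y - cy) ** 2 > r ** 2 for cx, cy, r in OBSTACLES)

def path_is_free(q, v, step=0.01):
    """Approximate check of the straight segment q-v at a fixed resolution."""
    n = max(1, int(math.dist(q, v) / step))
    return all(
        in_free_space((q[0] + (v[0] - q[0]) * i / n, q[1] + (v[1] - q[1]) * i / n))
        for i in range(n + 1)
    )

def connected_component(node, edges):
    """Nodes reachable from `node` through roadmap edges (simple search)."""
    seen, frontier = {node}, [node]
    while frontier:
        u = frontier.pop()
        for a, b in edges:
            w = b if a == u else a if b == u else None
            if w is not None and w not in seen:
                seen.add(w)
                frontier.append(w)
    return seen

def basic_prm(s, g, N, k=5):
    """BasicPRM: grow a roadmap R until s and g are connected or N milestones
    have been added; report whether a path exists in R (NoPath may be wrong)."""
    nodes, edges = [s, g], []
    while len(nodes) < N + 2:
        q = (random.random(), random.random())     # uniform sampling measure
        if not in_free_space(q):
            continue                                # q not in F: discard it
        nodes.append(q)
        # Try to link q to its k nearest milestones by straight local paths.
        for v in sorted((v for v in nodes if v != q), key=lambda v: math.dist(q, v))[:k]:
            if path_is_free(q, v):
                edges.append((q, v))
        if g in connected_component(s, edges):
            return "path exists in the roadmap"
    return "NoPath"                                 # may occasionally be incorrect

if __name__ == "__main__":
    random.seed(0)
    print(basic_prm(s=(0.05, 0.05), g=(0.95, 0.95), N=300))
```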
Requirements of PRM Planning
1. Checking sampled configurations, and connections between samples, for collision can be done efficiently.
   - Hierarchical collision detection (see the sketch below)
2. A relatively small number of milestones and local paths is sufficient to capture the connectivity of the free space.
   - Non-uniform sampling strategies
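The slides only name hierarchical collision detection; the following is a minimal sketch of the underlying idea, in which each body is wrapped in a tree of axis-aligned bounding boxes and a pair of subtrees is tested only if their boxes overlap. The BVHNode layout, the 2-D boxes, and the primitive_test hook are assumptions made for the illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

Box = Tuple[float, float, float, float]   # axis-aligned box: (xmin, ymin, xmax, ymax)

@dataclass
class BVHNode:
    box: Box                               # bounds everything in this subtree
    left: Optional["BVHNode"] = None
    right: Optional["BVHNode"] = None
    primitive: Optional[object] = None     # leaf payload: one geometric primitive

def boxes_overlap(a: Box, b: Box) -> bool:
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def collide(n1: BVHNode, n2: BVHNode, primitive_test: Callable) -> bool:
    """Hierarchical collision check between two bounding-volume trees:
    a pair of subtrees is examined only if their bounding boxes overlap.
    (Assumes every internal node has two children.)"""
    if not boxes_overlap(n1.box, n2.box):
        return False                                         # prune both subtrees at once
    if n1.primitive is not None and n2.primitive is not None:
        return primitive_test(n1.primitive, n2.primitive)    # exact test at the leaves
    if n1.primitive is None:                                 # descend into an internal node
        return collide(n1.left, n2, primitive_test) or collide(n1.right, n2, primitive_test)
    return collide(n1, n2.left, primitive_test) or collide(n1, n2.right, primitive_test)
```

The pruning at the first test is what makes the check fast in practice: most subtree pairs are rejected without ever reaching an exact primitive-primitive test.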
PRM planners work well in practice. Why?
- Why are they probabilistic?
- What does their success tell us?
- How important is the probabilistic sampling measure π?
- How important is the randomness of the sampling source?
Why is PRM planning probabilistic?
- A PRM planner ignores the exact shape of F, so it acts like a robot building a map of an unknown environment with limited sensors
- At any moment there exists an implicit distribution (H, σ), where
  - H is the set of all consistent hypotheses over the shapes of F
  - for every x ∈ H, σ(x) is the probability that x is correct
- The probabilistic sampling measure π reflects this uncertainty
  [Its goal is to minimize the expected number of remaining iterations needed to connect s and g, whenever they lie in the same component of F]
So ...
- PRM planning trades the cost of computing F exactly against the cost of dealing with uncertainty
- This trade is beneficial only if a small roadmap has a high probability of representing F well enough to answer planning queries correctly
  [Note the analogy with PAC learning]
- Under which conditions is this the case?
Relation to Monte Carlo Integration
To estimate I = ∫_{x1}^{x2} f(x) dx, sample points (xi, yi) uniformly in a bounding rectangle of area A = a × b and classify each point as red (under the curve) or black (above it):
I ≈ A × (# red) / (# red + # black)
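As a small aside, the hit-or-miss estimate above in Python; the integrand, the interval, and the bounding-box height y_max are arbitrary choices for the example.

```python
import random

def monte_carlo_integral(f, x1, x2, y_max, n=100_000):
    """Hit-or-miss Monte Carlo: I ≈ A * (# points under the curve) / (# points)."""
    area = (x2 - x1) * y_max                   # A = a * b
    hits = 0                                   # the "red" points
    for _ in range(n):
        x = random.uniform(x1, x2)
        y = random.uniform(0.0, y_max)
        if y <= f(x):
            hits += 1
    return area * hits / n

# Example: estimate of the integral of x^2 over [0, 1], whose exact value is 1/3.
# Assumes f is non-negative on [x1, x2] and bounded above by y_max.
print(monte_carlo_integral(lambda x: x * x, 0.0, 1.0, 1.0))
```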
But a PRM planner must construct a path:
- The connectivity of F may depend on small regions
- Insufficient sampling of such regions may lead the planner to failure
Visibility in F
- Two configurations q and q' see each other if path(q,q') ⊂ F
- The visibility set of q is V(q) = {q' | path(q,q') ⊂ F}
ε-Goodness of F
- Let μ(X) denote the volume of a subset X ⊆ F
- Given ε ∈ (0,1], a configuration q ∈ F is ε-good if it sees at least an ε-fraction of F, i.e., if μ(V(q)) ≥ ε μ(F)
- F is ε-good if every q in F is ε-good
[Figure: a free space F and the visibility set V(q) of a configuration q; here, ε ≈ 0.18]
- Intuition: if F is ε-good, then with high probability a small set of configurations sampled at random will see most of F
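The visibility fraction of a configuration can itself be estimated in the Monte Carlo spirit above: draw configurations uniformly from F and count how many q sees. A minimal sketch, assuming hypothetical helpers sample_free (uniform rejection sampling over F) and path_is_free (straight-segment collision check), like those in the BasicPRM sketch earlier.

```python
def visibility_fraction(q, sample_free, path_is_free, n=10_000):
    """Monte Carlo estimate of mu(V(q)) / mu(F): draw n configurations uniformly
    from F and count how many of them q can see along a collision-free segment."""
    seen = sum(1 for _ in range(n) if path_is_free(q, sample_free()))
    return seen / n
```

With such an estimator, a sampled milestone q can be declared (empirically) ε-good when the returned fraction is at least ε.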
Connectivity Issue
[Figure: a free space split into two subsets F1 and F2 = F \ F1 that meet through a narrow passage; the lookout of F1 is highlighted]
The β-lookout of a subset F1 of F is the set of all configurations in F1 that see a β-fraction of F2 = F \ F1:
β-lookout(F1) = {q ∈ F1 | μ(V(q) ∩ F2) ≥ β μ(F2)}
F is (ε,α,β)-expansive if it is ε-good and each one of its subsets X has a β-lookout whose volume is at least α μ(X)

Intuition: if F is favorably expansive, it should be relatively easy to capture its connectivity by a small network of sampled configurations
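For reference, the definitions above collected in one place; this restatement follows the slides' wording.

```latex
% Visibility, epsilon-goodness, beta-lookout, and expansiveness,
% restated from the slides (mu denotes volume; F is the free space).
\begin{align*}
V(q) &= \{\, q' \mid \mathrm{path}(q,q') \subset F \,\}\\
q \text{ is } \varepsilon\text{-good} &\iff \mu(V(q)) \ge \varepsilon\,\mu(F)\\
\beta\text{-lookout}(F_1) &= \{\, q \in F_1 \mid \mu(V(q)\cap F_2) \ge \beta\,\mu(F_2) \,\},
  \qquad F_2 = F \setminus F_1\\
F \text{ is } (\varepsilon,\alpha,\beta)\text{-expansive} &\iff
  F \text{ is } \varepsilon\text{-good and }
  \mu(\beta\text{-lookout}(X)) \ge \alpha\,\mu(X) \text{ for every } X \subseteq F
\end{align*}
```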
Comments
- Expansiveness only depends on volumetric ratios
- It is not directly related to the dimensionality of the configuration space; e.g., in 2-D the expansiveness of the free space can be made arbitrarily poor
[Figure: four 2-D example spaces, with captions:]
- Thanks to the wide passage at the bottom, this space is favorably expansive
- This space's expansiveness is worse than if the passage were straight
- Many narrow passages might be better than a single one
- A convex set is maximally expansive, i.e., ε = α = β = 1
Theoretical Convergence of PRM Planning

Theorem 1
Let F be (ε,α,β)-expansive, and let s and g be two configurations in the same connected component of F. BasicPRM(s,g,N) with uniform sampling returns a path between s and g with probability converging to 1 at an exponential rate as N increases.
[Figure: bound on γ = Pr(Failure); experimental convergence; linking sequence]
Theorem 2
For any ε > 0, any N > 0, and any γ ∈ (0,1], there exist α₀ and β₀ such that, if F is not (ε,α,β)-expansive whenever α > α₀ and β > β₀, then there exist configurations s and g in the same connected component of F such that BasicPRM(s,g,N) fails to return a path with probability greater than γ.
What does the empirical success of PRM planning tell us?
It tells us that F is often favorably expansive despite its overwhelming algebraic/geometric complexity.

In retrospect, is this property surprising?
- Not really! Narrow passages are unstable features under small random perturbations of the robot/workspace geometry
- Hence, poorly expansive spaces are unlikely to occur by accident
Most narrow passages in F are intentional …
… but it is not easy to intentionally create complex narrow passages in F
[Figure: the alpha puzzle]
PRM planners work well in practice. Why?
- Why are they probabilistic?
- What does their success tell us?
- How important is the probabilistic sampling measure π?
- How important is the randomness of the sampling source?
How important is the probabilistic sampling measure π?
- Visibility is usually not uniformly favorable across F
  [Figure: a free space with regions of good visibility and regions of poor visibility (small visibility sets, small lookout sets)]
- Regions with poorer visibility should be sampled more densely (more connectivity information can be gained there)
Impact
Two non-uniform strategies, illustrated for a query from s to g:
- Gaussian sampling [Boor, Overmars, van der Stappen, 1999] (sketched below)
- Connectivity expansion [Kavraki, 1994]
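As an illustration of a non-uniform measure, here is a minimal sketch of the Gaussian sampling strategy: draw a pair of nearby configurations and, when exactly one of them is collision-free, keep the free one, which biases milestones toward the boundary of F and hence toward narrow passages. The per-coordinate Gaussian offset, the helper names, and the choice of sigma are assumptions of the sketch, not taken from the slides or the original paper.

```python
import random

def gaussian_sample(in_free_space, sample_uniform, sigma=0.05):
    """Gaussian sampling strategy (sketch): return a free configuration whose
    Gaussian-distributed neighbor is in collision, i.e., one close to obstacles."""
    while True:
        q1 = sample_uniform()
        # Second configuration at a Gaussian-distributed offset from the first.
        q2 = tuple(c + random.gauss(0.0, sigma) for c in q1)
        free1, free2 = in_free_space(q1), in_free_space(q2)
        if free1 and not free2:
            return q1
        if free2 and not free1:
            return q2
```

Plugging such a sampler into the BasicPRM loop in place of uniform sampling changes the measure π without changing the rest of the algorithm.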
- But how can we identify poor-visibility regions?
  - What is the source of information? The robot and workspace geometry
  - How can we exploit it?
    - Workspace-guided strategies
    - Filtering strategies
    - Adaptive strategies
    - Deformation strategies
How important is the randomness of the sampling source?
Sampler = uniform source S + measure π
- Random
- Pseudo-random
- Deterministic
[LaValle, Branicky, and Lindemann, 2004]
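One classical deterministic, low-discrepancy source used in this line of work is the Halton sequence; a minimal sketch of its radical-inverse construction is below. Using it in place of a pseudo-random generator inside a PRM sampler is an assumption of the sketch, not something the slides prescribe.

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: reflect the base-`base` digits of i
    about the radix point, giving a value in [0, 1)."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += (i % base) * f
        i //= base
        f /= base
    return result

def halton(i, bases=(2, 3)):
    """i-th point of the Halton sequence in [0,1)^d, one prime base per dimension."""
    return tuple(radical_inverse(i, b) for b in bases)

# First three 2-D Halton points: roughly (0.5, 0.333), (0.25, 0.667), (0.75, 0.111).
print([halton(i) for i in range(1, 4)])
```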
Choice of the Source S
- Adversary argument in the theoretical proof
- Efficiency
- Practical convenience
[Figure: a query from s to g]
Conclusion
- In PRM, the word "probabilistic" matters
- The success of PRM planning depends mainly and critically on favorable visibility in F
- The probability measure used for sampling F derives from the uncertainty about the shape of F
- By exploiting the fact that visibility is not uniformly favorable across F, sampling measures have a major impact on the efficiency of PRM planning
- In contrast, the impact of the sampling source (random or deterministic) is small