Convex Hull for Probabilistic Points

F. Betul Atalay
Dept. of Computer Science, Saint Joseph's University
5600 City Avenue, Philadelphia, PA 19131, USA
[email protected]

Sorelle A. Friedler
Dept. of Computer Science, Haverford College
370 W. Lancaster Avenue, Haverford, PA 19041, USA
[email protected]

Dianna Xu
Dept. of Computer Science, Bryn Mawr College
101 N. Merion Avenue, Bryn Mawr, PA 19010, USA
[email protected]
Abstract
We introduce an analysis of the convex hull problem on points with locations determined
based on some probability distribution. Such probabilistic points are common in applied settings
such as location-based services using machine-learned locations, sensor networks, and
object location detection in computer vision. Our probabilistic framework is able to determine
the convex hull of such points to within a correctness guarantee determined by a user-given
confidence value. In order to precisely explain how correct the resulting structure is, we introduce a new error model for calculating and understanding approximate geometric error based
on the fundamental properties of a geometric structure. We show that this new error model
implies correctness under a robust statistical error model for the problem of maintaining the 1D
maximum and for the convex hull problem.
1 Introduction
The Convex Hull Problem is the problem of determining a minimum convex bounding polygon
that covers n points in the Euclidean plane. It is a classic problem in computational geometry,
with well-known solutions including Graham's scan and a divide-and-conquer algorithm (both
taking O(n log n) time) [7]. In this paper we are interested in approximately
computing and maintaining the convex hull for a finite set of points whose locations are given by
some probability distribution, as is often the case when machine learning algorithms are used to
predict the point locations.
Within the machine learning community, determining an object’s location is a well-studied problem, generally divided into the global localization problem (sometimes known as the “kidnapped
robot problem”) and the local trajectory problem [8]. The global problem is that of determining an
object’s location with no knowledge of its previous location, only taking advantage of environmental
observations. The local trajectory problem is that of maintaining the object’s location over time
once starting from a known location. We are most interested in handling the point data arising from
the former problem. Techniques for solving these problems can be broadly categorized as geometric
techniques (e.g., “triangulation”) or fingerprint-based techniques (for a survey, see [9]). We focus
on fingerprint-based techniques, which generally work by associating observations with locations
in training data and then using techniques such as decision trees [5] or Monte Carlo methods [8] to
learn and predict locations from an object’s current observations. The output of these models is
a probability distribution over the possible locations. While this body of work does not speak to
methods to calculate geometric structures, it is instructive when considering what knowledge about
point location is “reasonable.”
Approximate correctness of a geometric structure has been considered under a number of different models, including interpretations where the structure is considered to be fully correct some
percentage of time or where it is considered to be partially correct every time the algorithm is run.
We are most interested in this second interpretation, within which partial correctness has been
considered under the absolute error model [6], the relative error model [2], and the robust error
model [12]. Within the absolute error model a structure is considered to be correct up to some
given fixed error bound ε that is constant for any set of points [6]. Under the relative error model
a structure is considered to be correct up to some percentage based on the geometric structure [2].
The robust error model is a per-point error model under which a structure is correct based on the
percentage of points which are correct [12]. We will compare the error model we introduce to this
robust error model and will introduce the per-problem interpretations of the model in the sections
to follow.
While classical computational geometry assumes exact knowledge of point location, goals of
relaxing such assumptions have spurred a few recent papers from the computational geometry
community. Löffler and van Kreveld [11] have considered approximate convex hulls under an imprecise
point setting, where exact point location was unknown within a region but guaranteed not to be
outside of it. They consider the convex hull under multiple variants of the relative error model and
achieve running times that range from O(n log n) to O(n^13). When considering approximate nearest
neighbor searching, a model where points are described as probability distributions over their
possible locations has also been considered [1]. This latter model of point location, commonly used
in application domains, is the same as the one we use here and is restated below for completeness.
1.1 Probabilistic Points
We define a probabilistic point p = (D, v) to be an object with
a probability distribution D over its possible locations and a
value v that is the most likely location for the point p given
distribution D. For the remainder of this paper we will refer
to these probabilistic points simply as points and will assume
that the distribution is Gaussian. When referring to the value
of a specific point p we may denote it as vp . Note that in a nonprobabilistic setting the point and its value are often conflated.
Here, we will need to separate the true point, which could be at
any location that the probability distribution deems possible,
from the value that is our best guess at its location. In reality,
these two locations may be very far apart as we move away
from the center of the distribution.
Within a machine learning context, these probabilistic
points would be generated by a model that, when given a point
identity and environmental data, would return the point’s location probability distribution from which a most likely location
value could be calculated based on some desired confidence
threshold. These assumptions are standard within machine
learning [9].

Figure 1: A Google Maps screenshot showing an example probabilistic point.

For example, consider a model M predicting a cell phone user's current location. M would take as
input information about the specific user, i, including signals in the surrounding environment E
(such as GPS or WiFi), along with a desired confidence level φ. The model would then return a
location, p, with an accompanying confidence circle with a radius based on φ. Note that while we
will use this confidence circle to represent the point's possible locations, we do not truncate the
distribution, and so we retain the low-probability possibility that the point is in fact outside of this
confidence region. Under a Gaussian distribution, p is placed at the mean location and the radius
is large enough to cover at least φ percent of the mass of the probability density function centered
around the mean. When the model’s confidence is low (the probability distribution computed is
flat), the circle’s radius will become larger for a fixed φ (e.g., when the GPS on your phone is turned
off). See Figure 1 for an example of what such a point looks like on Google Maps: the central
blue dot is p and the lighter blue circle is the confidence circle indicating that with probability φ
you are within that region. More details about models that satisfy our assumptions can be found
in a survey of location models [9].
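As a concrete sketch of the confidence circle (our own illustration, not part of the paper's model, assuming an isotropic 2D Gaussian with per-axis standard deviation σ), the radius covering a φ fraction of the probability mass follows from the Rayleigh distribution of the radial distance from the mean:

```python
import math

def confidence_radius(sigma, phi):
    """Radius r such that a circle of radius r around the mean of an
    isotropic 2D Gaussian (per-axis std sigma) contains at least a phi
    fraction of the probability mass. The radial distance is Rayleigh
    distributed with CDF 1 - exp(-r**2 / (2 * sigma**2)), inverted here."""
    return sigma * math.sqrt(-2.0 * math.log(1.0 - phi))
```

For example, φ = 0.95 gives r ≈ 2.45σ; as φ → 1 the radius grows without bound, matching the observation that the distribution is never truncated.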
1.2 Certificates
Given the per-point model location predictions, we develop a framework that approximately maintains a geometric structure up to some expected correctness. To that end, we define certificates
that guarantee certain local geometric relationships crucial to the correctness of the entire structure. These certificates are inspired by those typically maintained in classic kinetic data structure
(KDS) settings [4], but we modify the KDS understanding of certificates to take into account the
probabilistic nature of the points. A certificate is defined to be a boolean function C giving true
or false value b on a set of k points P such that CP = b. The number of points involved in the
certificate, k, is assumed to be a small constant. We additionally assume that certificates do not
represent perfect guarantees, but rather a probability of φ that the certified relationship is true.
For example, a simple certificate defined on P = {p1 , p2 } could certify that p1 is above p2 . We
would interpret this to mean that p1 is above p2 with probability φ. (See Section 2 for a more
extensive example of a problem using such certificates.) Taken together, all certificates on the full
set of input points certify the correctness of the geometric structure being calculated.
We require these certificates to involve only a constant number of points so that local errors
are captured in a limited number of certificates. In addition, since the set of all certificates must
guarantee the entire structure, we need certificate definitions that propagate these local guarantees
to certify a global correctness. Interestingly, these certificates can be thought of as certifying the
steps of certain locally constrained algorithms and incrementally constructed problem solutions.
Divide and conquer algorithms often make good candidates for such problem certification mechanisms: each decision in the merge process would constitute a certificate. Sweep line algorithms
similarly make good candidates for certification generation.
The correctness of a certificate is entirely dependent on the probabilistic correctness of the
point locations for the k points involved in a certificate. At a minimum, φ^k correctness can be
achieved when each point is independently φ-correct. Clearly, φ-correctness is obtained in this case
by setting the points' model confidence thresholds equal to φ^(1/k). However, this is a conservative
lower bound on the correctness of the certificate. In the example certificate CP given above, we
would be guaranteeing that the confidence circles of points p1 and p2 did not intersect. Instead,
we could calculate directly from the underlying distribution the probability that p1 is above p2 and
set our certificate confidence accordingly to achieve φ-correctness for certificate CP .
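For the two-point example above, and under the paper's Gaussian assumption, this direct probability follows from the distribution of the difference of two independent Gaussians; a small sketch (function name our own):

```python
import math
from statistics import NormalDist

def prob_above(p1, p2):
    """Probability that 1D Gaussian point p1 = (mu1, sigma1) lies above
    p2 = (mu2, sigma2), assuming independence: the difference is Gaussian
    with mean mu1 - mu2 and std sqrt(sigma1**2 + sigma2**2)."""
    (mu1, s1), (mu2, s2) = p1, p2
    diff = NormalDist(mu1 - mu2, math.hypot(s1, s2))
    return 1.0 - diff.cdf(0.0)
```

For instance, prob_above((10, 1), (8, 1)) is about 0.92, which can then be compared against the conservative bound obtained by requiring each point to be φ^(1/2)-correct on its own.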
1.3 Results
We introduce an approximate solution to the convex hull problem for probabilistic points of the
type commonly generated within the machine learning community to predict object locations. In
conjunction with our point prediction model, we allow the user of the system to input a desired
confidence value, φ, indicating the probability that the resulting geometric structure will be correct.
A confidence value of φ = 1 indicates that the geometric structure has no error. We introduce a
certificate error model in which a structure is considered some percentage, φ, correct if φ percent
of the total certificates used to calculate the structure are correct. We will show that approximate
correctness under the certificate error model implies approximate correctness under the robust error
model for the problems of maintaining the 1D maximum and the convex hull.
Importantly, while the absolute, relative, and robust error models must be defined based on
the characteristics of each problem’s geometric structure, the certificate error model is directly understood from the certificates. Any geometric structure which can be reduced to its local boolean
properties can now be understood in an approximate context based only on these properties. Our
reduction to the traditional and more geometrically intuitive robust error model for specific problems shows that despite the local nature of these properties, a global correctness can be obtained
(though it must be proven true for each problem). The advantages of this certificate model come
from the ease of calculation and description of error once a problem is reduced to its fundamental
certificate properties.
In summary, the rest of this paper will show the following main results:
1. The introduction of an approximate solution to the convex hull problem for probabilistic
points.
2. The introduction of a certificate error model and accompanying problem-specific proofs that
the robust error model is implied by this new, more geometrically generalizable, error model.
2 Certificate Error Model
A geometric structure is defined to be φ-correct under the certificate error model if at least φ percent of the total certificates guaranteeing its structure are correct. Under the robust error model,
more commonly known as a robust statistical estimator, a structure is robust to outliers up to
some breakdown point [12]. In order to show the utility of the certificate error model introduced
here, and since the meaning of robust error is defined per-problem, we consider the robust error
of the 1D maximum problem as a clarifying example in this section. In the following section, we
consider the convex hull problem. We show that for both of these problems, φ-correctness under
the certificate error model implies φ-correctness under the robust error model.
2.1 1D Maximum Certificates
Here, we begin with an example problem that demonstrates the value and structure of the error
model without the complexity of the convex hull certificates that will be described in Section 3.
The 1D Maximum Problem is the problem of determining the moving one-dimensional point which
currently has the maximum value when given a set of n such points. We will assume for ease of
presentation that these points are in general position, i.e. that no two points have the same value.
We will use the same certificates as used for the KDS solution to this problem, which relies on
a max heap that maintains the full ordering of the points. The n − 1 certificates guarantee the
parent-child relationships in the heap [4]. The exact answer to the problem is the single point of
maximum value. When considering probabilistic points, the goal is to return a value that is larger
than all other points.
Figure 2: 1D Maximum (a max heap on points m, p, q, r and s with root value vm = 10; the believed values are shown within the nodes).

For example, consider the max heap shown in Figure 2 maintaining points m, p, q, r and s. These
one-dimensional points can be represented by their mean values. Let vp denote the value of point p.
There are four certificates: vm > vp, vp > vq, vp > vr and vm > vs associated with this heap. The
values at which these points are believed to be are shown within each node. However, the true value
of a point may differ from what is believed, which may cause a certificate to be incorrect. For
example, if point q has a true value of 12, it makes the certificate vp > vq false. Moreover, as in
this case, an incorrect certificate may mean that q's real value (12) is higher than the maximum
given by the heap (10).
Figure 3: A 1D point m and its upper boundary β(m, φ).

We define the 1D Maximum problem as being φ-correct within the robust error model if the value
returned as a proposed solution is in fact greater than φ percent of the points' true locations.
Note that, since we consider the reported maximum value in relation to the remaining points' true
locations, we do not consider it an error if the true location of the reported maximum point is in
fact much smaller than the other points' true locations. Since these true locations are unknowable,
we focus only on returning a value that we can expect to be larger than these points, whatever
their true locations. Specifically, let the boundary of the maximum value, denoted β(m, φ), be the
point at the upper boundary of the φ percent interval for point m (see Figure 3). We will report
β(m, φ) where m is the root point of the max heap.
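This reporting scheme can be sketched as follows (our own illustration, assuming 1D Gaussian points given as (mean, std) pairs; a descending sort is used because a sorted array is one valid max-heap layout):

```python
from statistics import NormalDist

def beta(point, phi):
    """beta(p, phi): the upper boundary of the phi-percent interval, i.e.
    the value the point's true location exceeds with probability 1 - phi."""
    mu, sigma = point
    return NormalDist(mu, sigma).inv_cdf(phi)

def build_max_heap(points):
    """Array-based max heap keyed on the mean value (a descending sort
    satisfies the heap invariant: every parent >= its children). Returns
    the heap and the n - 1 parent-child certificates v_parent > v_child."""
    heap = sorted(points, key=lambda p: p[0], reverse=True)
    certs = [(heap[(i - 1) // 2], heap[i]) for i in range(1, len(heap))]
    return heap, certs

def report_max(points, phi):
    """Report beta(m, phi) for the root point m of the max heap."""
    heap, _ = build_max_heap(points)
    return beta(heap[0], phi)
```

For example, for points believed at 10, 8, 6, 5 and 2 with unit standard deviation and φ = 0.95, the reported value is 10 + Φ⁻¹(0.95) ≈ 11.64.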
2.2 1D Maximum Error Model Correctness
The certificate error model applied to the 1D maximum problem means that φ percent of the total
certificates are correct. Here, we show the relationship of the certificate error model to a more
commonly understood robust error model for this 1D maximum problem.
Theorem 2.1. If the max heap is expected to be φ-correct under the certificate error model, the
value β(m, φ) reported by the heap is expected to be φ-correct under the robust error model.
Before we prove this theorem, we will consider some more examples to explain the error model
in greater detail. Suppose that point m in Figure 2 is in fact at mean value 1. Then only the
certificates vm > vp and vm > vs are invalid, but the point m is in fact the minimum point.
However, because we reported the value β(m, φ) > 10 as the maximum value, the reported value
is still larger than all the points. The worst case for this model occurs when all of the certificate
relationships remain correct, but the reported maximum is in fact the minimum. This happens
when in reality all true points q are at values greater than β(q, φ), and so the reported maximum
value β(m, φ) is in fact a minimum. Since Theorem 2.1 refers only to the expected correctness
of the structure, this worst case example does not constitute a counterexample. In fact, because
each point q’s true value is greater than β(q, φ), this case will only occur with probability (1 − φ)n .
Despite this, we will now show that in expectation the reported value is φ-correct under the robust
error model.
Proof. Let Cpq be a certificate guaranteeing that vp > vq for parent point p and child point q with
values vp and vq , respectively. If Cpq is incorrect, then one of the points is not at the value we
believed, and so v′p < v′q for the true values v′p and v′q associated with p and q respectively. (Note
that our general position assumption allows us to ignore the issue of if vp and vq are equal.) Each
incorrect certificate can cause at most one point to be greater than the heap’s maximum value.
Since the heap is expected to be φ-correct under the certificate error model, we expect 1−φ percent
of the certificates to be incorrect. There are n − 1 certificates in the heap. Each of these will cause
at most one point to be greater than the maximum, so at most (1 − φ)(n − 1) points will be above
the maximum value given by the heap. Since we will be reporting the value (and not its associated
point), this is the only way for some other point to be greater than this maximum value. We also
need to consider the chance that the maximum point is above its boundary value. By the definition
of β(m, φ), the true value of m is greater than β(m, φ) with probability at most 1 − φ. Now we
have that at most (1 − φ)n points will be above the reported maximum value. Thus, we expect
that the reported maximum value will be within the top 1 − φ percent of the points and so the
result is expected to be φ-correct under the robust error model.
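The expectation argument can also be checked empirically; the following simulation (our own sketch, with illustrative parameters and a fixed seed) draws true locations around the believed means and measures the fraction that fall below the reported β(m, φ):

```python
import random
from statistics import NormalDist

def simulate(n=2000, phi=0.9, sigma=1.0, seed=7):
    """Empirical check of Theorem 2.1: draw believed means, report
    beta(m, phi) for the largest mean, then sample true locations and
    measure the fraction lying below the reported value."""
    rng = random.Random(seed)
    means = [rng.uniform(0.0, 10.0) for _ in range(n)]
    m = max(means)
    reported = NormalDist(m, sigma).inv_cdf(phi)  # beta(m, phi)
    true_vals = [rng.gauss(mu, sigma) for mu in means]
    below = sum(v < reported for v in true_vals)
    return below / n
```

Runs of this sketch give fractions well above φ, consistent with the bound being an expectation and, in typical configurations, quite conservative.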
3 Convex Hull
Recall that the convex hull is defined as the smallest convex region containing a set of points.
In order to determine certificates that guarantee a solution to this problem, we turn to the KDS
definition of convex hull certificates. The KDS solution for this problem makes use of a divide and
conquer algorithm to find the upper envelope in the dual setting, where a point (a, b) is represented
by the line y = ax + b [3]. This dual version is required when creating certificates because a classic
point representation of the problem does not admit a divide and conquer algorithm. Since our
analysis will also rely on certificates, we will make use of this algorithm as well. We describe it
briefly here.
We focus only on the upper envelope of dual lines; the lower envelope computation is symmetric.
To find the upper envelope of a set of lines, the lines are partitioned into two subsets of roughly equal
size, their upper envelopes are computed recursively, and then the two resulting upper envelopes
(call them red and blue) are merged. The merge step is performed by sweeping a vertical
line through all of the vertices of the red and blue upper envelopes, from left to right. As the line
sweeps, the most recently encountered red and blue vertices are maintained. When a red (blue)
vertex is encountered, the algorithm determines if the vertex is above or below the edge following
the last encountered blue (red) vertex. If it is above, that vertex is added to the merged upper
envelope. If it is a different color than the last added vertex, that means the red and blue envelopes
have crossed, and the intersection point is also added to the merged upper envelope.
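The divide-and-conquer envelope computation described above can be sketched as follows (our own illustration, assuming general position; lines are (slope, intercept) pairs, and the merge sweeps the breakpoints of both chains together with the crossing of the currently active pair):

```python
import math

def cross_x(l1, l2):
    """x-coordinate where dual lines l1 = (a1, b1), l2 = (a2, b2) cross."""
    (a1, b1), (a2, b2) = l1, l2
    return (b2 - b1) / (a1 - a2)

def merge(A, B):
    """Merge two upper envelopes (lists of (slope, intercept) in
    left-to-right, i.e. increasing-slope, order) by a left-to-right sweep."""
    i = j = 0
    x = -math.inf
    out = []
    while True:
        xa = cross_x(A[i], A[i + 1]) if i + 1 < len(A) else math.inf
        xb = cross_x(B[j], B[j + 1]) if j + 1 < len(B) else math.inf
        xc = math.inf
        if A[i][0] != B[j][0]:
            xc = cross_x(A[i], B[j])
            if xc <= x:  # crossing already passed by the sweep
                xc = math.inf
        nxt = min(xa, xb, xc)
        # sample a point strictly inside the interval (x, nxt)
        if x == -math.inf and nxt == math.inf:
            sx = 0.0
        elif x == -math.inf:
            sx = nxt - 1.0
        elif nxt == math.inf:
            sx = x + 1.0
        else:
            sx = (x + nxt) / 2.0
        top = max(A[i], B[j], key=lambda l: l[0] * sx + l[1])
        if not out or out[-1] != top:
            out.append(top)
        if nxt == math.inf:
            return out
        if nxt == xa:
            i += 1
        if nxt == xb:
            j += 1
        x = nxt

def upper_envelope(lines):
    """Divide-and-conquer upper envelope of dual lines y = a*x + b."""
    lines = sorted(set(lines))
    if len(lines) == 1:
        return lines
    half = len(lines) // 2
    return merge(upper_envelope(lines[:half]), upper_envelope(lines[half:]))
```

For example, upper_envelope([(1, 0), (-1, 0), (0, 0.5)]) returns the envelope lines left to right: [(-1, 0), (0, 0.5), (1, 0)]. Each comparison made inside merge corresponds to a certificate in the construction below.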
The comparisons done during the merge step lead to certificates proving (i) the x-ordering of
the vertices and (ii) the y-ordering of a vertex with respect to an edge. However, in order to make
the KDS local (avoiding linearly many y-ordering certificates per edge), certificates that involve
comparisons between line slopes are also needed.
This algorithm takes time O(n log n) [7]. Since we work with certificates created during the
running of this algorithm, our convex hull algorithm on probabilistic points will take time O(n log n)
as well. We now describe more precisely the certificates and our resulting algorithm.
3.1 Convex Hull Certificates
In a tree, we keep track of all levels of the divide and conquer algorithm’s merge step and certify
the properties that determine each choice in the merge of two recursively determined upper envelope
chains [4].

Figure 4: Convex hull certificates in the dual setting showing all lines that are involved in each
certificate. The two chains involved in the merge are shown in different colors: one chain in grey
and the other in black. Left: Tangent certificates guarantee that line c is below the vertex ab and
that the slope of line c is greater than the slope of a and less than the slope of b. Center: Intersection
certificates guarantee that the vertex ab is to the left of vertex cd and below line c and that vertex
cd is to the right of vertex ab and below line b. Right: Diverging certificates guarantee that b's slope
is less than the slope of line a and that the vertex cb is below line a. The diverging certificate above
is shown only for the right side of the chain; a left-side diverging certificate guarantees analogous
information for the mirror image.

The certificates, as presented in [4], come in groups within which all certificates
are dependent on each other, that is, the invalidation of any one certificate invalidates all other
certificates in that group, and the existence of any one certificate implies the existence of all other
certificates in the group. For simplicity of presentation, we will refer to these groups as certificates
themselves. These certificates are: tangent certificates, between an edge in one chain that is below
a vertex in the merging chain and the two vertex-adjacent edges whose slopes its own slope lies
between; chain intersection certificates, between intersecting edges from different chains; and
diverging certificates, between two edges in different chains whose slopes guarantee they will never
intersect (see Figure 4). Diverging and tangent certificates represent the two ways in which an edge will not be involved
in the resulting upper envelope while intersection certificates represent a merge point of the two
chains. For more precise details defining these certificates see [4]. One assumption implicit in the
presentation of these certificates and in the statement of the original divide and conquer algorithm
is that the points are in general position, i.e., no two lines in the dual setting are parallel. We make
the same assumption.
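As an illustration of one certificate group (our own sketch, using the labels of Figure 4 and dual lines y = a·x + b given as (slope, intercept) pairs), the tangent certificate can be evaluated directly:

```python
def dual_vertex(l1, l2):
    """Intersection vertex of dual lines l1 = (a1, b1), l2 = (a2, b2)."""
    (a1, b1), (a2, b2) = l1, l2
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1

def tangent_certificate(a, b, c):
    """True iff line c passes below the vertex ab and the slope of c lies
    between the slopes of a and b (Figure 4, left; assumes slope(a) <
    slope(b), as in an upper envelope ordered left to right)."""
    x, y = dual_vertex(a, b)
    below = c[0] * x + c[1] < y
    slope_between = a[0] < c[0] < b[0]
    return below and slope_between
```

For a = (-1, 0) and b = (1, 0), whose vertex is the origin, c = (0, -1) certifies true, while c = (0, 1) lies above the vertex and fails.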
Robust Error Model for the Convex Hull The robust error model states that the convex hull
is φ-correct if at least φ percent of the points are contained within the hull. This is a restatement
of an idea Tukey referred to as “peeling” in which some number or percentage of outlying points
are iteratively deleted and the convex hull is computed on the remaining points [10]. As we did in
the 1D maximum problem, we will consider the values of the points as defining the convex hull.
The goal will be for those values to enclose at least φ percent of the points (whatever their true
locations). The certificate error model remains the same, that is, a convex hull is φ-correct if at
least φ percent of the certificates are correct.
Resulting Convex Hull The values returned represent the boundary of the points believed to
be on the hull. Specifically, given points H determined by the certification process as the hull, we
report {β(h, φ), ∀h ∈ H}. In this context, β(h, φ) is defined as the intersection of the tangent lines
of the boundary circles of two pairs of consecutive points on the hull. For example, in the partial
hull h1, h2, h3 shown in Figure 5, the boundary β(h2, φ) is determined by the intersection of two
tangent lines: the line tangent to the confidence circles for h2 and h3, and the line tangent to the
confidence circles for h1 and h2 . These tangent lines are chosen to be the outer tangent lines for
the two circles with respect to the convex hull.

Figure 5: The boundary value β(h2, φ) for point h2 calculated from the shown tangent lines.

In the case where one confidence circle completely
encloses another, the interior enclosed point is dropped from the hull. These boundary values are
reported as our convex hull.
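Under the simplifying assumption that all confidence circles share one radius r (so each outer tangent is the hull edge translated outward by r), the reported boundary values can be sketched as follows (our own illustration; hull vertices given in counterclockwise order):

```python
import math

def offset_edge(u, v, r):
    """Shift the (CCW) hull edge u -> v outward by r; return a point on
    the shifted tangent line and the edge direction."""
    dx, dy = v[0] - u[0], v[1] - u[1]
    length = math.hypot(dx, dy)
    nx, ny = dy / length, -dx / length  # outward normal for CCW order
    return (u[0] + r * nx, u[1] + r * ny), (dx, dy)

def line_intersection(p, d, q, e):
    """Intersect lines p + t*d and q + s*e (assumed non-parallel)."""
    det = d[0] * e[1] - d[1] * e[0]
    t = ((q[0] - p[0]) * e[1] - (q[1] - p[1]) * e[0]) / det
    return (p[0] + t * d[0], p[1] + t * d[1])

def boundary_points(hull, r):
    """beta(h, phi) for each vertex of a CCW hull whose confidence circles
    all have radius r (equal radii keep the outer tangents parallel to
    the hull edges)."""
    n = len(hull)
    out = []
    for i in range(n):
        p1, d1 = offset_edge(hull[i - 1], hull[i], r)
        p2, d2 = offset_edge(hull[i], hull[(i + 1) % n], r)
        out.append(line_intersection(p1, d1, p2, d2))
    return out
```

For the unit square [(0, 0), (1, 0), (1, 1), (0, 1)] with r = 1, the reported points are the corners pushed out diagonally, e.g. the boundary value for (1, 0) is (2, -1). Handling unequal radii would require true outer tangents of two circles, as in the text.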
3.2 Convex Hull Error Model Correctness
Theorem 3.1. If a convex hull is expected to be φ-correct under the certificate error model, then
it is expected to be φ-correct under the robust error model.
In order to reason about the certificate correctness of the convex hull, we will first need to
understand the properties of these certificates in more detail. We will say that a point p has been
excluded from the convex hull by some certificate when believing that certificate’s false assertion
causes p to be outside of the resulting convex hull. Note that if a certificate incorrectly causes p
to be on the convex hull, p has not been excluded. We will call the tree of certificates that show
the choices in the divide and conquer convex hull algorithm the merge tree. Recall from [4] and [3]
that the leaf nodes of the merge tree are single lines and the root node contains all line segments
determining the resulting upper envelope. Here, we consider the levels of the merge tree in which
a single point (a line in the dual setting) participates.
Lemma 3.1. Each point p ∈ P with associated dual line ℓ can only be excluded from the convex
hull by an incorrect certificate in the highest level L of the merge tree in which ℓ participates.
Proof. Suppose there is an incorrect certificate that involves ℓ in some level L′ below level L. Despite
this incorrect certificate, ℓ advanced to level L′ + 1 ≤ L, so ℓ was found to be on the upper envelope
for all lines in its subtree at level L′. Points reported as on the convex hull (lines on the upper envelope in the dual setting) have not been excluded from the hull. So as long as ℓ advances to some level
above L′, incorrect certificates in level L′ cannot exclude ℓ. So ℓ can only be excluded from the convex hull by an incorrect certificate that keeps it from being reported on the convex hull. This level
is, by construction of the merge tree, the highest level at which ℓ participates in the merge tree.
Next, we examine the properties of certificates from a single line’s point of view.
Lemma 3.2. Each point p ∈ P with associated dual line ℓ can be excluded from the convex hull by
at most one certificate when considering a single level L of the merge tree.
Proof. First, we consider which lines have the potential to be excluded from the hull for each type
of certificate. Referring to the line labels of Figure 4, note that only c may be excluded by an
incorrect tangent certificate, a or d may be excluded due to an incorrect intersection certificate,
and only b may be excluded by an incorrect diverging certificate, since these are the lines not
believed to be on the hull. Note also that some of the lines in these certificates (a and d in the
intersection certificates and c in the diverging certificates) only contribute a vertex to the geometric
relationship being guaranteed between the lines. We will say that such lines participate as a vertex
in the certificate, while the other lines will be said to participate as a line.
Suppose that ℓ takes the role of c in the tangent certificate. In this case, we will consider what
other certificates ℓ could participate in. Since ℓ already participates in one certificate as a line, it
cannot participate in any others as a line. That leaves participation as a vertex. Depending on
the number of lines participating in the merge at this level, ℓ may have zero, one, or two endpoints
contributing to its upper envelope segment. At any existing endpoint, ℓ could participate as a vertex
in a diverging certificate or an intersection certificate, but may participate in at most one certificate
per vertex. For either a failing diverging or intersection certificate to exclude ℓ, the vertex of ℓ that is
participating in the certificate must in reality be above the line from the other chain (a or b as labeled
for the tangent certificate). But if one of the endpoints of ℓ is above the other chain, then the tangent
certificate is invalid. So for ℓ to be excluded, the tangent certificate it participates in must be invalid.
A similar analysis when ℓ takes the role of b in the diverging certificate argues that for ℓ to
be excluded in that case, the diverging certificate it participates in as b must be invalid. Finally,
remember that if ℓ participates as a line in an intersection certificate it cannot be excluded from
the hull as it is already found to be on the hull (for this level). We have considered all possible cases
when ℓ participates as a line and in each of them there is a single certificate that must be wrong for
ℓ to be excluded from the hull and no other certificate may exclude ℓ from the hull on its own.
Before proving Theorem 3.1, we consider the worst case for this algorithm. Suppose that all
certificates are correct, but the entire point set has been shifted to the right far enough so that it
lies completely outside of the reported hull. Then the hull is 0-correct under the robust error model
but 1-correct under the certificate error model. This happens with probability (1 − φ)n . As for
the 1D maximum problem, since we are proving that φ-correctness is expected, this low probability
event does not constitute a counterexample.
Now we can put these lemmas together for the proof of Theorem 3.1.
Proof. By Lemmas 3.1 and 3.2 we know that each point can be excluded from the hull only by
an incorrect certificate in its highest level of the merge tree and that each point has at most one
certificate per level that can exclude it from the hull, so each point has at most one certificate in the
whole merge tree that can exclude it. If a convex hull is φ-correct under the certificate error model,
then each certificate has a 1 − φ probability of being incorrect and thus each point not on the hull
has at most a 1 − φ probability of being excluded. Each point on the hull may in fact lie outside
of the associated boundary value reported; however, because by definition each boundary value is
outside of the point's confidence circle, this happens with probability at most 1 − φ. Therefore each
point (whether on the hull or not) is outside of the reported hull with probability at most 1 − φ. By
linearity of expectation, this means that the expected number of points outside the reported hull is
at most (1 − φ)n. So the convex hull is expected to be φ-correct under the robust error model.
4 Conclusion
In this paper, we introduced a new way of understanding and quantifying approximate geometric
correctness via our certificate error model. Accompanying this, we showed how this error model
could be applied to probabilistic points of the type generated by machine learning models in the
context of two problems: the 1D maximum and the convex hull problems. The strength of the
certificate error model lies in its generalizability to any problem that can be stated in terms of
its component boolean properties. These results also hint at the ability to work with probabilistic
points in a motion setting, since certificates are also the basis for kinetic data structure (KDS)
results.
Interesting related open problems include the consideration of probabilistic points and the
associated certificate error model for all other classic computational geometry problems including
the Delaunay triangulation, the closest pair of points, and the calculation of kd-trees. While we
were able to show that the certificate error model implies the robust error model for the problems
considered here, that question remains open for these other problems. It would be interesting to determine
if this is a general rule that applies to any certificate-based representation of a geometric structure.
References
[1] Pankaj K. Agarwal, Boris Aronov, Sariel Har-Peled, Jeff M. Phillips, Ke Yi, and Wuzhou
Zhang. Nearest neighbor searching under uncertainty II. In PODS, 2013.
[2] Sunil Arya and David M. Mount. Approximate range searching. Computational Geometry:
Theory and Applications, 17(3-4):135–152, 2000.
[3] Mikhail J. Atallah. Some dynamic computational geometry problems. Computers & Mathematics with Applications, 11(12):1171–1181, 1985.
[4] Julien Basch and Leonidas J. Guibas. Data structures for mobile data. J. of Algorithms,
31(1):1–28, April 1999.
[5] Leo Breiman, Jerome Friedman, Charles J. Stone, and R.A. Olshen. Classification and Regression Trees. Chapman and Hall/CRC, Monterey, CA, 1984.
[6] Guilherme D. da Fonseca and David M Mount. Approximate range searching: The absolute
model. Computational Geometry: Theory and Applications, 43(4):434–444, 2010.
[7] Mark de Berg, Otfried Cheong, Marc van Kreveld, and Mark Overmars. Computational Geometry: Algorithms and Applications. Springer Berlin Heidelberg, 2008.
[8] Dieter Fox, Wolfram Burgard, Frank Dellaert, and Sebastian Thrun. Monte Carlo localization:
Efficient position estimation for mobile robots. In Proc. of the National Conference on Artificial
Intelligence, pages 343–349, 1999.
[9] Jeffrey Hightower and Gaetano Borriello. Location systems for ubiquitous computing. Computer, 34(8):57–66, 2001.
[10] Peter J. Huber. The 1972 Wald Lecture. Robust statistics: A review. The Annals of Mathematical
Statistics, 43:1041–1067, 1972.
[11] Maarten Löffler and Marc van Kreveld. Largest and smallest convex hulls for imprecise points.
Algorithmica, 56:235–269, 2010.
[12] Peter J. Rousseeuw and Annick M. Leroy. Robust Regression and Outlier Detection. Wiley &
Sons, NY, 2005.