The Art-Gallery Problem:
A Survey and an Extension
GÖRKEM SAFAK
Master of Science Thesis
Stockholm, Sweden 2009
Master’s Thesis in Computer Science (30 ECTS credits)
at the School of Computer Science and Engineering
Royal Institute of Technology year 2009
Supervisors at CSC were Patric Jensfelt and Alper Aydemir
Examiner was Danica Kragic
TRITA-CSC-E 2009:126
ISRN-KTH/CSC/E--09/126--SE
ISSN-1653-5715
Royal Institute of Technology
School of Computer Science and Communication
KTH CSC
SE-100 44 Stockholm, Sweden
URL: www.csc.kth.se
Abstract
One of the ultimate goals in robotics is to create autonomous systems that accept and execute tasks without further human intervention. Progress in autonomous robots affects many areas such as manufacturing, waste management, undersea work, space exploration, assistance for the disabled, and surgery. The coverage of a certain workspace by mobile or static guards is called the "art-gallery problem". This problem has attracted much interest among researchers for more than three decades, and many different algorithms have been developed for different variations of the art-gallery problem. This thesis aims to give a general survey of these algorithms, focusing on results rather than on proofs of the techniques used. Integrating objects into the motion-planning process has also been a topic of research, as studies have shown that objects play an important role in the definition of space as a human concept. In this thesis, an approach for efficient object search during motion planning is presented, using a randomized sampling algorithm of the kind developed and widely used in recent art-gallery methods.
Sammanfattning
The Art-Gallery Problem: A Survey and an Extension
A principal goal in robotics is to create autonomous robots that can accept and carry out tasks without human intervention. Progress in autonomous robotics can bring major improvements in many areas such as manufacturing, waste management, marine and space exploration, assistance for the disabled, and surgery. The surveillance of an area by mobile or static guards is called the "art-gallery problem". This problem has attracted great interest among researchers for more than three decades, and many different algorithms have been developed for several types of art-gallery problems. This thesis aims to give a compilation of these algorithms, with a focus on results rather than on proofs of the techniques used. Motion planning for object search has also been the subject of much research. Studies have shown that objects play a very important role in the definition of the surroundings as a human concept. This thesis presents an approach for efficient object search based on the state of the art in sampling-based methods.
Acknowledgment
I would like to thank several people who helped me both during my research
and my education at KTH. First of all, I would like to thank Dr. Patric Jensfelt
who has tirelessly answered all my questions and kindly guided me while
carrying out my research. I also thank him for being a supportive and excellent
teacher during my past two years at KTH. I would also like to express my
gratitude to Alper Aydemir for sharing his experience and providing guidance on
my research subject. Lastly, I would like to thank my mother and father for their
endless support both during my two years at KTH and during my whole life.
Table of Contents
1 Introduction
1.1 Robots in Daily Lives
1.2 Statistics in Robotics
1.3 Navigation
1.4 Outline
2 Problem Description and Motivation
2.1 Motion Planning History
2.2 Background and Motivation
2.2.1 Next-Best-View (NBV) Problem
2.2.2 Art Gallery Problem
2.2.3 Motivation
3 Definitions and Preliminary Results
3.1 Definitions and Terminology
3.1.1 Curves, Paths and Routes
3.1.2 Visibility and Visibility Polygon
3.1.3 Watchman's Route
3.1.4 Polygon Classes
3.1.5 Polygon Decomposition
4 A Survey on Guarding Art Galleries
4.1 Art Gallery History
4.1.1 Orthogonal Galleries
4.1.2 Other Variations
4.1.3 Generalized Guards and Holes
4.1.4 Watchman Routes
4.1.5 Recent Studies
5 Randomized Art Gallery Problem and Extensions
5.1 Extended Visibility Constraints and Effects
5.1.1 Effect of Constraints
5.2 Randomized Algorithm for Art-Gallery Problems
5.3 Occupancy Grid and Elevation Maps
5.3.1 Occupancy Grids
5.3.2 Elevation Maps
5.4 Extended Art Gallery Algorithm and Implementation
5.4.1 Table-Boundary Coverage Based on Grid Weights
5.4.2 Incorporating Statistical Data for Efficient Search
6 Conclusions and Future Work
6.1 Conclusions
6.2 Future Work
Bibliography
Chapter 1
Introduction
For ages, humans have desired to create tools that ease and improve the
quality of their lives. Most ground-breaking inventions came as a result of
this desire to facilitate daily life. If we take the aim of technological
development through the ages to be making life easier for mankind, then
robotics should be the next tool for realizing this aim.
Although Karel Capek was referring to autonomous human-like robots
when he first used the word "robot", industrial robots played the lead role
on the robotics scene until the late 20th century. Films, movies and stories
about human-like helpers or companions have a long history, but the first fully
autonomous machines only appeared in the 1960s, and these machines were used
for industrial manufacturing, far from the dreams of domestic companions.
Since then, however, more and more effort, attention and research have been
devoted to autonomous robots.
1.1 Robots in Daily Lives
Scientists have been attracted by the idea of interaction between a robot
and its environment, and by the possibility of robots interacting with each
other, since the early days of biologically inspired robots [1]. Despite these
inspirations, there will be no C-3PO or RoboCop in mankind's daily life in the
near future, disappointing many who grew up with stories of invading or
savior robots.
In fact, most current domestic robots are far from mimicking humans.
Much work has been done on making robots resemble humans in appearance as a
design consideration, for instance through facial expressions: Sparky [2] and
Feelix [3] have simple actuated faces for expressions (Figure 1.1). Yet many
autonomous robots still lack most basic human skills.
Figure 1.1: Actuated faces of Sparky (left) and Feelix (right)
(Image taken from [1])
Facts about robotics are rapidly changing, though. Robots able to operate
autonomously are rapidly beginning to enter people's daily lives.
Although they share some similarities with their ancestors, the industrial
robots, these autonomous robots need to be able to reason, decide, operate and
adapt to their shifting environments. There are some, if basic, examples where
these dreams of autonomy have come true. In addition to the traditional role
of servants, autonomous robots are being designed as pets, companions or
assistants [1].
Toy robots are beginning to make an impact on their market. Sony's
AIBO (Figure 1.2) is probably the most famous representative of these toy
robots that can interact with people, obey commands and adapt to their
surroundings. A recent study based on online AIBO discussion forums [4]
has shown that humans are more than ready to accept autonomous robots in
their daily lives. Results showed that 75% of the participants made remarks
about AIBO being a technological artifact. Besides these expected reactions,
the report further showed that 49% of participants believed that AIBO
possesses life-like essences, 60% referred to the presence of a mental
essence, and 59% to social rapport [4].
Another example of autonomous robots entering daily life is the
iRobot Roomba, which had sold over 2 million units by 2006. Roomba exploits
one of the most important facts about autonomous robots if they are to be
frequent visitors in homes: a robot's cost must be commensurate with its
utility, while minimizing complexity [5]. As the president of Robotic Trends
states in [6], "With over 2 million units sold it is clear that iRobot Roomba
is not a boutique purchase or limited to holiday impulse buying."
Figure 1.2: Sony’s AIBO
Close and effective interaction between robots, the environment and humans
will be important to the success of autonomous robots [1]. Even though the
history of autonomous robotics seldom went beyond isolated experiments on
research platforms, much practical work has lately been conducted in which
mobile autonomous robots appear in museums and at receptions and serve as
tour guides [7].
In 1998, an autonomous robot called Chips (Figure 1.3) was installed at
the Carnegie Museum of Natural History with the goal of becoming a permanent
member. Chips operated autonomously for almost four years, travelling over 500
kilometers [7, 8]. It was able to move autonomously, greet visitors, give
interactive tours, charge itself and even detect its own failures, removing
the need to supervise a fully social robot. Two more robots with the same aim
have been deployed, with a total travel distance of 840 km [8]. It has also
been reported [7] that using tour-guide robots increased the amount of
information absorbed by visitors, another improvement in Human-Robot
Interaction (HRI).
Blacky is another example of an autonomous robot; it uses continuous
localization as well as voice synthesis as navigational aids. Blacky operated
as an interactive trade-fair guide on three occasions, accumulating seven days
of intensive use [9]. Web-based tele-operation interfaces for robots have also
been developed and have gained serious interest over the last few years [10, 11].
Several more examples of successful autonomous robotic applications
can be found in [10], [11] and [12].
To get a better understanding of the impact of autonomous robots on
daily life, it should be noted that scientific research, symposia and
publications are being devoted to a set of moral rules called roboethics
[13, 14, 15]. The European Robotics Research Network (EURON) funded a
roboethics atelier aimed at drawing up a roboethics roadmap before robotics
comes under public ethical scrutiny, as nuclear physics, chemistry and
bioengineering did [14]. A study funded by the U.S. Army Research Office
(ARO) [15] discusses the lethality of intelligent robotic systems in potential
warfare, proposes ethical codes of conduct, and claims that robots can be made
more humane than human beings in military situations, resulting in fewer
ethical violations.
Figure 1.3: Chips attracting visitors [7]
1.2 Statistics in Robotics
Statistical data also shows how rapidly autonomous robots are entering
the daily lives of mankind. Japan, which has a 46% share of the world's total
robot demand, is facing a declining birthrate, leading to a rapid aging of its
population [16]. There is therefore a fear that a decreasing labor force will
lead to low productivity and quality. To attract workers, companies try to
make workplaces friendlier, which requires greater use of robots. According to
the Japan Economic Monthly [16], most of the growth in interactive robots
currently focuses on robots capable of cleaning, security and household
services. The Japanese Robot Association predicts that by 2025 the personal
robot industry will be worth more than $50 billion a year worldwide, compared
with about $5 billion today [17]. The Japanese Ministry of Economy, Trade and
Industry is providing 50% financing to domestic producers, and next-generation
robotics research has been made part of national policy in Japan [16].
The 2008 annual UNECE/IFR study on world robotics [18] predicts that
54,000 new service robots will be installed for professional use, whereas
around 12.1 million service-robot units are expected to be installed for
personal use, such as vacuum cleaning, lawn mowing, entertainment and leisure
(Figure 1.4).
With more than 70% of all households having a broadband connection,
South Korea is one of the most technologically advanced countries and
competes with Japan and the United States, even though the nation still lags
behind them in robotics [19]. The South Korean Ministry of Information wants
to put a robot in every household by 2020 and thus focuses mainly on household
service robots. In [19], Oh Sang Rok, who oversees this project, states that
the industrial robot market may be saturated but the market for service robots
is in its infancy and just opening up, so more than 30 companies and 1,000
scientists are being gathered for this aim.
Figure 1.4: Service robots for domestic use
(Image taken from [18])
Bill Gates, who was behind one of the most innovative products of the
20th century, believes that robotics is on the brink of the same kind of
revolution in the 21st century: just as the PC has had an important effect on
our work, communication and daily life, robotics will have the same impact on
every aspect of our lives [17].
1.3 Navigation
As mentioned before, making robots able to interact with their environment is
one of the most important steps towards autonomy. Being able to navigate and
operate in an environment without human assistance is a milestone to be
achieved in mobile robotics [7, 20]. There has been a significant amount of
work done in mobile robotics, including mapping, localization, and navigation
[9, 10, 21, 22, 23].
With the expanding field of mobile robotics, the goals set for robots are
becoming more and more ambitious, opening new areas of research [24]. A
robot's interaction with its environment depends mostly on its interaction
with the objects in its surroundings. As a consequence, object search, view
planning based on objects, and optimization of the algorithms for these
processes have come under the spotlight of mobile-robotics researchers. It is
therefore vital for a robot to navigate, search, locate and interact
efficiently in its own environment in order to realize the dream of companions
that can understand and use concepts to mimic basic human skills.
In the following chapters, much of the work that has been done on view
planning will be presented in an extensive manner, while a comparatively
recent work that takes a novel approach to the next-best-view (NBV) problem
will be examined in more detail. The NBV problem is basically a decision
problem in view planning: the next viewing point is chosen according to a
specific criterion. An extension that incorporates object information when
choosing the next-best-view point will also be proposed and explained.
1.4 Outline
The thesis is structured as follows:
• Chapter 2 describes the problem and the motivation, as well as some
background information.
• Chapter 3 presents the definitions and preliminary results needed for the
following chapters.
• Chapter 4 is the survey, reflecting the historical evolution of the
art-gallery problem as well as algorithms and extensions, focusing on results
rather than proofs.
• Chapter 5 explains the method chosen for the thesis and presents the work
and the proposed algorithm for efficient object search.
• Chapter 6 is the final chapter, which evaluates the work presented in the
thesis, draws conclusions and suggests future work based upon what has been
done.
Chapter 2
Problem Description and Motivation
2.1 Motion Planning History
One of the ultimate goals in robotics is creating autonomous robots that will
accept and execute tasks without further human intervention [25]. Progress in
autonomous robots affects many areas such as manufacturing, waste management,
undersea work, space exploration, assistance for the disabled, and surgery.
One of the central themes in autonomous robotics is motion planning. At first
glance, motion planning seems like a simple task, since we humans deal with it
so easily. In robotics, however, there is a huge amount of literature, theory
and algorithmics involved in motion planning, which raises questions in fields
like mathematics, computer science, artificial intelligence, image processing
and medical engineering.
Motion planning and autonomous navigation have now been studied for several
decades and have emerged as a fruitful and crucial research area in robotics
[22, 25, 26, 27]. Motion planning is an algorithmic problem that draws on
computational geometry and computational robotics to find better algorithms
for designing autonomous robots able to travel to specific locations without
any help.
Motion planning can be described as breaking a task down into atomic motions
for a robot. For instance, if a robot is required to move to a location in
finite time, the motion-planning algorithm must take the task as input and
create a specific set of motions as output, while taking constraints
(differential and/or optimality) as well as uncertainties into consideration.
Motion planning has a wide range of applications including automation and
robot design as well as robotic surgery, architectural design, computer
animation, drug docking and protein folding (biological molecules) [28].
An extensive amount of work has been done on motion-planning algorithms such
as grid-based methods, planning based on potential fields, and sampling-based
algorithms. Each has its advantages and disadvantages depending on the
complexity and dimension of the problem.
One of the earliest works in motion planning was done by N. J. Nilsson in
1969 and makes use of a road map called the "visibility graph" (Figure 2.1):
the shaded areas represent obstacles, the solid lines are the edges of the
graph and connect the vertices of the obstacles, and the dotted lines connect
the start and goal configurations to the roadmap.
Figure 2.1: Nilsson's Visibility Graph ([25])
In 1980, Lozano-Perez gave the mathematical foundations for the configuration
space C, the set of all possible configurations, which describes the pose of
the robot [29]. In 1983, Schwartz and Sharir [30] proposed the exact cell
decomposition technique (Figure 2.2), which represents the exact free space as
a set of cells and builds a path using the connectivity of the cells. In 1987,
Canny [31] sketched the principle of a general roadmap method called the
silhouette method, which essentially constructs the silhouette of the robot's
free space as viewed from a point at infinity.
In 1986, Khatib [32] took another approach, treating the robot's
configuration as a point in a potential field that combines attraction to the
goal with repulsion from obstacles. The resulting trajectory is output as the
path, and little computation is needed. However, potential-field methods have
problems with being trapped in local minima of the field. In 1997, Latombe et
al. [33, 34] proposed the use of visibility constraints as a method for motion
planning, and this approach has received substantial interest. It will play a
major role in the problem that this report will present shortly.
Figure 2.2: Exact Cell Decomposition
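Khatib's potential-field idea can be illustrated with a minimal gradient-descent sketch. The field shapes, the gains (k_att, k_rep, d0) and the circular-obstacle representation below are illustrative assumptions, not taken from [32]:

```python
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.01):
    """One gradient-descent step in an attractive/repulsive potential field.
    pos, goal: (x, y); obstacles: list of ((cx, cy), radius)."""
    # Attractive force pulls linearly towards the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for (cx, cy), r in obstacles:
        dx, dy = pos[0] - cx, pos[1] - cy
        dist = math.hypot(dx, dy)
        d = dist - r  # clearance to the obstacle boundary
        # Repulsion acts only within influence distance d0 of an obstacle.
        if 0.0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / dist
            fy += mag * dy / dist
    return (pos[0] + step * fx, pos[1] + step * fy)

def follow_field(start, goal, obstacles, max_iters=5000, tol=0.1):
    """Descend the field until within tol of the goal; may stall in a local minimum."""
    pos = start
    for _ in range(max_iters):
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) < tol:
            return pos, True
        pos = potential_step(pos, goal, obstacles)
    return pos, False  # trapped in a local minimum or out of iterations
```

The False branch of follow_field is exactly the local-minimum failure mode noted above: with symmetric obstacles, attraction and repulsion can cancel before the goal is reached.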
Planning algorithms can be categorized into two main sets: critical-based
motion planning (grid-based algorithms) and sampling-based algorithms.
Grid-based algorithms overlay a grid on the configuration space;
low-dimensional problems can be solved using these algorithms or using
geometric algorithms that compute the shape and connectivity of the free
space. However, exact motion planning for high-dimensional systems under
complex constraints is computationally intractable.
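For low-dimensional problems, the grid-based approach reduces to a graph search over free cells. A minimal sketch, assuming a 2-D occupancy grid with 4-connected moves (the representation and function name are illustrative, not from the cited works):

```python
from collections import deque

def grid_plan(grid, start, goal):
    """Breadth-first search over a 2-D occupancy grid.
    grid: list of rows, 0 = free, 1 = occupied; start/goal: (row, col).
    Returns the shortest list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}  # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk parents back to the start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable in this grid
```

The intractability noted above shows up here directly: the number of grid cells, and hence the search effort, grows exponentially with the dimension of the configuration space.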
Sampling-based methods are based on the principle of sampling the space of
interest, connecting the sampled points by paths, and searching the resulting
graph [74]; probabilistic road maps [22] are one example. These methods are
fast and scale to many degrees of freedom and complex constraints. They are
also probabilistically complete, meaning that the probability that they
produce a solution approaches 1 as more time is spent. However, they cannot
determine that no solution exists, which may be stated as a drawback [25].
Sampling-based algorithms are currently considered state of the art for motion
planning in high-dimensional spaces, and have been applied to problems with
dozens or even hundreds of dimensions (robotic manipulators, biological
molecules, animated digital characters, and legged robots) [28, 35].
2.2 Background and Motivation
Imagine a scenario where a robot is asked to search for and locate certain
objects in a known or unknown environment. In the case of an unexplored
environment, the robot is first required to extract information about its
surroundings, and following this exploration, it should be able to search for
the objects. Depending on whether the search takes place offline or online,
different algorithms can be followed.
Obviously, the first and most important task is producing a reasonably
correct model of the environment and localizing the robot in this map. The
simultaneous localization and mapping (SLAM) problem has been a hot topic for
research [21, 23, 47, 48, 49, 50, 51, 52] and has been successfully
implemented in real-world experiments such as [7, 8, 9, 11, 45, 46]. Based on
these successful applications, it is safe to say that a robot equipped with a
range finder can start at an unknown location in an unknown environment and
incrementally build a relatively accurate map while simultaneously using this
map to compute the absolute vehicle location.
Object search combined with view planning is a comparatively new problem,
although view planning and object search are both comparatively old research
areas on their own. View planning is a thrilling research topic that is
crucial to efficient object search in known or unknown environments, as
unplanned searches are not feasible solutions. To do efficient view planning,
there must be some criteria upon which the viewing strategies are planned.
Surveys have shown that objects constitute a very critical component of
both a representation and a description of the place in which they are located
[36]. We understand places in terms of high-level features, like the objects
present in those environments; doors, boundaries and windows also act as
important components in describing places and their functions. Humans can
easily tell whether a place is a kitchen or a living room based on certain
objects. In order to bring robots closer to our complexity of understanding,
objects should be taken into consideration when planning, searching or even
localizing. In [37], Torralba presents a framework for modeling the
relationship between the context and the properties of an object, since
context can be an efficient cue for recognizing object location and scale. In
another work [38], Quattoni and Torralba combine the use of objects and global
spatial features for categorizing indoor scenes. Murphy et al. [39] explain a
method that uses local and global features of an image for object detection.
Similarly, Oliva and Torralba present a study [40] on the effect of contextual
information on successful object detection in images, and Torralba and Oliva
describe a method [41] where objects are used in an attention-control method
for retrieving contextual information. Based on the studies mentioned above,
it is reasonable to say that categorizing objects (or places) and using these
categorizations for object detection (or place detection) has been an exciting
area of research [73]. As a result, it is also possible to think of another
scenario where a robot, given knowledge of its surroundings, is required to
search for objects and categorize them depending on the type of room (kitchen,
bathroom, etc.) it is currently in.
2.2.1 Next-Best-View (NBV) Problem
This research and the examples mentioned show that there is a strong reason
for including object search in navigation: a robot can navigate in an
environment based on the objects it has seen. However, involving object
search in maps introduces an efficiency problem [20], which is one of the
central themes of this report. Vision tasks are inherently computationally
expensive [42], which is an important constraint on the efficiency of object
search. It therefore plays a vital role in object search to minimize the
number of object-recognition operations, so that most of the overall
information needed is obtained with a minimum number of algorithm runs. The
robot should position itself in the most efficient way to make the movement
cost and the vision algorithm worthwhile. This puts forth the following
question: is it possible to construct an efficient strategy for looking for
objects that maximizes the expected amount of new information available to the
sensor?
This question has found several answers in the literature [33, 34, 43]
and is known as the next-best-view (NBV) problem [53, 54, 43, 44]: where
should the next sensing operation take place to maximize the amount of total
information available, and how can this be accomplished? Determining the best
next camera position is essential not only in robotics but also in machine
navigation, automatic inspection and manipulation [53]. Pito and Bajcsy [54]
use NBV to acquire the complete description of a non-trivial object using
range cameras. Several more NBV techniques have been proposed [55, 56]. The
NBV algorithm presented in [56] constructs a 2-D layout model of an indoor
environment, but it assumes continuous sensing, which is inefficient for our
purpose. It can be concluded that there have been many next-best-view
algorithms, but almost none of them are suitable for a mobile robot searching
for objects, as the algorithms were proposed either for map building or for
object reconstruction.
Another problem is that, since a mobile robot is involved, there is also
a cost for displacing the robot from one sensing position to another. This
issue is rarely taken into account apart from [43], [44] and [57]. It is also
quite probable that uncertainties will pose a serious problem in a
next-best-view task: a small divergence in localizing at the precomputed NBV
point can lead to many unnecessary movements around a single point, leading to
high computational cost [44, 57].
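The trade-off between expected new information and movement cost can be sketched as a simple greedy selection rule. The scoring function, the gain dictionary and the lam weight below are illustrative assumptions, not a method taken from the cited works:

```python
import math

def choose_next_view(current, candidates, expected_gain, lam=0.5):
    """Greedy NBV selection: pick the candidate viewpoint that maximizes the
    expected new information minus a weighted travel cost.
    current and candidates are (x, y) points; expected_gain maps a candidate
    viewpoint to its predicted amount of new information (e.g. unseen cells)."""
    def score(v):
        travel = math.dist(current, v)  # cost of moving to the viewpoint
        return expected_gain[v] - lam * travel
    return max(candidates, key=score)
```

With lam = 0 the rule degenerates to a pure information-gain criterion; increasing lam penalizes distant viewpoints, modeling the displacement cost discussed above.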
Motion planning for object search must also take the physical limitations
of the sensor into account. Sensors used in motion planning are usually
assumed to have a full-circle field of vision [58], which is not true for a
robot with a built-in camera, and even when it is, performing object
recognition over the whole 360 degrees is computationally quite costly [20].
Another simplistic approach is to assume the classical line-of-sight model: a
point on an object is visible if the line segment connecting the sensor to
this point does not intersect any other object in the environment [44]. In
fact, all sensors have range limitations, minimum and maximum distances
within which they can give a reliable reading. It is also possible that
objects oriented at grazing angles relative to the line of sight may not be
sensed properly [44]. So a new approach that respects these visibility
constraints but also possesses an easy-to-implement algorithm, involving the
sampling-based methods mentioned above, can lead to an efficient object
search; this constitutes the main idea behind this work.
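A line-of-sight check extended with minimum and maximum range limits can be sketched as a simple geometric test. Obstacle edges are represented here as pairs of endpoints, and the helper names and default ranges are assumptions for illustration:

```python
import math

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)  # which side of p3-p4 each endpoint lies on
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def visible(sensor, point, obstacle_edges, r_min=0.2, r_max=5.0):
    """Point is visible iff it lies within the sensor's range band and the
    sight line is not blocked by any obstacle edge."""
    d = math.dist(sensor, point)
    if not (r_min <= d <= r_max):
        return False
    return not any(segments_intersect(sensor, point, a, b)
                   for a, b in obstacle_edges)
```

The classical model is recovered by setting r_min = 0 and r_max to infinity; the grazing-angle effect mentioned above would need an additional incidence-angle test, omitted here.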
2.2.2 Art Gallery Problem
Probably one of the most relevant and influential problems in sensor
placement, along with the next-best-view problem, is the art-gallery problem.
It is a famous old computational geometry problem, posed in 1973 by Victor
Klee to Chvátal in the simple form:
"What is the smallest number of guards needed to guard an art gallery?"
Since those early days, the art-gallery problem has inspired a great deal of
interesting research, and its scope has broadened to include related problems
in visibility, polygon decomposition and optimization [58]. Chvátal [59]
proved that for simple polygons ⌊n/3⌋ guards are necessary and sufficient,
where n is the number of vertices of the polygon. Later, Fisk gave another,
more popular proof of Chvátal's result, using triangulation (decomposing a
polygon into triangles, see Figure 2.3) and vertex coloring [60]. Lee and Lin
[61] proved that this problem is NP-hard, leading to a need for approximate
solutions. Much more work on variations of the art-gallery problem has been
carried out, and it accelerated rapidly after the publications of O'Rourke
[62]. Alongside the early theoretical work on view planning,
implementation-oriented results also appeared, such as [63], where 3-D object
search planning is considered. A survey on the art-gallery problem was
prepared by Shermer [64] in 1992, but since then many more improvements have
been made on art-gallery and next-best-view problems, including the method
chosen for this project. An updated survey therefore seemed necessary to
present the recent results in the field of view planning, emphasizing the
results rather than the techniques used.
Figure 2.3: Triangulation of a polygon
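Fisk's argument is constructive: triangulate the polygon, 3-color the vertices so that every triangle receives all three colors, and place guards at the least-used color class, which contains at most ⌊n/3⌋ vertices. A minimal sketch for the special case of a fan triangulation from vertex 0 (valid for convex polygons; a general simple polygon would need a proper triangulation such as ear clipping):

```python
def fisk_guards(n):
    """3-color the vertices of an n-gon fan-triangulated from vertex 0
    (triangles (0, i, i+1)) and return the smallest color class as guards."""
    if n < 3:
        raise ValueError("need at least a triangle")
    # Vertex 0 gets color 0; the chain of vertices 1..n-1 alternates colors
    # 1 and 2, so every fan triangle (0, i, i+1) sees all three colors.
    colors = [0] + [1 + (i % 2) for i in range(n - 1)]
    classes = {c: [v for v, col in enumerate(colors) if col == c]
               for c in (0, 1, 2)}
    return min(classes.values(), key=len)
```

In this convex special case the smallest class is the single vertex 0, which indeed guards the whole polygon on its own; the ⌊n/3⌋ bound only becomes tight for suitably spiky non-convex "comb" polygons.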
The art-gallery problem lies at the intersection of robotics, vision,
optimization and computational graphics, and visibility remains the common
factor in all these fields. It is more than a geometric exercise; it has
several practical applications [58], such as placing TV cameras in a showroom,
arranging the lighting sources in a room, or placing radar stations in a
mountainous area. Since it has real-life applications, the complexity of the
real world poses a problem for the solution of the art-gallery problem. There
is great variety in the shapes of polygons, and a global solution that
suffices for all polygon types has not yet been found. However, there exist
numerous solutions to the art-gallery problem for certain polygon types
[58, 65, 68, 75].
It is possible to describe an art gallery as a polygon by looking at the
gallery from above, thus translating the environment into mathematical and
geometric concepts [58], with a field of vision inside the polygon. An art
gallery is said to be guarded if the guards together see everything inside it.
The part of the gallery that a guard sees is the intersection of the guard's
visibility polygon and the interior of the gallery (Figure 2.4). There are
different types of guards, such as edge guards, vertex guards, point guards and
mobile guards [58, 62, 65].
In the case of mobile guards, the art-gallery problem transforms into a
set of new problems called “the watchman’s route problem”, where mobile guards
are called watchmen [66, 67]. It was first proposed by Toussaint and Avis [69]
in 1981 and has since, like the art-gallery problem, raised much interest among
researchers. Under certain strict assumptions, such as the gallery being a
convex polygon and the guards having omnidirectional and unlimited line of
sight, one mobile guard is clearly enough to guard the interior of a gallery.
Guarding here means being able to see every point in the gallery at some point
along the guard’s trajectory. However, an additional optimization constraint is
placed upon the problem: what is the route of minimum length a watchman can
follow such that he sees every point on the boundary at some point along his
route? This was a long-standing open problem, and the first efficient solutions
were presented in [58, 62, 70, 72].
Figure 2.4 : Four cameras guarding a gallery
The difference between the art-gallery and watchman’s route problems can
be illustrated by a simple example. In the art-gallery problem, considering a
closed curve as an electrical cable, the goal is to use the minimum number of
light bulbs on the cable such that the whole gallery is illuminated. In the
watchman’s route problem, the shortest closed cable length is the crucial
quantity rather than the minimum number of light bulbs. The watchman’s route
problem has also found applications in lawn mowing and milling [71]. Another
scenario for the watchman’s route would be a rescue robot entering a burning
building, searching for dangerous materials and removing them as soon as
possible.
As pointed out in [58], one application of the art-gallery problem comes
from robotics, where a robot moving along a route cannot engage its vision
system continuously and has to stop at points on the route to obtain visibility
information. Running a vision algorithm on a moving robot is also impractical,
as motion blur will drastically reduce the successful detection rate and lower
the efficiency of the algorithm.
2.2.3 Motivation
As mentioned above, there are two main motivations behind this work:
• Providing an updated survey on art-gallery related problems, focusing
on the results rather than proofs and techniques. The only survey was done by
Shermer [64] in 1992, and many improvements have been made since then,
including the randomized algorithm that will be used in the later part of this
thesis. With this motivation, the primary problem statement for the survey can
be put down as:
In recent years, many improvements have been achieved in art-gallery
related problems, and the latest survey of these algorithms and results is
almost two decades old. Therefore, an updated survey including the latest
results should be presented.
• Using one of the recent, easy-to-implement methods for efficient object
search, based on a sampling-oriented next-best-view algorithm proposed by
Latombe and Banos [44, 57], as well as extending this technique with the help
of statistical data. The secondary problem statement involving the algorithm
can be summarized as follows:
A robot equipped with a camera is deployed in a room and required to
locate objects that are in its knowledge base. It is assumed that a basic 2D
layout of the room is available. The robot is subject to visibility constraints
such as maximum distance and field of view. Also, the costly object recognition
algorithm renders excessive movements that yield no new information
inappropriate. Therefore, an efficient strategy for planning movements based on
these constraints should be developed in order to achieve optimal object
search.
Chapter 3
Definitions and Preliminary Results
Art-gallery problems make use of extensive computation and representation of
geometric objects. In this chapter, standard geometric definitions and
terminology such as points, paths, routes and polygons will be presented to the
reader, as well as some other related definitions that will later be used in
the extension of the methods.
3.1 Definitions and Terminology
In order to define the geometric objects that will be used throughout this
report, some basic concepts are needed. If ℝ represents the set of real
numbers, the Euclidean plane E is defined as
E = ℝ² = { p | p = (px, py), px ∈ ℝ, py ∈ ℝ }.
A point p is a pair of real numbers (px, py), called the coordinates of p. The
line segment between two points p and q is a two-dimensional interval and can
be denoted by
[p, q] = { (x, y) | x = px + t(qx − px), y = py + t(qy − py), t ∈ [0, 1] }.
The Euclidean distance between two points p = (px, py) and q = (qx, qy) is
denoted ||p, q|| and defined as
||p, q|| = √((qx − px)² + (qy − py)²).
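These basic definitions translate directly into code. The following Python sketch (illustrative only; function names are my own) computes the Euclidean distance ||p, q|| and a point on the segment [p, q] for a parameter t ∈ [0, 1]:

```python
import math

def dist(p, q):
    """Euclidean distance ||p, q|| between points p = (px, py) and q = (qx, qy)."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def segment_point(p, q, t):
    """The point on the segment [p, q] with parameter t in [0, 1]."""
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

print(dist((0, 0), (3, 4)))               # 5.0
print(segment_point((0, 0), (2, 2), 0.5)) # (1.0, 1.0)
```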
3.1.1 Curves, paths and routes
Definition 3.1: Curve
A parametrized curve is a pair of continuous functions C = (X, Y)
with X: [0, 1] → ℝ and Y: [0, 1] → ℝ.
A curve C = (X, Y) is simple if t ≠ t′ ⇒ C(t) ≠ C(t′) for all t, t′ ∈ [0, 1],
or in other words if it does not intersect itself [58]. A curve C is said to be
closed if C(0) = C(1) and open if it is not closed.
A path is a simple open curve and a route is a closed curve, as depicted
in Figure 3.1. A Jordan curve C is a curve which is both simple and closed
(Figure 3.1).
Figure 3.1 : A path, a route and a Jordan curve respectively
(Image taken from [58])
According to the Jordan Curve Theorem [76], a Jordan curve partitions the rest
of the Euclidean plane into two disjoint open connected sets, namely the
interior of C and the exterior of C. If a curve consists of the union of a
finite number of line segments such that no two consecutive segments are
collinear, then the curve is said to be polygonal [58, 70]. The line segments
are called edges and the points of intersection between consecutive segments
are called vertices.
Definition 3.2: Polygon
A polygon is generally defined as an ordered sequence of at least three
points v1, v2, …, vn in the plane, called vertices, together with the n line
segments v1v2, v2v3, …, vn−1vn, vnv1, called edges.
A simple polygon is a Jordan curve and divides the plane into three
regions: the polygon itself, the bounded interior and the unbounded exterior
[64]. However, in this work we will use a different convention, taking a
polygon to be the union of the polygonal curve itself and its bounded interior.
In a computer, a polygonal curve is usually represented by the list of
its edges as they are encountered during a scan of the curve, such as
C = [v1, v2], [v2, v3], …, [vn−1, vn]
where the vi are the vertices of the curve. There is a common convention for
closed polygonal curves: if the list of edges is followed from beginning to
end, the interior always lies to the left of the edges. This means that a
counterclockwise order is followed when listing the edges of a polygon
boundary [58].
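The counterclockwise convention can be verified with the shoelace formula: the signed area of a vertex list is positive exactly when the boundary is traversed counterclockwise. A small Python sketch (function names are my own):

```python
def signed_area(vertices):
    """Signed area via the shoelace formula; positive for a counterclockwise boundary."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the curve
        s += x1 * y2 - x2 * y1
    return s / 2.0

def is_counterclockwise(vertices):
    """True when the interior lies to the left of the listed edges."""
    return signed_area(vertices) > 0
```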
Figure 3.2 : Example of vertices.
There are two types of vertices in a polygon. A vertex is said to be reflex if
its exterior angle is smaller than 180 degrees, and convex otherwise
(Figure 3.2).
Before giving the definitions of visibility, two more concepts need to be
explained: convex and concave polygons (Figure 3.3). A polygon is convex if
every internal angle is less than 180 degrees or, equivalently, if every line
segment between two vertices remains inside or on the boundary of the polygon.
A polygon that is not convex is concave; a concave polygon always has an
interior angle greater than 180 degrees.
Figure 3.3 : Convex and concave polygons.
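Convexity can be tested by checking that consecutive edge turns all have the same orientation. A small illustrative Python sketch (not from the surveyed works):

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); its sign gives the turn direction at o."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def is_convex(vertices):
    """True if all turns along the boundary have the same sign,
    i.e. every interior angle is less than 180 degrees."""
    n = len(vertices)
    signs = set()
    for i in range(n):
        c = cross(vertices[i], vertices[(i + 1) % n], vertices[(i + 2) % n])
        if c != 0:  # ignore collinear triples
            signs.add(c > 0)
    return len(signs) <= 1
```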
3.1.2 Visibility and Visibility Polygon
Definition 3.3: Visibility
Two points q and p are visible to each other if and only if the line
segment between q and p lies completely within the polygon (Figure 3.4).
Figure 3.4 : Point c can see point b but not a .
Visible points are said to see each other. Given a point p, the set of points
visible from p in polygon P is called the visibility polygon (Figure 3.5) and
denoted VP(p) [64].
Figure 3.5 : Visibility polygon of point p (Image taken from [20])
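Definition 3.3 can be turned into a concrete test: the segment [p, q] must properly cross no polygon edge, and its midpoint must lie inside the polygon. The Python sketch below (my own, not from the surveyed works) ignores degenerate collinear cases such as a segment passing exactly through a vertex:

```python
def _cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _proper_intersect(p1, p2, p3, p4):
    """True when open segments p1p2 and p3p4 cross at a single interior point."""
    d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
    d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)
            and 0 not in (d1, d2, d3, d4))

def _inside(pt, poly):
    """Ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def visible(p, q, poly):
    """True if the segment [p, q] stays inside the simple polygon `poly`
    (vertices in CCW order); degenerate cases are not handled."""
    n = len(poly)
    if any(_proper_intersect(p, q, poly[i], poly[(i + 1) % n]) for i in range(n)):
        return False
    midpoint = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return _inside(midpoint, poly)
```

The midpoint check handles the concave case where a non-crossing segment between two boundary points runs through a pocket outside the polygon.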
Figure 3.6 : Pockets and windows (Unmodified image taken from [58])
The edges of a visibility polygon that are not edges of the polygon are called
windows, and the subpolygons that are cut off by windows and do not contain
visible points are called pockets (Figure 3.6) [58]. Windows are also called
free lines by Latombe and Banos [57], while the edges of the visibility polygon
that are also edges of the polygon are called solid lines [43, 44]. Tovar,
LaValle and Murrieta [77] also used the term “gap” for windows in their work,
where a novel approach was followed for navigation and object search by
building a visibility tree.
If the visibility polygon is bounded, then it is a star-shaped polygon (for the
definition of star-shaped polygons, see Definition 3.6). For example, this is
the case when the point p is inside a polygon and the obstacles are the edges
of the polygon (Figure 3.7). Decomposing a polygon into star-shaped components
therefore plays an important role in visibility and art-gallery algorithms; an
O(n log n) algorithm was developed by Avis and Toussaint in 1981 [69], where n
is the number of edges of the polygon.
Figure 3.7 : Visibility polygons are star-shaped polygons if bounded (Image from Wikipedia).
Definition 3.4: Guards
Let G be a set of points in polygon P. If, for every point p in P, there
is a point q in G such that p is visible from q, then the set G is a set of
guards or a guard cover.
If all the points in G are vertices of P, then G is a vertex guard set and the
guards are called vertex guards. Otherwise G is a point guard set and the
guards are point guards [64]. In different terminology, a guard set G with
elements g is said to cover a polygon P if the union of the visibility polygons
VP(g) equals P (Figure 3.8).
A subset G′ of G is called a set of guard points if G′ is also a guard
cover for polygon P. The set opt(G) is the smallest-cardinality set of guard
points for a guard set G in P [58]. It is evident from these definitions that
the art-gallery problem for a polygon P is finding the minimum-cardinality
covering guard set for P [64], as the polygon P can be envisioned as the floor
plan of the art gallery and the points of G as the guards.
Figure 3.8: A covering guard set
Figure 3.9: Guard cover and vision points
An example of a guard cover and guard points can be seen in Figure 3.9. The
line segment L contains an infinite number of points, and this line L is a
guard cover since every point in the polygon is seen by some point along L. The
set {g, g′} is an example of a set of guard points, since all points in the
polygon can be seen by these two points together; in fact, opt(L) = {g, g′},
making it a minimum-cardinality cover set for polygon P. From these
definitions, the definition of a watchman’s route follows directly [58].
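The notion of a minimum-cardinality covering guard set can be illustrated in code by brute force, once the visibility relation has been precomputed (e.g. by a visibility routine). This sketch is my own; it assumes a boolean matrix where vis[g][t] records whether candidate guard g sees sample point t:

```python
from itertools import combinations

def min_guard_set(vis):
    """Smallest subset of candidate guards covering all sample points,
    found by exhaustive search over subsets of increasing size.

    vis[g][t] is True when candidate guard g sees sample point t.
    Exponential in the worst case, which is expected: the exact
    problem is NP-hard."""
    n_guards = len(vis)
    n_targets = len(vis[0])
    for k in range(1, n_guards + 1):
        for subset in combinations(range(n_guards), k):
            if all(any(vis[g][t] for g in subset) for t in range(n_targets)):
                return list(subset)
    return None  # no cover exists among the candidates
```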
3.1.3 Watchman’s Route
Definition 3.5: Watchman’s Route
A watchman’s route W of P is a closed polygonal curve inside the
polygon P such that W is a guard cover of P.
If a point d on the boundary is specified and the watchman’s route is required
to pass through this point, then the route is called a fixed watchman’s route,
with the point d being the door of the route. When no such point is specified,
it is a floating watchman’s route problem [58, 67]. An example of a shortest
watchman’s tour is shown in Figure 3.10. Note from the figure that the route
changes direction at some of the extensions of edges that are adjacent to
reflex vertices. The reason is the need to see everything behind the polygon
edges [70]. This shows that extensions of edges play a crucial role in
watchman’s route problems.
Figure 3.10: A shortest watchman’s tour (Image taken from [70])
A cut is a directed line segment with start and end points on the
boundary of the polygon, at least one interior point of which lies within the
interior of polygon P [58]. An extension cut is a line segment drawn by
extending an edge that is adjacent to a reflex vertex. These extensions have
the same direction as the edge they originate from.
It follows directly that a guard set must contain points to the left of
each extension cut, as otherwise the edges collinear with the cuts would not be
seen by the guards [70]. However, when a point p on the watchman’s route is
known, a distinction should be made between extension cuts, because only the
extension cuts that have the point p to their right are of interest for
maintaining visibility. An extension cut c dominates another extension cut d if
all points to the left of c are also to the left of d. An extension cut is
called an essential cut if it is not dominated by any other cut (Figure 3.11).
Figure 3.11: c dominates d and is an essential cut (Image taken from [70] and modified).
With the help of these definitions, one of the most important lemmas can be
presented [58, 64, 66, 67, 70]:
A closed curve is a watchman’s route if and only if it has points lying to
the left of (or on) each essential cut of the polygon edges.
3.1.4 Polygon Classes
As mentioned in Chapter 2, one of the difficulties in constructing a valid
art-gallery theorem is the applicability of the art-gallery problem to many
everyday settings, such as illuminating art galleries, mounting security
cameras in banks or airports
and placing TVs in showrooms. This means that there are many kinds of
polygonal layouts in which the art-gallery problem finds application, and it is
not possible to give a theory that is valid for all types of polygons [58].
Many art-gallery theorems are therefore based on specific subsets of polygons,
and a short description of the different polygon classes will be given to
understand the evolution of the problem over the years.
Definition 3.6: Star-shaped Polygon
A polygon P is a star-shaped polygon if there exists a point q in P such
that for every point p in P, the line segment connecting p and q lies entirely
inside P. This point q, or rather the set of all such points from which every
point inside the polygon is visible, is called the kernel of the polygon P.
Star-shaped polygons are important in computational geometry literature
as it is possible to guard the whole polygon by a single guard located at the
kernel (Figure 3.12).
Figure 3.12: A star-shaped polygon and its kernel colored in red.
However, covering a star-shaped polygon with a single guard usually remains
theoretical. As localization and positioning are subject to inaccuracies, a
command to navigate a robot to a single kernel point may cause the robot to
make many small movements around the kernel and may prove quite inefficient
when using an NBV algorithm [20, 33, 34, 57].
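The kernel itself is the intersection of the half-planes lying to the left of each directed edge of a counterclockwise polygon, so it can be computed by successively clipping a large bounding box against each edge line. The sketch below is an illustrative O(n²) version of this idea (a linear-time algorithm exists [108]); names and the box size are my own choices:

```python
def _clip_left(region, a, b):
    """Keep the part of the convex polygon `region` on or left of line a->b
    (one step of Sutherland-Hodgman clipping against a half-plane)."""
    def side(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    out = []
    n = len(region)
    for i in range(n):
        p, q = region[i], region[(i + 1) % n]
        sp, sq = side(p), side(q)
        if sp >= 0:
            out.append(p)
        if (sp > 0 and sq < 0) or (sp < 0 and sq > 0):
            t = sp / (sp - sq)  # interpolation parameter of the crossing point
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def kernel(poly, big=1e6):
    """Kernel of a CCW simple polygon: intersection of the left half-planes
    of all edges. Returns [] when the polygon is not star-shaped."""
    region = [(-big, -big), (big, -big), (big, big), (-big, big)]
    n = len(poly)
    for i in range(n):
        region = _clip_left(region, poly[i], poly[(i + 1) % n])
        if not region:
            return []
    return region
```

For an L-shaped polygon, for instance, the kernel is the square around the reflex corner, which is why an L-shaped room can still be guarded from a single point.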
Definition 3.7: Spiral Polygon
A polygon is spiral if its boundary can be partitioned into two chains,
one containing only reflex vertices and the other only convex vertices. The
chains are named the reflex chain and the convex chain respectively
(Figure 3.13).
Figure 3.13 : Spiral Polygon
Many art galleries, shopping centers and museums have the form of spiral
polygons. Studies about spiral polygons can be found in [58, 78].
Definition 3.8: Monotone Polygon
A polygon is called monotone with respect to a line L if every line
orthogonal to L intersects the polygon boundary at most twice [79].
Assuming the line L to be collinear with the x-axis, the leftmost and rightmost
vertices of a monotone polygon partition its boundary into two polygonal chains
such that the x-coordinates of the vertices are monotonically increasing or
decreasing while traversing each chain separately (Figure 3.14). Dashed lines
in the figure indicate more than two intersections with the polygon, so the two
left-most polygons are monotone while the others are not.
Figure 3.14 : Blue lines represent two intersections, while the green and red ones represent three.
Another example of a monotone polygon, with respect to the vertical y-axis, is
shown in Figure 3.15. It should also be noted that all convex polygons are
monotone with respect to all lines.
Figure 3.15 : A monotone polygon
There are two main reasons why monotone polygons are of interest in
computational geometry [70, 80]. Firstly, many indoor settings such as rooms,
corridors and halls are monotone with respect to some axis. Secondly,
A. Fournier and D. Y. Montuno [81] have shown that a monotone polygon can be
triangulated in linear time.
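Monotonicity with respect to the x-axis can be tested by counting cyclic local minima of the x-coordinates: an x-monotone boundary has exactly one. A small sketch of my own (it assumes no two consecutive vertices share an x-coordinate):

```python
def is_x_monotone(vertices):
    """True if every vertical line meets the polygon boundary at most twice,
    i.e. the cyclic sequence of x-coordinates has exactly one local minimum.
    Assumes no two consecutive vertices share an x-coordinate."""
    xs = [v[0] for v in vertices]
    n = len(xs)
    minima = sum(
        1 for i in range(n)
        if xs[i] < xs[(i - 1) % n] and xs[i] < xs[(i + 1) % n]
    )
    return minima == 1
```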
Definition 3.9: Alp Polygon
A polygon is an alp polygon if it is monotone and one of its chains is
parallel to the x-axis (Figure 3.16).
Figure 3.16: Alp polygon
Definition 3.10: Histogram Polygon
A polygon is a histogram polygon if it is an alp polygon and all of its
edges are axis-parallel (Figure 3.17).
Figure 3.17: Histogram polygon
Histogram polygons are easily found in many real applications. Many indoor
locations, corridors and rooms resemble histogram polygons, and these polygons
are actively studied [58].
Definition 3.11: Polygon Walk
Let U and D be a partitioning of the boundary of a polygon into two
chains with endpoints s and t. A walk of the polygon is defined as a pair of
continuous functions (U, D) such that:
1. U: [0, 1] → U and D: [0, 1] → D,
2. U(0) = D(0) = s and U(1) = D(1) = t, and
3. U(x) sees D(x) for all 0 < x < 1.
As defined in [82], there is a walk in the polygon if two points starting at s
can be moved to the point t, each following one of the two boundary chains U
and D, such that they are always visible to each other (Figure 3.18).
Figure 3.18 : Walkable polygon
Definition 3.12: Polygon with holes (Doughnut Polygons)
A polygon with hole(s) contains closed polygonal curves in its interior,
leading to polygons inside polygons (Figure 3.19). Polygons with holes have
been called doughnut polygons in some related works [58].
Polygons with holes are probably among the most prominent polygon types in the
art-gallery literature, since any room with obstacles is a polygon with holes
formed by those obstacles. Many modifications of the watchman problem have been
studied under the condition that polygons have holes. The problems differ in
that the polygons created by obstacles are sometimes additional edges to be
visited and sometimes only obstacles to be avoided. The “zoo-keeper’s problem”
[83, 84] and the “aquarium-keeper’s problem” [85] are two instances of such
variations (please refer to page 45 for the definitions of the zoo-keeper’s and
aquarium-keeper’s problems). It has also been proven that solving the
art-gallery and watchman’s route problems and their modifications in polygons
with holes is much more complex and time consuming [58, 62, 64]. Additional
work on polygons with holes can be found in [62, 86, 87].
Figure 3.19 : Polygon with holes
Definition 3.13: Orthogonal Polygons
An orthogonal polygon is a polygon whose edges alternately have zero
slope (horizontal edges) or infinite slope (vertical edges). Such polygons are
also called rectilinear or isothetic.
It should be noted that histogram polygons are also orthogonal polygons.
Limiting the art-gallery problem to orthogonal polygons has created an
interesting subclass of problems. Orthogonal polygons, which arise in many
computing applications, are of interest because of the ease of manipulating and
representing them [64], as well as the advantages of the machines used, such as
image scanners and plotting devices. Orthogonal polygons were first
investigated by Kahn, Klawe and Kleitman [65] using a technique similar to
triangulation, namely convex quadrilateralization. They mainly based their work
on orthogonal comb polygons (Figure 3.20). Later these results were improved by
[88], [89], [90] and, in 1994, by [91].
Figure 3.20: Orthogonal comb polygon
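The orthogonality condition is easy to verify in code: each edge must be horizontal or vertical, and consecutive edges must alternate between the two. A small sketch of my own:

```python
def is_orthogonal(vertices):
    """True if every edge is axis-parallel and consecutive edges alternate
    between horizontal ('h') and vertical ('v')."""
    n = len(vertices)
    dirs = []
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if x1 == x2 and y1 != y2:
            dirs.append('v')
        elif y1 == y2 and x1 != x2:
            dirs.append('h')
        else:
            return False  # slanted or zero-length edge
    return all(dirs[i] != dirs[(i + 1) % n] for i in range(n))
```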
3.1.5 Polygon Decomposition
Considering a point guard in an art gallery, the visibility polygon of a
single point is star-shaped, as mentioned in Section 3.1.2. For this reason,
the art-gallery problem can be posed as finding the minimum number of
star-shaped polygons whose union equals the polygon P. This general problem is
called polygon decomposition and can be stated as finding the minimum number of
polygons of a specific class such that their union equals a given polygon [58].
There are two types of polygon decomposition of interest in art-gallery
related research. When the decomposing polygons do not intersect, it is a
polygon partitioning problem (Figure 3.21), while a polygon covering problem
arises when the decomposing polygons may intersect.
Figure 3.21: Polygon partitioning
Polygon Partitioning
The polygon partitioning problem is less complex than the polygon covering
problem. For polygons with n edges, the smallest number of convex partitions is
at most n − 2 [92].
The main difficulty with polygon partitioning is not finding the minimum
number of partitions but actually obtaining those partitions [58]. It is
therefore important to find an algorithm which both computes the smallest
number of pieces for a given polygon and actually outputs these pieces.
Chazelle and Dobkin [93] showed that the minimum number of convex
partitions can be found in polynomial time and presented an O(n³) time
algorithm for this problem. It has also been shown that this problem is NP-hard
for polygons containing holes [94]. Later, Avis and Toussaint [69] presented an
efficient O(n log n) algorithm for decomposing a polygon with n edges into at
most ⌊n/3⌋ star-shaped components, although it does not give the minimum number
of components. In 1985, Keil [95] presented polynomial-time, O(n⁷ log n),
algorithms for decomposition into the minimum number of spiral, monotone and
star-shaped components.
In cases where the polygons to be decomposed belong to a specific class,
faster partitioning is possible. A linear-time algorithm for partitioning a
monotone rectilinear polygon into star-shaped components was presented by Liu
and Ntafos [96].
There are also cases where the polygons to be decomposed are so
restricted that the size of the partitions is no longer part of the
optimization criterion. An example is partitioning into triangles, or
triangulation. Similar to the cases above, fast algorithms are desired. In
1978, Garey, Johnson, Preparata and Tarjan [97] gave the first triangulation
algorithm that runs in O(n log n) time; however, no non-trivial lower bound
existed for the problem. Tarjan and Van Wyk gave an O(n log log n) time
algorithm for triangulation in 1988 [98]. Finally, in 1990, Chazelle [99] was
able to give a linear-time algorithm for triangulation. However, it has been
mentioned in many works, including Chazelle’s own, that the linear-time
algorithm is too complex to implement practically; Skiena [100] states that the
algorithm is sufficiently hopeless to implement. Practical algorithms were
later published by Alexey V. Skvortsov and Yuri L. Kostyuk [101].
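While the fast algorithms above are intricate, the classical O(n²) ear-clipping method suffices for illustration: repeatedly cut off a convex vertex whose triangle contains no other remaining vertex (an “ear”; the two-ears theorem guarantees one always exists in a simple polygon). A hedged sketch of my own for counterclockwise simple polygons, ignoring degenerate collinear cases:

```python
def _cross(o, a, b):
    """z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _in_triangle(p, a, b, c):
    """True if p lies inside or on the CCW triangle (a, b, c)."""
    return (_cross(a, b, p) >= 0 and _cross(b, c, p) >= 0
            and _cross(c, a, p) >= 0)

def triangulate(poly):
    """O(n^2) ear-clipping triangulation of a CCW simple polygon.
    Returns n - 2 triangles, each a tuple of three vertex indices."""
    idx = list(range(len(poly)))
    triangles = []
    while len(idx) > 3:
        for k in range(len(idx)):
            i, j, l = idx[k - 1], idx[k], idx[(k + 1) % len(idx)]
            a, b, c = poly[i], poly[j], poly[l]
            if _cross(a, b, c) <= 0:
                continue  # reflex vertex: cannot be an ear
            # an ear must contain no other remaining vertex
            if any(_in_triangle(poly[m], a, b, c)
                   for m in idx if m not in (i, j, l)):
                continue
            triangles.append((i, j, l))
            idx.pop(k)  # clip the ear and continue with the smaller polygon
            break
        else:
            raise ValueError("no ear found; polygon may not be simple and CCW")
    triangles.append(tuple(idx))
    return triangles
```

A polygon with n vertices always yields n − 2 triangles, which is the fact Fisk's coloring proof in Chapter 4 builds on.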
Polygon Covering
Finding the smallest number of covers poses a harder problem than finding the
smallest number of partitions of a polygon. A Q-cover decision problem can be
defined as follows:
“Can a polygon P be covered with smaller polygons having property Q?”
If Q is convexity, then the problem is a convex cover problem, and the decision
version can be solved in polynomial time if and only if the minimum cover
problem can be solved in polynomial time [64]. Similar to the Q-cover problem,
the Q-guarding problem can be defined as:
“Is it possible to cover a polygon P by a number of polygons, each having
property Q and being the visibility polygon of some subset of P?”
Before investigating the complexity of these problems, a few bounds for
polygon covering should be presented. Chvátal [59] and Fisk [60] proved that an
n-edged polygon can be covered by ⌊n/3⌋ guards using triangulation. For the
restricted rectilinear case, it has been shown that ⌊n/4⌋ guards are always
sufficient [62, 65, 88] and sometimes necessary for covering these rectilinear
polygons [58]. The proofs use the idea of convex quadrilateralization, which is
similar to the triangulation process.
Computational Complexity
For most of the Q-covering and Q-guarding problems, many complexity analyses
have been made, starting with the early work of O’Rourke [62] on the convex
cover problem. The complexity of these problems has led many researchers to
consider much more restricted versions so that efficient algorithms can be
developed.
Culberson and Reckhow established that the smallest convex cover
problem, as well as similar cases where only the boundary is to be covered, is
NP-hard [102]. They also showed that the decision version of rectangular
covering is NP-complete, and it follows that the optimization version of this
NP-complete problem is NP-hard. Later, Lee and Lin [61] showed that Q-guarding
and Q-covering problems for polygons without holes are also NP-hard. They also
proved that computing the minimum number of guards located on the vertices of a
polygon is NP-complete, which was later extended by Aggarwal [103] to point
guards, i.e., guards that are allowed anywhere inside the polygon. Aggarwal
also established that computing the smallest star-shaped cover of a simple
polygon is NP-hard.
As mentioned above, these complexity results led researchers to look for
restricted instances of the classical problem that admit polynomial-time
solutions. Keil [104] presented an O(n²) algorithm for covering rectilinear
polygons with smallest covers of rectilinearly convex and star-shaped
components. Kleitman and Franzblau [105] also established that a monotone
rectilinear polygon can be covered in O(n²) time with a minimum cardinality of
rectangles.
Motwani, Raghunathan and Saran [106] presented an O(n¹⁰) algorithm for
minimal covering of a simple rectilinear polygon with orthogonally convex
stars. A polygon is an orthogonally convex star if it is a union of
orthogonally convex polygons that have a common intersection [58, 64]. This
result was further improved by Rawlins [107], who showed that (n + 4)/8
orthogonally convex stars are always sufficient for rectilinear coverage.
Among algorithms dealing with covers of constant size, Lee and
Preparata [108] established a linear-time algorithm for finding the kernel of a
star-shaped polygon, which can also be used to detect whether a polygon is
star-shaped. There is a linear-time algorithm by Shermer [109] for determining
whether a polygon is the union of two convex polygons, and Belleville [110]
gave an O(n⁴) time algorithm for deciding whether a polygon is the union of two
star-shaped polygons and computing the cover.
So far, only simple polygons (polygons without holes) have been
considered; however, significant complexity analysis also exists for polygons
with holes. Lipski et al. [111] established an O(n³) time algorithm for
partitioning a rectilinear polygon with holes into the smallest number of
rectangles. This work was later improved by Ohtsuki et al. [112] to a time
bound of O(n^2.5). Lingas was the first to prove that computing a minimum
partition of a polygon with holes is NP-hard [94]. The covering version of this
problem was proven NP-hard by O’Rourke and Supowit [113], who also showed that
covering polygons with holes with a minimum number of star-shaped pieces is
NP-hard.
Chapter 4
A Survey on Guarding Art Galleries
In 1973, Vasek Chvátal asked Victor Klee for an interesting geometric problem,
and Klee posed the problem of determining the number of guards needed to cover
an n-walled art-gallery room. Since then, much work and research has been done
on what is known today as Chvátal’s Art Gallery problem. This chapter focuses
on the studies that have been conducted and presents the results obtained in
the last 30 years. The aim is to focus on the results rather than proofs and
techniques.
4.1 Art Gallery History
The original art gallery problem, presented to Chvátal by Victor Klee in 1973,
is to find the smallest number of guards g(n) required to cover a polygon of n
edges. In 1975, Chvátal [59] proved that ⌊n/3⌋ guards are occasionally
necessary and always sufficient for covering an n-edged polygon. Chvátal
started by establishing the lower bound on the number of guards. Clearly a
triangle can be guarded by one guard, under the assumption of omnidirectional
unlimited visibility of each guard, so g(3) = 1. A non-convex quadrilateral can
also be covered by a single guard, giving g(4) = 1. For n = 5, there are three
different types of pentagons, having 0, 1 or 2 reflex vertices, and all of them
can also be covered by a single guard, so g(5) = 1. In the case of six edges
(n = 6), there are two types of polygons that require two guards, setting
g(6) = 2 (Figure 4.1).
Figure 4.1 : Polygons with 6 vertices can need 2 guards, whereas polygons with 5 or fewer vertices can be guarded by a single guard (Image from [62]).
Chvátal established g(n) ≥ ⌊n/3⌋ by making use of comb polygons (Figure 4.2).
For any n that is a multiple of 3, a comb polygon exists [64], and ⌊n/3⌋ guards
are required for it because no two “prongs”, or upward triangular regions, can
be seen by a single guard, and every n-edged comb polygon has ⌊n/3⌋ prongs.
Figure 4.2: A comb polygon with 15 edges and 5 teeth, requiring 5 guards.
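Chvátal's lower-bound construction can be sketched programmatically. The generator below (my own construction, slightly different from the figure's 3k-edge version) builds a comb variant with k prongs and n = 3k + 2 edges, which still needs ⌊n/3⌋ = k guards since no single guard can see two prong tips:

```python
def comb_polygon(k):
    """CCW vertices of a comb polygon with k upward prongs.

    A hedged variant of Chvátal's construction with n = 3k + 2 edges;
    each prong tip requires its own guard, so floor(n/3) = k guards
    are necessary."""
    verts = [(0.0, 0.0), (2.0 * k - 1.0, 0.0)]   # bottom edge
    for i in range(k - 1, -1, -1):               # prongs, right to left
        x = 2.0 * i
        verts.append((x + 1.0, 1.0))             # right base of prong i
        verts.append((x + 0.5, 2.0))             # prong tip
        verts.append((x, 1.0))                   # left base of prong i
    return verts                                 # closes back to (0, 0)
```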
Similar to the case mentioned above, the bounds for art-gallery problems
usually followed the same pattern: it was relatively easy to find a lower bound
“necessary” to cover the polygon, but proving “sufficiency” and setting the
upper bound was a difficult problem [62]. Chvátal [59] also established
g(n) ≤ ⌊n/3⌋. Like Fisk, who gave a simple proof [60] for guarding simple
polygons in 1978, Chvátal used the idea of triangulation to set the upper
bound, but via a relatively complex inductive method.
In his monograph on art galleries [62], O’Rourke mentioned some false
approaches that were tried for establishing sufficiency, in the search for a
simpler argument than Chvátal’s complex inductive method; such a simplification
was finally accomplished by Fisk in 1978.
Chvátal’s guard formula g(n) ≤ ⌊n/3⌋ could be interpreted as one guard
being needed for every three vertices, so that it would be enough to place a
guard at every third vertex. Figure 4.3 shows that this strategy does not
suffice for placing ⌊n/3⌋ guards.
Figure 4.3 : Placing guards on every third vertex will not cover either x0, x1 or x2.
Another false approach was reducing visibility of the interior to visibility of
the boundary. The assumption was that if every point on the boundary is seen,
then every point in the interior of the polygon is covered as a direct
consequence. However, Figure 4.4 shows that this is not true: the guards
located at points A, B and C cover the boundary but miss the interior
triangle P.
Figure 4.4 : The boundary is covered but not the interior (Image from [62]).
The last approach was the restriction to vertex guards instead of point guards,
which have no restriction on their location. The question raised was whether
the number of vertex guards gv(n) equals the number of guards g(n) required to
cover the polygon. There are situations where this restriction weakens the
guards’ power, such as in Figure 4.5, where a single point guard can guard the
polygon while two vertex guards are needed. However, it turns out that the
reduction to vertex guards is an appropriate approach and gv(n) = g(n) [62].
Figure 4.5 : A single point guard more powerful than two vertex guards
( Image from [62] ).
Fisk's proof of Chvátal's bound was essentially a triangulation argument
combined with vertex coloring [60]. First, the polygon is triangulated by
adding internal diagonals between vertices until no more can be added.
Second, each vertex is colored with one of three colors in such a way that no
two adjacent vertices share a color; consequently, every triangle in the polygon
has three differently colored vertices. Since every triangle can be guarded by a
single guard placed on any of its vertices, choosing all vertices of any one of
the three colors yields a guard set from which every point of every triangle,
and hence the whole polygon, is covered (Figure 4.6).
It should be noted that at least one of the colors is used no more than 1/3
of the time [62]. This can be shown by a simple counting argument. Let a, b
and c be the numbers of occurrences of the three colors, ordered so that
a ≤ b ≤ c, and let n be the number of vertices; then a + b + c = n. If a were
greater than [n/3], the sum a + b + c would exceed n, so a ≤ [n/3]. As a result,
choosing the least-used color and placing guards at those vertices gives at most
[n/3] guards.
Figure 4.6 : Placing guards at red or blue vertices guarantees full
coverage.
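The coloring step of Fisk's argument can be sketched in code. The fragment below is an illustration that assumes a triangulation is already given as a list of vertex-index triples (computing the triangulation itself is a separate step, omitted here); it three-colors the vertices by walking from triangle to adjacent triangle, then returns the vertices of the least-used color as the guard set.

```python
from collections import Counter, deque

def fisk_guards(n, triangles):
    """Sketch of Fisk's proof: 3-color a triangulation of a simple polygon
    with vertices 0..n-1, then return the vertices of the rarest color.
    `triangles` is any triangulation, given as vertex-index triples."""
    color = {v: c for c, v in enumerate(triangles[0])}  # color the first triangle 0, 1, 2
    todo = deque([frozenset(triangles[0])])
    remaining = set(map(frozenset, triangles[1:]))
    while todo:
        tri = todo.popleft()
        for other in list(remaining):
            shared = other & tri
            if len(shared) == 2:               # `other` shares a diagonal with `tri`
                (w,) = other - tri
                # the third vertex gets the one color not used on the shared edge
                color[w] = ({0, 1, 2} - {color[v] for v in shared}).pop()
                remaining.discard(other)
                todo.append(other)
    counts = Counter(color.values())
    rarest = min(counts, key=counts.get)
    guards = sorted(v for v, c in color.items() if c == rarest)
    assert len(guards) <= n // 3               # Chvátal's bound
    return guards
```

Because the dual graph of a polygon triangulation is a tree, each triangle is reached exactly once, so the coloring is consistent. For a fan triangulation of a hexagon, `fisk_guards(6, [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)])` places a single guard, well within the [6/3] = 2 bound.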
As mentioned in the previous chapter, Lee and Lin proved by reduction from
3SAT that finding the minimum number of guards needed to cover a given
polygon P is NP-hard [61]. However, Avis and Toussaint [69], using the basic
idea of Fisk's proof, developed an O(n log n) algorithm that locates [n/3]
stationary guards and, at the same time, partitions the polygon into [n/3]
star-shaped pieces.
4.1.1 Orthogonal Galleries
Kahn, Klawe and Kleitman [65] were the first to investigate the case of
orthogonal, or rectilinear, art galleries in 1983. They established the lower
bound g(n) ≥ [n/4] using rectilinear comb polygons (Figure 4.7).
Figure 4.7: Orthogonal comb polygon
For the upper bound, they used an idea similar to Fisk's. Instead of
triangulation, they partitioned the polygon into convex quadrilaterals and used
a four-color scheme on the vertices such that every quadrilateral has four
differently colored vertices (Figure 4.8).
Figure 4.8: Convex quadrilaterization (Image from [65]).
In 1981, Sack and Toussaint had proposed a linear-time algorithm [90] for partitioning rectilinear
star-shaped polygons into convex quadrilaterals. Building on that, in 1984
Edelsbrunner, O'Rourke and Welzl [88] established an O(n log n) algorithm
that partitions the polygon into L-shaped pieces, a subclass of star-shaped
polygons, and locates one guard in each kernel. O'Rourke [62] also gave an
alternative proof of the [n/4] bound stated by Kahn, Klawe and Kleitman.
Sack [126] presented another O(n log n) algorithm that decomposes simple
rectilinear polygons into convex quadrilaterals and locates [n/4] guards. In the
same report, Sack also gave an O(n) algorithm for checking whether a polygon
can be covered by a single guard located on a vertex. Franzblau and Kleitman
showed that a rectilinear monotone polygon can be covered by a minimum set
of guards in O(n²) time [105]. In 1988, Sack and Toussaint presented an
extensive report on guard placement in rectilinear galleries [89]. In 2007,
Couto, Souza and de Rezende [75] proposed an important algorithm for
orthogonal art galleries where guards are restricted to vertices. They showed
that, for instances with up to 200 vertices, the optimal number of guards stays
well below the upper bound [n/4] of Kahn, Klawe and Kleitman.
4.1.2 Other Variations
There are also several variations of the classic art-gallery problem. One group
of variations takes into account visibility outside the polygon as well as inside.
The variation where only the exterior of a single polygon is considered is
called the "fortress" problem: the polygon represents a fortress, and it is
desired to see any enemy approaching it. The variation where both interior and
exterior visibility matter is the "prison yard" problem: the interior is guarded
because prisoners may try to break out, while the exterior is guarded against
people trying to break in. Both problems were posed by D. Wood and
J. Malkevitch [62] and attracted significant attention due to possible practical
applications, such as guarding military bases (the fortress problem) or placing
security cameras for banks.
The fortress problem is closely related to the standard art-gallery problem.
Aggarwal [103] and O'Rourke [62] showed that [n/3] guards are necessary for
the fortress problem, and Shermer [64] showed that all but two of these guards
can be placed on vertices. These results were obtained by simply "turning the
polygon inside out": when the polygon is turned inside out, the problem of
covering the outside becomes one of covering an interior region, i.e. the
original art-gallery problem (Figure 4.9).
Figure 4.9: Turning a polygon inside out (Image from [64] ).
For the prison yard problem, O'Rourke conjectured that [n/2] guards are
necessary and gave an upper bound of [2n/3]. In 1994, Füredi and Kleitman
proved that [n/2] vertex guards suffice to cover both the interior and the
exterior of a simple polygon with n vertices [127]. An example of the prison
yard problem is given in Figure 4.10.
Figure 4.10: Guards covering exterior and interior of the prison yard (Image
from [127]).
4.1.3 Generalized Guards and Holes
In this section, generalized guards and the related art-gallery problems will be
discussed. Guards will be elements of specific subsets of the polygon rather
than single points.
The idea of generalizing guards was first proposed by Toussaint in 1981
[64]. He posed the problem of finding the minimum number of guards when
the guards are allowed to patrol edges rather than stay fixed at one location,
and he made several conjectures concerning the number of edge guards,
gE(n). Exhibiting a methodology similar to Chvátal's, he conjectured that
gE(n) = [n/4], using the polygon class shown in
Figure 4.11.
Figure 4.11: Polygons requiring n/4 guards (Images
from [62]).
In 1983, O'Rourke extended the idea of edge guards to more general mobile
guards that are allowed to patrol any interior line segment (an edge or a
diagonal) [128]. Since mobile guards generalize edge guards, gM(n) ≤ gE(n).
For the upper bound on mobile guards, he used an inductive technique similar
to Chvátal's and established gM(n) = [n/4]. Another bound for mobile guards,
specifically in rectilinear (orthogonal) art galleries, was given by Aggarwal in
1981 [103]: [(3n + 4) / 16] rectilinear line guards are sometimes necessary and
always sufficient to guard a simple rectilinear polygon.
Diagonal guards, which can patrol diagonals (line segments connecting
non-adjacent vertices) but not edges, were investigated by Shermer [129], who
established an upper bound of [(n - 1) / 3] and a lower bound of
[(2n + 2) / 7] for gD(n).
For edge guards, O'Rourke later noted that reducing the minimum-guard
problem to a triangulation argument may not suffice to prove the [n/4] bound.
In 1992, Bjorling-Sachs and Souvaine [87] established a bound of [(n - 2) / 5]
edge guards for monotone polygons and [(n - 2) / 6] edge guards for
rectilinear monotone polygons.
Czyzowicz et al. [91] considered the case of guarding rectangular art
galleries. A rectangular art gallery consists of rectangular rooms, with doors
between rooms that share an edge. Guard locations are restricted: a guard can
stand in a room or in a doorway between two rooms. They established that a
rectangular gallery with n rooms needs [n/2] guards (Figure 4.12).
Figure 4.12 : A rectangular art gallery with 8 rooms and 4 guards
(Image from [91]).
Czyzowicz et al. [91] also considered other arrangements of rectangular
galleries, such as galleries whose outer shape is orthogonal and which contain
holes. They showed that for a rectilinear gallery with h holes and v vertices,
decomposed into r rectangular rooms, [(2r + v - 2h - 4) / 4] guards are
required (Figure 4.13).
Figure 4.13: A gallery with 12 vertices and 4 rectangular rooms.
In the case of polygons with n vertices and h holes, O'Rourke [62] proved the
upper bound g(n, h) ≤ [(n + 2h) / 3], and Shermer [129] established the lower
bound g(n, h) ≥ [(n + h) / 3].
4.1.4 Watchman Routes
The "watchman's route" is a different category of mobile-guard problem,
first proposed by Toussaint in 1981. Shermer [64] calls such problems hybrid
visibility problems because they involve not only visibility but also other
geometric concepts or optimization. The watchman's route is hybrid in the
sense that it involves metric information: optimizing a route, i.e. finding the
shortest route from which the polygon is covered. It is defined [66] as a
shortest closed path from which every point on the boundary of a polygon P is
seen at some point along the path (Figure 4.14).
Chin and Ntafos [67] proved that the watchman's route problem is NP-hard
if the polygon contains holes, and that it remains NP-hard even when the
polygon is restricted to be orthogonal. In 1991, Chin and Ntafos were also the
first to give a polynomial-time algorithm, running in O(n⁴) time, for computing
the shortest watchman's route in a simple polygon [66], making use of
essential cuts and the fact that a watchman's route must contain a point on or
to the left of each essential cut. However, their algorithm finds a solution only
if a starting point s is given beforehand. This is not a critical restriction, since
the starting point s can be taken as the door of a room or the entrance of a
building. Tan and Hirata [130] improved Chin and Ntafos' result with a
divide-and-conquer method in 1993.
Figure 4.14: An example of a shortest watchman tour
(Image from [62]).
They gave an O(n²) algorithm for finding a route through a fixed point and
noted that an interesting open problem would be a polynomial-time algorithm
without any fixed-point restriction. Carlsson, Jonsson and Nilsson [70]
presented an algorithm with O(n⁶) worst-case running time and O(n³) storage
for computing a watchman's route without any specified starting point.
Another variation of the watchman's route, called the zoo-keeper's route,
has also been investigated by many researchers [83, 84, 85]. In the zoo-keeper
problem, the aim is to find the shortest closed path that starts at a given point
and visits each cage without entering it. For the cases where the shortest path
must be traveled counter-clockwise or clockwise, several O(n²) algorithms
have been given by Ntafos and Chin [84]. Carlsson and Jonsson [72] presented
polynomial-time algorithms for the watchman's route, the zoo-keeper's route,
the aquarium-keeper's route that visits all edges, and the postman's route that
visits all vertices. For the watchman's route, they presented an O(n⁶) algorithm
without any specified start and end point and without the restriction that the
path be closed.
All of the works presented above are offline algorithms, where a map of
the polygon to be explored is known beforehand. For the following discussion
of online exploration, the term competitive factor should be explained: to
compare the performance of online strategies, the competitive factor is defined
as the ratio of the cost of the computed solution to the cost of an optimal
solution [131]. Deng, Kameda and Papadimitriou were the first to present a
competitive strategy for exploring a polygonal room with a bounded number
of obstacles [132]. They assume that the entry and exit points of the room are
known, and the robot is required to start from the given point, explore, and
leave from the specified point. They gave a competitive factor of 2016,
meaning that in the worst case their solution is 2016 times longer than the
shortest watchman's route. In the same work, they also established lower
bounds on the competitive factor for rectilinear polygons:
• (1 + √2)/2 for exploring the interior of an unknown rectilinear polygon,
• (1 + √2)/2 for exploring the exterior of an unknown rectilinear polygon,
• √2 for exploring an unknown polygon with rectilinear obstacles.
In 1997, Hoffmann, Icking, Klein and Kriegel [131] improved the result of
Deng et al. and presented a complete strategy with a competitive factor of 133.
They later improved this result using a novel structure called the "angle hull",
showing that an unknown polygon can be explored by a tour at most 26.5
times as long as the shortest watchman's tour computed offline [133].
4.1.5 Recent Studies
Currently, one of the most important classes of problems based on the
watchman's route is pursuit and evasion. Pursuit problems are sometimes
traced back to Leonardo da Vinci; today they are researched for military
purposes such as infiltrating an area or clearing it of threats, and their
formulation in art-gallery settings is more recent. Typically, the goal is for a
number of robots to capture an intruder or to maintain surveillance of a target;
the problem can also be pictured as an automatic medical robot that tries to
keep an area of interest in continuous sight despite unplanned obstacles such
as people. In [134], LaValle, Banos, Becker and Latombe address this
pursuit-and-evasion problem under certain assumptions: the observer must
always keep the target within its sight, the workspace consists of static
obstacles limiting the motions of both target and observer, and a partial model
of the environment is known. They present two methods. The first uses a
probabilistic model to maximize the time the target stays within the visibility
polygon of the observer. The second maximizes the minimum time to escape:
using the robot's known maximum speed, they compute the time the target
needs to reach the nearest edge of the observer's visibility region.
The work presented above, together with [34, 43, 44], introduced a notion
crucial to mobile-robotics applications: limited visibility. Earlier research used
an idealized visibility model in which a sensor can see 360 degrees around
itself without any range limitation; work based on such idealized models could
not go beyond theoretical geometric and computational results. As the need to
implement art-gallery algorithms in real-life situations grew and theory
evolved into practice, it became obvious that an idealized model was not
sufficient for mobile robotics. [135] was the first work to consider limited
visibility, followed by [33, 34, 43, 44, 57, 71].
In the following chapter, the randomized art-gallery algorithm with
visibility constraints will be presented, together with the extension of this
method to efficient object search, which is the main subject of this thesis. The
randomized method for determining next-best-view points was first presented
by Banos and Latombe [57] and is now used in many art-gallery-related
problems. Offline and online watchman's route algorithms are not suitable
here because they require continuous, synchronized runs of computationally
expensive vision algorithms during exploration, whereas the NBV algorithms
proposed in [43, 44, 57] are much more efficient. The realistic treatment of
limited visibility suits mobile-robot applications, and the ease of computation
and the ratio of implementation complexity to solution quality are further
reasons for choosing the randomized art-gallery method among the many
art-gallery algorithms for our work.
Chapter 5
Randomized Art Gallery Problem and
Extensions
This chapter presents the randomized art-gallery algorithm in detail, together
with some extensions and their implementation for efficient object search.
Building a map of the environment and navigating according to this
representation is one of the basic tasks of a mobile robot. In some situations,
the aim is to use the representation to accomplish specific tasks such as
reconnaissance and exploration. Depending on the objective, a 2D
environmental map may be sufficient, or a 3D representation can be acquired
with the help of a previously known 2D map. There is a common denominator
in all of these exploration and map-building processes: sensing operations
must be performed throughout the map to build an accurate model. In other
words, sensors have to be placed at different positions to acquire the
representation. The next-best-view (NBV) problem, which is closely related to
the art-gallery problem, asks: where should the sensors be placed to gather the
information required for efficient navigation and exploration? If a mobile
robot is to accomplish this task, the problem becomes one of finding the
locations where the robot should acquire images to run its vision algorithms
on.
A robot equipped with a laser range-sensor can acquire information about
its surroundings by performing a rotational sweep and combining a number of
scans made during this sweep (Figure 5.1), allowing it to build a 3D model of
its environment.
Figure 5.1: A robot can capture its surroundings by rotational sweeps (Image
from [57]).
However, the resolution of the acquired model depends on the rotational speed
of the sweep, so acquiring a high-quality 3D model of the environment is a
computationally expensive operation [57]. It is therefore of interest to find a
minimum number of locations at which to run rotational sweeps; the mobile
robot is then sent to those locations to build a 3D model of a place whose 2D
layout is known beforehand. The NBV algorithm presented here aims to find
this minimum set of locations under certain visibility assumptions.
As mentioned in previous chapters, classical art-gallery algorithms
approach the problem of finding a minimum number of guards under a
classical visibility model. This model assumes that guards/sensors have
unlimited range and an omnidirectional line of sight, and that two points are
visible to each other if the line segment connecting them does not intersect
any object. In practical sensing on mobile robots, these assumptions do not
hold. Two basic complications are that real sensors have lower and upper
range bounds, and that surfaces oriented at grazing angles with respect to the
sensor's line of sight cannot be detected reliably [43, 44, 57].
Another difficulty with art-gallery algorithms comes from the complexity
of these problems. In 1986, Lee and Lin [61] proved that the art-gallery
problem, with or without holes, is NP-hard. Most attempts at finding the
optimal number of guards have strived for exact solutions [58, 65, 66, 70, 72,
78]. One of the most detailed works on art-gallery solutions for different types
of polygons was given by Bengt J. Nilsson in 1995, and he concluded it with
an open question for future work: is it possible to find approximate solutions
with good worst-case bounds, instead of searching for exact solutions?
In the light of this proposition, a new series of studies has been conducted
on approximate solutions. Latombe et al. [33, 34, 43, 44, 57] studied different
aspects of the art-gallery problem and presented a novel approach. The work
presented here follows the algorithm established by Banos and Latombe [57].
This algorithm, which will be called the "randomized art-gallery algorithm",
gives a near-optimal approximation of the guard set that complies with range
and incidence constraints, and tries to circumvent the NP-hardness of the
problem by using a random-sampling approach. The algorithm does not
guarantee a minimum number of guards, but it gives an upper bound on the
size of the guard set: for n edges, h holes (obstacles) and an optimal guard set
of size c, the computed guard set is at most a factor
O( log(n + h) ⋅ log(c log(n + h)) ) larger than optimal.
5.1 Extended Visibility Constraints and Effects
As the classical line-of-sight visibility model is not adequate for real-world
applications, the visibility criterion is modified to accommodate range
limitations and grazing-angle restrictions.
Definition 5.1: Visibility
Let W describe the workspace layout and let p and q be two points with
p, q ∈ W. p and q are said to be visible to each other if:
• the line segment connecting p and q lies entirely within the workspace
polygon W;
• the Euclidean distance between p and q lies between a given minimum
and maximum: dmin < ||p, q|| < dmax, where dmin and dmax are input
constants to the algorithm;
• if one of the points, p, is on the boundary of W, then the angle between
the line segment connecting p and q and the normal vector n to the
boundary at p is at most a certain angle τ: ∠(n, p, q) ≤ τ, where τ is also
an input constant (Figure 5.2).
Figure 5.2 : The incidence angle (Image taken
from [43])
These three constraints on limited visibility can be applied to different sensor
models. In our case, where a camera is used for image recognition, an
additional criterion can be added:
• the line segment connecting p and q lies within the field of view of the
camera.
This visibility model reflects the basic restrictions: a point cannot be seen at
grazing angles, and it is not visible if it lies too far from or too close to the
sensor, or outside the camera cone. The resulting sensor model is shown in
Figure 5.3.
Figure 5.3 : Sensor model
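As a sketch, the constraints of Definition 5.1 can be combined into a single predicate. The function below is only an illustration under simplifying assumptions: the workspace-containment test is delegated to a caller-supplied `segment_clear` check (a hypothetical helper, not part of [57]), and angles are handled in degrees.

```python
import math

def visible(p, q, d_min, d_max, tau_deg=None, boundary_normal=None,
            segment_clear=lambda p, q: True):
    """Extended visibility test (sketch of Definition 5.1).
    p, q: 2D points; d_min/d_max: range limits; tau_deg: incidence limit;
    boundary_normal: normal of the wall at p, if p lies on the boundary."""
    # 1. The segment p-q must lie entirely inside the workspace polygon W.
    if not segment_clear(p, q):
        return False
    # 2. Range constraint: d_min < ||p, q|| < d_max.
    d = math.dist(p, q)
    if not (d_min < d < d_max):
        return False
    # 3. Incidence constraint: angle(n, p, q) <= tau.
    if boundary_normal is not None:
        nx, ny = boundary_normal
        vx, vy = q[0] - p[0], q[1] - p[1]
        cos_a = (nx * vx + ny * vy) / (math.hypot(nx, ny) * d)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle > tau_deg:
            return False
    return True
```

For example, with dmin = 0.1, dmax = 5 and τ = 30°, a boundary point with normal (0, 1) sees a point straight ahead at distance 1, but fails for the same distance at a 90° grazing angle or for a point beyond the maximum range.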
Both the classical art-gallery problem and the randomized algorithm first
presented by Banos and Latombe [57] aim only to cover the boundary of the
workspace layout. In our case, objects may be scattered throughout the
environment, on the boundary as well as inside the polygon. The initial
problem can therefore be stated as:
"Given a polygonal layout of a workspace W, find a minimum set of
guards G such that both the interior and the boundary of W are visible from
some guard in G."
5.1.1 Effect of Constraints
Before moving on to the basic algorithm, the effects of the visibility constraints
presented above should be studied as these effects are among the motivating
factors of the randomized art gallery algorithm.
Under the visibility definition given in 5.1, it is not always possible to
fully cover a layout. For example, in Figure 5.4 the corridor cannot be covered
completely if, for a given incidence constraint τ, the minimum camera range
rmin is too large.
Figure 5.4 : A large minimum range choice can lead to
incomplete coverage (Image taken from [57]).
Similarly, certain layouts may require an infinite number of guards for full
coverage: if the layout contains walls meeting at acute angles, some portion of
the boundary always remains uncovered (Figure 5.5).
Figure 5.5: Walls meeting at acute angles prevent full
coverage (Image taken from [57]).
Another direct consequence of the incidence constraint appears when the
polygon has an internal angle smaller than 90 - τ degrees (Figure 5.6). As an
interesting special case, no finite set of guards can cover a triangular layout if
the incidence constraint τ is less than 30 degrees (Figure 5.7).
Figure 5.6: A polygon cannot be covered if one of its vertices has an
angle smaller than 90 - τ
(Image taken from [57]).
Figure 5.7: A triangle cannot be fully scanned if the incidence
constraint is less than 30 degrees (Image taken from [57]).
Figure 5.8 shows a layout where placing a single guard with dmin = 0,
dmax = ∞ and τ = 90 at the kernel (see Definition 3.6) guarantees full
coverage; however, any small movement away from this location leaves part
of the polygon boundary uncovered.
Figure 5.8: Any deviation from the center leaves a portion of W
uncovered (Image taken from [57]).
Layouts with this behavior cannot be handled by a random-sampling
technique: their optimal covers have measure zero, so the probability of
sampling such a guard set is null [57].
It should be noted that in all the constraint effects mentioned above, Banos
and Latombe assumed in their algorithm [43, 57] that the camera's viewing
direction is parallel to the normal of the wall the camera is facing. Our
algorithm follows a different methodology, so these effects may not be
observed in all the situations described above.
5.2 Randomized Algorithm for Art-Gallery Problems
As shown by Lee and Lin [61], the art-gallery problem is NP-hard, so instead
of searching for exact next-best-view points, an approximation of the optimal
solution should be pursued.
The basic approach randomly samples the interior of the workspace and
constructs a sample set G of potential sensing/guard points. The aim is then to
find a minimum-cardinality subset of G that covers the desired region. If the
kernel of the workspace polygon has non-zero measure, then as the number of
samples grows, G contains, with increasing probability, a subset from which
the whole workspace is covered [43]. However, as mentioned above, complete
coverage may not be possible due to the visibility constraints and the
properties of the workspace polygon.
The algorithm starts by drawing random sample points from the interior of
the defined workspace. For each point, the visibility polygon is calculated
under the visibility constraints; in other words, a camera model is placed at
every sampled point (Figure 5.9). The point that yields the highest information
gain is chosen as the best candidate and added to the next-best-view list. The
portion of the workspace visible from this candidate is marked as covered, and
the algorithm continues sampling the workspace until a user- or
implementation-defined coverage threshold is reached. The algorithm can be
summarized as follows:
Algorithm: Randomized Art-Gallery Algorithm
Input: 1. A polygonal region P that defines the workspace.
2. The visibility constraints of the camera (dmin, dmax, incidence angle).
3. The number of random points, m, to be produced at each sampling.
4. A coverage threshold / percentage for halting the algorithm.
Output: A set of sensing points from which the desired region is covered.
1. Sample the workspace m times for constructing a set of guard
candidates.
2. Place a camera model on each candidate point based on dmin, dmax,
and incidence angle.
3. Calculate the new information gain acquired from every candidate
and choose the candidate with highest gain.
4. Mark the portion of the workspace covered by the selected candidate
as “covered”.
5. Go to Step 1 and re-iterate until a desired coverage ratio is reached.
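The steps above can be sketched as a greedy loop over a discretized workspace. This is an illustrative implementation, not the original one from [57]: the workspace is abstracted as a set of cells, and the camera model is a caller-supplied `visible_cells` function that applies the range and incidence constraints.

```python
import random

def randomized_art_gallery(workspace_cells, visible_cells, m=50,
                           coverage_goal=0.95, seed=0):
    """Greedy randomized guard selection over a cell-based workspace.
    visible_cells(g) must return the set of cells a guard at g covers
    under the sensor constraints (the 'camera model' of Step 2)."""
    rng = random.Random(seed)
    uncovered = set(workspace_cells)
    guards = []
    allowed_uncovered = len(workspace_cells) * (1 - coverage_goal)
    while len(uncovered) > allowed_uncovered:
        # Step 1: sample m guard candidates from the workspace.
        candidates = rng.sample(sorted(workspace_cells),
                                min(m, len(workspace_cells)))
        # Steps 2-3: evaluate each candidate's gain and keep the best.
        best = max(candidates, key=lambda g: len(visible_cells(g) & uncovered))
        gain = visible_cells(best) & uncovered
        if not gain:            # no sampled candidate sees anything new
            break
        # Step 4: mark the newly seen cells as covered.
        guards.append(best)
        uncovered -= gain
    return guards
```

On a toy 5×5 grid where a guard sees its 3×3 neighbourhood, the loop terminates with a handful of guards; the coverage threshold (Step 5) controls the trade-off between guard count and completeness.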
Figure 5.9 : A camera model is placed on every sampled candidate and the
regions they cover are calculated (Image taken from [43]).
The randomized art-gallery approach is also motivated by the fact that even
when full coverage is possible, it is impractical to achieve due to localization
and sensing errors; the algorithm therefore aims for an approximate solution.
In a real-world scenario, a mobile robot could not take advantage of an exact
optimal solution because of these errors, so for a given polygon an
approximately optimal set of sensing points is sufficient.
The algorithm mainly assumes that the randomly selected points have a
balanced density throughout the polygon, and that an optimal set of sensing
points lies somewhere within this balanced concentration of samples [20].
When complete coverage is desired, it should be noted that the algorithm must
examine more and more random points as it approaches the optimal solution,
which increases the computational cost significantly.
This randomized algorithm, which samples the interior region of the
workspace, has a quadratic dependency on the number of samples, which is
inconvenient [43]. To overcome this, an alternative algorithm can be used that
avoids sampling regions of the interior from which coverage is not possible.
Instead of sampling the domain of the problem, the constraints of the problem,
i.e. the points that have to be covered, are sampled. This method is called the
"dual-sampling algorithm". First, a point q in the region we would like
covered is chosen, and the camera model is placed at q; this is equivalent to
finding the points that can see q. Then the visibility polygon of q is sampled
and candidates are chosen from it. The rest of the algorithm proceeds as
above; an example is given in Figure 5.10.
Algorithm: Randomized Art-Gallery Algorithm with Dual Sampling
Input: 1. A polygonal region P that defines the workspace.
2. The visibility constraints of the camera (dmin, dmax, incidence angle).
3. The number of random points, m, to be produced at each sampling.
4. A coverage threshold / percentage for halting the algorithm.
Output: A set of sensing points from which the desired region is covered.
1. Select a random point q from the region that needs to be covered.
2. Place a camera model on q based on dmin, dmax, and incidence
angle.
3. Sample the region that is visible from q and that is inside the
polygon.
4. Calculate the gains of every candidate in the visibility polygon of q
and choose the highest candidate as next-best-view point.
5. Remove the portion that the best candidate can observe from the
list of “needs-to-be-covered” region.
6. Go to Step 1 and reiterate until a desired coverage is achieved.
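A single iteration of the dual-sampling variant can be sketched as follows. Again this is only an illustrative fragment: `covers(g)` stands in for the region a guard at g observes, and `sample_visibility(q, m)` stands in for sampling candidate guards inside the visibility polygon of q (both are hypothetical helpers, not functions from [43]).

```python
import random

def dual_sampling_step(uncovered, covers, sample_visibility, m=30, rng=None):
    """One iteration of dual sampling (Steps 1-5 above).
    uncovered: set of points still needing coverage;
    covers(g): set of points a guard at g can see;
    sample_visibility(q, m): up to m candidate guards that can see q."""
    rng = rng or random.Random(1)
    q = rng.choice(sorted(uncovered))          # Step 1: a constraint point
    candidates = sample_visibility(q, m)       # Steps 2-3: sample around q
    # Step 4: every candidate sees q, so the best gain is at least 1.
    best = max(candidates, key=lambda g: len(covers(g) & uncovered))
    # Step 5: shrink the needs-to-be-covered region.
    return best, uncovered - covers(best)
```

Because every candidate can see q, each iteration is guaranteed to remove at least q from the uncovered set, which is the key advantage over blindly sampling the whole interior.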
Figure 5.10: A point is chosen on the boundary and the visibility
polygon of this point is sampled for candidates
(Image taken from [43]).
One of the advantages of these randomized algorithms is the ability to
decide when to stop: the algorithm can halt after a certain coverage threshold
is reached, or when the reduction of the unobserved region is no longer a
significant fraction of the total [44]. Using a threshold and accepting
approximation rather than exactness can be the difference between 100%
coverage with ~1000 candidates and 95% coverage with ~15 candidates.
The algorithms presented in [33], [34], [43], [44] and [57] focus only on
covering the boundary. For efficient object search, however, the interior of the
polygon must be taken into account as well as the boundary, each with
different gains (the amount of new information made available by the sensing
process). Moreover, in all the works mentioned above, polygons are
represented as line maps. Line maps are handy in a theoretical framework, but
such polygon representations are hard to obtain from real data, and line maps
cannot represent the interior of polygons in a convenient way. For these
reasons, an occupancy-grid approximation is used in this extended art-gallery
algorithm. Grid approximations are flexible tools for representing space that
do not require strong assumptions about the real world [49, 120]. To direct the
object search in a specific order, we would like different regions of the
polygon to carry different weights, so that the algorithm searches according to
priorities; occupancy grids provide an efficient way to implement this.
A 2D representation of the layout is normally an approximation: it is
taken to be a cross-section of the environment at a specific height [33].
Complete coverage of the 2D environment therefore does not ensure that the real
3D environment is completely covered. For complete coverage of indoor
environments this does not pose a serious problem, since the remaining holes are
a small fraction of the entire scene and can be covered with a few additional
sensing operations [43].
However, when we shift our focus from a coverage-oriented problem to an
object-search problem, we would like to take the third dimension into account,
as objects tend to appear at various heights. Building 3D models as a collection
of points in 3D space is computationally expensive and inefficient [123]. For
this purpose, we will use “elevation maps”, which can be seen as a 2.5D
representation of space. Before presenting the extensions to the randomized
art-gallery algorithm, the following part explains some basic concepts about
occupancy grids and elevation maps.
5.3 Occupancy Grid and Elevation Maps
Most of the algorithms designed to solve the art-gallery or NBV
problem for a specific type of polygon use line maps for manipulating data
and creating paths. In this thesis, a different approach is followed, making use
of occupancy grids for efficient object search.
As mentioned before, we would like to incorporate statistical data into
the search algorithm so that the search is guided by previously collected data.
For this purpose, elevation maps will be utilized. The following two parts
introduce basic concepts about occupancy grids and elevation maps.
5.3.1 Occupancy Grids
An autonomous robot’s performance in acquiring a meaningful map of its
environment is highly dependent on the quality and the accuracy with which it
can perceive its surroundings. As the robot moves and operates in this
environment, it makes use of the information gathered and consequently
incorporates this information into a representation of the spatial model, a map.
The field that studies environment modeling is robotic mapping, a
highly active field of research in AI and mobile robotics [49].
As mentioned, one of the aims of mobile robotics is to make
autonomous robots able to navigate and operate in completely unknown
environments, as required in, for example, planetary and space exploration.
These robots should also be able to handle rough, complex environments without
any prior knowledge. In cases such as working in a factory or a reactor, robots
may have access to a basic map of the environment. The problem in these
situations is that maps can be outdated, and in long operating schemes where
long distances are traversed, the navigation system may be subject to
substantial errors, making localization difficult. These considerations led to
some basic requirements for mobile robots, such as the ability to navigate and
operate while relying heavily on sensors rather than on precompiled maps. Early
approaches used two levels of sensing, namely low-level and high-level sensing.
Low-level sensing extracted geometric features like lines or surfaces, while
high-level sensing operations made use of templates and prior heuristics.
Several issues with this type of modeling were the heavy reliance on
precompiled models and on the prior heuristics used [50]. The occupancy grid
was developed to address those shortcomings (Figure 5.11).
Figure 5.11: A comparison of Occupancy Grid framework
(Image inspired by [50]).
Occupancy grids have become one of the dominant methods for
modeling the environment [114]. Basically, an occupancy grid is a probabilistic
tessellated 2D representation where each cell accumulates fine grained
quantitative information about which parts of the robot’s surrounding
environment are occupied or empty [50]. Each cell stores the probability that it
is occupied or empty, based on the sensor readings. However, the creation of
occupancy grids is not easy, as the sensor readings have to be interpreted by
the robot and deductions about the state of the cells have to be made. This is
achieved by utilizing a sensor model that acts as a tool for interpreting
sensory measurements (Figure 5.12).
Figure 5.12: Estimating the grid from sensor data (image from [50]).
A stochastic sensor model, defined by a probability density function p(r | z)
relating the sensor reading r to the true range value z, is used for
interpreting the sensor measurements. A Bayesian estimation process is then
followed to determine the probability of each cell’s state. Finally, using an
optimal estimator such as maximum a posteriori (MAP), a discrete state value is
assigned to each cell: occupied, empty or unknown [50]. An advantage of
occupancy grids is the ability to combine various sensor readings into a single
map model. An example from mobile robotics mapping is given in Figure 5.13.
Figure 5.13: Occupancy grid allows integration of different sensor models
(Image taken from [50] ).
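As a concrete illustration of this update scheme (not the exact formulation of [50]), occupancy grids are commonly maintained in log-odds form, where each measurement adds the log-odds of an inverse sensor model to the affected cell. The probability values below are assumed for illustration:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(l):
    return 1.0 / (1.0 + math.exp(-l))

# Illustrative inverse sensor model: probability that a cell is occupied
# given that the beam ended there (hit) or passed through it (miss).
P_HIT, P_MISS, P_PRIOR = 0.7, 0.3, 0.5

def update_cell(log_odds, hit):
    """Bayesian log-odds update of one cell for one measurement."""
    p = P_HIT if hit else P_MISS
    return log_odds + logit(p) - logit(P_PRIOR)

def classify(log_odds, threshold=0.4):
    """MAP-style discrete state: occupied / empty / unknown."""
    if log_odds > threshold:
        return "occupied"
    if log_odds < -threshold:
        return "empty"
    return "unknown"
```

In this form, fusing readings from several sensors amounts to summing their log-odds contributions into the same grid, which is one way the multi-sensor integration of Figure 5.13 can be realized.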
The robot explores the environment and acquires information about the world.
Data obtained from reading sensors is called a sensor view. A number of sensor
views acquired from different types and models of sensors can be combined into
separate local maps, which can later be fused into a robot view framework.
Robust data fusion with occupancy grids is an active research field [115].
One of the most commonly used sensors in mobile robotics is the sonar
sensor, due to its low cost, processing speed and ease of use. It gives
relative distance information between the sensing position and an object
detected in its sweeping cone (Figure 5.14).
However, there is a phenomenon called specularity that affects these
sensors. Specular reflections occur when a sonar beam hits a smooth surface and
reflects in a single direction at an obtuse angle. This results in faulty
readings, caused by beams that have bounced off many objects, or in no readings
at all. Collins et al. [116] have reported that addressing the uncertainty
caused by specularity and other effects improves occupancy grids.
Figure 5.14: Sonar readings and corresponding Gaussian return (Images
taken from [138]).
A quick history of occupancy grids:
• First defined in 1985 by Moravec and Elfes [117] at Carnegie Mellon
University in a probabilistic framework.
• Matthies and Elfes developed a more rigorous Bayesian map-updating
formula in 1988 [118].
• Sebastian Thrun proposed a neural-network-based approach to
occupancy grids in 1993 [119].
• In 1997 Kurt Konolige developed an enhanced Bayesian framework
[120].
• Sebastian Thrun [121] described a very different mapping technique
making use of a sensor model labeled the “Forward Model”.
For details and development of these techniques, [49] provides an empirical
evaluation comparing the methods mentioned above. A comparison of
position-estimation techniques using occupancy grids is given in [122]. The
issue of extending occupancy grids to create maps of dynamic indoor
environments is discussed in [48].
As a result, occupancy grids provide mobile robotics with a flexible
framework for navigation in previously unknown environments. It should be noted
that many operations on occupancy grids resemble operations in the
image-processing domain. Table 1 [50] presents a short overview of these
correspondences.
Occupancy Grids                  Image Processing Domain
Cell labeling                    Thresholding
Handling position uncertainty    Blurring / Convolution
Removing spurious readings       Low-pass filtering
Map matching / Motion solving    Correlation (multiresolution)
Path planning                    Edge tracking
Determining object boundaries    Edge detection
Extracting occupied regions      Segmentation / labeling / region coloring
Table 1: Operations on occupancy grids and the image-processing domain
5.3.2 Elevation Maps
In mobile robotics, one of the key problems is finding an efficient data
structure for 3D data [123]. There are several types of representations for 3D
data:
• One approach is representing the area as a collection of all 3D
points (Figure 5.15). It requires a large number of points per scan and
has low utility for navigation.
• Another approach is to divide the area into a three-dimensional grid.
While this grid provides higher accuracy, the memory requirements and
huge computational cost prevent it from being efficient.
• Using a 2D grid is the simplest, most straightforward and lowest-cost
method, and it is efficient for navigation. However, it cannot capture the
exact information and is no more than an approximation.
Elevation maps, also known as Digital Elevation Models (DEM), are
used to capture the third dimension, which is missing in 2D occupancy grids.
Elevation maps are commonly built using remote sensing as well as land
surveying, and geographic information systems make common use of them.
Figure 5.15: A typical 3D scan
Elevation maps aim to combine low-cost 2D grids with the higher accuracy of
3D grids by capturing the height information [123]. Basically, elevation maps
are 2D grids which additionally store a height value for each cell
(Figure 5.16). The height can be estimated by various methods, depending on the
implementation.
Figure 5.16: Elevation map idea
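A minimal elevation map in this spirit is a 2D grid keeping one height per cell. The class below is an illustrative sketch; the resolution and the keep-the-maximum update policy are assumptions, not taken from [123]:

```python
class ElevationMap:
    """2D grid storing one height value per (x, y) cell (a 2.5D map)."""

    def __init__(self, width, height, resolution=0.05):
        self.resolution = resolution            # cell size in metres
        self.heights = [[None] * width for _ in range(height)]

    def update(self, x, y, z):
        """Insert a 3D point; keep the highest z seen in the cell."""
        i, j = int(y / self.resolution), int(x / self.resolution)
        cur = self.heights[i][j]
        if cur is None or z > cur:
            self.heights[i][j] = z

    def height_at(self, x, y):
        """Constant-time lookup, one of the main advantages of the structure."""
        return self.heights[int(y / self.resolution)][int(x / self.resolution)]
```

Keeping only one value per (x, y) cell is exactly what makes the structure cheap, and also what causes the vertical-object problem discussed below.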
The advantages of using elevation maps can be summarized as follows [123]:
• A 2.5D representation instead of 3D grids.
• Constant-time access.
• Straightforward computation of cell traversability.
• Path planning as in 2D.
However, there are a few problems with elevation maps. There is only one
level of height representation, so not much can be known about vertical
objects, since only one value can be assigned to each (x, y) coordinate of the
map. The height also depends on the viewpoint of the sensor. Extensions of
elevation maps are required to deal with vertical objects and to provide a
multi-level extension (Figure 5.18).
Figure 5.17: A typical elevation map (images from [123]).
Figure 5.18: Extended elevation map [123].
In Figure 5.18, cells with vertical objects are red, cells with a big vertical
gap, such as windows, bridges and door frames, are colored blue, and cells seen
from above by the robot are colored yellow.
Some preliminary results on building fine-resolution DEMs using low-altitude
aerial images, with only a set of stereovision images from a tethered blimp as
input data, can be found in [124]. There is also research [125] in which
elevation-map data is used as a position-correction tool for mobile robots on
rough terrain.
5.4 Extended Art Gallery Algorithm and Implementation
As mentioned above, extensions to the randomized algorithm for the art-gallery
problem are presented and implemented in this section. Some basic results are
also given and discussed.
5.4.1 Table-Boundary Coverage based on Grid weights
The first algorithm deals with the problem where the aim is to search a
gallery that has exhibits presented both on the walls and on tables. In our
scenario, we would like to prioritize the tables, so that the search algorithm
starts by looking for objects on the table and then searches and covers the
boundaries/walls. The first algorithm was carried out to show that the search
can be guided by proper cell weighting. This section aims at showing how
weights can direct the search, without considering how those weights might be
assigned.
The scenario begins with the assumption that we have the layout of the
floor prior to the search, or have acquired this layout by means of lasers. To
demonstrate the efficiency of the occupancy grid, we used a bitmap image where
pixels with values between 0 and 255 represent cells in a layout (Figure 5.19).
The border cells were given a weight of 50, while table cells were given a
higher weight of 76 to make sure the table was given priority. A gray-scale
copy of the floor layout was also created for marking the unexplored and
explored cells in the search.
Figure 5.19: Floor layout
For simulating the camera, a model with a range of 1.5 m and a 60-degree
field of view, having eight different orientations, was created, as shown in
Figure 5.20.
Figure 5.20: 8 different orientations of the camera model
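Using the parameters from the text (1.5 m range, 60-degree field of view, eight orientations 45 degrees apart), the set of grid cells inside the camera cone might be computed as in the following sketch. Occlusion by walls is ignored here; a real implementation would handle it by ray casting:

```python
import math

RANGE_M = 1.5          # sensing range from the text
FOV_DEG = 60.0         # field of view from the text
ORIENTATIONS = [i * 45.0 for i in range(8)]   # 8 headings, 45 degrees apart

def cells_in_cone(cx, cy, heading_deg, grid_w, grid_h, cell_size=0.1):
    """Grid cells inside the camera's viewing cone (no occlusion check)."""
    seen = set()
    max_cells = RANGE_M / cell_size
    half_fov = math.radians(FOV_DEG / 2.0)
    heading = math.radians(heading_deg)
    for y in range(grid_h):
        for x in range(grid_w):
            dx, dy = x - cx, y - cy
            dist = math.hypot(dx, dy)
            if dist == 0 or dist > max_cells:
                continue
            # angular difference between cell direction and camera heading
            diff = math.atan2(dy, dx) - heading
            diff = math.atan2(math.sin(diff), math.cos(diff))  # wrap to [-pi, pi]
            if abs(diff) <= half_fov:
                seen.add((x, y))
    return seen
```

Scanning the whole grid keeps the sketch simple; bounding the loops to the cone's bounding box would be the obvious optimization.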
In this first scenario, it is assumed that the cells belonging to the table and
the borders are known, so they are used when determining coverage thresholds.
Two different thresholds are used: one for coverage of the table surface and
one for coverage of the walls. The search algorithm decides whether to continue
or halt based on the current coverage ratio and the threshold given by the
user. Although the process is fairly similar to the algorithm explained in
Part 5.2 (the randomized art-gallery algorithm with dual sampling), some points
of importance should be mentioned.
In contrast to that algorithm, when a random point is sampled, for example on
the table, a random orientation is also assigned to that point for the camera
direction. As previously explained, a point is chosen randomly among the cells
that we would eventually like to cover. To ease the process of prioritizing the
search based on the weight values of cells, importance sampling is carried out
when selecting a random point, so that a cell with a higher probability of
yielding high information gain is more likely to be picked as the sampled
point.
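The importance sampling described above can be realized directly with weighted random choice, so that, for instance, a table cell with weight 76 is drawn more often than a border cell with weight 50. The sketch below assumes weights are stored per cell:

```python
import random

def sample_weighted_cell(weights):
    """Pick a cell with probability proportional to its weight.

    weights -- dict mapping cell -> weight; cells with weight 0
               (e.g. regions not detected by the laser) are never chosen.
    """
    cells = list(weights)
    return random.choices(cells, weights=[weights[c] for c in cells], k=1)[0]
```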
It is possible that in some situations all of the candidates will have a low
information gain. For example, when a random point on the boundary is selected
with a random orientation, the camera model may be facing towards the wall,
resulting in a cone of size 0. Even if the camera model is properly oriented
and the cone is of ideal size, the sampled points in this cone may have
orientations such that none of them sees any portion of the wall, due to the
random nature of the algorithm. Again, this problem can be overcome by using a
threshold on the minimum information gain required: a new resampling process
can take place and new candidates can be selected by randomization. Another way
to lower the probability of zero-gain cones is to use different orientation
schemes for different parts of the layout. In the case of table searching, when
the camera model is placed randomly on a point on the table, it has been
observed that, instead of giving every candidate in the cone a random
direction, assigning all of the candidates the orientation opposite to that of
the camera model on the table gives better results.
The same threshold can also be used to speed up the search process. For
example, in the earlier steps of the search algorithm, a tighter threshold can
be set so that each iteration gives a certain amount of new information. As the
coverage ratio approaches the desired ratio, the threshold can be loosened,
since each new iteration will add smaller amounts of new information. The robot
size has also been taken into account, so that points where it is physically
impossible for the robot to be positioned are rejected directly. Based on these
considerations, the algorithm can be summarized as:
1. Sample a random point in the map in such a way that points with
higher weights (priors) are more likely to be selected (only regions detected
by the laser are given a non-zero prior).
2. Select a random orientation with which the camera model will be
placed on the selected point. If the resulting camera cone is of size 0,
resample.
3. Choose n samples in the cone created by the field of view of the
camera.
4. If the original sampled point is not on the wall, assign all the samples
the same orientation, namely the opposite of the direction the camera on the
table is facing. If the original point is on the wall, randomize the
orientations of the candidate points.
5. Calculate the gain of each candidate. If the maximum gain is lower
than a certain threshold, resample the candidates.
6. Choose the candidate with the highest gain and place the camera
model on this point.
7. Mark the points seen by the camera as explored.
8. Calculate the total coverage. If the coverage is below the threshold,
resample from the unexplored points and go to Step 1. If the coverage threshold
is satisfied, halt the algorithm.
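The eight steps can be put together as one loop. The sketch below is a simplified illustration: `camera_cone` is a hypothetical routine returning the cells a camera at a given position and orientation would see, and, as in the table case of Step 4, all candidates are given the orientation opposite to the sampled one:

```python
import random

def prioritized_search(weights, explored, camera_cone, n_samples=10,
                       min_gain=1, coverage_goal=0.95, max_iters=1000):
    """Sketch of the weighted randomized coverage loop (Steps 1-8)."""
    to_cover = {c for c, w in weights.items() if w > 0}
    chosen = []
    for _ in range(max_iters):
        # Step 8 (checked first): halt once the coverage threshold is met.
        if len(explored & to_cover) / len(to_cover) >= coverage_goal:
            break
        # Step 1: importance-sample a point, favouring high-weight cells.
        unexplored = [c for c in to_cover if c not in explored]
        q = random.choices(unexplored,
                           weights=[weights[c] for c in unexplored], k=1)[0]
        # Steps 2-3: random orientation, then candidates inside the cone.
        theta = random.choice(range(0, 360, 45))
        cone = camera_cone(q, theta)
        if not cone:                         # zero-size cone: resample (Step 2)
            continue
        candidates = random.sample(sorted(cone), min(n_samples, len(cone)))
        # Steps 4-5: candidates face the opposite direction; gain = new cells.
        def gain(c):
            return len(camera_cone(c, (theta + 180) % 360) - explored)
        best = max(candidates, key=gain)
        if gain(best) < min_gain:            # low-gain round: resample (Step 5)
            continue
        # Steps 6-7: place the camera at the best candidate, mark as explored.
        chosen.append(best)
        explored |= camera_cone(best, (theta + 180) % 360)
    return chosen, explored
```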
Figure 5.21 shows the sampling part of the algorithm, where the black
point represents the original point on which the camera model is placed. The
cone from which candidates are sampled is shown in orange, and the crosses
represent the sampled candidates. The yellow candidate is the one that gives
the highest information gain. The lower part of the figure shows the coverage
area of the yellow point.
Figure 5.21: The sampling process and the best candidate.
In Figure 5.22, the resulting best candidates (upper image) and the final
coverage (lower image) can be seen. The black lines in the second image
represent the uncovered portion of the boundary. The table is almost completely
covered. Over many runs, it has been observed that the search algorithm first
covers the table surface, followed by the boundary. Each time, the algorithm
successfully reached the coverage thresholds (95% for both the table and the
boundary) and needed ~17 candidate points on average.
Figure 5.22 : The coverage of a room with a table.
The use of a static threshold percentage can lead to problems such as false
terminations or infinite loops. An office layout with small gaps and a bad
choice of threshold value can cause such results. An effective solution in
these cases may be dynamic thresholding. For example, the simulation can be run
with high thresholds, and the narrow gaps can be checked to see whether it is
worth moving to those positions. Depending on the result of the simulation, the
real threshold can be lowered, so the robot can conclude that its task is
complete without trying to venture into small gaps.
As mentioned, one of the advantages of the randomization method is the ability
to terminate the algorithm when certain thresholds are met. However, this
advantage can also be seen as a disadvantage, as it requires a significant
amount of work on the user’s side to decide on correct and efficient threshold
values. It is also reasonable to note that biasing the search with various
threshold criteria diminishes the randomized nature of the algorithm, which is
a key element in the success of the method. Additionally, the success of the
method depends on the grid resolution: a high-resolution grid may lead to long
processing times, while a too-low-resolution map can steer the robot to
unintended locations, hindering performance.
As a result, the algorithm showed that it is possible to conduct a
prioritized object search in a room by using different weights/coefficients for
different grid cells. It should be noted that, in contrast with the art-gallery
algorithm, where the aim is to minimize the number of candidate points for full
coverage, the goal here is to guide the search according to the user’s
priorities while making use of randomized sampling of the workspace. For an
optimal solution, sensing and planning costs, as well as the cost of movement
between candidates, should be taken into account in a complete algorithm.
5.4.2 Incorporating Statistical Data for Efficient Search
The introduction of statistical data into object search has lately been an
area of research [21, 37, 38, 39, 40, 41]. The aim is to achieve better
efficiency by guiding the search process with statistically provided data. In
this part, a simple approach for using statistical data and a basic algorithm
are presented. Although the algorithm is similar to the one in Part 5.4.1,
there are some differences.
It is assumed that a floor-layout plan with elevation data is available
prior to the search, as in Figure 5.23. The floor cells have a height of 0 cm,
so they are represented by pixel values of 0, whereas the border walls are
given a value of 255. The table in this example is assumed to have a height of
70 cm.
Figure 5.23: A basic elevation-map representation of a room with a table
To make use of this elevation data, a statistical distribution, such as a
histogram, for a certain object can be used. The histogram models the
distribution of the object over height, and it can be used together with the
elevation map to build the prior which then drives the search. A random
histogram used in the simulation is given in Figure 5.24. The y-axis shows the
number of occurrences of the object in the height interval given on the
x-axis. The object being searched for in the room is most likely to be found
between 60 and 90 centimeters.
Figure 5.24: A histogram representing the distribution of a certain object
over height.
For each grid cell in the elevation map, a new weight is assigned depending on
the number of times the object has been observed at that cell’s height in the
histogram. The use of the histogram data thus transforms the elevation map into
a map of probabilistic weights (Figure 5.25). As can be seen, floor cells have
zero probability of containing the object, whereas the table has the highest
likelihood, followed by the border cells, based on the values of the histogram
above.
Figure 5.25: A layout where each grid cell contains the probability of
containing a certain object.
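The transformation from elevation map to weight map is then a per-cell histogram lookup on the cell's height. The 30 cm bin width below is an assumption chosen to match the figure:

```python
def weights_from_elevation(elevation_cm, histogram, bin_cm=30):
    """Turn an elevation map into per-cell weights for the search.

    elevation_cm -- 2D list of cell heights in centimetres
    histogram    -- occurrence counts of the object per height bin,
                    e.g. histogram[2] counts sightings in [60, 90) cm
    """
    def weight(h):
        b = int(h // bin_cm)
        return histogram[b] if 0 <= b < len(histogram) else 0
    return [[weight(h) for h in row] for row in elevation_cm]
```

With this mapping, floor cells (0 cm, an empty bin) get weight 0 and are never sampled, while 70 cm table cells inherit the histogram's peak.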
The rest of the algorithm works like the algorithm in Part 5.4.1: the
probabilities in each cell act as the weights and, as the algorithm prescribes,
cells with higher weights (higher probabilities) are covered first.
One of the problems in the process is assigning the coverage thresholds.
In contrast to the algorithm above, it is assumed that there is no knowledge of
whether a cell belongs to the table or to the border of the layout. Using two
separate thresholds for table and boundary coverage is therefore not possible,
and, due to the randomized nature of the algorithm, the search process may
sample boundary points before a high percentage of the table is covered,
requiring more steps to reach the desired threshold than the algorithm in
Part 5.4.1. Also, since most of the unexplored cells belong to the table, a
high coverage threshold such as 90% will leave the majority of the border cells
unexplored, as seen in the lower image of Figure 5.26. The upper image shows
the candidate points where sensing operations should be carried out in order to
cover the room as shown in the lower image. The black pixels in the lower image
represent the uncovered portion of the room.
Figure 5.26: Best candidate points (upper image) and the resulting coverage of
the room (lower image).
As can be seen in Figure 5.26, 90% coverage of the layout is achieved in
23 steps, while the search algorithm of Figure 5.22 achieves 95% coverage in
19 steps. The results are comparable, since the algorithm with 95% coverage
assumes that the cells belonging to the border and non-border regions are
known, with their weights determined manually prior to the search, whereas the
other algorithm has only the histogram and the elevation map as input.
Chapter 6
Conclusions and Future Works
6.1 Conclusions
While the art-gallery problem has been extensively studied under many different
modifications and constraints, there has been only one survey of these results,
in 1992. This thesis attempts to provide an updated survey in the field of
art-gallery-related problems, focusing on results rather than proofs of
techniques.
One of the novel algorithms that give an approximate solution to the
art-gallery problem has also been discussed in detail, and some new extensions
have been presented, so that an object search can be conducted in a room whose
layout is known prior to the search. It has been shown that, by making use of
occupancy grids, which have not previously been used actively in these
problems, it is possible to assign weights to particular areas of interest in
the map and guide the search process according to these priorities. The
integration of statistical data into the search process has also been
addressed, by using an object’s height histogram to transform an elevation map
into a map of likelihoods.
6.2 Future Work
There is an extensive literature on art-gallery-related problems, and even
though this survey may represent an initial step, a more detailed survey could
be made by focusing on only one particular modification of the problem (such as
the watchman’s route or edge guards) or by explaining the techniques mentioned
in much more detail, with proofs provided.
For the randomized algorithm and object search, our approach is a simplistic
one with some strong assumptions. The cost of movement is never taken into
account; it is assumed that sensing new information is much more important and
costly than other actions. However, there may be situations where displacement
dominates sensing, such as rough-terrain robots. In that case, rather than
finding next-best-view candidates, an efficient route should be drawn and
followed.
An improvement could also be made in detecting which pixels belong to the
table, the border or any flat surface, by combining neighboring pixels with the
same information values into a single entity and labeling them. In this way the
search algorithm could detect objects much more efficiently. An improvement in
the elevation-map structure could also lead to important results: if 3D
information is accessible, probability distributions could be changed
accordingly, for example giving higher weights to vertical planar surfaces when
searching for paintings on a wall.
The thesis does not provide an object-recognition algorithm that detects
objects in the direction a camera is facing. If such an algorithm were
available, learning the objects in an environment could be incorporated into
the search process: the prior probabilities could be updated accordingly, and
as a robot performs object searches in a certain room again and again, it could
develop a much more efficient strategy and optimized performance through
learned facts about objects and locations.
Bibliography
[1] Terrence Fong, Illah Nourbakhsh and Kerstin Dautenhahn. A survey of
socially interactive robots. Robotics and Autonomous Systems, 42:143-166,
2003.
[2] Mark Scheeff, John Pinto, Kris Rahardja, Scott Snibbe and Robert Tow.
Experiences with Sparky: A social robot. In Proceedings of the Workshop on
Interactive Robot Entertainment (WIRE), 2000.
[3] L. Canamero and J. Freslund. I show you how I like you - can you read it
in my face? IEEE Transactions on Systems, Man and Cybernetics, 31(5):454-459,
2001.
[4] Batya Friedman, Peter H. Kahn, Jr. and Jennifer Hagman. Hardware
companions? - What online AIBO discussion forums reveal about the
human-robotic relationship. In CHI '03: Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems, pages 273-280, New York, USA, 2003.
ACM.
[5] J.L. Jones. Robots at the tipping point: the road to iRobot Roomba. IEEE
Robotics and Automation Magazine, 13(1):76-78, March 2006.
[6] Sales of iRobot Roomba Vacuuming Robot Surpass 2 Million Units.
http://www.irobot.com/sp.cfm?pageid=86&id=234. Last visited: July 16, 2009.
[7] Illah R. Nourbakhsh, Clay Kunz and Thomas Willeke. The mobot museum robot
installations: A five year experiment. In Proc. of International Conference on
Intelligent Robots and Systems (IROS 2003), pages 3636-3641, vol. 3, October
2003.
[8] Thomas Willeke, Clay Kunz and Illah Nourbakhsh. The history of the mobot
museum robot series: An evolutionary study. In Proceedings of FLAIRS, 2001.
[9] Diego Rodrigues-Losada, Fernando Matia, Ramon Galan and Agustin Jimenez.
Blacky, an interactive mobile robot at a trade fair. In Proc. of the 2002 IEEE
International Conference on Robotics & Automation (ICRA'02), pages 3930-3935,
vol. 4, May 2002.
[10] Je-Goon Ryu, Kil Se-Kee, Shim Hyeon-Min, Lee Sang-Moo, Lee Eung-Hyuk and
Hong Seung-Hong. SG-Robot: Network-operated mobile robot for security guard at
home. In Proceedings of IEEE International Conference on Intelligence and
Security Informatics (ISI 2006), pages 633-638, CA, USA, May 2006.
Springer-Verlag.
[11] Wolfram Burgard, Panos Trahanias, Dirk Hähnel, Mark Moors, Dirk Schulz,
Haris Baltzakis, et al. Tele-presence in populated exhibitions through
web-operated mobile robots. Autonomous Robots, 15(3):299-316, November 2003.
[12] Steven J. King and Carl F. R. Weiman. HelpMate autonomous mobile robot
navigation system. In Proceedings of SPIE, 1388:190-198, August 2005.
[13] René von Schomberg. From the ethics of technology towards an ethics of
knowledge policy: implications for robotics. AI and Society, 22:331-348, 2008.
[14] G. Veruggio. The EURON roboethics roadmap. In IEEE-RAS 6th International
Conference on Humanoid Robots, pages 612-617, December 2006.
[15] R.C. Arkin and L. Moskina. Lethality and autonomous robots: An ethical
stance. In IEEE International Symposium on Technology and Society (ISTAS
2007), pages 1-3, 1-2 June 2007.
[16] Fuji-Keizai Co., Ltd. Trends in the Japanese robotics industry. Japan
External Trade Organization Journal of Japan Economic Monthly, March 2006.
[17] Bill Gates. A robot in every home. Scientific American, January 2007.
[18] UNECE/IFR Statistical Department. 2007 World Robot Market: An Executive
Summary. World Robotics 2008. Website: http://www.worldrobotics.org/index.php
[19] Stefan Lovgren. A Robot in Every Home by 2020, South Korea Says.
Worldwide Web electronic publication, 2006.
http://news.nationalgeographic.com/news/2006/09/060906robots_2.html
[20] Osman Alper Aydemir. An Approach to Efficient Object Searching for Mobile
Robots. Master's Thesis, Mälardalen University, March 2008.
[21] Kurt Konolige. Improved occupancy grids for map building. Autonomous
Robots, 4(4):351-367, October 1997.
[22] Leena Lulu and Ashraf Elnagar. A comparative study between
visibility-based roadmap path planning algorithms. In IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS 2005), pages 3263-3268,
August 2005.
[23] M.W.M.G. Dissanayake, P. Newman, S. Clark, H.F. Durrant-Whyte and M.
Csorba. A solution to the simultaneous localisation and map building (SLAM)
problem. IEEE Transactions on Robotics and Automation, 17(3):229-241, June
2001.
[24] Dorian Gálvez López, Kristoffer Sjö, Chandana Paul, and Patric Jensfelt.
Hybrid laser and vision based object search and localization. In Proc. of the
IEEE International Conference on Robotics and Automation (ICRA'08), pages
2636-2643, Pasadena, CA, USA, May 2008.
[25] Jean-Claude Latombe. Robot Motion Planning. Kluwer International Series
in Engineering and Computer Science, 1991.
[26] Steven M. LaValle. Planning Algorithms. Cambridge University Press, 2006.
[27] H. Choset, W. Burgard, S. Hutchinson, G. Kantor, L. E. Kavraki, K. Lynch,
and S. Thrun. Principles of Robot Motion: Theory, Algorithms, and
Implementation. MIT Press, April 2005.
[28] R. James Milgram, Guanfeng Liu, Jean-Claude Latombe. On
the structure of the inverse kinematics map of a fragment of
protein backbone.
Journal of Computational Chemistry,
29(1):
50-68, 2008.
[29] T. Lozano-Perez and M. Wesley. An algorithm for planning
collisionfree paths among polyhedral obstacles.
Comm. ACM ,
22(10):560-570, 1979.
[30] J. T. Schwartz and M. Sharir. The piano movers' problem II: general techniques for computing topological properties of real algebraic manifolds. Advances of Applied Maths, 4:298-351, 1983.
[31] J. Canny. The Complexity of Robot Motion Planning. ACM Doctoral Dissertation Award, MIT Press, 1988.
[32] Oussama Khatib. Real-time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 5(1):90-98, 1986.
[33] H. Gonzáles-Baños, A. Efrat, J.C. Latombe, E. Mao, and T.M. Murali. Planning robot motion strategies for efficient model construction. In J. Hollerbach and D. Koditschek, editors, Robotics Research: The Ninth Int. Symp., Salt Lake City, UT, 1999. Springer-Verlag.
[34] H.H. Gonzáles-Baños, L. Guibas, J. C. Latombe, S. M. LaValle, D. Lin, R. Motwani, et al. Motion planning with visibility constraints: building autonomous observers. In Proc. The Eighth International Symposium of Robotics Research, Japan, October 1997.
[35] Karsten Berns and Tobias Luksch. Motion planning based on realistic sensor data for six-legged robots. Autonome Mobile Systeme 2007, 7:247-253, 2007. Springer-Verlag.
[36] Shrihari Vasudevan, Stefan Gächter, Viet Nguyen and Roland Siegwart. Cognitive maps for mobile robots - an object based approach. Robotics and Autonomous Systems, 55(5):359-371, May 2007.
[37] Antonio Torralba. Context priming for object detection. International Journal of Computer Vision, 53(2):169-191, 2003.
[38] Ariadna Quattoni and Antonio Torralba. Recognizing indoor scenes. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[39] Kevin Murphy, Antonio Torralba, Daniel Eaton and William Freeman. Object detection and localization using local and global features. In Toward Category-Level Object Recognition, Springer-Verlag Lecture Notes in Computer Science, J. Ponce, M. Hebert, C. Schmid, and A. Zisserman (eds.), 2006.
[40] Aude Oliva and Antonio Torralba. The role of context in object recognition. TRENDS in Cognitive Sciences, 11(12):520-527, November 2007.
[41] Antonio Torralba, Aude Oliva, Monica S. Castelhano and John M. Henderson. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychol Rev, 113(4):766-786, October 2006.
[42] A. R. Nolan, B. Everding and W. Wee. Scheduling of low level computer vision algorithms on networks of heterogeneous machines. In Proceedings of Computer Architectures for Machine Perception (CAMP), pages 352-358, September 1995.
[43] Hector Gonzáles-Baños and Jean-Claude Latombe. Planning robot motions for range-image acquisition and automatic 3D model construction. AAAI Fall Symposium Series, 1998.
[44] H. Gonzáles-Baños, E. Mao, J.C. Latombe, T.M. Murali and A. Efrat. Planning robot motion strategies for efficient model construction. In Proceedings of International Symposium on Robotics Research, Snowbird (UT), October 1999.
[45] Aniket Murarka, Joseph Modayil, and Benjamin Kuipers. Building local safety maps for a wheelchair robot using vision and lasers. In Proceedings of the 3rd Canadian Conference on Computer and Robot Vision (CRV), pages 25-33, 2006.
[46] Kurt Konolige, Motilal Agrawal, Robert C. Bolles, Cregg Cowan, Martin Fischler and Brian Gerkey. Outdoor mapping and navigation using stereo vision. In Proc. of Intl. Symp. on Experimental Robotics (ISER), pages 179-190, Rio de Janeiro, Brazil, July 2006.
[47] Gamini Dissanayake, Hugh Durrant-Whyte and Tim Bailey. A computationally efficient solution to the simultaneous localisation and map building (SLAM) problem. In Proceedings of the 2000 IEEE International Conference on Robotics and Automation, pages 1009-1014 vol. 2, San Francisco, CA, April 2000.
[48] Nikos C. Mitsou and Costas S. Tzafestas. Temporal occupancy grid for mobile robot dynamic environment mapping. Mediterranean Conference on Control and Automation 2007, pages 1-8, Athens, Greece, 27-29 July 2007.
[49] Thomas Collins, J.J. Collins and Conor Ryan. Occupancy grid mapping: an empirical evaluation. Mediterranean Conference on Control and Automation 2007, pages 1-6, Athens, Greece, 27-29 July 2007.
[50] Alberto Elfes. Using occupancy grids for mobile robot perception and navigation. Computer, 22(6):46-57, 1989.
[51] Hugh F. Durrant-Whyte. Uncertain geometry in robotics. IEEE Journal of Robotics and Automation, 4(1):23-31, February 1988.
[52] Randall C. Smith and Peter Cheeseman. On the representation and estimation of spatial uncertainty. The International Journal of Robotics Research, 5(4):56-68, 1986.
[53] J.E. Banta, Y. Zhien, X. Z. Wang, G. Zhang, M. T. Smith, and M. A. Abidi. A best-next-view algorithm for three-dimensional scene reconstruction using range images. In Intelligent Robotics and Computer Vision XIV Session of Intelligent Systems and Advanced Manufacturing Symposium (SPIE), pages 418-429, 1995.
[54] Richardo Pito and Ruzena Bajcsy. A solution to the next best view problem for automated CAD model acquisition of free-form objects using range cameras. In Proceedings of the SPIE Symposium on Intelligent Systems and Advanced Manufacturing, pages 78-89 vol. 2596, 1999.
[55] B. Curless and M. Levoy. A volumetric method for building complex models from range images. Proc. ACM SIGGRAPH, 1996.
[56] K. Kakusho, T. Kitahashi, K. Kondo and J. C. Latombe. Continuous purposive sensing and motion for 2-D map building. In Proc. IEEE Int. Conf. of Syst., Man and Cyb., Vancouver (BC), pages 1472-1477.
[57] Hector Gonzáles-Baños and Jean-Claude Latombe. A randomized art-gallery algorithm for sensor placement. In SCG'01: Proceedings of the Seventeenth Annual Symposium on Computational Geometry, pages 232-240, New York, USA, 2001.
[58] Bengt J. Nilsson. Guarding Art Galleries - Methods for Mobile Guards. PhD thesis, Lund University, 1995.
[59] V. Chvátal. A combinatorial theorem in plane geometry. J. Combinatorial Theory, Series B, 18:39-41, 1975.
[60] Steve Fisk. A short proof of Chvátal's watchman theorem. J. Comb. Theory, Ser. B, 24(3):374, 1978.
[61] D.T. Lee and Arthur K. Lin. Computational complexity of art gallery problems. IEEE Transactions on Information Theory, 32(2):276-282, March 1986.
[62] Joseph O'Rourke. Art Gallery Theorems and Algorithms. Oxford University Press, New York, NY, 1987.
[63] Yiming Ye and John K. Tsotsos. Sensor planning in 3D object search. Computer Vision and Image Understanding, 73(2):145-168, 1999.
[64] Thomas C. Shermer. Recent results in art galleries [geometry]. Proceedings of the IEEE, 80(9):1384-1399, September 1992.
[65] J. Kahn, M. Klawe and D. Kleitman. Traditional galleries require fewer watchmen. SIAM Journal on Algebraic and Discrete Methods, 4(2):194-206, June 1983.
[66] Wei-Pang Chin and Simeon Ntafos. Shortest watchman routes in simple polygons. Discrete & Computational Geometry, 6(1):9-31, 1991. Springer-Verlag.
[67] Wei-Pang Chin and Simeon Ntafos. Optimum watchman routes. In Proceedings of the Second Annual Symposium on Computational Geometry, pages 24-33, New York, USA, 1986.
[68] Andrea Bottino and Aldo Laurentini. A practical iterative algorithm for sensor positioning. 10th IEEE Conference on Emerging Technologies and Factory Automation 2005 (ETFA 2005), 1:1089-1092, September 2005.
[69] David Avis and Godfried Toussaint. An efficient algorithm for decomposing a polygon into star-shaped polygons. Pattern Recognition, 13(6):395-398, 1981.
[70] Svante Carlsson, Hakan Jonsson and Bengt J. Nilsson. Finding the shortest watchman route in a simple polygon. In ISAAC '93: Proceedings of the 4th International Symposium on Algorithms and Computation, pages 58-67, London, UK, 1993. Springer-Verlag.
[71] Esther M. Arkin, Sandor P. Fekete and Joseph S.B. Mitchell. Approximation algorithms for lawn mowing and milling. Computational Geometry: Theory and Applications, 17(2):25-50, October 2000.
[72] Svante Carlsson and Hakan Jonsson. Computing a shortest watchman path in a simple polygon in polynomial time. In WADS '95: Proceedings of the 4th International Workshop on Algorithms and Data Structures, pages 122-134, London, UK, 1995. Springer-Verlag.
[73] Aude Oliva and Antonio Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3):145-175, 2001.
[74] Jean-Claude Latombe. Randomized Motion Planning. Grenoble, 2000. Retrieved from Stanford Artificial Intelligence Laboratory website at http://ai.stanford.edu.
[75] Marcelo C. Couto, Cid C. de Souza and Pedro J. de Rezende. An exact and efficient algorithm for the orthogonal art gallery problem. In Proc. of XX Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI 2007), pages 87-94, 2007.
[76] C. Jordan. Cours d'Analyse. Second edition, 1893.
[77] B. Tovar, S.M. LaValle, and R. Murrieta. Optimal navigation and object finding without geometric maps or localization. In Proceedings of IEEE International Conference on Robotics and Automation 2003 (ICRA '03), pages 464-470 vol. 1, 14-19 September 2003.
[78] Bengt J. Nilsson and Derrick Wood. Optimum watchmen routes in spiral polygons. Proc. 2nd Conf. Comput. Geom., The University of Ottawa, Canada, pages 269-272, 1990.
[79] Franco P. Preparata and Michael Ian Shamos. Computational Geometry - An Introduction. 1985. Springer-Verlag.
[80] O. Aichholzer, R. Fabila-Monroy, D. Flores-Peñaloza, T. Hackl, C. Huemer, J. Urrutia, and B. Vogtenhuber. Modem illumination of monotone polygons. In Proc. European Workshop on Computational Geometry EuroCG '09, pages 167-170, Brussels, Belgium, 2009.
[81] A. Fournier and D. Y. Montuno. Triangulating simple polygons and equivalent problems. ACM Transactions on Graphics, 3(2):153-174, 1984.
[82] Christian Icking and Rolf Klein. The two guards problem. In Proc. 7th ACM Symposium on Computational Geometry, pages 166-175, 1991.
[83] W.-P. Chin. The zookeeper route problem. Information Sciences, 63:245-259, 1992.
[84] W.-P. Chin and S. Ntafos. Optimum zoo-keeper routes. Congressus Numerantium, 58:257-268, 1987.
[85] S. Ntafos and M. Tsoukalas. On k-aquarium-keeper and zookeeper routes. Congressus Numerantium, 83:25-32, 1991.
[86] L. Lulu and A. Elnagar. Guarding polygons with holes for robot motion planning applications. 2004 IEEE International Conference on Systems, Man and Cybernetics, pages 923-928, 2004.
[87] I. Bjorling-Sachs and D. L. Souvaine. An efficient algorithm for guard placement in polygons with holes. Discrete and Computational Geometry, 13(1):77-109, 1995. Springer-Verlag, New York.
[88] H. Edelsbrunner, J. O'Rourke and E. Welzl. Stationing guards in rectilinear art galleries. Computer Vision, Graphics and Image Processing, 27:167-176, 1984.
[89] J. Sack and G. Toussaint. Guard placement in rectilinear polygons. In Computational Morphology, G. Toussaint, Ed., New York, 1988.
[90] J. Sack and G. Toussaint. A linear-time algorithm for decomposing rectilinear star-shaped polygons into convex quadrilaterals. In Proc. 19th Allerton Conf. Communication, Control and Computing, pages 21-30, 1981.
[91] J. Czyzowicz, E. Rivera-Campo, N. Santoro, J. Urrutia and J. Zaks. Guarding rectangular art galleries. Discrete Applied Mathematics, 50(2):149-157, May 1994.
[92] Mark de Berg, Otfried Cheong, Marc van Kreveld and Mark Overmars. Computational Geometry: Algorithms and Applications. 1998. Springer.
[93] B. Chazelle and D.P. Dobkin. Optimal convex decompositions. In G.T. Toussaint, editor, Computational Geometry, pages 63-133. North Holland, Amsterdam, Netherlands, 1985.
[94] A. Lingas. The power of non-rectilinear holes. In Proc. 9th Colloq. Automata, Languages and Programming, pages 369-383, 1982.
[95] J. M. Keil. Decomposing a polygon into simpler components. SIAM Journal of Computing, 14:799-817, 1985.
[96] R. Liu and S. Ntafos. On partitioning rectilinear polygons into star-shaped polygons. Algorithmica, 6:771-800, 1991.
[97] M. R. Garey, D. S. Johnson, F.P. Preparata and R. E. Tarjan. Triangulating a simple polygon. Information Processing Letters, 7:175-179, 1978.
[98] R.E. Tarjan and C.J. Van Wyk. An O(n log log n)-time algorithm for triangulating a simple polygon. SIAM Journal of Computing, 17:143-178, 1988.
[99] B. Chazelle. Triangulating a simple polygon in linear time. In Proc. 31st Symposium on Foundations of Computer Science, pages 220-230, 1990.
[100] Steven S. Skiena. The Algorithm Design Manual. New York, 1998. Springer-Verlag.
[101] Alexey V. Skvortsov and Yuri L. Kostyuk. Efficient algorithms for Delaunay triangulation. Geoinformatics: Theory and Practice (Tomsk State University), 1:22-47, 1998.
[102] J.C. Culberson and R. A. Reckhow. Covering polygons is hard. In Proc. 29th Symposium on Foundations of Computer Science, pages 601-611, 1988.
[103] A. Aggarwal. The art gallery theorem: its variations, applications and algorithmic aspects. PhD thesis, Johns Hopkins University, 1984.
[104] J. M. Keil. Minimally covering a horizontally convex polygon. In Proc. 2nd ACM Symposium on Computational Geometry, pages 43-51, 1986.
[105] D. Franzblau and D. Kleitman. An algorithm for covering polygons with rectangles. Information and Control, 63:164-189, 1984.
[106] R. Motwani, A. Raghunathan and H. Saran. Covering orthogonal polygons with star polygons: the perfect graph approach. Journal of Computer and System Sciences, 40:19-48, 1989.
[107] G.J.E. Rawlins. Explorations in Restricted-Orientation Geometry. PhD thesis, University of Waterloo, 1987.
[108] D.T. Lee and F.P. Preparata. An optimal algorithm for finding the kernel of a polygon. Journal of the ACM, 26:415-421, 1979.
[109] Thomas C. Shermer. On recognizing unions of two convex polygons and related problems. Pattern Recognition Letters, 1991.
[110] P. Belleville. Two-guarding simple polygons. In Proc. 4th Canadian Conference on Computational Geometry, pages 103-108, 1992.
[111] W. Lipski, E. Lodi, F. Luccio, C. Mugnai and L. Pagli. On two-dimensional data organization. Fundamenta Informaticae, 2:245-260, 1983.
[112] T. Ohtsuki, M. Sato, M. Tachibana and S. Torii. Minimum partitioning of rectilinear regions. Transactions of the Information Processing Society of Japan, 1983.
[113] J. O'Rourke and K.J. Supowit. Some NP-hard polygon decomposition problems. IEEE Transactions on Information Theory, IT-30:181-190, 1983.
[114] D. Kortenkamp, R. P. Bonasso and R. Murphy. AI-based Mobile Robots: Case Studies of Successful Robot Systems. Cambridge, MA, 1998. MIT Press.
[115] Petr Stepan, Miroslav Kulich and Libor Preucil. Robust data fusion with occupancy grid. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 35(1):106-115, February 2005.
[116] T. Collins, J. Collins, S. O'Sullivan and M. Mansfield. Evaluating techniques for resolving redundant information and specularity in occupancy grids. In AI 2005: Advances in Artificial Intelligence, pages 235-244, 2005.
[117] H. Moravec and A. Elfes. High resolution maps from wide angle sonar. In Proceedings of the 1985 IEEE International Conference on Robotics and Automation, 1985.
[118] L. Matthies and A. Elfes. Integration of sonar and stereo range data using a grid-based representation. In Proceedings of the 1988 International Conference on Robotics and Automation, 1988.
[119] S. Thrun. Exploration and model building in mobile robot domains. In Proceedings of IEEE International Conference on Neural Networks, Seattle, Washington, pages 175-180, 1993.
[120] Kurt Konolige. Improved occupancy grids for map building. Autonomous Robots, 4:351-367, 1997.
[121] S. Thrun. Learning occupancy grids with forward models. In Proceedings of the Conference on Intelligent Robots and Systems (IROS'2001), 2001.
[122] Bernt Schiele and James L. Crowley. A comparison of position estimation techniques using occupancy grids. In Proc. of IEEE International Conference on Robotics and Automation 1994, pages 1628-1634 vol. 2, May 1994.
[123] Wolfram Burgard, Cyrill Stachniss, Giorgio Grisetti, Maren Bennewitz and Christian Plagemann. Introduction to Mobile Robotics: Mapping with Elevation Maps, PowerPoint slides. Retrieved from University of Freiburg Autonomous Intelligent Systems webpage: http://ais.informatik.uni-freiburg.de/teaching/ss08/robotics/slide/new/p_elevationmaps.pdf
[124] S. Lacroix, I.-K. Jung and A. Mallet. Digital elevation map building from low altitude stereo imagery. Robotics and Autonomous Systems, 41(2):119-127, November 2002.
[125] Shintaro Uchida, Shoichi Maeyama, Akihisa Ohya and Shin'ichi Yuta. Position correction using elevation map for mobile robot on rough terrain. In Proceedings of 1998 IEEE/RSJ Conference on Intelligent Robots and Systems, pages 582-587 vol. 1, October 1998.
[126] J. Sack. An O(n log n) algorithm for decomposing simple rectilinear polygons into convex quadrilaterals. In Proc. 20th Allerton Conf. Communication, Control and Computing, pages 64-74, 1982.
[127] Zoltán Füredi and D. J. Kleitman. The prison yard problem. Combinatorica, 14(3):287-300, 1994. Springer-Verlag.
[128] J. O'Rourke. Galleries need fewer mobile guards: a variation on Chvátal's theorem. Geometriae Dedicata, 14:273-283, 1983.
[129] T. Shermer. Several short results in the combinatorics of visibility. CMPT TR 91-2, Simon Fraser University, June 1991.
[130] Xuehou Tan and Tomio Hirata. Constructing shortest watchman routes by divide-and-conquer. In ISAAC '93: Proceedings of the 4th International Symposium on Algorithms and Computation, pages 68-77, London, UK, 1993. Springer-Verlag.
[131] Frank Hoffmann, Christian Icking, Rolf Klein and Klaus Kriegel. A competitive strategy for learning a polygon. In SODA '97: Proceedings of the 8th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 166-174, Philadelphia, PA, USA, 1997.
[132] Xiaotie Deng, Tiko Kameda and Christos Papadimitriou. How to learn an unknown environment. In Proceedings of 32nd Annual Symposium on Foundations of Computer Science 1991, pages 298-303, 1-4 October 1991.
[133] Frank Hoffmann, Christian Icking, Rolf Klein, and Klaus Kriegel. The polygon exploration problem. SIAM J. Comput., 31(2):577-600, 2001.
[134] Steven M. LaValle, Hector H. Gonzales-Banos, Craig Becker and Jean-Claude Latombe. Motion strategies for maintaining visibility of a moving target. In Proc. IEEE Int. Conf. of Syst., Man and Cyb., pages 1472-1477, 1997.
[135] Simeon Ntafos. Watchman routes under limited visibility. Comput. Geom. Theory Appl., 1(3):149-170, 1992.
[136] Andrew Howard, Maja J. Mataric and Gaurav S. Sukhatme. An incremental deployment algorithm for mobile robot teams. In Proceedings of the 2002 IEEE/RSJ Intl. Conference on Intelligent Robots and Systems, Lausanne, Switzerland, October 2002.
[137] Esther M. Arkin, Joseph S.B. Mitchell, and Christine D. Piatko. Minimum-link watchman tours. Information Processing Letters, 86(4):203-207, May 2003.
[138] Benjamin Kuipers. Lecture 4: Occupancy Grids, CS 395T: Intelligent Robotics PowerPoint slides. Retrieved from University of Texas, Department of Computer Science webpage: http://www.cs.utexas.edu/~kuipers/handouts/S07/L4%20occupancy%20grids.pdf