
S. E. Ashley, R. Green, ‘A Method for Identifying Irregular Lattices of Hexagonal
Tiles in Real-Time’, Proceedings of Image and Vision Computing New Zealand 2007,
pp. 271–275, Hamilton, New Zealand, December 2007.
A Method for Identifying Irregular Lattices of
Hexagonal Tiles in Real-time
Steven E. Ashley and Richard Green
Email: [email protected]
Abstract
In this paper we propose a real-time method for the detection of hexagonal tiles arranged in an irregular lattice.
The method exploits the geometric properties of hexagons to aid detection. The recognition algorithm works by
detecting objects on the boundary of the lattice and peeling them away, possibly exposing further internal objects.
The algorithm terminates when no further objects can be detected.
Keywords: object recognition, hexagonal lattice, contour.
1 Introduction

This paper proposes a computational strategy for recognizing irregular lattices of hexagonal tiles in real-time. One possible application of this technology is to provide real-time help with board games that use hexagonal tiles, such as Tantrix.

The goal of Tantrix is to connect the tiles in such a way that the colors of all connecting links match. There are many other rules that place further restrictions on the placement of tiles.

When tiles are touching (as in Figure 1), many simple techniques fail because they are unable to detect the boundary between pieces. These techniques include simple image segmentation, blob tracking, and contour-based methods that rely on the contour being an accurate representation of the outline of an object. Other techniques that work, but are particularly slow, include Hough transformations and per-pixel image matching. Haar classifiers seem a viable alternative; however, they require extensive training data.

2 Proposed Solution

The proposed solution is based on partial shape recognition and is broken into several steps.

2.1 Board Identification

To identify the location of the board we apply several levels of thresholding until an appropriately sized square can be found. Squares are detected by applying Suzuki's algorithm [1] for connected-component detection, applying the Douglas-Peucker algorithm [2], and finally selecting only polygons with four vertices and angles of 90 ± 10 degrees.

Figure 1: Tantrix pieces on a square sheet of paper.

2.2 Board Projection

The orientation of the board is determined by calculating the distances from each point in the square to each corner of the bitmap.

    u = (c11·x + c12·y + c13) / (c31·x + c32·y + c33)
    v = (c21·x + c22·y + c23) / (c31·x + c32·y + c33)        (1)
Equation (1) maps any point (x, y) on the destination image to a pixel (u, v) on the source image.
The projection matrix of the board is calculated by solving equation (1) for the coefficients cij, with c33 given as 1. A
square bitmap is created with a width and height
equal to the maximum distance between any two of
the corners of the board. Each pixel on the
destination image is projected onto the source
image (using the projection matrix calculated
above) to determine its pixel value.
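The projection step above can be sketched numerically. Assuming the standard perspective mapping of equation (1) with c33 fixed at 1, the four corner correspondences between the destination square and the board give an 8×8 linear system (the function names below are illustrative, not from the paper):

```python
import numpy as np

def perspective_coefficients(dst_pts, src_pts):
    """Solve equation (1) for the eight unknown coefficients, taking c33 = 1.

    dst_pts: the four (x, y) corners of the square destination bitmap.
    src_pts: the corresponding (u, v) corners of the board in the source image.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(dst_pts, src_pts):
        # u * (c31*x + c32*y + 1) = c11*x + c12*y + c13, rearranged to be linear in c
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.append(v)
    return np.linalg.solve(np.array(A, float), np.array(b, float))

def project(c, x, y):
    """Map a destination pixel (x, y) to a source pixel (u, v) via equation (1)."""
    w = c[6] * x + c[7] * y + 1.0
    return ((c[0] * x + c[1] * y + c[2]) / w,
            (c[3] * x + c[4] * y + c[5]) / w)
```

Each pixel of the destination bitmap is then filled by evaluating project() and sampling the source image at (u, v), as described above.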
Figure 2: The projected board in Figure 1.

2.3 Contour Extraction

The projected board is now prepared by passing it through a grayscale threshold function (> 170).

Figure 3: Figure 2 after thresholding.

The outline shapes in the above image are found by applying Suzuki's algorithm [1]. Any contours that occupy less than 0.1% of the image are removed, along with any contours that are not holes. The resulting contours are shown in Figure 4.

Figure 4: Extracted contours (outlined in red).

2.4 Corner Detection

The resulting contour is approximated as a polygon using the Douglas-Peucker algorithm [2] with an approximation accuracy of 5.

As the order of vertices is clockwise (the polygon is a hole), we can assume that any vertex with a clockwise angle of 120 degrees (10 degrees tolerance) is the corner of a tile. Sequences of one or more corners (chains) are kept for processing.

Figure 5: Vertices are colored green if they conform to the criteria for being the corner of a tile, or red otherwise.
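The corner test above can be sketched as follows; this is a minimal sketch assuming polygon vertices in clockwise order (as for hole contours), with illustrative names not taken from the paper. The same signed-angle test with a 90-degree target matches the square check used in Board Identification:

```python
import math

def signed_angle(prev, pt, nxt):
    """Signed angle (degrees) at pt between the edges pt->prev and pt->nxt.

    For a polygon traversed clockwise, convex corners come out positive,
    so a tile corner shows up as roughly +120 degrees; reflex vertices
    (the junctions between touching tiles) come out with the wrong sign."""
    v1 = (prev[0] - pt[0], prev[1] - pt[1])
    v2 = (nxt[0] - pt[0], nxt[1] - pt[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    return math.degrees(math.atan2(cross, dot))

def corner_chains(polygon, target=120.0, tol=10.0):
    """Collect runs of consecutive tile corners (chains) from a polygon.

    Returns a single chain containing every vertex when the whole polygon
    loops (e.g. an isolated hexagon), otherwise the non-looping chains."""
    n = len(polygon)
    is_corner = [abs(signed_angle(polygon[i - 1], polygon[i],
                                  polygon[(i + 1) % n]) - target) <= tol
                 for i in range(n)]
    if all(is_corner):
        return [list(polygon)]
    chains, chain = [], []
    start = next(i for i in range(n) if not is_corner[i])
    for k in range(1, n + 1):          # walk once around, starting past a gap
        i = (start + k) % n
        if is_corner[i]:
            chain.append(polygon[i])
        elif chain:
            chains.append(chain)
            chain = []
    return chains
```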
2.5 Tile Identification

Chains of corners are processed according to the number of vertices in the chain and whether or not the chain loops.

Looping chains with exactly 6 vertices are mapped directly to hexagonal tiles. Looping chains of other lengths are (perhaps incorrectly) discarded.

Non-looping chains with 1 to 5 vertices are mapped to hexagons. Anything else is (perhaps incorrectly) discarded. The hexagon calculations assume the pieces have a constant size with respect to the image size.

Figure 6: Calculating remaining hexagon vertices given a single vertex. The construction uses the known vertex, its previous and next vertices, the angular bisection at the known vertex, and the opposing vertex.

The identification results are shown in Figure 7.

2.6 Removal of Identified Tiles

Finally, the identified tiles are removed by painting over them with the background color. The image is also eroded slightly to prevent the holes inside each tile from leaking out. The algorithm is repeated from the Tile Identification step until no additional tiles are detected.

Figure 8: Remaining tiles once identified tiles are removed.
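The vertex-completion step can be illustrated with a simplified variant: here we assume two adjacent corners (one full edge) are known, rather than the single corner plus fixed tile size and angular bisection used in Figure 6, and generate the remaining vertices by rotating the edge through the hexagon's 60-degree exterior angle (names are illustrative):

```python
import math

def rotate(v, degrees):
    """Rotate a 2-D vector; negative angles rotate clockwise in standard axes."""
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return (v[0] * c - v[1] * s, v[0] * s + v[1] * c)

def complete_hexagon(p0, p1):
    """Given two adjacent vertices of a regular hexagon, in clockwise order,
    compute the remaining four vertices edge by edge."""
    pts = [p0, p1]
    d = (p1[0] - p0[0], p1[1] - p0[1])   # first edge vector
    for _ in range(4):
        d = rotate(d, -60.0)             # exterior angle of a regular hexagon
        pts.append((pts[-1][0] + d[0], pts[-1][1] + d[1]))
    return pts
```

Chains with more known corners over-determine the hexagon, which gives some robustness to noise; the constant-size assumption mentioned above supplies the edge length when only one corner is known.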
3 Results

The final result is shown in Figure 9. Each frame takes 150-160 milliseconds to process. The test machine is a Pentium 4 3.06 GHz Dell Inspiron 5150 laptop running Windows XP Professional (140 MB of RAM available).

Contour extraction fails in areas with poor lighting. We believe this is because an arbitrary threshold value was used for contour extraction.

If the board is removed, changed in color, or altered in size, the algorithm fails completely, as several parts of the algorithm assume the board to be white, square, and of a certain size.

Figure 7: Identified tiles are outlined.

As long as the camera can see the entire board, the algorithm copes well with camera rotations and minor lighting changes.
Figure 10 shows a situation where the algorithm fails by producing an extra hexagon.

Figure 9: All tiles are identified.

Figure 10: The center hexagon is detected twice, once for each pair of edges.

4 Conclusion

Figure 9 above shows the algorithm working in a single situation. There are a number of situations (such as Figure 10) where the algorithm fails, but we believe these can be resolved.

Further work includes making the algorithm more robust to changes in the environment by removing assumptions such as the existence of the playing board. General accuracy could also be improved. Further work is also required to compare this algorithm with other approaches such as Hough transformations and Haar classifiers.

5 References

[1] K. Suzuki, I. Horiba, and N. Sugie, “Linear-time connected-component labeling based on sequential local operations”, Computer Vision and Image Understanding, vol. 89, issue 1, January 2003.
[2] D. H. Douglas and T. K. Peucker, “Algorithms for the reduction of the number of points required to represent a digitized line or its caricature”, Cartographica: The International Journal for Geographic Information and Geovisualization, vol. 10, issue 2, pp. 112-122, 1973.
[3] N. Ansari and E. J. Delp, “Partial shape recognition: a landmark-based approach”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, issue 5, 1990.
[4] J. Revaud, G. Lavoué, Y. Ariki, and A. Baskurt, “Fast and cheap object recognition by linear combination of views”, Proceedings of the 6th ACM International Conference on Image and Video Retrieval, pp. 194-201, 2007.
[5] P. Suetens, P. Fua, and A. J. Hanson, “Computational strategies for object recognition”, ACM Computing Surveys, vol. 24, issue 1, pp. 5-62, 1992.
[6] J. Gausemeier, J. Fruend, C. Matysczok, B. Bruederlin, and D. Beier, “Development of a real time image based object recognition method for mobile AR-devices”, Proceedings of the 2nd International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction, pp. 133-139, 2003.
[7] M. W. Spratling, “Learning Image Components for Object Recognition”, The Journal of Machine Learning Research, pp. 793-815, 2006.
[8] H. Noor, S. H. Mirza, Y. Sheikh, A. Jain, and M. Shah, “Model generation for video-based object recognition”, Proceedings of the 14th Annual ACM International Conference on Multimedia, pp. 715-718, 2006.
[9] Q. Zhang, K. Go, A. Imamiya, and X. Mao, “Robust object identification from inaccurate recognition-based inputs”, Proceedings of the Working Conference on Advanced Visual Interfaces, pp. 248-251, 2004.
[10] S. Belongie, J. Malik, and J. Puzicha, “Shape matching and object recognition using shape contexts”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, issue 4, pp. 509-522, April 2002.
[11] L. De Floriani, “A graph based approach to object feature recognition”, Proceedings of the Third Annual Symposium on Computational Geometry, pp. 100-109, 1987.
[12] L. Stark and K. Bowyer, “An aspect graph-based control strategy for 3-D object recognition”, Proceedings of the 1st International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, vol. 2, pp. 697-703, 1988.
[13] OpenCV Square Detector program, “http://opencvlibrary.cvs.sourceforge.net/opencvlibrary/opencv/samples/c/squares.c?revision=1.4&view=markup”, rev. 1.4, viewed 1 July 2007.