RESEARCH REPORT
Graph and point cloud registration (Version 1.0)
CTU–CMP–2013–02, January 2013

Miguel Amável Pinheiro
[email protected]
Supervisor: Jan Kybic

Center for Machine Perception, Czech Technical University in Prague

This work was supported by the Fundação para a Ciência e Tecnologia (FCT) through the Ph.D. grant SFRH/BD/77134/2011, by the Czech Science Foundation under the project P202/11/0111 and by the Studentská Grantová Soutěž (SGS) grant number SGS12/190/OHK3/3T/13.

Research Reports of CMP, Czech Technical University in Prague, No. 2, 2013
ISSN 1213-2365

Published by
Center for Machine Perception, Department of Cybernetics
Faculty of Electrical Engineering, Czech Technical University
Technická 2, 166 27 Prague 6, Czech Republic
fax +420 2 2435 7385, phone +420 2 2435 7637, www: http://cmp.felk.cvut.cz
Abstract

The texture information present in images of vessel and vascular structures is often insufficient to match content across two or more images. One approach to this problem is to segment the structure and match its geometric properties across the images, which can be achieved using point or graph matching techniques. A number of algorithms were tested in order to compare the performance of these approaches. We propose a technique that searches for similarities between neighborhoods of nodes, which breaks the problem into smaller tasks. In this report, we present different methods for matching graphs globally and locally, together with an evaluation of each method.
1 Introduction
Current methods for matching pairs or series of images featuring vessels, vascular structures or similar structures use different types of information present in the images. One can match images using texture or intensity information [1], or optimize the matching by defining an energy criterion and finding its minimum. Other algorithms use image descriptors, such as SIFT descriptors or salient feature regions [2], to find key features within the images, enabling a more robust matching between two pictures. This is possible whenever the images to be registered share useful common texture information that can be exploited during matching.
The images we propose to match are acquired with an electron microscope (EM) and a light microscope (LM). The level of similarity between the images gathered from the two devices is often very low, and the images also have very different resolutions [3, 4] – the resolution of a good LM is about 300 nm, while that of an EM can fall below 1 Ångström (reaching a subatomic level). Although the EM images give us a better resolution of the observed tissue, the LM images help us obtain a bigger picture and a better overall map, and it is therefore useful to match both. A similar lack of common information occurs in images extracted with different devices, such as images of the retina surface [5, 6]. In order to overcome these difficulties, a common approach has been to segment the structures present in the images, and then match those structures based on their geometric properties.
There are several other scenarios where graph matching is used, such as other medical imaging applications, cartography and character recognition. In medical imaging in particular, these approaches can be applied to match images of retina, lung or heart vessels. Matching retina and lung vessel images is particularly interesting for building an entire map of the vessel structure, since a single frame is not enough to observe it entirely, whereas for blood or heart vessels one is interested in observing changes in the structure over time.
These structures can be described by different properties. In some cases authors choose to describe them using landmarks [5], graphs [7] or trees. This decision is often tied to the quality of the images and the segmentation approach. The matching stage then takes the form of point, graph or tree matching. Part of this report is based on experiments performed with these types of techniques on a common dataset, resulting in a comparison between the approaches that is built on common ground and is as fair as possible.
Image registration of graph-like structures can generally be divided into two separate modules. The first is the segmentation module, where a method extracts the structure from all the considered images. The second takes the segmented structures and matches them using a matching technique. We assume the first module has already been performed and therefore consider only the problem of matching the structures. We consider not only graph matching approaches but also techniques that match point clouds, thereby disregarding the constraints that the edges in the graphs add.
A variety of graph matching algorithms have been presented in recent years. Some were developed specifically for the problem of biomedical image registration, while others have been presented as general approaches for graph and point matching. The tested methods included in this report were the following:
• Point matching
– Coherent Point Drift (CPD) [8]
– Iterative Closest Point (ICP) [9]
– Iterative Closest Reciprocal Point (ICRP) [10]
– Softassign [11]
– TPS-RPM (Thin-plate spline - robust point matching) [12]
– Random Sample Consensus (RANSAC) [13]
• Graph matching
– Spectral Matching with Affine Constraints (SMAC) [14]
1.1 A local approach
The task of matching two sets of points has been proved NP-hard [15] for sets of points in more than one dimension – we therefore need to approximate the result. Most existing approaches try to find a solution by looking at the transformation of all the points, vertices or edges of one of the sets into the other. This can be a hard problem to solve when the number of outliers or removed branches in the graphs is high. In some cases [13, 16] the method tries to fit all the points using only a sample of them; however, these samples are collected randomly, and the probability of finding the correct correspondence in the opposite set is often low.
An alternative to these methods is to try to find local similarities between the graphs. We reduce the complexity of the problem to the matching of smaller sets or neighborhoods, and we also allow for neighborhoods consisting of outliers, which is a common issue when matching retina, neuronal or lung vessel images. Experiments were performed with various approaches in order to find a good similarity measurement between local graph neighborhoods. This is a different task from finding methods that perform well globally, since we need a relatively fast approach which can identify a good match but can also discard false positives. Here, we again use both point set and graph matching techniques, and the tested methods were
• Point matching
– Angle matching (AM)
– Shape context (SC) [17]
– Nearest neighbor matching (NNM) [18]
– Coherent Point Drift (CPD) [8]
• Graph matching
– Elastic matching based on Gaussian Processes [19]
In the following section, a brief description of the various tested approaches can be found, with bibliographic references. A short description of the generation and contents of the dataset used is included in Section 4. In Section 5, the functions for method validation and comparison are presented, together with some notation, followed by the experimental results in Section 6. Finally, in Section 7, a discussion of the results is provided, with conclusions.
2 State of the Art

2.1 RANSAC
The RANSAC method was first presented in 1981 [13] as a method for fitting a model to experimental data. It does not target the point matching problem directly, but it can be adapted to our purposes.

On each iteration i, the approach selects s sample points from both point clouds x_A and x_B, which we refer to as s_A^(i) and s_B^(i). We then calculate a transformation T^(i) based on these two sets of points as

    T^(i) = (s_A^(i))^(-1) . s_B^(i).    (1)

We apply this transformation to all the points, T^(i)(x_B) = (T^(i))^T . x_B, and count how many inliers the transformation produces. We do this by counting how many points in T^(i)(x_B) have at least one point of x_A within a distance α; each such point counts as one inlier. The transformation which produces the most inliers is the output of the approach.

The number of iterations N required to have a probability p of finding the optimal solution is estimated as

    N = log(1 - p) / log(1 - p_i^s),    (2)

where p_i is the fraction of inliers obtained with transformation T^(i) at iteration i. N is updated whenever the method finds a new highest number of inliers.
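As an illustration, the adapted procedure can be sketched as follows. This is a minimal Python sketch under assumed conventions – 2D points, an affine model fitted by least squares, and illustrative names and parameter values – not the implementation used in the experiments.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine map M (3 x 2) with [src, 1] @ M ~= dst."""
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def ransac_match(xa, xb, s=3, alpha=0.1, p=0.99, max_iter=500, seed=0):
    """Adapted RANSAC: sample s points from each cloud, fit an affine map
    from the samples, and count points of the transformed xb that lie
    within alpha of some point of xa. N is updated as in Eq. (2)."""
    rng = np.random.default_rng(seed)
    best_inl, best_M, N, i = -1, None, max_iter, 0
    xb_h = np.hstack([xb, np.ones((len(xb), 1))])
    while i < N and i < max_iter:
        sa = xa[rng.choice(len(xa), s, replace=False)]
        sb = xb[rng.choice(len(xb), s, replace=False)]
        M = estimate_affine(sb, sa)                    # map xb into xa's frame
        d = np.linalg.norm((xb_h @ M)[:, None, :] - xa[None, :, :], axis=2)
        inl = int((d.min(axis=1) <= alpha).sum())      # inlier count
        if inl > best_inl:
            best_inl, best_M = inl, M
            pi = inl / len(xb)                          # inlier fraction
            if 0 < pi < 1:                              # update N per Eq. (2)
                N = int(np.ceil(np.log(1 - p) / np.log(1 - pi ** s)))
        i += 1
    return best_M, best_inl
```

Note that, as discussed in Section 6.1, the randomly drawn samples rarely form correct correspondences, which is why the number of iterations needed in practice is large.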
2.2 Softassign
The 1997 article by Gold et al. [11] presented an expectation-maximization algorithm which minimizes the criterion

    sum_{j=1}^{J} sum_{k=1}^{K} m_jk ||X_j - t - A Y_k||^2 - g(A) - α sum_{j=1}^{J} sum_{k=1}^{K} m_jk    (3)

for two sets of points X = {X_j}, j = 1, ..., J, and Y = {Y_k}, k = 1, ..., K. The transformation between the two sets is described by affine parameters (A; t) and a match matrix m, with m_jk ∈ [0; 1], which establishes the correspondences between the points of both sets. The matrix has size (J + 1) × (K + 1), with an additional row and column for unmatched points (outliers), and satisfies the constraints sum_{j=1}^{J+1} m_jk = 1 for all k and sum_{k=1}^{K+1} m_jk = 1 for all j. The function g(A) is a regularization term that penalizes large values of the affine components of the transformation, and α encourages non-trivial solutions. The update of the variables in the expectation-maximization context introduces the softassign concept, which normalizes the values of the match matrix, including those corresponding to outliers: the algorithm normalizes all rows and columns iteratively until convergence.
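The alternating normalization can be sketched as follows; this is a minimal illustration assuming a strictly positive match matrix, with illustrative iteration counts, and it omits the annealing schedule that softassign wraps around this step.

```python
import numpy as np

def softassign_normalize(m, n_iter=500, tol=1e-9):
    """Alternating row/column normalization of the (J+1) x (K+1) match
    matrix: the J real rows are scaled to sum to 1 over all K+1 entries,
    then the K real columns to sum to 1 over all J+1 entries; the last
    row and column act as outlier slacks and are never normalized
    themselves. Iterates until the matrix stops changing."""
    m = m.astype(float).copy()
    for _ in range(n_iter):
        prev = m.copy()
        m[:-1, :] /= m[:-1, :].sum(axis=1, keepdims=True)   # rows 1..J
        m[:, :-1] /= m[:, :-1].sum(axis=0, keepdims=True)   # columns 1..K
        if np.abs(m - prev).max() < tol:
            break
    return m
```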
2.3 TPS-RPM
The TPS-RPM algorithm was introduced in 2003 [12], in a paper co-authored by one of the authors of the softassign algorithm. The method retains many ideas of the previous work, such as the correspondence matrix with an extra row and column to identify outliers, and a very similar objective function to minimize. A new entropy term T sum_{j=1}^{J} sum_{k=1}^{K} m_jk log m_jk is introduced into the criterion, where T is called a temperature parameter and the remaining variables are defined as in softassign. As the algorithm iterates, the temperature is reduced, controlling the level of convexity of the objective function. The g(A) function is replaced by a new operator L, with g(A) = -||LA||^2, which is referred to as a smoothness measure.

Furthermore, the method also handles a non-rigid case, represented by the function

    A(Y_k, d, w) = Y_k . d + φ(Y_k) . w,    (4)

where d is a (D + 1) × (D + 1) affine transformation matrix, D is the number of dimensions, w is a K × (D + 1) warping coefficient matrix, and φ(Y_k) is a 1 × K vector of thin-plate spline basis functions, φ_b(Y_k) = ||Y_b - Y_k||^2 log ||Y_b - Y_k|| for b = 1, ..., K. With this notation it is possible to represent non-rigid transformations.
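The thin-plate spline basis of Eq. (4) can be computed as below; a small sketch assuming 2D points and the usual convention that the basis value is zero at r = 0.

```python
import numpy as np

def tps_basis(Y):
    """K x K matrix of thin-plate spline basis values:
    phi[k, b] = ||Y_b - Y_k||^2 * log ||Y_b - Y_k||, zero on the diagonal."""
    r = np.linalg.norm(Y[None, :, :] - Y[:, None, :], axis=2)  # pairwise distances
    with np.errstate(divide="ignore", invalid="ignore"):
        phi = np.where(r > 0, r ** 2 * np.log(r), 0.0)         # 0 * log 0 -> 0
    return phi
```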
2.4 CPD
In 2010, Myronenko and Song presented an alignment technique for rigid, affine and non-rigid transformation cases, called Coherent Point Drift [8]. The authors treat the task as a probability density estimation problem, with one of the sets being data points X = (x_1, ..., x_N)^T and the second representing Gaussian Mixture Model (GMM) centroids Y = (y_1, ..., y_M)^T. The GMM probability density is therefore

    p(x) = sum_{m=1}^{M+1} P(m) p(x|m),    (5)

    P(m) = (1 - ω) . 1/M    if m ≠ M + 1,
           ω . 1/N          if m = M + 1,

    p(x|m) = 1 / (2πσ^2)^(D/2) . exp( -||x - y_m||^2 / (2σ^2) ),

where σ^2 is the variance shared by all GMM components, D is the number of dimensions, and ω is the weight of the uniform distribution for component M + 1, with 0 ≤ ω ≤ 1. The added component M + 1 accounts for outliers and noise present in the task. The objective function is the negative log-likelihood, leading to

    Q(θ, σ^2) = 1/(2σ^2) sum_{n=1}^{N} sum_{m=1}^{M} P(m|x_n) ||x_n - T(y_m, θ)||^2 + D/2 sum_{n=1}^{N} sum_{m=1}^{M} P(m|x_n) log σ^2,    (6)

where T(y_m, θ) is the transformation of the point y_m using parameters θ.
An expectation-maximization algorithm is used. In the E-step, the values of P(m|x_n) are recomputed and normalized. In the M-step, the transformation parameters and the variance of the density functions are updated using the new values of P.

The article goes on to describe more detailed approaches to calculate T(y_m, θ) for the rigid and affine cases. In the rigid case, the transformation is represented by a rotation matrix R and a scaling parameter s together with a translation vector t, i.e. T(y_m; R, t, s) = sRy_m + t. In the affine case, the transformation is represented by a general affine transformation matrix B, again together with a translation vector, i.e. T(y_m; B, t) = By_m + t.
A non-rigid approach is also presented. Here, the transformation takes the form T(Y, v) = Y + v(Y), where v is a displacement function. The objective function is changed and a regularization term is added, leading to

    Q(v, σ^2) = 1/(2σ^2) sum_{n=1}^{N} sum_{m=1}^{M} P(m|x_n) ||x_n - (y_m + v(y_m))||^2 + D/2 sum_{n=1}^{N} sum_{m=1}^{M} P(m|x_n) log σ^2 + λ/2 ||Lv||^2,    (7)

where λ is a trade-off parameter for the regularization and L is a regularization operator [20]. Taking the derivative with respect to v and setting it to zero, we obtain equations for the minimum. The non-linear transformation turns out to be T(Y, v(Y)) = T(Y, W) = Y + GW, where G is an M × M matrix with elements g_ij = exp( -1/2 ||(y_i - y_j)/β||^2 ) and W is an M × D matrix with elements w_m = 1/(σ^2 λ) sum_{n=1}^{N} P(m|x_n) (x_n - (y_m + v(y_m))). The value of σ^2 is also obtained straightforwardly by differentiating the Q function. The algorithm proceeds as in the affine and rigid cases.
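The E-step responsibilities P(m|x_n) of the mixture in Eq. (5) can be sketched as below; a minimal illustration assuming the outlier component contributes a constant ω/N to the denominator, with illustrative naming, not the authors' implementation.

```python
import numpy as np

def cpd_estep(X, Y, sigma2, omega):
    """Posterior responsibilities P(m|x_n) for the CPD mixture of Eq. (5):
    M Gaussian components centred on Y plus a uniform outlier component."""
    N, D = X.shape
    M = Y.shape[0]
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)       # N x M
    gauss = np.exp(-d2 / (2 * sigma2)) / (2 * np.pi * sigma2) ** (D / 2)
    num = (1 - omega) / M * gauss                                  # P(m) p(x|m)
    denom = num.sum(axis=1, keepdims=True) + omega / N             # + outlier term
    return num / denom                                             # P(m|x_n)
```

With ω = 0, each row of the result sums to one; with ω > 0, the missing mass is the posterior probability of the outlier component.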
2.5 ICP
The Iterative Closest Point algorithm is a very popular method, first presented in 1992 by Besl and McKay [9]. In the method's own notation, the task is to find a transformation with which the data points X fit the model points P in three-dimensional space (3D) – it can also be adapted to the 2D case. The method finds, for each point of P_k, the closest point in X; this operation is denoted C(P_k, X), i.e.

    Y_k = C(P_k, X).    (8)

For each iteration k, a set of points Y_k is computed as C(P_k, X). The algorithm then uses a quaternion representation (although it can be extended to allow other types of transformations) to obtain the transformation between P_0 and Y_k, described by a vector q_k. This transformation is applied to P_0 to obtain a new set of points, i.e. P_{k+1} = q_k(P_0). These steps are repeated until the difference between the mean squared matching errors (between X and P_k) of subsequent iterations, d_k and d_{k+1}, falls below a threshold τ.
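A basic version of the loop can be sketched as follows. This Python sketch uses an SVD-based rigid fit (Kabsch) in place of the paper's quaternion step, with illustrative names and defaults.

```python
import numpy as np

def best_rigid(P, Y):
    """Least-squares rotation R and translation t with R @ p + t ~= y
    for paired points; SVD-based (Kabsch), used here instead of the
    quaternion step of the original paper."""
    cp, cy = P.mean(axis=0), Y.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Y - cy))   # cross-covariance SVD
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:                          # avoid reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    return R, cy - R @ cp

def icp(X, P0, tau=1e-8, max_iter=100):
    """Basic ICP: repeatedly pair each point of P with its closest point
    in X (the operator C of Eq. (8)) and refit the rigid transform until
    the change in mean squared error falls below tau."""
    P, prev = P0.copy(), np.inf
    for _ in range(max_iter):
        d = np.linalg.norm(P[:, None, :] - X[None, :, :], axis=2)
        Yk = X[d.argmin(axis=1)]                      # closest points C(P, X)
        R, t = best_rigid(P, Yk)
        P = P @ R.T + t
        mse = ((P - Yk) ** 2).sum(axis=1).mean()
        if abs(prev - mse) < tau:
            break
        prev = mse
    return P
```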
2.6 ICRP

The Iterative Closest Reciprocal Point [10] algorithm is similar to ICP, as the name implies. To improve robustness, the method only considers pairs of points which are reciprocally the closest points between the two sets.
2.7 SMAC
The paper presented in 2006 by Cour et al. [14] introduces a technique named SMAC in which two graphs G = (V, E, A) and G' = (V', E', A') are matched. The method tries to obtain the mapping between the vertices V and V', and consequently also between E and E', based on the edge attributes A and A'.

The method builds a compatibility matrix W of size |V||V'| × |V||V'|, where each entry describes the compatibility between two edges of the two graphs: W_{ii',jj'} = f(A_{ij}, A_{i'j'}) where ij ∈ E and i'j' ∈ E'.

Having built this matrix W, the method needs to correctly assign the correspondences between edges of the opposite graphs. Since the task is NP-complete, the proposed algorithm finds an approximate solution by Spectral Matching [21]. In summary, this method considers solely edge information to match graphs, which is advantageous in situations where the distances between vertices have not changed significantly, i.e. the lengths of the edges remain similar – rotation and translation cases.
2.8 Shape Context
Presented in 2001 by Belongie et al. [17], this method matches point sets using a local descriptor at each point. The descriptor is a histogram of the neighborhood around each point, based on distance and angle – the histogram is built on a log-polar mapping around each point. The method compares the two histograms h_i, h_j of K bins of the points p_i, q_j respectively using the χ^2 test statistic,

    C(p_i, q_j) = 1/2 sum_{k=1}^{K} [h_i(k) - h_j(k)]^2 / (h_i(k) + h_j(k)).    (9)

The method then uses a state-of-the-art assignment method between the points of each set to create correspondences.
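The descriptor and the cost of Eq. (9) can be sketched as below; a minimal illustration for 2D points, with illustrative bin counts and radial limits (the paper's exact binning differs).

```python
import numpy as np

def shape_context(points, i, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
    """Log-polar histogram of the positions of all other points relative
    to points[i]: n_r log-spaced radial bins times n_theta angular bins."""
    rel = np.delete(points, i, axis=0) - points[i]
    r = np.linalg.norm(rel, axis=1)
    theta = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
    r_bin = np.clip(np.digitize(r, r_edges) - 1, 0, n_r - 1)
    t_bin = np.minimum((theta / (2 * np.pi) * n_theta).astype(int), n_theta - 1)
    h = np.zeros((n_r, n_theta))
    np.add.at(h, (r_bin, t_bin), 1)                 # accumulate counts per bin
    return h.ravel()

def chi2_cost(hi, hj):
    """Chi-squared histogram distance of Eq. (9), skipping empty bins."""
    denom = hi + hj
    mask = denom > 0
    return 0.5 * ((hi[mask] - hj[mask]) ** 2 / denom[mask]).sum()
```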
2.9 Nearest neighbor matching
This local classifier [18] uses the nearest neighbors to build a description of each point. Given any point x_i ∈ X in a two- or three-dimensional space, the method finds its three nearest neighbors n_1^(x_i), n_2^(x_i), n_3^(x_i), where d(x_i, n_1^(x_i)) < d(x_i, n_2^(x_i)) < d(x_i, n_3^(x_i)) and d(x, y) is the distance between the points x and y. The description is built from these distances through geometric hashing, where the axes are based on the two nearest neighbors. Using this transformation, each point is described by six elements in 3D and five in 2D.
2.10 Elastic matching based on Gaussian Processes
The method presented by Serradell et al. [19] matches graphs using a robust affine matching approach and then, in a separate module, refines the matching by searching for a nonlinear deformation consistent with the initial module. We adapt the second module of the approach, which assumes the transformation between corresponding points x_i and y_i is y_i = T_θ(x_i) + ξ(x_i), where T_θ(x_i) is an affine transformation and ξ(x_i) is a nonlinear deformation, modeled as a Gaussian process with spatial correlation

    E[ξ_i(x) ξ_j(x')] = 0                                       if i ≠ j,
                        σ_ξ^2 exp( -||x - x'||^2 / (2σ_λ^2) )   if i = j and x ≠ x',
                        σ_ξ^2 + ρ_i^2                           if i = j and x = x'.    (10)

This solution groups several matching possibilities and, using a maximum weighted independent set formulation, assigns the graphs' nodes and edges to each other.
3 Problem formulation and notation
Let G_A = (V_A, E_A) be an undirected acyclic graph without loops. Let us also define a subset B_A ⊆ V_A, with elements b_k, k = 1, ..., |B_A|, as the branching points of G_A, i.e.

    B_A = {i ∈ V_A | deg(i) > 2},

where deg(i) is the degree of the vertex i. Also, let us define the subset L_A ⊆ V_A as the leaf nodes of the graph G_A, or

    L_A = {i ∈ V_A | deg(i) = 1}.

We refer to the shortest paths between consecutive elements of B_A ∪ L_A as branching connections – every node inside such a connection therefore has degree 2. Let n(i, j), for i, j ∈ B_A ∪ L_A, denote the number of branching connections in the shortest path between i and j. We then define N_i^N as the neighborhood of size N of the branching point i, such that

    N_i^N = {j ∈ B_A ∪ L_A | n(i, j) ≤ N}.

Let us now consider a second graph G_B = (V_B, E_B). This graph is obtained through a geometrical transformation T of the first graph, so that G_B = T(G_A). The transformation may not preserve isomorphism: vertices may be missing or added in G_B. We keep the mapping C : V_A → V_B between corresponding nodes. The number of elements in C lies in the range [1, min(|V_A|, |V_B|)], since there may be missing or extra nodes which have no correspondence in the other graph. The correspondence between branching points is also extracted from C, into a mapping C_B : B_A → B_B whose number of elements lies in the range [1, min(|B_A|, |B_B|)]. The task is therefore to match the two graphs G_A and G_B.
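The sets B_A and L_A follow directly from vertex degrees; a small sketch with an illustrative edge-list graph representation:

```python
def branching_and_leaf_nodes(vertices, edges):
    """Branching points B (degree > 2) and leaf nodes L (degree = 1)
    of an undirected graph given as a vertex list and edge pair list."""
    deg = {v: 0 for v in vertices}
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    B = {v for v, d in deg.items() if d > 2}
    L = {v for v, d in deg.items() if d == 1}
    return B, L
```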
4 Datasets
In order to thoroughly test and compare the performance of each method, several sets of graphs were synthetically generated. Each set has 200 pairs of graphs. In each pair, one graph G_A = (V_A, E_A) is generated from uniformly distributed random points and the minimum spanning tree (MST) [22] of those points. The distribution of vertices that the MST generates is similar to the structure of the real data, hence the choice of this approach. The other graph G_B = (V_B, E_B) of the pair is obtained through a transformation T of the vertices V_A, i.e. G_B = (V_A . T^T, E_A). For each of the sets, the transformation takes different values, in order to test each method's robustness towards each type of transformation. The names and values for each set are as follows:


• Rotation set – T = R(θ) = [ cos(θ)  -sin(θ)  0 ; sin(θ)  cos(θ)  0 ; 0  0  1 ]

• Scale set – T = S(a, b) = [ e^a  0  0 ; 0  e^b  0 ; 0  0  1 ]

• Shear set – T = H(k_a, k_b) = [ 1  k_a  0 ; k_b  1  0 ; 0  0  1 ]

• Translation set – T = T(t_a, t_b) = [ 1  0  t_a ; 0  1  t_b ; 0  0  1 ]

• Affine set – T = A(θ, a, b, k_a, k_b, t_a, t_b) = R(θ) . S(a, b) . H(k_a, k_b) + T(t_a, t_b)

• Noisy set – T = A(θ, a, b, k_a, k_b, t_a, t_b) + φ(V_A)

• Crop set

• Cropnoisy set

where φ(V_A) is an additional B-spline deformation applied to each vertex of the graph. For the Crop set, not only is an affine transformation A(θ, a, b, k_a, k_b, t_a, t_b) applied, but branches of the graphs are also cropped in G_A and G_B. The same cropping is performed for the Cropnoisy set, where the transformation of the vertices is again T = A(θ, a, b, k_a, k_b, t_a, t_b) + φ(V_A).

For every pair of graphs, we keep a mapping C : V_A → V_B, which is in fact not bijective in all sets, since there are unpaired elements in the Crop and Cropnoisy sets. Similar sets with similar transformations were also produced for the three-dimensional scenario. Two examples taken from a 2D and a 3D set are presented in Figure 1.

Figure 1: Examples of (a) 2D and (b) 3D graph pairs related by a small transformation, in red and blue, together with their correspondence C
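The generation procedure described above can be sketched as follows; a minimal illustration for the 2D rotation set, using a simple Prim's MST on the complete Euclidean graph (illustrative names, not the generator used for the report's datasets).

```python
import numpy as np

def minimum_spanning_tree(points):
    """Prim's algorithm on the complete Euclidean graph; returns an edge list."""
    n = len(points)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        best = None
        for i in in_tree:                       # cheapest edge leaving the tree
            for j in range(n):
                if j not in in_tree:
                    d = np.linalg.norm(points[i] - points[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

def make_pair(n=30, theta=0.3, seed=0):
    """Synthetic pair in the spirit of Section 4: uniform random vertices,
    MST edges, and a rotated copy; the mapping C is the identity by
    construction."""
    rng = np.random.default_rng(seed)
    VA = rng.random((n, 2))
    EA = minimum_spanning_tree(VA)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    VB = VA @ R.T
    return VA, EA, VB
```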
5 Experiments

5.1 Global matching
Three different implementations of the CPD algorithm were tested: the rigid (CPD-R), the affine (CPD-A) and the non-rigid (CPD-NR) approach. The implementation of the algorithm was taken from one of the authors' personal pages. Comparing its processing times with those of methods written entirely in MATLAB is somewhat unfair, since it is partially programmed in C.

ICRP is also programmed in C, with a small MATLAB interface, so the same caveat applies. All other approaches are partially or totally programmed in MATLAB.

The TPS-RPM implementation was, like the CPD algorithm, obtained from the authors. The initial temperature of the algorithm (see Section 2.3) was taken to be twice the maximum distance between corresponding points. Both ICRP and ICP are publicly available re-implementations of the original algorithms.

The RANSAC algorithm was also obtained from an available implementation and was adapted to point matching, as mentioned before.

The Softassign algorithm was re-implemented from scratch, and may therefore be less optimized than the rest of the algorithms.

The SMAC method presents itself as a graph matching algorithm and does not output a transformation matrix. Therefore, in order to compare the achieved transformation, we added an extra module which calculates the transformation matrix from the obtained correspondences.
5.1.1 Error measurements

Correct correspondences
One of the measurements used to compare the performance of the methods is the number of correct correspondences that each method produces. Let N_C be the number of mappings in C – this value is not necessarily min(|V_A|, |V_B|), since there can be outliers in both graphs. Let N_C^(j) be the number of correct correspondences obtained in experiment j – this value results from comparing the correspondences produced by the method with the mapping C. We use the ratio N_C^(j) / N_C of correct correspondences as a measure of accuracy of the method. This value is observed as a function of the mean distance between corresponding vertices,

    d = 1/N_C sum_{i=1}^{|V_A|} { ||v_i^A - C(v_i^A)||_2   if C(v_i^A) ∈ V_B ;   0   otherwise },    (11)

where v_i^A ∈ V_A, meaning that we count only the distances between v_i^A and C(v_i^A) when there is in fact a mapping of v_i^A to an element in V_B.
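The mean distance of Eq. (11) can be sketched as below; a minimal illustration assuming C is a dictionary from V_A indices to V_B indices, with None marking unmapped vertices (an illustrative convention).

```python
import numpy as np

def mean_corresponding_distance(VA, VB, C):
    """Mean distance d of Eq. (11): only vertices of VA with a counterpart
    in VB contribute; C maps a VA index to a VB index or to None."""
    dists = [np.linalg.norm(VA[i] - VB[j]) for i, j in C.items() if j is not None]
    return sum(dists) / len(dists)
```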
Geometrical error

To measure the accuracy, we evaluate the distance between the correct and obtained correspondences (see Figure 2). The desired Δ^(j) is

    Δ^(j) = 1/N_C sum_{i=1}^{N_C} ||C(v_i^A) - Γ^(j)(v_i^A)||,

where Γ^(j) : V_A → V_B is a mapping similar to C, but containing the correspondences between V_A and V_B found in experiment j.

Figure 2: An example of the measured distance between the original and obtained correspondences. Two trees are depicted in blue and red, with their nodes and edges shown as filled circles and connecting lines.
5.2 Local matching

Figure 3: An example of two graphs in light green and light blue, together with two selected neighborhoods N^1_{b_k} in darker colors
When matching two graphs with a large number of missing or extra nodes, a global approach to establishing correspondences can produce a misleading result. We propose an approach which matches local neighborhoods around branching points, and we use this information to build a global correspondence between the two graphs. In other words, we try to locally match all the neighborhoods around the branching points of the two graphs – N^1_{b_k} and N^1_{b'_{k'}}. An example of two such neighborhoods is presented in Figure 3. First, we need to find or build a method which can provide a robust local similarity measurement. To do so, we conducted a number of experiments to compare different approaches. For each method, a matrix W is built, with size |B| × |B'|, where each element w_{b_k b'_{k'}} is the matching score between the neighborhoods N^1_{b_k} and N^1_{b'_{k'}}. We then perform a classification H', taking the index of the minimum weight for every N^1_{b_k}, where each element h'_{b_k} is

    h'_{b_k} = arg min_{k' ∈ {1, ..., |B'|}} w_{b_k b'_{k'}}.
The classification H' tells us how well the method has performed. However, as discussed before, not all branching points in G may have a correspondence in G', and we therefore only consider the branching points with a correspondence, eliminating the remaining elements. A new classification H is found, with elements

    h_a = {h'_{b_k} ∈ H' | b_k ∈ {c_{1,1}, ..., c_{|C_B|,1}} }.

We define the performance of a method by comparing the values of H with the original correspondences in C_B, as a percentage of correct correspondences

    η = 1/|H| sum_{a=1}^{|H|} δ(h_a, c_{a,2}),

where δ(., .) is Kronecker's delta.
The tested methods were the following:
• Shape context (SC) [17]
• Nearest neighbor matching (NNM) [18]
• Coherent Point Drift (CPD) [8]
• Elastic matching based on Gaussian Processes (EMGP) [19]
Since the methods SC and NNM are by definition local matching techniques, the only modification introduced was to consider only the branching nodes, instead of the entire sets V and V'. For CPD, the output of the method, for each pair of branching points, is a correspondence R between the nodes in the two neighborhoods. The matching score considered was

    w_{b_k b'_{k'}} = sum_{i=1}^{|R|} d(v_{R_{i1}}, v'_{R_{i2}}),

where d(., .) is the Euclidean distance.
As for EMGP, the method provides the position of the inversely transformed set, as well as a geometric evaluation G_{b_k b'_{k'}} which lies in the interval [0, 1], where 0 represents low compatibility and 1 high compatibility. It also provides the length L_{b_k b'_{k'}} of the superedges not considered in the final matching. We additionally consider a measurement D_{b_k b'_{k'}}, which represents the final distance between the original graph N^N_{b_k} and the inversely transformed set T^(-1)(N^N_{b'_{k'}}), calculated as

    D_{b_k b'_{k'}} = sum_{i=1}^{|N^N_{b_k}|} min d( N^N_{b_k}(i), T^(-1)(N^N_{b'_{k'}}) ) + sum_{j=1}^{|N^N_{b'_{k'}}|} min d( T^(-1)(N^N_{b'_{k'}}(j)), N^N_{b_k} ),

where N^N_{b_k}(i) is the node of N^N_{b_k} with index i. In other words, for every node we take the smallest Euclidean distance to the opposite graph, and we sum up all these distances. The weight for matching the two neighborhoods is found through a heuristic formula

    w_{b_k b'_{k'}} = D_{b_k b'_{k'}} . ( 1 + (1 - G_{b_k b'_{k'}}) + L_{b_k b'_{k'}} / (T_{b_k} + T_{b'_{k'}}) ),

where T_{b_k} and T_{b'_{k'}} are the total lengths of the graphs N^N_{b_k} and N^N_{b'_{k'}}, respectively. We also test the performance of each variable D_{b_k b'_{k'}}, L_{b_k b'_{k'}} and G_{b_k b'_{k'}} individually, together with the length L'_{b_k b'_{k'}} of the edges that are in fact considered in the final matching – this does not provide the same results as using L_{b_k b'_{k'}}, because the length of the edges is not the same between neighborhoods.
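The symmetric nearest-distance measure D above can be sketched as follows; a minimal illustration on plain node coordinate arrays, with the inverse transformation assumed to have been applied already.

```python
import numpy as np

def symmetric_min_distance(A, B):
    """Symmetric sum of nearest-neighbour distances between two node sets,
    in the spirit of the D measure: each node of A contributes its smallest
    Euclidean distance to B, and vice versa."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # |A| x |B|
    return d.min(axis=1).sum() + d.min(axis=0).sum()
```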
Figure 4: Correct correspondences and the respective smoothed estimates for different numbers of iterations N of the RANSAC method, each with a mean elapsed time per experiment of Δt: a) N = |x_B|, Δt = 0.95 s; b) N = |x_B|^2, Δt = 324.95 s; c) N = log(1 - p) / log(1 - p_i^s) with p = 1 - 1/|x_B|^s, Δt = 1472.91 s
6 Results

6.1 The RANSAC approach
A special observation has to be made for RANSAC, since it is not included in the remaining results. After a few observations of global results, it was apparent that the RANSAC method was less successful than the other approaches.

As stated before, the algorithm matches s points selected randomly at each iteration from both sets of points. To examine its runtime, we first fix the sample points in one of the sets, x_A, throughout the experiment, and randomly select s points only in x_B. The probability of finding the correct correspondence at any given iteration is 1 / (|x_B|(|x_B| - 1) ... (|x_B| - s + 1)), since we choose s distinct points. In order to find a transformation, we need a minimum of 3 points from both x_A and x_B, so, to minimize the number of iterations needed, we set s = 3. The results for different numbers of iterations are depicted in Figure 4. As we increase the number of iterations, the correct correspondence rate also increases. However, the amount of time needed to achieve a reasonably acceptable correct correspondence rate is too high, and this approach was therefore not included in the remaining results, where we test with the different datasets.
6.2 Results for the two-dimensional datasets

6.2.1 Correspondences
In Figure 5, the dependence of the percentage of correct correspondences on the ratio of mean distance to sampling is depicted.

The results show that the best performance is achieved by CPD-A and CPD-NR. Bear in mind that these are results for the affine dataset. Right after these two, naming the method with the best performance is arguable. For longer distances and tougher transformations, the SMAC algorithm presents the best results, followed by the CPD-R approach. However, for smaller distances between trees the TPS-RPM method presents higher correct correspondence rates. Looking at the percentage of samples with perfect accuracy in Table 1, it is possible to see that TPS-RPM presents the third best result. However, this is not entirely reflected in the distribution plot, due to the relatively uncommon distribution of its results. In Figure 6g, the distribution of result samples is depicted; observing it, we can conclude that most trials achieve either 100% or 0% accuracy, i.e. the method either succeeds or fails completely. The other methods' samples are more evenly distributed.

The remaining approaches present similar results when considering the smoothed estimate. The Softassign algorithm does present a higher number of perfect accuracy results, but it also fails considerably more often when the mean distance between trees is higher.
6.2.2 Geometrical error

In Figure 7, we show the distance between the correct and obtained correspondences Δ^(j) as a function of the initial correspondence distance.
Figure 5: Distribution of correct correspondences for all tested methods

Method      | Affine  | Rotation | Scale   | Shear   | Translation | Crop   | Noisy | Cropnoisy
CPD-A       | 97.8 %  | 82.8 %   | 99.8 %  | 97.5 %  | 99.5 %      | 36.5 % | 0.5 % | 0.5 %
CPD-R       | 56.5 %  | 86.2 %   | 36.2 %  | 34.5 %  | 99.5 %      | 13.5 % | 0.0 % | 0.0 %
CPD-NR      | 94.5 %  | 78.5 %   | 97.2 %  | 95.5 %  | 99.5 %      | 30.5 % | 0.0 % | 0.5 %
ICP         | 43.2 %  | 68.2 %   | 17.5 %  | 29.8 %  | 100.0 %     | 13.2 % | 0.0 % | 0.0 %
ICRP        | 43.0 %  | 55.5 %   | 17.2 %  | 29.0 %  | 66.5 %      | 45.0 % | 0.0 % | 0.0 %
TPS-RPM     | 82.8 %  | 62.2 %   | 79.8 %  | 72.0 %  | 60.8 %      | 11.5 % | 4.0 % | 1.2 %
Softassign  | 66.2 %  | 49.0 %   | 65.8 %  | 54.8 %  | 45.0 %      | 57.2 % | 0.0 % | 0.0 %
SMAC        | 38.0 %  | 66.5 %   | 26.5 %  | 28.5 %  | 73.5 %      | 1.0 %  | 0.0 % | 0.0 %

Table 1: Percentage of trials with a 100% correct correspondence rate for each tested method and each dataset in the 2D case, out of a total of 400 experiments
CPD-A and CPD-NR present the best performance, but also CPD-R achieves very
good results; when the approach does not get 100% correct correspondences, it still
approaches the solution very closely. On the other hand, the TPS-RPM method
presents a very high total distance between correct and estimated correspondences
and this is consistent with the high concentration of trials on the 0% accuracy
of correct correspondences – when the method fails, the estimated transformation
values are very different from the solution.
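Under one plausible reading of the ∆ measure, the geometric error can be computed as the mean distance between the target-tree points selected by the estimated and by the correct correspondences (an illustrative sketch, not the report's code):

```python
import numpy as np

def geometric_error(target_points, estimated, correct):
    """Mean distance between the target-tree points chosen by the
    estimated correspondence and those chosen by the correct one
    (one plausible reading of the report's Delta measure)."""
    est = target_points[np.asarray(estimated)]
    ref = target_points[np.asarray(correct)]
    return np.linalg.norm(est - ref, axis=1).mean()

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
# One of three nodes mismatched by one unit -> mean error 1/3
assert np.isclose(geometric_error(pts, [0, 1, 2], [0, 2, 2]), 1.0 / 3.0)
```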
Figure 6: Correct correspondence rate for a) CPD-R; b) CPD-A; c) CPD-NR; d) SMAC; e) ICP; f) ICRP; g) TPS-RPM; h) Softassign. Each result for each pair of trees is represented by a blue dot and the distribution by a black line.
6.2.3 Transformation components
In Figure 8, the results with respect to each of the affine transformation components are shown. As described before, in each of these datasets only one component of the transformation is used. We show the correct correspondence rate as a term of comparison.

Figure 7: Smoothed estimate of the distance between correct and obtained correspondences ∆(j) for all tested methods
There are some interesting points to be taken from these observations. In Figure 8a, the results for the rotation component are depicted. Here, the results differ somewhat from those for the general transformation: the SMAC approach is the most successful. Since its approach is solely graph matching, the method is impervious to rotation and translation transformations, because the lengths of the edges remain the same. The CPD-R results are actually the second most successful, only then followed by CPD-A. This can be explained by the design of the CPD-R algorithm: it only handles the rotation (and translation) components of the transformation, i.e. a rigid transformation. The results for the translation component also show some interesting values, since the CPD-A, CPD-R and ICP algorithms present solely perfect results.
The SMAC method presents similar results for rotation and translation variations. As mentioned previously, this approach analyses the compatibility between the edges, disregarding the positions of the nodes. This allows the performance of the method to be independent of the rotation and translation components, since these components do not change the properties of the edges. However, the same cannot be said about the shear and scale components: these types of transformations alter the distances between connected nodes, causing problems for the method.
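The edge-length argument above can be checked with a small numerical sketch (a minimal illustration, not the SMAC implementation): pairwise edge lengths survive a rigid transform but not a scaling.

```python
import numpy as np

def edge_lengths(points, edges):
    """Length of each graph edge given node positions."""
    return np.array([np.linalg.norm(points[i] - points[j]) for i, j in edges])

rng = np.random.default_rng(0)
nodes = rng.random((5, 2))           # 5 nodes in 2D
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

theta = 0.7                          # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([3.0, -1.5])            # arbitrary translation

rigid = nodes @ R.T + t              # rotation + translation
scaled = 2.0 * nodes                 # scale component

# Edge lengths survive the rigid transform...
assert np.allclose(edge_lengths(nodes, edges), edge_lengths(rigid, edges))
# ...but not the scaling, which is why SMAC degrades for scale/shear.
assert not np.allclose(edge_lengths(nodes, edges), edge_lengths(scaled, edges))
```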
The shear component results are the most similar to the general transformation case, and also the worst when considering all of the approaches. The scale component results are similar as well; however, the CPD-A results are concentrated entirely at the 100% value. We can therefore conclude that the CPD-A algorithm is negatively affected only by the shear and rotation components.
Figure 8: Correct correspondence rate depending on each individual component: a) rotation; b) scale; c) shear and d) translation of the affine transformation for each of the tested approaches
6.2.4 Nonlinear deformations
In Figure 9, the correct correspondence plot for the noisy set described before is presented. The decrease in performance for all methods is notable. CPD-NR holds the best performance on this dataset; unsurprisingly, it is better than CPD-A, since it takes a non-rigid approach. Nonetheless, the latter also performs well, especially considering that it does not handle non-rigid transformations per se. The remaining methods present similar results relative to one another as in the affine dataset, except for the SMAC method. On the noisy set (Figure 10), this approach sits around the 30% accuracy rate, and on the crop set it is consistently close to the 0% mark. This shows that the method does not handle nonlinear deformations and should only be considered for rigid and occasionally for affine deformations.
In Figure 11, we can see the results for the cropnoisy dataset. This dataset contains nonlinear correlated noise in the transformed tree, as in the noisy set, and some branches were also removed. This constitutes a difficult task and, as expected, the overall performance of the methods decreases. Nonetheless, the objective of this trial was to check the relative performance of the methods and whether they would fail under such constraints. As is visible, the rates decrease severely; however, the order of success of the methods remains similar to the previous experiments.

Figure 9: Correct correspondence rate for the noisy dataset

Figure 10: Correct correspondence rate for the crop dataset
6.3 Results for the three dimensional datasets
6.3.1 Affine dataset
The 3D case, although an extension of the 2D case, presents different constraints and additional problems. Here we present some results using the three dimensional datasets. In Figure 12, we show the correct correspondence rate as a function of the initial mean distance between trees.

Figure 11: Correct correspondence rate for the cropnoisy set

Figure 12: Correct correspondence rate as a function of dj(i)/rs for the 3D general dataset
For two of the methods, the results in the 3D case are rather different: the performance of the TPS-RPM method is considerably higher, while the performance of the SMAC algorithm deteriorates. Nonetheless, CPD-NR and CPD-A remain the best among all methods, with similar accuracy.
In Table 2, the percentage of trials which provided the correct solution for the correspondences is presented. With the improvement of the TPS-RPM algorithm, its percentage of correct solutions is in fact higher than that of CPD-NR. However, the binary distribution of the TPS-RPM results, also observed in the 2D scenario, forces its distribution to lie below the one for CPD-NR.
Method     Affine   Rotation  Scale    Shear    Translation  Crop     Noisy   Cropnoisy
CPD-A      96.8 %   74.2 %    98.5 %   92.2 %   100.0 %      56.8 %   1.5 %   0.8 %
CPD-R      31.5 %   79.5 %    26.8 %   14.0 %   100.0 %      30.8 %   2.0 %   0.8 %
CPD-NR     86.5 %   61.5 %    90.0 %   75.2 %   90.5 %       43.8 %   3.2 %   2.2 %
ICP        27.2 %   63.2 %    16.2 %   12.0 %   100.0 %      13.5 %   0.0 %   0.0 %
ICRP       27.5 %   58.0 %    16.0 %   12.0 %   89.5 %       38.0 %   0.0 %   0.2 %
TPS-RPM    84.2 %   66.2 %    89.0 %   81.5 %   54.2 %       12.5 %   0.2 %   0.0 %
SMAC       8.5 %    34.0 %    4.5 %    6.0 %    31.0 %       0.2 %    0.0 %   0.0 %

Table 2: Percentage of trials with 100% correct correspondence rate for each tested method and each dataset in the 3D case, out of a total of 400 experiments
6.3.2 Transformation components
In Figure 13, the results for the datasets where each transformation is composed of only one of the affine components are presented. Here we can observe the correct correspondence rate as a function of the mean initial distance dj(i)/rs.
These results show distributions very similar to the 2D case. As expected after the discussion in the previous subsection, the results for the TPS-RPM algorithm improve and the results for SMAC deteriorate. It is still evident that most approaches can handle translation transformations. The results for this component suggest that a further increase in performance for TPS-RPM would be possible if the method included a normalization step, i.e. if we cancelled the translation by forcing the same average on the positions of the points, as happens in other approaches. Compared with CPD-A and CPD-NR, the method's correspondence rate is worse only when handling the scale and translation components. Furthermore, it presents percentages of perfect-accuracy trials between those two, except for the translation component.
The ratio between correct correspondences after the final iteration and after the initial one can be observed in Figure ??. In this scenario, we observe all the methods in the same plot, as we are looking at the behavior of each algorithm throughout its iterations, which is independent of the type of transformation each method solves. The importance of such an observation is arguable, but in principle, when an algorithm spends most of its processing time (one of the key factors when deciding upon a good method) on improving the correct correspondence rate, one expects an improvement between the first and last iteration. An example of this expected behavior is observed in the TPS-RPM algorithm: regardless of the accuracy after the first iteration, the accuracy is improved after all its iterations. However, the behavior of ICP, ICRP and Softassign is quite different. The distributions of their samples show that there is little improvement between the end of the first iteration and the final result, which leaves open a discussion about the usefulness of the iterations after the initial estimate.
All CPD algorithms have a similar behavior: they start off with a very inaccurate correct correspondence rate, forcing their distribution to the far left side of the plot. Nonetheless, these algorithms achieve rather successful correct correspondence rates, as seen in previous sections.

Figure 13: Correct correspondence rate depending on each individual component: a) rotation; b) scale; c) shear and d) translation of the affine transformation for each tested approach in the 3D case
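The translation-normalization idea mentioned above, cancelling the translation by forcing the same average position on both point sets, can be sketched as follows (a minimal illustration, not the TPS-RPM implementation):

```python
import numpy as np

def center(points):
    """Cancel the translation component by shifting a point set
    so that its centroid sits at the origin."""
    return points - points.mean(axis=0)

rng = np.random.default_rng(1)
model = rng.random((10, 3))
scene = model + np.array([5.0, -2.0, 0.5])   # purely translated copy

# After centering, the two sets coincide: the translation component
# has been removed before any matching takes place.
assert np.allclose(center(model), center(scene))
```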
6.4 Processing times
The average processing times for each tested method are shown in Table 3 for the 2D case and in Table 4 for the 3D case. The ICRP, SMAC and CPD algorithms are tested with implementations in the C language, while the remaining algorithms are written entirely in MATLAB.
Nonetheless, the processing time for Softassign is clearly high, mainly due to the complex parameter updates in each iteration. Considering the remaining methods, the TPS-RPM and SMAC algorithms present higher processing times. In the case of SMAC, the algorithm has to handle an NxNy × NxNy compatibility matrix, which slows down its performance. CPD-A presents the lowest processing time for most datasets, although all CPD algorithms present similar times throughout the datasets.

2D         Affine    Rotation  Scale     Shear     Translation  Crop      Noisy     Cropnoisy
CPD-A      0.421 s   0.468 s   0.354 s   0.515 s   0.374 s      0.443 s   0.428 s   0.449 s
CPD-R      0.529 s   0.610 s   0.508 s   0.461 s   0.236 s      0.437 s   0.454 s   0.460 s
CPD-NR     0.874 s   1.023 s   0.385 s   0.895 s   0.365 s      2.127 s   2.403 s   1.904 s
ICP        0.472 s   0.970 s   0.892 s   0.845 s   0.384 s      0.967 s   0.604 s   0.662 s
ICRP       1.006 s   0.805 s   1.397 s   1.165 s   2.011 s      1.765 s   0.724 s   1.319 s
TPS-RPM    4.883 s   5.441 s   5.293 s   3.018 s   5.224 s      2.301 s   3.572 s   2.099 s
Softassign 19.163 s  26.987 s  21.775 s  24.065 s  21.954 s     16.610 s  32.174 s  29.923 s
SMAC       3.863 s   4.161 s   4.550 s   4.103 s   1.705 s      1.854 s   5.015 s   1.989 s

Table 3: Average processing time per pair of trees for each tested method on the 2D datasets

3D         Affine    Rotation  Scale     Shear     Translation  Crop      Noisy     Cropnoisy
CPD-A      1.192 s   2.437 s   1.164 s   1.304 s   0.945 s      0.911 s   1.162 s   1.133 s
CPD-R      1.595 s   2.739 s   2.030 s   2.269 s   0.805 s      1.173 s   1.203 s   1.742 s
CPD-NR     3.773 s   13.813 s  2.624 s   5.111 s   1.852 s      3.572 s   5.986 s   11.013 s
ICP        10.326 s  11.990 s  10.174 s  12.205 s  2.814 s      7.099 s   6.730 s   8.099 s
ICRP       42.421 s  44.414 s  57.202 s  70.345 s  0.758 s      21.710 s  31.608 s  36.608 s
TPS-RPM    45.307 s  42.004 s  35.437 s  39.086 s  32.546 s     19.930 s  26.301 s  12.580 s
SMAC       42.327 s  36.806 s  33.660 s  43.992 s  24.575 s     15.690 s  26.331 s  15.510 s

Table 4: Average processing time per pair of trees for each tested method on the 3D datasets

The processing times for the 3D trees are clearly much higher. The inclusion of a third coordinate increases the complexity of the calculations, so this general increase is not surprising. However, the increase is much larger in some approaches than in others; namely, the processing times of the ICRP, SMAC and TPS-RPM methods grow considerably more than the rest.
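To see why the NxNy × NxNy compatibility matrix dominates SMAC's cost, a back-of-the-envelope computation helps (illustrative numbers, not from the experiments):

```python
# Rough cost of SMAC's pairwise compatibility matrix: with Nx nodes in
# one graph and Ny in the other, the matrix has (Nx*Ny)^2 entries.
def compatibility_matrix_bytes(nx, ny, bytes_per_entry=8):
    """Memory needed for a dense (nx*ny) x (nx*ny) matrix of doubles."""
    n = nx * ny
    return n * n * bytes_per_entry

# Two graphs of 100 nodes each already need 800 MB for a dense matrix,
# and the quadratic growth in Nx*Ny drives the running time as well.
assert compatibility_matrix_bytes(100, 100) == 800_000_000
```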
6.5 Local matching
In Table 5, the percentages of η are presented for each method and each synthetic dataset, both for the 2D and 3D scenarios. For the Shape Context method, the results decrease on the dataset where a rotation transformation is applied, as expected. In contrast, the nearest neighbor matching method is near perfect on the same dataset. The cropnoisy dataset, where a B-spline deformation was added and some nodes were removed from both graphs, presents the hardest and most likely the most realistic challenge; as expected, the results are much lower than for the rest of the datasets. Here, the EMGP method presents the best results, especially in the 3D case.
In Table 6, the average elapsed time per pair of graphs for each method is presented. It is visible that the elapsed times for the SC and NNM methods are much lower than for CPD and EMGP.

2D:
Method          main   scale  rot    shear  cropnoisy  crop   noisy
SC              84.6   88.4   62.6   79.6   68.2       70.0   83.3
NNM             81.8   66.4   99.7   65.5   48.5       62.2   57.0
CPD             91.5   92.2   79.8   91.0   76.1       85.1   82.5
EMGP            98.9   83.0   100.0  87.1   85.6       91.8   90.8
EMGP (L'bk b'k) 29.3   44.5   31.5   32.9   15.6       6.9    14.8
EMGP (Dbk b'k)  85.5   99.7   97.5   90.6   23.6       29.9   46.5
EMGP (Lbk b'k)  23.2   70.6   57.3   60.9   14.1       23.5   22.2
EMGP (Gbk b'k)  0.0    0.0    0.0    0.0    2.0        2.1    0.0

3D:
Method          main   scale  rot    shear  cropnoisy  crop   noisy
SC              72.1   89.9   53.2   70.5   69.0       76.3   76.2
NNM             63.2   49.9   92.6   48.9   26.8       45.6   31.7
CPD             77.3   92.9   56.3   76.6   68.8       83.8   72.6
EMGP            94.8   77.3   99.8   76.4   87.4       96.6   90.3
EMGP (L'bk b'k) 39.2   14.9   14.9   27.0   11.5       9.9    22.0
EMGP (Dbk b'k)  83.3   36.4   66.9   85.4   42.3       27.8   63.8
EMGP (Lbk b'k)  70.9   22.3   62.9   75.9   34.3       29.1   56.8
EMGP (Gbk b'k)  0.0    0.0    0.0    0.0    9.4        1.4    7.0

Table 5: Results for the performance of the methods, in percentage

2D:
Method  main   scale  rot    shear  cropnoisy  crop   noisy
SC      0.158  0.157  0.180  0.171  0.126      0.125  0.165
NNM     0.022  0.021  0.023  0.023  0.019      0.019  0.022
CPD     21.2   20.5   24.0   21.3   16.2       16.0   21.5
EMGP    52.7   51.5   56.9   51.2   43.6       44.4   53.6

3D:
Method  main   scale  rot    shear  cropnoisy  crop   noisy
SC      1.026  1.138  1.032  0.962  0.742      0.702  0.871
NNM     0.023  0.021  0.022  0.022  0.020      0.019  0.022
CPD     48.6   42.5   48.9   47.1   34.8       33.9   43.6
EMGP    64.0   59.2   62.5   64.8   49.9       50.2   62.9

Table 6: Elapsed time per pair of graphs for each method, in seconds
7 Conclusions
7.1 Global matching
After experimenting with these algorithms, it is possible to conclude that the most successful algorithm of the group is the Coherent Point Drift method. It is interesting to note that the success of each method in the correct correspondence rate is almost directly correlated with the release date of the algorithms, which is somewhat expected.
A similar hierarchy of success is observed for the error in the transformation, except in the case of TPS-RPM: the method either obtains near-perfect results or fails totally, imposing in the latter case a high error on the transformation, which forces the error distribution to have much higher values than for the other algorithms.
The obtained processing times, although not directly comparable due to the different implementation languages, give an idea of the relative efficiency of each algorithm.
7.2 Local matching
The results show that the elastic matching Gaussian processes approach performs best. However, its measurement is based on a mix of different variables which do not perform as successfully on their own; a more intuitive and understandable measure for matching these neighborhoods would be desirable. Nonetheless, the method proves to perform good local matching, above the rest of the methods.
The processing times are higher for CPD and EMGP, since the operations on each neighborhood are more complex, but these methods also produce considerably better results. For both methods, the matching operations are performed in parallel, since the neighborhoods are independent of each other, which helps reduce the processing time.
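Because each neighborhood match is independent of the others, the per-neighborhood work distributes naively. A minimal sketch of this pattern follows, with a hypothetical match_neighborhood stand-in, not the EMGP implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def match_neighborhood(pair):
    """Hypothetical stand-in for matching one pair of node
    neighborhoods; returns a toy dissimilarity for illustration."""
    a, b = pair
    return abs(a - b)

def match_all(pairs, workers=4):
    # Each neighborhood pair is matched independently of the others,
    # so the loop parallelizes trivially; map preserves input order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(match_neighborhood, pairs))

# Three independent neighborhood pairs matched concurrently
assert match_all([(1, 4), (2, 2), (7, 3)]) == [3, 0, 4]
```

In a real local-matching setting the per-pair work is expensive enough that process-based rather than thread-based parallelism may be preferable.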
References
[1] Nicola Ritter, Robyn Owens, James Cooper, Robert H. Eikelboom, and Paul
P. Van Saarloos. Registration of stereo and temporal images of the retina. IEEE
Transactions on Medical Imaging, 18:404–418, 1999.
[2] Jian Zheng, Jie Tian, Yakang Dai, Kexin Deng, and Jian Chen. Retinal image
registration based on salient feature regions. Proceedings of the International
Conference of IEEE Engineering in Medicine and Biology Society, 2009:102–
105, 2009.
[3] David B. Williams and C. Barry Carter. The Transmission Electron Microscope.
2009.
[4] Karen A. Holbrook and George F. Odland. The fine structure of developing
human epidermis: Light, scanning, and transmission electron microscopy of the
periderm. Journal of Investigative Dermatology, 65:16–38, 1975.
[5] Ali Can, Charles V. Stewart, Badrinath Roysam, and Howard L. Tanenbaum.
A feature-based, robust, hierarchical algorithm for registering pairs of images of
the curved human retina. IEEE Trans. Pattern Anal. Mach. Intell., 24:347–364,
March 2002.
[6] Bin Fang and Yuan Yan Tang. Elastic registration for retinal images based on
reconstructed vascular trees. IEEE Transactions on Biomedical Engineering,
53(6):1183–1187, 2006.
[7] Kexin Deng, Jie Tian, Jian Zheng, Xing Zhang, Xiaoqian Dai, and Min Xu.
Retinal fundus image registration via vascular structure graph matching. Journal of Biomedical Imaging, 2010:14:1–14:13, January 2010.
[8] Andriy Myronenko and Xubo Song. Point set registration: Coherent point drift.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(12):2262–
2275, 2010.
[9] P J Besl and H D Mckay. A method for registration of 3-d shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239–256, 1992.
[10] Tomas Pajdla and Luc Van Gool. Matching of 3-d curves using semi-differential
invariants. In 5th International Conference on Computer Vision, pages 390–
395. IEEE Computer Society Press, 1995.
[11] Steven Gold, Anand Rangarajan, Chien-ping Lu, and Eric Mjolsness. New
algorithms for 2d and 3d point matching: Pose estimation and correspondence.
Pattern Recognition, 31:957–964, 1997.
[12] Haili Chui and Anand Rangarajan. A new point matching algorithm for nonrigid registration. Comput. Vis. Image Underst., 89:114–141, February 2003.
[13] Martin A. Fischler and Robert C. Bolles. Random sample consensus: a
paradigm for model fitting with applications to image analysis and automated
cartography. Commun. ACM, 24:381–395, June 1981.
[14] Timothee Cour, Praveen Srinivasan, and Jianbo Shi. Balanced graph matching. In NIPS, 2006.
[15] Tatsuya Akutsu, Kyotetsu Kanaya, Akira Ohyama, and Asao Fujiyama. Matching of spots in 2d electrophoresis images. point matching under non-uniform
distortions. In Maxime Crochemore and Mike Paterson, editors, CPM, volume
1645 of Lecture Notes in Computer Science, pages 212–222. Springer, 1999.
[16] E. Serradell, P. Glowacki, J. Kybic, F. Moreno-Noguer, and P. Fua. Robust
non-rigid registration of 2d and 3d graphs. Conference on Computer Vision
and Pattern Recognition, 2012.
[17] Serge Belongie, Jitendra Malik, and Jan Puzicha. Shape matching and object
recognition using shape contexts. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 24:509–522, 2001.
[18] Stephan Preibisch, Stephan Saalfeld, Johannes Schindelin, and Pavel Tomancak. Software for bead-based registration of selective plane illumination microscopy data. Nature Methods, 7(6):418–419, 2010.
[19] Eduard Serradell, Francesc Moreno-Noguer, Jan Kybic, and Pascal Fua.
Robust elastic 2d/3d geometric graph matching. SPIE Medical Imaging,
8314(1):831408–831408–8, 2012.
[20] Zhe Chen and Simon Haykin. On different facets of regularization theory. Neural
Computation, 14(12):2791–2846, 2002.
[21] M Leordeanu and M Hebert. A Spectral Technique for Correspondence Problems Using Pairwise Constraints. Computer Vision, 2005. ICCV 2005. Tenth
IEEE International Conference on, 2, 2005.
[22] Otakar Borůvka. O jistém problému minimálním (About a certain minimal problem) (in Czech, German summary). Práce Mor. Přírodověd. Spol. v Brně III, 3, 1926.