Spherical Embeddings for non-Euclidean Dissimilarities
Paper ID 000
Anonymous CVPR submission
Abstract
positions xi for the points in Euclidean space:

X = US ΛS^(1/2)    (2)

where US and ΛS are the eigenvector and eigenvalue matrices of S, respectively.
If S is indefinite, which is often the case, then the objects cannot exist in Euclidean space with the given distances. This does not necessarily mean that the distances are non-metric; metricity is a separate issue. One measure of the deviation from definiteness which has proved useful is the negative eigenfraction (NEF), which measures the fractional weight of the eigenvalues which are negative:
NEF = ( Σ_{λi<0} |λi| ) / ( Σi |λi| )    (3)
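To make the measure concrete, a minimal NumPy sketch of Eqn. 3 (the function name and interface are ours, not the paper's):

```python
import numpy as np

def negative_eigenfraction(S):
    """NEF of Eqn. 3: fractional weight of the negative eigenvalues of a
    symmetric similarity matrix S (e.g. the centred matrix of Eqn. 1)."""
    lam = np.linalg.eigvalsh(S)
    return np.abs(lam[lam < 0]).sum() / np.abs(lam).sum()
```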
1. Introduction
Many pattern recognition problems may be posed by
defining a way of measuring dissimilarities between patterns.
Given a set of samples, the dissimilarity matrix D defines the problem at hand. If the distances are Euclidean
then we can find an equivalent similarity matrix S which is
positive semi-definite. We can identify it with a kernel matrix S ≡ K and use a kernel machine to classify the data.
The kernel then has an implicit Euclidean space associated
with it which can be found by kernel embedding. The similarities in this space are identified with the inner products
between the embedded points. However, for many types
of distance, the similarity matrix S is indefinite and cannot
be associated with a kernel. The objects represented by S
cannot exist as points in a Euclidean space with the given
similarities.
We can measure the non-metricity of the data by counting the number of violations of metric properties. It is very rare to have an initial distance measure which gives negative distances, so we will assume that the distances are all positive. The two measures of interest are then the fraction of triples which violate the triangle inequality (TV) and the degree of asymmetry of the distances (γ):
2. Indefinite spaces
We begin with the assumption that we have measured a
set of dissimilarities or distances between all pairs of patterns in our problem. This is denoted by the matrix D,
where Dij is the distance between i and j. We can define an
equivalent set of similarities by using the matrix of squared distances D′, where D′_ij = Dij². This is achieved by identifying the similarities as −(1/2)D′ and centering the resulting similarities:
S = −(1/2) (I − (1/n)J) D′ (I − (1/n)J)    (1)
γ = Σ_{i≠j} |d̃(i, j) − d̃(j, i)| / |d̃(i, j) + d̃(j, i)|    (4)
where d̃(·, ·) is the dissimilarity scaled so that the average dissimilarity is one.
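A literal NumPy reading of these two measures might look as follows (the helper names, and the symmetrised triangle check, are our own assumptions):

```python
import numpy as np
from itertools import combinations

def asymmetry_gamma(D):
    """Asymmetry coefficient of Eqn. 4, with D scaled to unit mean dissimilarity."""
    Dt = D / D.mean()
    mask = ~np.eye(len(D), dtype=bool)            # sum over i != j, as written in Eqn. 4
    return np.sum(np.abs(Dt - Dt.T)[mask] / np.abs(Dt + Dt.T)[mask])

def triangle_violations(D, tol=1e-12):
    """Fraction of triples violating the triangle inequality (TV);
    D is treated as symmetric for this check."""
    n, bad, total = len(D), 0, 0
    for i, j, k in combinations(range(n), 3):
        for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
            total += 1
            bad += D[a, c] > D[a, b] + D[b, c] + tol
    return bad / total
```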
Previous work has shown that such data can be embedded in a non-Riemannian pseudo-Euclidean space [4]. This space uses the non-Euclidean inner product

<x, y> = x^T M y    (5)
Here J is the matrix of all-ones, and n is the number of
objects. In Euclidean space, this procedure gives exactly
the inner-product or kernel matrix for the points.
If S is positive semi-definite, then the original distances
are Euclidean and we can use the kernel embedding to find
where

M = ( Ip   0
      0  −Iq )
The values of −1 correspond to the ‘negative’ part of the
space. The space has a signature (p, q) with p positive dimensions and q negative dimensions.
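As a sketch of how such data is handled in practice, the following NumPy fragment builds the centred similarity matrix of Eqn. 1 and one reasonable pseudo-Euclidean embedding with signature (p, q); the function name and details are our own assumptions rather than the authors' implementation:

```python
import numpy as np

def pseudo_euclidean_embed(D):
    """Centring (Eqn. 1) followed by an embedding X = U |Lambda|^(1/2),
    with dimensions split by eigenvalue sign to give the signature (p, q)."""
    n = len(D)
    C = np.eye(n) - np.ones((n, n)) / n          # centring matrix I - J/n
    S = -0.5 * C @ (D ** 2) @ C                  # Eqn. 1
    lam, U = np.linalg.eigh(S)
    order = np.argsort(-lam)                     # positive dimensions first
    lam, U = lam[order], U[:, order]
    X = U * np.sqrt(np.abs(lam))
    return X, (int(np.sum(lam > 0)), int(np.sum(lam < 0)))
```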
points on the manifold is not easy; it involves solving a set
of coupled second-order differential equations. There are,
however, manifolds which are non-Euclidean but on which
it is easy to find the geodesics. Two examples are furnished
by the elliptic and hyperbolic manifolds.
This inner-product induces a norm, or distance measure:

|x|² = <x, x> = x^T M x = Σ_{i+} xi² − Σ_{i−} xi²    (6)
We can then write the similarity as

S = US |ΛS|^(1/2) M |ΛS|^(1/2) US^T    (7)

4. Elliptic Geometry
Elliptic geometry is the geometry on the surface of a hypersphere. The hypersphere can be straightforwardly embedded in Euclidean space; for example the embedding of a
sphere in three dimensions is well known:
where the negative part of the space corresponds to the negative eigenvalues of K, and the kernel embedding as

X = US |ΛS|^(1/2)    (8)
x = (r sin u sin v, r cos u sin v, r cos v)^T    (10)
The pseudo-Euclidean embedding therefore reproduces the original distance and similarity matrices exactly. However, it introduces a number of other problems. The embedding space is non-metric, and points in the space can have negative distances to each other; locality is not preserved in such a space. In order to overcome these problems, we would like to embed the points in a space with a metric distance measure which nevertheless produces indefinite similarity matrices; this means that the space must be curved.
This embedding implies a particular metric tensor:

ds² = dx² + dy² + dz² = r² sin²v du² + r² dv²    (11)
The embedding of an (n − 1)-dimensional sphere in n-dimensional space is a straightforward extension of this. We can define the surface implicitly using the constraint

Σi xi² = r²    (12)
3. Riemannian Manifolds
For the hypersphere to be a Riemannian space, we should
have a positive definite metric tensor g. This is equivalent
to the statement ds2 > 0 for any infinitesimal movement in
the surface. We have
ds² = Σi dxi²    (13)
In this paper, we use Riemannian manifolds to embed a
set of objects. On the manifold, distances are measured by
geodesics (the shortest curve between points), so we will
employ manifolds where the geodesics are easy to compute. The manifold must also be curved in order to produce
an indefinite similarity matrix. Two prime candidates for
the embedding are the elliptic manifold and the hyperbolic
manifold.
An n-dimensional Riemannian space is defined by
its metric tensor gij in some local coordinate system
{u1 , u2 . . . un }. This can be related to an infinitesimal distance element in the space by
ds² = Σ_{ij} gij dui duj    (9)
which clearly must be positive for any values of dxi . This
surface is curved and has a constant sectional curvature of
K = 1/r2 everywhere.
The geodesic distance between two points in curved
space is the length of the shortest curve lying in the space
and joining the two points. For an elliptic space, the
geodesic is a great circle on the hypersphere. The distance
is the length of the arc of the great circle which joins the two
points. If the angle subtended by two points at the centre of
the hypersphere is θij , then the distance between them is
The metric tensor must be positive definite, and any metric tensor defines a particular Riemannian space. However,
this definition is not always the most convenient - alternatively we can define a manifold as a subspace embedded
in a larger flat (Euclidean or pseudo-Euclidean) space. The
embedding then implies a particular metric for the space.
Note that the converse is not true; there may be many different embeddings with the same metric and therefore the
same Riemannian space. Finally, the geodesic distance in a
Riemannian space is a metric distance, but in general it is
non-Euclidean. However, finding the geodesic between two
dij = rθij
(14)
With the coordinate origin at the centre of the hypersphere,
we can represent a point by a position vector xi of length r.
Since the inner product is < xi , xj >= r2 cos θij we can
also write
dij = r cos⁻¹( <xi, xj> / r² )    (15)
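For illustration, a small NumPy helper (our naming) that evaluates Eqn. 15 for points stored as the rows of X and assumed to lie on a hypersphere of radius r:

```python
import numpy as np

def elliptic_distances(X, r):
    """Great-circle distances of Eqn. 15 from the Euclidean inner products."""
    G = X @ X.T                                   # inner products <x_i, x_j>
    cos_theta = np.clip(G / r**2, -1.0, 1.0)      # guard against rounding outside [-1, 1]
    return r * np.arccos(cos_theta)
```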
The elliptic space is metric but clearly not Euclidean. It
is therefore a good candidate for representing points which
where Zij = r² cos(dij / r). Since the embedding space has dimension n − 1, X consists of n points of dimension n − 1 and Z should then be an n by n matrix which is positive semi-definite with rank n − 1. In other words, Z should have a single zero eigenvalue, with the rest positive. We can use this observation to determine the radius of curvature. Given a radius r and a distance matrix D, we can construct Z(r) and find the smallest eigenvalue λ0. By minimising the magnitude of this eigenvalue, we can find the optimal radius:

r* = arg min_r |λ0[Z(r)]|    (18)
In practice we locate the optimal radius via search. The smallest eigenvalue can be determined efficiently using the power method, without the expense of a full eigendecomposition. Given the optimal radius, the embedding positions are determined via the full eigendecomposition:
Figure 1. Negative eigenvalue fraction of points on an elliptic surface
Z(r*) = UZ ΛZ UZ^T    (19)
produce indefinite kernels. The first question we wish to
answer is, to what extent do the points in a curved space
produce indefinite similarities? To answer this question, we
have constructed the similarity matrices of points in these
spaces. The points are generated via a parameterisation
(Eqn 10) and drawing points from the parameters via a normal distribution. The indefinite nature of the similarity can
be characterised by the negative eigenfraction (NEF) (Eqn.
3)
Figure 1 shows the NEF for points on a unit hypersphere
with varying standard deviation. The curved manifold produces significant negative eigenfraction, up to 24%.
X = UZ ΛZ^(1/2)    (20)
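The procedure of Eqns. 18-20 can be sketched as follows; a simple bounded search replaces the power method, and the search range is a heuristic of ours rather than anything prescribed by the paper:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def elliptic_Z(D, r):
    """Z(r) with Z_ij = r^2 cos(d_ij / r)."""
    return r**2 * np.cos(D / r)

def embed_elliptic(D):
    """Radius search of Eqn. 18 followed by the embedding of Eqns. 19-20."""
    d_max = D.max()
    res = minimize_scalar(
        lambda r: abs(np.linalg.eigvalsh(elliptic_Z(D, r))[0]),   # smallest eigenvalue
        bounds=(d_max / np.pi, 10.0 * d_max), method="bounded")   # heuristic search range
    r = res.x
    lam, U = np.linalg.eigh(elliptic_Z(D, r))
    # Residual negative eigenvalues are handled by the correction of Section 4.2;
    # here we simply use |Lambda|^(1/2).
    X = U * np.sqrt(np.abs(lam))
    return X, r
```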
If the points truly lie on a hypersphere, then this is sufficient. In general, however, this is not the case: the optimal smallest eigenvalue λ0 will be less than zero, and there will be residual negative eigenvalues. The embedding is then onto a 'hypersphere' of radius r, but embedded in a pseudo-Euclidean space. In order to obtain points on the hypersphere, we must correct the recovered points. The traditional method in kernel embedding is to discard the negative eigenvalues; here that will not suffice, as it would change the lengths of the vectors and violate the constraint of Eqn. 12. In the next section we present a solution to this problem.
4.1. Embedding in Elliptic space
Given a distance matrix D, we wish to find the set of points in an elliptic space which produce the same distance matrix. Since the curvature of the space is unknown, we must additionally find the radius of the hypersphere. We have n objects of interest, and therefore we would normally look for an (n − 1)-dimensional Euclidean space. Since we have freedom to set the curvature, we must look for an (n − 2)-dimensional elliptic space embedded in the (n − 1)-dimensional Euclidean space.
We begin by constructing a space with the origin at the
centre of the hypersphere. If the point positions are given
by xi , i = 1 . . . n, then we have
<xi, xj> = r² cos θij = r² cos(dij / r)    (16)
Next, we construct the matrix of point positions X, with each position vector as a column. Then we have

XX^T = Z    (17)

4.2. Approximation of the points

For a general set of distances, the points do not lie on a hypersphere, and need correction to lie on the manifold. We pose the problem as follows: the task is to find a point-position matrix X on the elliptic manifold which minimises the Frobenius distance to the Euclidean-equivalent matrix Z. Given the manifold radius r, determined by the method in the previous section, we begin with the normalised matrix Ẑ = Z/r². The problem is then
min_X |XX^T − Ẑ|   subject to   xi^T xi = 1    (21)
This can be simplified by observing in the usual way that the Frobenius norm is invariant under an orthogonal similarity transform, so given Ẑ = UΛU^T, we apply U as an orthogonal similarity transform to get

min_X |U^T XX^T U − Λ|    (22)
which has a solution X = UD, where D is some diagonal matrix, giving

min_D |D² − Λ|    (23)
As before, there is a particular metric tensor associated with this embedding:

ds² = dx² + dy² − dz² = r² sinh²v du² + r² dv²    (28)
Of course, D² = Λ is a solution if all the eigenvalues are positive, and this is the case if the points lie precisely on a hypersphere. In the general case, there will be negative eigenvalues and we must find a minimum of the constrained optimisation problem. Let d be the vector of squared diagonal elements of D, i.e. di = D²_ii, and let U^s be the matrix of squared elements of U, U^s_ij = U²_ij. Then we can write the problem as
min_d (d − λ)^T (d − λ)   subject to   di > 0,   U^s d = 1    (24)
It can be shown that ds2 ≥ 0 and so the hyperbolic surface
is Riemannian. If there is more than one negative dimension, the surface is no longer Riemannian as it is possible to
obtain ds2 < 0. The hyperbolic space is therefore restricted
to any number of positive dimensions but just one negative
dimension.
Finally, the sectional curvature of this space, as with the
hypersphere, is constant everywhere. In this case, the curvature is negative and given by K = −1/r2 . For the hyperbolic space, the geodesic is the analogue of a great circle.
While the notion of angle in Euclidean space is intuitive, it
is less so in pE space. However, we can define such a notion
from the inner product. The inner product is defined as
<xi, xj> = Σk xik xjk − zi zj    (32)
         = −|xi| |xj| cosh θij    (33)

4.3. Hyperbolic geometry
As we previously observed, the pseudo-Euclidean (pE)
space has been used to embed points derived from indefinite
kernels. The pE space is clearly non-Riemannian as points
may have negative distances to each other. However, it is
still possible to define a sub-space which is Riemannian.
As an example, take the 3D pE space with a single negative
dimension (z) and the ‘sphere’ defined by
which in turn defines the notion of hyperbolic angle. From
this angle, the distance between two points in the space is
dij = rθij
(35)
With the coordinate origin at the centre of the hypersphere, we can represent a point by a position vector xi of length r. Since the inner product is <xi, xj> = −r² cosh θij, we can also write

dij = r cosh⁻¹( −<xi, xj> / r² )    (36)
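Analogously to the elliptic case, a small helper (our naming) that evaluates Eqn. 36 for points (X, z) assumed to satisfy the hyperboloid constraint:

```python
import numpy as np

def hyperbolic_distances(X, z, r):
    """Geodesic distances of Eqn. 36 from the pseudo-Euclidean inner products (Eqn. 32)."""
    G = X @ X.T - np.outer(z, z)
    ratio = np.clip(-G / r**2, 1.0, None)         # guard against rounding below 1
    return r * np.arccosh(ratio)
```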
Figure 2 shows the NEF for points on a unit hyperbolic
surface with varying standard deviation. The curved manifold produces significant negative eigenfraction, up to 16%.
This space is called hyperbolic. This surface has a parameterisation given by

x = (r sin u sinh v, r cos u sinh v, r cosh v)^T    (27)
The metric tensor is positive definite, and so the surface is
Riemannian and distances measured on the surface are metric, even though the embedding space is non-Riemannian.
We can extend this hyperbolic space to more dimensions.
Firstly, we take the case when there is just one negative dimension, z in the embedding space.
Σi xi² − z² = −r²    (30)
It only remains then to find the value of α which satisfies
the first constraint and minimises the criterion. Since the
criterion is quadratic, the solution is simply given by the
largest value of α for which the first constraint is satisfied.
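A sketch of this correction step under our reading of Eqns. 24-25 (U and λ are the eigenvectors and eigenvalues of the normalised matrix Ẑ; the function name and tolerance are ours):

```python
import numpy as np

def correct_to_sphere(U, lam, eps=1e-12):
    """Choose the largest feasible alpha in d = 1 + alpha*(lam - 1) so that every
    d_i stays positive, then rescale the embedding accordingly."""
    shrink = lam < 1.0                               # entries that decrease with alpha
    if np.any(shrink):
        alpha = min(1.0, np.min((1.0 - eps) / (1.0 - lam[shrink])))
    else:
        alpha = 1.0                                  # all eigenvalues >= 1: d = lam is feasible
    d = 1.0 + alpha * (lam - 1.0)
    return U * np.sqrt(d)                            # rows lie (approximately) on the unit hypersphere
```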
<x, x> = x² + y² − z² = −r²    (26)
and so the metric tensor is

g = r² ( sinh²v  0
         0       1 )    (29)
While this is a quadratic problem, and can be solved by quadratic programming, the solution actually has a simple form, which can be found by noting that the matrix U^s should have rank n − 1 and hence one singular value equal to zero. First we make the following observations: the vector of eigenvalues λ is an absolute minimiser of this problem, i.e. d = λ minimises the Frobenius norm and satisfies constraint 2, but not constraint 1. Secondly, d = 1 satisfies both constraints, since Σi U²_ij = 1 (as U is orthogonal). These observations, and the fact that U^s is of rank n − 1, mean that the general solution to the second constraint is

d = 1 + α(λ − 1)    (25)
negative dimension. We then have, as before
min_d (d − λ)^T (d − λ)   subject to   d1 < 0,   di > 0 for i ≠ 1,   U^s d = −1    (41)
Now we have a global minimiser of d = λ which satisfies
the final constraint and a second solution of the constraint is
given by d = −1. We must therefore find the optimal value
for α in
d = −1 + α(λ + 1)
(42)
The solution is more complicated than in the elliptical case,
due to the constraint d1 < 0. This means that it is possible
that there is no solution. However, in all cases we have
examined, there is a set of solutions. If a solution exists, the
optimal point will lie on one of the two boundaries of the
feasible region.
Figure 2. Negative eigenvalue fraction of points on a hyperbolic
surface
4.4. Embedding in Hyperbolic space
In hyperbolic space, we have

<xi, xj> = −r² cosh θij = −r² cosh(dij / r)    (37)
5. Results

We have applied our spherical embedding techniques to three datasets which give rise to indefinite similarities: the Chickenpieces data [5, 1], the CoilYork data [6] and the DelftGestures set [3].
with the inner product defined by Eqn. 5. Constructing Z as before, we get

X M X^T = Z    (38)
5.1. Chickenpieces
The Chickenpieces data is a useful set for the study of
indefinite similarities, because there is a set of parameters
which can be varied to change the level of indefiniteness.
In this case, we use a cost of 45 and varying values of L.
Figure 3 shows the negative eigenfraction (NEF) of the data,
which is a measure of indefiniteness. We also measure the
residual NEF of the matrix Z, which measures the negative
eigenvalues of Z not explained by the spherical embedding.
For the elliptic embedding, this is precisely Eqn. 3 applied
to Z, but for the hyperbolic embedding we ignore the largest
negative eigenvalue in the top sum, which is allowed in this
geometry.
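For reference, the residual NEF as described here might be sketched as (function name ours):

```python
import numpy as np

def residual_nef(Z, hyperbolic=False):
    """Eqn. 3 applied to Z; for the hyperbolic embedding the single permitted
    negative eigenvalue (the most negative one) is left out of the numerator."""
    lam = np.linalg.eigvalsh(Z)                    # ascending order
    neg = lam[lam < 0]
    if hyperbolic and neg.size:
        neg = neg[1:]                              # drop the most negative eigenvalue
    return np.abs(neg).sum() / np.abs(lam).sum()
```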
The asymmetry of the data varies with L, but reaches a maximum of 0.08 for L = 40, and the fraction of triangle violations is less than 0.01% in all cases. The data is therefore highly non-Euclidean, but only very slightly non-metric.
Figure 4 shows the two-dimensional kernel embedding
of the points (i.e. the largest positive dimension of the
pseudo-Euclidean embedding) and two views of the elliptic embedding. The point distributions are quite different,
particularly for the red class.
Secondly, we used the 1-NN classifier to measure the
classification accuracy on the data. The 1-NN is chosen
to give a comparison between the original embeddings and
Again we have an embedding space of dimension n − 1,
but Z is no longer positive semi-definite. In fact, Z should
have precisely one negative eigenvalue (since the hyperbolic space has just one negative dimension) and again a
single zero eigenvalue. We must now minimise the magnitude of the second smallest eigenvalue:
r* = arg min_r |λ1[Z(r)]|    (39)
The embedded positions become

X = UZ |ΛZ|^(1/2)    (40)
As with the elliptic embedding, in general the points do
not lie on the embedding space and there will be residual
negative eigenvalues.
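A sketch mirroring the elliptic case, under our reading of Eqns. 37-40 (the search range and helper names are our assumptions):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def hyperbolic_Z(D, r):
    """Z(r) with Z_ij = -r^2 cosh(d_ij / r), following Eqn. 37."""
    return -r**2 * np.cosh(D / r)

def embed_hyperbolic(D):
    """Radius search of Eqn. 39 followed by the embedding of Eqn. 40."""
    d_max = D.max()
    res = minimize_scalar(
        lambda r: abs(np.linalg.eigvalsh(hyperbolic_Z(D, r))[1]),  # second smallest eigenvalue
        bounds=(d_max / 20.0, 10.0 * d_max), method="bounded")     # heuristic range, avoids cosh overflow
    r = res.x
    lam, U = np.linalg.eigh(hyperbolic_Z(D, r))
    X = U * np.sqrt(np.abs(lam))       # residual negatives need the correction of Section 4.5
    return X, r
```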
4.5. Approximation of the points
A similar procedure may be applied for hyperbolic space as for the elliptic space. Exactly one of the di must be negative (the one corresponding to the most negative element of λ). Let d1 be the component of d corresponding to the
Figure 3. NEF of the original data and of the similarity matrices Z
for the ChickenPieces data
NEF       Res. Elliptic       Res. Hyperbolic
0.258     0.129               0.0099
Table 1. NEF for the original data and embeddings of the CoilYork data.
spherical space, because it is one of very few classifiers which do not rely either explicitly or implicitly on an underlying manifold. For comparison purposes, we also show results for two Euclidean embeddings: the positive subspace ('Pos Sp'), which is the kernel embedding discarding negative eigenvalues, and the absolute space ('Abs sp'), which is the kernel embedding where negative eigenvalues are instead treated as positive. The error-rates, estimated using 10-fold cross-validation, are shown in Figure 5.
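As an indication of the protocol, a rough sketch of a 1-NN error estimate by 10-fold cross-validation working directly from a distance matrix (our own helper, not the authors' code):

```python
import numpy as np

def knn1_cv_error(D, labels, n_folds=10, seed=0):
    """Estimate the 1-NN error rate from a distance matrix D by k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    idx = rng.permutation(len(labels))
    errors = 0
    for test in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, test)
        nn = train[np.argmin(D[np.ix_(test, train)], axis=1)]   # nearest training neighbour
        errors += np.sum(labels[nn] != labels[test])
    return errors / len(labels)
```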
5.2. CoilYork
The CoilYork data is a set of dissimilarity measurements
between four objects from the COIL database. Firstly, feature points are extracted from the object images, and then
these points are used to construct a graph for each image[6].
The graph matching algorithm of Gold and Rangarajan [2]
is then used to generate a dissimilarity score between the
graphs. Table 1 summarises the indefiniteness of the data
and the two embeddings. The asymmetry coefficient is
0.009 and there is just one triangle violation from more than
23 million triples.
In Figure 6, we show learning curves for the CoilYork
data. These error-rates are obtained using 10-fold cross-validation on training sets of varying sizes. As before, 'Pos
Sp’ is the positive subspace and ‘Abs sp’ is the absolute
space.
5.3. DelftGestures
The DelftGestures are a set of dissimilarities generated
from a sign-language interpretation problem. The gestures
Figure 4. The 2D kernel embedding and two views of the 2D elliptic embedding of the Chickenpieces data with L=25
Figure 7. Estimated error rate for the DelftGestures data

Figure 5. Estimated error rates for the Chickenpieces data
Learning Curves

N      Orig. Dist   Elliptic   Hyperbolic   Pos Sp   Abs Sp
1      0.75         0.75       0.75         0.75     0.75
10     0.619        0.613      0.616        0.613    0.652
20     0.556        0.547      0.541        0.554    0.605
40     0.473        0.465      0.479        0.491    0.567
60     0.438        0.421      0.416        0.46     0.554
100    0.387        0.363      0.371        0.417    0.528
140    0.348        0.332      0.327        0.394    0.515
180    0.316        0.305      0.302        0.372    0.511
220    0.29         0.272      0.277        0.357    0.504
287    0.233        0.236      0.233        0.357    0.504
fore, ‘Pos Sp’ is the positive subspace and ‘Abs sp’ is the
absolute space.
6. Conclusion
References
[1] G. Andreu, A. Crespo, and J. Valiente. Selecting the toroidal self-organizing feature maps (TSOFM) best organized to object recognition. In ICNN'97, pages 1341–1346, 1997.
[2] S. Gold and A. Rangarajan. A graduated assignment algorithm for graph matching. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18:377–388, 1996.
[3] J. Lichtenauer, E. A. Hendriks, and M. J. T. Reinders. Sign language recognition by combining statistical DTW and independent classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30:2040–2046, 2008.
Figure 6. Estimated error rate for the CoilYork data
NEF       Res. Elliptic       Res. Hyperbolic
0.309     0.214               0.022
Table 2. NEF for the original data and embeddings of the DelftGesture data.

[4] E. Pekalska and R. P. W. Duin. The dissimilarity representation for pattern recognition. World Scientific, 2005.
[5] E. Pekalska, A. Harol, R. P. W. Duin, B. Spillmann, and H. Bunke. Non-Euclidean or non-metric measures can be informative. In SSPR/SPR, pages 871–880, 2006.
are measured by two video cameras observing the positions of the two hands in 75 repetitions of creating 20 different signs. The dissimilarities are computed using a dynamic time warping procedure on the sequence of positions [3]. Table 2 shows the NEF for the data and the two embeddings. The data is completely symmetric, and the fraction of triangle violations is 4 × 10⁻⁶, so for all normal purposes the data is metric but significantly non-Euclidean.
In Figure 7, we show learning curves for the DelftGestures data. These error-rates are obtained using 10-fold
cross-validation on training sets of varying sizes. As be-
[6] B. Xiao and E. R. Hancock. Geometric characterisation of graphs. In ICIAP 2005, LNCS 3617, pages 471–478, 2005.