Compact Signatures
for High-speed Interest Point
Description and Matching
Vision Seminar
2009. 8. 21 (Fri)
Young Ki Baik
Computer Vision Lab.
References
Descriptor
 Compact Signatures for High-speed Interest Point Description and Matching
 M. Calonder, V. Lepetit, P. Fua et al. (ICCV 2009, oral)
 Keypoint Signatures for Fast Learning and Recognition
 M. Calonder, V. Lepetit, P. Fua (ECCV 2008)
Classification
 Fast Keypoint Recognition Using Random Ferns
 M. Ozuysal, M. Calonder, V. Lepetit, P. Fua (PAMI 2009)
 Keypoint Recognition Using Randomized Trees
 V. Lepetit, P. Fua (PAMI 2006)
Outline
 Previous works
 Randomized tree
 Fern classifiers
 Signatures
 Proposed algorithm
 Compact signatures
 Experimental results
 Conclusion
Problem statement
 Classification (or Recognition)
 Assumption
• Number of classes = N
• Appearance of an image patch = p
• New patches p are obtained from the input image
What is the class of p?
Problem statement
 Classification (or Recognition)
 Conventional approaches
• Compute the distance from p to every class template
• Select the nearest-neighbor class as the solution
[Figure: sum-of-squared-differences comparison, e.g. ∑(p − t1)² = 1, ∑(p − t2)² = 10, ∑(p − t3)² = 11 → nearest neighbor = class 1]
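The nearest-neighbor matching above can be sketched in a few lines; the templates and query patch below are hypothetical random stand-ins for the class patches.

```python
import numpy as np

# Hypothetical 4x4 class templates; in practice these are reference patches.
rng = np.random.default_rng(0)
templates = rng.random((3, 4, 4))             # N = 3 classes
p = templates[1] + 0.01 * rng.random((4, 4))  # a noisy view of class 1

# Sum of squared differences between p and every class template.
ssd = [float(np.sum((p - t) ** 2)) for t in templates]

# The nearest-neighbor class is the one with the smallest SSD.
nn_class = int(np.argmin(ssd))
```

Computing every distance scales linearly in the number of classes, which is exactly why faster classifiers are needed.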
Problem statement
 Classification (or Recognition)
 Advanced approaches (SIFT, …)
• Compute a descriptor (SIFT, SURF, …) for each patch and match
• Handle view (rotation, scale, affine) and illumination changes
[Figure: descriptor distance ∑(d(p) − d(q))² = ? (the bottleneck!)]
When descriptors must be computed for many input patches, a bottleneck occurs.
Randomized tree
Randomized tree
 A simple classifier f
 Randomly select pixel positions a and b in the image patch
 Simple binary test
• The test compares the intensities of two pixels around the keypoint:
f(p) = 1 if I(a) < I(b), 0 otherwise, where I(*) = intensity at pixel position *
Invariant to any monotonically increasing illumination change
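A minimal version of this binary test, assuming the convention f = 1 when I(a) < I(b) (the slides only require some fixed pairwise comparison):

```python
import numpy as np

def binary_test(patch, a, b):
    """One node test: compare the intensities at two fixed pixel positions.

    The direction of the inequality is an assumed convention.
    """
    return 1 if patch[a] < patch[b] else 0

patch = np.array([[10, 200],
                  [30,  40]])
bit = binary_test(patch, (0, 0), (0, 1))  # I(0,0)=10 < I(0,1)=200 -> 1

# Any monotonically increasing intensity change leaves the bit unchanged.
brighter = patch * 2 + 50
same_bit = binary_test(brighter, (0, 0), (0, 1))
```

Because only the order of the two intensities matters, any monotonic brightness change preserves every test outcome.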
Randomized tree
 Initialization of RT with classifier f
 Define the tree depth d and build a binary tree of tests
 Each leaf holds an N-dimensional vector of class probabilities
[Figure: depth-3 binary tree with internal test nodes f0–f6; each test outcome (0/1) selects a branch, and each leaf stores an N-dim probability vector]
Number of leaves = 2^d, where d = total depth
Randomized tree
 Training RT
 Generate patches to cover image variations
(scale, rotation, affine transform, …)
Randomized tree
 Training RT
 Run training over all generated patches
 Update the probabilities stored at the leaves
[Figure: a training patch is dropped down the tree (tests f0, f1, f2) and the N-dim histogram at the leaf it reaches is updated]
Randomized tree
 Classification with the trained RT
 Run classification on input patches
 Read out the probability vector at the leaf each patch reaches
[Figure: an input patch is dropped down the trained tree and the N-dim probability vector at its leaf gives the class distribution]
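The training and classification steps above can be sketched as a minimal randomized tree; the pixel-pair tests, depth, and patch size are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, size = 5, 4, 8                  # classes, tree depth, patch side length

# One random pixel-pair test per internal node (heap indexing: root = 1).
node_tests = {n: (tuple(rng.integers(0, size, 2)), tuple(rng.integers(0, size, 2)))
              for n in range(1, 2 ** d)}
leaves = np.zeros((2 ** d, N))        # one N-dim class histogram per leaf

def leaf_index(patch):
    """Drop the patch down the tree; each node's binary test picks a branch."""
    n = 1
    for _ in range(d):
        a, b = node_tests[n]
        n = 2 * n + (1 if patch[a] < patch[b] else 0)
    return n - 2 ** d                 # leaf number in 0 .. 2^d - 1

def train(patch, label):
    leaves[leaf_index(patch), label] += 1   # update the leaf's class histogram

def classify(patch):
    return int(np.argmax(leaves[leaf_index(patch)]))

# Train on one view per class, then recognise the same views.
prototypes = rng.random((N, size, size))
for c, proto in enumerate(prototypes):
    train(proto, c)
```

A real RT trains on many warped views per class so that the leaf histograms approximate class probabilities, but the lookup mechanics are the same.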
Randomized tree
 Random forest
 Multiple RTs (a random forest) are used for robustness.
Randomized tree
 Random forest
 The final probability is the sum of the probabilities from each RT.
[Figure: the leaf probabilities of the individual trees are added to give the final probability]
Randomized tree
 Pros.
 Easily handles multi-class problems.
 Easily covers large perspective and scale variations.
 Classifier training is time-consuming, but recognition is very fast and robust.
 Cons.
 Memory requirement is high.
Fern classifier
Fern classifier
 Randomized tree
[Figure: a depth-3 randomized tree with a distinct test f0–f6 at each internal node]
Fern classifier
 Modified randomized tree
 At each depth, the same classifier f is used
[Figure: depth-3 tree where depth 1 uses f0, depth 2 uses f1 at both nodes, and depth 3 uses f2 at all four nodes]
Fern classifier
 Fern classifier (randomized list)
 The modified RT and the randomized list (Fern) are equivalent
[Figure: the d test outcomes (f0, f1, f2) form a d-bit index into a list of 2^d entries, each an N-class probability vector]
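Because every node at a given depth shares the same test, a fern needs only d tests, and their outcomes concatenate into a d-bit leaf index. A minimal sketch with hypothetical pixel-pair tests:

```python
import numpy as np

rng = np.random.default_rng(2)
d, size = 3, 8                        # fern depth, patch side length

# One pixel-pair test per depth (d tests in total, not 2^d - 1 as in a tree).
pairs = [(tuple(rng.integers(0, size, 2)), tuple(rng.integers(0, size, 2)))
         for _ in range(d)]

def fern_index(patch):
    """Concatenate the d binary test outcomes into a 2^d-way leaf index."""
    idx = 0
    for a, b in pairs:
        idx = (idx << 1) | (1 if patch[a] < patch[b] else 0)
    return idx

patch = rng.random((size, size))
idx = fern_index(patch)               # a value in 0 .. 2^d - 1
```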
Fern classifier
 Fern classifier (randomized list)
[Figure: side-by-side comparison of the tree form and the equivalent list form]
Fern classifier
 Multiple fern classifiers (training)
 Initialize each fern classifier with depth d and N classes
Fern classifier
 Multiple fern classifiers (training)
 Update the probabilities of the fern classifiers for every reference image
[Figure: each fern maps the training patch to a binary code (here the leaf indices 4, 6, and 7) and the corresponding leaf histograms are updated]
Fern classifier
 Multiple fern classifiers (training)
 Establish the trained fern classifiers
Fern classifier
 Multiple fern classifiers (recognition)
 Apply the new patch to each fern and sum the responses to obtain the final probability.
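Recognition over several trained ferns then reduces to one table lookup per fern plus a sum; the leaf tables below are hypothetical stand-ins for trained ferns.

```python
import numpy as np

rng = np.random.default_rng(3)
J, d, N = 3, 4, 5                     # number of ferns, depth, classes

# Hypothetical trained leaf tables: one (2^d x N) probability table per fern,
# each row normalized to sum to 1.
tables = rng.random((J, 2 ** d, N))
tables /= tables.sum(axis=2, keepdims=True)

def recognize(leaf_indices):
    """Sum each fern's leaf response; the argmax gives the class.

    leaf_indices[j] is the leaf the patch reached in fern j (in practice it
    comes from that fern's binary tests).
    """
    final = sum(tables[j, leaf_indices[j]] for j in range(J))
    return int(np.argmax(final)), final

cls, final = recognize([0, 5, 9])
```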
Fern classifier
 Pros.
 Same as the pros of the randomized tree.
 Only a small number of tests f are used.
 Fast and easy to implement relative to RT.
 Cons.
 Still takes a lot of memory…
Signature
Signature
 Assumption
 Definition
• A trained classifier C applied to a patch p yields a response C(p)
Signature
 Assumption
 Definition
• If the classifier C has been trained well, its response to a deformed patch is nearly the same as its response to the original one.
Signature
 Assumption
 The input patch q is not a member of any training class.
 Definition of signature
• The signature of q is the response of classifier C, i.e. C(q).
Signature
 Assumption
 If an input patch q′ shows the same point as q, then the signatures of q and q′ are almost the same.
 C(·) can therefore generate the signature of q.
 The signature of q (= C(q)) can serve as a descriptor.
Signature
 Sparse signature
 q is not a member of the training set K.
 The response C(q) is therefore not an exact probability; it defines the signature S(q).
 Sparsify the signature values with a user-defined threshold.
 The fern classifier is thus used as a descriptor generator (playing the role of SIFT).
 The sparse signature of q is the descriptor of q.
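A sparse signature can be sketched as simple thresholding of the classifier response; the response vector and threshold value below are hypothetical.

```python
import numpy as np

# Hypothetical classifier response C(q): an N-dim score vector, N = 500.
rng = np.random.default_rng(4)
response = rng.random(500)

# Keep only responses above a user-defined threshold; zero out the rest.
tau = 0.95
signature = np.where(response > tau, response, 0.0)
```

The zeros make the descriptor sparse, so matching only has to touch the few non-zero entries.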
Compact Signature
Compact Signature
 Purpose
 Reduce the memory size of the classifiers with no loss of matching performance!
[Figure: signature generation: J ferns F1 … FJ, each with 2^d leaves of N-dim vectors; the fern responses are summed and thresholded th(·) to produce the N-dim signature]
Compact Signature
 Approach #1 (dimension reduction)
 To reduce the memory size, apply a Random Ortho-Projection (ROP) matrix that maps N-dim vectors to M dims (N ≫ M)
 Definition of compact signature
• The ROP is applied to all leaf probability vectors
[Figure: each fern's 2^d × N leaf table is compressed into a 2^d × M table]
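One standard way to build such a random ortho-projection is to orthonormalize a Gaussian matrix with a QR decomposition (the exact construction in the paper may differ); the dimensions below match the "compact signatures-176" setting.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 500, 176                       # original and reduced dimensions

# Orthonormalize the columns of a random Gaussian matrix via QR.
G = rng.standard_normal((N, M))
Q, _ = np.linalg.qr(G)                # Q: N x M with orthonormal columns
P = Q.T                               # M x N projection with orthonormal rows

leaf = rng.random(N)                  # an N-dim leaf probability vector
compact = P @ leaf                    # the M-dim compressed leaf

# Sanity check: orthonormal rows mean P @ P.T is (numerically) the identity.
orth_err = float(np.abs(P @ P.T - np.eye(M)).max())
```

Because the projection is applied to the stored leaf tables once, offline, signature computation at run time touches only M-dim vectors.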
Compact Signature
 Approach #1 (dimension reduction)
 No loss of matching performance, while the memory size of the classifiers is reduced (N ≫ M)
[Figure: compact-signature generation: the J ferns now sum M-dim compressed leaf vectors]
Compact Signature
 Approach #2 (quantization)
 The computation can be further streamlined by quantizing the compressed leaf vectors.
 Use 1 byte per element instead of the 4 bytes required by floats.
[Figure: each fern's 2^d × M float table P_float is quantized to a byte table P_byte]
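A simple uniform quantizer illustrates the 4x saving from storing one byte per element instead of a float32 (the exact quantizer used in the paper is not specified in the slides):

```python
import numpy as np

rng = np.random.default_rng(6)
M = 176

p_float = rng.standard_normal(M).astype(np.float32)   # compressed leaf vector

# Uniform quantization of the value range onto 0..255 (one byte per element).
lo, hi = float(p_float.min()), float(p_float.max())
scale = (hi - lo) / 255.0
p_byte = np.round((p_float - lo) / scale).astype(np.uint8)

# Dequantize to check the reconstruction error and the memory saving.
p_back = p_byte.astype(np.float32) * scale + lo
max_err = float(np.abs(p_back - p_float).max())
saving = p_float.nbytes / p_byte.nbytes               # 704 / 176 = 4.0
```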
Compact Signature
 Total complexity of the compact-signature classifier (or descriptor generator)
 About 50 times less memory
Experimental results
 Comparison
 4 image datasets
• Wall, Light, JPEG, and Fountain
• http://www.robots.ox.ac.uk/~vgg/research/affine
 Descriptors (number of classes N = 500)
• SURF-64 (Speeded-Up Robust Features)
• Sparse signatures
• Compact signatures-176
• Compact signatures-88
Experimental results
 Matching performance
[Figure: matching-rate plots between the reference image and each test image (m|n)]
Experimental results
 CPU time and Memory Consumption
Conclusion
 Contribution
 A descriptor generator is proposed, built on the well-known Fern classifier.
• Descriptors can be computed rapidly.
 The classifier size is extremely reduced.
• Almost 50 times less memory than the sparse version…
• achieved by dimension reduction (ROP) and a simple quantization technique
 Discussion
 A well-trained classifier is difficult to obtain in normal situations.
Q & A