Feature-based deformable
registration of neuroimages using
interest point and feature
selection
Leonid Teverovskiy
Center for Automated Learning and Discovery
Carnegie Mellon University
Description of the problem
Our task is to align given neuroimages so that their corresponding
anatomical structures have the same coordinates
Existing approaches
• Landmark-based registration.
The deformation between images is computed
from user-defined correspondences
between certain points, curves, or surfaces
on the neuroimages.
- Not fully automatic.
- The transformation of non-landmark points is
interpolated from the transformation of the
landmark points.
Landmark-based registration
Existing approaches
• Registration driven by a similarity
measure. Deformation model is
parameterized and then a numerical
optimization procedure is used to find
parameters that maximize some similarity
measure.
- Automatic, but prone to local maxima.
- The more degrees of freedom
deformation model has, the harder it is to
find optimal parameters for it.
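As a minimal sketch of similarity-driven registration, the toy code below searches a very small deformation model (integer translations only, a stand-in for the full affine case) for the parameters that minimize the SSD similarity measure. All names here are illustrative, not from the talk.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences, normalized by pixel count."""
    return float(np.mean((a - b) ** 2))

def register_translation(reference, moving, max_shift=5):
    """Brute-force search over integer translations for the shift that
    minimizes SSD. Real systems optimize many more parameters with
    gradient-based methods, which is where local maxima become a problem."""
    best = (0, 0)
    best_ssd = np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            s = ssd(reference, shifted)
            if s < best_ssd:
                best_ssd, best = s, (dy, dx)
    return best, best_ssd

# Toy example: recover a known circular shift of a random image.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
mov = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
shift, err = register_translation(ref, mov)
```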
Affine registration driven by sum of
square differences of the images
SSD: 0.0085
Sometimes it works
Affine registration driven by sum of
square differences of the images
SSD: 0.0063
Sometimes it works
Affine registration driven by sum of
square differences of the images
SSD: 0.0289
Sometimes it works
Affine registration driven by sum of
square differences of the images
SSD: 0.0188
Sometimes it does not
Existing approaches
• Feature-based registration. A feature
vector is computed for each voxel.
Correspondences between voxels in the
reference image and voxels in the input
image are estimated based on the
similarity of their feature vectors.
- Best results among existing methods.
- Existing systems have many hand-tuned
parameters, including the components of the
feature vectors.
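The core matching step can be sketched in a few lines: for each input-image voxel, pick the reference voxel whose feature vector is closest. The feature values below are made up for illustration.

```python
import numpy as np

# Hypothetical per-pixel feature vectors (one row per pixel) for a tiny
# reference image (3 pixels) and input image (2 pixels).
ref_feats = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
inp_feats = np.array([[0.9, 0.1], [0.1, 1.1]])

# For each input pixel, choose the reference pixel whose feature vector
# is nearest in Euclidean distance.
d = np.linalg.norm(inp_feats[:, None, :] - ref_feats[None, :, :], axis=2)
matches = d.argmin(axis=1)   # index of matched reference pixel
```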
Feature-based registration
Our goals
• A fully automatic method that selects which
features to use depending on
- the modality of the images;
- the anatomical structures we care most
about registering.
• No restriction on the degrees of freedom
of the deformation model.
A few questions…
Would it be a hard task to register these two images for a human?
A few questions…
OK, then how about registering this image with a rotated copy of itself?
We can get some idea
about how difficult a
registration task will be
even without seeing the
other image!
A few questions…
Not an easy task indeed
A few questions…
If there were some points that “stood out”, we could easily
find what the rotation was… provided we can determine
correspondences correctly.
We are facing two different
problems:
• How to automatically find interesting points
in the reference image.
• How to find the corresponding points in the
input image.
Feature Extraction.
We compute various rotationally
invariant features at different scales.
…
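As an illustrative sketch of multi-scale, rotationally invariant feature extraction: gradient magnitude and a 5-point Laplacian (both invariant to in-plane rotation) computed at several smoothing scales. The talk's actual pool also includes derivatives, Gabor, Harris, and ring-intensity features; the box filter here is just a cheap stand-in for proper Gaussian scale-space smoothing.

```python
import numpy as np

def box_smooth(img):
    # 3x3 box filter via circular shifts: a crude stand-in for
    # Gaussian smoothing at increasing scales.
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, 0), dx, 1)
    return acc / 9.0

def invariant_features(img, n_scales=3):
    """Two rotationally invariant features per scale:
    gradient magnitude and a 5-point Laplacian."""
    feats = []
    for _ in range(n_scales):
        gy, gx = np.gradient(img)
        feats.append(np.hypot(gx, gy))                  # gradient magnitude
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        feats.append(lap)                               # Laplacian
        img = box_smooth(img)                           # next (coarser) scale
    return np.stack(feats, axis=-1)                     # H x W x (2 * n_scales)

F = invariant_features(np.random.default_rng(1).random((16, 16)))
```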
If we knew h(Pi|F) we could do this:

a pixel in the input image → [feature vector] → h(Pi|F) → most likely correspondences in the reference image

Here h(Pi|F) is the probability that the given feature
vector “belongs” to pixel Pi in the reference image.
We could find h(Pi|F) …

h(Pi|F) = g(F|Pi) q(Pi) / Σj g(F|Pj) q(Pj)

where g(F|Pi) is the probability of observing feature vector F at pixel Pi,
and q(Pi) is the prior

…if we knew what g(F|Pi) and q(Pi) were.
Prior q(Pi)
• If we have reason to believe that certain pixels in
the reference image are more likely to correspond to
the given pixel in the input image, we can express those
beliefs through the prior.
• We will use a uniform prior for now.
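A minimal sketch of the Bayes step, assuming per-pixel log-likelihoods log g(F|Pi) are already available; the computation is done in log space for numerical stability, and the prior defaults to uniform as on the slide.

```python
import numpy as np

def posterior(log_lik, prior=None):
    """h(Pi|F) ∝ g(F|Pi) q(Pi): normalize likelihood times prior
    over all candidate reference pixels."""
    log_lik = np.asarray(log_lik, dtype=float)
    if prior is None:
        prior = np.full(log_lik.shape, 1.0 / log_lik.size)  # uniform q(Pi)
    log_post = log_lik + np.log(prior)
    log_post -= log_post.max()          # subtract max to avoid overflow
    p = np.exp(log_post)
    return p / p.sum()

# Two equally plausible reference pixels and one implausible one:
# the posterior splits the mass ~0.5 / 0.5 (cf. the 0.4999 slide).
h = posterior([-1.0, -1.0, -50.0])
```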
We can estimate g(F|Pi)!
• We applied about 1560 affine transforms to the
reference image and computed features for each pixel
in each of the resulting 1560 images.
• Thus we obtain 1560 feature vectors for each
anatomical location in the reference image.
• We assume that the components of the feature vector are
independent of each other, each following a
Gaussian distribution.
• We find the MLE of the mean and variance of each Gaussian.
• g(F|Pi) is the product of these Gaussians.
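A sketch of that estimation for a single anatomical location, with synthetic training data standing in for the features computed from the 1560 transformed images. The independence assumption makes log g(F|Pi) a sum of per-component Gaussian log-densities.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training data: the feature vectors observed at one
# anatomical location across 1560 affinely transformed copies of the
# reference image (rows = transforms, columns = feature components).
samples = rng.normal(loc=[2.0, -1.0], scale=[0.5, 0.2], size=(1560, 2))

# MLE of each per-component Gaussian (np.var's default 1/N divisor
# is exactly the MLE of the variance).
mu = samples.mean(axis=0)
var = samples.var(axis=0)

def log_g(F):
    """log g(F|Pi): sum of per-component Gaussian log-densities,
    i.e. the log of the product of the fitted Gaussians."""
    return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                        - (F - mu) ** 2 / (2 * var)))
```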
We are almost done; we need a way of distinguishing good correspondences …

a pixel in the input image → [feature vector] → h(Pi|F)

The blue dots represent correspondences with probability 0.4999 each;
the probability of all other correspondences is negligibly small.
… from bad correspondences
Risk
Risk = Σ h(Pi|F) L(Pi, Po)
• L(Pi, Po) – the loss: the geometric
distance between an estimated
corresponding pixel Pi and the
correct corresponding pixel Po.
• When Po is unknown, we use the MAP
estimate of Po instead.
• Correspondences with low risk are
“good” correspondences.
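The risk formula above can be sketched directly: a sharply peaked posterior concentrates its mass near the MAP estimate and gets low risk, while a posterior split between distant candidates gets high risk. Candidate coordinates here are made up.

```python
import numpy as np

def risk(h, coords, p_map=None):
    """Risk = sum_i h(Pi|F) * L(Pi, Po), where the loss L is the
    Euclidean distance to Po; when Po is unknown, its MAP estimate
    (the most probable candidate) is used instead."""
    h = np.asarray(h, dtype=float)
    coords = np.asarray(coords, dtype=float)   # one (y, x) per candidate Pi
    if p_map is None:
        p_map = coords[h.argmax()]             # MAP estimate of Po
    loss = np.linalg.norm(coords - p_map, axis=1)
    return float(np.sum(h * loss))

# A peaked posterior (good correspondence) vs. a split one (bad).
coords = [(0, 0), (0, 10)]
sharp = risk([0.99, 0.01], coords)
split = risk([0.5, 0.5], coords)
```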
Where are we now?
• We can find correspondences between
pixels in the input image and pixels in the
reference image using feature vectors
computed at the pixels of the input image.
• And we can also determine interesting
points by finding correspondences
between pixels in the reference image and
pixels in the … reference image!
Feature Selection
• Select a feature subset for determining
interesting points.
• Select a feature subset for determining
correspondences.
• Use the sum of squared differences (quality of
registration) to evaluate feature subsets.
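The forward selection used in the experiments below can be sketched as a greedy loop: at each step, add the feature whose inclusion yields the lowest registration SSD. The evaluator here is a toy stand-in (real evaluation would run a full registration per subset); the feature names come from the talk's pool, but the scores are invented.

```python
def forward_select(pool, evaluate, k=3):
    """Greedy forward feature selection: repeatedly add the feature
    whose inclusion minimizes the score returned by `evaluate`
    (assumed to be the SSD of a registration run with that subset)."""
    chosen = []
    while len(chosen) < k:
        best = min((f for f in pool if f not in chosen),
                   key=lambda f: evaluate(chosen + [f]))
        chosen.append(best)
    return chosen

# Toy evaluator: pretends M4, S1, S5 reduce SSD the most.
useful = {'M4': 0.03, 'S1': 0.02, 'S5': 0.01}
def toy_ssd(subset):
    return 0.08 - sum(useful.get(f, 0.0) for f in subset)

picked = forward_select(['M4', 'S1', 'S5', 'H', 'G2'], toy_ssd)
```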
Bird's-eye view

Feature pool → interesting-point feature subset → (applied to the reference image) interesting points
Feature pool → correspondence feature subset → (applied to the input image) h(Pi|F) → correspondences
Correspondences → RANSAC → affine transform → driving voxels → TPS transform → registration quality → feedback to the feature subsets
Experimental results.
Reference Image
Input Image
Interesting points
Best correspondences
Driving voxels
Registration results
SSD: 0.0047
Reference image
Registered input image
Difference image
More experiments
1. Select a random set of interest points; select a
random subset of features to find
correspondences.
2. Select a random set of interest points; use
forward feature selection to find the subset of
features to be used for estimating
correspondences.
3. Select a random subset of features to find
interest points; select a random subset of
features to find correspondences.
More experiments
4. Select a random subset of features to find
interest points; use forward selection to
choose the subset of features used for determining
the correspondences.
5. Select a random subset of features to find
interest points; use forward selection to
choose the subset of features used for determining
the correspondences. This time, start the forward
selection from the interest-point subset with one
feature removed.
More experiments
6. Select a random subset of features to find interest
points. Then employ forward selection to choose the
subset of features to be used for determining the
correspondences. Find a new set of interest points using
this subset of features, and iterate.
7. Select a random subset of features to find interest
points. Then employ forward selection to choose the
subset of features to be used for determining the
correspondences. This time, start the forward selection
from the interest-point subset with one feature removed.
Find a new set of interest points using the selected
subset of features, and iterate.
More experiments
For each feature-selection strategy we ran
registration eight times, each time
restarting at a random point.
Each run continues for 20 iterations.
Feature pool
22 features, all at the finest scale (for now):
1. First derivative (D1)
2. Second derivative (D2)
3. Third derivative (D3)
4. Fourth derivative (D4)
5. Fifth derivative (D5)
6. Gabor_0_3 (G1)
7. Gabor_0_5 (G2)
8. Gabor_2_7 (G3)
9. Gabor_3_7 (G4)
10. Gabor_4_9 (G5)
11. Laplacian (L)
12. Harris (H)
13. Intensity_1_mean (M1)
14. Intensity_1_std (S1)
15. Intensity_2_mean (M2)
16. Intensity_2_std (S2)
17. Intensity_4_mean (M3)
18. Intensity_4_std (S3)
19. Intensity_8_mean (M4)
20. Intensity_8_std (S4)
21. Intensity_16_mean (M5)
22. Intensity_16_std (S5)
“Intensity_n_mean” is the mean of the pixel intensities inside a ring with
inner radius log2(n) and outer radius n, centered at the given pixel.
“Intensity_n_std” is the standard deviation of the pixel intensities inside
the same ring.
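The ring-intensity features can be sketched directly from that definition; on a constant image the ring mean is the constant and the standard deviation is zero, which the toy example below checks.

```python
import numpy as np

def ring_stats(img, cy, cx, n):
    """Mean and std of pixel intensities inside a ring centered at
    (cy, cx) with inner radius log2(n) and outer radius n, as defined
    on the feature-pool slide (Intensity_n_mean / Intensity_n_std)."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    r = np.hypot(yy - cy, xx - cx)          # distance of each pixel to center
    mask = (r >= np.log2(n)) & (r <= n)     # pixels inside the ring
    vals = img[mask]
    return vals.mean(), vals.std()

# On a constant image the ring mean is 1 and the std is 0.
img = np.ones((21, 21))
m, s = ring_stats(img, 10, 10, 4)
```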
Feature selection significantly improves registration results. Here,
100 interest points were used.
Registration error when 30 interest points are used. Interest points
are selected (1) at random from all image pixels, (2) at random from
image pixels that lie on edges, (3) using interest point selection.
Legend: Random IP; Random edge IP; IP selection.
Interest point selection has a greater positive effect on registration
accuracy as the number of interest points is decreased.
Typical graph for the case when feature-selection strategy
number 7 is used. The brown line shows the registration error when an
affine deformation is used; the green line, when a thin-plate spline
deformation is used.
The feature pool consists of 22 features; 8 features appear to be enough
for good registration results.
Histogram of selected interesting point features when feature selection
strategy number 7 is used
Histogram of selected correspondence features when feature selection
strategy number 7 is used
Histogram of selected interesting point features when feature selection
strategy number 6 is used
Histogram of selected correspondence features when feature selection
strategy number 6 is used
Registration results at each step of feature subset selection
Reference slice | Input slice
The reference slice and input slice are midsagittal slices of neuroimages
of different subjects. In addition, the input slice was affinely transformed.
Registration results at each step of feature subset selection:
{M4}                     SSD: 0.04719
{M4, S1}                 SSD: 0.04043
{M4, S1, S5}             SSD: 0.01954
{M4, S1, S5, M5}         SSD: 0.01644
{M4, S1, S5, M5, H}      SSD: 0.01777
{M4, S1, S5, M5, H, G2}  SSD: 0.01625
Thank you