Integrated Analysis of Vascular and Non-Vascular Changes from
Color Retinal Fundus Image Sequences
Harihar Narasimha-Iyer¹, Ali Can², Badrinath Roysam¹, Charles V. Stewart¹, Howard L. Tanenbaum³ & Anna Majerovics³
¹ Rensselaer Polytechnic Institute, Troy, New York 12180, USA.
² Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543, USA.
³ The Center for Sight, 349 Northern Blvd., Albany, New York 12204, USA.
ABSTRACT
Algorithms are presented for integrated analysis of both vascular and non-vascular changes
observed in longitudinal time-series of color retinal fundus images, extending our prior work. A Bayesian
model selection algorithm that combines color change information, and image understanding systems
outputs in a novel manner is used to analyze vascular changes such as increase/decrease in width, and
disappearance/ appearance of vessels, as well as non-vascular changes such as appearance/disappearance
of different kinds of lesions. The overall system is robust to false changes due to inter- and intra-image
non-uniform illumination, imaging artifacts such as dust particles in the optical path, alignment errors and
outliers in the training-data.
An expert observer validated the algorithms on 54 regions selected from 34 image pairs. The
regions were selected such that they represented diverse types of changes of interest, as well as no-change
regions. The algorithm achieved a sensitivity of 82% and a specificity of 86% on these regions. The
proposed system is intended for applications such as retinal screening, image-reading centers, and as an
aid in clinical diagnosis, monitoring of disease progression, and quantitative assessment of treatment
efficacy.
Key words: Change detection, Change analysis, Illumination correction, Bayesian classification, Retinal
image analysis, Diabetic retinopathy.
Correspondence: Badrinath Roysam, Professor, JEC 7010, 110 8th Street, Rensselaer
Polytechnic Institute, Troy, New York 12180-3590, USA. Phone: 518-276-8067, Fax: 518-276-8715,
Email: [email protected].
Narasimha-Iyer et al.
Longitudinal Retinal Change Analysis
1
I. Introduction
Retinal vessels are affected by many diseases. In conditions such as diabetic retinopathy, the blood
vessels often show abnormalities at early stages [1, 2]. Changes in retinal blood vessels are also associated
with hypertension [3-6] and other cardio-vascular conditions [7]. Common structural changes associated
with vessels include changes in width, disappearance of vessels due to occlusion, and growth of new
vessels i.e., neo-vascularization. For instance, in [3] it is shown that the retinal arteries dilate by about 35%
in cases with hypertension. Age and hypertension can also cause changes in the bifurcation geometry of
retinal vessels [8].
In an earlier paper [9], the authors had described an integrated framework for analyzing changes from
color retinal fundus images. The main focus of that paper was robustly detecting and analyzing changes
associated with diabetic retinopathy in the non-vascular regions. In this paper, we present an extension of
the framework described in [9] to include changes associated with the vasculature as well. The proposed
method results in analysis of the changes on the vascular and non-vascular regions and is robust to both
inter- and intra-image illumination variations, dust artifacts on film, and image alignment errors. The
proposed system is intended for applications such as retinal screening, image-reading centers, and as an aid
in clinical diagnosis, monitoring of disease progression, and quantitative assessment of treatment efficacy.
The rest of the paper is organized as follows: a brief background of the related literature is presented in
Section II. Section III summarizes the components of the change analysis framework described in [9].
Section IV describes the new Bayesian method to analyze changes in the vascular regions. Experimental
results and conclusions form Sections V and VI, respectively.
II. Summary of Relevant Background Literature
The main challenges in automatic change analysis of blood vessels are accurately segmenting the
blood vessels, accurately aligning the images, correcting for any global variations in illumination,
and finding areas of blood vessels that have changed significantly. As a result, even though many
methods have been proposed for the segmentation of vasculature from retinal images [10-22], relatively
few methods have been described for fully-automatic detection and analysis of changes in the vasculature.
Berger et al. used manual alternating flicker animation to detect changes in the optic nerve head in [23].
Semi-automated methods have been described in [24, 25]. Many prior methods considered global
properties of the retina [1, 7, 26-28]. These methods captured summary descriptions such as the average
width or the ratio of widths of the retinal vasculature and compared these numbers with the measurements
from the same individual from previous visits or with population distributions. For example, Heneghan et
al. described automated methods to segment the blood vessels in [26]. They then detected changes in the
average vessel width and tortuosity. The methods described above find global changes and hence are not
able to pinpoint exact locations of the blood vessels that changed. A method to measure vessel width
changes from multiple frame fundus photography was described by Dumskyj et al. in [29]. Recently, a
fully automated method that finds changes in retinal vessels was described by Fritzsche in [30].
A detailed description of the methods that detect changes from retinal images can be found in [9].
Here we briefly mention some of the main papers for completeness. Cree et al. [31, 32] describe methods to
find leakage of fluorescein in blood vessels by looking at restored images from an angiographic sequence
over time and finding areas that do not have a particular pattern of intensity changes. Studies of
microaneurysm turnover were also made by Goatman et al. [33]. Sbeh and Cohen [34] studied changes in
drusen. Also, many methods have been proposed for segmenting lesions such as exudates [35-40],
microaneurysms [31-33, 41-43] and drusen [34, 44]. Zhou et al. [45] and Ballerini [46], describe methods
for detecting diabetic retinopathy from vascular features and anomalies in the foveal avascular zone
respectively. The STARE project [47] and the retina mapping system described by Pinz et al. [48] are
also major notable efforts in integrating the segmentation of the different retinal features.
The problem of interest to this work goes beyond change detection. Specifically, we are interested in
generating concise and high-level descriptions of changes. The earlier paper by the authors [9] described a
robust change analysis framework for the detection and classification of changes from retinal fundus
images.
III. Components of the Change Analysis Framework
For completeness and clarity, we summarize the main elements of the change analysis framework
described in our prior paper [9] to provide context for the extensions reported here. Table 1 is a glossary
of symbols used in the following sections.
Segmentation of Retinal Features: The retinal vasculature is first traced using an exploratory
vessel tracing algorithm [15, 16, 30]. This algorithm recursively finds connected pairs of parallel edges of
blood vessels using directional edge templates. The branch points are extracted from the vessel
centerlines as landmarks for registration [49]. The approximate location of the optic disk is estimated
using an adaptation of Hoover’s fuzzy convergence algorithm [50]. The radius of the optic disk is
estimated using an adaptive thresholding strategy and the location is further refined using template
matching [9]. Based on the location of the optic disk, the fovea is detected using an adaptation of the
algorithm described by Pinz et al. [48]. Figure 1 shows an example of automatic vessel segmentation, and
detection of the optic disk and fovea.
Sub-Pixel Accuracy Registration: We use the robust dual-bootstrap iterative closest point (ICP)
algorithm [51] to register the longitudinal time series of images. This algorithm is feature based and uses
the branching and cross-over points of the detected vasculature as landmarks to estimate a 12-dimensional
spatial transformation between the images [52]. The algorithm is robust to illumination variations and low
spatial overlap.
Illumination Correction: Non-uniform illumination is corrected using the Iterative Robust
Homomorphic Surface Fitting algorithm [9]. This algorithm robustly estimates the illumination and
reflectance components from the color fundus image by homomorphic filtering and robust surface fitting,
leveraging extracted retinal features such as the blood vessels, optic disk and fovea and lesions that are
known to have significantly different reflectance properties compared to the normal retinal background.
The observed color image is modeled as the product of an illumination component, I(x, y, λ), and a
reflectance component, R(x, y, λ), as shown below:

F(x, y, λ) = I(x, y, λ) × R(x, y, λ),    (1)

where λ represents the wavelength of the color channel (i.e., red, green, or blue) [53-56]. This model is
valid everywhere except for the optic disk, fovea, blood vessels and the pathological lesions, where the
surface is specular. The slowly-varying illumination component is estimated by computing the logarithm
of the image at each wavelength and fitting a 4th-order polynomial surface with 15 parameters
P = [p₁, p₂, ..., p₁₅]ᵀ. The weighted least-squares estimate for this parameter vector is given by:

P = (SᵀWS)⁻¹(SᵀW)F_L,    (2)
where W is a diagonal weight matrix that serves to exclude pixels on the optic disk, fovea, and blood
vessels, F_L = [log F(0, 0), ..., log F(N, M)]ᵀ is a vector consisting of the logarithms of the pixel
intensities, and S is a matrix composed of powers of x and y, one row for each pixel. For instance, the
row corresponding to pixel (x, y) is of the form [x⁴, x³y, x²y², ..., y, 1]. Once the parameters of the
surface are estimated, the illumination and reflectance components can be recovered as shown below:

I(x, y, λ) = exp(SP);
R(x, y, λ) = exp(F_L(x, y, λ) − SP).    (3)
In order to reduce the effect of pathologies on the estimation, an iterative strategy is used and pixels
with intensities that are in the upper and lower 10th percentile are excluded from the estimation during
each iteration [9]. Figure 1 shows the results of such illumination correction.
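As a concrete illustration, the surface-fitting step of equations (2) and (3) can be sketched in Python/NumPy as below. This is a minimal sketch, not the paper's implementation: the number of iterations, the percentile band, and the ordering of the 15 monomials are assumptions, and with the binary (0/1) weights used here the least-squares solve is equivalent to equation (2).

```python
import numpy as np

def design_row(x, y, order=4):
    # All monomials x^a * y^b with a + b <= order: 15 terms for a 4th-order surface.
    return [x**a * y**b for a in range(order, -1, -1) for b in range(order - a + 1)]

def fit_illumination(F, weights, order=4, n_iter=2, pct=10):
    """Iteratively fit a smooth illumination surface to one color channel F.

    Weighted least squares on log-intensities (eq. 2), excluding the upper and
    lower `pct` percentile of residuals at each iteration, in the spirit of the
    Iterative Robust Homomorphic Surface Fitting algorithm. `weights` is 0 on
    vessel/optic-disk/fovea/lesion pixels and 1 elsewhere.
    """
    H, W = F.shape
    ys, xs = np.mgrid[0:H, 0:W]
    S = np.array([design_row(x, y, order)
                  for x, y in zip(xs.ravel(), ys.ravel())], dtype=float)
    FL = np.log(F.ravel() + 1e-6)                 # vector of log pixel intensities
    w = weights.ravel().astype(float)
    for _ in range(n_iter):
        Wd = np.diag(w)                           # dense for clarity; sparse in practice
        P = np.linalg.lstsq(Wd @ S, Wd @ FL, rcond=None)[0]    # eq. (2) for 0/1 weights
        resid = FL - S @ P
        lo, hi = np.percentile(resid[w > 0], [pct, 100 - pct])
        w = weights.ravel() * ((resid >= lo) & (resid <= hi))  # drop outlier pixels
    I = np.exp(S @ P).reshape(H, W)               # illumination component, eq. (3)
    R = F / I                                     # reflectance component
    return I, R
```

On a synthetically shaded image the recovered reflectance is approximately flat; in the paper, the weights come from the segmented retinal features.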
Robust Change Detection: We used imagse scanned from 35 mm films and hence the first step
prior to change detection is to remove artifacts arising from dust particles on the surface of the film. The
ratio of the estimated reflectance components from the red and green channels has been shown [9] to be
robust to this artifact. The normalized sum of the squared differences of the ratios within a neighborhood
w_i is used to detect the changes [53]:

Ω_i = (1/σ_n²) Σ_{(x,y)∈w_i} ΔR_ratio(x, y)²  ≷  Γ,    (5)

where the decision is Change if Ω_i exceeds the threshold Γ and NoChange otherwise.
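A minimal sketch of the windowed test in equation (5) is given below, assuming square non-overlapping windows; the window size, threshold, and noise variance used here are illustrative placeholders, not the paper's values.

```python
import numpy as np

def change_mask(R_ratio_i, R_ratio_j, sigma_n, win=8, gamma=2.0):
    """Windowed change test of eq. (5) on the red/green reflectance ratios
    of two aligned images. Non-overlapping square windows and the values of
    `win`, `gamma` and `sigma_n` are illustrative choices."""
    d2 = (R_ratio_i - R_ratio_j) ** 2
    H, W = d2.shape
    mask = np.zeros((H, W), dtype=bool)
    for y in range(0, H, win):
        for x in range(0, W, win):
            omega = d2[y:y + win, x:x + win].sum() / sigma_n ** 2   # Omega_i
            mask[y:y + win, x:x + win] = omega > gamma              # Change vs NoChange
    return mask
```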
Post-Detection Change Classification: The change mask obtained from the previous step is
classified into multiple categories reflecting pigmentation changes. The first part of Table 2 lists the
classes of interest and their significance. We use three ‘change features’ for the classification, given by:
f₁ = R_i(x, y, λ_green) / R_i(x, y, λ_red);
f₂ = R_i(x, y, λ_green) − R_j(x, y, λ_green);    (6)
f₃ = R_i(x, y, λ_green) + R_j(x, y, λ_green) − 2;
where R_i(x, y, λ_green) and R_i(x, y, λ_red) are the reflectance components of the green and red
channels of the ith image. Each pixel is classified into one of several color change classes, {C_i},
listed in Table 2, using a Bayesian classifier given below:

C* = arg max_{i=1,2,...,5} {g_i(X)},    (7)
where g_i(X) is the following discriminant function:

g_i(X) = −(1/2) Xᵀ Σ_i⁻¹ X + (Σ_i⁻¹ μ_i)ᵀ X − (1/2) μ_iᵀ Σ_i⁻¹ μ_i − (1/2) ln |Σ_i| + ln P(C_i).    (8)
In order to get a consistent change mask in the non-vascular regions, contextual information is integrated
into the classifier by using a Markov Random Field (MRF) formulation [54, 55], where the class label of
a pixel is influenced by the class labels of its neighbors.
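The classifier of equations (7) and (8) can be sketched as follows; the per-class parameters (mean, covariance, prior) are assumed to come from the training step described later, and the MRF smoothing is omitted from this sketch.

```python
import numpy as np

def discriminant(X, mu, Sigma, prior):
    # Quadratic Gaussian discriminant g_i(X) of eq. (8) for one class.
    Si = np.linalg.inv(Sigma)
    return (-0.5 * X @ Si @ X + (Si @ mu) @ X
            - 0.5 * mu @ Si @ mu
            - 0.5 * np.log(np.linalg.det(Sigma)) + np.log(prior))

def classify(X, params):
    # Pick the color-change class with the largest discriminant, eq. (7).
    # `params` is a list of (mean, covariance, prior) tuples, one per class.
    scores = [discriminant(X, mu, Sigma, p) for mu, Sigma, p in params]
    return int(np.argmax(scores))
```

Here X is the 3-vector of change features (f₁, f₂, f₃) at one pixel.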
IV. Change Analysis on the Vascular Regions
This section describes how the color change classification and the image understanding system
outputs are combined to build descriptions of the vascular changes. The exploratory vessel tracing
algorithm produces a set of traces for each image denoted:
Φ_i = {φ_i1, φ_i2, ..., φ_iN},    (9)
where φik is the k th trace segment for image I i . In order to arrive at semantic descriptions of the changes
associated with each vessel segment, we have to associate segments from each image to each other. In
retinal images, the vessels are not expected to move freely. Hence, the correspondence between the
segments in Φ i and Φ j is decided based on their overlap. The regions of change between the two
segmentations can be found by computing the logical Exclusive OR ( ⊕ ) of the vessel masks obtained
from the tracing algorithm for each image.
∆_vessel = Λ(Φ_i) ⊕ Λ(Φ_j),    (10)
where ∆_vessel is the vessel change mask and Λ(Φ_i) is the pixel mask obtained from Φ_i, with a value of 1
for regions with a vessel and 0 for the background. Figure 3 shows the vessel masks for two images and
the exclusive OR of the masks, which shows the change regions.
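A sketch of the mask XOR of equation (10) and the overlap-based correspondence rule is given below; the label-image encoding of the trace segments (0 = background) is an illustrative assumption, not the paper's data structure.

```python
import numpy as np

def vessel_change_mask(mask_i, mask_j):
    # Exclusive OR of the two binary vessel masks, eq. (10): pixels covered
    # by a vessel in exactly one of the two aligned images.
    return np.logical_xor(mask_i.astype(bool), mask_j.astype(bool))

def corresponding_segments(seg_i, seg_j):
    """Pair each trace segment of image i with the segment of image j it
    overlaps most; vessels are not expected to move between visits."""
    pairs = {}
    for m in np.unique(seg_i):
        if m == 0:
            continue
        overlap = seg_j[seg_i == m]       # labels of image j under segment m
        overlap = overlap[overlap > 0]
        pairs[int(m)] = int(np.bincount(overlap).argmax()) if overlap.size else None
    return pairs
```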
Suppose the mth vessel from Φ_i corresponds to the nth vessel from Φ_j, i.e., φ_im corresponds
to φ_jn. Then the change regions between the two segments can be defined as:

r_mn = Λ(φ_im) ⊕ Λ(φ_jn).    (11)
It is possible that one segment in the first image might overlap with parts of multiple segments in the
second image; in that case, the exclusive OR is computed between each pair of overlapping segments.
Also, there can be multiple change regions associated with one correspondence between segments. Below,
we describe the core strategy for describing the changes in one change region between the segments; the
same procedure is repeated for the other change regions.
It has been shown that the width of retinal vessels varies by about 4.8% depending upon the
instant in the cardiac cycle at which the image is captured [56]. Since there is no way for us to know
the exact point in the cardiac cycle at which the two images were captured by analyzing the color images
alone, we allow a 5% tolerance before declaring a change. Over a small window Γ, the vessel segments can
be assumed to be locally linear, with a particular width and orientation. If the vessel has undergone a
width change of more than 5% in this window, this translates to a change in the area of the vessel
by approximately 5% in the second image. This concept is illustrated in Figure 4, where a vessel
undergoes a significant width change greater than 5%. We use this information to identify regions of
significant change from the change mask.
After finding the change regions as in equation (11), we move a window over the segment and
compare the area of the original segment and the area of change regions inside the window. If the area of
the change region inside the window is larger than 5% of the original area inside the window, we consider
this a valid change region. Otherwise, the regions of change inside the window are not considered for
further processing. Once we have determined the change regions as described, the changes in each of
these regions need to be described separately.
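The windowed 5% area test can be sketched as follows; non-overlapping windows and the window size are assumptions made for simplicity, where the paper moves a window along each segment.

```python
import numpy as np

def significant_change_windows(seg_mask, diff_mask, win=16, tol=0.05):
    """Keep only change regions whose area inside a window exceeds `tol`
    (the 5% cardiac-cycle width tolerance) of the original segment's area
    within that window; smaller change regions are discarded."""
    H, W = seg_mask.shape
    keep = np.zeros_like(diff_mask, dtype=bool)
    for y in range(0, H, win):
        for x in range(0, W, win):
            seg_area = seg_mask[y:y + win, x:x + win].sum()
            chg_area = diff_mask[y:y + win, x:x + win].sum()
            if seg_area > 0 and chg_area > tol * seg_area:
                keep[y:y + win, x:x + win] = diff_mask[y:y + win, x:x + win]
    return keep
```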
The vascular changes considered in this work are the increase/decrease in width of a vessel
segment and appearance/disappearance of a vessel segment. These changes are higher-level descriptions
compared to the pixel level classifications described earlier. The challenge here is to select the best model
(description) for a change region, given the trace outputs and the individual pixel level color change
classifications in that region. A Bayesian model selection algorithm is proposed, where the model with
the highest posterior probability is assigned to a region. Let {M_a, M_d, M_nc} denote the set of models
under consideration, corresponding to appearance, disappearance, and "no-change" for a vessel segment.
Let P(M_p / I_i, I_j, φ_im, φ_jn) denote the posterior probability that r_mn belongs to change model M_p,
given the images and the trace segments. The posterior probability is expressed as follows:

P(M_p / I_i, I_j, φ_im, φ_jn) = P(I_i, I_j, φ_im, φ_jn / M_p) P(M_p) / Σ_{x=1}^{R} P(I_i, I_j, φ_im, φ_jn / M_x) P(M_x),    (12)
where P(I_i, I_j, φ_im, φ_jn / M_p) is the likelihood of r_mn belonging to change model M_p and P(M_p) is
the prior probability of model M_p. Using the chain rule for probabilities, the likelihood term can be
expressed as:

P(I_i, I_j, φ_im, φ_jn / M_p) = P(I_i / I_j, φ_im, φ_jn, M_p) × P(I_j / φ_im, φ_jn, M_p) × P(φ_im / φ_jn, M_p) × P(φ_jn / M_p).    (13)
Combining the first and second terms, and the third and fourth terms, the above equation
can be simplified to:

P(I_i, I_j, φ_im, φ_jn / M_p) = P(I_i, I_j / φ_im, φ_jn, M_p) × P(φ_im, φ_jn / M_p).    (14)

Since the images are independent of the trace results, equation (14) can be written as:
P(I_i, I_j, φ_im, φ_jn / M_p) = P(I_i, I_j / M_p) × P(φ_im, φ_jn / M_p).    (15)
Since the denominator in equation (12) is the same for all the models, using equation (15) and assuming
equal prior probabilities for the models, the discriminant function for the models can be obtained as
shown below:

g_P(r_mn) = P(I_i, I_j / M_p) × P(φ_im, φ_jn / M_p).    (16)
The model with the maximum value of the discriminant function is assigned to the region, i.e.,

M* = arg max_{P∈{a, d, nc}} {g_P(r_mn)}.    (17)
Estimating the likelihoods: Section III described the method to classify individual pixels into
different color change classes. The higher-order changes described in the previous section are related to
these pixel-level changes. The disappearance of a vessel or a decrease in width is associated with a
decrease in redness (C_rd), and the appearance of a vessel or an increase in width is associated with an
increase in redness (C_ru). Since the same color changes (red-up/red-down) are associated with
appearance/disappearance as well as increase/decrease in width, these pairs of models are considered
separately, and a heuristic is developed below to distinguish between the change models. Assuming
independence between the pixels in r_mn, the likelihoods in equation (16) due to the images can be
approximated as follows:
P(I_i, I_j / r_mn ∈ M_d) = Π_{x∈r_mn} P(I_ix, I_jx / C_rd);
P(I_i, I_j / r_mn ∈ M_a) = Π_{x∈r_mn} P(I_ix, I_jx / C_ru);    (18)
P(I_i, I_j / r_mn ∈ M_nc) = Π_{x∈r_mn} P(I_ix, I_jx / C_nc).
The likelihoods associated with the tracing algorithm outputs have to be estimated next. Given the
inevitability of tracing errors, however rare, it is desirable to incorporate robustness to such errors.
Common errors include missed segments and false traces. Our approach to achieving robustness to such
errors is based on exploiting confidence factors that can be computed for each segment during the tracing.
We first describe the application of the confidence factors, and then describe their computation.
Using the confidence factors α_im and α_jn associated with segments φ_im and φ_jn, respectively, the
probabilities of vessel disappearance, appearance, and "no change" are formulated as follows. For the
case when a change region was part of φ_im but not of φ_jn, the likelihoods can be written as:
P(φ_im, φ_jn / r_mn ∈ M_d) = 2α_im;
P(φ_im, φ_jn / r_mn ∈ M_a) = 2(1 − α_im);    (19)
P(φ_im, φ_jn / r_mn ∈ M_nc) = 2(1 − α_im).
For the case when a change region is part of segment φ_jn but not of φ_im, the likelihoods are formulated as:

P(φ_im, φ_jn / r_mn ∈ M_d) = 2(1 − α_jn);
P(φ_im, φ_jn / r_mn ∈ M_a) = 2α_jn;    (20)
P(φ_im, φ_jn / r_mn ∈ M_nc) = 2(1 − α_jn).
The likelihoods here are triangular distributions. Since the confidence α always lies between 0
and 1, the triangular distribution was found to be a good choice for the likelihoods. For instance, as seen
from equation (20), when the confidence associated with a region is very low (α_jn ≈ 0), the likelihood
for that region to have appeared is set to a very low value.
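Putting equations (16)-(20) together, the model selection for one change region can be sketched as below, assuming equal model priors as in the paper; the dictionary layout of the inputs and the key names 'd', 'a', 'nc' are illustrative.

```python
def select_model(color_lik, alpha, region_in_i):
    """Bayesian model selection of eqs. (16)-(20) for one change region r_mn.

    color_lik: per-model image likelihoods P(I_i, I_j / M_p), i.e. products
    of per-pixel color-change probabilities (eq. 18), keyed by
    'd' (disappearance), 'a' (appearance), 'nc' (no change).
    alpha: tracing confidence of the segment containing the region.
    region_in_i: True if the region was traced in image i but not in image j.
    """
    if region_in_i:                      # eq. (19): vessel present earlier
        trace_lik = {'d': 2 * alpha, 'a': 2 * (1 - alpha), 'nc': 2 * (1 - alpha)}
    else:                                # eq. (20): vessel present later
        trace_lik = {'d': 2 * (1 - alpha), 'a': 2 * alpha, 'nc': 2 * (1 - alpha)}
    scores = {p: color_lik[p] * trace_lik[p] for p in ('d', 'a', 'nc')}  # eq. (16)
    return max(scores, key=scores.get)                                   # eq. (17)
```

Note how a low tracing confidence pulls an otherwise appearance-like color change toward the no-change model, which is the intended robustness to tracing errors.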
Estimating the confidence of the trace segments: The tracing algorithm output for the
vasculature is in the form of centerline points of the vessels, together with the width and local
orientation of the vessel at the centerline points. The algorithm also produces a correlation template
response at each centerline
point [15]. The template response is a measure of how strong the edges are at this point in the vessel.
Hence the average template response for each trace segment is used to compute the confidence of a trace
segment. The confidence value α_im is set to be proportional to the ratio of the average template response
for that segment to the maximum average template response over all segments in that image. Let us denote
the average template response for a segment by θ_im and the maximum average template response over all
segments in the particular image by Θ_i. Then the confidence for the mth segment is given by:

α_im = θ_im / Θ_i.    (21)
The above equation means that the segment with the maximum average template response in the
image will have a confidence of 1, and the confidence values of the other segments vary proportionally
to their average edge strengths.
Distinguishing changes in thickness and appearance: As mentioned earlier, the pixel-level
color change associated with the appearance of a vessel and an increase in thickness is the same; the
same is true for the disappearance of a vessel and a decrease in thickness. We use the properties of the
change region to differentiate between the two models. A change region that contains centerline points
from either of the IUS outputs corresponds to the appearance/disappearance of a vessel, whereas a change
region that does not contain centerline points lies on the sides of a vessel and corresponds to an
increase/decrease in thickness.
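This centerline heuristic can be sketched as a simple test on boolean masks; the array representation of the region and of the two centerline sets is an assumption made for illustration.

```python
import numpy as np

def change_type(region_mask, centerlines_i, centerlines_j):
    # Heuristic: a change region that contains centerline points from either
    # tracing corresponds to vessel appearance/disappearance; one without
    # centerline points lies along the vessel sides and is a width change.
    has_centerline = np.any(region_mask & (centerlines_i | centerlines_j))
    return 'appearance/disappearance' if has_centerline else 'width change'
```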
Detection of Neo-Vascularization on the optic disk: Neo-vascularization usually occurs through the
appearance of fine vessels on the optic disk. Because these vessels are extremely thin, they are often
missed by the tracing algorithm, so their appearance can in turn be missed by the method described
above. This warrants the use of alternative methods that are robust to such tracing errors.
The main challenge in finding changes over the optic disk is to align the disks from the two
images accurately. Although the 12-parameter registration algorithm works well on most of the retinal
regions, the registration accuracy is reduced near the optic disk due to the high local curvature. With this
in mind, we perform a local refinement of the registration results around the optic disk region using an
optical flow algorithm [57]. After the refinement step, we perform change detection just on the optic disk
region using the method described earlier. Each of the detected change regions is then tested for three
change models - appearance of new vessel, disappearance of new vessel, and no-change. This is
accomplished by adapting equation (16) to use only the likelihood terms associated with the color change
in the image. The discriminant function for a change region on the optic disk becomes:
g_P(O_m) = P(I_i, I_j / M_p) × P(M_p),    (22)

where O_m is the mth change region detected on the optic disk, and the likelihood terms have the same
meanings as described earlier. Figure 5 shows an example of neo-vascularization on the optic disk and
how it is detected by this method.
Tracing at multiple sensitivities: The tracing algorithm can be made more or less sensitive in
accepting a valid trace. Hence, we trace each image with two settings: the default setting, and a
higher-sensitivity setting that extracts the weaker vessels. For each change region that is computed, we
check whether the region is part of the tracing at higher sensitivity. The model selection is performed
only if the region is not part of the tracing with higher sensitivity.
V. Experimental Validation Results
The clinical data was recorded at the Center for Sight (Albany, NY) using a TOPCON TRC 50IA
fundus camera. The images, originally stored on 35 mm film slides, were converted into digital format
by scanning with a digital slide scanner (Canon CanoScan 6735A002). Twenty-two (22) subjects with
proliferative and non-proliferative diabetic retinopathy were selected for validating the effectiveness of
the algorithms. Images were obtained from multiple sittings for each subject, with multiple images from
each sitting. From each sitting, an image centered on the fovea was selected for the experiments. It should
be noted that the change analysis methods are applicable throughout the retina. The choice of
fovea-centered images was made because of their clinical significance, and because they gave the maximum
amount of overlap between the images from different sittings.
A training set consisting of 18 image pairs, distinct from the test set, was selected for training the
color change classifier. The set contained image pairs exhibiting appearance/disappearance of
microaneurysms, bleeding, exudates and cottonwool spots. Training samples for each class described in
Table 2 were selected manually by a retina specialist from the illumination-corrected image pairs. The
original images were at hand for cross-checking. The classifier was trained based on these samples, by
calculating the relevant statistics for each of the 5 classes.
Our earlier publication [9] had reported the results for classifying changes on the non-vascular
regions. Here we describe the validation methodology and results for the change analysis on the vascular
regions.
Validation of vascular changes is a challenging problem. To validate the algorithm against a gold
standard, it would be necessary to manually segment the vasculature from each image, and to ensure that
the segmentation is accurate enough that even small vessel-width changes can be detected correctly.
Manual segmentation is extremely cumbersome, especially given the number of images that must be
segmented for a meaningful study. In order to alleviate this burden, we selected regions from 34 image
pairs. The regions were selected from image pairs including cases with non-uniform illumination and
pathological conditions of clinical interest. A total of 54 regions were selected, containing a
representative mix of increase/decrease in vessel width, disappearance of vessel segments, as well as
no-change regions. Figure 5 shows multiple examples of the selected regions and the results detected by
our algorithm.
The selected regions were presented to a retina specialist who was asked to mark the ground-truth for
the regions. The automatically computed results were not presented at this stage to avoid bias. After this,
the original image pairs were again presented to the observer along with the automatically generated
results. This time, he was asked to qualitatively assess whether the change detection/analysis was
acceptable/unacceptable. The summary statistics in Table 3 were then computed based on the assessment
of the observer.
Of the 33 regions which had significant changes, 27 were detected and analyzed correctly. The
algorithm missed 6 of the changes. Of the 21 regions which did not have change, the algorithm identified
18 regions as having no change. A false change was reported in 3 of the no-change regions.
Table 4 summarizes the performance of the integrated method for change analysis from the vascular
and non-vascular regions. For the vascular changes, the algorithm achieved a sensitivity of 82% and a
specificity of 86% on the selected regions. The specificity corresponds to a 9% false positive rate with
respect to the true change regions. The previously reported performance data for the non-vascular changes
[9] had a sensitivity of 97% and a false positive rate of 10%. The performance data for vascular changes
are somewhat lower than for the non-vascular changes. This is attributable largely to the higher
complexity of the image analysis task, and errors introduced due to low image quality that become
pronounced when analyzing changes associated with vessels that are only a few pixels wide.
VI. Discussion and Conclusions
The importance of automated image analysis procedures in general, and change analysis in
particular, stems from the fact that most retina-related clinical diagnostic and treatment procedures
are largely image driven. This is also true for retinal research studies. Low-level processing tools
for image display, enhancement, manual annotation, and manual image analysis are now
commonly integrated into fundus imaging systems. The focus of this paper is on software tools
for higher-level, quantitative, and highly-automated retinal image analysis, with a focus on
change analysis. This task necessarily builds upon much prior work, since accurate registration,
and robust illumination correction are pre-requisites to successful change detection. These
operations, in turn, rely on robust extraction of retinal features such as the vessels, and key
regions such as the optic disk and fovea. Each of these tasks is non-trivial, and large numbers of
publications have been devoted to them. By integrating these elements on a large scale, our work
goes significantly beyond the change detection problem, and addresses the more ambitious task
of classifying changes in some detail without being overly disease specific in the methodology.
The extension of the change analysis framework described in [9] presented in this paper shows that
vascular changes can be analyzed jointly with other non-vascular changes in the retina. Such integration is
novel and valuable. It combines accurate vessel segmentation algorithms, color changes, and confidence
measures to perform model selection for each change region. The proposed algorithms are robust to
intra- and inter-frame illumination variations through the use of a robust illumination correction
algorithm that is specifically engineered for retinal images.
The proposed integrated system is intended for applications such as retinal screening, image-reading
centers, clinical trials scoring, and as an aid in clinical diagnosis, monitoring of disease progression, and
quantitative assessment of treatment efficacy. The algorithms can be incorporated into software packages that can
run on workstations that support retinal fundus cameras. They can also be incorporated into web-based
image analysis services.
VII. Acknowledgments
Various portions of this research were supported by the National Science Foundation Experimental
Partnerships grant EIA-0000417, the Center for Subsurface Sensing and Imaging Systems under the
Engineering Research Centers Program of the National Science Foundation (Award Number EEC-9986821),
and by Rensselaer Polytechnic Institute. We thank colleagues Dr. Chia-Ling Tsai, Michal Sofka, and
Gehua Yang for the tracing, registration, and optic disk location detection algorithms, and Prof.
Richard Radke for general comments on change detection algorithms. The authors would also like to
thank the staff at the Center for Sight, especially photographers Michael Lambert and Gary Howe, for
extensive assistance with image acquisition.
VIII. References
1. T. Y. Wong, R. Klein, A. R. Sharrett, M. I. Schmidt, J. S. Pankow, D. J. Couper, B. E. K. Klein, L. D.
Hubbard, and B. B. Duncan, “Retinal arteriolar narrowing and risk of diabetes mellitus in middle-aged
persons,” Journal of American Medical Association, vol. 287, no. 19, pp. 2528–2533, May 2002.
2. J. Evans, C. Rooney, S. Ashgood, N. Dattan and R. Wormald, “Blindness and partial sight in England
and Wales April 1990–March 1991,” Health Trends, vol. 28, pp. 5-12, 1996.
3. A. Houben, M. Canoy, H. Paling, P. Derhaag, and P. de Leeuw, “Quantitative analysis of retinal vascular
changes in essential and renovascular hypertension,” Journal of Hypertension, vol. 13, pp 1729–1733,
1995.
4. A.V. Stanton, B. Wasan, A. Cerutti, S. Ford, R. Marsh, P.P. Sever, S.A. Thom, and A.D. Hughes,
“Vascular network changes in the retina with age and hypertension,” Journal of Hypertension, vol. 13,
pp. 1724-1728, 1995.
5. L.A. King, A.V. Stanton, P.S. Sever, S. Thom, and A.D. Hughes, “Arteriolar length-diameter (l:d) ratio:
A geometric parameter of the retinal vasculature diagnostic of hypertension,” Journal of Human
Hypertension, vol. 10, pp. 417-418, 1996.
6. H.A.J. Struijker, J.L.M. le Noble, M.W.J. Messing, M.S.P. Huijberts, F.A.C. le Noble, and H. van Essen,
“The microcirculation and hypertension,” Journal of Hypertension, vol. 10, pp. S147-S156, 1992.
7. T. Wong, R. Klein, A. Sharrett, B. Duncan, D. Couper, J. Tielsch, B. Klein, and L. Hubbard, “Retinal
arteriolar narrowing and risk of coronary heart disease in men and women: the atherosclerosis risk in
communities study,” Journal of American Medical Association, vol. 287, no. 9, pp. 1153-1159, March 2002.
8. A. Stanton, B. Wasan, A. Cerutti, S. Ford, R. Marsh, P.P. Sever, S.A. Thom and A.D. Hughes, “Vascular
network changes in the retina with age and hypertension”, Journal of Hypertension, vol. 13, pp 1724–
1728, 1995.
9. H. Narasimha-Iyer, A. Can, B. Roysam, C.V. Stewart, H. L. Tanenbaum, A. Majerovics and H. Singh,
"Automated Analysis of Longitudinal Changes in Color Retinal Fundus Images for Monitoring Diabetic
Retinopathy," Accepted for publication in the IEEE Transactions on Biomedical Engineering, September
2005.
10. A. Hoover, V. Kouznetsova and M. Goldbaum, “Locating blood vessels in retinal images by piecewise
threshold probing of a matched filter response,” IEEE-TMI, vol. 19, no. 3, pp. 203-210, 2000.
11. X. Jiang and D. Mojon, “Adaptive local thresholding by verification-based multithreshold probing with
application to vessel detection in retinal images,” IEEE-PAMI, vol. 25, no. 1, pp. 131-137, Jan. 2003.
12. L. Zhou, M.S. Rzeszotarski, L. J. Singerman and J.M. Chokreff, “The detection and quantification of
retinopathy using digital angiograms,” IEEE-TMI, vol. 13, no. 4, pp. 619-626, 1994.
13. R. Poli and G. Valli, “An algorithm for real-time vessel enhancement and detection,” Comput. Meth.
Programs Biomed, vol. 52, pp.1-22, 1997.
14. L. Gagnon, M. Lalonde, M. Beaulieu and M.C. Boucher, “Procedure to detect anatomical structures in
optical fundus images,” Proc. SPIE vol. 4322, Medical Imaging: Image Processing, pp. 1218-1225, 2001.
15. A. Can, H. Shen, J.N. Turner, H.L. Tanenbaum, and B. Roysam, "Rapid automated tracing and feature
extraction from live high-resolution retinal fundus images using direct exploratory algorithms", IEEE
Trans. Inform. Technol. Biomed, vol. 3, no. 2, pp. 125-138, Jun. 1999.
16. K. Fritzsche, A. Can, H. Shen, C. Tsai, J. Turner, H.L. Tanenbaum, C.V. Stewart and B. Roysam,
“Automated model based segmentation, tracing and analysis of retinal vasculature from digital fundus
images,” Angiography and Plaque Imaging: Advanced Segmentation Techniques, J.S. Suri and S.
Laxminarayan, Eds., CRC Press, 2002.
17. S.R. Aylward and E. Bullitt, “Initialization, noise, singularities, and scale in height ridge traversal for
tubular object centerline extraction,” IEEE-TMI, vol. 21, no. 2, pp. 61-75, 2002.
18. F. Zana and J.C. Klein., “Segmentation of vessel-like patterns using mathematical morphology and
curvature evaluation,” IEEE-TIP, vol.10, no.7, pp. 1010-1019, Jul. 2001.
19. T. Walter, J. C. Klein, P. Massin and A. Erginay, “A contribution of image processing to the diagnosis of
diabetic retinopathy detection of exudates in color fundus images of the human retina,” IEEE-TMI, vol.
21, no.10, pp. 1236-1244, Oct 2002.
20. M. Fontaine, L. Macaire, J. G. Postaire, M. Valette and P. Labalette, “Fundus images segmentation by
unsupervised classification,” Vision Interface, Trois-Rivieres, Canada, May 1999.
21. J. Staal, M.D. Abramoff, M. Niemeijer, M.A. Viergever, B. van Ginneken, “Ridge-based vessel
segmentation in color images of the retina,” IEEE-TMI, vol. 23, no. 4, pp. 501-509, Apr. 2004.
22. V. Mahadevan, H. Narasimha-Iyer, B. Roysam and H. L. Tanenbaum, “Robust model-based
vasculature detection in noisy biomedical images,” IEEE-TITB, vol.8, no. 3, pp 360-376, Sept. 2004
23. J. Berger, T. Patel, D. Shin, J. Piltz and R. Stone, “Computerized Stereo Chronoscopy and Alternation
Flicker to Detect Optic Nerve Head Contour Change,” Ophthalmology, vol. 107, no. 7, pp. 1316-1320,
2000.
24. M. Goldbaum, N. Katz, S. Chaudhuri, M. Nelson and P. Kube, “Digital Image Processing for Ocular
Fundus Images,” Ophthalmology Clinics of North America, vol. 3, no. 3, pp. 447-466, 1990.
25. J.Jagoe, C. Blauth, P. Smith, J. Arnold, K. Taylor and W.R. “Quantification of retinal damage done
during cardiopulmonary bypass: Comparison of computer and human assessment”, IEEE Proceedings of
Communications, Speech and Vision, vol. 137, no. 3, pp. 170-175, 1990.
26. C. Heneghan, J. Flynn, M. O’Keefe and M.Cahill, “Characterization of changes in blood vessel width
and tortuosity in retinopathy of prematurity using image analysis,” Medical Image Analysis, vol.6, no. 4,
pp. 407-429, 2002.
27. L. Hubbard, R. Brothers, W. King, L. Clegg, R. Klein, L. Cooper, R. Sharrett, M. Davis and J. Cai,
“Methods for evaluation of retinal microvascular abnormalities associated with hypertension/sclerosis in
the atherosclerosis risk in communities study,” Ophthalmology, vol. 106, no. 12, pp. 2269-2280, 1999.
28. A. Sharrett, L. Hubbard, L. Cooper, P. Sorlie, R. Brothers, F. Nieto, J. Pinsky, and R. Klein, “Retinal
arteriolar diameters and elevated blood pressure in the atherosclerosis risk in communities study,”
American Journal of Epidemiol., vol. 150, no.3, pp. 263-270, 1999.
29. M.J. Dumskyj, S. J. Aldington, C. J. Dore and K. M. Kohner, “The accurate assessment of changes in
retinal vessel diameter using multiple frame electrocardiograph synchronised fundus photography”,
Current Eye Research, vol. 15, no. 6, pp. 625-32, 1996.
30. K. Fritzsche, “Computer Vision Algorithms for Retinal Vessel Width Change Detection and
Quantification,” Ph.D. Dissertation, Department of Computer Science, Rensselaer Polytechnic Institute,
Troy, NY, 2003.
31. M. J. Cree, J.A. Olson, K.C. McHardy, P.F. Sharp and J.V. Forrester, “The Preprocessing of Retinal
Images for the Detection of Fluorescein Leakage,” Phys. Med. Biol., vol. 44, pp. 293-308, 1999.
32. M.J. Cree, J.A. Olson, K.C. McHardy, P.F. Sharp, J.V. Forrester, “A fully automated comparative
microaneurysm digital detection system,” Eye, vol. 11, pp. 622-628,1998.
33. K.A. Goatman, M. J. Cree, J.A. Olson, J.V. Forrester and P.F. Sharp, “Automated measurement of
microaneurysm turnover,” Investigative Ophthalmology and Visual Science, vol. 44, pp. 5335-5341, 2003.
34. Z.B. Sbeh, L.D. Cohen, G. Mimoun and G. Coscas, “A new approach of geodesic reconstruction for
drusen segmentation in eye fundus images,” IEEE-TMI, vol. 20, no.12, pp.1321-1333, 2001.
35. A. Osareh, M. Mirmehdi, B. Thomas and R. Markham, “Classification and localisation of diabetic-related
eye disease,” in 7th European Conference on Computer Vision (ECCV), A. Heyden, G. Sparr, M.
Nielsen, and P. Johansen, editors, pp. 502-516, Springer LNCS 2353, May 2002.
36. W. Hsu, P. M. D. S. Pallawala, M. L. Lee and K. A. Eong, “The role of domain knowledge in the
detection of retinal hard exudates,” IEEE-CVPR, Hawaii, 2001.
37. R. Phillips, J.V. Forrester and P. F. Sharp, “Automated detection and quantification of retinal exudates,”
Graefe's Archives of Clinical and Experimental Ophthalmology, vol. 231, pp. 90-94, 1993.
38. H. Li and O. Chutatape, “Fundus image features extraction,” Proc. of 22nd IEEE International
Conference of the IEEE Engineering in Medicine and Biology Society, vol.4, pp.3071 -3073, 2000.
39. D. Usher, M. Dumsky, M. Himaga, T. H. Williamson, S. Nussey and J. Boyce, “Automated detection of
diabetic retinopathy in digital retinal images: a tool for diabetic retinopathy screening,” Diabetic
Medicine, vol.21, no. 1, pp. 84, January 2004.
40. T. Walter , J. C. Klein, P. Massin, A. Erginay, “A contribution of image processing to the diagnosis of
diabetic retinopathy detection of exudates in color fundus images of the human retina,” IEEE-TMI, vol.
21,no.10,pp. 1236-1243, October 2002.
41. T. Spencer, J. A. Olson, K. C. McHardy, P. F. Sharp and J. V. Forrester, “An image-processing strategy
for the segmentation of microaneurysms in fluorescein angiograms of the ocular fundus,” Computers and
Biomedical Research , vol. 29, no. 4,pp. 284-302, August 1996.
42. J. H. Hipwell, F. Strachan, J. A. Olson, K. C. McHardy, P. F. Sharp and J. V. Forrester, “Automated
detection of microaneurysms in digital red-free photographs: a diabetic retinopathy screening tool,”
Diabetic Medicine, vol.17, no. 8, pp. 588-594, August 2000.
43. M. Niemeijer, B. van Ginneken and M.D. Abramoff, “Automatic detection and classification of
microaneurysms and small hemorrhages in color fundus photographs,” European Journal of
Ophthalmology, vol. 13, no. 2, pp. 226, 2003.
44. R. T. Smith, T. Nagasaki, J. R. Sparrow, I. Barbazetto, C. C. W. Klaver & J. K. Chan, “A method of
drusen measurement based on the geometry of fundus reflectance,” Biomedical Eng. Online, 2(1):10,
2003.
45. L. Zhou, M. Rzeszotarski, L.J. Singerman and J.M Chokreff, “The detection and quantification of
retinopathy using digital angiograms,” IEEE-TMI,vol.13, no. 4, 619-626, 1994.
46. L. Ballerini, “An automatic system for the analysis of vascular lesions in retinal images,” IEEE Medical
Imaging Conference, Seattle, USA, October 1999.
47. M. Goldbaum, S. Moezzi, A. Taylor, S. Chatterjee, J. Boyd, E. Hunter and R. Jain, “Automated
diagnosis and image understanding with object extraction, object classification, and inferencing in retinal
images,” Proc. IEEE International Conf. on Image Processing, vol. 3 , pp. 695-698, 1996.
48. A. Pinz, S. Bernogger, P. Datlinger and A. Kruger, “Mapping the human retina,” IEEE-TMI, vol. 17, pp.
606-619, 1998.
49. C-L. Tsai, C. V. Stewart, H. L. Tanenbaum and B. Roysam, "Model-based method for improving the
accuracy and repeatability of estimating vascular bifurcations and crossovers from retinal fundus
images," IEEE-TITB, vol. 8, no. 2, pp. 122-130, Jun. 2004.
50. A. Hoover and M. Goldbaum, “Locating the optic nerve in a retinal image using the fuzzy convergence
of the blood vessels,” IEEE-TMI, vol. 22, no. 8, pp. 951-958, 2003.
51. C. V. Stewart, C-L Tsai and B. Roysam, "The dual-bootstrap iterative closest point algorithm with
application to retinal image registration," IEEE-TMI, vol. 22, no. 11, pp. 1379-1394, Nov. 2003.
52. A. Can, C. V. Stewart, B. Roysam, and H. L. Tanenbaum, “A feature-based robust hierarchical algorithm
for registering pairs of images of the curved human retina,” IEEE-PAMI, vol. 24, no. 3, Mar. 2002.
53. T. Aach and A. Kaup, “Bayesian algorithms for adaptive change detection in image sequences using
Markov Random Fields,” Signal Processing: Image Communication, vol. 7, pp. 147–160, Aug. 1995.
54. L. Bruzzone and D.F. Prieto, “Automatic analysis of the difference image for unsupervised change
detection,” IEEE Trans. Geoscience and Remote Sensing, vol. 38, no. 3, pp. 1171-1182, May 2000.
55. T. Kasetkasem and P. Varshney, “An image change detection algorithm based on Markov random field
models,” IEEE Trans. Geosci. Remote Sensing, vol. 40, no. 8, pp. 1815-1823, Aug. 2002.
56. H. Chen, V. Patel, J. Wiek, S. Rassam, and E. Kohner, “Vessel diameter changes during the cardiac
cycle,” Eye, vol. 8, pp. 97-103, 1994.
57. S. Negahdaripour, “Revised Definition of Optical Flow: Integration of Radiometric and Geometric Cues
for Dynamic Scene Analysis,” IEEE-PAMI, vol. 20, no. 9, pp. 961-979, Sept. 1998.
Table 1: Glossary of terms.

F(x, y, λ)            Observed image at wavelength λ
I_i(x, y, λ)          Illumination component at pixel location (x, y) for the i-th image at wavelength λ
R_i(x, y, λ)          Reflectance component at pixel location (x, y) for the i-th image at wavelength λ
R_ratio_i(x, y)       R_i(x, y, λ_green) / R_i(x, y, λ_red)
ΔR_ratio(x, y)        R_ratio_j(x, y) − R_ratio_i(x, y)
σ_n                   Noise standard deviation of ΔR_ratio in the no-change regions
Γ                     Threshold for change detection derived from χ² tables
f_1, f_2 and f_3      Change features for the pixel at location (x, y)
X = {f_1, f_2, f_3}   Feature vector at pixel location (x, y)
μ_i                   Mean vector for class C_i
Σ_i                   Covariance matrix for class C_i
P(C_i)                Prior probability of class C_i
r_mn                  A change region between the trace segments φ_im and φ_jn
M_d                   Model indicating disappearance of a vessel segment
M_a                   Model indicating appearance of a vessel segment
M_n                   Model indicating no change of a vessel segment
C_ru                  Class indicating increase in red color at a pixel
C_rd                  Class indicating decrease in red color at a pixel
C_nc                  Class indicating no change in color at a pixel
I_ix, x ∈ r_mn        Pixels in the i-th image belonging to r_mn
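The quantities in Table 1 suggest a simple per-pixel computation, which can be sketched as follows. The noise level σ_n, the ε guard against division by zero, and the 99th-percentile χ² threshold below are illustrative assumptions, not the paper's calibrated values:

```python
# Minimal sketch of the illumination-invariant change measure of Table 1:
# R_ratio = R_green / R_red per pixel, ΔR_ratio = R_ratio_j − R_ratio_i,
# thresholded against a value Γ taken from chi-square tables.

def ratio(r_green, r_red, eps=1e-6):
    """Reflectance ratio R_green / R_red; eps guards against division by zero."""
    return r_green / (r_red + eps)

def changed(ratio_i, ratio_j, sigma_n, gamma=6.63):
    """True when (ΔR_ratio / σ_n)² exceeds Γ (6.63 ≈ 99th percentile of the
    chi-square distribution with 1 degree of freedom)."""
    delta = ratio_j - ratio_i
    return (delta / sigma_n) ** 2 > gamma

# Toy pixels as (green_i, red_i, green_j, red_j); only the last one changes
# by more than the noise allows.
pixels = [(0.4, 0.8, 0.4, 0.8), (0.4, 0.8, 0.41, 0.8), (0.4, 0.8, 0.7, 0.8)]
flags = [changed(ratio(gi, ri), ratio(gj, rj), sigma_n=0.05)
         for gi, ri, gj, rj in pixels]
print(flags)  # -> [False, False, True]
```

Because both reflectance estimates are divided channel-by-channel before differencing, a smooth multiplicative illumination change cancels out of ΔR_ratio, which is what makes the test illumination-invariant.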
Table 2: Different color changes under consideration, associated color codes, and their clinical
significance. The classes listed are application-specific, and can easily be extended to other clinically
important classes. The change regions corresponding to a particular type of change are outlined in the
color code for that type of change. Figures 5-6 show sample results using the color coding scheme.
[Display color code swatches are rendered graphically in the figures and are not reproduced here.]

Type of Color Change      Significance

Changes on Non-Vascular Regions
Increase in redness       Appearance of bleeding/microaneurysm
Decrease in redness       Disappearance of bleeding/microaneurysm, ischemia
Increase in yellowness    Appearance of exudate/cottonwool spot
Decrease in yellowness    Disappearance of exudates/cottonwool spot

Changes on Vascular Regions
Increase in redness       Appearance of a vessel
Increase in redness       Increase in thickness
Decrease in redness       Disappearance of a vessel
Decrease in redness       Decrease in thickness
No change                 No change (display color code: N/A)
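The extensibility claim in the caption can be illustrated with a minimal sketch: the class table is just a lookup, so new clinically important classes are one-line additions. The keys, strings, and the added "paleness" class below are hypothetical:

```python
# Hypothetical encoding of the non-vascular rows of Table 2 as a lookup
# table mapping (direction, color attribute) to clinical significance.
NON_VASCULAR = {
    ("increase", "redness"): "appearance of bleeding/microaneurysm",
    ("decrease", "redness"): "disappearance of bleeding/microaneurysm, ischemia",
    ("increase", "yellowness"): "appearance of exudate/cottonwool spot",
    ("decrease", "yellowness"): "disappearance of exudates/cottonwool spot",
}

# Extending to another clinically important class is a one-line addition
# (this entry is purely illustrative, not from the paper):
NON_VASCULAR[("increase", "paleness")] = "hypothetical additional class"

print(NON_VASCULAR[("increase", "redness")])
# -> appearance of bleeding/microaneurysm
```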
Table 3: Change analysis results for the vascular regions. Fifty-four regions were selected from thirty-four
pairs of images. The regions were a mix of different types of changes as well as no-change regions. The
results for the regions were qualitatively assessed by an ophthalmologist. The algorithm correctly
identified 27 of the 33 changes, yielding a sensitivity of 82%. Of the 21 no-change regions, 18 were
identified correctly, yielding a specificity of 86%.

Region Type    Number of Regions    Acceptable    Unacceptable
Thickening     16                   13            3
Thinning       9                    7             2
Disappear      8                    7             1
No-Change      21                   18            3
Overall        54                   45            9
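The summary statistics quoted in the caption follow directly from the table's counts:

```python
# Reproducing the Table 3 summary statistics from its raw counts:
# 33 true-change regions (16 thickening + 9 thinning + 8 disappearance),
# of which 27 were analyzed acceptably, and 18 of 21 no-change regions.
true_changes = 16 + 9 + 8          # 33
acceptable_changes = 13 + 7 + 7    # 27
no_change, acceptable_no_change = 21, 18

sensitivity = acceptable_changes / true_changes
specificity = acceptable_no_change / no_change
print(f"sensitivity={sensitivity:.0%} specificity={specificity:.0%}")
# -> sensitivity=82% specificity=86%
```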
Table 4: Overall performance results for change analysis on vascular and non-vascular regions. The
sensitivity value for the vascular regions is computed from Table 3. For the changes in the non-vascular
regions, the performance metrics are based on the results reported in [9] for the same data-set. The false
positive rate is the percentage of false changes detected with respect to the number of true changes.

Change Type                        Sensitivity    False positive rate
Changes in vascular regions        82%            9%
Changes in non-vascular regions    97%            10%
Figure 1: Illustrating automated feature extraction and robust illumination correction. (a) Fundus image
exhibiting symptoms of diabetic retinopathy; (b) Results of automatic vasculature tracing, optic disk
detection, and fovea detection. These features are used to compute exclusion regions for the reflectance
estimation; (c) Illumination component estimate; (d) Reflectance component, color-mapped for
visualization purposes. The reflectance estimate enables illumination-invariant change analysis.
[Figure 2 flow diagram: each color image (I_i and I_j) passes through blood vessel detection and branch
point analysis to produce vessel traces Φ_i and Φ_j; these drive optic disk detection, fovea detection, and
robust illumination correction, yielding reflectance estimates R_i(x, y, λ) and R_j(x, y, λ). Dust is removed
by image ratioing, R_ratio_i(x, y) = R_i(x, y, λ_green) / R_i(x, y, λ_red). Dual-bootstrap ICP registration of
Φ_i and Φ_j yields the transformation T_ij; the resulting change features feed the Bayesian change
analyzer, which outputs the vascular and non-vascular changes.]
Figure 2: A large number of individually sophisticated algorithms from prior work have been integrated
along with novel components to achieve the proposed integrated approach to change analysis. The vessel
segmentations Φi serve as a starting point for the analysis: they provide features for robust registration,
and they enable automated detection of the optic disk and fovea. These
results in combination enable robust illumination correction, and subsequent rejection of dust artifacts.
The change features capture the changes between the two images, and enable comprehensive
classification of vascular and non-vascular changes. The Bayesian change analyzer generates high-level
descriptions of the changes. The boxes shown in gray highlight improvements over prior work.
Figure 3: Illustrating the key processing steps in the framework for a sample region. (a) Original regions
at two different points in time; (b) automated vessel tracing results; (c) illumination-corrected regions;
(d) regions after dust removal; (e) vessel change mask for the regions; (f) color-coded vascular change
analysis results for the region. It can be seen that the algorithm is able to recover from errors in the
tracing. For example, the small vessel that was traced in the first image but not in the second was
correctly classified as a “no-change” region by using the color change information as well.
Figure 4: Illustration of change in width inside a small window Γ. (a) A portion of a segment with
width w. (b) The same portion of the vessel after undergoing an increase in vessel width to (w + δw).
The change regions towards the boundary of the vessel are shown in lighter gray. The sum of the areas of
these change regions is compared to the original area; if it exceeds 5%, we hypothesize that a true change
may be occurring in the region and select the best change model for it.
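Under an idealized rectangular-window geometry (an assumption made here for illustration; the real test operates on the traced vessel boundaries), the 5% area test can be sketched as:

```python
# Sketch of the area test described in Figure 4: within a window along the
# vessel, compare the area of the boundary change strips to the original
# vessel area; exceeding the threshold triggers model selection.

def width_change_suspected(w, delta_w, window_len, threshold=0.05):
    """True when the change-strip area exceeds `threshold` of the original
    vessel area inside the window (idealized rectangular geometry)."""
    original_area = w * window_len
    change_area = abs(delta_w) * window_len  # strip gained/lost at the walls
    return change_area / original_area > threshold

print(width_change_suspected(w=10.0, delta_w=1.0, window_len=20.0))  # -> True
print(width_change_suspected(w=10.0, delta_w=0.3, window_len=20.0))  # -> False
```

In this simplified geometry the window length cancels, so the test reduces to |δw|/w > 5%, i.e. a relative width-change criterion.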
Figure 5A: Examples of diverse types of changes. These examples show close-up views of vessel
segments drawn from the 35 image-pairs used for this study. The first two columns show the two segment
regions under consideration. The third column shows the color coded change analysis results
superimposed on the second image. The color codes are those shown in Table 2. (a) Decrease in width,
(b) Increase in width, (c) Disappearance of a vessel, (d) Decrease in width, (e) Neo-vascularization on the
optic disk.
Figure 5B: More examples of different types of changes. The first two columns show the two segment
regions under consideration. The third column shows the color coded change analysis results
superimposed on the second image. The color codes are those shown in Table 2. (a) Disappearance of a
vessel. This region also illustrates a falsely traced segment. (b) Decrease in width of a vessel. (c) Increase
in width. (d) No-Change.
Figure 5C: More examples of different types of changes. The first two columns show the two segment
regions under consideration. The third column shows the color coded change analysis results
superimposed on the second image. The color codes are those shown in Table 2. (a) No-Change, (b)
Increase in width, (c) Increase in width, (d) Decrease in width.
Figure 6A: Sample integrated color-coded change analysis display for an eye with branch retinal vein
occlusion. (a) Eye with proliferative diabetic retinopathy (PDR). (b) Same eye as in (a), with branch
retinal vein occlusion occurring between patient visits; the eye was also treated with a laser between the
two dates. (c) Analysis of the changes, color coded as described in Table 2.
Figure 6B: Sample change analysis result on an eye with NPDR. (a) Eye with non-proliferative diabetic
retinopathy (NPDR). (b) Same eye as in (a) at a subsequent patient visit 23 months later. (c) Automated
change analysis results; note the appearance of bleeding and exudates. The changes are color coded as
described in Table 2.