
IEEE SENSORS JOURNAL, VOL. 12, NO. 2, FEBRUARY 2012
A Multiaperture Bioinspired Sensor With Hyperacuity
Geoffrey P. Luke, Student Member, IEEE, Cameron H. G. Wright, Senior Member, IEEE, and
Steven F. Barrett, Senior Member, IEEE
(Invited Paper)
Abstract—We have developed a multiaperture sensor based
on the visual system of the common housefly (Musca domestica).
The Musca domestica insect has compound eyes, which have been
shown to exhibit the ability to detect object motion much finer than
their photoreceptor spacing suggests, a phenomenon often called
motion hyperacuity. We describe how such motion hyperacuity
can be achieved through a controlled preblurring of an optical
imaging system. We used this method to develop a software model
of a new fly eye sensor and to optimize its motion hyperacuity.
Finally, we compare the completed sensor to previously developed
fly eye sensors. The new design shows a threefold improvement in
signal-to-noise ratio and motion acuity.
Index Terms—Hyperacuity, insect vision, machine vision, multiaperture image sensor, sampling methods, visual system.
I. INTRODUCTION
THE VAST majority of imaging systems are inspired by
human vision. They typically consist of a charge coupled
device (CCD) or a complementary metal oxide semiconductor
(CMOS) array behind a lens (or lens system) in a single-aperture design. Traditional imaging sensors are intuitive for humans
and well-suited for many computer vision tasks. However, these
sensors must transfer an enormous amount of information to a
host processor. This is typically done via a serial connection,
which limits the temporal resolution of the imaging device. Because of these drawbacks, a sensor based on the Musca domestica multiaperture visual system shows great promise for certain
applications, such as edge and motion detection.
Insects have the ability to detect object motion over much
smaller distances than their photoreceptor spacing suggests, a
phenomenon known as motion hyperacuity [1]. In addition, the
fly brain is incredibly simple when compared to that of the
human. However, the fly is able to perform complicated image
Manuscript received October 01, 2010; revised November 19, 2010; accepted November 22, 2010. Date of publication December 13, 2010; date of
current version December 07, 2011. This work was supported in part by the
National Institutes of Health (NIH) under Grant P20 RR015553 and Grant
P20 RR015640-06A1 and in part by DoD Contract FA4819-07-C-0003. The
associate editor coordinating the review of this paper and approving it for
publication was Prof. Raul Martin-Palma.
G. Luke is with the Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712 USA (e-mail: [email protected]).
C. Wright and S. Barrett are with the Department of Electrical and Computer
Engineering, University of Wyoming, Laramie, WY 82071 USA (e-mail: c.h.g.
[email protected]; [email protected]).
Color versions of one or more of the figures in this paper are available online
at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/JSEN.2010.2099112
processing tasks, including object recognition, motion detection, and obstacle avoidance [2]. The fly is able to carry out all
these tasks in real-time. It meets the real-time requirements by
preprocessing the visual data in a parallel and analog fashion
[3].
When we discuss hyperacuity in this paper, we refer to the
concept of motion hyperacuity defined by Nakayama [1]. Motion hyperacuity occurs when a stationary imaging system is
able to detect object motion much finer than the photoreceptor
spacing would suggest. This is not to be confused with other
studies which have measured static hyperacuity [4]–[8] (i.e., a
subpixel resolution of line pairs) or vernier hyperacuity [9] with
a scanning sensor, relying on active visual processes.
This paper presents the design of a new sensor inspired by
the anatomy of the common housefly. We describe how motion
hyperacuity can be achieved in such an imaging system. We
then show how we used optical modeling to optimize the newly
designed sensor. Finally, we compare the performance of the
new sensor to previously designed “fly eye” sensors.
II. BACKGROUND
Traditional CCD or CMOS imaging sensors are inspired by
the single aperture “camera eye” design seen in the anatomy
of the human eye. However, the sensor described in this paper
is inspired by the visual system of the common housefly,
Musca domestica. At first glance, the fly’s eye appears to be
a much poorer imaging system. It suffers from small-aperture
optics, which lead to a broad Gaussian photoreceptor response
[10]–[12] (technically speaking, the response has the form of
an Airy disk, but a Gaussian approximation is commonly used).
This causes the fly to have poor resolution in the traditional
sense (i.e., the ability to resolve line pairs). The fly's spatial
resolution [also known as the minimum angle of resolution
(MAR)] is roughly 2.5° [13], compared to the MAR of the
human eye, which is approximately 1/60° for 20/20 vision [14].
The Musca domestica has two compound eyes, each consisting of about 3000 facets called ommatidia. Each ommatidium
is, in itself, a crude imaging system. It consists of a focusing lens
(the cornea), a cone-shaped lens, and eight receptors (the rhabdomeres, usually designated R1–R8) [10]. Each rhabdomere
has been shown to exhibit an approximately Gaussian response
[15]–[17]. The anatomy of an ommatidium is depicted in Fig. 1.
The Musca domestica isolates the light in each individual
rhabdomere. The signals of the R1–R6 rhabdomeres from spatially adjacent lenses are combined in the L1 and L2 neurons
in the lamina (shown as the shaded parts in Fig. 1). These rhabdomeres share a single optical axis. This slightly shifted field-of-view is how the phenomenon of motion hyperacuity is achieved.
Fig. 1. (a) Cross section of seven adjacent ommatidia, (b) parallel optical axes of rhabdomeres from adjacent ommatidia, (c) resulting overlapping Gaussian photoreceptor responses, and (d) overlapping fields-of-view of seven rhabdomeres. Adapted from [18].
The R7 and R8 rhabdomeres, which are nearly coaxial in the
ommatidium, are routed directly to the medulla and do not contribute to the neural superposition effect [3]. We concentrate
here on the rhabdomeres whose signals are combined in the
same L1 and L2 neurons and share a common optical axis.
Researchers at the University of Wyoming have developed
sensors mimicking certain aspects of the visual system of the
Musca domestica [18]–[22]. The two most recent sensor designs
are depicted in Fig. 2. These designs were both shown to exhibit
motion hyperacuity [18], [21].
A number of sensors based on the compound eye have been
developed at other institutions. Perhaps the earliest was first
imagined by Angel in 1979 [23]. He presented a design of a
new X-ray telescope based on the lobster's multiaperture visual
system. The greatest advantage was that the proposed telescope
had a 180° field-of-view. However, its main drawback was that
the spatial resolution did not improve upon state-of-the-art telescopes [24]. Thus, the telescope was never fully realized.
Since 1980, a number of compound eye sensors have been
developed [11], [25]–[28]. Each of these sensors is based on
the apposition compound eye, rather than a true neural superposition compound eye. The goal of each of these sensors is to
construct an imaging system with a large field-of-view (at the
expense of spatial resolution). While this goal is easily achieved
with an artificial compound eye, the authors fail to address motion hyperacuity. Brückner et al. have developed an apposition
sensor that achieves a type of static hyperacuity (it can localize
points and edges to a fraction of its photoreceptor spacing) [29].
A comprehensive literature search indicates that, while many
people have studied the design and construction of compound
eye sensors, researchers at the University of Wyoming (UW)
are unique in developing compound eye sensors for motion detection and motion hyperacuity using off-the-shelf components.
Fig. 2. (a) Diagram of Riley’s sensor [20] and (b) Benson’s sensor [21].
Note that collaborative efforts between UW and Wilcox et al. at
the U.S. Air Force Academy have also explored other “fly-eye”
sensor construction aspects [30].
III. PREBLURRING AND HYPERACUITY
To simplify the presentation, we consider only the $x$ direction; the $y$ direction would follow the same analysis. In order to evaluate the effects of preblurring, the response of adjacent pixels, spaced $\Delta x_s$ apart, must be observed. Let the input image, $f(x)$, be the equivalent of an impulse (or Dirac delta) function (optically speaking, an infinitesimal point-of-light) centered over the $i$th pixel, $x_i$. The response of the $i$th pixel to such an impulse is $p_i(x_i)$. This assumes perfect optics, so that the incoming point of light is not spread (or smeared) by the point spread function (PSF) of imperfect optics. If the incoming image is spatially shifted by some $\Delta x$, then the response of the $i$th pixel changes by

$$\Delta R_i = p_i(x_i + \Delta x) - p_i(x_i) \qquad (1)$$

It follows that motion can only be detected if

$$|\Delta R_i| = |p_i(x_i + \Delta x) - p_i(x_i)| > \epsilon \qquad (2)$$

where $\epsilon$ depends on the noise floor and contrast limit of the system.

Now, we assume realizable optics where the associated PSF must be taken into account. First, consider the case with minimal blurring. In this case, the width of the PSF due to the optics is less than the width of an individual photodetector. Thus, the shape of $p_i(x)$ will be dominated by the photodetector weighting function (typically considered to be a rect function). This results in large constant areas of $p_i(x)$, as can be seen in Fig. 3. Note that a shift by a small $\Delta x$ results in negligible differences in pixel responses (i.e., $|\Delta R_i| < \epsilon$). In fact, $\Delta x$ must approach the pixel spacing $\Delta x_s$ before any motion is detected. This is similar to human vision, where a shift of approximately one photoreceptor spacing must occur before motion can be detected [1].

Fig. 3. Adjacent pixel responses in an unblurred system [31].

Now, consider the case with a specific amount of preblurring, where the PSF is wider than the width of an individual photodetector (see Fig. 4). In this situation, the shape of $p_i(x)$ is dominated by the shape of the PSF, and it appears to be Gaussian. In theory, since there are no constant regions in $p_i(x)$, all motion is detectable, no matter how small. However, this is not true in practical applications. The ability to detect motion is limited by the noise floor of the system and the contrast limit, represented by $\epsilon$ above.

Fig. 4. Adjacent pixel responses in a blurred system [31].

While such preblurring can be used to achieve motion hyperacuity, more blurring does not necessarily yield better motion acuity. Excessive blurring leads to a more uniform PSF and thus less motion acuity. Therefore, $p_i(x)$ can be tailored to maximize motion acuity. It follows from (2) that this happens when $|\Delta R_i|$ is maximized. Since symmetry of $p_i(x)$ is usually assumed, it suffices to consider the crossover point midway between adjacent pixels, $x = x_i + \Delta x_s/2$. Then, hyperacuity is best achieved when the following is maximized [31]:

$$\left| \frac{d\,p_i(x)}{dx} \right|_{x = x_i + \Delta x_s/2} \qquad (3)$$

IV. SOFTWARE MODEL

Fig. 5. Improved sensor design [33].

We used a software model to design a sensor that maximized motion hyperacuity, using the ZEMAX optical design package [32]. We chose a multiaperture design because it allowed us to separate the optimization into two parameters: the shape of the pixel response $p_i(x)$ and the spatial sampling period $\Delta x_s$. In a single aperture design, $\Delta x_s$ is limited by the size of the photodetector, and motion acuity is determined solely by the degree of preblurring.
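The role of the response shape can be sketched numerically. The following is our own minimal illustration, not code from the paper; the response widths, threshold $\epsilon$, and shift are hypothetical. A rect-shaped (unblurred) pixel response yields no detectable change for a sub-pixel shift, while a Gaussian (preblurred) response does.

```python
import numpy as np

# Hypothetical pixel responses; pixel spacing = 1 (arbitrary units)
rect = lambda x: 1.0 if abs(x) < 0.5 else 0.0          # unblurred: flat photodetector
gauss = lambda x: float(np.exp(-x**2 / (2 * 0.5**2)))  # preblurred: Gaussian-like PSF

def delta_R(p, x_i, dx):
    """Change in pixel response when the impulse shifts by dx."""
    return p(x_i + dx) - p(x_i)

eps = 1e-3   # assumed noise/contrast threshold
x_i = 0.25   # impulse position relative to the pixel center
dx = 0.05    # shift of 1/20 of the pixel spacing

print(abs(delta_R(rect, x_i, dx)) > eps)   # rect: the shift stays on a flat region
print(abs(delta_R(gauss, x_i, dx)) > eps)  # Gaussian: the slope makes it visible
```

With these assumed numbers, the rect response reports no change at all, while the Gaussian response changes by roughly 5% of its peak, well above the assumed threshold.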
Fig. 5 shows the salient aspects of the new sensor design. It
is a nonplanar, multiaperture design. While the design does not
exactly mimic the fly eye (we use only one photodetector per
lens rather than eight), it still boasts many of the same beneficial
characteristics. The design consists of a series of lenses (3 mm
diameter, 2.6 mm focal length), each focusing light onto a single
optical fiber. The multimode fiber has a core diameter of 1 mm,
a jacket diameter of 2.2 mm, and a numerical aperture of 0.5. The
fiber sits at a distance $d$ behind the lens. The inter-lens angle $\theta$
is also shown in the figure. Motion hyperacuity can be optimized for this sensor configuration by simply adjusting the two
parameters, $d$ and $\theta$.
The first task in maximizing the motion acuity of the sensor
was to optimize the response of a single photoreceptor. This was
done by adjusting the distance between the lens and the image
plane ($d$ in Fig. 5). If $d$ is chosen to be the focal length of the
lens, then the light is most focused (i.e., minimal preblurring).
As $d$ deviates from the focal length, the light on the fiber becomes increasingly blurred.

Obviously, the greater a response each photoreceptor has to
incoming light, the easier it is to detect motion. However, optimizing the response is not as simple as maximizing the peak. A
Gaussian shape must be maintained so that there are no "flat"
regions in the response (motion cannot be detected in a region
of constant response). Therefore, a heuristic method was used
to determine the optimal preblurring (the response with the highest
peak that still appeared Gaussian was chosen as optimal). Fig. 6
shows three cases: insufficient preblurring (top dashed line),
excessive preblurring (bottom dot-dashed line), and ideal preblurring (middle solid line). The optimization led to a value of $d$ of … mm.
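The shape test in this heuristic can be made concrete. Below is our own sketch, not the authors' code; the $2^{-1/4}$ ratio test and the tolerance are assumptions. A Gaussian profile falls to $2^{-1/4} \approx 0.84$ of its peak halfway out to the half-power point, so a flat-topped (insufficiently blurred) candidate can be rejected before selecting the highest remaining peak.

```python
import numpy as np

def hwhm(x, y):
    """Half-width at half maximum of a sampled, peaked response."""
    half = y.max() / 2
    above = x[y >= half]
    return (above.max() - above.min()) / 2

def looks_gaussian(x, y, tol=0.05):
    """Crude shape test: for a Gaussian centered at x = 0, the value at
    HWHM/2 is 2**(-1/4) ~ 0.841 of the peak; flat tops give ~1."""
    h = hwhm(x, y)
    ratio = np.interp(h / 2, x, y) / y.max()
    return abs(ratio - 2 ** -0.25) < tol

def pick_defocus(candidates, x):
    """candidates: {d_mm: sampled response}. Return the d with the
    highest peak among the Gaussian-looking responses."""
    ok = [d for d, y in candidates.items() if looks_gaussian(x, y)]
    return max(ok, key=lambda d: candidates[d].max())

x = np.linspace(-2, 2, 401)
gaussian = 0.9 * np.exp(-x**2 / (2 * 0.4**2))               # ideal preblurring
flat_top = np.minimum(1.0, 1.6 * np.exp(-x**2 / (2 * 0.4**2)))  # under-blurred

print(pick_defocus({2.6: flat_top, 3.0: gaussian}, x))  # -> 3.0
```

Note that the flat-topped candidate has the higher peak (1.0 versus 0.9) but is rejected by the shape test, mirroring the trade-off described above.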
Fig. 6. Individual pixel response optimization [33].
Fig. 8. Response comparison between current sensor (dashed line) and new
design (solid line) [33].
Fig. 9. Solidworks model of lens housing [34].
Fig. 7. Adjacent pixel overlap optimization [33].
The optimal response corresponds to a blur radius (measured
at the half-power point by convention) of 1 mm. Therefore, at
the peak response, the blurred light lies exactly on top of the
fiber. This means that preblurring does not necessarily decrease
the peak response of a pixel to light. It has a negligible effect
when applied correctly.
The second task in hyperacuity maximization is adjusting
the amount of overlap between adjacent pixel responses. This
was achieved by adjusting the inter-lens angle $\theta$, which we chose
such that the term in (3) was maximized. Fig. 7 shows the
smoothed photodetector motion response as a function of the
percentage of overlap, where

$$\text{overlap} = \frac{w - \Delta x_s}{w} \times 100\% \qquad (4)$$

and $w$ is the half-power width of the pixel response.

An overlap of approximately 50% yields the greatest motion
acuity. The inter-lens angle, $\theta$, was therefore chosen such
that neighboring pixels had a 50% overlap over a wide range
of distances. The ideal inter-lens angle approaches 7.8° as the
object distance approaches infinity. However, in order to increase motion sensitivity at closer object distances, we chose
a value of $\theta$ = …°. Using this value for the inter-lens angle
maximizes motion detection at an object distance of roughly
1.4 m. In practice, this choice will depend on the application, but the chosen $\theta$ provides significant sensitivity to motion
at all distances.
In order to evaluate and compare performance of the new
sensor, an optical model of Benson's earlier sensor was developed. Fig. 8 shows a comparison between the optical models of
the new sensor (solid line) and Benson’s sensor (dashed line).
The peak pixel response and motion acuity of the new sensor
have improved by a factor of 4.
V. HARDWARE IMPLEMENTATION
We designed the sensor with the dimensions detailed in the
previous section. The first prototype has seven pixels (lens/fiber
pairs) in a nonplanar hexagonal configuration (depicted in
Fig. 9). The field-of-view can easily be expanded by adding
more lenses.
The terminal end of each fiber is connected to an IFD93
photodarlington (shown as the top device in Fig. 10). The
IFD93 was chosen for its high sensitivity and simple interface
design. Current proportional to the amount of sensed light is
generated by this photodarlington. A second IFD93 (shown as
the bottom device in Fig. 10) generates a current proportional to
the amount of unfocused ambient light. This configuration
prevents saturation and allows the sensor to be used in a wide
range of lighting conditions. The ambient light detectors were
connected to polished optical fibers (arranged hexagonally)
without lenses. The ambient light detectors do not mimic the
Musca domestica, which relies on chemical signaling [35] and
adjusts its photoreceptor membrane size [36] to adapt to
varying light conditions. The sensor signal is given by the difference between the focused-light signal $i_f$ and the ambient-light signal $i_a$

$$s = i_f - i_a. \qquad (5)$$
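The intent of (5) can be sketched as follows (our own illustration; the actual circuit realization in Fig. 10 differs, and the variable names are hypothetical): subtracting the unfocused ambient channel leaves a signal that tracks the stimulus regardless of the background light level.

```python
def sensor_signal(v_focused, v_ambient):
    """Ambient-compensated sensor signal: focused channel minus the
    unfocused ambient channel."""
    return v_focused - v_ambient

# The stimulus contribution survives while the ambient level cancels.
stimulus = 0.25
for ambient in (0.5, 1.0, 2.0):
    v_f = ambient + stimulus            # focused channel: ambient + object light
    print(sensor_signal(v_f, ambient))  # -> 0.25 each time
```

This is why the sensor avoids saturation across lighting conditions: the large common ambient term never reaches the downstream amplification.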
The sensor is shown in Fig. 11 along with the designs by Riley
and Benson. A penny is pictured for scale. The optical front-end
of the new design is smaller than 1/4 the size of each of the other
two.
Fig. 10. Amplifier configurations using the IFD93 with ambient light adaptation [34].
Fig. 13. A comparison of the hardware results (solid red lines) versus software
model predictions (dashed black lines) [34].
TABLE I
SIGNAL-TO-NOISE RATIO OF THE THREE SENSORS
Fig. 11. Riley’s sensor (left), Benson’s sensor (middle) and the new design
(right) [34].
Fig. 12. A top-view diagram of the test setup [34].
After the sensor was constructed, a test setup was designed
to compare the sensor’s output to the software model and to
previous fly eye sensors. The test setup, depicted in Fig. 12,
consisted of a stationary sensor placed a distance of … m
from a painted wood dowel on a conveyor belt. White felt was
hung behind the conveyor belt. Numerous dowels were used to
provide stimuli of varying size and contrast. Each dowel rod
was either painted white (for low contrast) or black (for high
contrast) and had diameter …. The dowels were tall enough to
span the vertical FOV of the sensor. The conveyor belt swept the
dowel across the sensor’s horizontal FOV at a constant velocity
of 0.62 m/s. The test setup was housed in a light-insulated room
in order to provide consistent lighting conditions during each
test. The experiment was illuminated by overhead fluorescent
lights. The data was collected using a DATAQ DI-148U data
acquisition system. WINDAQ/XL software was used to record
the data with a sample frequency of 240 Hz.
Fig. 13 compares the normalized output from the sensor
(shown as solid red lines) with the predicted output from the
software model (shown in dashed black lines). The shapes of
the sensor signals are approximately Gaussian and are similar
to the software model prediction. Any oddities and asymmetries
can be attributed to flaws in sensor construction (most likely
related to fiber polishing and alignment).
We compared the new sensor to two earlier designs created
by Riley and Benson. We tested each sensor’s signal-to-noise
ratio (SNR) with a high contrast stimulus. Table I shows the
calculated SNR for each sensor. The new sensor's SNR is
2.78 times that of Benson's sensor. This is not as large an
improvement as the fourfold gain predicted by the software model. However, the amount of gathered light (which the
software model calculated) is not the only determining factor for
the SNR. Noise is inherent in the sensor output due to a variety
of sources, including the photodetectors and the amplification
circuitry. It is likely that this noise, not considered in the software model, is the reason the new sensor produced slightly less
improvement than first predicted. No specific noise mitigation
techniques have yet been applied.
The new sensor clearly outperforms Riley’s sensor. This is
interesting because of the similarity of the designs. This reinforces the idea that, when using a multiaperture design, a much
stronger signal can be achieved without sacrificing the desired
Gaussian shape or overlap.
Fig. 14. Normalized response of Riley's sensor (dot-dashed line), Benson's
sensor (dashed line), and the new sensor (solid line). The x axis represents the
relative object location [34].

Fig. 14 shows each sensor's normalized response (from a
single photoreceptor) to the same high contrast stimulus (a black
bar with diameter … cm). Note the effect of the SNR on
the overall response shape. Riley's sensor's signal (dot-dashed
line) is considerably noisier than the other two, while the new
sensor clearly generates the smoothest signal (solid line).
VI. CONCLUSION
We have designed a prototype of a sensor based on the visual system of the Musca domestica. The multiaperture sensor
showed a significant increase in SNR and motion acuity over
the previous fly eye sensors. The SNR for each sensor was proportional to the angular width of the bar (i.e., larger objects
yielded a stronger signal, as expected). Furthermore, the SNR
for the high contrast stimulus scaled as expected relative to
the SNR of the low contrast stimulus. The increase in SNR results in a higher motion acuity, since the ability to detect motion
in this sensor is limited only by the noise floor and the resolution of the analog-to-digital converter.
This prototype is simply the optical front-end of a complete
fly eye sensor system. Analog circuitry (which mimics the
Musca domestica’s L1, L2, and L4 neurons in the lamina) is
being developed to extract edge and motion information with
little to no processing (i.e., minimal CPU overhead). While
they will never replace traditional imaging devices, multiaperture sensors show great promise to augment these devices by
efficiently providing detailed edge and motion information in
real-time.
ACKNOWLEDGMENT
The authors would like to thank J. Benson for his extensive
work on the fly eye project and for contributing to the development of the signal processing basis for motion hyperacuity.
REFERENCES
[1] K. Nakayama, “Biological image motion processing: A review,” Vision
Research, vol. 25, no. 5, pp. 625–660, 1985.
[2] R. Żbikowski, “Fly like a fly,” IEEE Spectrum, vol. 42, no. 11, pp.
46–51, Nov. 2005.
[3] M. Wilcox and D. Thelen, Jr., “A retina with parallel input and pulsed
output, extracting high-resolution information,” IEEE Trans. Neural
Networks, vol. 10, no. 3, pp. 574–583, May 1999.
[4] P. N. Prokopowicz and P. R. Cooper, “The dynamic retina: Contrast
and motion detection for active vision,” Int. J. Comput. Vision, vol. 16,
pp. 191–204, 1995.
[5] S. Viollet and N. Franceschini, “Biologically-inspired visual scanning
sensor for stabilization and tracking,” in Proc. IEEE/RSJ Int. Conf. Intell. Robot. Syst., 1999, pp. 204–209.
[6] S. Viollet and N. Franceschini, “A hyperacute optical position sensor
based on biomimetic retinal micro-scanning,” Sens. Actuators A: Physical, vol. 160, no. 1–2, pp. 60–68, 2010.
[7] O. Landolt, A. Mitros, and C. Koch, “Visual sensor with resolution
enhancement by mechanical vibrations,” in Proc. Conf. Adv. Res. VLSI,
2001, pp. 249–264.
[8] K. Hoshino, F. Mura, and I. Shimoyama, “A one-chip scanning retina
with an integrated micromechanical scanning actuator,” J. Microelectromech. Syst., vol. 10, no. 4, pp. 492–497, Dec. 2001.
[9] M. H. Hennig and F. Wörgötter, “Eye micro-movements improve stimulus detection beyond the Nyquist limit in the peripheral retina,” in Advances in Neural Information Processing Systems 16, S. Thrun, L. Saul,
and B. Schölkopf, Eds. Cambridge, MA: MIT Press, 2004.
[10] M. F. Land and D. Nilsson, Animal Eyes. Oxford, U.K.: Oxford Univ.
Press, 2002, corrected reprint 2006.
[11] J. S. Sanders and C. E. Halford, “Design and analysis of apposition
compound eye optical sensors,” SPIE Opt. Eng., vol. 34, no. 1, pp.
222–235, Jan. 1995.
[12] J. S. Sanders, “Biomimetic applications of arthropod visual systems,”
ITEA J. Test Evaluation, vol. 29, pp. 375–379, 2008.
[13] M. F. Land, “Visual acuity in insects,” Annu. Rev. Entomology, vol. 42,
no. 1, pp. 147–177, 1997.
[14] G. Osterberg, “Topography of the layer of rods and cones in the human
retina,” Acta Ophthalmol., vol. 6, pp. 1–102, 1935.
[15] K. G. Götz, “Optomotorische untersuchung des visuellen systems
einiger augenmutanten der fruchtfliege drosophila,” Biol. Cybern., vol.
2, pp. 77–92, 1964.
[16] D. G. Stavenga, “Angular and spectral sensitivity of fly photoreceptors.
I. Integrated facet lens and rhabdomere optics,” J. Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology,
vol. 189, pp. 1–17, 2003.
[17] E. Tomberlin, “Musca domestica’s large monopolar cell responses to
visual stimuli,” Master’s thesis, Univ. Wyoming, Laramie, Aug. 2004.
[18] D. T. Riley, W. Harman, S. F. Barrett, and C. H. G. Wright, “Musca
domestica inspired machine vision sensor with hyperacuity,” IOP J.
Bioinspiration Biomimetics, vol. 3, no. 2, p. 026003, Jun. 2008.
[19] J. D. Davis, S. F. Barrett, and C. H. Wright, “Bio-inspired minimal
machine multi-aperture apposition vision system,” ISA Biomed. Sci.
Instrum., vol. 44, pp. 373–379, 2008.
[20] D. T. Riley, S. F. Barrett, M. J. Wilcox, and C. H. G. Wright, “Musca
domestica inspired machine vision system with hyperacuity,” in Proc.
SPIE 12th Int. Symp. Smart Structures and Materials: Smart Sensor
Technol. Meas. Syst. Conf. (SPIE 5758), Mar. 2005, paper 5758-36.
[21] J. B. Benson, C. H. Wright, and S. F. Barrett, “Redesign and construction of an artificial compound eye visual sensor,” ISA Biomed. Sci.
Instrum., vol. 44, pp. 367–372, 2008.
[22] J. D. Davis, S. F. Barrett, C. H. Wright, and M. J. Wilcox, “A bioinspired apposition compound eye machine vision sensor system,” IOP
J. Bioinspiration Biomimetics, vol. 4, no. 4, p. 046002, 2009.
[23] J. Angel, “Lobster eyes as X-ray telescopes,” Astrophysical J., vol. 233,
no. 1, pp. 364–373, 1979.
[24] B. Hartline, “Lobster X-ray telescope envisioned,” Science, vol. 207,
p. 47, 1980.
[25] J. W. Duparré and F. C. Wippermann, “Micro-optical artificial compound eyes,” IOP J. Bioinspiration Biomimetics, vol. 1, no. 1, pp.
R1–R16, Mar. 2006.
[26] S. Ogata, J. Ishida, and T. Sasano, “Optical sensor array in an artificial
compound eye,” Opt. Eng., vol. 33, no. 11, pp. 3649–3655, 1994.
[27] J. Kim, K.-H. Jeong, and L. P. Lee, “Artificial ommatidia by selfaligned microlenses and waveguides,” Opt. Lett., vol. 30, no. 1, pp.
5–7, Jan. 2005.
[28] K.-H. Jeong, J. Kim, and L. P. Lee, “Biologically inspired artificial
compound eyes,” Science, vol. 312, pp. 557–561, Apr. 2006.
[29] A. Brückner, J. Duparré, A. Bräuer, and A. Tünnermann, “Artificial
compound eye applying hyperacuity,” Opt. Express, vol. 14, no. 25,
pp. 12 076–12 084, 2006.
[30] S. F. Barrett, M. J. Wilcox, T. E. Olson, D. J. Pack, and C. H. G. Wright,
“Modelling a parallel L4 neuron array of the fly (Musca domestica)
vision system with a sequential processor,” ISA Biomed. Sci. Instrum.,
vol. 38, pp. 477–482, Apr. 2002.
[31] J. Benson, G. Luke, C. Wright, and S. Barrett, “Pre-blurred spatial
sampling can lead to hyperacuity,” in Proc. 13th IEEE Digital Signal
Process. Workshop, Jan. 2009, pp. 570–575.
[32] ZEMAX: Software for Optical Design, ZEMAX Development Corporation, 2009. [Online]. Available: http://www.zemax.com
[33] G. P. Luke, C. H. G. Wright, and S. F. Barrett, “Software model of an
improved bio-inspired sensor,” ISA Biomed. Sci. Instrum., vol. 45, pp.
179–184, 2009.
[34] G. P. Luke, “Design and testing of a multi-aperture bio-inspired
sensor,” Master’s thesis, Univ. Wyoming, Laramie, Jul. 2009.
[35] S. B. Laughlin and R. C. Hardie, “Common strategies for light adaptation in the peripheral visual systems of fly and dragonfly,” J. Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral
Physiology, vol. 128, pp. 319–340, 1978.
[36] B. G. Burton, “Long-term light adaptation in photoreceptors of the
housefly, Musca domestica,” J. Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology, vol. 188, pp.
527–538, 2002.
Cameron H. G. Wright (S’80–M’83–SM’93)
received the B.S.E.E. degree (summa cum laude)
from Louisiana Tech University, Ruston, in 1983, the
M.S.E.E. from Purdue University, West Lafayette,
IN, in 1988, and the Ph.D. from the University of
Texas at Austin in 1996.
He is an Associate Professor in the Department of
Electrical and Computer Engineering, University of
Wyoming, Laramie. He was previously a Professor
and Deputy Department Head in the Department
of Electrical Engineering, United States Air Force
Academy, and served as an R&D Engineering Officer in the U.S. Air Force
for over 20 years. His research interests include signal and image processing,
real-time embedded computer systems, biomedical instrumentation, and
engineering education.
Dr. Wright is a member of Tau Beta Pi, Eta Kappa Nu, Phi Kappa Phi, and
Omicron Delta Kappa, as well as a member of the American Society of Engineering Education (ASEE), the National Society of Professional Engineers
(NSPE), the International Society for Optical Engineers (SPIE), the Biomedical
Engineering Society (BMES), and is on the Board of Directors for the Rocky
Mountain Bioengineering Symposium. He holds FCC licenses for both commercial radio/TV/radar and for amateur radio, and is a registered Professional
Engineer in California.
Geoffrey P. Luke (S’09) is a graduate student at the
University of Texas at Austin. He received the B.S.
degree (magna cum laude) in computer engineering
and mathematics and the M.S.E.E. degree from the
University of Wyoming, Laramie, in 2007 and 2009,
respectively.
He is currently researching photoacoustic imaging
and its applications to cancer detection and photothermal therapy.
Mr. Luke is a member of the International Society
for Optical Engineers (SPIE), Tau Beta Pi, and the
Biomedical Engineering Society (BMES).
Steven F. Barrett (M’86–SM’97) received the
B.S.E.E.T. degree from the University of Nebraska,
Omaha, in 1979, the M.E.E.E. degree from the
University of Idaho, Moscow in 1986, and the Ph.D.
degree from the University of Texas at Austin in
1993.
He is an Associate Professor in the Department
of Electrical and Computer Engineering, University
of Wyoming, Laramie. He currently serves as the
Associate Dean of Academic Programs, College
of Engineering and Applied Science, University of
Wyoming. His research interests include digital and analog image processing,
computer-assisted laser surgery, and embedded controller systems, and he is
the founder of WYCO Embedded Systems, LLC.
Dr. Barrett is a member of the American Society of Engineering Education
(ASEE) and Tau Beta Pi. He is on the Board of Directors for the Rocky Mountain Bioengineering Symposium. He is a registered Professional Engineer in
Wyoming and Colorado.