Tracking the Motion of Box Jellyfish

Tobias Kjellberg, Magnus Oskarsson, Tobias Palmér, Kalle Åström
Centre for Mathematical Sciences, Lund University, Sweden
{magnuso, tobiasp, kalle}@maths.lth.se
Fig. 1. Left: The box jellyfish tripedalia cystophora is only a couple of mm large and almost completely transparent. Right: A close-up of the rhopalia from one recorded frame in the experimental setup.
Abstract—In this paper we investigate a system for tracking the motion of the box jellyfish tripedalia cystophora in a special test setup. The goal is to measure the motor response of the animal given certain visual stimuli. The approach is based on tracking special sensory structures – the rhopalia – of the box jellyfish from high-speed video sequences. We have focused on a real-time system with simple building blocks. However, using a combination of simple intensity based detection and model based tracking we achieve promising tracking results with up to 95% accuracy.
Fig. 2. Example input frames from a number of different sequences. Notice the high variance in lighting conditions. In some frames the rhopalia are barely discernible and in many frames there are structures that have an appearance very similar to the rhopalia.
I. INTRODUCTION
Box jellyfish, or cubomedusae, have very special visual systems [1], [2]. The visual system is based on four identical sensory structures, which are called rhopalia. Each rhopalium consists of six different eyes: the upper and lower lens eyes, the pit eyes, and the slit eyes [3]–[8]. The lens eyes have image-forming optics and resemble vertebrate and cephalopod eyes [9]–[11].
The role of vision in box jellyfish is known to involve
phototaxis, obstacle avoidance, control of swim-pulse rate and
advanced visually guided behaviour [12]–[14]. Most known
species of box jellyfish are found in shallow water habitats
where obstacles are abundant [15]. Medusae of the studied species, T. cystophora, live between the prop roots in
Caribbean mangrove swamps [16], [17]. They stay close to
the surface [16] where they prey on a phototactic copepod that
gathers in high densities in the light shafts formed by openings
in the mangrove canopy. The medusae are not found in the
open lagoons, where they risk starvation [12]. The overall goal
of this project is to learn more about the visual system and
the neural processes involved. One interesting problem is the
study of the connection between visual stimuli presented to
the animals and the motor response of the animal. However, tracking free-swimming jellyfish poses a very demanding problem. Instead we look at a special experimental setup where the animals are tethered whilst they are subjected to different light stimuli. The goal is then to track how the
animals direct themselves, i.e. how they would like to move. In
order to do this, an initial goal which we discuss in this paper
is how to track the four rhopalia. These appear as four dark
discs, situated on the perimeter of the bell, see figure 2. There
have been a number of previous studies on motor response
from controlled visual stimuli, see e.g. [18], [19].
II. EXPERIMENTAL SETUP
In this study we used in total 33 animals, with sizes ranging
from 0.43 to 0.89 cm. The animals were tethered by the top
of the bell during the experiments, using a glass pipette with
gentle suction, and placed in a Plexiglas tank with inside
dimensions of 5 × 5 × 5 cm. The vertical walls of the tank
were covered with diffusing paper and a neutral density filter.
Each vertical wall was illuminated from the outside by four
blue-green LEDs. The diffuser was used to make a plane light
source, while the neutral density filter was used to increase
the contrast between lit and dark panels and switching one
or more panels off was used as the behavioural trigger. The
colour of the LEDs matched the maximum spectral sensitivity
of the animals and had a peak emission at 500 nm. During
the experiments a box was placed over the set-up in order to
eliminate visual cues coming from outside. Image sequences
Fig. 5. A typical detection result is shown. Since there are many rhopalia-like structures in the images we get a large number of false positives, but these will be eliminated in the following clustering and tracking steps.
Fig. 3. An overview of the system. For each input frame I_i we run the detection algorithm. This produces a number of tentative points X_i^d. This set of points is then sent to the clustering algorithm, which then outputs a smaller number of refined positions X_i^c. These points are fed into the tracking algorithm, alongside the four point positions from the previous frame, X_{i-1}^t. The final output is then the four detected points X_i^t.
were recorded with a high-speed camera operated at 150
frames per second. The dataset consists of 15 video sequences,
each with around 100 greyscale frames. Each greyscale frame
has a resolution of 800 × 864 pixels. Depending on how
the jellyfish is set up, the light and shadows form differently
which will make the video sequences different from each other,
see figure 2. Even though great care has been taken to minimize artefacts in the film sequences, the differences between them can be quite large. Some of the film sequences are brighter, making it easier to find the rhopalia, while some are darker, making it hard to distinguish the rhopalia from the background. The tethering of the animal is also visible in all sequences. This tether-shadow causes problems when the jellyfish is contracting and the rhopalia are moving over the tether-shadow. Since the box jellyfish is moving in every
video sequence some parts of the jellyfish are moving in and
out of focus. The physical nature of the jellyfish, i.e. being
transparent, also affects the appearance and causes refraction
on the surroundings.
The rhopalia in each frame of each video sequence have been manually annotated in order to perform evaluation and testing of algorithms.
III. SYSTEM OVERVIEW

In this section we will describe our system. Since the focus is on real-time, the chosen building blocks are quite simple in nature, especially the first steps. In figure 3 an overview of the system is shown. We have divided it into three parts. For every frame i, a first detection step – which only uses the local intensity distribution around a point – produces a large number of detections, X_i^d. These points are then clustered into a set of points X_i^c, in order to remove multiple detections, as well as for improved positional accuracy in the detected points. Finally these clustered positions are sent to the tracking step, which also gets as input the previous frame's four detected points, X_{i-1}^t. The output is the four detected points X_i^t. In the following subsections we will describe the different steps in more detail.

A. Detection

The rhopalia appear as dark discs on a brighter background, and are quite consistent in size and appearance in the images. See figure 4 for a close-up view. For the detection we have tested a number of template based approaches. The template is based on the assumption that we have a number of pixels near the rhopalia that are dark. Further outside the rhopalia we should have pixels that are brighter. Figure 4 shows some example templates that we have tried. For speed we have adopted quite sparse templates. The top row shows pixels that should be inside the rhopalia, and hence should be darker. The bottom row shows the pixels that are assumed to be outside the rhopalia. For each point X_i(j) in the input image we can then define a number of inside and outside points, Ω_in(j) and Ω_out(j). Examples of these point sets can be seen in figure 4. We have looked at two types of measures, one absolute and one relative. For the absolute measure we define a threshold for the inner pixels, t_in, and one for the outer pixels, t_out. We then count the number of inside and outside pixels that fulfil the constraints, i.e.

    N_abs(j) = Σ Γ(Ω_in(j) ≤ t_in) + Σ Γ(Ω_out(j) ≥ t_out),    (1)

where Γ(x) = 1 if x is true and zero otherwise, and

    X_i^d = {X_i(j) | N_abs(j) > N_det},    (2)

where N_det is some bound.

For the relative measure we randomly compare n inside and outside pixels, and count how many of the inside pixels are darker than the outside pixels. So if we let R(Ω) denote a function that randomly chooses a point from the set Ω, we have

    N_rel(j) = Σ_{k=1}^{n} Γ(R(Ω_in(j)) < R(Ω_out(j))),    (3)

and

    X_i^d = {X_i(j) | N_rel(j) > N_det}.    (4)

We have evaluated our whole system in order to find templates that generate enough possible detections, i.e. that in most cases at least generate the four correct points, but that do not generate excessive amounts of false positives. In figure 5 a typical detection result is shown. Since there are many rhopalia-like structures in the images we get a large number of false positives, but these will be eliminated in the following clustering and tracking steps.
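As an illustration of the absolute measure in equations (1)–(2), the following sketch counts, for every pixel, how many template offsets satisfy the dark-inside and bright-outside constraints, and keeps the points whose count exceeds N_det. The offsets and threshold values here are illustrative assumptions, not the actual templates of figure 4.

```python
import numpy as np

def detect_candidates(frame, inside, outside, t_in, t_out, n_det):
    """Absolute template measure: N_abs(j) is the number of inside offsets
    with intensity <= t_in plus the number of outside offsets with
    intensity >= t_out; points with N_abs(j) > n_det are kept."""
    count = np.zeros(frame.shape, dtype=int)
    # One image shift per template offset instead of a loop over pixels.
    for dy, dx in inside:
        count += np.roll(frame, (-dy, -dx), axis=(0, 1)) <= t_in
    for dy, dx in outside:
        count += np.roll(frame, (-dy, -dx), axis=(0, 1)) >= t_out
    ys, xs = np.nonzero(count > n_det)
    return np.column_stack([ys, xs])   # tentative points X_i^d
```

Since every comparison is vectorised, a sparse template with a dozen offsets costs only a dozen passes over the 800 × 864 frame, which is compatible with the real-time goal.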
Fig. 4. A number of templates for the detection step. Top row shows pixels that should be inside the rhopalia, and hence should be darker. Bottom row shows the pixels that are assumed to be outside the rhopalia.

B. Clustering

We take a scale space approach for clustering the detections; by smoothing the input image I_i isotropically with a Gaussian kernel we get a low scale version I_sm. We then find all local minima X_loc of I_sm. The reason for this is the appearance of the rhopalia as dark spots in the images. We then calculate how many detections we have within a vicinity of each X_loc,

    N_loc(j) = Σ_{k=1}^{N_d} Γ(||X_loc(j) − X^d(k)||_2 < ε_cluster),    (5)

and if there are a minimum number N_min of detections, then we add this local minimum to our clustered points X_i^c,

    X_i^c = {X_loc(j) | N_loc(j) ≥ N_min}.    (6)

This gives a fast, accurate and quite robust way of clustering the detections. See figure 6b for an example result of the clustering.
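The clustering step of equations (5)–(6) can be sketched as below: smooth the frame, take the local minima of the smoothed image, and keep those minima with at least N_min detections within ε_cluster pixels. The smoothing here is a plain separable Gaussian, and the parameter values are illustrative rather than the ones used in our experiments.

```python
import numpy as np

def cluster_detections(frame, detections, sigma=3.0, eps_cluster=5.0, n_min=3):
    """Cluster detections at local minima of a Gaussian-smoothed frame."""
    # Separable Gaussian smoothing: the low scale version I_sm of the frame.
    r = int(3 * sigma)
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    sm = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, frame)
    sm = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, sm)
    # Local minima: pixels not larger than any of their 8 neighbours.
    pad = np.pad(sm, 1, mode="edge")
    h, w = sm.shape
    neigh = np.stack([pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)])
    minima = np.argwhere(sm <= neigh.min(axis=0))
    # Equations (5)-(6): keep minima with enough detections in their vicinity.
    clusters = [p for p in minima
                if np.sum(np.linalg.norm(detections - p, axis=1) < eps_cluster) >= n_min]
    return np.array(clusters)
```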
C. Tracking

For the tracking step we have looked at a number of simple algorithms. The input consists of the four points from the previous frame, X_{i-1}^t, and a number of possible candidate points X_i^c. In our model we could also look at the scenario where the general statistical distribution of the four points is given in some form. Arguably the most simple tracking is just choosing as new points the four closest points from the candidate points to the points from the previous frame, i.e.

    X_i^t(j) = arg min_{X_i^c(k)} ||X_{i-1}^t(j) − X_i^c(k)||_2,   j = 1, . . . , 4.    (7)

We have also looked at learning an affine shape model of the four points. In this case we can fix a canonical coordinate system by placing the first three points at (0, 0), (0, 1) and (1, 0). The final point will then be placed at some point X_a. Using a large number of ground truth positions we can estimate the mean X_m and covariance matrix Σ for X_a. This gives us a way of finding the four points that statistically most resemble a four-point configuration, given affine deformations. For all possible subsets of four points Y_i of X_i^d we change coordinate system so that the first three points are at (0, 0), (0, 1) and (1, 0) and the fourth point at Y_a. If there are n such subsets we find the best subset by

    k_opt = arg min_k (Y_a(k) − X_m)^T Σ^{-1} (Y_a(k) − X_m),    (8)

and

    X_i^t = Y_i(k_opt).    (9)

For both equation (7) and (8) we can choose X_i^t = X_{i-1}^t if the optimal value is too large, i.e. if the best clustered points are too far away from the previous frame's points we choose the previous frame's point positions for the new frame. See figure 6c for an example result of the tracking.

We have not yet focused on the tracking, and more complex motion models are of course possible and quite easy to implement into the framework, such as e.g. Kalman filters [20] or Particle filters [21].
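The nearest-neighbour tracker of equation (7), together with the fall-back to the previous frame's positions, can be sketched as follows; the distance threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def track_nearest(prev_pts, candidates, max_dist=20.0):
    """Equation (7): each previous point X_{i-1}^t(j) picks its nearest
    candidate cluster X_i^c(k); if even the nearest candidate is further
    away than max_dist, the previous position is kept (the fall-back)."""
    new_pts = prev_pts.astype(float).copy()
    for j, p in enumerate(prev_pts):
        d = np.linalg.norm(candidates - p, axis=1)
        k = int(np.argmin(d))
        if d[k] <= max_dist:
            new_pts[j] = candidates[k]
    return new_pts
```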
IV. EXPERIMENTAL RESULTS

In total we have in our test setup 1469 frames, and we have manually marked the true coordinates of the rhopalia in each frame. We have tested our system on the video sequences and compared it to ground truth in the following way. For the detection accuracy we count for each of the four rhopalia in the images if there are N or more detections within a circle with radius of 10 pixels around the true coordinate. We have used N = 10 and N = 20. For the cluster accuracy we have counted the percentage of rhopalia that have a cluster within a circle of 10 pixels around the correct coordinate, and likewise for the tracking accuracy we count the percentage of rhopalia with a tracked point within 10 pixels. We do this for all the 1469 frames, with four rhopalia in each frame. We have mainly tested different detection parameters and settings. In figure 7 the resulting accuracy percentages can be seen.
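The cluster and tracked accuracy measures can be sketched as follows: the percentage of ground-truth rhopalia with a system output point within the 10 pixel radius (the detection accuracy additionally requires N or more detections inside the circle). The function and argument names are illustrative.

```python
import numpy as np

def point_accuracy(gt_frames, out_frames, radius=10.0):
    """Percentage of ground-truth rhopalia that have an output point
    within `radius` pixels (used for cluster and tracked accuracy).
    gt_frames / out_frames: per-frame arrays of (y, x) coordinates."""
    hits, total = 0, 0
    for gt, out in zip(gt_frames, out_frames):
        for p in gt:
            total += 1
            if np.min(np.linalg.norm(out - p, axis=1)) <= radius:
                hits += 1
    return 100.0 * hits / total
```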
We see that the best performing systems have a tracking accuracy of 95%.

Fig. 6. The figure shows the output of the three steps of our system for an example input image. Left, all the detected points are shown in red. These are fed into the clustering step, which outputs the resulting clustered yellow points in the middle. Finally, the tracking step outputs the four points to the right, depicted in green.

Fig. 7. Evaluation of a number of different detection parameters and settings. The accuracy for the different steps in the system is shown. The best performing systems have a tracking accuracy of 95%.

V. CONCLUSIONS

We have in this paper investigated how a system for detection of the special eyes – the rhopalia – of box jellyfish can be constructed. We have shown that using a low-level detection method in combination with clustering and tracking we can get very good performance on a varying dataset. The system can run in real-time. The basic idea is that the system should be used in order to learn more about the visual system and neural processes of box jellyfish. The next step would be to, from the tracking of the rhopalia, measure the motion of the whole bell of the jellyfish in order to measure the motor response of the animal given certain visual stimuli.
REFERENCES
[1]
E. W. Berger, “The histological structure of the eyes of cubomedusae,”
Journal of Comparative Neurology, vol. 8, no. 3, pp. 223–230, 1898.
[2] C. Claus, “Untersuchungen uber charybdea marsupialis,” Arb. Zool. Inst.
Wien, Hft, vol. 2, pp. 221–276, 1878.
[3] J. S. Pearse and V. B. Pearse, “Vision of cubomedusan jellyfishes.”
Science (New York, NY), vol. 199, no. 4327, pp. 458–458, 1978.
[4] G. Laska and M. Hündgen, "Morphologie und ultrastruktur der lichtsinnesorgane von tripedalia cystophora conant (cnidaria, cubozoa)," Zool Jb Anat, vol. 108, pp. 107–123, 1982.
[5] A. Garm, F. Andersson, and D.-E. Nilsson, "Unique structure and optics of the lesser eyes of the box jellyfish tripedalia cystophora," Vision research, vol. 48, no. 8, pp. 1061–1073, 2008.
[6] J. Piatigorsky and Z. Kozmik, "Cubozoan jellyfish: an evo/devo model for eyes and other sensory systems," Int J Dev Biol, vol. 48, no. 8-9, pp. 719–729, 2004.
[7] Z. Kozmik, S. K. Swamynathan, J. Ruzickova, K. Jonasova, V. Paces,
C. Vlcek, and J. Piatigorsky, “Cubozoan crystallins: evidence for
convergent evolution of pax regulatory sequences,” Evolution & development, vol. 10, no. 1, pp. 52–61, 2008.
[8] F. S. Conant, The Cubomedusæ: a memorial volume.
The Johns
Hopkins press, 1898, vol. 4, no. 1.
[9] D.-E. Nilsson, L. Gislén, M. M. Coates, C. Skogh, and A. Garm,
“Advanced optics in a jellyfish eye,” Nature, vol. 435, no. 7039, pp.
201–205, 2005.
[10] V. J. Martin, “Photoreceptors of cubozoan jellyfish,” in Coelenterate
Biology 2003. Springer, 2004, pp. 135–144.
[11] M. Koyanagi, K. Takano, H. Tsukamoto, K. Ohtsu, F. Tokunaga, and
A. Terakita, “Jellyfish vision starts with camp signaling mediated by
opsin-gs cascade,” Proceedings of the National Academy of Sciences,
vol. 105, no. 40, pp. 15 576–15 580, 2008.
[12] E. Buskey, “Behavioral adaptations of the cubozoan medusa tripedalia
cystophora for feeding on copepod (dioithona oculata) swarms,” Marine
Biology, vol. 142, no. 2, pp. 225–232, 2003.
[13] A. Garm and S. Mori, “Multiple photoreceptor systems control the
swim pacemaker activity in box jellyfish,” The Journal of experimental
biology, vol. 212, no. 24, pp. 3951–3960, 2009.
[14] A. Garm, M. Oskarsson, and D.-E. Nilsson, “Box jellyfish use terrestrial
visual cues for navigation,” Current Biology, vol. 21, no. 9, pp. 798–
803, 2011.
[15] M. M. Coates, “Visual ecology and functional morphology of cubozoa
(cnidaria),” Integrative and comparative biology, vol. 43, no. 4, pp.
542–548, 2003.
[16] S. E. Stewart, “Field behavior of tripedalia cystophora (class cubozoa),”
Marine & Freshwater Behaviour & Phy, vol. 27, no. 2-3, pp. 175–188,
1996.
[17] B. Werner, C. E. Cutress, and J. P. Studebaker, "Life cycle of tripedalia cystophora conant (cubomedusae)," 1971.
[18] R. Petie, A. Garm, and D.-E. Nilsson, “Velarium control and visual
steering in box jellyfish,” Journal of Comparative Physiology A, vol.
199, no. 4, pp. 315–324, 2013.
[19] ——, “Contrast and rate of light intensity decrease control directional
swimming in the box jellyfish tripedalia cystophora (cnidaria, cubomedusae),” Hydrobiologia, vol. 703, no. 1, pp. 69–77, 2013.
[20] G. Welch and G. Bishop, “An introduction to the kalman filter,” 1995.
[21] N. J. Gordon, D. J. Salmond, and A. F. Smith, “Novel approach to
nonlinear/non-gaussian bayesian state estimation,” in IEE Proceedings
F (Radar and Signal Processing), vol. 140, no. 2. IET, 1993, pp.
107–113.