In Berthouze, L., Kozima, H., Prince, C. G., Sandini, G., Stojanov, G., Metta, G., and Balkenius, C. (Eds.)
Proceedings of the Fourth International Workshop on Epigenetic Robotics
Lund University Cognitive Studies, 117, ISBN 91-974741-3-4
The Effects on Visual Information in a Robot in
Environments with Oriented Contours
Lars Olsson, Chrystopher L. Nehaniv, Daniel Polani
Adaptive Systems Research Group
Department of Computer Science
University of Hertfordshire
College Lane, Hatfield Herts AL10 9AB
United Kingdom
{L.A.Olsson, C.L.Nehaniv, D.Polani}@herts.ac.uk
Abstract
For several decades experiments have been
performed where animals have been reared in
environments with orientationally restricted
contours. The aim has been to find out what
effects the visual field has on the development
of the visual system in the brain. In this
paper we describe similar experiments performed with a robot acting in an environment
with only vertical contours and compare the
results with the same robot in an ordinary office environment. Using metric projections of
the informational distances between sensors
it is shown that all visual sensors in the same
vertical column are clustered together in the
environment with only vertical contours. We
also show how the informational structure of
the sensors unfolds when the robot moves from
the environment with oriented contours to a
normal environment.
1. Introduction
In nature one finds that most animals are highly
adapted to their specific environment. One example
of this is the wide variety of sensory organs that are
well adapted to the specific animals and their respective environments (Dusenbery, 1992). It is believed
that in many animals the functionality of the sensory
organs is almost completely innate, while in other animals the functionality can be altered by experiences
during the lifetime of the specific animal. This age-old question of nature versus nurture has been studied particularly in the visual system (Callaway, 1998).
Many experiments have been performed with animals whose visual field has somehow been restricted to contours of a certain orientation; see for
example (Wiesel, 1982) and (Callaway, 1998) for an
overview. The results of these experiments are not
completely conclusive, but some experiments show
that animals that have been reared in, for example, an
environment with only vertical contours have more
neurons selective for vertical contours. It has also
been found that ferrets have more neurons selective
for vertical and horizontal contours than for other angles
(Chapman et al., 1996).
Why is it that animals have more neurons selective for vertical and horizontal contours than for contours of other angles? In a very interesting study
by Coppola et al. (Coppola et al., 1998) the distribution of contours of different orientations in both
man-made environments and a natural forest environment was analysed. That they found more vertical and horizontal contours than obliquely angled contours in the man-made environments might not be
a big surprise. But, interestingly enough, they also
found that natural environments contain more vertical and horizontal contours than contours of other
angles. This might explain why visual systems innately have more neurons selective for horizontal and
vertical contours.
Why, then, is this important when studying and
building robots? In contrast with natural systems,
the sensors of artificial systems are often, for practical and historical reasons, seen as something that
is “given” and fixed. But, given that robots are usually limited by computational resources as well as
power consumption, robots need to use their limited
resources efficiently. One way to do this is to try to
extract relevant information from the environment
as early as possible in the processing steps and then
focus the computational resources on these relevant
pieces of information. But how can a robot know
what information is relevant to performing its particular task? One notion of relevant information was
introduced and formalized in (Nehaniv, 1999) and
extended in (Polani et al., 2001) by associating the
relevance of information with its utility to an agent
performing a certain action. Given knowledge of the
most relevant information from a number of sensors,
it might then be possible to adapt the sensoric system to discriminate only between events that are of
use to the system. The layout of sensors can also be
evolved for a certain environment, which has for example been studied in (Olsson et al., 2004a). It is
also possible to use the sensor reconstruction method
first described in (Pierce and Kuipers, 1997) and extended in (Olsson et al., 2004b) to find sensors that
produce the same (redundant) information. If several sensors produce more or less the same information, the information from all but one of them can
be discarded, or some of the sensors can be placed at
different positions.
In this paper we perform an experiment similar to the ones performed with, for example, kittens
(Wiesel, 1982), with a robot acting in an environment
with only vertical contours. Using the sensor reconstruction method we show that in
this kind of environment most visual sensors produce
redundant information and can be discarded without
a loss of information. The results also indicate that a
rich visual environment is necessary if the robot is to learn to
distinguish between contours of different orientations.
We also show an example of unfolding of sensors,
where the robot moves from the environment with
oriented contours to a normal office environment. In
this case the sensors move in the metric projections
from clusters corresponding to the columns of visual
sensors to a layout that reflects the physical
layout of the sensors.
The remainder of this paper is organized as follows. Section 2 contains a brief overview of informational distances between sensors and the sensory reconstruction method. Section 3 contains the results
of the experiments with a robot in an environment
with oriented contours, and also results where the
robot moves from the vertical environment to a normal environment. Finally we summarize the paper
and discuss some potential applications of the results
and possible future directions of the presented work.
2. Information Distances between Sensors
In order to discuss the information distance between sensors a distance metric is needed. A number of different methods can be used for this,
e.g., the Hamming distance and the frequency distribution distance (Pierce and Kuipers, 1997). In
(Olsson et al., 2004b) these distance metrics are
compared with the information metric, which
was defined and proved to be a metric in
(Crutchfield, 1990). The distance between two information sources is there defined, in the sense of classical information theory (Shannon, 1948), in terms of
conditional entropies. To understand what the information metric means we need some definitions from
information theory.
Let \mathcal{X} be the alphabet of values of a discrete random variable (an information source, in this case a sensor) X with probability mass function p(x), where
x \in \mathcal{X}. Then the entropy, or uncertainty, associated with X is

H(X) = -\sum_{x \in \mathcal{X}} p(x) \log_2 p(x)     (1)
and the conditional entropy

H(Y|X) = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log_2 p(y|x)     (2)
is the uncertainty associated with the discrete random variable Y if we know the value of X. In other
words, it tells us how much more information we need to
fully predict Y once we know X.
The mutual information is the information shared
between the two random variables X and Y and is
defined as

I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X).     (3)
To measure the dissimilarity of two information sources, Crutchfield's information distance
(Crutchfield, 1990) can be used. The information
metric is the sum of two conditional entropies, or
formally

d(X, Y) = H(X|Y) + H(Y|X).     (4)
Note that X and Y in our system are information
sources whose H(Y|X) and H(X|Y) are estimated
from the time series of two sensors using (2).
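As a small illustrative example (not taken from our experiments), consider two binary sensors X and Y with joint distribution p(0, 0) = p(1, 1) = 3/8 and p(0, 1) = p(1, 0) = 1/8. Both marginals are uniform, so H(X) = H(Y) = 1 bit, while p(y|x) = (3/4, 1/4) for either value of x, giving

H(Y|X) = H(X|Y) = -\tfrac{3}{4} \log_2 \tfrac{3}{4} - \tfrac{1}{4} \log_2 \tfrac{1}{4} \approx 0.81 \text{ bits.}

The mutual information is therefore I(X; Y) \approx 0.19 bits, and the information distance is d(X, Y) = H(X|Y) + H(Y|X) \approx 1.62 bits.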
It is worth noting that two sensors do not need to
be identical to have a distance of 0.0 in the information metric. What an information distance of 0.0
means is that the sensors are completely correlated.
As an example, consider two sine curves where one
is the additive inverse of the other. Even though
they have different values at almost every point, the
distance is 0.0, since the value of one is completely
predictable from the other. In this case the mutual
information, on the other hand, will be equal to the
entropy of either one of the sensors.
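To make these definitions concrete, the following is a minimal sketch (our own Python illustration, not the authors' code) of estimating the information distance of equation (4) from two sensor time series. The equal-width binning into 8 levels and the sine-wave demonstration are illustrative choices, not parameters taken from the paper.

```python
import numpy as np
from collections import Counter

def information_distance(x, y, bins=8):
    """Estimate d(X, Y) = H(X|Y) + H(Y|X) from two sensor time series."""
    def discretize(v):
        # Equal-width binning of the raw sensor values into a small alphabet.
        v = np.asarray(v, dtype=float)
        edges = np.linspace(v.min(), v.max(), bins + 1)
        return np.digitize(v, edges[1:-1])

    xs, ys = discretize(x), discretize(y)
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)

    def cond_entropy(margin, idx):
        # H(A|B) = -sum_{a,b} p(a,b) log2 p(a|b), with B the conditioning variable.
        h = 0.0
        for pair, c in joint.items():
            p_ab = c / n
            p_b = margin[pair[idx]] / n
            h -= p_ab * np.log2(p_ab / p_b)
        return h

    return cond_entropy(py, 1) + cond_entropy(px, 0)   # H(X|Y) + H(Y|X)

# Two completely correlated sensors: a sine wave and its additive inverse.
t = np.linspace(0.0, 20.0 * np.pi, 2000)
print(information_distance(np.sin(t), -np.sin(t)))            # close to 0.0
print(information_distance(np.sin(t), np.random.rand(2000)))  # clearly larger
```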
In the sensory reconstruction method
(Pierce and Kuipers, 1997, Olsson et al., 2004b)
metric projections (maps) are created that show the
informational relationships between sensors, where
sensors that are informationally related are close
to each other in the metric projection. To create
a metric projection the value of each sensor at
each time step is saved, where in this paper each
sensor is a specific pixel in an image captured by
the robot. A number of frames are captured from
the camera of the robot and each frame is one time
step. The first step in the method is to compute the
distances between each pair of sensors. This is done
by considering the time series of sensor values from
a particular sensor as an information source X.
The distance between two sensors X and Y is then
computed using equation (4). From this two-dimensional
distance matrix a two-dimensional metric projection
can be created using a number of different methods
such as metric scaling (Krzanowski, 1988), Sammon
mapping, and elastic nets (Goodhill et al., 1995),
which position the sensors in the two dimensions
of the metric projection. In our experiments we
have used the relaxation algorithm described in
(Pierce and Kuipers, 1997).
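The sketch below illustrates the two steps just described: building the pairwise distance matrix and relaxing 2D positions towards those distances. It reuses the information_distance helper from the previous listing; the step size and iteration count are arbitrary illustrative values, and this simple relaxation only stands in for the algorithm of Pierce and Kuipers (1997), which we do not reproduce here.

```python
import numpy as np

def metric_projection(sensor_series, distance_fn, iters=2000, step=0.01, seed=0):
    """Place sensors in 2D so that planar distances approximate the
    pairwise informational distances (a simple relaxation scheme)."""
    n = len(sensor_series)
    # Step 1: the n x n matrix of informational distances, equation (4).
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = distance_fn(sensor_series[i], sensor_series[j])

    # Step 2: iteratively move points apart or together to reduce the mismatch
    # between their 2D distances and the target informational distances.
    rng = np.random.default_rng(seed)
    pos = rng.normal(scale=0.1, size=(n, 2))
    for _ in range(iters):
        diff = pos[:, None, :] - pos[None, :, :]        # pairwise displacements
        cur = np.linalg.norm(diff, axis=-1) + 1e-9      # current 2D distances
        err = (D - cur) / cur                           # > 0: too close, push apart
        np.fill_diagonal(err, 0.0)
        pos += step * (err[:, :, None] * diff).sum(axis=1) / n
    return pos

# Usage: coords = metric_projection(list_of_sensor_time_series, information_distance)
```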
3. Experiments and Results
In our experiments we have used a SONY AIBO¹
robot dog. The robot walked around more or less at
random in an ordinary office environment and in one
other environment where the visual field consists of
black vertical lines on a white background. This environment was created with big white sheets of paper with 2 cm wide stripes of black paper glued to
the white paper, see Figure 1. Since the AIBO did
not move its head up and down and the environment
was quite small, most of the collected frames of the
visual field consisted only of the striped walls, but in
some frames part of the uniform floor was also visible. Figure 2 shows an example frame captured by
the AIBO in the environment with vertical lines.
Figure 1: The SONY AIBO robot in its environment with vertical contours.

Figure 2: The environment from the robot's perspective.

To collect data we used the wireless network of
the AIBO to download images from the camera of
the robot, where each image is 88 pixels wide and 72
pixels high. The frame rate was on average 10 frames
per second, with a minimum of 9.7 frames per second
and a maximum of 10.2 frames per second, where
the maximum and minimum values were computed
as averages over five seconds. To make the results of
the sensor reconstruction method easier to interpret
we used a 10 by 10 pixel image from each frame,
taken from the upper left corner of the image. The pixels
of this 10 by 10 image are numbered from 1 to 100,
see Figure 3. To verify that the results were not due
to the fact that we only used a part of the visual field,
we performed experiments with the whole image with
similar results.

  1   2   3   4   5   6   7   8   9  10
 11  12  13  14  15  16  17  18  19  20
 21  22  23  24  25  26  27  28  29  30
 31  32  33  34  35  36  37  38  39  40
 41  42  43  44  45  46  47  48  49  50
 51  52  53  54  55  56  57  58  59  60
 61  62  63  64  65  66  67  68  69  70
 71  72  73  74  75  76  77  78  79  80
 81  82  83  84  85  86  87  88  89  90
 91  92  93  94  95  96  97  98  99 100

Figure 3: The layout of the visual sensors (individual pixels).

¹ AIBO is a registered trademark of SONY Corporation.
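As a small sketch of the preprocessing just described (our own illustration; the exact frame format is an assumption), each captured frame is reduced to the 10 by 10 upper-left crop and its pixels are unrolled row by row into 100 per-sensor time series, numbered as in Figure 3.

```python
import numpy as np

def sensor_time_series(frames, rows=10, cols=10):
    """Turn frames of shape (T, height, width) into one time series per sensor.

    The sensors are the pixels of the rows x cols upper-left crop, numbered
    1..rows*cols row by row as in Figure 3 (sensor k is list index k-1).
    """
    frames = np.asarray(frames, dtype=float)
    crop = frames[:, :rows, :cols]                   # (T, rows, cols)
    flat = crop.reshape(len(frames), rows * cols)    # row-major unrolling
    return [flat[:, k] for k in range(rows * cols)]

# e.g. 500 grey-scale frames of 72 x 88 pixels:
# series = sensor_time_series(np.zeros((500, 72, 88)))
```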
First consider the case when the robot is walking in
the ordinary office. Contours of all possible angles are visible, even though there are probably
more vertical and horizontal contours, as found in
(Coppola et al., 1998). Figure 4 shows the metric
projection of the 100 sensors after 50, 100, 200, 300,
400, and 500 frames have been captured. After only
50 frames the sensors are spread out over the metric
map, but it is hard to distinguish any real order.
After 100 frames the metric projection of the sensors has more order. As more frames are processed
the structure becomes clearer and clearer, and after
500 frames the sensors are ordered in more or less a
square, with sensor 1 in the upper right corner and
sensor 100 in the lower left corner. Thus, the layout
of the sensors shown after 500 frames is the mirror
image of the real layout found in Figure 3. That the
layout appears as a mirror image of the real
sensors in this case is just a coincidence, and in fact
all eight possible orientations are equally likely. This
is because the data contains no directional information and it is therefore impossible to find the correct
physical layout without applying higher-level image
analysis. Therefore only the relative positions can be
computed, see (Olsson et al., 2004b).
Figure 4: Metric projections of the sensors after 50, 100, 200, 300, 400, and 500 frames of visual data in the normal office environment.

Figure 5: Metric projections of the sensors after 50, 100, 200, 300, 400, and 500 frames of visual data in the vertical environment.

Now consider Figure 5, with metric projections for
the visual data from the environment with only vertical contours. After 50 frames no real order can be
found. After 100 frames it is possible to see that
the sensors start to become clustered in a number
of groups. After more frames these clusters become
more and more noticeable, and after 500 frames the
sensors are grouped in a horse-shoe shape of 10 clusters with 10 sensors in each. This is very different from the metric projection formed by the frames
from the office environment. Looking closer at these
clusters we find that each cluster corresponds to one
column in the sensor layout in Figure 3. For example, the upper right end of the horse-shoe contains
the sensors 1, 11, 21, . . . , 91, which are the sensors of
the leftmost column of the layout of visual sensors in
Figure 3. If we follow the horse-shoe shape starting
from sensors 1, 11, 21, . . . , 91, we find that the next
cluster contains all sensors in the second column (those ending
with 2), and so forth. Finally, the leftmost cluster of
the horse-shoe shape contains all sensors of the rightmost column of sensors (10, 20, . . . , 100) in Figure 3.
What is the reason for this clustering of columns
of sensors in the vertical environment? In an environment with only vertical contours and a uniform
background, all sensors with the same horizontal position will at each time step return the same value.
Thus the informational distance between two such
sensors will be close to or equal to 0.0. In the results presented in Figure 5 we note that the distance between
the sensors in the same column is larger than 0.0.
This is due to the fact that this is data from the real world,
and hence the light is different in different parts of
the visual field and not all lines are aligned at exactly
the same angle. The shape of the group of clusters
depends on the distribution of vertical lines within
the environment, the width of the lines, and the
robot's distance to the lines. In this particular experiment the vertical lines were often thinner than
the width of the visual field (10 pixels). Thus we
find that the informational distance between the two
outermost columns is shorter than the distance between, for example, the leftmost column and the middle columns. This is the reason for the horse-shoe
shape of the clusters.
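This column clustering can be reproduced on purely synthetic data. The sketch below (our own illustration, reusing the sensor_time_series and information_distance helpers from the earlier listings) generates frames of idealised vertical stripes at random horizontal offsets; sensors in the same column then have an information distance of exactly 0.0, while sensors in different columns do not.

```python
import numpy as np

# Idealised vertical-contour environment: every frame is a 10x10 crop of
# 2-pixel-wide black/white stripes at a random horizontal offset, so all
# pixels in a column always share that column's value.
rng = np.random.default_rng(1)
frames = []
for _ in range(500):
    offset = rng.integers(0, 4)
    column_values = (((np.arange(10) + offset) // 2) % 2) * 255.0
    frames.append(np.tile(column_values, (10, 1)))   # constant down each column
frames = np.stack(frames)

series = sensor_time_series(frames)                  # helpers from the sketches above
print(information_distance(series[0], series[10]))   # sensors 1 and 11: same column, ~0.0
print(information_distance(series[0], series[5]))    # sensors 1 and 6: different columns, > 0
```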
Now consider a situation where the AIBO after
600 time steps moves from the environment with only
vertical contours to a normal office environment with
contours of all angles. In Figure 6 metric projections
are shown of the visual sensors after the robot has
moved from the vertical environment to the office
environment. After 700 time steps the sensors of
each vertical column are still clustered together, even
though they have started to move apart. The longer
the AIBO has spent in the normal environment, the more the metric projection looks like Figure
4, which shows the layout of sensors in the normal environment. This is an example of unfolding of sensors,
where the sensors separate in the metric projection when they distinguish different information.
Figure 6: Metric projections of the sensors after 700, 1100, 1500, 1900, 2100, and 2300 frames of visual data in the normal environment, where the robot moved after 600 frames from the vertical environment.

4. Conclusions

In this paper we have tried to capture some of the
properties of experiments done with real animals in
environments with oriented contours in experiments
using a real robot. A SONY AIBO robot has moved
around in an environment with only vertical contours. Metric projections have been created from the
visual sensors that show the informational relationships between the sensors. The metric projections
show that the sensors from the same column in the
visual field are clustered together, which means that
the informational distance between them is small,
and would in a simulation without noise be 0.0.
We also showed how the sensors unfold in the metric
projections when the robot moves from the vertical
environment to an office environment with contours
of all angles.
The methods and results in this paper can be applied to robots that adapt and evolve over time,
where the robot is constrained by the available number of sensors and by the processing power available to
process the information from the sensors. First of
all, it is important to note that all robots interact
with some environment and that knowledge about this
environment can usually be utilized to optimize the
robot's sensors. Consider for example the fact that
most mammals have more neurons selective for vertical and horizontal contours than for contours of other
orientations (Callaway, 1998). This can be explained
by the fact that most environments contain more
contours of those orientations (Coppola et al., 1998).
Thus, the mammalian visual system has been adapted
by finding generic properties of many different environments. Similarly, it is possible to adapt the
visual system of robots by studying the properties
of the environment that the robot will interact in.
This can for example be done using the sensory
reconstruction method (Olsson et al., 2004b) or by
evolution of the sensory layouts, see for instance
(Olsson et al., 2004a). This was illustrated in this
paper, where the sensors were clustered in ten groups
and most of the sensors could have been discarded
or moved to other positions on the robot.
There are several issues that need to be considered when designing a robot with sensors that can
adapt and be optimized to a certain environment as
discussed above. First of all there is the obvious
question of how to actually design a sensoric system
that can adapt to a specific environment. Another
important issue is how this specialisation might affect future adaptations to other environments. For
example, consider a robot with a sensoric system optimized for the environment with only vertical contours. How can the robot detect that it has moved
to a more complex forest environment using sensors
adapted to the restricted vertical environment? This
is related to the trade-offs between redundancy and
novelty that any designer of a sensoric system is faced
with (Olsson et al., 2004a). These are all questions
that we intend to investigate in future work.
Acknowledgements
We wish to express our gratitude to the anonymous
reviewers for their helpful and insightful comments.
References
Callaway, E. M. (1998). Visual scenes and cortical neurons: What you see is what you get. Proceedings of the National Academy of Sciences, 95(7):3344–3345.

Chapman, B., Stryker, M. P., and Bonhoeffer, T. (1996). Development of orientation preference maps in ferret primary visual cortex. Journal of Neuroscience, 16(20):6443–6453.

Coppola, D. M., Purves, H. R., McCoy, A. N., and Purves, D. (1998). The distribution of oriented contours in the real world. Proceedings of the National Academy of Sciences, 95:4002–4006.

Crutchfield, J. P. (1990). Information and its metric. In Lam, L. and Morris, H. C., (Eds.), Nonlinear Structures in Physical Systems – Pattern Formation, Chaos and Waves, pages 119–130. Springer Verlag.

Dusenbery, D. B. (1992). Sensory Ecology. W. H. Freeman & Co, first edition.

Goodhill, G., Finch, S., and Sejnowski, T. (1995). Quantifying neighbourhood preservation in topographic mappings. Institute for Neural Computation Technical Report Series, No. INC-9505.

Krzanowski, W. J. (1988). Principles of Multivariate Analysis: A User's Perspective. Clarendon Press, Oxford.

Nehaniv, C. L. (1999). Meaning for observers and agents. In IEEE International Symposium on Intelligent Control / Intelligent Systems and Semiotics, ISIC/ISIS'99, pages 435–440. IEEE Press.

Olsson, L., Nehaniv, C. L., and Polani, D. (in press, 2004a). Information trade-offs and the evolution of sensory layouts. In Ninth International Conference on the Simulation and Synthesis of Living Systems (ALIFE9), September 12–15, 2004, Boston, Massachusetts, USA. MIT Press.

Olsson, L., Nehaniv, C. L., and Polani, D. (in press, 2004b). Sensory channel grouping and structure from uninterpreted sensor data. In 2004 NASA/DoD Conference on Evolvable Hardware, June 24–26, 2004, Seattle, Washington, USA. IEEE Computer Society Press.

Pierce, D. and Kuipers, B. (1997). Map learning with uninterpreted sensors and effectors. Artificial Intelligence, 92:169–229.

Polani, D., Martinetz, T., and Kim, J. (2001). An information-theoretic approach for the quantification of relevance. In Kelemen, J. and Sosík, P., (Eds.), Advances in Artificial Life, 6th European Conference, ECAL 2001, Prague, Czech Republic, September 10–14, 2001, Proceedings, Lecture Notes in Computer Science, pages 704–713. Springer Verlag.

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 623–656.

Wiesel, T. (1982). Postnatal development of the visual cortex and the influence of environment. Nature, 299:583–591.