
Umeå University
Department of Psychology
Department of Computing Science
Cognitive Science Programme
Visual Cognition, spring 2008
Project Report
Khepera
Course: Visual Cognition
Date: 2008-05-06
Group members:
Emma Haugen
Helena Lewandowska
Susanna Lyxell
Stefan Söderlind
Supervisor: Fabien Lagrifoulle
INTRODUCTION
Visual perception and intelligent behaviour
In this report we look at intelligent behaviour guided by visual perception, the most studied type of perception. The purpose of this project is to program a mobile robot with a certain set of perceptual abilities so that it can accomplish some sort of task, or tasks, in a specific environment. We have based our algorithm, and our thinking about how to obtain intelligent behaviour, on the article Intelligence without representation by Brooks (1991). Brooks argues that an explicit central representation in simple intelligent systems does not serve its purpose but rather gets in the way. The software implemented in any form of robot (creature) should instead be built up from different layers of behaviour-producing activities that are each individually connected to action. In this way of thinking there is no central control or representation, but rather a number of specialized problem-solving systems that all work in parallel and independently. So not only is there no central representation, there is no central system whatsoever. The problem solving jumps between the different layers (each of which has a direct effect on the behaviour of the creature) depending on the environmental situation. In this sense it is really the environment itself that guides the behaviour of the creature; the creature shows a 'reactive behaviour' towards its environment.
Brooks also defines a certain set of requirements for these creatures:
“• A Creature must cope appropriately and in a timely
fashion with changes in its dynamic environment.
• A Creature should be robust with respect to its
environment; minor changes in the properties of
the world should not lead to total collapse of the
Creature's behavior; rather one should expect only a
gradual change in capabilities of the Creature as the
environment changes more and more.
• A Creature should be able to maintain multiple
goals and, depending on the circumstances it finds
itself in, change which particular goals it is
actively pursuing; thus it can both adapt to
surroundings and capitalize on fortuitous
circumstances.
• A Creature should do something in the world; it
should have some purpose in being.”
Our workflow will therefore be guided by the principle of different layers, where we define some behaviour and test it thoroughly in the environment before we move on to the next layer. This will be done with Brooks's set of creature requirements in mind. Brooks also points out the importance of testing the creatures in the real world "that we humans inhabit" and warns that one should "[…] not fall into the temptation to test them in a simplified world". For this project, however, we will limit the environment drastically in order to simplify the acquisition of an intelligent behaviour.
The robot used in this project is called Khepera. It is essentially a two-wheeled creature with a camera that can pick up a grey-scale representation of its surroundings, and eight proximity sensors (six in the front and two in the back) that pick up the distance, within a certain range, to any object in front of each sensor. Since our robot happened to have a non-functioning camera we could only use the proximity sensors to guide the robot in its environment. Since this project is about visual cognition, the proximity sensors will be regarded as the 'sight' of the robot. We therefore have an eight-eyed, nearsighted robot to work with. This should not be regarded as sight in any normal sense, but rather as 'seeing only distances to objects'.
METHOD
Access and user's guide
The program can be found in the directory ~kv04sll/edu/Temp/Kephera. Our main function is called testamb.
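Roughly, the program is started from Matlab along the following lines. The exact call sequence is an assumption on our part, based on the pre-existing functions listed in the System Description below.

    % Assumed start-up sequence (sketch, not a verified script)
    comm_open              % open communications with the Khepera
    start(0)               % enable programs to be run on the Khepera
    testamb                % our main function; loops until stopped manually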
Problem specification
We were given a robot, Khepera, along with different kinds of wooden blocks to act as obstacles and objects in the environment, which was a small enclosed area with a black floor and white walls. There were also other kinds of environments that we had access to.
The assignment for this project was to implement a simple behaviour for the Khepera using several layers of reactive behaviours.
It was decided that the Khepera should be able to distinguish between a wall and an object, to avoid the walls, and to push the object towards a wall. To find an object the robot had to be able to search through its environment.
Matlab
In order to accomplish this task we had to use Matlab. We were given a set of existing functions, such as functions to stop, start, set the speed, and read ambient light or sensor proximity.
Specifications of the Khepera and the environment
The Khepera has eight proximity sensors placed around the robot (figure 1), which are sensitive to the light reflected by obstacles. It also has a camera that can be used to differentiate between different intensities of light. The Khepera has two wheels, one on each side, with which it can move around in the environment.
The environment is a small, closed area with white surrounding walls and a black floor; it measures 50 x 80 cm, with 8 cm high walls (figure 2). The object used is a thin wooden block with a shiny surface, which reflects light better than a plain wooden surface. The object measures 6 cm (height), 3 cm (width) and 1.5 cm (depth).
Figure 1. Placement of the eight proximity sensors (1–8) and the camera on the Khepera.
Figure 2. The environment, an enclosed 50 x 80 cm area.
Algorithm description
Our original algorithm (an illustrative Matlab sketch of these layers follows after the list):
1. Function identify
   a. If sensor 3 or 4 returns a high value, something has been found.
      i. While only sensor 3 has a high value
         1. Turn towards the potential object.
      ii. While only sensor 4 has a high value
         1. Turn towards the potential object.
      iii. Stop when in front of the object.
   b. While sensors 3 and 4 both have high values
      i. If it previously turned to the left
         1. Turn right.
      ii. Else turn left.
   c. Read the proximity sensors.
   d. If three of the sensors have high values at the same time
      i. Assume it is a wall and return that information.
   e. If sensor 3 but not sensor 4 has a high value
      i. Assume it is an object and return that information.
   f. If sensor 4 but not sensor 3 has a high value
      i. Assume it is an object and return that information.
   g. Turn back until the robot is in its original position.
2. Function searchTask
   a. Check the tag; if new values are allowed
      i. Randomise a number between 1 and 6 for the left wheel.
      ii. Randomise a number between 1 and 6 for the right wheel.
   b. Return values for the left and right wheels.
3. Function Push
   a. If an object has been found
      i. Set the value for the left wheel to 4.
      ii. Set the value for the right wheel to 4.
   b. If the object is slipping off to the left
      i. Compensate by setting the left wheel to -1.
      ii. Compensate by setting the right wheel to +1.
   c. If the object is slipping off to the right
      i. Compensate by setting the left wheel to +1.
      ii. Compensate by setting the right wheel to -1.
   d. If a wall is detected
      i. Set an appropriate speed for the left and right wheels.
      ii. Set the tag to allow new numbers to be randomised.
      iii. Return the information that a wall has been recognised.
   e. Return values for the left and right wheels.
4. Function testamb
   a. Set a starting speed for the left and right wheels.
   b. Create an array to save sensor values in.
   c. Create a variable, Tag, to keep track of when it is OK to randomise new values.
   d. Create a counter to keep the robot from going in circles.
   e. Create a variable to keep track of wall/object/nothing.
   f. While running
      i. Read the proximity sensor values.
      ii. If there is no object, continue looking for one.
      iii. If there is a wall, avoid it.
         1. Set the tag to randomise new values for the right and left wheels.
      iv. If there is an object
         1. Set the right and left wheels to the values obtained from the push function.
      v. Else, if nothing is found
         1. Continue searching.
         2. Set the tag not to change the wheel values.
         3. Check the counter values to avoid going in circles or searching in the same way for too long.
            a. If it has been going in circles
               i. Set the tag to randomise new wheel values and reset the counter.
   g. Set the speed of the wheels according to what has been calculated above.
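As a complement to the list above, here is an illustrative Matlab sketch of the identify, searchTask and Push layers. It is not our actual code: the threshold THR, the return conventions and which sensor corresponds to which side are assumptions made for the example, and our real functions use the global variables netR, netL, Tag and r rather than return values.

    % Illustrative sketches of the three layers (assumptions marked in comments)
    function r = identify(sensors)
        THR  = 300;                    % assumed "something is close" threshold
        high = sensors > THR;          % which of the eight sensors see something close
        if sum(high) >= 3
            r = 2;                     % several sensors active at once: assume a wall
        elseif xor(high(3), high(4))
            r = 1;                     % only one front-centre sensor active: assume a thin object
        else
            r = 0;                     % nothing identified
        end
    end

    function [left, right] = searchTask(tag)
        if tag == 0                    % allowed to randomise new search speeds
            left  = randi(6);          % random speed between 1 and 6 for the left wheel
            right = randi(6);          % random speed between 1 and 6 for the right wheel
        else                           % otherwise keep driving with the default speed
            left  = 4;
            right = 4;
        end
    end

    function [left, right] = Push(sensors)
        THR   = 300;
        left  = 4;                     % push straight ahead by default (step 3a)
        right = 4;
        if sensors(3) > THR && sensors(4) <= THR      % assumed: object slipping to one side
            left  = -1;  right = +1;   % compensate as in step 3b
        elseif sensors(4) > THR && sensors(3) <= THR  % assumed: slipping to the other side
            left  = +1;  right = -1;   % compensate as in step 3c
        end
    end

The wall check and the resetting of the tag in Push (steps 3d–e) are left out of the sketch for brevity.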
System Description
The previously existing functions were:
• avoid_obst – avoids obstacles.
• comm_close – closes communications with the Khepera.
• comm_open – opens communications with the Khepera.
• Khepcom – checks that the values are properly formatted.
• read_amb – returns ambient light values.
• read_prox – returns the values from the proximity sensors.
• read_vision – reads the camera.
• set_speed – sets the speed for the left and right wheel, e.g. set_speed([4 4]).
• start – enables programs to be run on the Khepera, e.g. start(0).
• stop – stops all programs running.
New functions (layers) created for this project:
• testamb – This is our main function, with a while loop that runs continuously.
[Supervisor's comment: First describe the function on a high level, as an infinite loop with the following elements (use a flowchart): read the sensor values; identify; avoid_obstacle; searchTask; Push; send the motor commands. The following is really not clear. Try to simplify by referring to the flowchart.]
testamb has the global variables netR, netL, Tag and r. It sets an initial speed for the wheels, creates an array x of length 8 to store proximity values, and sets r and Tag to 0. Inside the main while-loop the proximity values are read and stored in the variable sensorvalues_prox. The function then checks whether an object has already been found; if not, it sends the current sensor values to identify. If a wall has been found it sets a new speed for the wheels using the avoid_obst function and then lets a new speed be randomised by resetting the variable Tag. If an object has been found it sets the speed of the wheels by sending the sensor values to Push, which returns the proper speed values, and then resets Tag so that new speed values can be randomised. If nothing was found it continues to drive on, and the speed returned from searchTask stays the same. If it searches for too long with the same speed, the counter will eventually break that behaviour and reset Tag so that new speed values are randomised. (See the algorithm description above.) A simplified sketch of this loop is given after the function list below.
• identify – This function distinguishes between objects and walls. It receives proximity sensor values from testamb and returns a value for r (1 is an object, 2 is a wall, 0 is nothing). (See the algorithm description above.)
[Supervisor's comment: Explain how it differentiates between a wall and an object. Use a drawing (this is an example, do your own!). Refer to "active perception".]
• searchTask – This function returns speed values for the wheels so that the robot can search the environment. It uses a global variable Tag (Tag = 1 means no new numbers, Tag = 2 means randomise new numbers) and sets the right and left wheels to random numbers in order to search in a new way each time. It returns values for the right and left wheels. (See the algorithm description above.)
• Push – This function pushes an identified object towards a wall. It uses the global variable r (r = 2 is a wall) and the global variable Tag (Tag = 0 randomises new values in searchTask). It gets proximity sensor values from testamb and returns two values, one for the left wheel and one for the right wheel. (See the algorithm description above.)
[Supervisor's comment: How does it stop pushing?]
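As requested above, the testamb loop can be summarised at a high level as follows. This is a simplified Matlab sketch rather than our exact code: the return conventions of avoid_obst and of our own layers are assumptions, and the real implementation uses the global variables netR, netL, Tag and r instead of the return values shown here.

    % Simplified outline of the testamb main loop (sketch, assumed conventions)
    function testamb()
        wheels = [4 4];                % starting speed [left right]
        tag    = 0;                    % 0 = allowed to randomise new search speeds
        r      = 0;                    % 0 = nothing, 1 = object, 2 = wall

        while true
            prox = read_prox();        % read the eight proximity sensors

            if r ~= 1                  % identify, unless an object is already being pushed
                r = identify(prox);
            end

            if r == 2                  % wall: avoid it, then search in a new direction
                wheels = avoid_obst(prox);   % assumed to return new wheel speeds
                tag = 0;  r = 0;
            elseif r == 1              % object: let the push layer steer
                [wheels(1), wheels(2)] = Push(prox);
            else                       % nothing: keep searching
                [wheels(1), wheels(2)] = searchTask(tag);
                tag = 1;               % keep the current random speeds for a while
            end

            set_speed(wheels);         % send the motor commands
        end
    end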
LIMITATIONS
Limitations of perception
Sensor 5 of our Khepera gave us problems during this project, since it occasionally stopped sending data. This made it difficult to identify objects at times. Another, more serious, problem we had to overcome was the fact that our Khepera had no functioning camera. The camera would return different values for exactly the same stimulus, which made it impossible to use in the code, even after we tried to tweak it. Our first idea had been to use the camera for identifying walls, since the camera is located higher up on the Khepera while the proximity sensors sit lower down. That would have made it possible to use the sensors to recognise an object and the camera to recognise a wall.
Since we ended up using the sensors to recognise both objects and walls, we had to distinguish between the two. In order for the Khepera not to mistake an object for a wall, we had to use rather thin objects so that only one sensor detects an object, whereas when more than one sensor is activated the robot recognises a wall.
We also discovered that, since we were using only the proximity sensors, we were limited to searching for objects at very close range, which limits the time available to take action before bumping into things. We also discovered that the back sensors, seven and eight, are not properly weighted, which means that the Khepera runs into problems when backing into a wall.
During our experiments the left wheel of our Khepera stopped working, or ran more slowly than the right wheel. This limited us greatly until we could get hold of a replacement Khepera. Overall the Khepera seems very fragile and sensitive, which makes some tasks risky to attempt.
Physical limitations
The Khepera has some physical limitations. It has rather small wheels with a rubber coating. This gives it good traction against most surfaces, but the coating also attracts a lot of dust particles, which seems to affect the wheels' ability to spin. The small wheels also mean that the surface has to be even.
Programming limitations
The programming limitations we have been subject to are that we had to keep the amount of code to a minimum and had to adapt to already existing code. In addition, our knowledge of Matlab was limited to begin with, and the short time available for this project did not give us the opportunity to learn how to program in the most efficient way.
If the Khepera discovers a wooden object, starts to push it, and the object is then taken away from it, it should be able to adapt instead of carrying on as if there were still an object and acting on premises that are no longer valid. With the code used in this project, the Khepera would in such a situation continue to believe there is an object until it approaches something that is identified as a wall; only then will it adapt its behaviour to the new premises. This is not entirely in accordance with reactive behaviour. Instead the Khepera goes into a sort of trance when it starts to push something and stays in that trance until it comes to a wall. This unwanted behaviour is due to the fact that the push layer is not constructed to handle itself as an independent layer. Instead it checks its premises step by step in a specific order, which means it has to find a wall and exit the push function before it can start behaving reactively again. If the push layer were instead active only while there was indeed something to push, its behaviour would still be reactive. This could easily be accomplished by enclosing the push function inside a while-loop that only allows it to push as long as at least one sensor reports something to push, as sketched below.
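A minimal sketch of that idea, using the same assumed threshold and sensor indices as in the earlier sketches, could look like this:

    % Sketch of a more reactive push: stay in the behaviour only while one of the
    % front-centre sensors still reports something to push and no wall is seen
    THR  = 300;                        % assumed proximity threshold
    prox = read_prox();
    while any(prox(3:4) > THR) && sum(prox > THR) < 3
        [left, right] = Push(prox);    % let the push layer steer
        set_speed([left right]);
        prox = read_prox();            % re-check the world on every iteration
    end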
A rather problematic limitation occurs when a sensor reading from the Khepera fails. The already existing code contains a built-in limit on the number of retries when a sensor reading fails. At times that limit is too low, and the result is that the Khepera essentially becomes blind and starts to behave in an odd fashion, or exits the program.
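One possible workaround, sketched under the assumption that a failed reading comes back empty (the wrapper name, the failure convention and max_tries are our own illustrative choices, not part of the given code), would be to retry a few extra times before giving up:

    % Hypothetical retry wrapper around read_prox
    function prox = read_prox_retry(max_tries)
        for k = 1:max_tries
            prox = read_prox();
            if ~isempty(prox)          % got a usable reading, stop retrying
                return
            end
            pause(0.05);               % short wait before trying again
        end
        error('read_prox failed %d times in a row', max_tries);
    end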
PROBLEMS AND REFLECTIONS
[Supervisor's comment: Come on, this section sounds like a "complaints" section! The other group had no problems adapting to the Matlab language, coped with the hardware problems, and had time to try new approaches. Why? Because they spent more time than you in the lab!]
We have come to realise that programming reactive behaviour is difficult, and that it is deceptively easy to slip into programming the robot to act in a regular, conditional fashion driven from a main function.
The fact that our robot had a lot of physical problems made things difficult, since our first ideas had to be scrapped and replaced with new ones, which takes time.
We did not really get any information about what to do and what not to do with a Khepera. We later learned from the manual, and from another tutor, that we should always disconnect the power supply before handling the Khepera in any way. It was also somewhat hard to grasp what abilities were available to us, since the Khepera we were given had a very poorly working camera.
The start of this project was somewhat delayed, and scheduling our time has been hard work. It has also been difficult to divide the workload between group members. The Khepera limits us to using one computer, which mostly means that one person is actively writing code while the others wait and try to contribute. At the end of the course one of us discovered a Khepera simulator that could have enabled us to work on different layers for the Khepera simultaneously. Information about this simulator would have been greatly appreciated at the beginning of the project.
Having to use Matlab with no previous experience, while being under pressure to complete the project on time, could have been avoided by including a short introduction to Matlab in the course.
Despite the many difficulties, this has been a fun and useful experience that has taught us the difference between a robot using reactive behaviour and a robot following a preset sequence of actions even when circumstances change.
[Supervisor's comment: I would like a reflection about the advantages and drawbacks of reactive and deliberative behaviours, for example as a table with rows for "reactive" and "deliberative" and columns for "advantage(s)" and "drawback(s)".]
REFERENCES
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.