Proceedings of the 2nd International Conference on Engineering & Emerging Technologies (ICEET), Superior
University, Lahore, PK, 26-27 March, 2015.
Voice and remote controlled Electric Powered Wheel chair
Engr. Muhammad Ahmed Sikandar
Department of Electronic Engineering
Hamdard Institute of Engineering & Technology
Hamdard University, Karachi, Sindh, Pakistan
[email protected]

Engr. Ahmed Hassan
Department of Electronic Engineering
Hamdard Institute of Engineering & Technology
Hamdard University, Karachi, Sindh, Pakistan
[email protected]
Abstract – To address the needs of physically impaired people, this paper presents the concept and development of a prototype wheelchair which can be controlled via voice commands. The model presented in this paper comprises a fully functional electric powered wheelchair integrated with a voice recognition capability. The user (typically patients or people with little or no physical mobility) can control the wheelchair easily by simply speaking directional commands to the system via a microphone. The current prototype has been tested to perform according to five basic commands spoken in simple English (e.g. ‘forward’ or ‘stop’), but the flexible nature of the system allows further extension of the set of recognizable words. For people with little physical mobility or people who are speech impaired, a remote control module is provided as an alternative to the voice control mechanism. Obstacle detection is also integrated into the system for collision avoidance.
Index Terms – Voice control, human aid, wheelchair, collision avoidance
I. INTRODUCTION
Transportation and handling of patients with temporary or permanent physical disability has always been a matter of great concern. The simplest mode of transportation used for performing everyday activities is the wheelchair. A patient capable of at least upper-body movement can easily operate a wheelchair without any external help. But if a person is incapable of upper-body movement, or too old to muster enough strength, operating a wheelchair might become a problem.
To cater to the needs of such patients, a prototype wheelchair has been developed which incorporates voice recognition based control. With voice control integrated, a patient with a physical disability simply has to speak simple commands like “forward”, “back” or “stop” in order to operate the wheelchair.
For speech-impaired users, additional control in the form of a remote control is provided to ensure simple operation. With both spoken and remote commands available, the prototype system becomes highly flexible and suitable for use by a wide variety of patients.
The addition of safety mechanisms such as proximity sensors makes the wheelchair more user friendly. The proximity sensors keep a constant lookout on the distance between the wheelchair and any obstruction in order to avoid collision. If the safe distance is crossed, the sensors automatically halt the operation, saving the user from a possible collision leading to minor or, in some cases, severe injuries.
II. RELATED WORK
Providing more comfort and ease of operation, especially to people with limited mobility, has been a focus of research all around the world, and numerous techniques have been developed to address this problem.
C. Aruna et al. [1] focused their study on paraplegic persons (people suffering from paraplegia: impairment in motor or sensory function of the lower extremities [2]). In their system, two types of input, voice recognition and touch screen, are implemented, and operation can be controlled simply by entering commands through either. The accuracy of the touch screen based input of this system was found to be 50%, while the accuracy of the voice recognition system was found to be 80.8%.
Rajesh Kannan Megalingam et al. [3] proposed an Intelligent Home Navigation System (IHNS), focused on elderly people who sometimes forget their way to different rooms. In this system, a word is associated with each of the designated rooms; when the user calls out the name of a particular room, the wheelchair automatically navigates to the room corresponding to the spoken word. The IHNS minimizes the user's need for external help.
M. Anousouya Devi et al. [4] took matters a step further and implemented a brain-computer interface to control the movement of a wheelchair. The channel established between the brain and the wheelchair was named the Hybrid BCI, and it was made more efficient by integrating a voice recognition based control mechanism: brain signals synchronised with voice commands are used to control the movement of the wheelchair.
Another rather new approach has been developed by S. A. Akash et al. [5] for the same purpose. They used the Internet of Things to control the movement of a wheelchair. Incorporating numerous control mechanisms, such as joystick, chin control, voice activation and control via head movement, makes it a highly flexible system. The Internet of Things concept enhances the system further by introducing control of the wheelchair via the internet. This means the system can be remotely operated from greater distances, and it may also introduce a kind of remote monitoring capability: support staff of patients who cannot be left alone can easily track and monitor the activity of such patients remotely.
III. SYSTEM DESIGN APPROACH
The design of the system comprises the following main parts:
1. System hardware
2. Control algorithm

IV. VOICE RECOGNITION MODULE OF THE SYSTEM
The main feature of the system, the voice recognition function, is served by the SPCE061A sound controller [7]. The sound controller, together with the microphone and interface circuitry, is referred to as the voice recognition module (VRM) of the system. Typical application fields for the selected controller include voice recognition products, intelligent interactive toys, and general speech synthesizers. Some of the key features of the module are as follows:
i. 16-bit µ’nSP™ microprocessor
ii. CPU clock: 0.32 MHz – 49.152 MHz
iii. Operating voltage: 2.4 V – 3.6 V
iv. 32K-word flash memory
v. Software-based audio processing
The module's low power consumption allows its power source to be kept to a bare minimum. Moreover, its high processing speed ensures minimal processing delay and prompt action in response to the spoken command. The block diagram of the SPCE061A sound controller is shown in Fig. 1.
A. System Hardware
The system hardware consists of a basic wheelchair, modified to be driven by DC electric motors. DC motors are used because they can easily be powered on board by lead-acid batteries. The motors provide the following movement capabilities to the wheelchair:
a. Forward and reverse
b. Turn Right or Left
c. Full 360° turn.
To serve this purpose, a relay based motor control circuit has been developed to control both the direction of rotation and the speed of the motors. The speed control is necessary to avoid the initial jerk of the motors when the wheelchair starts moving from rest, which could dislodge the user.
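The jerk-free start described above can be sketched as a linear ramp on the motor drive signal. This is a minimal sketch under stated assumptions: it presumes PWM-style speed control with an illustrative ramp duration, whereas the paper's relay circuit and actual timings are not specified.

```cpp
// Soft-start duty cycle: ramp linearly from 0 to the target duty over
// ramp_ms milliseconds, then hold the target. The 8-bit duty range and
// ramp time used in the tests are illustrative assumptions, not values
// taken from the prototype.
int softStartDuty(int target_duty, long elapsed_ms, long ramp_ms) {
    if (elapsed_ms <= 0) return 0;                 // not started yet
    if (elapsed_ms >= ramp_ms) return target_duty; // ramp finished: full speed
    return static_cast<int>(target_duty * elapsed_ms / ramp_ms);
}
```

Called once per control-loop iteration, this keeps the motor voltage rising gradually instead of stepping to full drive in one instant.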
To avoid accidental collision of the wheelchair with an obstacle (a wall etc.) and to maintain a safe distance from objects (furniture etc.), HC-SR04 ultrasonic ranging modules have been installed around the wheelchair. These proximity sensors use sound waves to determine the distance between the source and the obstruction. The ultrasonic modules used in the system provide an effective measurement range of 2 cm to 400 cm (about 1 in. to 13 ft) with a resolution of 0.3 cm and a measuring angle of 30° [6].
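As an illustration of the ranging principle just described, the helper below converts a measured round-trip echo time into a one-way distance, using the 340 m/s speed of sound and the module's 2 cm–400 cm effective range; the function name and the out-of-range convention are our own.

```cpp
// Convert an ultrasonic echo time (microseconds, round trip) into the
// one-way distance to the obstacle in centimetres, using v = 340 m/s
// (0.034 cm/us). Readings outside the module's 2 cm - 400 cm effective
// range are reported as -1.0.
double echoToDistanceCm(double echo_us) {
    const double speed_cm_per_us = 0.034;                 // 340 m/s
    double distance = (echo_us * speed_cm_per_us) / 2.0;  // halve round trip
    if (distance < 2.0 || distance > 400.0) return -1.0;  // out of range
    return distance;
}
```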
The hardware of the prototype developed is able to support the movement of a person weighing 45–50 kg. The payload of the current prototype is limited, whereas patients with impaired mobility may weigh considerably more. By integrating more powerful motors into the system, the payload can easily be increased to accommodate people in heavier weight categories.
B. Control Algorithm
The control algorithm developed for the system ensures smooth functioning by providing the following necessary operations:
a. Constant monitoring of the control inputs.
b. Correct recognition and interpretation of the user's voice command.
c. Error free execution of the function (forward, reverse etc.) corresponding to the input generated by the user.
d. Handling of a sudden ‘start’ from rest or a sudden ‘stop’ while in motion.
e. Effective obstacle detection.
f. Obstacle avoidance (slowing down to a halt if the safe operating distance threshold is not met).
g. System shutdown/halt if any undesired condition is reached.
h. On invalid input: halt if moving, maintain the current position and wait for new input.
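Operations (a)–(h) above can be condensed into a single decision step per control cycle. The sketch below is a hypothetical illustration: the command strings and the default safe-distance threshold (25.4 cm, i.e. 10 in) are assumptions for the example, not values taken from the prototype.

```cpp
#include <string>

// One decision step of the control loop: an invalid command or an
// obstacle inside the safe distance halts the chair; otherwise the
// recognized command is passed through to the motor circuit.
// Command set and threshold are illustrative assumptions.
std::string decideAction(const std::string& command, double obstacle_cm,
                         double safe_cm = 25.4) {
    const bool known = (command == "forward" || command == "backward" ||
                        command == "left" || command == "right" ||
                        command == "stop");
    if (!known) return "halt";                  // invalid input: halt, wait
    if (command != "stop" && obstacle_cm >= 0.0 && obstacle_cm < safe_cm)
        return "halt";                          // obstacle too close
    return command;                             // execute the user's command
}
```

A negative `obstacle_cm` stands in for "no obstacle detected", matching the out-of-range convention of the ranging sensors.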
Fig. 1 Block diagram of SPCE061A sound controller [8]
V. MAIN CONTROLLER OF THE SYSTEM
To streamline the overall system and carry out the necessary operations, an Arduino Mega 1280 [9] is used. The Arduino Mega 1280 module features an AVR ATmega1280 microcontroller with the following peripherals:
1. 54 digital input/output pins
2. 16 analog inputs
3. 4 UARTs (hardware serial ports)
4. Operating voltage: 5 V
5. DC current per I/O pin: 40 mA
VI. SYSTEM FLOW DIAGRAM
The overall working of the system is explained by the flow chart in Fig. 2: the system waits for a voice command or joystick input; the main controller then generates the command for the required action; if an obstacle is detected, the distance to it is checked against the safe operating distance; when the path is clear, the motor circuit drives the wheelchair in the required direction and the current status (moving or at rest) is displayed on the LCD.
A. Available voice commands and joystick inputs
The VRM used in the system can record up to 15 separate speech commands, but the current system uses only 4 commands, namely:
1. Forward
2. Backward
3. Left
4. Right
Since the control inputs can also be generated using the joystick, the joystick commands are kept the same as those of the VRM.
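A minimal sketch of how the recognized commands might be dispatched, assuming the VRM reports each trained command by a numeric index; the index assignment shown here is an assumption for illustration, as the prototype's actual training order is not specified in the paper.

```cpp
#include <string>

// Map a VRM command index to the corresponding motion. The index order
// (1..4) is an illustrative assumption, not the prototype's actual
// training order; anything else is treated as an invalid input.
std::string commandName(int vrm_index) {
    switch (vrm_index) {
        case 1: return "forward";
        case 2: return "backward";
        case 3: return "left";
        case 4: return "right";
        default: return "invalid";  // unrecognized index: halt and wait
    }
}
```

Because the joystick commands mirror the VRM's, the same table can serve both input paths.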
B. Safe operating distance from obstacle (if detected)
As mentioned earlier, the ultrasonic ranging modules are
used for obstacle detection. These modules use the concept of
sonar to detect an obstacle and to determine the distance
between the source and the obstacle. The minimum safe
operating distance for the system can be set using the following
formula:
S = v · t                                  (1)

Here,
S = distance between the source and the obstacle,
v = velocity of sound waves (340 m/s),
t = time taken by the sound waves to return after emission.
We must note that since the sound waves are emitted and then return after striking the obstacle, the distance covered by the sound waves is doubled, so:

S = 2d                                     (2)

where
d = required distance to be calibrated in meters.
Rearranging (1) and substituting (2), we get:

t = 2d / v                                 (3)

Equation (3) shows that by monitoring the time the sound waves take to return to the module, the required safe distance can be calibrated. With v = 340 m/s, the round-trip echo time works out to roughly 148 µs per inch of distance, which matches the scaling quoted in the module's datasheet [6]. For the calibration of a safe distance of 10 inches (0.254 m), the calculation is shown below:

t = (2 × 0.254 m) / (340 m/s) ≈ 1.49 ms
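The calibration above can be checked numerically with a short helper; the function name is ours, and the constant simply restates v = 340 m/s in inches per microsecond.

```cpp
// Round-trip echo-time threshold (microseconds) for a given safe
// distance in inches: t = 2d / v, with v = 340 m/s = 0.0133858 in/us.
// This works out to roughly 148-150 us of echo time per inch, in line
// with the ultrasonic module's datasheet scaling.
double thresholdMicroseconds(double safe_distance_in) {
    const double v_in_per_us = 0.0133858;          // 340 m/s in inches/us
    return 2.0 * safe_distance_in / v_in_per_us;   // round-trip travel time
}
```

Echo readings shorter than this threshold indicate an obstacle inside the safe distance, so the chair should slow to a halt.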
Fig. 2 Flow chart of the system

VII. SYSTEM ANALYSIS AND CALIBRATION PARAMETERS
For the purpose of system performance analysis and calibration, different formulas have been used for the various system components; these formulas are presented alongside the corresponding components in Section VI above.

VIII. CONCLUSION & FUTURE ENHANCEMENTS
The prototype of the designed system underwent a test run and successfully met the basic performance parameters: the wheelchair executed the basic spoken commands and was operated by an individual weighing 45–50 kg.
For future enhancement, it is suggested to use more powerful and lighter motors to support patients/users in higher weight categories. Moreover, by increasing the number of available voice commands, the system can be modified to cater to a wide variety of applications.
ACKNOWLEDGMENT
The authors would like to thank Jahanzaib Ali, Muhammad Bilal Tariq and Syeda Ambreen Haider, from the Department of Electronic Engineering, Hamdard University Karachi, for their efforts in transforming the project from idea to reality. Syeda Ambreen Haider also volunteered to carry out the test run of the system.
REFERENCES
[1] C. Aruna, et al, “Voice recognition and touch screen based wheel chair for
paraplegic persons,” in 2014 International Conference on Green
Computing Communication and Electrical Engineering, Coimbatore,
March 2014, pp. 1-5.
[2] Wikipedia contributors. (2015, February 14). Paraplegia. [Online].
Available: http://en.wikipedia.org/wiki/Paraplegia
[3] Rajesh Kannan Megalingam et al, “Automated voice based home
navigation system for the elderly and the physically challenged,” in 2011
International Conference on Wireless Communication, Vehicular
Technology, Information Theory and Aerospace & Electronic Systems
Technology, Chennai, February-March 2011, pp. 1-5.
[4] M. Anousouya Devi et al, “Hybrid brain computer interface in wheelchair
using voice recognition sensors,” in 2014 International Conference on
Computer Communication and Informatics, Coimbatore, January 2014, pp.
1-5.
[5] S. A. Akash et al, “A novel strategy for controlling the movement of a smart
wheelchair using internet of things,” in 2014 IEEE Global humanitarian
technology Conference-South Asia Satellite,” Trivandrum, September
2014, pp. 154-158.
[6] HC-SR04 Ultrasonic Sensor – Product User's Manual, Cytron Technologies Sdn. Bhd., Johor, Malaysia, May 2013, vol. 1, p. 3.
[7] SPCE061A 16-Bit Sound Controller Datasheet, Sunplus Technologies Ltd, Taiwan, August 2002, Version 0.1.
[8] SPCE061A 16-Bit Sound Controller Datasheet, Sunplus Technologies Ltd, Taiwan, August 2002, Version 0.1, p. 4.
[9] (2015, February 27). Arduino Mega. [Online]. Available: http://arduino.cc/en/Main/arduinoBoardMega