Electronic Textiles and Wearable Computing for Autonomous
Location Awareness
R. Gomathi and T. Preethiya
PSNA College of Engineering and Technology, Dindigul
E-mail: [email protected], [email protected]
Mobile and Pervasive Computing (CoMPC–2008)
ABSTRACT: This paper describes an autonomous, wearable location awareness system that determines a user’s
location within a building given a map of that building. The system uses a moderate number of ultrasonic range
transceivers as the sensing elements. Given a set of range readings from these sensors, the system attempts to
match those actual readings to expected readings associated with a set of candidate locations for the wearer. These
expected readings are calculated using a simulation model of the propagation of ultrasonic signals within a building.
A complementary algorithm is given for determining the wearer’s movement between rooms, allowing for the
uncertainty associated with sensor readings in complex, multi-room environments. A wearable prototype system is
described and results from this system in a range of scenarios are presented and analyzed.
Keywords—Electronic Textiles, E-Textiles, Wearable Computing, Location Awareness, Context Awareness,
Ultrasonic Simulation Model.
INTRODUCTION
Many mobile computing applications require some
knowledge of the location and orientation of the
user [1], [2], [3]. In most outdoor settings, location and
orientation can often be satisfactorily determined using a
combination of a Global Positioning System (GPS) unit
and a digital magnetic compass. In most buildings,
however, the GPS signal is typically unavailable and
readings from a digital compass can be distorted. To
address this limitation, systems have been proposed that
include infrastructure installed in the building to assist in
determining the location of a given user. However,
autonomous location systems are preferable because they
do not require the extra cost of installing the infrastructure,
and no questions arise of trusting the infrastructure to
maintain location privacy.
The ultimate goal is to have a user enter a building
through a doorway, mark that doorway as the origin, and
then have the system automatically map the areas that the
user walks through by finding distances to nearby walls,
doorways, and other surroundings. Not depending upon an
installed infrastructure is particularly important in
emergencies where the building’s electrical and wireless
infrastructure may be damaged or turned off. The system
could also be used by the blind to navigate through a
building. Because the building is mapped by finding the
distances to nearby obstacles, the system could also
indicate an impending collision to a blind user. For
navigation applications, the system should have sufficient
accuracy such that the user does not go through the wrong
door.
This paper describes a proof-of-concept prototype of
that first step, an autonomous wearable system for location
awareness within a building that does not rely on an
installed infrastructure but does depend upon having a floor
plan of the building. The system uses an array of ultrasonic
transceivers to sense the structure of the user’s current
environment. It matches the current snapshot of the
surroundings to a transformed version of the blueprint of the
building. The transformed version of the blueprint is
constructed based on models of ultrasonic propagation. A
wearable belt-based prototype has been constructed for
sensing distances and offline algorithms have been
implemented to generate preliminary results for this
system.
In this paper, the model used for ultrasonic propagation
in the system is described in Section 2. The algorithms
employed to match the current ultrasonic readings to a
location in the building are given in Section 3. Finally,
experimental results are presented in Section 4 and then
concluding remarks are given.
SIMULATION MODEL
The algorithm implemented for finding a user’s
location, described in Section 3, depends upon comparing
actual ultrasonic range readings to simulated ultrasonic
range readings from a set of candidate positions. This
section describes the ultrasonic propagation model used for
the simulated range readings portion of the location
algorithm.
The range measurement in the proposed system is
achieved by computing the time, t, that it takes for an
ultrasonic signal to travel from a transceiver to an obstacle
and return to the transceiver.
The first echo whose amplitude exceeds a threshold is
taken to be a valid echo for time measurements, and any
further echo received is ignored. The range measurement is
calculated as d = c × t/2, where c is the speed of sound in
air. This process is repeated, sequentially, for every sensor
in the system to form a 360 degree representation of the
surroundings.
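The time-of-flight computation and first-echo thresholding described above can be sketched as follows. The speed-of-sound constant, the sampled-amplitude interface, and the function names are illustrative assumptions for this sketch, not the prototype's actual code:

```c
#include <stddef.h>

/* Speed of sound in air at room temperature, in feet per second
   (assumed value; the paper reports all distances in feet). */
#define SPEED_OF_SOUND_FT_PER_S 1125.0

/* Convert a round-trip time-of-flight (seconds) into a one-way
   range reading: d = c * t / 2. */
double range_from_tof(double t_seconds)
{
    return SPEED_OF_SOUND_FT_PER_S * t_seconds / 2.0;
}

/* Scan an echo-amplitude trace sampled at `rate` Hz and return the
   range implied by the first sample whose amplitude exceeds the
   threshold, or -1.0 if no valid echo is found.  Any later echoes
   are ignored, mirroring the first-echo rule described in the text. */
double first_echo_range(const double *amplitude, size_t n,
                        double threshold, double rate)
{
    for (size_t i = 0; i < n; ++i) {
        if (amplitude[i] > threshold)
            return range_from_tof((double)i / rate);
    }
    return -1.0;
}
```

In the actual system this scan would be repeated sequentially for each of the transceivers to build the 360-degree snapshot.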
Ultrasonic Signal Characterization
Unfortunately, the propagation and interpretation of
ultrasonic signals is not simple. The wave transmitted from
an ultrasonic transmitter spreads as a cone rather than a
narrow beam. The ultrasonic wave interacts with elements
of the environment in a complex way, such that the echo
received by the receiver is not necessarily due to reflection
from an obstacle in the transmitter’s line of sight. The echo
can be received after multiple reflections, after reflection
from any obstacle in the beam, or after diffraction from an
obstacle, as shown in Fig. 1. The sensor in the figure emits
an ultrasonic signal toward the corner C, but after leaving
the transmitter, the wave spreads in the shape of a cone and
is reflected by A, B and D. The signal also experiences
multiple reflections at E and F before it reaches the
receiver. The reflection that produces a range reading
depends on the angle of inclination of the ultrasonic wave
front to the reflecting object, the distance to the obstacle,
the radius of the sensor, the beam width, and the operating
frequency of the ultrasonic sensors used. The algorithms in
the prototype system require a simple yet explanatory
model that reflects the physical characteristics and behavior
of ultrasonic wave propagation.

Single reflection: A, C and D
Multiple reflections: E and F
Diffraction: B
Fig. 1: Multiple Sources of Reflection

It is important to note that walls and corners cause
acoustic waves to reflect, whereas an edge diffracts waves,
with the edge acting as the point source for the diffracted
waves. Diffraction attenuates the signal by a factor of
(2π√z/λ)^(−1), where λ is the wavelength of the ultrasonic
waves. The waveform detected at the receiver after
reflection/diffraction from walls, corners, or edges can be
represented as

r(t) = ∫_{−∞}^{+∞} p(τ) h_{T/R}(t − τ) dτ,

where p(τ) is the pulse waveform and h_{T/R} is the impulse
response of the corresponding reflection/diffraction path.
The first echo received at the receiver with signal strength
greater than a certain threshold gives the range reading.

DESCRIPTION OF THE ALGORITHM
A new algorithm for computing a user’s location and
orientation, given readings from a set of ultrasonic sensors
and a map of a building, is described in this section. Before
running the algorithm, a text version of the building’s floor
plan is constructed from an architectural drawing. While
this format of the map is not essential to the algorithm, it is
included here for completeness. The architectural map of
the building is partitioned into rooms, and each room is
converted into text format by classifying the features of the
room into walls, corners, and edges. Walls are marked by
their end points, whereas corners and edges are marked by
the intersection point of the two walls forming them. The
syntax to describe a wall is
{1}{x1}{y1}{x2}{y2},
where (x1, y1) and (x2, y2) are the end points of the wall. The
syntax for the description of a corner is
{2}{x}{y}
and for an edge is
{3}{x}{y},
where (x, y) is the intersection of the walls forming the
corner/edge.
The ultrasonic range measurements come from an array
of ultrasonic transceivers placed on a single plane of the
user’s body and positioned to produce a 360-degree scan of
the surroundings. The algorithm attempts to match these
readings to simulated measurements computed from an
implementation of the model in the preceding section. The
matches are attempted against a set of postulated locations
and orientations for the user, and the best match is selected
as the user’s current location and orientation.
The location awareness algorithm can be subdivided into
two parts: knowledge of the location and orientation of the
user within a room, and knowledge of the room in which
the user is currently located.
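The brace-delimited floor-plan syntax described above can be parsed with a small routine. The struct layout, constant names, and function signature below are illustrative assumptions; only the {type}{coordinates} line format comes from the text:

```c
#include <stdio.h>

/* Feature types used in the text encoding of the floor plan. */
enum { FEAT_WALL = 1, FEAT_CORNER = 2, FEAT_EDGE = 3 };

typedef struct {
    int type;        /* FEAT_WALL, FEAT_CORNER or FEAT_EDGE     */
    double x1, y1;   /* wall start point, or corner/edge vertex */
    double x2, y2;   /* wall end point (unused for corner/edge) */
} Feature;

/* Parse one line of the map file, e.g. "{1}{0}{0}{10}{0}" for a
   wall or "{2}{10}{0}" for a corner or edge vertex.
   Returns 1 on success, 0 on a malformed line. */
int parse_feature(const char *line, Feature *f)
{
    int n = sscanf(line, "{%d}{%lf}{%lf}{%lf}{%lf}",
                   &f->type, &f->x1, &f->y1, &f->x2, &f->y2);
    if (n == 5 && f->type == FEAT_WALL)
        return 1;
    if (n == 3 && (f->type == FEAT_CORNER || f->type == FEAT_EDGE))
        return 1;
    return 0;
}
```

A wall line fills all five fields; a corner or edge line stops after three conversions, which `sscanf` reports through its return value.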
Pose Estimation
This algorithm [4], [6] takes as initial input an architectural
map. In addition, the system requires the user to provide an
initial estimate of the user’s location and orientation, for
example, which room the user is currently in or which door
to a building the user is entering. Doorways are good
starting points because the search method described below
increases its search area around doorways. As described in
previous sections, the ultimate goal is to have the user enter
an unmapped building through a doorway and use that
doorway as the origin for the map to be created by the
system. In the current version, an existing map is used and,
thus, the user must supply the initial estimate of the
position.
Once the algorithm begins execution, the system takes
range readings from all of the sensors, making two passes
and averaging them to mitigate the effect of noisy data.
Because the echo received at the receiver can be due to
specular reflections and may not represent the actual
distance to an obstacle, the following procedure is used to
detect and discard such spurious data. In the case of
multiple reflections, the distance returned by the sensor will
be substantially greater than the actual distance.
Specifically, data samples outside the range (mean + 2 ×
std) are removed from the set, based upon the “Empirical
Rule”. Also, results from the CMU Motion Capture library
[7], [8], [10] indicate that a normal person walks at
approximately 1.65 feet/second; in this algorithm, the
bound is expanded to 5 feet/second. Similarly, a person
rotates a maximum of 15 degrees/second when veering left
or right while walking; here, the bound is extended to 20
degrees between samples.
Given these bounds, a discrete set of points and
orientations is generated that represent candidates for the
user’s position and orientation within those bounds as
shown in Fig. 2. For each of these candidate points, a
simulation of ultrasonic range measurements from that
point is run, and the resulting simulated range
measurements are then compared to the actual range
measurements using a matching process that will be
described later. Prior to matching a candidate point against
the real data, the point is first tested for inclusion to make
sure that point is indeed within the room being checked,
using a standard point-in-polygon test.
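A standard even-odd (ray-casting) implementation of such a point-in-polygon test is sketched below; the interface, taking the room's wall vertices in order, is an illustrative assumption:

```c
#include <stddef.h>

/* Even-odd ray-casting point-in-polygon test, as used to check that
   a candidate position lies inside the room.  (px,py) is the
   candidate point; xs,ys hold the room's wall vertices in order.
   Returns 1 if the point is inside (odd crossing count), else 0. */
int point_in_room(double px, double py,
                  const double *xs, const double *ys, size_t n)
{
    int inside = 0;
    for (size_t i = 0, j = n - 1; i < n; j = i++) {
        /* Does the horizontal ray from (px,py) cross edge (j,i)? */
        if ((ys[i] > py) != (ys[j] > py)) {
            double x_cross = xs[j] + (py - ys[j]) * (xs[i] - xs[j])
                                       / (ys[i] - ys[j]);
            if (px < x_cross)
                inside = !inside;
        }
    }
    return inside;
}
```

An even number of crossings to the right of the point means the candidate lies outside the room; an odd number means it lies inside and is passed on to the matching step.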
Fig. 2: Set of postulated positions and orientations to test
for location

The inclusion test begins by drawing a horizontal line
through the candidate point and counting the number of
times it intersects the walls of the room. If the line
intersects the walls an even number of times, then the point
is outside the room, whereas if it intersects the walls an odd
number of times, then the point is within the room and can
be matched with the real ultrasonic data. Fig. 3 shows a
scenario with a hypothetical room in which the lines
through points A, B, and D intersect the room boundary an
even number of times, so those points are excluded from
matching, whereas the line through point C intersects the
boundary an odd number of times, so simulated
measurements are generated for point C and matched with
the real set of measurements.

Fig. 3: Candidate points within and outside the room

Specifically, the matching process selects the candidate
location/orientation that minimizes the weighted sum of the
differences between the real readings and the simulated
readings. In the weighting system, reflections from walls
are given more weight than reflections from corners or
edges, based on the observation that single reflections from
the walls received at the sensor will not be specular and,
thus, should be given more weight than others. The
objective function to be minimized is

c = w1 Σ_{i=1}^{n} |real_i − simulated_i| + w2 Σ_{j=1}^{m} |real_j − simulated_j|,

where n is the number of wall readings and m is the
number of corner/edge readings. The best results are
achieved when the wall weighting, w1, is at least twice the
corner/edge weighting, w2.

Room Occupancy
In this section, an algorithm that uses the algorithm for
location within a room to determine which room a user
moves to when leaving a room is described. In addition to
testing a set of hypothetical positions within a room, the
algorithm also tests adjacent rooms as candidates for the
user’s location when the user moves near an exit from the
current room.
This algorithm is invoked whenever a wearer’s current
position is within four feet of an exit from the current room.
At this point, all adjacent rooms are examined for potential
matches with the current set of sensor readings. Note that
the radius of postulated user orientation change is also
doubled. The reason for this expansion is that the sensor
readings near a doorway are often very poor matches for
those predicted by the model; this is due to the reflections
from nearby corners, doors whose positions can vary from
fully closed to fully open, and door frames that have
unusual reflective properties. The room occupancy
algorithm runs until either the user leaves the exit area and
remains in the room, or another room has a match clearly
better than the current room.
In the program, the while loop is run once every time the
ultrasonic sensors are sampled, and a current “goodness”
value Gnew is calculated for each candidate adjacent room
using the formula
Gnew = matchCR / (matchCR + matchi),
where CR denotes the current room and i indexes the
candidate rooms.
While the goodness values will range between 0 and 1,
the goodness is not a probability because the sum of the
goodness over all of the candidate rooms will not
necessarily equal 1. However, scaling the goodness values
to be in the range of 0 to 1 allows us to set thresholds for
choosing and discarding candidate rooms.
The current goodness for each candidate room is then
combined with the room’s previous goodness such that
there is some inertia when changing rooms, i.e., transitions
between rooms are smoothed and do not rapidly oscillate.
Because the match value is a smaller-is-better metric, if
the match of the current room is less than that of a
candidate room, then the current room has a better fit, and
the candidate room’s current goodness is combined with the
candidate’s previous goodness as a weighted average biased
heavily toward recent readings. If the candidate room has a
better fit than the current room, then its goodness is updated
using the equation for the union of two independent events
(i.e., P(A ∪ B) = P(A) + P(B) − P(A)P(B) = P(A) +
P(B)(1 − P(A))), not because the goodness values are
probabilities but because it maintains the property that all
G(i) values are in the range of 0 to 1 while allowing a
candidate room with a high goodness to quickly become
chosen as the new room.
These two methods of updating the goodness are heuristics,
but, empirically, we found that using them allowed
candidate rooms with a very poor goodness to be rapidly
discarded and candidate rooms with a very high goodness
to be rapidly chosen without much oscillation during
transitions between rooms. If the goodness of the candidate
room exceeds a threshold, then the candidate room is
assumed to be the new current room. Likewise, if the
goodness of the candidate room falls below a threshold,
then the room is purged from the candidate queue.
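The two-branch goodness update can be sketched as follows. The branch structure and the union-of-events formula follow the text, but the 0.25/0.75 recency split is an assumption, since the paper does not give its exact weighting constants:

```c
/* One update step of the room-occupancy "goodness" for a candidate
   room.  match_cr and match_i are smaller-is-better match values for
   the current room and the candidate; g_prev is the candidate's
   previous goodness.  Returns the candidate's new goodness in [0,1]. */
double goodness_update(double g_prev, double match_cr, double match_i)
{
    double g_new = match_cr / (match_cr + match_i);

    if (match_i < match_cr) {
        /* Candidate fits better: union-of-independent-events update
           keeps the result in [0,1] while letting a strong candidate
           rise quickly.  G = Gprev + Gnew*(1 - Gprev). */
        return g_prev + g_new * (1.0 - g_prev);
    }
    /* Current room fits better: average weighted heavily toward the
       most recent reading so poor candidates decay quickly.
       (The 0.25/0.75 split is an assumed constant.) */
    return 0.25 * g_prev + 0.75 * g_new;
}
```

In the surrounding loop, a candidate whose goodness crosses the upper threshold becomes the new current room, and one that falls below the lower threshold is purged from the candidate queue.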
RESULTS
Prototype System
The current prototype system is in the form of a
wearable belt with an array of 15 Polaroid ultrasonic range
sensors [11] placed approximately 24 degrees apart, as
shown in Fig. 4. Each kit is designed to operate as a single
sensor. The algorithm is implemented in MATLAB, while
the simulation model and the inclusion test are implemented
in C and invoked through the MEX mechanism in
MATLAB, which reduces the execution time.
Fig. 4: Prototype
Performance Evaluation
To evaluate the performance of the algorithm, the pose
estimation test was carried out in rooms of different sizes,
as shown in Fig. 5. In each of the tested rooms, the user’s
location was computed at 10 different places; this set of 10
measurements was repeated three times for each room.
Table 1 shows the error range in each set along with the
mean and standard deviation for each room. This set of
measurements was not taken in the area labeled “Room 2”
because it is essentially a narrow hallway, about 5 feet
long, and open to Room 6. There was not enough room in it
to take measurements at 10 places with approximately the
same density as the measurements in the other rooms. From
Table 1, it is evident that the algorithm’s performance as
measured by the absolute error was poorest in Room 6 and
the hall, with errors averaging more than 2.5 feet, an error
250 percent greater than in the other rooms. The large
absolute error of the hall is to be expected given the large
distances to be measured in this room as well as many
features in the hall that are not reflected in the floor plan of
the room. Room 5 is problematic because over 50 percent
of the area of the room is occupied by tall cabinets that are
not in the floor plan.
Fig. 5: Room Configuration

Table 1: Accuracy of the algorithm for each room in Fig. 5
across three repeated sets of measurements

            Error (Feet)
            Set 1        Set 2        Set 3        Mean   Std
Room 1      0.61 ± 0.62  0.45 ± 0.28  0.47 ± 0.31  0.51   0.40
Room 4      0.55 ± 0.39  0.30 ± 0.25  0.35 ± 0.30  0.41   0.31
Room 5      1.20 ± 1.01  1.23 ± 0.93  1.03 ± 1.12  1.15   1.02
Room 3      1.02 ± 0.57  0.84 ± 0.39  0.89 ± 0.46  0.91   0.47
Room 6      2.88 ± 3.45  2.63 ± 2.70  2.62 ± 2.72  2.71   2.59
Hall        2.77 ± 1.44  2.53 ± 1.22  2.59 ± 1.07  2.63   1.24

In our experiments, these cabinets reflect the ultrasonic
signals in the same manner as walls. In a similar room
without the cabinets (Room 4), the error is much lower.
Such problems can be addressed by adding such significant
elements to the floor plan. The error in Room 6, however,
cannot be addressed in a similar fashion. When the lab
benches were inserted as simple walls in the floor plan, we
did not see a decrease in the average error. Our experiments
indicate that the lab benches, combined with the equipment
densely distributed on the lab benches, reflect ultrasonic
signals arriving from any direction. Thus, the model of
reflection used for walls is not correct for this situation. To
account for this, the simulation model was updated to
include a new type of object that reflects signals
irrespective of the angle of incidence. This new object was
placed in the floor plan in the location of the benches. This
modification to the floor plan results in a significant
improvement in the results for Room 6 and Room 3, as
shown in Table 2. Room 6 shows a larger improvement
because the percentage of the room occupied by lab
benches was much larger than in Room 3. In practice,
correcting the floor plan could be done by the user online in
a similar fashion to that described in [5]. Fig. 6 shows the
quality of match for Room 3 and Room 6 with and without
the modification.

Table 2: Accuracy of the algorithm after adding the new
reflective element type to the floor plan of those rooms

              Error (Feet)
              Set 1        Set 2        Set 3        Mean   Std
Room 3 (old)  1.02 ± 0.57  0.84 ± 0.39  0.89 ± 0.46  0.91   0.47
Room 3 (new)  0.77 ± 0.68  0.54 ± 0.41  0.52 ± 0.47  0.61   0.62
Room 6 (old)  2.88 ± 3.45  2.63 ± 2.70  2.62 ± 2.72  2.71   2.59
Room 6 (new)  1.15 ± 0.95  1.52 ± 0.88  1.54 ± 0.99  1.40   0.94

Fig. 6: Quality of match with & without simulation model
modification
CONCLUSIONS
The design of a wearable autonomous system for an indoor
environment was described, including the choice of
sensors, number of sensors, and the algorithm used for
calculating location and user orientation. Several choices of
type of sensors, number of sensors and algorithms are
available for a location awareness system, but their use in a
wearable system is restricted. The sensors were chosen
considering their usage, cost, power consumption and
wearability. The need to understand the complex behavior
of the ultrasonic sensors motivated the use of a simulation
model that can predict actual sensor behavior. Also, a
feature addition in the existing simulation model is
proposed that can enhance the prediction of sensor
readings in cluttered environments. The two-part
location awareness algorithm computes the location and
orientation within a room and determines the user’s
movement between rooms.
The use of the weighting system ensures that the most
prominent readings of the sensors are given more emphasis
than readings generated by fixtures or moving people. A
simple prototype that can collect the readings
from the sensors was constructed and used to conduct the
experiments for location awareness. The choice of
parameters within the system, such as the number of
sensors and weights assigned to the sensor readings, was
experimentally justified. The performance of the algorithm
was demonstrated in a series of experiments involving
several rooms, confirming its efficacy. The location
awareness tests were successful even when the test
environment consisted of identical rooms.
REFERENCES
[1] Jones, M., Martin, T., Nakad, Z., Shenoy, R., Sheikh, T. and
Chanda, M., “Analyzing the use of e-textiles to improve
application performance”, in Proceedings of the 58th IEEE
Vehicular Technology Conference, Vol. 5, pp. 2875–2880,
IEEE Computer Society, Oct. 2003.
[2] Reitmayr, G. and Schmalstieg, D., “Location based
applications for mobile augmented reality”, in Proceedings
of the 4th Australasian User Interface Conference, Vol. 18,
pp. 65–73, 2003.
[3] Hightower, J. and Borriello, G., “Location systems for
ubiquitous computing”, IEEE Computer, Vol. 34, pp. 57–66,
August 2001.
[4] Chandra, M. and Jones, M.T., “E-textiles for autonomous
location awareness”, IEEE Transactions on Mobile
Computing, Vol. 6, No. 4, pp. 367–377, April 2007.
[5] Lee, S.W. and Mase, K., “Activity and location recognition
using wearable sensors”, IEEE Pervasive Computing, Vol. 1,
pp. 24–32, 2002.
[6] Nakad, Z.S., “Architectures for e-textiles”, PhD thesis,
Bradley Department of Electrical and Computer
Engineering, Virginia Tech, 2003.
[7] Farringdon, J., Moore, A.J., Tilbury, N., Church, J. and
Biemond, P.D., “Wearable sensor badge and sensor jacket
for context awareness”, in Proceedings of the 3rd
International Symposium on Wearable Computers (ISWC),
pp. 107–113, Oct. 1999.
[8] Clarkson, B., Mase, K. and Pentland, A., “Recognizing user
context via wearable sensors”, in Proceedings of the 4th
International Symposium on Wearable Computers (ISWC),
pp. 69–75, Oct. 2000.
[9] Weiser, M., “The computer for the 21st century”, Scientific
American, Vol. 265, pp. 66–75, Sept. 1991.
[10] Priyantha, N.B., Chakraborty, A. and Balakrishnan, H., “The
Cricket location-support system”, in Proceedings of the 6th
ACM International Conference on Mobile Computing and
Networking (MobiCom), pp. 32–43, 2000.
[11] Post, E.R., Orth, M., Russo, P.R. and Gershenfeld, N.,
“E-broidery: Design and fabrication of textile-based
computing”, IBM Systems Journal, Vol. 39, No. 3&4,
pp. 840–860, 2000.