
Development of an Internet of Things (IoT) Enabled Household
Adeena Javed
[email protected]
Kapil Kanwar
[email protected]
Max Legrand
[email protected]
Noam Miller
[email protected]
Samuel Scherl
[email protected]
New Jersey Governor’s School of Engineering and Technology 2016
July 22, 2016
Abstract

Grocery shopping can be a hassle, and remembering exactly what to buy can be even more trying. This project developed the technology behind an Internet of Things enabled pantry. Integrated with cloud computing, the technology would monitor the pantry and alert the user, either via text or a webapp, about shortages and recommend products for a grocery list. The system uses an Arduino microcontroller to collect weight information about products in the pantry, a Raspberry Pi microcomputer equipped with a camera to take pictures of the pantry, and a gateway server to process this information to be sent to the central server. The smart pantry uses various Python libraries such as PiCamera to operate the camera, ZBar to read and recognize barcodes, and OpenCV to process the images. The smart pantry also uses application programming interfaces (APIs) such as Twilio to text the user and Walmart's API to get information to recommend products. At an initial $300 for a home smart pantry and gateway and an additional $40 for every additional smart appliance, the system is inexpensive. With a better camera and better weight sensors, the smart pantry could accurately monitor the pantry, assisting users in their household errands.
1 Introduction
The Internet of Things (IoT) concept
is to make objects and appliances more
intelligent through interconnectivity. By
collecting and analyzing data aggregated
from multiple sources throughout the Internet, the IoT makes people’s lives easier
and gives them insight into the patterns
that govern the users' lives. It is already
used in environmental management, infrastructure management, health care systems, transportation, home automation,
and much more to make society function
more smoothly [1]. IoT enabled equipment is also used to assist transportation systems through traffic control, electronic toll systems, and safety vehicle assistance. The IoT is also applied to automated health monitoring with specialized sensors to oversee the general health
of senior citizens as well as track exercise
through products like Fitbit.
Focusing on this project specifically,
IoT enabled devices can be used to
make the home smarter by controlling
the mechanical and electrical operations
of households or tracking the status of appliances, from self-learning thermostats to
remote controlled security systems. The
smart pantry concentrates on simplifying
the process of buying and making food by
notifying users when pantry items run out
and by providing recommendations of new
items to buy based on the items already in
the user’s pantry. Using a force-sensitive
resistor, a chipKIT, a Raspberry Pi, and a
high-resolution camera, the smart pantry
is able to identify items in the user’s
pantry, weigh them, determine when an
item is almost empty, and notify the user
about what should be restocked based
on previous patterns of use. Users are
able to remotely view the pantry’s contents and receive purchase recommendations through the smart pantry web application. There is also the option to set up
the smart pantry texting service through
the webapp, which would give the user access to all of the same features available
online through the wider network of SMS.
On the go or abroad, users are able to see
exactly what is in their pantry. As a
bonus, the technology would recommend
products, draft shopping lists, and offer
recipes, ameliorating the everyday problems that surround bringing food to the
table.
2 Background

2.1 An Internet of Things Enabled Household
Internet of Things enabled devices
have become more common in households
over the past decade. Some well-known
products are the Nest Thermostat, Philips
Hue LED light bulbs, and the Samsung
Family Hub Refrigerator. The Nest Thermostat is a self-learning thermostat that
enhances heating and cooling in homes
while saving energy. It collects data on
the user's temperature preferences and automatically sets a schedule based on the
user’s living patterns and seasonal changes
in just a week [2]. Another innovative IoT
enabled device is the Philips Hue LED
lighting system, which is programmed to
learn lighting patterns in a household and
imitate those routines. For example, it
can dim the lights at night or turn on
the outdoor lights before the user arrives
home from work. Another IoT enabled
product is the Samsung Family Hub Refrigerator, which has three built-in cameras that take photos of what is in the
fridge every time the door closes. The
user is able to access these photos through
the user’s cell phone to manage what is
in the fridge [3]. The smart pantry detailed in this paper took inspiration from
the Samsung fridge by offering real time
monitoring of the pantry and recommending new items to try (see Figure 1).
Figure 1: The Samsung Family Hub Refrigerator helped to inspire this project [3].
2.2 Raspberry Pi
The Raspberry Pi is a miniaturized
computer about the size of a credit card,
an appropriate size for a computer that
needs to sit between pantry shelves (see
Figure 2). It runs its own distribution
of the open-source Debian Linux operating system, called Raspbian, which is very
similar to the Linux operating system for
desktops. Each Raspberry Pi 2 unit costs
$35, making it suitably inexpensive for
this application [4]. The Pi comes with
USB ports through which it can communicate with microcontrollers such as the Arduino over a serial interface. The Pi also has a built-in camera port for its camera module, from which the Pi receives images. Those images are sent to the server to be analyzed using image processing and barcode recognition. Python was used to program the Raspberry Pi, as it comes preinstalled on Linux systems.
Figure 2: The project used the Raspberry
Pi 2 depicted above [5].
2.2.1 Raspberry Pi Camera Module
The camera module is a fixed-focus
camera specifically designed for the Raspberry Pi. Fixed-focus means that the focus of the camera is set to infinity by default; practically, this means any object closer than 11 inches appears blurry.
The camera captures images in 1920x1080
pixel resolution, which is extremely high
quality for such a low price. This makes
image processing and barcode recognition possible, even though the images are
blurry. Because it is small and out of the
way, it is well suited for this application,
as a larger camera would be difficult to
place and store in the pantry.
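For reference, capturing such a still with the PiCamera library can be sketched as follows; the output path and the warm-up delay are assumptions made for the example, not details taken from the project.

```python
# Hypothetical sketch of grabbing a still image with the PiCamera library.
# The output path and warm-up delay are assumptions for the example.
import time
from picamera import PiCamera

def capture_shelf_image(path="/tmp/pantry.jpg"):
    """Capture one 1920x1080 still of the pantry shelf."""
    camera = PiCamera()
    try:
        camera.resolution = (1920, 1080)
        time.sleep(2)  # let the sensor settle before capturing
        camera.capture(path)
    finally:
        camera.close()
    return path
```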
2.3 Gateway Server
In order to perform the more complicated image processing calculations that
could not be performed efficiently by the
small Raspberry Pi, an Intel i7-based
custom-built computer was used. The
computer is essentially a standard machine in a relatively portable case, although not nearly the same size as the
Raspberry Pi. Like the Pi, it runs Linux,
which means that programming on it is
straightforward and intuitive. Because it
has a more powerful processor, it can run
calculations that take over thirty seconds
on the Raspberry Pi in less than five seconds. However, it is not small enough to
fit within the pantry, so it must be located
outside the pantry, communicating with
the Raspberry Pi for sensor and camera
input. In turn, it communicates with the
central server (which is hosted on this device itself for testing purposes), to send
information about the user’s pantry, making it a gateway server for all devices in
the household. The gateway is also programmed using Python, due to the simplicity of fast Python development and
the ability to import various necessary libraries easily.
2.4 Arduino
Figure 3: An Arduino was used to read
raw sensor data from a force-sensitive resistor [6].
The Arduino is a microcontroller used
to control basic electronic circuits (see
Figure 3). Each Arduino comes integrated
with a C++ development environment, allowing low-level control of both input and
output. For example, a force-sensitive resistor can be connected to the Arduino.
Unfortunately, the value returned by the
sensor is variable and not linear to the actual weight of the product, so special processing is necessary to get a useful value.
The raw sensor value is sent to the Raspberry Pi through a serial USB connection,
and the Raspberry Pi performs the actual
processing of the value for later use. In
this particular project, a chipKIT Uno32
was actually used because Arduinos were
unavailable; however, the two are practically identical, and for the sake of clarity
and brevity, the remainder of this paper
will refer to the chipKIT Uno32 as an Arduino.
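The Pi-side read can be pictured with a short pyserial sketch like the one below; the port name, baud rate, and the assumption that the Arduino prints one integer reading per line are illustrative, not the project's actual code.

```python
# Hypothetical sketch of how the Raspberry Pi could read the raw sensor
# value that the Arduino prints over its USB serial connection.
# The port name and baud rate are assumptions for the example.
import serial

def read_raw_value(port="/dev/ttyUSB0", baud=9600):
    """Read one raw force-sensitive-resistor reading from the Arduino."""
    with serial.Serial(port, baud, timeout=1) as arduino:
        line = arduino.readline().decode("ascii", errors="ignore").strip()
    return int(line) if line.isdigit() else None
```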
2.5 Application Programming Interface
An Application Programming Interface (API) is a software interface that a
company or an individual releases to the
public. Coupled with the release is documentation and instructions on use. APIs
allow others to use tools that have already
been created in order to facilitate processes and speed up work. In this project,
three APIs are utilized: Walmart, UPC,
and Twilio. The Walmart API is used to
allow for product recommendations to be
provided to the user. The UPC API is
used to get product information on objects that can be scanned for barcodes.
Twilio is used in order to send and receive
text messages from the user.
2.6 Django

Django is a free, open-source framework for web development in Python. It allows for easy and rapid website deployment. The webapp was written within the Django framework, which made it easy to display user-specific information in Hypertext Markup Language (HTML) formatting. Django also makes it easy to store user-specific information, such as username and phone number, in Structured Query Language (SQL) tables; SQL is the standard query language for web server databases.

3 Methods and Experimental Procedure

3.1 Data Collection

The first step in creating a smart pantry was to build functions to obtain information about the products in the pantry and measure relevant quantities. This was split primarily into two problems: recognizing products in the pantry, as well as their locations, and measuring each object's weight based on its location in the pantry. These two problems were solved in parallel and combined afterwards, because both are needed to assess the status of products in the pantry.

3.2 Product Recognition

The first step in data collection was to distinguish and recognize products in the pantry. The easiest way to accomplish this was to read barcodes, which appear in accessible locations on nearly every product and link to extensive product information. Because Python was the programming language used, the library of choice for image processing was OpenCV.

Initially, code was written to identify the location of a barcode based on what were identified as the two main characteristics of a barcode: rapid changes from black to white and a rectangular region of certain dimensions. To identify rapid changes in color, the image gradient was taken using the Sobel filter, which essentially finds the difference in brightness in the X and Y directions. The magnitude of the vector formed by these two values was then taken to determine the total color gradient at each point. All the points with an image gradient above a certain threshold were taken as possible barcode candidates and placed in a new image. This image was smoothed and then turned into a series of polygons using the OpenCV library. If the polygons were sufficiently close to common barcode dimensions, they were officially considered barcodes and barcode reading would then begin. However, problems were encountered using this technique because normal text, which also has a high image gradient, was often mistaken for a barcode. The top two images in Figure 4 demonstrate this error. As a result, it simply became more practical to use a barcode reading library than to try to implement a more complex image processing algorithm.
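A minimal sketch of this gradient-and-contour approach is shown below; the function name, threshold values, and kernel sizes are illustrative assumptions rather than the project's actual code.

```python
# Hypothetical sketch of the initial gradient-based barcode locator.
# Thresholds and kernel sizes are illustrative, not the project's values.
import cv2

def find_barcode_candidates(image_path, min_aspect=2.0):
    """Return bounding boxes of regions that look like barcodes."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Sobel gradients in X and Y, then the gradient magnitude at each pixel
    grad_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=-1)
    grad_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=-1)
    magnitude = cv2.convertScaleAbs(cv2.magnitude(grad_x, grad_y))

    # Keep only high-gradient pixels, then smooth and close gaps between bars
    _, thresh = cv2.threshold(magnitude, 100, 255, cv2.THRESH_BINARY)
    blurred = cv2.blur(thresh, (9, 9))
    _, blurred = cv2.threshold(blurred, 100, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
    closed = cv2.morphologyEx(blurred, cv2.MORPH_CLOSE, kernel)

    # Outline the remaining blobs; [-2] picks the contour list across
    # OpenCV versions with different return signatures.
    contours = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    boxes = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        if h > 0 and w / float(h) >= min_aspect:  # wide, flat rectangle
            boxes.append((x, y, w, h))
    return boxes
```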
Due to the difficulties of creating a barcode recognition algorithm from scratch, the ZBar library, written for Python, became the library of choice for barcode recognition. ZBar is very accurate when presented with zoomed-in, high-quality, focused images of barcodes. However, the
barcodes often appeared out of focus in
the images taken by the Raspberry Pi
camera, which also had no ability to automatically adjust the focus. This presented
numerous difficulties, because the ZBar
library could not recognize the barcodes
and return the corresponding Universal
Product Code. A significant amount of
pre-processing was necessary to make the
barcode recognition possible.
The first attempted solution was to try to clarify the difference between the
black and the white lines. When white
lines were manually drawn between the
black lines of the barcode, the ZBar library was able to decode the image. However, detecting lines using a Hough transform proved to be a difficult task and
hardly more accurate than the ZBar library alone. The next attempted solution was to use a simple threshold, because even though the image was blurry,
there was still usually a defined difference between the black and white regions.
This also failed, because the difference between the two regions varied under different lighting conditions. Then, testing various threshold values between black and white proved successful for many test images of barcodes, but still failed under suboptimal lighting conditions. Finally, an adaptive thresholding algorithm was used, which adjusts for varying light conditions throughout the image and works in many more cases. The results of the adaptive thresholding algorithm are shown in Figure 4.
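A pre-processing and decoding step along these lines might look like the following sketch; the block size, the offset constant, and the use of the pyzbar bindings to call ZBar are assumptions made for the example, not a record of the project's exact code.

```python
# Hypothetical sketch of the pre-processing pipeline: adaptive thresholding
# with OpenCV, then barcode decoding through the ZBar decoder (the pyzbar
# bindings are assumed here as one way to call ZBar from Python).
import cv2
from pyzbar import pyzbar

def read_barcodes(image_path):
    """Return the barcode strings decoded from an image of the pantry."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Adaptive thresholding: each pixel is compared against the mean of its
    # local neighbourhood, so uneven lighting across the shelf is tolerated.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY, 25, 10)  # block size and offset are guesses

    # ZBar scans the cleaned-up image and returns any symbols it finds.
    return [symbol.data.decode("utf-8") for symbol in pyzbar.decode(binary)]
```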
To detect the product’s location, a
similar approach was taken. The image
was simply sieved, assuming a brightly
colored base, such that the objects in the
pantry were black, and the background
was white. Then, the image was smoothed
using the OpenCV library such that stray
pixels were switched from either white to
black or black to white to match the pixels surrounding them. This resulted in
smooth shapes whose boundaries could
be easily found. Once the boundaries
were found, the center of each product was found in the image and mapped to the weight sensors in the pantry that were depressed by that product, which allowed the weight-sensing code to figure out the weight of each individual product based on all the weight sensor values.
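A sketch of this localization step, under the same bright-base assumption, might look like the following; the threshold and blur values are illustrative placeholders.

```python
# Hypothetical sketch of locating each product's centre in the shelf image.
# The bright-base assumption and the constants are illustrative.
import cv2

def product_centers(image_path):
    """Return (x, y) pixel centres of dark objects on a bright background."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Products appear dark against the bright base, so invert the threshold.
    _, mask = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.medianBlur(mask, 9)  # remove stray single pixels

    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    centers = []
    for contour in contours:
        m = cv2.moments(contour)
        if m["m00"] > 0:  # centroid of each smooth shape
            centers.append((int(m["m10"] / m["m00"]),
                            int(m["m01"] / m["m00"])))
    return centers
```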
Figure 4: The top two images depict the previous version of the barcode detection system,
which often detected barcodes incorrectly. The bottom two images show the current version. On the left is a barcode before processing and on the right is a barcode after processing,
ready to be scanned by the ZBar barcode reading library.
3.3 Weight Detection

The weight sensing was accomplished by using a force-sensitive resistor connected to an Arduino. As mentioned earlier, the force-sensitive resistor is noisy, depending on slight changes in object placement, and extremely inaccurate, with the voltage across the circuit being non-linear with respect to the actual force applied. As a result, significant effort was required to be able to actually process the information from the force-sensitive resistor. The first step was creating a solid base for the resistor such that product placement would have little effect on the value read. This was accomplished by creating a small circular base out of construction paper. The base would actually contact the resistor, ensuring that all force would be directed through the resistor, and not the table.

The next step was to translate the values from the voltage across the circuit to a consistent force value in grams. This presented difficulties because the data initially appeared to be very nonlinear and very inconsistent. The first attempted solution was to average the sensor input over time, to get a more reliable and constant reading. Initially, 100 samples were taken over the period of a second, but this was reduced for the sake of efficiency to 20, which still produced relatively reliable values. Then, linear regression was used to produce a linear equation that mapped the voltage value to weight in grams based on the weight of known objects such as a cell phone and a wallet (see Figure 5). These measurements were verified with a digital scale.

Figure 5: The graph shows the relationship between the actual mass of the object and the voltage read by the Arduino across the force-sensitive resistor.
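The averaging and calibration step can be pictured with the short sketch below; the calibration pairs are placeholder numbers, not the project's measured data.

```python
# Hypothetical sketch of the averaging and linear-regression calibration.
# The calibration pairs below are placeholders, not the project's data.
import numpy as np

# (raw sensor value, mass in grams) recorded with reference objects such as
# a wallet and a cell phone, verified on a digital scale.
CALIBRATION = [(52, 0.0), (310, 87.0), (545, 172.0)]
slope, intercept = np.polyfit([raw for raw, _ in CALIBRATION],
                              [grams for _, grams in CALIBRATION], 1)

def to_grams(raw_samples):
    """Average ~20 raw readings and map the mean onto the fitted line."""
    mean_raw = sum(raw_samples) / float(len(raw_samples))
    return slope * mean_raw + intercept
```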
3.4 Data Processing
To process the information, two APIs
were used. First, the scanned barcodes
were used to pull product information,
i.e. weight, name, and description. Most
products have Universal Product Codes
(UPC), which encode a number that
uniquely identifies the product. The UPC
was sent to the UPCitemdb API which
returned the weight of the product, the
name of the product, and a description
to show on the web interface. Using the
official weight of the product and the actual weight measured by the weight sensor
in the pantry, the system can determine
whether or not the user needs to buy the
product in the next shopping trip and put
it in the automatically generated shopping list.

In addition, the contents of the user's pantry were used to predict products that the user might also be interested in. For this, the Walmart Product Recommendation API was used. Walmart is a large company with millions of data points regarding customer preferences, and the algorithm used by this project combines the top ten recommendations for each product and weights them based on the last time the product appeared in the pantry. This preference for recent products decays exponentially over time. When all the products' recommendations, weighted by time, are aggregated, a list of recommended items that are relevant specifically to the user can be constructed.
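One way to picture the decay-weighted aggregation is the sketch below; the half-life, the data format, and the rank weighting are assumptions, since the paper does not specify the exact constants, and the per-product recommendation lists are taken to come from the Walmart API.

```python
# Hypothetical sketch of the exponential time-decay weighting used to rank
# recommendations. The half-life and data format are assumptions.
import math
import time
from collections import defaultdict

HALF_LIFE_DAYS = 14.0  # assumed: a product seen two weeks ago counts half

def rank_recommendations(pantry_items):
    """pantry_items: list of dicts like
       {"last_seen": unix_time, "recommendations": ["item A", "item B", ...]}
    Returns recommended item names, most relevant first."""
    scores = defaultdict(float)
    now = time.time()
    for item in pantry_items:
        age_days = (now - item["last_seen"]) / 86400.0
        weight = math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)
        for rank, name in enumerate(item["recommendations"][:10]):
            scores[name] += weight / (rank + 1)  # earlier suggestions count more
    return sorted(scores, key=scores.get, reverse=True)
```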
Figure 6: The force-sensitive resistor was connected to the chipKIT Uno32 setup using a
breadboard as shown above.
3.5 User Interface

3.5.1 Web Application
The web application will provide the
users with an interface to receive and communicate information about their pantry.
Users can log in and register their personal smart pantries with an alphanumeric Globally Unique Identifier (GUID)
provided on purchase of the device. Users
can also register their phone numbers for
the texting interface. The device registration form stores the user's GUIDs and phone numbers in a Structured Query Language (SQL) database, as is standard in website maintenance. This information is
stored with the username as the primary
key. To display the products in a user’s
pantry, the username is fed to the SQL
database which returns the user’s GUIDs,
which can be used to locate pantry information stored on the server in the form
of a Javascript Object Notation (JSON)
file. The same approach is used to bring
up product recommendations specific to a user.
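One way to picture this storage layer is a minimal Django model along the following lines; the model and field names are illustrative assumptions rather than the project's actual schema.

```python
# Hypothetical Django model for the registration data described above.
# Field names and lengths are assumptions, not the project's actual schema.
from django.contrib.auth.models import User
from django.db import models

class PantryDevice(models.Model):
    # Each registered pantry belongs to one account (username is the key).
    owner = models.ForeignKey(User, on_delete=models.CASCADE)
    guid = models.CharField(max_length=36, unique=True)   # from the purchase
    phone_number = models.CharField(max_length=20, blank=True)  # for texting

    def __str__(self):
        return "%s (%s)" % (self.owner.username, self.guid)
```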
3.5.2 Texting Interface
In order to send and receive text messages to a user, the Twilio API for Python
was used. This API allows the user to request access to the contents of the pantry
via text and also allows the pantry to
send texts to individual users when food is
reaching a low amount. The texting interface allows the pantry to send notifications to the user as food levels deplete, enabling the pantry to communicate directly with the user.
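A low-stock alert sent through the Twilio helper library for Python can be sketched as follows; the credentials, phone numbers, and message text are placeholders, and the Client class name reflects newer versions of the helper library.

```python
# Hypothetical sketch of sending a low-stock alert with the Twilio REST
# helper library for Python. Credentials, numbers, and text are placeholders.
from twilio.rest import Client

ACCOUNT_SID = "your_account_sid"   # assumed to come from configuration
AUTH_TOKEN = "your_auth_token"
PANTRY_NUMBER = "+15551230000"     # the Twilio number the pantry texts from

def send_low_stock_alert(user_number, product_name):
    """Text the user that a product is close to running out."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    client.messages.create(
        to=user_number,
        from_=PANTRY_NUMBER,
        body="Smart pantry: %s is almost empty." % product_name)
```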
3.6 Network
Ultimately, the system comprises five layers of interaction: the user, the server, the gateway, the Raspberry Pi, and the Arduino. When the user wants information, the user obtains it either through text messaging or through the web application, both of which are provided by the server. The server receives semi-processed information periodically from the gateway about the products within the pantry and processes the information fully to present to the user when requested. Before that, the gateway receives information in the form of images and weight values from the Raspberry Pi, along with any other information from any other Internet of Things enabled devices that may be added to the household. In the future, if more features or products are added, the gateway will be able to add them in a modular fashion, acting as the one node connecting the central server to all the devices in each household. Then, the Raspberry Pi receives information directly from the camera module and the Arduino connected to it. Finally, the Arduino reads its information as an analog input from the force-sensitive resistor, completing the lowest-level layer of the system.

Figure 8: The relationships between all the components of the Internet of Things enabled pantry.
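To make the gateway-to-server hop concrete, a periodic update could be posted as JSON roughly as in the sketch below; the endpoint URL, GUID, and payload fields are invented for the example.

```python
# Hypothetical sketch of the gateway pushing a pantry update to the central
# server as JSON. The URL, GUID, and payload fields are placeholders.
import requests

SERVER_URL = "http://example.com/api/pantry/update"  # assumed endpoint

def push_update(guid, products):
    """products: list of dicts like {"upc": "012345678905", "grams": 410.0}"""
    payload = {"guid": guid, "products": products}
    response = requests.post(SERVER_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()
```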
4 Results

The current iteration of the smart pantry has basic functionality with room for expansion. Using just a single weight sensor and a Raspberry Pi, the smart pantry was capable of constantly updating the server as to the status of the product. This data is then processed in conjunction with the original weight of the product to determine if the product needs to be refilled. The code would then notify the user through text if the product is getting very close to being empty. This information would also be updated so that the user would have access to it constantly through the web interface. This basic
functionality provides the first real-time
integrated system for food monitoring and
notification. With the development of a
more advanced weight and image recognition system, and the possibility of an
app for easier access, the smart pantry
could become a very compelling product
for even the everyday consumer.
4.1 Limitations and Setbacks
Throughout the process of constructing the smart pantry, two major obstacles were faced. One major challenge was attempting to transition
from barcode recognition to object-based
recognition for the convenience of the user.
With barcode recognition, the products
in the pantry need to be in very specific
positions to be recognizable to the system. First of all, a non-fixed-focus camera would make the problem much simpler, because products would not have to
be placed an exact distance away from the
camera. However, the idea of using barcodes itself could be improved upon, because oftentimes, users will not want to
place objects facing the camera because
it would be inconvenient. If the system
were somehow able to recognize objects
based on what they look like rather than a
specific code, the placement of the pantry
items would not be a concern because
the items could be facing any orientation.
This would allow for more compact storage of products without confusing the system. Ultimately, the product would be
more convenient and easier to use.
The other struggle was that the data
collected from the weight sensors was not
in grams. When trying to come up with
a conversion factor, the data was neither consistent nor accurate. Various designs were created to make it easier to place objects consistently on the sensor, but there was still significant error. Minor inconsistencies in the weight data were manageable, however, because the system only uses the weights to decide when an item is finished. Clearly, an empty item weighs much less than a full item, and the difference in weight is certainly greater than the weight sensor's margin of error. Even still, the less accurate data could be an inconvenience to the user if it leads the user to buy unnecessary products when a product is half full. In order to resolve this problem in the future, more reliable weighing scales would be required, because the force-sensitive resistors currently being used are simply not made for the application of weighing large-scale items.
5 Conclusion

5.1 Overview
The smart pantry prototype was designed to be an exploratory device demonstrating the feasibility and usability of a
pantry based smart device. The prototype is currently capable of taking a single item, determining its weight through
the use of the weight sensor, determining its type and brand through the use of
the barcode recognition, and keeping the
user updated on the state of this product
in real time, while also providing notifications if the product seems to be nearing empty. While much work would still be needed to develop a usable product, the prototype certainly shows how a
simple design would be immensely useful
to the user.
5.2 Future Steps
The prototype that was made used
barcode recognition to determine what
objects were in the pantry. Although this
was functional and worked at identifying
barcodes, and by extension items, this
would not be feasible for a commercial
product. Because barcodes come in a variety of different orientations, some of which
might not be visible or lighted sufficiently
by a camera, image recognition would be
ideal in developing this product further.
Many websites, such as Wolfram Alpha,
offer image-based recognition, yet these services are not yet developed enough to recognize the brand and size of products.
Image recognition or some other alternative would be absolutely critical in creating a usable product.
The smart pantry prototype only features one weight sensor that can measure
a single object and report its mass back to
the program. In order to make a practical
pantry, there would have to be a web of
weight sensors on the floor that would all feed their readings back into the software, which would distinguish which weight sensors are carrying which product's weight. This would
allow users to place their products anywhere in the smart pantry without concern for what weight sensors the product
was resting on.
Because the smart pantry is not yet full scale and only houses one product, it was not possible to build
in functionality to recommend recipes to
users based on what is in their pantry.
This would be a great feature to develop in
a commercial product as it would take the
stress off the users to decide what to make
based on the ingredients available in the
kitchen. This would incorporate aspects
of machine learning and artificial intelligence, similar to the already existing Chef Watson developed by IBM, which takes in a list of ingredients and outputs possible recipes [7].
6 Acknowledgements
The authors would sincerely like to
thank their project mentor, Nick Lurski,
for his constant assistance throughout this
project and the Residential Teaching Assistant, Lawrence Maceren, for his guidance. The authors also gratefully acknowledge Dean Jean Patrick Antoine and
Dean Ilene Rosen for their continued support of aspiring engineers through the NJ
Governor’s School. Thanks for providing
so many students with this opportunity to
connect, learn, and create.
Additionally, thanks to the NJ Governor’s School of Engineering and Technology for providing the forum for this collaboration. This experience would also
not be possible without the many sponsors of GSET, who enable the school's continued existence. These sponsors include
Rutgers, the State University of New Jersey; Rutgers School of Engineering; Lockheed Martin; South Jersey Technologies;
and Printrbot.
7 References
[1] "Internet of Things (IoT)," What is the Internet of Things (IoT), SAS. [Online]. Available: http://www.sas.com/en_us/insights/big-data/internet-of-things.html. [Accessed: 16-Jul-2016].

[2] "Meet the Nest Learning Thermostat," Nest. [Online]. Available: https://nest.com/thermostat/meet-nest-thermostat/. [Accessed: 16-Jul-2016].

[3] "Home has a new hub," Samsung Family Hub Refrigerator. [Online]. Available: http://www.samsung.com/us/explore/family-hub-refrigerator. [Accessed: 16-Jul-2016].

[4] "Raspberry Pi 2 on sale now at $35," Raspberry Pi Blog, 2015. [Online]. Available: https://www.raspberrypi.org/blog/raspberry-pi-2-on-sale/. [Accessed: 16-Jul-2016].

[5] "Kick off the Holidays with The Complete Raspberry Pi 2 Starter Kit," AndroidGuys. [Online]. Available: http://www.androidguys.com/2015/11/20/deal-kick-off-the-holidays-with-the-complete-raspberry-pi-2-starter-kit-plus-a-bonus/. [Accessed: 21-Jul-2016].

[6] "Picture of an Arduino," eBay. [Online]. Available: http://i.ebayimg.com/00/s/nzmwwdczma=-=/z/fiwaamxqz7dtlwup/$_32.jpg. [Accessed: 16-Jul-2016].

[7] "Cognitive Cooking," IBM. [Online]. Available: http://www.ibm.com/smarter-planet/us/en/cognitivecooking/tech.html. [Accessed: 16-Jul-2016].