
International Journal of Software Engineering and Its Applications
Vol. 9, No. 6 (2015), pp. 53-62
http://dx.doi.org/10.14257/ijseia.2015.9.6.06
The User Activity Reasoning Model in a Virtual Living Space
Simulator
Bokyoung Park, Hyeongyu Min, Green Bang and Ilju Ko
Department of Global Media, Soongsil University, 369 Sangdo-ro, Dongjak-gu,
Seoul, 156-743 Korea
{bokyoungp, norwoods, banglgreen, andy}@ssu.ac.kr
Abstract
Smart homes identify the implicit intentions and needs of occupants by detecting
residents' activities and the internal environment, and they provide convenient
services accordingly. Context-awareness technology is essential to smart home
research, and context-awareness research in turn requires a large amount of data on
residents' activities in living spaces. Collecting data with a simulator is a recently
developed method for overcoming the difficulties of real-world data collection and
for obtaining consistent data. In this paper we present a simulator consisting of a
virtual space modeled on an actual living space, equipped with virtual sensors and a
virtual character. Through simulation we collected user context data with a high
probability of occurring in the real world. The collected data is analyzed with a
classifier, and the result is a user activity reasoning model for a virtual living space.
Keywords: Smart home, Context-awareness, Virtual space simulator, User activity
reasoning model
1. Introduction
Smart homes can be categorized into passive smart homes and active smart homes
depending on the service provided. Passive smart homes respond to residents’ commands,
whereas active smart homes act independently on the user's intentions or requests
through interaction with the user, and various studies on them are ongoing [1]. Smart home
technology has developed from mechanical operation by simple sensor detection, to
performing an independent function through detecting the internal environment and the
residents’ activities. For this smart home research, the technology to recognize the
occupants’ activities and to be aware of the context is essential. Context-awareness is
technology that can recognize human activities, voices and the environmental changes
surrounding people [2], and it makes it possible to identify their wishes and to control the
environment to provide necessary services to the occupants. Context-awareness research
in living spaces requires further study of data collection, including data on activities of
daily living. To this end, various studies are in progress in actual living spaces equipped
with devices and with users wearing sensors [3].
However, it is difficult to consistently collect data during a long experimental period in
a real living space [4]. Accordingly, the activity types of users and data can be limited in
experiments [5]. To remove the constraints that cause this situation, such as the high
costs, long duration, and participants' resistance to the sensors and wearable devices
needed in a real environment, researchers can use simulators that build a virtual space
and generate data in a relatively short period of time. Much recent progress has been
made in the development of such applications [6].
In this paper, we will review works related to the use of simulators in smart homes,
propose a user activity model in a virtual living place, and draw some conclusions from
the results of context-data collection using simulation and activity reasoning.
ISSN: 1738-9984 IJSEIA
Copyright ⓒ 2015 SERSC
2. Related Works
Smart home simulator research is divided into two areas: the implementation of user
activity recognition algorithms, and service validation using context-awareness [7].
The simulators used for user activity recognition algorithms include Persim [7] and the
WSN Simulator [6]; the simulators used for service validation with context-awareness
include UbiREAL [8] and the Smart house simulator [9].
Simulators for activity recognition research are based on event-driven simulation. The
actions in a virtual space are created as events and are used in the activity recognition
algorithm test with sensory data. Persim is a simulator that creates realistic sensory data
using 3D space, sensor, and smart-character modeling, and that generates scenarios
automatically. It creates data in a virtual space in which virtual characters live and
actuate the sensors, ultimately providing data similar to data from the real world [7].
Ariani, et al., used a
simulator to make a profile and action list of the residents in a house, and collected output
signals via virtual sensors in a virtual living space. To get a large amount of data, the
simulator automatically repeated the process using the users’ profiles and action lists and
executed the simulation process. The experiment results were saved in a database. The
researchers are using this data to develop a fall detection algorithm test [6].
Simulators for context-awareness are based on services with conditions and rules. The
conditions are the context information, including the occupants and the objects in smart
homes, and the rules are the actions related to the devices. The simulators aim to validate
predefined services and the functions of smart homes before their implementation in the
real world. UbiREAL is a simulation application to automatically control virtual devices
in a 3D virtual space based on contexts. In addition, UbiREAL provides a GUI module
allowing users to place virtual devices in a 3D smart home, and to change the state of a
virtual device and visualize the route of an avatar, so that users can intuitively observe the
action of the devices by context information [8]. The smart house simulator presented by
Jahromi, et al., is an application which allows programmers to design a smart house
equipped with power, temperature, light, and location sensors. The simulator allows for
various possible combinations of device states to be tested, independent of any interaction
with users [9].
3. User Activity Model
In this paper we present a simulator for collecting residents’ activity information and a
user activity reasoning model based on it. The simulator is a context data collection
module using a virtual living space, and the reasoning model is built by a learning
module from the simulator's data. This process is illustrated in Figure 1. To generate
the data for the reasoning model, we built a virtual living space in the simulator and
arranged objects with sensors to create a virtual character. The virtual character in the
living space is designed to perform a series of actions as if he/she is in an actual living
space. The virtual character can use the objects and home appliances in the virtual living
space in various combinations. In this space, activity and environment information is
collected through the data collection module. The data of a user’s activities is analyzed
via the classifier module. The activity classification results are used to create a model to
infer and validate the user’s activity.
Figure 1. User Activity Reasoning Model with Simulator
3.1. Virtual Space Model
In research using a simulator, a virtual space similar to an actual living space
should be created, and the context data should be collected from this space. In addition,
after configuring the virtual space, objects and sensors need to be placed in the virtual
space in accordance with the purposes for which the user uses the space [16].
Figure 2. Layout of a Virtual Living Space
The living space consists of a series of detailed spaces based on the main purposes of
each specific space. In each detailed space, various objects associated with peoples’
actions exist, and the spaces are further divided based on these objects [10]. Zhu, et al.,
segmented the experiment space into six semantic areas based on the probabilities of user
behavior patterns: a walking area, a bookshelf area, a bed sitting area, a bed lying area, a
sofa area, and a workstation area. In their study, these areas improved the accuracy of the
recognition results when used together with sensory data, user activity data, and
segmented area data. The spatial information of a user in a living space is thus used to
increase the accuracy of the recognition and inference of user activity [11].
The activities in a living space tend to occur in the space serving the corresponding
purpose. For example, personal hygiene activities occur mostly in a bathroom, and dining
activities occur in a kitchen. In this way, the area determines the specific activity. In
this study, the living space is divided into five space types based on the purpose of each
location: a living room, a kitchen, a bedroom, a hallway, and a bathroom. The overall
view and layout of the living space is shown in Figure 2.
Table 1. Location Classification by Purpose

| Location | Location-specialized area | Residence area | Comfort area |
|---|---|---|---|
| Living room | TV, Game console | Sofa, Carpet | Light, Air conditioner, Audio, Flowerpot, Sports equipment, Computer, Table |
| Kitchen | Burner, Sink, Water purifier, Refrigerator | Cupboard | Light |
| Hallway | Shoes | Mirror, Door | Shoe chest, Light |
| Bathroom | Toilet | Mirror, Bathtub, Washstand | Cabinet, Light, Washing machine |
| Bedroom | Bed | Desk, Dressing table, Stool, Computer, Mirror | Closet, Window, Lamp |
Additionally, Sung, et al., [10] classify location using three categories based on the
object of activity: sociality, purpose, and mobility. In this paper, focusing on the
category of purpose, we further divide the detailed space, using the characteristics of
goal and movement, into a location-specialized area, a residence area, and a comfort area
based on how furniture and appliances are used. The locations of these objects are shown
in Table 1.
Figure 3. Living Room and Segmentation Into Areas
The locations are also split into a static area and a moving area based on the
characteristics of movement along the flow lines in a space. The three areas defined in
Table 1 correspond to the static area; the remaining area is the moving area. The
segmentation of a living room based on purpose and mobility is shown in Figure 3. This
segmented area can be added to or changed accordingly when objects or spaces are added
or varied.
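The static/moving segmentation above can be sketched as a simple point-in-rectangle lookup. This is an illustrative sketch, not the paper's implementation; the area names echo Table 1, but the rectangle coordinates are made-up placeholders:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

# Static sub-areas of a living room; any point outside them is the moving area.
STATIC_AREAS = {
    "LocationSpecialized": Rect(0.0, 0.0, 2.0, 1.0),  # e.g., in front of the TV
    "Residence": Rect(0.0, 1.0, 2.0, 3.0),            # e.g., sofa and carpet
    "Comfort": Rect(2.0, 0.0, 4.0, 1.0),              # e.g., light and audio wall
}

def area_of(x: float, y: float) -> str:
    """Return the name of the static area containing (x, y), else 'Moving'."""
    for name, rect in STATIC_AREAS.items():
        if rect.contains(x, y):
            return name
    return "Moving"
```

Adding or moving objects then only requires editing the rectangle table, matching the paper's remark that the segmented areas can be changed when objects or spaces vary.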
3.2. User Context Data Collection
It is important to arrange the sensors in the virtual living space to capture the context
information of the user in that place and the status of the objects. We use a wide range of
sensors, including a camera sensor, current sensor, pressure sensor, contact sensor, light
detection sensor, temperature and humidity sensor. They are all designed to have the same
function they would have in the real world so that they collect data similar to real world
data in a virtual space. The placement of camera sensors and pressure sensors is illustrated
in Figure 4, and an explanation and location of each sensor is as follows.
Figure 4. Placement of the Sensors
Camera sensors are used in order to determine a user’s location. A camera is located in
each location: in the living room, kitchen, bedroom, bathroom, and hallway, as per the
areas illustrated in Figure 3. When a user enters each area, the location information of a
user is obtained by utilizing this image information. Current sensors detect the current
flow of electronic products whether on or off, and they are attached to the air conditioner,
TV, game console, audio, sports equipment, and computer. Pressure sensors detect the
pressure when a user sits on or uses a bed, sofa, stool, carpet, or toilet, and thereby
identify whether the object is in use. Cupboards, closets, the front door, other doors, and
the refrigerator door each have contact sensors to determine whether or not they are in use.
Each space is equipped with light detection sensors to detect light variation by season,
time and the user’s lighting usage. To measure changes of temperature and humidity, and
the way a user uses the air conditioner according to season and time, temperature and
humidity sensors are installed in the living room. All of these sensors are designed to
function exactly as they would in the real world.
Context information data for classification should be of high quality by two criteria:
discernment and dimension. It is desirable to minimize the dimensionality while
enhancing classification performance, with minimal time and effort required for
computation [12]. Features satisfying these two criteria are extracted from the data of the
residents in the living space, including a user's current behavior, spatial information,
objects in use, the gestures and posture of the user, previous activities, the start time and
duration of the current activities, and whether they vary on weekdays or holidays and
according to weather conditions.
The user activity data log collected using these extracted features is not meaningful
information in itself. The data needs to be given meaning in order to be usable in the
learning module; to do so, each activity's data needs to be classified by activity category.
The activity types of occupants in a living space are extracted from statistical data,
collected every five years in South Korea, on basic information about people's quality of
life and the activities they engage in [13]. In the user activity reasoning, we use activity
types such as meal preparation, eating, washing, putting on clothes, laundry, folding
clothes, cleaning, watching TV, using a computer, going out, relaxing, and sleeping. A
composite activity is a set of the user's activities in a virtual living space: each composite
activity is defined as basic actions, such as using the hands, sitting, lying down, standing,
bending at the waist, walking, and running, together with the environmental information
related to the user's behavior, i.e., the start time of the activity, its duration, the space, and
other environmental data. A user performs a series of actions according to the defined
activity type in the simulator, and the activity inference result of the classifier is also one
of the activity types used in the reasoning. A composite activity is thus composed of
some of the basic activities, including the use of objects, and is completed by performing
a series of basic activities with a virtual character in a simulation.
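The composite-activity structure described above can be sketched as a sequence of basic actions plus environmental context. The field names and validation logic are illustrative assumptions, not the paper's schema; the basic-action vocabulary is the one listed in the text:

```python
from dataclasses import dataclass, field

# The basic actions named in the text.
BASIC_ACTIONS = {"UsingHands", "Sitting", "LyingDown", "Standing",
                 "BendingAtWaist", "Walking", "Running"}

@dataclass
class CompositeActivity:
    name: str          # one of the 12 activity types, e.g. "WatchTV"
    space: str         # e.g. "LivingRoom"
    start_hour: int    # start time of the activity
    actions: list = field(default_factory=list)

    def add(self, action: str) -> None:
        """Append a basic action; reject anything outside the vocabulary."""
        if action not in BASIC_ACTIONS:
            raise ValueError(f"not a basic action: {action}")
        self.actions.append(action)

# Example: watching TV decomposes into a series of basic actions.
watch_tv = CompositeActivity("WatchTV", "LivingRoom", start_hour=9)
for step in ("Walking", "Standing", "UsingHands", "Sitting"):
    watch_tv.add(step)
```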
For this research we used statistical data based on activities in a living place to extract
the activity types. Since the activity types differ according to the characteristics of the
space and its associated objects, if the domain of the user's actions changes, for example
from the living space to an office or a shopping mall, then new activity types are
extracted using that place and its changed objects.
In this paper, we used supervised learning, a machine learning task of inferring a
function from labeled training data, after collecting user data labeled by user activity. For
the learning module in this research, we used the SVM classifier developed by Vapnik, a
Russian statistician, in 1995. SVM is currently recognized as the classifier with the best
generalization ability [14], and it shows the best performance in user activity recognition
research compared to other classifiers [15].
To obtain highly accurate results from the activity reasoning model, a certain amount
of data needs to be accumulated and used in the learning module. To train the SVM
classifier, data values must be represented as vectors of numbers rather than as
unstructured data. Therefore, context information created in the virtual space and
represented as character-type or categorical data has to be converted to numerical values.
Likewise, any data acquired from the defined sensors that has variable length or a
non-numeric form has to be transformed into numeric form. After this preprocessing, we
obtained the activity classification results by training the classifiers. We trained the
classifiers on a randomly selected subset of the data, and we validated the activity
reasoning on the remaining data. Once the training and validation of the proposed model
are complete, the model can infer specific activities from space, user, and environment
information.
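The categorical-to-numeric conversion described above can be sketched with plain one-hot encoding. This is a minimal sketch under assumptions: the column names follow Table 2, but the paper does not specify its exact encoding scheme:

```python
def fit_encoder(rows, categorical_cols):
    """Learn the category set per categorical column, then one-hot encode rows."""
    cats = {c: sorted({row[c] for row in rows}) for c in categorical_cols}

    def encode(row):
        vec = []
        for col in categorical_cols:
            # one indicator per observed category value
            vec.extend(1.0 if row[col] == v else 0.0 for v in cats[col])
        # numeric fields pass through unchanged
        vec.extend(float(row[c]) for c in row if c not in cats)
        return vec

    return encode

rows = [
    {"object": "TV",   "posture": "Standing", "time": 9, "duration": 9},
    {"object": "Sofa", "posture": "Sitting",  "time": 9, "duration": 22},
]
encode = fit_encoder(rows, ["object", "posture"])
# encode(rows[0]) -> [0.0, 1.0, 0.0, 1.0, 9.0, 9.0]
```

Vectors of this form can then be fed to an SVM implementation such as LIBSVM [17].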
4. Experiments
In this paper, we implemented the simulator using Unity3D to collect the user's
context data for the activity reasoning model. The simulator was composed of a virtual
living space, virtual sensors attached to the objects, and a virtual character. We collected
context data, including activity data, spatial information, and sensor values, through the
operation of the virtual character. Of the data collected from the simulation, some was
used to train the SVM classifier to learn the user's activities, and some was used to
evaluate the activity reasoning model.
Table 2. Dataset Generated Through the Simulator

| object | posture | location | area | time | duration (sec) | preActivity | activity | weather |
|---|---|---|---|---|---|---|---|---|
| TV | Standing | LivingRoom | LocationSp | 9 | 9 | MealPreparation | WatchTV | Sunny |
| TV | UsingHands | LivingRoom | LocationSp | 9 | 10 | MealPreparation | WatchTV | Sunny |
| TV | Standing | LivingRoom | LocationSp | 9 | 11 | MealPreparation | WatchTV | Sunny |
| TV | Walking | LivingRoom | LocationSp | 9 | 13 | MealPreparation | WatchTV | Sunny |
| TV | Walking | LivingRoom | Residence | 9 | 14 | MealPreparation | WatchTV | Sunny |
| TV | Standing | LivingRoom | Residence | 9 | 16 | MealPreparation | WatchTV | Sunny |
| Sofa | Standing | LivingRoom | Residence | 9 | 17 | MealPreparation | WatchTV | Sunny |
| Sofa | UsingHands | LivingRoom | Residence | 9 | 17 | MealPreparation | WatchTV | Sunny |
| Sofa | Standing | LivingRoom | Residence | 9 | 19 | MealPreparation | WatchTV | Sunny |
| Sofa | Sitting | LivingRoom | Residence | 9 | 22 | MealPreparation | WatchTV | Sunny |
Collecting data using a simulator is a method for overcoming the restraints on
collecting data in the real world, and for obtaining consistent data. Furthermore, repeated
execution of the simulation means data can be collected at a lower cost and in a shorter
period than in the real world. In addition, through the repeated execution of the
simulation, it is possible to obtain approximate data on factors that negatively affect the
performance and accuracy of the system, and on variations in the real world [6]. The
simulator that creates the context data is composed of a virtual space, virtual sensors, and
a virtual character. The virtual space contains detailed places: a bedroom, a living room, a
kitchen, a bathroom, and a hallway. Each detailed place is segmented into sub-areas
according to purpose and mobility. We then implemented a 3D model after placing
objects with the characteristics of each space. In the virtual space there exist the detailed
spaces and the sensors, which are programmed per object. Data such as the virtual
character's activity data, location data, and object usage is then collected. A user interacts
with objects by operating a keyboard and mouse and moving the character in the virtual
space. By operating the simulator, a user is able to move, use objects, and perform basic
and composite activities. When the user selects a composite activity and then moves in
the living space while performing an action, the sensors detect this change and generate
data, which is saved in a database as shown in Table 2.
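The change-driven logging described above can be sketched as follows. A plain list stands in for the database, and the field names follow Table 2; the function and variable names are illustrative assumptions:

```python
log = []  # stand-in for the database table of Table 2

def record(prev_state, new_state, context):
    """Append a row only when the sensed state changes; return the new state."""
    if new_state != prev_state:
        log.append({**context, **new_state})
    return new_state

context = {"location": "LivingRoom", "area": "LocationSp", "time": 9,
           "preActivity": "MealPreparation", "activity": "WatchTV",
           "weather": "Sunny"}

state = {"object": "TV", "posture": "Standing"}
state = record(state, {"object": "TV", "posture": "Standing"}, context)  # unchanged: no row
state = record(state, {"object": "TV", "posture": "Walking"}, context)   # changed: one row
```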
Figure 5. Simulation Screen
A virtual character spends 24 hours a day in the simulator's virtual space. However,
time in the virtual space is defined differently from time in actual space. When the user
starts a specific behavior, he/she can adjust the time value shown on the simulation
screen to compress time in the virtual space relative to actual time. In the experiment,
data is stored at one-hour intervals to reduce the complexity of calculation, and the
duration of an activity is represented in seconds. Before starting a specific activity,
we enter the values of the activity type, previous activity, temperature and other weather
conditions, and time, and then start the experiment. After the selected activity is completed,
we finalize it and proceed with the experiment by selecting another activity type. For
activities like sleeping and going out, which do not affect the state of the user or the
sensors, we can adjust the flow of time during the experiment (Figure 5).
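The adjustable time flow above can be sketched as a virtual clock with a scale factor. The class and the particular scale value are illustrative assumptions; the paper does not give its time model in code:

```python
class VirtualClock:
    """Tracks virtual time that can run faster than real time."""
    def __init__(self, scale: float = 1.0):
        self.scale = scale             # virtual seconds per real second
        self.virtual_seconds = 0.0

    def advance(self, real_seconds: float) -> None:
        self.virtual_seconds += real_seconds * self.scale

# During activities like sleeping or going out, time can be accelerated:
clock = VirtualClock(scale=60.0)       # 1 real second = 1 virtual minute
clock.advance(30.0)                    # 30 real seconds elapse
```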
To generate data, a user selects the activity type and performs the actions by
controlling the virtual character. The data is generated for each action type, is composed
of an object, state, space, time, and environmental information, and is stored in the
database. This data is created when a particular event occurs or the state of a sensor
changes. Using the simulator, we can repeatedly generate data while changing the
environmental factors of a series of actions in the living space. For the experiment, based
on the behavior of an adult with normal patterns, we created seven weeks' worth of data
while varying the environmental information, and as a result 66,520 cases of data were
recorded.
Table 3. Reasoning Rate for Activities

| Activity | Accuracy (%) |
|---|---|
| Meal preparation | 90.85 |
| Eating | 100.00 |
| Washing (face, body, brushing teeth) | 97.78 |
| Putting on clothes (make-up etc.) | 90.97 |
| Laundry | 76.00 |
| Folding clothes | 50.00 |
| Cleaning | 97.90 |
| Watching TV | 57.75 |
| Using a computer | 69.09 |
| Going out | 90.86 |
| Relaxing | 75.23 |
| Sleeping | 70.12 |
The 12 types of activity data created by the simulator are used in the user's activity
reasoning after training the classifier. We used the default parameters for SVM training,
and the experiment was based on five-fold cross validation [17]. The accuracy of the
inference is the probability of correctly inferring the context data of a specific activity.
After creating the confusion matrix by activity type, the number of correct cases is
calculated as a percentage of the total incidence, and the reasoning results are
summarized in Table 3. As shown in Table 3, activities which occur frequently in a home,
such as meal preparation, eating, washing, putting on clothes, cleaning, or going out,
show high accuracy. Activities with a low frequency of occurrence, such as laundry,
folding clothes, and using a computer, or activities with a high probability of occurring
in the same place, like relaxing, watching TV, and sleeping, show relatively low accuracy.
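The per-activity accuracy computation described above can be sketched from a confusion matrix: the correct count for each activity divided by its total incidence. The matrix values below are made up for illustration, not the paper's results:

```python
def per_class_accuracy(confusion, labels):
    """confusion[i][j] = count of true label i predicted as label j."""
    result = {}
    for i, label in enumerate(labels):
        total = sum(confusion[i])                 # total incidence of this class
        result[label] = 100.0 * confusion[i][i] / total if total else 0.0
    return result

labels = ["Eating", "WatchTV", "Sleeping"]
confusion = [
    [50,  0,  0],   # Eating: always classified correctly
    [10, 40, 30],   # WatchTV: often confused with co-located activities
    [ 0, 15, 35],   # Sleeping
]
accuracy = per_class_accuracy(confusion, labels)
```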
The proposed model is based on the learning module, so it is possible to create a new
reasoning model after we change the objects/activity types in the simulator or if we apply
new behavior patterns. In such cases, we simply regenerate the user data after reflecting
the changes.
5. Conclusions
In this paper, we presented a simulator consisting of a virtual living space and a virtual
character, and a user activity reasoning model based on it. We performed the simulation
with a virtual character carrying out typical activities that can occur in a living space, and
the data collected by the virtual sensors was saved as contextual information. The user's
activity and environment information is used to train the classifier module for each
activity type, and the activity classification results are used for the user's activity
reasoning model.
In order to collect a user's activity data, including the interactions between the user
and objects, in an actual living space, the objects must be installed in that space for a
considerable period of time and for the duration of the experiment. In the case of
additions or modifications of objects, or a change in the environmental conditions, the
same experiment must be repeated from the beginning. The proposed model is based on
the learning module, so it is possible to create new reasoning models after changing the
objects or activity types in the simulator or applying a new behavior pattern. In that case,
we simply regenerate the user data after reflecting the changes.
We expect that the proposed simulator and reasoning model can be used for
context-awareness of multiple residents in places where users' behavior includes
interactions between them, as well as for drawing inferences from the activity of single
inhabitants. However, depending on the user's behavioral patterns, the accuracy of the
inference can vary according to the frequency of occurrence in the living space. In our
future work, we will evaluate the performance of user activity reasoning through
additional data generation using the proposed simulator.
Acknowledgments
This research was supported by the Next-Generation Information Computing Development
Program through the National Research Foundation of Korea (NRF), funded by the Ministry of
Education, Science and Technology (No. 2012M3C4A7032783).
References
[1] J. Lertlakkhanakul, J. W. Choi and M. Y. Kim, “Building data model and simulation platform for spatial
interaction management in smart home,” Automation in Construction, vol. 17, no. 8, (2008), pp. 948-957.
[2] M. Weiser, “Some computer science issues in ubiquitous computing,” Communications of the ACM, vol.
36, no. 7, (1993), pp. 75-84.
[3] A. GhaffarianHoseini, N. D. Dahlan, U. Berardi, A. GhaffarianHoseini and N. Makaremi, “The essence
of future smart houses: From embedding ICT to adapting to sustainability principles,” Renewable and
Sustainable Energy Reviews, vol. 24, (2013), pp. 593-607.
[4] L. Wang, T. Gu, X. Tao, H. Chen and J. Lu, “Recognizing multi-user activities using wearable sensors
in a smart home,” Pervasive and Mobile Computing, vol. 7, no. 3, (2011), pp. 287-298.
[5] D. J. Cook, M. Schmitter-Edgecombe, A. Crandall, C. Sanders and B. Thomas, “Collecting and
disseminating smart home sensor data in the CASAS project,” Proceedings of the CHI Workshop on
Developing Shared Home Behavior Datasets to Advance HCI and Ubiquitous Computing Research,
(2009).
[6] A. Ariani, S. J. Redmond, D. Chang and N. H. Lovell, “Simulation of a smart home environment,”
Instrumentation, Communications, Information Technology and Biomedical Engineering (ICICI-BME),
2013 3rd International Conference on, IEEE, (2013), pp. 27-32.
[7] A. Helal, K. Cho, W. Lee, Y. Sung, J. W. Lee and E. Kim, “3D modeling and simulation of human
activities in smart spaces,” Ubiquitous Intelligence & Computing and 9th International Conference on
Autonomic & Trusted Computing (UIC/ATC), 2012 9th International Conference on, IEEE, (2012), pp.
112-119.
[8] H. Nishikawa, S. Yamamoto, M. Tamai, K. Nishigaki, T. Kitani, N. Shibata and M. Ito, “UbiREAL:
realistic smartspace simulator for systematic testing,” Ubiquitous Computing, Springer Berlin
Heidelberg, (2006), pp. 459-476.
[9] Z. F. Jahromi, A. Rajabzadeh and A. R. Manashty, “A Multi-Purpose Scenario-based Simulator for
Smart House Environments,” arXiv preprint arXiv:1105.2902, (2011).
[10] B. K. Sung, G. R. Bang, H. G. Min, M. H. Lee and I. J. Ko, “Research of Space model for Context
awareness based on user activity in shared living space,” Workshop on Convergent and Smart
Computing Systems, (2013), pp. 117-120.
[11] C. Zhu and W. Sheng, “Motion- and location-based online human daily activity recognition,” Pervasive
and Mobile Computing, vol. 7, no. 2, (2011), pp. 256-269.
[12] I. Guyon and A. Elisseeff, “An introduction to variable and feature selection,” The Journal of Machine
Learning Research, vol. 3, (2003), pp. 1157-1182.
[13] “Behavioral classification casebook,” Life time usage survey statistics of whole population, Statistics
Korea, (2009), http://meta.narastat.kr/.
[14] C. J. Burges, “A tutorial on support vector machines for pattern recognition,” Data Mining and
Knowledge Discovery, vol. 2, no. 2, (1998), pp. 121-167.
[15] D. J. Cook, N. C. Krishnan and P. Rashidi, “Activity discovery and activity recognition: A new
partnership,” Cybernetics, IEEE Transactions on, vol. 43, no. 3, (2013), pp. 820-828.
[16] B. K. Park, H. G. Min, G. R. Bang and I. J. Ko, “The user activity reasoning model based on
context-awareness in a virtual living space,” Proceedings International Workshop Ubiquitous Science
and Engineering, vol. 86, (2015) April 15-18, Jeju Island, Korea.
[17] C. C. Chang and C. J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on
Intelligent Systems and Technology (TIST), vol. 2, no. 3, (2011), pp. 27.
Authors
Bokyoung Park received her Bachelor's degree from Choongnam University in
1994. She has been in the master's course at Soongsil University, Seoul, Korea,
since 2014. She is interested in smart homes, machine learning, and
context-awareness research.

HyeonGyu Min received his Bachelor's degree from Soongsil University in 2013.
He has been in the master's course at Soongsil University, Seoul, Korea, since
2013. He is interested in machine learning, UX, and simulators.

Green Bang completed the master's course at Soongsil University in 2011. She
has been in the Ph.D. course at Soongsil University, Seoul, Korea, since 2011.
Her primary research interests include UX, context cognition, and artificial
emotion.

Ilju Ko completed the Ph.D. course at Soongsil University in 1997. He has been
a professor at Soongsil University, Seoul, Korea, since 2003. His primary
research interests lie in the area of content-based research, UX, and artificial
emotion.