Department of Computer Science
Ph.D. in Computer Science
Cognitive Models for Usability
Evaluation of Interactive Systems
Supervisor:
Dr. Paolo Milazzo
Ph.D. Candidate:
Giovanna Broccia
October 2016
Contents

1 Introduction
2 Neurological Background
  2.1 Cognitive Psychology
  2.2 Memory System
  2.3 Reward System
  2.4 Decision System
3 State Of The Art
  3.1 Human-Computer Interaction and Usability
    3.1.1 The interaction
    3.1.2 Usability Evaluation Methods
  3.2 A Cognitive Framework Based on Rewriting Logic
    3.2.1 Maude Rewrite System
    3.2.2 The Cognitive Process Model
    3.2.3 The ATM Case Study
4 Proposal and Work Plan
  4.1 Remold the Cognitive Model Short Term Memory
  4.2 Add the rewriting rule randomDelete to the Cognitive Model
  4.3 Test the validity of the Cognitive Model
  4.4 Add the Long Term Memory to the Cognitive Model
  4.5 Add the Reward System to the Cognitive Model
  4.6 Add the Decision System to the Cognitive Model
1 Introduction
Human-Computer Interaction studies the design and use of interactive systems (computer systems, devices, control systems, etc.) focusing on the interaction between the human component and the interface of the computer
component.
One of the most important objectives in Human-Computer Interaction
is usability: a product or a service is considered usable if it can be used with
effectiveness and efficiency and if it satisfies its users. However, in
recent years the attributes of usability have grown in number: nowadays a product or
a service is considered usable if it is efficient to use, subjectively pleasant,
easy to learn, easy to remember and if it has a low error rate.
There are many methods for evaluating usability, but none of them takes
into account all the attributes of a usable system at the same time. For
example, many of them deal with the effectiveness and efficiency of a system
– evaluating whether the user can reach the goal and how much time it takes a user to
complete a task; others deal with the subjective impression (and satisfaction)
of users – evaluating whether the user is satisfied by the product and whether the user would
add some characteristic to it; others deal with the error rate of
the system – evaluating the number of errors users can make while interacting
with the system.
The proposed Ph.D. thesis has as its main objective to fill this gap by
creating a computational model that adds to the traditional cognitive models
(with which it is possible to estimate the time users take to perform a given
task and reach the goal) the memory system, the reward system and the
decision system.
The memory system will manage the memorability and the learnability
of an interactive system, as well as the emergence of errors that users make
when, for example, too much information has to be remembered.
The reward system will manage the satisfaction of users in interacting
with the computer.
The decision system will manage the situations in which the user has to
decide between two (or more) different actions.
The rest of this thesis proposal is organized as follows. Section 2 presents
the background knowledge needed for the work on cognitive psychology, the memory system, the reward system and the decision system. Section
3 presents the state of the art and is divided into two subsections: the
first gives a brief introduction to what usability is and to
the main methods for evaluating it; the second presents
in more detail a cognitive framework based on rewriting
logic for the evaluation of interactive systems, which is taken as the starting point
for the Ph.D. work. Section 4 describes the work, the main research
topics and developments needed to achieve the final aims of the study, and
the work plan for the second and third year.
2 Neurological Background

2.1 Cognitive Psychology
Reflection on the human mind and its processes dates back to the
times of the ancient Greeks: in 387 BC Plato suggested that the brain was
the seat of mental processes. Over the years many debates have arisen
regarding whether human thought is solely experiential or includes innate
knowledge.
In the 20th century several influences arose and inspired the inception of cognitive psychology: the development of computer science led to reflection
on the parallelism between human thought and the computational functioning of computers, and Chomsky's 1959 critique of empiricism (the view that human
thought is solely experiential) initiated what is now known as cognitive
psychology. However, it took Ulric Neisser's 1967 book [26] to arrive at a
definition of cognition and cognitive processes:
“The term cognition refers to all processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is
concerned with these processes even when they operate in the absence of
relevant stimulation, as in images and hallucinations. [...] Given such a
sweeping definition, it is apparent that cognition is involved in everything a
human being might possibly do; that every psychological phenomenon is a
cognitive phenomenon.”
Cognitive psychology is the study of higher mental processes such as
attention, language use, memory, perception, problem solving, creativity
and thinking.¹
2.2 Memory System
Psychologists nowadays make two distinctions about memory: the first
concerns the three stages of memory – encoding, storage and retrieval –
(Figure 1); the second concerns the three different types of memory.
¹ American Psychological Association. Glossary of psychological terms. [online]. Available at: http://apa.org/research/action/glossary.aspx?tab=3 (Retrieved 01-09-2016)
Figure 1: Three Stages of Memory.
The encoding stage translates environmental information into a meaningful entity that can be stored. The storage stage maintains
the stored information over time. The retrieval
stage recovers from memory information that was previously encoded and stored. Memory may fail at each of the three stages. The
second distinction concerns the three different types of memory. In 1968 Richard Atkinson and Richard Shiffrin formalized the basis
for the distinction between different memories [2, 3]. The main principles of
the Atkinson-Shiffrin theory are:
1. When an environmental stimulus is detected by the senses it is briefly
available in what Atkinson and Shiffrin called the sensory registers.
Though this store is generally referred to as “the sensory register” or
“sensory memory”, it is actually composed of multiple registers, one
for each sense. The sensory registers do not process the information
carried by the stimulus, but rather detect and hold that information
for use in short-term memory. Information is only transferred to the
short-term memory when attention is given to it, otherwise it decays
rapidly (within a few seconds) and is forgotten.
2. The short term memory (STM), also called working memory (WM),
stores all the sensory information to which attention is given, and it
can be used as a basis for taking decisions. The information stored
in the short term memory decays in approximately 20 seconds. It
is nevertheless possible to maintain it in the STM if the information is
actively rehearsed. For auditory information, rehearsal can be taken
in a literal sense: continually repeating the items. However, the term
can be applied to any information that is attended to, such as when
a visual image is intentionally held in mind. Finally, information in
the short term store does not have to be of the same modality as its
sensory input. For example, written text which enters visually can
be held as auditory information, and likewise auditory input can be
visualized. In this model, rehearsal of information allows it to be
stored more permanently in the long-term store. There is a limit to the
amount of information that can be held in the short-term store: 7 ± 2
chunks. These chunks, which were noted by Miller in his seminal paper
“The Magical Number Seven, Plus or Minus Two” [22], are defined
as independent items of information. It is believed that individuals
create higher order cognitive representations of the items that are
easier to remember; e.g. while recalling a telephone number such as
3284275665, individuals usually break it into groups like 328 42 75 665;
thus, instead of remembering 10 separate digits they remember only
four chunks. This phenomenon is called “chunking” (a minimal illustrative sketch of it is given after this list).
3. The long term store (LTM) is the place where all the information one has is stored. The information enters the long term store from
the short term store through rehearsal. Maintenance rehearsal
is used to keep information in the short term store; elaborative rehearsal is used to store information in the long term
store. Long-term memory is assumed to be nearly limitless in its duration and capacity. It is most often the case that brain structures
begin to deteriorate and fail before any limit of learning is reached.
This is not to assume that any item which is stored in long-term memory is accessible at any point in the lifetime. Rather, it is noted that
the connections, cues, or associations to the memory deteriorate; the
memory remains intact but unreachable.
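To make the chunking idea concrete, the following is a minimal illustrative Python sketch (not part of the cited works) that groups a digit string into chunks and compares the item count against the 7 ± 2 bound; the fixed group size and the capacity value are assumptions chosen only for illustration.

```python
# Illustrative sketch: chunking reduces the number of STM items to be stored.
STM_CAPACITY = 7  # assumed nominal capacity (Miller's 7 +/- 2)

def chunk_digits(number: str, group_size: int = 3):
    """Group a digit string into fixed-size chunks (hypothetical grouping)."""
    return [number[i:i + group_size] for i in range(0, len(number), group_size)]

phone = "3284275665"
chunks = chunk_digits(phone)

print(chunks)                       # ['328', '427', '566', '5']
print(len(phone) > STM_CAPACITY)    # True: 10 separate digits exceed the bound
print(len(chunks) <= STM_CAPACITY)  # True: 4 chunks fit comfortably
```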
2.3 Reward System
In neuroscience the reward system is a group of neural structures responsible for reward-related cognition, including positive reinforcement and both
“wanting” (i.e. desire) and “liking” (i.e. pleasure). Schultz in [34] defines
a reward as “any stimulus, object, event, activity or situation that has the
potential to make us approach and consume”.
Primary rewards are those necessary for survival, both homeostatic (e.g.
food) and reproductive rewards. Intrinsic rewards are unconditioned rewards
that are attractive because they are natural. Extrinsic rewards (such as
money) are conditioned and derive their motivational value as a result of a
learned association with intrinsic rewards [34].
Some of the brain structures that compose the reward system are the
ventral tegmental area (VTA), the nucleus accumbens, the dorsal striatum,
the substantia nigra, the prefrontal cortex, the hippocampus and the amygdala.
The group of neurons known as the mesolimbic dopamine pathway connects the
VTA to the nucleus accumbens (Figure 2) and is a critical component of the
reward system that is directly involved in the immediate perception of the
motivational component of a reward.

Figure 2: The Dopamine Pathways.
Dopamine in the brain functions as a neurotransmitter, a chemical released by neurons to send signals to other nerve cells. Dopamine originates in the VTA, where it is stored in vesicles. When a stimulus triggers
an action potential, the vesicles fuse with the presynaptic membrane and dopamine is released into the synaptic cleft, where it binds to
dopamine receptors located on the postsynaptic membrane; the dopamine left unbound in the synaptic cleft is then taken back up (reuptake).
The reuptake happens because the nucleus accumbens regulates
the release and the absorption of dopamine: it "knows" how much dopamine
is needed to feel pleasure. In some cases the reuptake is inhibited by certain
kinds of drugs.
In [4] it is noted that a reward is often called a pleasant stimulus, but it is
useful to keep in mind that reward is not a unitary process; it is actually a
composite or complex process containing several psychological components.
The major components of reward include liking (the pleasure component of a
reward), wanting and learning. In particular, there are different experiments
about reward and learning; the first and best known is Pavlov's [32]:
whenever he gave food to his dogs, he also rang a bell. After a number of
repetitions of this procedure, he tried the bell on its own. As one might expect,
the bell on its own now caused an increase in salivation. So the dog had
learned an association between the bell and the food, and a new behavior
had been learned.
This can be seen as a dopamine error: when a reward is expected, the
level of dopamine rises even if the reward does not arrive. It is as if dopamine
regulates itself at the moment something is learned.

Figure 3: The Neuro-psychologic System [35].
The same thing happens with drug addiction: once a person is addicted, his body learns to release dopamine when it expects to receive the
drug. In [17], for example, models are developed to describe dopamine release in the case of nicotine addiction: also in that study there is a dopamine
error, that is, when the person expects to smoke, his dopamine level
rises even if he does not actually smoke.
2.4 Decision System
What has previously been called the decision system is actually a group of systems in the human brain which deals with decision making, problem analysis,
problem solving and coping activity (the process of dealing with internal or
external demands that are perceived to be threatening or overwhelming²).
The human brain is clearly not a simple system, and every part of it
collaborates with and influences the others. When a human makes
a decision or solves a problem in a given way, he is influenced by the other
systems: the memory holds specific information and the dopamine
pathway follows a specific trend.
Figure 3 shows how every system collaborates and interacts with the others.

² American Psychological Association. Glossary of psychological terms. [online]. Available at: http://apa.org/research/action/glossary.aspx?tab=3 (Retrieved 01-09-2016)
The Prefrontal Cortex and the Serotonergic System act as the controller of the reaction to the sensory input: they accomplish the voluntary
control of behaviors. The controller exerts its inhibitory activity over
the emotional drive, and it is where the functions of
decision making, problem analysis, problem solving and coping are located.
The Amygdala, the Insula and the Noradrenergic System have the function
of an emotional drive, important both for impulsiveness and for instinctiveness. The
Amygdala enables the human to give an emotional connotation to information, and this influences the choices.
The Thalamus is the “orchestra leader” of motivation, activating the Prefrontal Cortex, the Nucleus Accumbens, the VTA and the
Drive System. The motivation for acting is thus the result of the balance
of impulses which come from the Drive and from the Controller.
3 State Of The Art

3.1 Human-Computer Interaction and Usability
Human-Computer Interaction (HCI) studies methods and techniques for
designing and developing interactive systems that have to be usable, reliable
and able to ease human lives.
The term first appears in a 1975 paper [8] and is then used in 1980 by
Stuart K. Card, Thomas P. Moran and Allen Newell [6], but it
becomes more popular with their 1983 book “The
Psychology of Human-Computer Interaction” [7].
In recent years HCI has developed greatly with the growing penetration of computing devices in everyday life: computer science is becoming more
and more an interactive discipline oriented towards communication with users.
One of the most important objectives in the HCI field is usability.
The most recognized definition of usability comes from the ISO standard
9241 (Ergonomics of human-system interaction), which defines usability as
“The extent to which a product can be used by specified users to achieve
specified goals with effectiveness, efficiency and satisfaction in a specified context of use”.
Effectiveness is about achieving the intended goal(s); efficiency is about
the resources of time or effort needed by users to achieve their goals; satisfaction of users is particularly important
where users can choose among different products to achieve their goals.
Figure 4: Nielsen framework of system acceptability.
Jakob Nielsen, a usability consultant, has proposed a framework of system acceptability (Figure 4) and has defined the five quality components of
his usability goals in the 1994 book “Usability Engineering” [24]. For him
usability is not a single, one-dimensional property of a user interface: it has
multiple components and is associated with these five attributes:
• Learnability: the system should be easy to learn so that the user can
rapidly start getting some work done with the system.
• Efficiency: the system should be efficient to use, so that once the user
has learned the system, a high level of productivity is possible.
• Memorability: the system should be easy to remember, so that the
casual user is able to return to the system after some period of not
having used it, without having to learn everything all over again.
• Errors: the system should have a low error rate, so that users make
few errors during the use of the system and, if they do make
errors, they can easily recover from them. Furthermore, catastrophic errors
must not occur.
• Satisfaction: the system should be pleasant to use, so that users are
subjectively satisfied when using it; they like it.
There are many reasons why usability is important. To take just a few examples:
if a website is difficult to use, if it does not make clear what the company offers
or what the user can do on it, if users get lost, or if the site's information is
difficult to read, users leave the website and look for another one. On
intranets, usability affects employee productivity: the time an employee
wastes searching for instructions is time spent at work without working.
With other products too, usability affects revenues: a product that
does its job without problems sells better than others.

Figure 5: Norman cycle of interaction.
It is possible to identify some quantitative measurements of usability,
e.g. the number of times the user achieves a goal, the number of goals a user
completes in a time interval, the ratio of errors to correct interactions
(here an error is an action that is useless for reaching the goal), the
number of errors, the number of tasks performed by the user, the number of
tasks not performed by the user, the number of times the user does not solve
a problem, the ratio of users who chose the best strategy to users
who did not, and the number of times the user was distracted [28]. A small sketch of how some of these measurements could be computed from an interaction log follows below.
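The following Python sketch illustrates how two of these measurements could be derived from logged interaction data; the event names and the log format are hypothetical, invented only for this illustration, and are not taken from any of the cited tools.

```python
# Hypothetical interaction log: each entry is (user, event), where an event is
# either a useful "action", an "error" (an action useless for reaching the goal),
# or "goal" when the task goal is achieved.
log = [
    ("u1", "action"), ("u1", "error"), ("u1", "action"), ("u1", "goal"),
    ("u2", "action"), ("u2", "error"), ("u2", "error"),
]

def error_ratio(events):
    """Ratio of errors to correct (useful) interactions."""
    errors = sum(1 for _, e in events if e == "error")
    correct = sum(1 for _, e in events if e == "action")
    return errors / correct if correct else float("inf")

def completion_rate(events):
    """Fraction of users who reached the goal at least once."""
    users = {u for u, _ in events}
    completed = {u for u, e in events if e == "goal"}
    return len(completed) / len(users)

print(f"error ratio: {error_ratio(log):.2f}")          # 3 errors / 3 useful actions = 1.00
print(f"completion rate: {completion_rate(log):.2f}")  # 1 of 2 users reached the goal = 0.50
```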
3.1.1 The interaction
To better understand the interaction between human and computer, and
thus usability, it is possible to refer to Norman's model [33], in which
the main phases of the interaction of the user with the system can be identified
(Figure 5). This model provides a valid, even if simplified, logical structure
for the design and evaluation of systems.
The user perceives activity in the system, evaluates whether it is what
is expected given the goals he is trying to accomplish, forms an intention
for the next step, retrieves the way to enact this intention on the system and
executes the appropriate motor movement. This produces new activity in
the system and the user cycles through the process again.
Norman places his stages in the context of a cycle of action and evaluation
and identifies the “gulf of execution” and the “gulf of evaluation”.
In systems with low usability, where the possible tasks are badly supported, the two gulfs are useful to identify the mismatch between the user's
actions and expectations (execution gulf) and the system's actions and presentation (evaluation gulf).
3.1.2 Usability Evaluation Methods
Usability Evaluation Methods (UEMs) are used to evaluate the interaction of the human with the computer, with the purpose of identifying aspects
of this interaction that can be improved to increase usability [16].
There are many UEMs; some of them use data from users, others rely on
expert analysis. They differ depending on the type of product and on the
stage of design and development. UEMs can be classified into subcategories.
3.1.2.1 Cognitive Modeling Methods
These kinds of methods rely on creating a computational model to estimate
the time users take to perform a given task [26].
Cognitive models have emerged since the work of Card, Moran and Newell
[6, 7, 5] and are based on psychological principles and experimental studies to determine times for both cognitive processing and motor movements.
Below are some examples of cognitive models.
GOMS
The GOMS models [19] – actually a family of models – describe the knowledge and the four cognitive components of skilled performance in a task: goals,
operators, methods and selection rules.
Goals are what the user has to accomplish and are often broken down
into subgoals; all of the subgoals must be accomplished in order to achieve
the overall goal. Goals and subgoals are often arranged hierarchically, but a
strict hierarchical goal structure is not required; in particular, some versions
of GOMS models allow several goals to be active at once.
An operator is an action performed in service of a goal and can be a perceptual, cognitive or motor act. Operators can change the user's internal
mental state or physically change the state of the external environment.
Execution time may be approximated by a constant, by a probability distribution or by a function of some parameters. For example, the time to type a word
might be approximated by the average time for an average word by an average typist, by a statistical distribution, or by a function involving the number
of letters in the word and the time to type a single character. The accuracy
of execution time predictions obtained from a GOMS model depends on the
accuracy of these assumptions.

Figure 6: Example of CMN-GOMS text-editing methods edited by an analyst.
Methods are sequences of operators and subgoal invocations that accomplish a goal. The content of the methods depends on the set of possible
operators and on the nature of the tasks represented.
Selection rules regulate the choice of a method when there is more
than one method to accomplish the same goal or subgoal.
There are several variants of the GOMS models, for instance:
• The Keystroke-Level Model (KLM): the simplest version presented
by Card, Moran and Newell: the analyst lists the sequence of operators and then totals the execution times of the individual operators
to estimate the execution time for a task. The analyst must specify the
method used to accomplish each particular task instance. The KLM
model includes six operators: K to press a key or button, P to point with
a mouse to a target on a display, H to home hands on the keyboard or
other device, D to draw a line segment on a grid, M to mentally prepare
to do an action or a series of primitive actions, and R to represent the
system response time during which the user has to wait for the system.
Each of these operators has an estimated execution time, either a single
value, a parameterized estimate or a simple approximating function (a
small worked example is given after this list).
• The Card, Moran and Newell GOMS (CMN-GOMS): presented in
[7, 5] (Figure 6), it has a strict goal hierarchy. Methods are represented
in an informal program form that can include submethods and conditionals. A CMN-GOMS model, given a particular task situation,
can predict both the operator sequence and the execution time. Card et al.
do not describe this model with an explicit “how to” guide, but they
illustrate nine models at different levels of detail.
• The Natural GOMS Language (NGOMSL): a structured natural-language notation for representing GOMS models and a procedure for
constructing them. The model is in program form and provides
predictions of the operator sequence, the execution time and the time to learn the
methods. The analyst constructs the model by performing a top-down
expansion of the user's top-level goals into methods, until the methods
contain only primitive operators, typically keystroke-level operators.
• The Cognitive-Perceptual-Motor GOMS (CPM-GOMS): requires a specific level of analysis where the primitive operators are simple perceptual, cognitive and motor acts. Unlike the other GOMS models, this
one does not make the assumption that operators are performed serially; rather perceptual, cognitive and motor operators can be performed in parallel as the task demands. CPM-GOMS uses a schedule
chart to represent the operators and the dependencies between them.
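As a worked illustration of the KLM idea, the Python sketch below sums per-operator time estimates for a hypothetical task. The operator sequence and the numeric values are illustrative assumptions (rough figures in the spirit of Card, Moran and Newell), not normative values from the cited sources.

```python
# Keystroke-Level Model sketch: total task time = sum of operator times.
# Times (in seconds) are rough illustrative placeholders, not prescriptions.
operator_time = {
    "K": 0.28,  # press a key or button (average typist, assumed)
    "P": 1.10,  # point with the mouse to a target on a display
    "H": 0.40,  # home hands on keyboard or other device
    "M": 1.35,  # mentally prepare for an action
    "R": 1.00,  # system response time (assumed for this example)
}

def klm_estimate(sequence):
    """Estimate execution time for a sequence of KLM operators."""
    return sum(operator_time[op] for op in sequence)

# Hypothetical task: mentally prepare, point to a field, home on the keyboard,
# type a 4-character code, wait for the system to respond.
task = ["M", "P", "H", "K", "K", "K", "K", "R"]
print(f"estimated time: {klm_estimate(task):.2f} s")  # 1.35 + 1.10 + 0.40 + 4*0.28 + 1.00 = 4.97 s
```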
Task Model [30]
Tasks are activities, both logical and physical, that have to be performed to achieve the goal. The purpose of task modeling is to build a
model which describes precisely the relationships among the various tasks.
These relationships can be of various types: temporal, semantic. Tasks
that cannot be further divided are basic tasks, which usually require a single physical
action to be accomplished. In some cases the task model of an existing
system is created in order to better understand the underlying design,
analyze its potential limitations and work out how to overcome them. In other cases
designers create the task model of a new application yet to be developed;
in this case, the purpose is to indicate how activities should be performed
in order to obtain a new, usable system that is supported by some new
technology.

Figure 7: Example of Concur Task Tree.
An example of a task model is the Concur Task Tree (CTT), shown in
Figure 7 [29], which is useful to support the design of interactive applications
and is specifically tailored for model-based design of user interfaces.
3.1.2.2 Inspection Methods
Usability inspection is the generic name for a set of methods that are all
based on having expert evaluators inspect the system.
Some examples of inspection methods are: Heuristic Evaluation, Cognitive Walkthroughs, Pluralistic Walkthroughs, Card Sort, etc [24].
3.1.2.3 Inquiry Methods
These kinds of methods collect qualitative data from users. Although the
data collected are subjective, they provide feedback on the system from the
users' point of view. Thus the goal of these methods is to gather subjective impressions about various aspects of the system. Frequently these methods are used in support
of other UEMs [18].
Some examples of inquiry methods are: Task Analysis, Focus Groups,
Questionnaires and Surveys.
3.1.2.4 Testing Methods
Usability testing is a technique used to evaluate a product by testing it on
users. It gives direct input on how real users use the system [24].
In conducting such tests it is necessary to identify what designers are going to measure, the so-called usability metrics, which often
vary and change with the scope of the project. The ultimate goal of analyzing these metrics is to find or create a prototype design
that users like and can use to successfully perform given tasks [15].
Below are some examples of this kind of method.
Remote usability testing
In this kind of testing the user and the evaluators are separated over space
and sometimes also over time. Remote testing, which facilitates evaluations
being done in the context of the user’s other tasks and technology, can be
either synchronous or asynchronous [1].
Thinking aloud
It is a method that involves getting a user to literally think aloud, that is
verbalize their thoughts as they perform a task.
RITE method
Rapid Iterative Testing and Evaluation (RITE) [21] is an iterative usability
method. The tester and team must define a target population for testing,
schedule participants to come into the lab, decide how the users' behavior will be measured, construct a test script and have participants engage
in a verbal protocol (e.g. think aloud).
3.1.2.5 Other Evaluating Methods
There are other evaluation methods which cannot be placed in the previous
categories. Below are some examples.
Browser logs and task models
The method described in [27] combines two types of evaluation methods:
a testing method and a model-based method. The user interaction is observed
remotely, thus it is important to obtain logs with detailed information; for
this reason a logging tool able to record a set of actions has been implemented. In order to understand what the user's tasks and goals are, a
task tree model of the web site is built (in CTT notation [30]).
The method is composed of three phases: preparation, in which the
task tree model is created, the logging data are collected and the association
between logged actions and basic tasks is defined; automatic analysis, in
which the Web Remote User Interface Evaluation tool (WebRemUsine) examines the logged data with the support of the task model and provides results
concerning the performed tasks, errors and loading times; and evaluation,
during which the information is analyzed by experts to identify usability
problems and possible improvements in the interface design.
MUSE
It is a new method presented in [31] for the automatic detection of usability
issue indicators in mobile web applications, supported by Mobile Usability
Smell Evaluators (MUSE), a proxy-based web usability evaluation tool
able to record user behavior during the interaction with a web application.
The identification of usability issues is carried out through an algorithm for
the identification of specific interaction patterns: recorded user interactions are
compared with a repository of interaction patterns that indicate the potential presence of usability issues. The main idea is to define and formalize
structures, user behaviors and other types of anomalous data that serve as
clues for possible usability issues (also called in the paper “Bad Smells”)
and verify their potential presence. Some of the “Bad Smells” in the paper
are ‘too small or close elements’, ‘too close links’, ‘too small section’, ‘long
forms’, etc.
Since user sessions are recorded as sequences of events, the bad usability
smells themselves are also formalized as event patterns, as illustrated by the sketch below.
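To illustrate the general idea of formalizing a bad smell as an event pattern, here is a small Python sketch. The event format, the threshold values and the specific "repeated taps on the same element" pattern are invented for illustration and are not the MUSE algorithm itself.

```python
# Illustrative sketch: detect a hypothetical "too small or close elements" smell
# as a pattern over a recorded event sequence (several taps on the same element
# within a short time window suggest the user keeps missing the target).
events = [  # hypothetical recorded session: (timestamp in s, event, element id)
    (0.0, "tap", "btn-ok"), (0.6, "tap", "btn-ok"), (1.1, "tap", "btn-ok"),
    (5.0, "tap", "link-home"),
]

def detect_repeated_taps(session, min_repeats=3, window=2.0):
    """Return element ids tapped at least `min_repeats` times within `window` seconds."""
    smells = set()
    taps = [(t, el) for t, ev, el in session if ev == "tap"]
    for i, (t0, el) in enumerate(taps):
        same = [t for t, e in taps[i:] if e == el and t - t0 <= window]
        if len(same) >= min_repeats:
            smells.add(el)
    return smells

print(detect_repeated_taps(events))  # {'btn-ok'}: a possible "too small element" smell
```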
3.2 A Cognitive Framework Based on Rewriting Logic
In the interaction between human and computer, human performance can obviously be affected by many factors such as age, physical health, mental
health and attitude; these factors can lead to errors.
The analysis of human errors in interactive systems has been the field
of Human Reliability Assessment (HRA). The three principal goals of HRA
are to identify what errors can occur (Human Error Identification), to decide how
likely the errors are to occur (Human Error Quantification) and, if appropriate, to enhance human reliability by reducing this error likelihood (Human
Error Reduction) [20]. However, the models of interaction built in HRA
never incorporate a representation of human cognitive processes.
In the 1990s other techniques were developed, bringing together the
concepts of formal methods and interactive systems design [14]. In the
formal analysis of interactive systems there are two different directions:
• the analysis of the user behavior and the cognitive errors of users
in interactive tasks (both everyday-life and work-related)
[10];
• the analysis of the behavior of skilled operators working in critical
domains (such as nuclear plants, defence, health) [11, 12].
Users' and operators' behavior is naturally different: on one side users have
selective attention and act under automatic control; on the other side operators
deal with a high cognitive load.
In [9] an attempt is made to unify these distinct directions by providing a formal
notation to model a cognitive framework in which the human component (user
or operator) interacts with an interface and has to reach a goal. The
cognitive framework also models the use of Short Term Memory
(STM), which is rewritten every time a step is made in the interaction with
the system; the system state is rewritten at every step as well. Then, by using
model checking, potential human errors are detected.
This notation is implemented using the Maude rewrite system, and the
framework is applied to two case studies: a user of an Automatic Teller Machine
(ATM) and an operator of an Air Traffic Control (ATC) system.
3.2.1 Maude Rewrite System
Maude³ is a high-performance language and system supporting both equational and rewriting logic specification and programming for a wide range
of applications.
Rewriting logic has good properties as a general semantic framework
for giving executable semantics to a wide range of languages and models of
concurrency. In particular, it supports concurrent object-oriented
computation very well. The same reasons that make rewriting logic a good semantic
framework also make it a good logical framework, that is, a metalogic in
which many other logics can be naturally represented and executed.
Some of the most interesting applications of Maude are metalanguage
applications, in which Maude is used to create executable environments for
different logics, theorem provers, languages, and models of computation.
The goals of the Maude project are to support formal executable specification, declarative programming, and a wide range of formal methods
as means to achieve high-quality systems in areas such as software engineering, networks, distributed computing, bioinformatics, and formal tool
development.
Maude's basic programming statements are very simple and easy to understand. They are equations and rules, and in both cases they have a simple
rewriting semantics in which instances of the lefthand side pattern are replaced by corresponding instances of the righthand side.
This makes the Maude system well suited to modeling the cognitive framework described above, and the Maude language has possibly influenced
the way Cerone has written his rewriting rules.
³ The Maude System. Maude Overview. [online]. Available at: maude.cs.illinois.edu/w (Retrieved 01-09-2016)
3.2.2 The Cognitive Process Model
As already said, in cognitive psychology the human cognitive processes are
modeled as processing activities which use some input/output channels to
interact with the external environment and which use three different kinds
of memory to store the information: the sensory memory, the short-term
memory (STM) often studied in terms of working memory (WM) and the
long-term memory (LTM).
Cerone in [9] represents the input channels in terms of perceptions, with a
strong emphasis on their potential cognitive effects, and the output channels in
terms of actions performed in response to perceptions. Generally, perceptions
are stored in the sensory memory and only the relevant ones are stored in
the STM through attention, a selective processing activity.
Following the work of Norman and Shallice [25], Cerone considers two
levels of control:
1. Automatic control: a fast processing activity that does not require
attention (e.g. driving a car once the automatism is acquired, after
the learning period).
2. Deliberate control: a processing activity that requires attention, carried out under the intentional control of the individual, who is conscious of the required effort (e.g. the attention and the conscious effort
that a person has to apply while learning to drive a car).
Goal, task and STM
In human-computer interaction the aim is to accomplish a goal. A goal
may be seen as a top-level task that can be decomposed into subtasks, until
basic tasks, which cannot be further decomposed, are reached.
In [9] a basic task is modeled as a quadruple

info_i ↑ perc_h ⟹ act_h ↓ info_j

The perception of perc_h by the user activates:
• the retrieval of information info_i from the STM;
• the execution of the action act_h;
• the storage of info_j in the STM.
The information (info) that can be stored in the STM is:
• task goal: an action that leads to the achievement of the goal or maintains a correct system state;
• action reference: a reference to a future action to be performed;
• cognitive state: the state of the plan developed by the user/operator.
A task goal is modeled as
goal(act, type)
If type = achieve, act is the action that leads to achieving the goal itself; if
type = preserve, act is the action that maintains the correct system state.
Both in the task goal and in the basic task, the value none indicates that the
corresponding entity (information, perception or action) is not specified or
is absent.
There are three categories of basic tasks depending on the two different
levels of control:
1. Automatic task: triggered by a perception or by information in the
STM. It must include an action, but may not include a perception or
may not use the STM. Performed under automatic control.
2. Cognitive task: triggered by a cognitive state. It must always have
the two information fields to contain the current cognitive state to
retrieve from STM and the next cognitive state to store in the STM,
but it has neither perception nor action. Performed under deliberate
control.
3. Decision task: triggered by a task goal in the STM. It must include a
perception and store in the STM a reference to an action that is related
to the task goal contained in the retrieval information field, with the
perception triggering the retrieval of the task goal. Performed under
deliberate control.
Interface
In this context a human perception refers to a stimulus produced by a
computer action, and an interface state coincides with a human perception
(the state of a vending machine that gives the change is identified with the
perception of the sound of falling coins or the sight of the money).
For this reason the interface transitions are modeled as

perc_h —act_h→ perc_k

where the states of the interface are denoted by perceptions and the
transition from one state to another is associated with an action.
Perceptions may induce different degrees of urgency in reacting (e.g.
when the Automatic Teller Machine, ATM, is giving the money to the user
there is a timeout for taking it), which is modeled as a timeout. Perceptions
with timeout are modeled as follows:
perc!0 : state that produces a perception inducing no urgency in reacting;
perc!1 : state that produces a perception inducing urgency, with a timeout
not yet expired;
perc!2 : state that produces a perception inducing urgency, with a timeout
already expired.
Thus the interface transitions become

perc_h!m —act_h→ perc_k!n

An action act = none is denoted by an unlabelled arrow. The initial state of
the interface is usually not associated with a timeout (perc!0). The action
act that produces the state perc!m is written act perc!m (the initial
state is then none perc!0).
Closure and post-completion error
It may occur that, in the interaction between a user and an interface, when the
goal is achieved the STM is emptied; this behavior is called closure. This
may cause the removal of some important subtasks that are not yet completed and cause the so-called post-completion error (e.g. when the ATM
has delivered the money but has not yet returned the card, the user may
forget the card).
LTM and Supervisory Attentional System
It is possible to transfer information from the STM to the LTM (this case is not
modeled here) and from the LTM to the STM. The latter case occurs
when the automatic tasks are inappropriate. In these cases the so-called
Supervisory Attentional System (SAS) [25] is activated by perceptions that
are assessed as danger, novelty, requiring a decision, or a source of strong feeling
such as temptation or anger.
Cerone formalizes the assessment of these kinds of perception as

assess(act, perc)

where perc is the perception that activated the SAS and act is the
action performed before the activation. The function returns one of the following values:
danger, decision, novelty, anger, auto. Normally the automatic response to
a danger is to abandon the task; responses to novelty and anger (or in general to a feeling) vary from individual to individual and thus are not modeled;
responses requiring a decision are driven by a specific task of the model; the
value auto does not activate the SAS.
Rewriting System Model
Let:
• Π be a set of perceptions;
• Σ be a set of actions;
• Γ be a set of action references;
• ∆ be a set of cognitive states, with Γ ∩ ∆ = ∅.
The cognitive framework is modeled on Π, Σ, Γ, ∆ as a rewrite system
consisting of four sets of objects:
T a set of basic tasks;
I a set of interface transitions;
C a singleton containing the current interface state and its causal action;
M the set of entities in the STM;
and of R, a set of rewriting rules

T I C M —rewrite→ T I C′ M′

defined as follows.
Interacting

if info_i ↑ perc_h ⟹ act_h ↓ info_j ∈ T, with act_h ≠ none, and
C = {act perc_h!m} and perc_h!m —act_h→ perc_k!n ∈ I, with m < 2, and
info_i ∈ M and there exists a goal in M
then C′ = {act_h perc_k!n} and
M′ = M − {info_i} ∪ {info_j}

The interacting rule is enabled by an automatic task and is applied if there
is a perception perc_h in the current state C and/or information info_i in the
STM M that are associated in a task of T with the execution of action
act_h, there is a goal in the STM M, and there is no expired timeout (m < 2)
associated with the interface state perc_h!m that generated perception
perc_h. The next state C′ of the interface is perc_k!n, which results from executing action act_h, and the next STM M′ is obtained by removing information
info_i and storing information info_j.
Closure

if info_i ↑ perc_h ⟹ act_h ↓ info_j ∈ T, with act_h ≠ none, and
C = {act perc_h!m} and perc_h!m —act_h→ perc_k!n ∈ I, with m < 2, and
goal(act_h, achieve) and info_i ∈ M
then C′ = {act_h perc_k!n} and
M′ = {info_j}

The closure rule is enabled by an automatic task and is very similar to the interacting rule, but now the goal in the STM M must be of type achievement
(goal(act_h, achieve)) and the execution of action act_h empties
the STM before storing information info_j.
Danger

if info_i ↑ perc_h ⟹ act_h ↓ info_j ∈ T, with act_h ≠ none, and
C = {act perc_h!m} and perc_h!m —act_h→ perc_k!n ∈ I, with m < 2, and
info_i ∈ M and assess(act, perc_h) = danger
then C′ = {act_h perc_k!expired(n)}, where
expired(n) = 2 if n = 1, and expired(n) = n otherwise,
and M′ = {info_j}

The danger rule is enabled by an automatic task and is applied if the current perception perc_h that follows the execution of action act is assessed as
a danger (assess(act, perc_h) = danger). The user performs action act_h;
moreover, the user's normal response to a danger is to abandon the task. If
there is a timeout associated with the resulting state (perc_k!1), then the next
state is perc_k!2, that is, the same state now associated with an expired
timeout (since expired(1) = 2); otherwise it is perc_k!n (since expired(n) = n
for n ≠ 1). The next STM M′ is obtained by removing all information and
storing information info_j, as happens for the closure rule.
Timeout

if C = {act perc_h!m} and perc_h!m → perc_k!n ∈ I, with m > 1
then C′ = {none perc_k!n} and
M′ = M

The timeout rule refers to an autonomous action of the interface with no
involvement of the human component, and thus no involvement of a basic
task; it is triggered by the expiration of the timeout (m > 1) and leads,
through the autonomous (unlabelled) transition, to the new interface state perc_k!n.
Cognitive

if info_i ↑ perc_h ⟹ none ↓ info_j ∈ T and
info_i ∈ M ∩ ∆ and info_j ∈ ∆
then C′ = C and
M′ = M − {info_i} ∪ {info_j}

The cognitive rule is enabled by a cognitive task and refers to a cognitive
process of the human, with cognitive state info_i retrieved from the STM and cognitive state info_j stored in the STM; there is no involvement of the interface
(therefore the state of the interface does not change).
Decision

if info_i ↑ perc_h ⟹ none ↓ info_j ∈ T and
info_i ∈ M is a goal and
assess(none, perc_h) = decision
then C′ = C and
M′ = M ∪ {info_j}

The decision rule is enabled by a decision task and differs from the cognitive
rule because the retrieved information is a goal, which is then stored again
in the STM, and because of the presence of the assessment as a precondition. It models the SAS-induced switch from automatic control to deliberate
control due to a required decision.
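To make the shape of these rules concrete, the following is a minimal Python sketch of the interacting rule alone, operating on plain tuples and sets. It is only an illustration of the rule as summarized above, not the Maude implementation of [9]; all the example data (task names, perceptions, actions) are invented.

```python
# Minimal sketch of the "interacting" rule on plain Python data.
# A basic task is (info_i, perc_h, act_h, info_j); interface transitions map
# (perc_h, act_h) to (perc_k, n); C is (act, perc, m); M is the STM (a set).

def interacting(tasks, transitions, C, M):
    """Apply one interacting step if some automatic task is enabled; return (C', M') or None."""
    act, perc, m = C
    if m >= 2 or not any(isinstance(x, tuple) and x[0] == "goal" for x in M):
        return None  # expired timeout or no goal in the STM: rule not enabled
    for info_i, perc_h, act_h, info_j in tasks:
        enabled = perc_h == perc and act_h != "none" and (info_i == "none" or info_i in M)
        if enabled and (perc_h, act_h) in transitions:
            perc_k, n = transitions[(perc_h, act_h)]
            new_M = (M - {info_i}) | ({info_j} if info_j != "none" else set())
            return (act_h, perc_k, n), new_M
    return None

# Invented toy data: one task reacting to perception "ready" by doing "press".
tasks = [("none", "ready", "press", "done-flag")]
transitions = {("ready", "press"): ("confirmed", 0)}
C0, M0 = ("none", "ready", 0), {("goal", "press")}
print(interacting(tasks, transitions, C0, M0))
# (('press', 'confirmed', 0), {('goal', 'press'), 'done-flag'})  -- set order may vary
```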
The Maude implementation consists of two modules: the module of the entities, that is, the perceptions, actions and information that can be stored
in the STM; and the module of the architecture, which describes the structure of
tasks, STM, LTM and interfaces, and the Maude rewrite rules that work on
these structures. The analysis is done with the Maude model checker, which
requires the construction of two further modules: preds and check. The second one
includes the properties to be verified and runs the model checker; the first one
defines predicates on perceptions, actions and STM information (P_cogn(e),
P_act(e), P_perc(e)).
In the ATM case study, for instance, the property AlwaysCardBack, expressed in temporal logic, is verified; it states that the user
is always able to collect a returned card:

AlwaysCardBack = □(P_perc(cardO) → (¬P_perc(cardR) U P_act(cardC)))
3.2.3 The ATM Case Study
Let:
Π = {cardR, pinR, cashO, cardO}
Σ = {cardI, pinI, cashC, cardC}
Γ = {cardB}
∆ ≠ ∅
A simple ATM task (the user has to withdraw cash) is modeled with
four basic tasks:
1. none ↑ cardR ⟹ cardI ↓ cardB
The interface is perceived as ready (cardR), the user inserts the card
(cardI) and remembers in the STM that the card has to be taken back
(cardB).
2. none ↑ pinR ⟹ pinI ↓ none
The interface is perceived to request the PIN (pinR) and the user inserts
it (pinI).
3. none ↑ cashO ⟹ cashC ↓ none
The cash has been delivered, the user perceives it and collects the
cash (cashC).
4. cardB ↑ cardO ⟹ cardC ↓ none
The card has been returned (cardO), the user perceives it, collects
the card (cardC) and no longer needs to remember to collect it (cardB).
The goal is formally modeled as goal(cashC, achieve).
The transitions of the new ATM (which returns the card before delivering
the cash) are modeled as follows:
1. cardR!0 —cardI→ pinR!1
2. pinR!1 —pinI→ cardO!1
3. cardO!1 —cardC→ cashO!1
4. cashO!1 —cashC→ cardR!0
5. pinR!2 → cardO!1
6. cashO!2 → cardR!0
7. cardO!2 → cardR!0
The first four transitions describe the passage from one state to the following one
by executing an action, thus they describe the interaction between user
and computer. For example, in the interaction between the human component
and the ATM it may occur that the card is requested by the ATM (state cardR),
the user inserts the card (action cardI) and the ATM requests the PIN (state
pinR). The other transitions describe situations where there is no interaction
with the user: for instance, when the PIN is requested but the user does not
insert it within the established time, the timeout expires (state pinR!2) and
the card is returned (state cardO!1).
The initial state is none cardR!0.
The experienced user, acting under automatic control, is modeled by the following assessments:
1. assess(cardI, pinR) = auto
2. assess(pinI, cardO) = auto
3. assess(cardC, cashO) = auto
4. assess(cashC, cardR) = auto
These basic tasks and transitions are illustrated by the small sketch below.
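As an illustration only (not the Maude model of [9]), the sketch below encodes the new ATM's first four transitions as a Python dictionary and walks the ideal interaction, tracking when the reminder cardB enters and leaves the STM; it shows that, with the card returned before the cash, cardB is already gone when the goal action cashC closes the task.

```python
# Toy walk through the new ATM (card returned before cash), tracking the STM.
transitions = {  # (state, action) -> next state
    ("cardR", "cardI"): "pinR", ("pinR", "pinI"): "cardO",
    ("cardO", "cardC"): "cashO", ("cashO", "cashC"): "cardR",
}
tasks = {  # perception -> (action performed, info stored, info removed)
    "cardR": ("cardI", "cardB", None), "pinR": ("pinI", None, None),
    "cardO": ("cardC", None, "cardB"), "cashO": ("cashC", None, None),
}

state, stm = "cardR", set()
while True:
    action, store, remove = tasks[state]
    stm |= {store} if store else set()
    stm -= {remove} if remove else set()
    print(f"{state}: do {action}, STM = {stm}")
    state = transitions[(state, action)]
    if action == "cashC":  # goal achieved (goal(cashC, achieve)); closure empties the STM
        break
print("card reminder left in STM at closure:", "cardB" in stm)  # False
```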
4 Proposal and Work Plan
The proposed Ph.D. thesis has as its main objective to create a computational model that adds to the traditional cognitive model the memory system,
the reward system and the decision system, in order to manage all the attributes that
a usable system must have (effectiveness, efficiency, learnability, satisfaction,
memorability, robustness to errors).
The cognitive framework proposed in [9] seems to be a good starting
point, since it adds the short term memory to a cognitive model for the evaluation of the emergence of errors. Therefore the idea is to modify it and
work on it, adding other characteristics useful for the evaluation of systems.
4.1 Remold the Cognitive Model Short Term Memory
In some cases the human-computer interaction fails and problems arise
from a lack of information, which may result from forgetting something.
For example, in a human-computer interaction it is frequently necessary to
remember something; if the number of pieces of information to remember exceeds
the personal STM limit, performance may be mediocre and the user
may not achieve the goal.
However, the STM model proposed in [9] is simplified, since the capacity
of the STM is not taken into account: the short term memory is modeled as a set
(M) in which nothing prevents the insertion of more than a fixed
number of elements.
Thus the primary target of the thesis proposal is to model the STM as
a richer data structure (such as a vector or an array) with a fixed size,
which collects a pre-established number of items and into which it is therefore not
possible to insert a surplus of information. In this way, one of the
properties to be satisfied when evaluating the usability of an interface is that
it does not require the memorization of more items than the size of the STM
(a minimal sketch of such a bounded structure is given below).
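A minimal Python sketch of what such a fixed-size STM could look like, assuming a capacity of 7 items and an overflow policy that simply rejects the insertion (both are assumptions; in the actual framework one could instead flag a usability violation or evict an item):

```python
# Sketch of an STM with fixed capacity; inserting beyond the bound is rejected,
# which is the condition a usable interface should never force the user into.
class BoundedSTM:
    def __init__(self, capacity=7):  # assumed capacity in the 7 +/- 2 range
        self.capacity = capacity
        self.items = []

    def store(self, info):
        """Store an item; return False (overflow) if the STM is already full."""
        if len(self.items) >= self.capacity:
            return False
        if info not in self.items:
            self.items.append(info)
        return True

    def retrieve(self, info):
        """Remove and return the item if present, else None."""
        if info in self.items:
            self.items.remove(info)
            return info
        return None

stm = BoundedSTM(capacity=3)  # tiny capacity just for the demo
print([stm.store(x) for x in ["cardB", "pin-ok", "goal", "extra"]])  # [True, True, True, False]
print(stm.retrieve("cardB"))  # 'cardB'
```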
Once the STM is modeled in Cerone's formalism, the aim is to learn
and use the Maude language and implementation to extend the framework.
4.2 Add the rewriting rule randomDelete to the Cognitive Model
As a second step, to manage the situation where the user forgets something
while interacting with the system, it is possible to add to Cerone's framework another rewriting rule that randomly deletes something from the STM:

if C = {act perc_h!m} and perc_h!m → perc_k!n ∈ I, with m > 1
then C′ = {none perc_k!n} and
M′ = M − {info_j}
It is possible to assign to each item in the STM a probability of being forgotten,
according to the number of times the user has rehearsed it. In this way the
model becomes stochastic: there is a path where everything goes
right and a path where the user forgets something and cannot
accomplish the goal, as sketched below.
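A minimal Python sketch of this stochastic forgetting, under the assumption (invented here purely for illustration) that each rehearsal halves an item's probability of being forgotten at a given step:

```python
import random

# Sketch of a probabilistic randomDelete step over the STM.
# Assumption for illustration: forgetting probability = base_p * 0.5 ** rehearsals.
def random_delete(stm, rehearsals, base_p=0.4, rng=random):
    """Return a copy of the STM where each item may be forgotten independently."""
    kept = set()
    for item in stm:
        p_forget = base_p * (0.5 ** rehearsals.get(item, 0))
        if rng.random() >= p_forget:
            kept.add(item)
    return kept

random.seed(0)  # reproducible demo
stm = {"cardB", "pin-ok"}
rehearsals = {"cardB": 0, "pin-ok": 3}  # cardB never rehearsed: more likely to be lost
lost = sum(1 for _ in range(10000) if "cardB" not in random_delete(stm, rehearsals))
print(f"cardB forgotten in about {lost / 10000:.0%} of steps")  # roughly 40%
```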
In the ATM case study, for instance, if the information cardB has a high
probability of being forgotten, the user may forget the card inside the ATM,
so the property to be checked is no longer whether the card is always collected
but rather what the probability is that the user forgets the card.
By adding this rewriting rule it is possible to design a system that manages
these kinds of situations properly, for example by adding a reminder
for the user.
4.3 Test the validity of the Cognitive Model
As seen in Section 2.2, the capacity of the STM varies from user to user.
The test conducted in [13] is an example of how to obtain the personal capacity.
Subjects have to read a series of sentences aloud. There are 60 sentences,
divided into three sets each of two, three, four, five and six sentences.
The subject has to read every set, reading the sentences one after the other.
When a set is finished, he has to recall the last word of each sentence.
He continues with longer sets of sentences until he fails all three sets at a
particular level. The level at which the subject is correct on two out of
three sets is taken as a measure of the subject's STM span (a small scoring
sketch follows below).
The idea is to create a web site where, first, the user's span is measured with such a test;
second, a form spread over different pages is tested, where the user has to remember something
from one page to the others. The same task is then run against the model, to see whether
the model actually predicts the errors made by the users.
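A small Python sketch of how the reading-span score could be computed from the test results; the data layout is an assumption, while the scoring rule follows the description above (the span is the highest set length at which at least two of the three sets are recalled correctly).

```python
# Sketch: compute a subject's reading span from per-level results.
# results[level] = number of sets (out of 3) whose final words were all recalled.
def reading_span(results):
    span = 0
    for level in sorted(results):   # levels are the set lengths: 2, 3, 4, 5, 6
        if results[level] >= 2:     # correct on at least two of the three sets
            span = level
    return span

subject = {2: 3, 3: 3, 4: 2, 5: 1, 6: 0}   # hypothetical subject's results
print(reading_span(subject))                # 4
```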
4.4 Add the Long Term Memory to the Cognitive Model
In the framework of [9] the LTM update is not modeled. Information stored in
the LTM is used to solve problems where automatic control is not appropriate and deliberate control has to be used. This activation is managed
by the decision rule; however, there is no set of entities that represents
the LTM, as happens for the STM.
Nevertheless, modeling the LTM can solve several problems concerning
the evaluation of the usability of a system: in particular, problems
concerning the production of certain kinds of errors by the user, the learnability
of the system and the memorability of the system.
There are situations in which the user makes an error not because he
forgets to do something but because he never did it or never learned
it. If, for instance, a user does not know how to type the character “@”,
the interaction with some kinds of systems can produce errors which are not
signs that the system is not usable.
In some sense it is possible to describe the interaction between the user
and the system as a list of things that the user has to know: a user who
has to use e-mail surely has to know how to use the keyboard, how to
type some particular characters, how to use the mouse and point it at a
specific place on the screen, and so on; if he does not succeed because he never
learned these things, it is clear that he cannot use the system.
On the other hand, the LTM can be seen as the entire knowledge of the
user, since it contains information with slow access but little or no decay:
a sort of list, in this case too, of the things the user is able to do.
By modeling the LTM as a list and by providing a list of things that
the user who wants to interact with a system has to know, it is possible to
compare the two lists and manage situations where the user is not able to
do something, avoiding the emergence of errors (a minimal sketch of this
comparison follows below).
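A minimal Python sketch of this comparison, with both lists represented as sets; the specific knowledge items are, of course, invented for illustration.

```python
# Sketch: compare the knowledge a system requires with the knowledge in the user's LTM.
required_knowledge = {"use keyboard", "type '@'", "use mouse", "fill web form"}
user_ltm = {"use keyboard", "use mouse", "fill web form"}

missing = required_knowledge - user_ltm
if missing:
    # Errors caused by these gaps are not signs of poor usability of the system itself.
    print("user cannot be expected to succeed; missing knowledge:", missing)
```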
Learnability is the characteristic of a system that is easy to learn
and, as we have already seen, everything the user learns about the system
he is interacting with goes into the user's LTM. Therefore, by modeling the
LTM it is also possible to describe learnability by comparing
the LTM before and after use. Moreover, time also comes into learnability: the time the user takes to learn to use the system and the number of
times he has to use it before he learns how. Thus it would be appropriate
to join to this kind of analysis an analysis of the time the user takes to “insert”
the information into the LTM and the number of times the user uses
the system before he has all the information about the system in the LTM.
Finally, memorability is the characteristic of a system that is easy to remember: when the user does not use it for some period, he has no problems
using it again and does not have to learn everything all over again. Once
again, by modeling the LTM it is possible to identify whether the user actually remembers
the system and whether he remembers how to use it.
The LTM could have the same structure as the STM modeled in [9], that is, a
set of information. In this case the size of the list is not a problem,
since in the LTM it is possible to store information without space limitations.
4.5 Add the Reward System to the Cognitive Model
Satisfaction is the characteristic of a user who uses a usable system: the
system should be pleasant to use and the user has to like it.
As already said, the reward system is a group of neural structures responsible for pleasure, positive reinforcement and incentive salience. In
particular, the neurotransmitter dopamine creates a sensation of satisfaction,
gratification and motivation (or punishment) by stimulating attention, memory,
learning, behavior, cognition and voluntary movement.
Adding the reward system to the cognitive model would solve the problem of evaluating the user's satisfaction.
Presumably the reward system is too complicated to model in
Maude, and a more fitting formalism is needed, with which the memory system,
the reward system and the decision system can all be modeled.
In [23] the dopaminergic system has been modeled to simulate internet addiction in subjects; in that work the system is modeled through
two parallel hybrid automata.
Thus the entire cognitive framework could be modeled as a hybrid
automaton, in which it is possible to describe the functioning of both the reward
system (described through differential equations) and the memory system
(described in the Maude implementation); a toy sketch of such continuous dopamine dynamics is given below.
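Purely as an illustration of the kind of continuous dynamics a hybrid-automaton component could contain (the equation, the parameters and the reward times below are invented and are not the model of [23]), here is a toy Euler simulation of a dopamine-like level that decays exponentially and jumps when a reward, expected or actual, occurs:

```python
# Toy continuous dynamics for a dopamine-like signal: dD/dt = -k * D, plus
# an impulse of size r whenever a reward (or an expected reward) occurs.
def simulate(duration=10.0, dt=0.01, k=0.8, r=1.0, reward_times=(2.0, 5.0, 8.0)):
    level, trace, t = 0.0, [], 0.0
    pending = sorted(reward_times)
    while t < duration:
        if pending and t >= pending[0]:
            level += r               # reward impulse (could also fire on mere expectation)
            pending.pop(0)
        level += -k * level * dt     # exponential decay (Euler step)
        trace.append((round(t, 2), round(level, 3)))
        t += dt
    return trace

trace = simulate()
print(trace[205])  # shortly after the first reward around t = 2.0 s
print(trace[400])  # partially decayed level around t = 4.0 s
```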
4.6 Add the Decision System to the Cognitive Model
Finally, adding the decision system to the cognitive model would make it possible
to manage the situations in which the user has to decide between two (or more)
different actions.
References
[1] Morten Sieker Andreasen, Henrik Villemann Nielsen, Simon Ormholt
Schrøder, and Jan Stage. What happened to remote usability testing?:
an empirical study of three methods. In Proceedings of the SIGCHI
conference on Human factors in computing systems, pages 1405–1414.
ACM, 2007.
[2] Richard C Atkinson and Richard M Shiffrin. Human memory: A proposed system and its control processes. Psychology of learning and
motivation, 2:89–195, 1968.
[3] Rita L Atkinson et al. Hilgard’s introduction to psychology, volume 12.
Harcourt Brace College Publishers Philadelphia PA, 1996.
[4] Kent C Berridge and Morten L Kringelbach. Affective neuroscience
of pleasure: reward in humans and animals. Psychopharmacology,
199(3):457–480, 2008.
[5] Stuart K Card, Thomas P Moran, and Allen Newell. Computer text-editing: An information-processing analysis of a routine cognitive skill.
Cognitive psychology, 12(1):32–74, 1980.
[6] Stuart K Card, Thomas P Moran, and Allen Newell. The keystroke-level
model for user performance time with interactive systems. Communications of the ACM, 23(7):396–410, 1980.
[7] Stuart K Card, Allen Newell, and Thomas P Moran. The psychology
of human-computer interaction. 1983.
[8] James H Carlisle. Evaluating the impact of office automation on top
management communication. In Proceedings of the June 7-10, 1976, national computer conference and exposition, pages 611–616. ACM, 1976.
[9] Antonio Cerone. A cognitive framework based on rewriting logic for the
analysis of interactive systems. In International Conference on Software
Engineering and Formal Methods, pages 287–303. Springer, 2016.
[10] Antonio Cerone, Judy Bowen, Steve Reeves, Tiziana Margaria, Julia
Padberg, and Gabriele Taentzer. Closure and attention activation in
human automatic behaviour: A framework for the formal analysis of
interactive systems. 2011.
[11] Antonio Cerone, Simon Connelly, and Peter Lindsay. Formal analysis of human operator behavioural patterns in interactive surveillance
systems. Software & Systems Modeling, 7(3):273–286, 2008.
[12] Antonio Cerone, Peter A Lindsay, and Simon Connelly. Formal analysis
of human-computer interaction using model-checking. In Third IEEE
International Conference on Software Engineering and Formal Methods
(SEFM’05), pages 352–361. IEEE, 2005.
[13] Meredyth Daneman and Patricia A Carpenter. Individual differences
in working memory and reading. Journal of verbal learning and verbal
behavior, 19(4):450–466, 1980.
[14] Alan John Dix. Formal methods for interactive systems, volume 16.
Academic Press London, UK, 1991.
[15] Joseph S Dumas and Janice Redish. A practical guide to usability testing. Intellect Books, 1999.
[16] Wayne D Gray and Marilyn C Salzman. Damaged merchandise?
a review of experiments that compare usability evaluation methods.
Human–Computer Interaction, 13(3):203–261, 1998.
[17] Boris S Gutkin, Stanislas Dehaene, and Jean-Pierre Changeux. A
neurocomputational hypothesis for nicotine addiction. Proceedings of
the National Academy of Sciences of the United States of America,
103(4):1106–1111, 2006.
[18] Melody Y Ivory and Marti A Hearst. The state of the art in automating usability evaluation of user interfaces. ACM Computing Surveys
(CSUR), 33(4):470–516, 2001.
[19] Bonnie E John and David E Kieras. The goms family of user interface
analysis techniques: Comparison and contrast. ACM Transactions on
Computer-Human Interaction (TOCHI), 3(4):320–351, 1996.
[20] Barry Kirwan. A guide to practical human reliability assessment. CRC
press, 1994.
[21] Michael C Medlock, Dennis Wixon, Mark Terrano, R Romero, and Bill
Fulton. Using the rite method to improve products: A definition and a
case study. Usability Professionals Association, 51, 2002.
[22] George A Miller. The magical number seven, plus or minus two: Some
limits on our capacity for processing information. Psychological review,
63(2):81, 1956.
[23] Lucia Nasti. Modelling and simulation of the dopaminergic system in addiction context: the case of internet addiction. 2016.
[24] Jakob Nielsen. Usability engineering. Elsevier, 1994.
[25] Donald A Norman and Tim Shallice. Attention to action. In Consciousness and self-regulation, pages 1–18. Springer, 1986.
[26] Judith Reitman Olson and Gary M Olson. The growth of cognitive
modeling in human-computer interaction since goms. Human–computer
interaction, 5(2-3):221–265, 1990.
[27] Laila Paganelli and Fabio Paternò. Tools for remote usability evaluation
of web applications through browser logs and task models. Behavior
Research Methods, Instruments, & Computers, 35(3):369–378, 2003.
[28] Fabio Paternò. Interazione uomo-computer.
[29] Fabio Paternò. Concurtasktrees: an engineered notation for task models. The handbook of task analysis for human-computer interaction,
pages 483–503, 2004.
[30] Fabio Paterno. Model-based design and evaluation of interactive applications. Springer Science & Business Media, 2012.
[31] Fabio Paternò, Antonio Schiavone, and Antonio Conti. Bad usability
smells in web mobile interactions. 2016.
[32] Ivan Petrovich Pavlov. Conditioned Reflexes: An Investigation of the Physiological Activity of the Cerebral Cortex. Trans. and ed. G. V. Anrep. London: Oxford University Press, 1927.
[33] Roy D Pea. User centered system design: new perspectives on human-computer interaction. Journal of Educational Computing Research, 3:129–134, 1987.
[34] Wolfram Schultz. Neuronal reward and decision signals: from theories
to data. Physiological Reviews, 95(3):853–951, 2015.
[35] Giovanni Serpelloni. Gambling. Manuale per i Dipartimenti delle Dipendenze, 2013.