Shifting and inhibition in cognitive control
Mario Gianni and Panagiotis Papadakis and Fiora Pirri
ALCOR, Vision, Perception and Cognitive Robotics Laboratory
Department of Computer and System Sciences, University of Rome 'La Sapienza', Italy
{gianni, papadakis, pirri}@dis.uniroma1.it
Abstract— Shifting and inhibition are executive cognitive functions responding selectively to stimuli, so as to switch from one activity to a more compelling one, or to inhibit inappropriate urges and preserve focus on the current task. In an autonomous system these cognitive skills are crucial to ensure a well-regulated reactive behavior, which is of particular relevance in critical circumstances. In this paper we develop an approach to shifting and inhibition as cost-response functions. These functions are defined within a rich probabilistic structure modelling the stimuli and the way stimuli are taken into account during the execution of a task. This is a preliminary account of task switching in which we consider only the stimulus-response side of the problem, although to account for the main shifting and inhibition controls we have to introduce the concept of tasks and related actions and constraints.
I. INTRODUCTION
Task switching is a crucial aspect of cognitive control
modeling the human ability to adapt to changing circumstances and stimuli. In fact, the ability to selectively respond
to several stimuli, and to inhibit inappropriate urges while focusing on the task at hand, is well known to exist in humans in the form of the shifting and inhibition executive functions [1], [2]. The neuroscience theory of control uses inhibition to explain many cognitive activities, first and foremost to explain how a subject, in the presence of several stimuli, responds selectively and is able to resist inappropriate urges (see [3]). Harnishfeger [4], [5] defines cognitive inhibition as a form of forgetting previously activated cognitive processes and of resisting interference from processes or contents not relevant to the current task. The executive functions of
inhibition and shifting explain the ability to flexibly switch
between tasks, when a reconfiguration of memory is required,
by disengaging from previous goals or task sets (see [6]). It
has been observed that there is a switch cost that can be
explained by the previous task-set inhibition (see [7]).
Since the early work of Jersild [8], several studies in neuro- and cognitive science have led to a better understanding of
many of the variables affecting task switching in the context
of cognitive control (we refer the reader to [9]–[13] and the
citations therein).
These theories on executive control processes and task switching have strongly influenced cognitive robotics architectures since the eighties, as for example the Norman and Shallice [14] ATA (attention to action) schema and the principles of goal-directed behavior in Newell [15] (for a review of these architectures in the framework of the task-switching paradigm see [16]).
However, only recently have task switching and the cognitive functions of shifting and inhibition underlying the ability to switch become a central topic in cognitive robotics. The earliest studies were carried out within brain-actuated interaction [17], mechatronics [18], learning [19] and planning [20]. More recently, several studies have highlighted the need to model task switching to cope with adaptivity and ecological behaviors in a dynamic environment, e.g. [21]–[26].
Experts on task switching argue that the decision to switch incurs a cost. This cost results from the interplay between the time needed to reconfigure a mental state and the time needed to resolve interference from a previous task set [12].
In this paper we describe an approach to shifting and inhibition based on the concept of a switching cost, namely the cost of reconfiguring a mental state and of resolving interference from a previous task set [12], by modeling the reconfiguration of a new task given the incoming stimuli and the current state. The approach models how to compute the stimuli, the stimuli cost, measured with respect to the current task, and, finally, the response. This preliminary model is embedded into a planning framework [27], [28] that we do not discuss here.
We consider stimuli occurring in time, and by attributing a cost to them we can decide whether the current action is to be interrupted, if the stimulus is afforded, or whether it should continue and the stimulus be disregarded.
The paper is organized as follows. In the next Section II
we introduce some preliminary concepts of the framework; in
Section III, we introduce the measures to evaluate the stimuli
cost. Further, we illustrate the response model in Section IV, and in Section V we describe the method for choosing the minimal-cost response, if it exists. The paper is concluded with some indications of possible future directions.
II. PRELIMINARY DEFINITIONS
In this section we introduce some preliminary definitions
concerning processes, the state of the system, the timelines
for assessing both reaction times (RT) and the response
stimulus intervals, and the specification of a task. As reported
in the literature, task switching incurs a switching cost. Monsell and colleagues [29] have studied what counts towards reducing this cost, in terms of preparation before the stimulus onset, and demonstrated that reaction time depends on the preparation for task changes: mean RT is longer (and the error rate usually greater) when the task changes than when the same task is performed as on the previous trial. Still, there is a residual cost [29], [30]. This unavoidable cost is crucial to model the reaction time to stimuli. In fact, from the point of view of modeling executive functions we shall consider the switching cost as the basis of the choice between inhibition and shifting. In this preliminary account of task switching we model two fundamental aspects: the stimulus onset, that is, what counts as a stimulus in terms of the stimulus gradient, and the response to the stimulus according to the specification of a cost. The models, in turn, require modeling the tasks, the processes involved and the awareness of the current state in order to account for the switch.

Fig. 1. The framework for shifting and inhibition, including the stimuli onset, the shifting cost estimation and the response model, embedded in a task-based cognitive control that exploits flexible planning.
We define a task as a set of processes specified by a starting time, while their end-time is either induced by switching
or by time constraints specified by a temporal network [31].
While some processes can be considered executive functions
of a component of the system undergoing the requirements
of goal-directed intentional behaviors such as find an object
or report on the obstacles at that location, shifting and
inhibition are executive functions induced by stimuli.
Processes, however, are not all the same. There are processes that start when the system starts and do not – and should not – get interrupted, in the sense that they are unconstrained by other processes; examples are the battery process, the wifi process and the interface process. These are the liveness processes. Other processes depend on one another for coordination, such as the motion of the head and the motion of the arm, the motion of the head and all its related vision processes, or the scanning and mapping processes; all these are regulated by time constraints, and are the goal-based processes. Still, these processes are subject to stimuli that contrast the top-down influence.
Time constraints regulate how goal-based processes interleave. A task, therefore, is defined as a set of starting actions, a set of process properties, such as being active or elapsed, and a set of end actions when the constraints require them. To better understand the framework we provide the following definitions concerning a process, the network of constraints, and the state of the system with respect to a time window.
Definition 1: Processes. A process is a program residing in memory at execution time and returning an information value which is called the yield of the process, or the outcome of the process. A process is therefore defined over a domain of values, a starting time, and a time frequency. Although processes have time frequencies that depend on the component issuing the process (for example, the time frequency of different cameras and the time frequency of the wifi signal), we maintain a time stamp given by the robot clock time. The information value can be either continuous or discrete, hence each process is defined to be a mapping π : D^m × t_0 × t_f ↦ x, where D is the domain of values and t_0, t_f ∈ R are the initial time and the time frequency. Processes are started by a start action, for example start_pan(Head, t_0, y), start_tilt(Head, t_0, y), start_zoom(Camera, t_0, y), start_up(Arm, t_0, y), and so on, where the first argument indicates the component the process activates with the starting action, the second is the time at which the process is initiated, and the last arguments denote the parameters of motion; the subscript indicates the name of the process. Each starting action has restrictions imposed by conditions determining whether the action is possible in the current state, and each process has preconditions and postconditions that are better defined in the whole framework, which we do not discuss here (see [20], [24]).
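Purely as an illustration, the following is a minimal sketch of how such a process record could be represented in software; the Process class, its fields and the example values are assumptions, not part of the framework:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Process:
    """Hypothetical record for a process as in Definition 1."""
    name: str                      # e.g. "pan", "zoom", "Battery"
    component: str                 # component activated by the start action
    t0: float                      # starting time (robot clock)
    t_freq: float                  # time frequency of the yield
    yield_fn: Callable[[float], Tuple[float, ...]]  # maps a time stamp to a feature vector x
    samples: List[Tuple[float, Tuple[float, ...]]] = field(default_factory=list)

    def sample(self, t: float) -> Tuple[float, ...]:
        """Record and return the yield x at robot clock time t."""
        x = self.yield_fn(t)
        self.samples.append((t, x))
        return x

# Example: a battery process started at system switch-on (t0 = 0), sampled every 0.001 s.
battery = Process("Battery", "Base", t0=0.0, t_freq=0.001, yield_fn=lambda t: (0.87,))
print(battery.sample(12.5))   # -> (0.87,)
```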
Examples of processes are given in Table I; note that the starting time is 0 for the liveness processes. In the table the abbreviations are c = certain, int = interesting, meas = measured. In Figures 4 and 5 we illustrate some examples of the yields of processes in 5 different trials.

TABLE I
EXAMPLE OF PROCESSES, THEIR INFORMATION VALUE AND TIME SPECIFICATION.

  process   | D^m   | start time | time freq | inf. value
  Battery   | R     | 0          | ms        | level
  WIFI      | R     | 0          | ms        | s. level / distance
  Vision    | R × I | t_0        | fps       | # int. points / object
  2DMap     | R^2   | t_k        | s         | (c. area) / (meas. area)
  3DMap     | R^2   | t_k        | s         | (c. surface) / (meas. surface)
  Interface | I × I | 0          | s         | requests

Definition 2: Temporal constraint network for a set of processes. Let P be a set of processes. A temporal constraint network (TCN) T for a set of goal-directed processes is defined by a set of interval constraints between pairs of processes, according to the interval algebra originally defined in [32], [33]; a TCN is illustrated in Figure 2. A network T is consistent if for each process π_i ∈ P and for each of its starting times t_0 = t_i^-, there exists an ending time t_i^+ which forms a solution set ν(T):

ν(T) = {[t_1^-, t_1^+], ..., [t_n^-, t_n^+] : t_i^+, t_i^- ∈ R^+, t_i^- < t_i^+}

satisfying all the temporal constraints between the processes.

Fig. 2. A temporal constraint network specifying the time constraints between goal-based processes. The edges of the network are labeled by the temporal laws; for example, the request that for a specific task two processes work in parallel can be accounted for by the temporal constraint overlap. The nodes of the network are labeled by the processes and the time interval of activity. Note that the starting time can remain uninstantiated up to when the process initiates; likewise the ending time can be instantiated when the process must end.

Definition 3: Tasks. Given a temporal constraint network T, a task τ is defined by the following values:
• The set of goal-based processes P labeling the network T and the set of liveness processes.
• The significant range values of the non-parametric probability (see Figure 5), for which a sampling window is learned based on several trials.
• The set of timelines for each component of the system that is involved in the task; see Figure 3 for an example.
• The effect-cause support of each process start action.
• The set of constraints of P.

Fig. 3. Constraints between processes are required to coordinate the activities of each component of the system.

Definition 4: Time frame. Let us indicate with T_C the time value of the internal system clock at a specific time instant. Consider the execution of a task, say τ, and let Active_GB(T_C) be the set of all the goal-based processes that are active at time T_C, namely all the goal-based processes that started at a time t < T_C and that have not yet ended. Let also Active_L(T_C) be the set of all liveness processes; we can note that these processes have been active since the switch-on of the system.

Let us order the initial times t_0 of all start actions that are in Active_GB(T_C): t_ord = t_1 ≤ ... ≤ t_n, where each t_i is the starting time of a specific process that is in Active_GB(T_C). The time frame is defined as:

t_frame = [min(t_ord), (max(t_ord) − min(t_ord) + ∆t)]    (1)

Here ∆t is the largest time frequency of the yield of the processes in Active_GB(T_C), at T_C. In other words, the time frame t_frame spans from the earliest started goal-based process which is still active up to the current time, plus an amount that corresponds to the time frequency of the process that has the largest time frequency in its yields, for example vision or mapping, as opposed to wifi. We shall see that the time frame is needed to establish the cost of the switch.

Definition 5: State of the system. The state of the system coincides with the memory of the active processes, namely the processes active in the time frame t_frame. Processes are active in the time frame memory together with the constraints regulating their interaction and together with the processes' yields.

III. THE STIMULI MODEL

The information yielded by a process, namely the yield of a process, is a mapping returning a vector x of features, obtained either from internal information, such as the level of the wifi signal, the state of the battery, or the pan angle of the head, or as the outcome of a sensory operation, such as the amount of area marked as measured with respect to the total area while mapping and localizing, and so on. In Figure 4 the vision process, measuring the maximal number of interest points in a 30 × 30 window, is seen to yield a one-dimensional feature; the process is repeated for 5 trials.

Fig. 4. Vision process outcomes in 5 different trials of 20 s. The outcome is measured in terms of the number of interesting points in the detection of cars (top) or persons (bottom), in an image window of size 30 × 30. The processes have clearly different behaviors and the goal is to learn what can be considered a stimulus, in terms of a difference between the normal detection of interesting points – which is disregarded – and the abnormal detection leading to a good probability of having detected the object looked for.
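Referring back to Definitions 4 and 5, the following minimal sketch (building on the hypothetical Process record above, and assuming the executive keeps a table of the currently active goal-based processes) shows how the time frame of Equation (1) and the state memory could be computed; all names and values are illustrative:

```python
from typing import Dict, List, Tuple

# Hypothetical bookkeeping for Active_GB(T_C): start time, yield period and recorded
# samples (time stamp, feature vector) of each goal-based process active at clock T_C.
active_gb: Dict[str, dict] = {
    "Vision": {"t0": 4.0, "t_freq": 1.0 / 25.0, "samples": [(5.0, (17.0,)), (6.0, (42.0,))]},
    "2DMap":  {"t0": 6.5, "t_freq": 1.0,        "samples": [(7.5, (0.31,))]},
}

def time_frame(active: Dict[str, dict]) -> Tuple[float, float]:
    """Equation (1), taken literally: the frame starts at the earliest start time of an
    active goal-based process and its extent is the spread of the start times plus the
    largest yield period Delta_t among those processes."""
    starts = [p["t0"] for p in active.values()]
    dt = max(p["t_freq"] for p in active.values())
    return (min(starts), max(starts) - min(starts) + dt)

def state(active: Dict[str, dict], frame: Tuple[float, float]) -> Dict[str, List[tuple]]:
    """Definition 5: the state is the memory of the yields produced inside t_frame,
    kept together with the constraints (omitted here) regulating the processes."""
    start, extent = frame
    return {name: [(t, x) for (t, x) in p["samples"] if start <= t <= start + extent]
            for name, p in active.items()}

frame = time_frame(active_gb)
print(frame)                      # -> (4.0, 3.5)
print(state(active_gb, frame))    # yields recorded between t = 4.0 and t = 7.5
```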
As outlined in the previous section, this information is available to the system when the process is active, and resides in memory. The yield of a process should not be seen as a stimulus onset, unless there are values that are out of the norm. For example, if the battery level becomes low and falls under a safe level, then we might consider the yield of the battery as a stimulus onset.
The problem we face in this section is to determine the activation region inside which we can establish the stimulus onset. Given a certain number of trials for each process, we use kernel density estimation, in particular the Epanechnikov kernel, to determine the shape of the density of each process. Namely, given a diagonal bandwidth matrix H for each process, which is adjusted experimentally according to the time frequency, the non-parametric density for a one-dimensional yield is:

\hat{f}_{\pi_i}(x,t) = \frac{1}{n}\sum_{i=1}^{n}\frac{1}{h_{(11)}\,h_{(22)}}\, K\!\left(\frac{x - X_i}{h_{(11)}}\right) K\!\left(\frac{t - T_i}{h_{(22)}}\right) \qquad (2)

Fig. 5. The two histograms show the usual values – the high values – and the unusual ones, which are those that suggest a possible shift.
Here K(u) = (3/4)(1 − u^2) I(|u| ≤ 1), with u = (x − X)/h and X ranging over the domain. The kernel has the job of smoothing the histogram and making the bins more flexible. The histograms of the yields illustrated in Figure 4 are shown in Figure 5. The maximal values of the histograms gather those values that have the highest frequency of occurrence while the process is active, that is, those that in principle do not induce a stimulus, while the stimulus onset lies around the minimal values. The 2 × 2 Hessian is formed by f_{xx}, f_{xt}, f_{tt}, and for example f_{tt} is:

f_{tt} = \frac{1}{nh}\sum_{i=1}^{n} -\,\frac{9\, K\!\left(\frac{x - X_i}{h_{(11)}}\right)}{8h^{2}}\; I\!\left(\left|\frac{t - T_i}{h_{(22)}}\right| \le 1\right) \qquad (3)
Fig. 6. When the kernel values f_{xx} and f_{tt}, for positive eigenvalues of the Hessian, lead to values of ξ in the activation area (the red area), the stimulus is onset.
The rarer events, those having a lower probability of occurring – somehow the unexpected values – are those for which all the eigenvalues of the Hessian are positive, given that the Hessian is non-singular. Indeed, the local minima of the non-parametric density, which we have illustrated in Figure 5 as a histogram, are stimuli onsets. Changes are non-relevant when the values are such that

ξ = tan^{-1}(f_{xx} + i f_{tt}) ≤ 1

We call this the indifference region, illustrated as the blue region in Figure 6, while the red area is the activation area. Stimuli are triggered when their values belong to the activation region.
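A minimal sketch of the stimulus-onset test of this section, assuming a single one-dimensional yield, a finite-difference Hessian in place of the analytic derivatives of Equation (3), and reading ξ as the phase of f_xx + i f_tt; the function names and the toy data are illustrative only:

```python
import numpy as np

def epan(u):
    """Epanechnikov kernel K(u) = 3/4 (1 - u^2) on |u| <= 1, zero elsewhere."""
    u = np.asarray(u, dtype=float)
    return 0.75 * (1.0 - u**2) * (np.abs(u) <= 1.0)

def density(x, t, X, T, h1, h2):
    """Product-kernel estimate of Equation (2) at the point (x, t)."""
    return np.mean(epan((x - X) / h1) * epan((t - T) / h2)) / (h1 * h2)

def hessian(x, t, X, T, h1, h2, eps=1e-3):
    """Finite-difference Hessian of the density (a numerical stand-in for Eq. (3))."""
    f = lambda a, b: density(a, b, X, T, h1, h2)
    fxx = (f(x + eps, t) - 2 * f(x, t) + f(x - eps, t)) / eps**2
    ftt = (f(x, t + eps) - 2 * f(x, t) + f(x, t - eps)) / eps**2
    fxt = (f(x + eps, t + eps) - f(x + eps, t - eps)
           - f(x - eps, t + eps) + f(x - eps, t - eps)) / (4 * eps**2)
    return np.array([[fxx, fxt], [fxt, ftt]])

def stimulus_onset(x, t, X, T, h1, h2, xi_max=1.0):
    """Treat (x, t) as a stimulus onset when it sits at a local minimum of the density
    (both Hessian eigenvalues positive) and the angle xi exceeds the indifference
    threshold; here xi is read as the phase of f_xx + i f_tt."""
    H = hessian(x, t, X, T, h1, h2)
    eig = np.linalg.eigvalsh(H)
    xi = np.angle(complex(H[0, 0], H[1, 1]))
    return bool(np.all(eig > 0) and xi > xi_max)

# Toy usage: one 20 s trial of a vision process (interest-point counts around 40);
# query the learned density at a usual value and run the onset test at a rare low value.
rng = np.random.default_rng(0)
T = np.linspace(0.0, 20.0, 200)
X = 40 + 3 * rng.standard_normal(200)
print(density(38.0, 10.0, X, T, 3.0, 2.0), stimulus_onset(12.0, 10.0, X, T, 3.0, 2.0))
```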
IV. STIMULI-RESPONSE MAPPING
In this section we discuss how to model the response when stimuli are in the activation region. In many cases several stimuli can occur in the frame time t_frame, while several parallel processes are running within a task, given the current system state. Furthermore, several responses might be possible, according to the triggered stimuli, while only one response can be given. It is therefore necessary to learn the stimuli-response mapping as a distribution specifying the probability that a task τ_i is selected given the stimuli occurring while task τ_j is running. To introduce the distribution we first need the following definition.
Task stimuli at t_frame. Given a state S and the set of processes P in the set Active_GB(T_C) (see Definition 4), for the task τ_q let ξ = {ξ_1, ..., ξ_m} be the stimuli onsets for all the processes in Active_GB(T_C). The task stimuli at t_frame is the binary vector Z = (z_1, ..., z_m)^T such that z_i = 1 if ξ_i belongs to the activation area and 0 otherwise. We can note that there can be at most one stimulus for each process in Active_GB(T_C), hence 0 ≤ Σ_{i=1}^m z_i ≤ m.
Let us now consider K trials of the task τ_q, at fixed ∆t, and for each such trial we manually code the vector X = (X_{11}, ..., X_{1m}, ..., X_{n1}, ..., X_{nm}), with n the number of tasks of the system and m the number of stimuli of the task τ_q, defined as follows. Initially all X_{ij} have value β; then at trial k, k = 1, ..., K:

X_{ij}^{k} = \begin{cases} X_{ij}^{k-1} & \text{if } Z_i = 0 \text{ or no task } \tau_j \text{ is chosen;}\\ X_{ij}^{k-1} + 1 & \text{if } Z_i = 1 \text{ and task } \tau_j \text{ is chosen, } j \neq q. \end{cases} \qquad (4)
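A minimal sketch of the trial coding of Equation (4), assuming stimuli index the rows and tasks the columns as in its subscripts; the function names are illustrative:

```python
import numpy as np
from typing import Optional

def init_counts(m_stimuli: int, n_tasks: int, beta: float) -> np.ndarray:
    """All counts X_ij start at the contagion parameter beta (Equation (4), before trial 1)."""
    return np.full((m_stimuli, n_tasks), beta, dtype=float)

def update_counts(X: np.ndarray, Z: np.ndarray, chosen_task: Optional[int], q: int) -> np.ndarray:
    """One trial of Equation (4): X_ij is incremented only when stimulus i fired (Z_i = 1)
    and task j was chosen in response, with j different from the running task q."""
    X = X.copy()
    if chosen_task is None or chosen_task == q:
        return X                                  # no switch: all counts unchanged
    X[np.asarray(Z) == 1, chosen_task] += 1.0
    return X

# Toy usage: m = 2 stimuli of the running task q = 0, n = 3 tasks, beta = 0.5.
X = init_counts(2, 3, beta=0.5)
X = update_counts(X, Z=np.array([1, 0]), chosen_task=2, q=0)   # stimulus 0 fired, task 2 chosen
print(X)
```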
If the tasks were assigned independently of one another, the task distribution for the task stimuli would be multinomial with parameters µ_1, ..., µ_{m×n}. More realistically, the assignment follows a contagion model, because once a task is selected the way it is further selected changes according to a parameter β, called the contagion parameter [34]; the model converges to the multinomial distribution as β approaches infinity. The β parameter incorporates a learning behavior from past steps in choosing the task to assign to the stimulus. The resulting distribution is a simplified version of the multivariate Polya distribution, so the joint probability mass function for X is:

p_X(x_{11}, x_{12}, \ldots, x_{1m}, \ldots, x_{n1}, \ldots, x_{nm}) = \frac{k!}{\prod_{(i,j)\in H} x_{ij}!}\;\frac{\Gamma((m \times n)\beta)}{\Gamma((m \times n)\beta + k)}\;\prod_{(i,j)\in H}\frac{\Gamma(x_{ij}+\beta)}{\Gamma(\beta)} \qquad (5)

Here k = Σ_{(i,j)∈H} x_{ij} and H = {(1,1), ..., (1,m), ..., (n,1), ..., (n,m)}. Thus we have a single parameter β which needs to be estimated. In order to tune β, a test of fit is conducted using a Monte Carlo version of the multinomial goodness-of-fit test [35].
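A minimal sketch of evaluating the mass function of Equation (5) in log form with Python's math.lgamma; the crude scan over β at the end is only an illustration and is not the Monte Carlo goodness-of-fit procedure of [35]:

```python
import math
from typing import Sequence

def log_pmf(counts: Sequence[Sequence[float]], beta: float) -> float:
    """Log of Equation (5): a symmetric multivariate Polya (Dirichlet-multinomial)
    mass function over the m*n task/stimulus cells, with the single parameter beta."""
    x = [v for row in counts for v in row]        # the x_ij over H, flattened
    k = sum(x)
    a = beta * len(x)                             # (m x n) * beta
    log_p = math.lgamma(k + 1) - sum(math.lgamma(v + 1) for v in x)      # k! / prod x_ij!
    log_p += math.lgamma(a) - math.lgamma(a + k)                         # Gamma((mn)b) / Gamma((mn)b + k)
    log_p += sum(math.lgamma(v + beta) - math.lgamma(beta) for v in x)   # prod Gamma(x_ij + b) / Gamma(b)
    return log_p

# Toy usage: counts accumulated over K trials for n = 3 tasks and m = 2 stimuli.
counts = [[4.0, 0.0], [1.0, 2.0], [0.0, 1.0]]
for beta in (0.1, 0.5, 1.0, 5.0):
    print(beta, log_pmf(counts, beta))   # a crude likelihood scan over beta, for illustration
```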
V. RESPONSE SELECTION
The reaction of the robotic system at t_frame, while executing task τ_q, is described by the distribution defined in (5), which gives a stimulus-response mapping. We now have to determine the cost of choosing a task given a stimulus, in order to improve the response distribution.

For each yield x_{π,t} of a process π we define the following distance value

d_i = |ξ_{π,t} − γ|

with γ the upper contour of the indifference region. Let now ψ be the set of starting actions in Active_GB(T_C), as specified in Definition 4, and let ϕ be the set of liveness processes together with the processes that are still active although their starting action is not in Active_GB(T_C). For each process active in t_frame we define the set of constraints C_A linking the process to other processes not in Active_GB(T_C). Now ψ ∪ ϕ ∪ C_A defines a set that effectively opposes a possible decision to switch, and thus it represents a cost:

Cost(τ) = max(#{ψ ∪ ϕ ∪ C_A})

The value of Cost(τ) represents the cost to shift from the current state S of the system to the state in which the preconditions for the execution of the task τ′ responding to the stimulus are satisfied. The cost of inhibition Cost_I is, instead, given a priori for each stimulus that is known to possibly occur.
Given the specifications Ξ = {τ_1, ..., τ_n} of the system (see Definition 3), we define a function F : Ξ → R as follows:

F(\tau) = w \left(\sum_{k=1}^{K} d_k\, \delta_k\right) f(\psi) \qquad (6)

Here w is the weight of the task and δ_k ∈ {0, 1} selects which distances d_k further weight the task according to the stimuli-response mapping.

The reaction to the stimuli occurrence is selected by solving the following optimization problem:

\min_{\tau_i \in \Xi} \; F(\tau_i) \quad \text{subject to} \quad Cost(\tau_i) \le Cost_I(\xi)_i, \; i = 1, \ldots, n.

If no solution τ_i of the problem exists, then the stimulus is inhibited.
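A minimal sketch of the selection step above, simplified to one inhibition bound per candidate task; the task names and the numerical values of F, Cost and Cost_I are made up:

```python
from typing import Dict, List, Optional

def response_selection(tasks: List[str],
                       F: Dict[str, float],
                       cost: Dict[str, float],
                       cost_inhibition: Dict[str, float]) -> Optional[str]:
    """Pick the feasible task of minimal F; return None when every candidate violates
    its inhibition bound, in which case the stimulus is inhibited."""
    feasible = [t for t in tasks if cost[t] <= cost_inhibition[t]]
    if not feasible:
        return None                        # no admissible switch: inhibit the stimulus
    return min(feasible, key=lambda t: F[t])

# Toy usage with hypothetical values for three candidate tasks.
tasks = ["reach_victim", "remap_area", "report_obstacle"]
F = {"reach_victim": 2.1, "remap_area": 3.4, "report_obstacle": 1.7}
cost = {"reach_victim": 2.0, "remap_area": 5.0, "report_obstacle": 4.0}
cost_I = {"reach_victim": 3.0, "remap_area": 4.0, "report_obstacle": 3.5}
print(response_selection(tasks, F, cost, cost_I))   # -> "reach_victim"
```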
VI. CONCLUSION
In this paper we have introduced a preliminary model for task switching, addressing the stimulus onset, the switch cost and, finally, the response model. The topic is becoming of great interest in the cognitive robotics community, likewise in the attention and vision communities, as it faces an important problem: modeling processes as means for inducing changes in goal-based behaviors, so as to ensure adaptation to a changing environment. Clearly, we face scenarios different from those devised in neuroscience, and thus our paradigms are slightly different. Still, one main difficulty concerns all the implications of shifting and inhibition in terms of response times and intervals, studied in great detail in neuroscience; on the other hand there is the need to model the complete framework on which task switching can be based, since the definition of the cost requires a clear understanding of all the connections of a process with other ones, such as future processes linked to the present ones by constraints. These are, indeed, the aims of the ongoing research.
REFERENCES
[1] E. Miller and J. Cohen, "An integrative theory of prefrontal cortex function," Annual Review of Neuroscience, vol. 24, pp. 167–202, 2001.
[2] A. R. Aron, "The neural basis of inhibition in cognitive control," The Neuroscientist, vol. 13, pp. 214–228, 2007.
[3] S. P. Tipper, "Does negative priming reflect inhibitory mechanisms? A review and integration of conflicting views," Quarterly Journal of Experimental Psychology, vol. 54, pp. 321–343, 2001.
[4] K. K. Harnishfeger and D. F. Bjorklund, The evolution of inhibition mechanisms and their role in human cognition and behavior, 1995, pp. 141–173.
[5] K. Kipp and R. S. Pope, "Intending to forget: The development of cognitive inhibition in directed forgetting," Journal of Experimental Child Psychology, vol. 62, pp. 292–315, 1996.
[6] U. Mayr and S. Keele, "Changing internal constraints on action: The role of backward inhibition," Journal of Experimental Psychology, vol. 129, no. 1, pp. 4–26, 2000.
[7] A. Philipp and I. Koch, "Task inhibition and task repetition in task switching," The European Journal of Cognitive Psychology, vol. 18, no. 4, pp. 624–639, 2006.
[8] A. Jersild, "Mental set and shift," Archives of Psychology, vol. 89, pp. 5–82, 1927.
[9] R. Rogers and S. Monsell, "The costs of a predictable switch between simple cognitive tasks," Journal of Experimental Psychology: General, vol. 124, pp. 207–231, 1995.
[10] T. Ionescu, "Exploring the nature of cognitive flexibility," New Ideas in Psychology, vol. 30, no. 2, pp. 190–200, 2012.
[11] C. Chamberland and S. Tremblay, "Task switching and serial memory: Looking into the nature of switches and tasks," Acta Psychologica, vol. 136, pp. 137–147, 2011.
[12] S. Monsell, "Task switching," Trends in Cognitive Sciences, vol. 7, no. 3, pp. 134–140, 2003.
[13] J. Rubinstein, D. Meyer, and J. Evans, "Executive control of cognitive processes in task switching," Journal of Experimental Psychology: Human Perception and Performance, vol. 27, no. 4, pp. 763–797, 2001.
[14] D. A. Norman and T. Shallice, "Attention to action: Willed and automatic control of behaviour," in Consciousness and Self-Regulation: Advances in Research and Theory, vol. 4. Plenum Press, 1986.
[15] A. Newell, Unified Theories of Cognition. Harvard University Press, 1990.
[16] J. Rubinstein, D. E. Meyer, and J. E. Evans, "Executive control of cognitive processes in task switching," Journal of Experimental Psychology: Human Perception and Performance, vol. 27, no. 4, pp. 763–797, 2001.
[17] J. d. R. Millán, F. Renkens, J. Mouriño, and W. Gerstner, "Brain-actuated interaction," Artificial Intelligence, vol. 159, pp. 241–259, 2004.
[18] G. Capi, "Robot task switching in complex environments," in Proc. IEEE/ASME International Conference on Advanced Intelligent Mechatronics, 2007, pp. 1–6.
[19] M. Ito, K. Noda, Y. Hoshino, and J. Tani, "Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model," Neural Networks, vol. 19, pp. 323–337, 2006.
[20] A. Finzi and F. Pirri, "Representing flexible temporal behaviors in the situation calculus," in Proc. of IJCAI, 2005.
[21] G. Capi, G. Pojani, and S.-I. Kaneko, "Evolution of task switching behaviors in real mobile robots," in Proc. 3rd International Conference on Innovative Computing, Information and Control (ICICIC '08), 2008, p. 495.
[22] S. Suzuki, T. Sasaki, and F. Harashima, "Visible classification of task-switching strategies in vehicle operation," in Proc. 18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2009), 2009, pp. 1161–1166.
[23] J. Wawerla and R. Vaughan, "Robot task switching under diminishing returns," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), 2009, pp. 5033–5038.
[24] A. Finzi and F. Pirri, "Switching tasks and flexible reasoning in the situation calculus," DIS, Sapienza Università di Roma, Tech. Rep. 7, 2010.
[25] K. Durkee, C. Shabarekh, C. Jackson, and G. Ganberg, "Flexible autonomous support to aid context and task switching," in Proc. IEEE First International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2011, pp. 204–207.
[26] D. D'Ambrosio, J. Lehman, S. Risi, and K. Stanley, "Task switching in multirobot learning through indirect encoding," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011, pp. 2802–2809.
[27] A. Finzi, F. Pirri, and R. Reiter, "Open world planning in the situation calculus," in Proc. of AAAI-2000. AAAI Press, 2000, pp. 754–760.
[28] F. Pirri and R. Reiter, "Some contributions to the metatheory of the situation calculus," Journal of the ACM, vol. 46, no. 3, pp. 325–361, 1999.
[29] S. Nieuwenhuis and S. Monsell, "Residual cost in task switching: Testing the failure to engage hypothesis," Psychonomic Bulletin and Review, vol. 9, no. 1, pp. 86–92, 2002.
[30] R. Rogers and S. Monsell, "Costs of a predictable switch between simple cognitive tasks," Journal of Experimental Psychology: General, vol. 124, pp. 207–231, 1995.
[31] I. Meiri, "Combining qualitative and quantitative constraints in temporal reasoning," J. Art. Intel., pp. 260–267, 1996.
[32] R. Dechter, I. Meiri, and J. Pearl, "Temporal constraint networks," Artificial Intelligence, vol. 49, no. 1-3, pp. 61–95, 1991.
[33] M. Vilain, H. Kautz, and P. van Beek, "Constraint propagation algorithms for temporal reasoning," in Readings in Qualitative Reasoning about Physical Systems. Morgan Kaufmann, 1986, pp. 377–382.
[34] P. Kvam and D. Day, "The multivariate Polya distribution in combat modeling," Naval Research Logistics, vol. 48, no. 1, pp. 1–17, 2001.
[35] J. R. Taylor, An Introduction to Error Analysis. University Science Books, 1997.