Team-triggered coordination for real-time control of networked cyber-physical systems

Cameron Nowzari    Jorge Cortés

Preliminary versions of this paper have appeared as [1] and [2]. The authors are with the Department of Mechanical and Aerospace Engineering, University of California, San Diego, California, USA, {cnowzari,cortes}@ucsd.edu
Abstract—This paper studies the real-time implementation of
distributed controllers on networked cyber-physical systems. We
build on the strengths of event- and self-triggered control to
synthesize a unified approach, termed team-triggered, where
agents make promises to one another about their future states
and are responsible for warning each other if they later decide
to break them. The information provided by these promises
allows individual agents to autonomously schedule information
requests in the future and sets the basis for maintaining desired
levels of performance at lower implementation cost. We establish
provably correct guarantees for the distributed strategies that
result from the proposed approach and examine their robustness
against delays, packet drops, and communication noise. The
results are illustrated in simulations of a multi-agent formation
control problem.
I. INTRODUCTION
A growing body of work studies the design and real-time
implementation of distributed controllers to ensure the efficient
and robust operation of networked cyber-physical systems. In
multi-agent scenarios, energy consumption is correlated with
the rate at which sensors take samples, processors recompute
control inputs, actuator signals are transmitted, and receivers
are left on listening for potential incoming signals. Performing
these tasks periodically is costly, might lead to inefficient
implementations, or face hard physical constraints. To address
these issues, the goal of triggered control is to identify criteria
that allow agents to tune the implementation of controllers and
sampling schemes to the execution of the task at hand and
the desired level of performance. In event-triggered control,
the focus is on detecting events during the network execution
that are relevant from the point of view of task completion
and should trigger specific agent actions. In self-triggered
control, the emphasis is instead on developing tests that rely
only on current information available to individual agents to
schedule future actions. Event-triggered strategies generally
result in fewer samples or controller updates but, when executed
over networked systems, may be costly to implement because
of the need for continuous availability of the information
required to check the triggers. Self-triggered strategies are
more easily amenable to distributed implementation but result
in conservative executions because of the overapproximation
by individual agents about the state of the environment and the
network. These strategies might also be beneficial in scenarios
where leaving receivers on to listen to potential messages is
costly. Our objective in this paper is to build on the strengths
of event- and self-triggered control to synthesize a unified
approach for controlling networked systems in real time that
combines the best of both worlds.
Literature review: The need for systems integration and
the importance of bridging the gap between computing, communication, and control in the study of cyber-physical systems cannot be overemphasized [3], [4]. Real-time controller
implementation is an area of extensive research including
periodic [5], [6], event-triggered [7], [8], [9], [10], and self-triggered [11], [12], [13] procedures. Our approach shares
with these works the aim of trading computation and decision
making for less communication, sensor, or actuator effort
while still guaranteeing a desired level of performance. Of
particular relevance to this paper are works that study self- and
event-triggered implementations of controllers for networked
cyber-physical systems. The predominant paradigm is that of
a single plant that is stabilized through a decentralized triggered controller over a sensor-actuator network, see e.g. [14],
[15], [16]. Fewer works have considered scenarios where
multiple plants or agents together are the subject of the
overall control design. Exceptions include consensus via event-triggered [17], [18], [19] or self-triggered control [17], [20], rendezvous [21], model predictive control [22], and model-based event-triggered control [23], [24]. The event-triggered
controller designed in [17] for a decentralized system with
multiple plants requires agents to have continuous information
about each others’ states. The works in [17], [25] implement
self-triggered communication schemes to perform distributed
control where agents assume worst-case conditions for other
agents when deciding when new information should be obtained. Distributed strategies based on event-triggered communication and control are explored in [26], where each agent has
an a priori computed local error tolerance and once it violates
it, the agent broadcasts its updated state to its neighbors. The
same event-triggered approach is taken in [27] to implement
gradient control laws that achieve distributed optimization. The
works [23], [28], [29] are closer in spirit to the ideas presented
here. In the interconnected system considered in [23], each
subsystem helps neighboring subsystems by monitoring their
estimates and ensuring that they stay within some performance
bounds. The approach requires different subsystems to have
synchronized estimates of one another even though they do
not communicate at all times. In [28], [29], agents do not
have continuous availability of information from neighbors and
instead decide when to broadcast new information to them.
Statement of contributions: We propose a novel scheme
for the real-time control of networked cyber-physical systems
that combines ideas from event- and self-triggered control.
Our approach is based on agents making promises to one
another about their future states and being responsible for
warning each other if they later decide to break them. This is
reminiscent of event-triggered implementations. Promises can
be broad, from tight state trajectories to loose descriptions of
reachability sets. With the information provided by promises,
individual agents can autonomously determine when in the
future fresh information will be needed to maintain a desired
level of performance. This is reminiscent of self-triggered
implementations. The benefits of the proposed scheme are
threefold. First, because of the availability of the promises,
agents do not require continuous state information about
neighbors, in contrast to event-triggered strategies implemented over distributed systems that require the continuous
availability of the information necessary to check the relevant
triggers. Second, because of the extra information provided
by promises about what other agents plan to do, agents can
generally wait longer periods of time before requesting new
information and operate more efficiently than if only worst-case scenarios are assumed, as is done in self-triggered control.
Less overall communication is beneficial in reducing the total
network load and decreasing chances of communication delays
or packet drops due to network congestion. Lastly, we provide
theoretical guarantees for the correctness and performance of
team-triggered strategies implemented over distributed networked systems. Our technical approach makes use of set-valued analysis, invariant sets, and Lyapunov stability. We
also show that, in the presence of physical sources of error
and under the assumption that 1-bit messages can be sent
reliably with negligible delay, the team-triggered approach
can be slightly modified to be robust to delays, packet drops,
and communication noise. Interestingly, the self-triggered approach can be seen as a particular case of the team-triggered
approach where promises among agents simply consist of their
reachability sets (and hence do not actually constrain their
state). We illustrate the convergence and robustness results
through simulation in a multi-agent formation control problem,
paying special attention to the implementation costs and the
role of the tightness of promises in the algorithm performance.
Organization: Section II lays out the problem of interest.
Section III briefly reviews current real-time implementation
approaches based on agent triggers. Section IV presents the
team-triggered approach for networked cyber-physical systems. Sections V and VI analyze the correctness and robustness, respectively, of team-triggered strategies. Simulations
illustrate our results in Section VII. Finally, Section VIII
gathers our conclusions and ideas for future work.
Notation: We let $\mathbb{R}$, $\mathbb{R}_{\geq 0}$, and $\mathbb{Z}_{\geq 0}$ denote the sets of real, nonnegative real, and nonnegative integer numbers, respectively. The two-norm of a vector is $\|\cdot\|_2$. Given $x \in \mathbb{R}^d$ and $\delta \in \mathbb{R}_{\geq 0}$, $B(x, \delta)$ denotes the closed ball centered at $x$ with radius $\delta$. For $A_i \in \mathbb{R}^{m_i \times n_i}$ with $i \in \{1, \dots, N\}$, we denote by $\mathrm{diag}(A_1, \dots, A_N) \in \mathbb{R}^{m \times n}$ the block-diagonal matrix with $A_1$ through $A_N$ on the diagonal, where $m = \sum_{i=1}^N m_i$ and $n = \sum_{i=1}^N n_i$. Given a set $S$, we denote by $|S|$ its cardinality. We let $P^c(S)$, respectively $P^{cc}(S)$, denote the collection of compact, respectively compact and connected, subsets of $S$. The Hausdorff distance between $S_1, S_2 \subset \mathbb{R}^d$ is
$$d_H(S_1, S_2) = \max\Big\{\sup_{x \in S_1} \inf_{y \in S_2} \|x - y\|_2,\; \sup_{y \in S_2} \inf_{x \in S_1} \|x - y\|_2\Big\}.$$
The Hausdorff distance is a metric on the set of all nonempty compact subsets of $\mathbb{R}^d$. Given two bounded set-valued functions $C_1, C_2 \in C^0(I \subset \mathbb{R}; P^c(\mathbb{R}^d))$, their distance is
$$d_{\text{func}}(C_1, C_2) = \sup_{t \in I} d_H(C_1(t), C_2(t)). \quad (1)$$
An undirected graph $G = (V, E)$ is a pair consisting of a set of vertices $V = \{1, \dots, N\}$ and a set of edges $E \subset V \times V$ such that if $(i, j) \in E$, then $(j, i) \in E$. The set of neighbors of a vertex $i$ is $N(i) = \{j \in V \mid (i, j) \in E\}$. Given $v \in \prod_{i=1}^N \mathbb{R}^{n_i}$, we let $v_N^i = (v_i, \{v_j\}_{j \in N(i)})$ denote the components of $v$ that correspond to vertex $i$ and its neighbors in $G$.
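When the sets involved are finite (e.g., sampled over-approximations), the Hausdorff distance and the induced distance (1) can be computed directly from their definitions. The following sketch (Python with NumPy assumed; the helper names are ours, not the paper's) illustrates both quantities for point sets sampled on a common time grid.

import numpy as np

def hausdorff(S1, S2):
    """Hausdorff distance between two finite point sets (one point per row)."""
    # Pairwise distances D[k, l] = ||S1[k] - S2[l]||_2.
    D = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=2)
    # Maximum of the two directed sup-inf distances.
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def dfunc(C1, C2):
    """Distance (1) between two set-valued functions, each represented
    as a list of finite point sets sampled on the same time grid."""
    return max(hausdorff(S1, S2) for S1, S2 in zip(C1, C2))

# Example: a two-point set and a copy shifted by 0.5 are at distance 0.5.
S1 = np.array([[0.0, 0.0], [1.0, 0.0]])
S2 = S1 + np.array([0.5, 0.0])
print(hausdorff(S1, S2))  # 0.5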
II. NETWORK MODELING AND PROBLEM STATEMENT
We consider a distributed control problem carried out
over an unreliable wireless network. Consider N agents
whose communication topology is described by an undirected
graph G. The fact that (i, j) belongs to E models the ability of
agents i and j to communicate with one another. The agents i
can communicate with are its neighbors N (i) in G. The state
of $i \in \{1, \dots, N\}$, denoted $x_i$, belongs to a closed set $X_i \subset \mathbb{R}^{n_i}$. The network state $x = (x_1, \dots, x_N)$ therefore belongs to $X = \prod_{i=1}^N X_i$. According to the discussion above, agent $i$
can access $x_N^i$ when it communicates with its neighbors. By
assumption, each agent has access to its own state at all times.
We consider linear dynamics for each $i \in \{1, \dots, N\}$,
$$\dot{x}_i = f_i(x_i, u_i) = A_i x_i + B_i u_i, \quad (2)$$
with $A_i \in \mathbb{R}^{n_i \times n_i}$, $B_i \in \mathbb{R}^{n_i \times m_i}$, and $u_i \in U_i$. Here, $U_i \subset \mathbb{R}^{m_i}$ is a closed set of allowable controls for agent $i$. We assume the existence of a safe-mode controller $u_i^{\text{sf}} : X_i \to U_i$ with
$$A_i x_i + B_i u_i^{\text{sf}}(x_i) = 0, \quad \text{for all } x_i \in X_i,$$
i.e., a controller able to keep agent $i$'s state fixed. The existence of a safe-mode controller for a general controlled system may seem restrictive, but there exist many cases, including nonlinear systems, that admit one, such as single integrators or vehicles with unicycle dynamics. Letting $u = (u_1, \dots, u_N) \in U = \prod_{i=1}^N U_i$, the dynamics can be described by
$$\dot{x} = A x + B u, \quad (3)$$
with $A = \mathrm{diag}(A_1, \dots, A_N) \in \mathbb{R}^{n \times n}$ and $B = \mathrm{diag}(B_1, \dots, B_N) \in \mathbb{R}^{n \times m}$, where $n = \sum_{i=1}^N n_i$ and $m = \sum_{i=1}^N m_i$. We refer to the team of agents with communication topology $G$ and dynamics (3), where each agent has a safe-mode controller and access to its own state at all times, as a networked cyber-physical system. The goal is to drive the agents' states to some desired closed set of configurations $D \subset X$ and ensure that they stay there. Depending on how $D$ is defined, this objective can capture different coordination tasks, including deployment, rendezvous, and formation control. The goal of the paper is not to design the controller that achieves this, but rather to synthesize efficient strategies for the real-time implementation of a given controller.
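As a concrete instance of the model (2)-(3), the sketch below (Python with NumPy/SciPy assumed; the numerical data are hypothetical) assembles the stacked network dynamics for a team of planar single integrators, for which the safe-mode controller is simply $u_i^{\text{sf}} \equiv 0$.

import numpy as np
from scipy.linalg import block_diag

N, d = 4, 2                      # four planar agents
A_i = np.zeros((d, d))           # single integrator: xdot_i = u_i
B_i = np.eye(d)

# Stacked network dynamics (3): xdot = A x + B u.
A = block_diag(*([A_i] * N))
B = block_diag(*([B_i] * N))

def u_sf(x_i):
    """Safe-mode controller: A_i x_i + B_i u_sf(x_i) = 0 for all x_i."""
    return np.zeros(d)           # identically zero for single integrators

x = np.random.rand(N * d)
u = np.concatenate([u_sf(x[i*d:(i+1)*d]) for i in range(N)])
assert np.allclose(A @ x + B @ u, 0.0)   # the network state stays fixed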
Given the agent dynamics, the communication graph $G$, and the set $D$, our starting point is the availability of a control law that drives the system asymptotically to $D$. Formally, we assume that there exist a continuous map $u^* : X \to U$ and a continuously differentiable function $V : X \to \mathbb{R}$, bounded from below, such that $D$ is the set of minimizers of $V$ and, for all $x \notin D$,
$$\nabla_i V(x) \left(A_i x_i + B_i u_i^*(x)\right) \leq 0, \quad i \in \{1, \dots, N\}, \quad (4a)$$
$$\sum_{i=1}^N \nabla_i V(x) \left(A_i x_i + B_i u_i^*(x)\right) < 0. \quad (4b)$$
We assume that both the control law $u^*$ and the gradient $\nabla V$ are distributed over $G$. By this we mean that, for each $i \in \{1, \dots, N\}$, the $i$th component of each of these objects only depends on $x_N^i$, rather than on the full network state $x$. For simplicity, and with a slight abuse of notation, we write $u_i^*(x_N^i) \in U_i$ and $\nabla_i V(x_N^i) \in \mathbb{R}^{n_i}$ to emphasize this fact when convenient. This property has the important consequence that agent $i$ can compute these quantities with the exact information it can obtain through communication on $G$.
Remark II.1 (Assumption on non-negative contribution of each agent to task completion) Note that (4b) simply states that $V$ is a Lyapunov function for the closed-loop system. Instead, (4a) is a more restrictive assumption that essentially states that no agent individually contributes in a negative way to the evolution of the Lyapunov function. This latter assumption can in turn be relaxed [14] by selecting parameters $\alpha_1, \dots, \alpha_N \in \mathbb{R}$ with $\sum_{i=1}^N \alpha_i = 0$ (note that some $\alpha_i$ would be positive and others negative) and specifying instead that, for each $i \in \{1, \dots, N\}$, the left-hand side of (4a) should be less than or equal to $\alpha_i$. Along these lines, one could envision the design of distributed mechanisms to dynamically adjust these parameters, but we do not go into details here for space reasons.
•
From an implementation viewpoint, the controller $u^*$ requires continuous agent-to-agent communication and continuous updates of the actuator signals, making it infeasible in practical scenarios. In the next section we develop a self-triggered communication and control strategy to address the issue of selecting time instants for information sharing.
III. SELF-TRIGGERED COMMUNICATION AND CONTROL
This section provides an overview of the self-triggered
communication and control approach to solve the problem
described in Section II. In doing so, we also introduce several
concepts that play an important role in our discussion later.
The general idea is to guarantee that the time derivative of the
Lyapunov function V along the trajectories of the networked
cyber-physical system (3) is less than or equal to 0 at all times,
even when the information used by the agents is inexact.
To model the case that agents do not have perfect information about each other at all times, we let each agent $i \in \{1, \dots, N\}$ keep an estimate $\hat{x}_j^i$ of the state of each of its neighbors $j \in N(i)$. Since $i$ always has access to its own state, $\hat{x}_N^i(t) = (x_i(t), \{\hat{x}_j^i(t)\}_{j \in N(i)})$ is the information available to agent $i$ at time $t$. Since agents do not have access to exact information at all times, they cannot implement the controller $u^*$ exactly, but instead use the feedback law
$$u_i^{\text{self}}(t) = u_i^*(\hat{x}_N^i(t)).$$
We are now interested in designing a triggering method such that agent $i$ can decide when $\hat{x}_N^i(t)$ needs to be updated. Let $t_{\text{last}}$ be the last time at which all agents have received information from their neighbors. Then, the time $t_{\text{next}}$ at which the estimates should be updated is when
$$\frac{d}{dt} V(x(t_{\text{next}})) = \sum_{i=1}^N \nabla_i V(x(t_{\text{next}})) \left(A_i x_i(t_{\text{next}}) + B_i u_i^{\text{self}}(t_{\text{last}})\right) = 0. \quad (5)$$
Unfortunately, (5) requires global information and cannot be checked in a distributed way. Instead, one can define a local event that determines when a single agent $i \in \{1, \dots, N\}$ should update its information as any time that
$$\nabla_i V(x(t)) \left(A_i x_i(t) + B_i u_i^{\text{self}}(t)\right) = 0. \quad (6)$$
As long as each agent $i$ can ensure the local event (6) has not yet occurred, it is guaranteed that (5) has not yet occurred either. The problem with this approach is that each agent $i \in \{1, \dots, N\}$ needs to have continuous access to information about the state of its neighbors $N(i)$ in order to evaluate $\nabla_i V(x) = \nabla_i V(x_N^i)$ and check condition (6). The self-triggered approach removes this requirement on continuous availability of information by having each agent employ instead possibly inexact information about the state of its neighbors. The notion of reachability set plays a key role in achieving this. Given $y \in X_i$, the reachable set of points under (2) starting from $y$ in $s$ seconds is
$$R_i(s, y) = \Big\{z \in X_i \;\Big|\; \exists\, u_i : [0, s] \to U_i \text{ such that } z = e^{A_i s} y + \int_0^s e^{A_i(s-\tau)} B_i u_i(\tau)\, d\tau\Big\}.$$
Using this notion, if agents have exact knowledge of the dynamics and control sets of their neighboring agents (but not their controllers), each agent can construct, each time state information is received, sets that are guaranteed to contain their neighbors' states.
Definition III.1 (Guaranteed sets) If $t_{\text{last}}^i$ is the time at which agent $i$ receives state information $x_j(t_{\text{last}}^i)$ from its neighbor $j \in N(i)$, then the guaranteed set is given by
$$X_j^i(t, t_{\text{last}}^i, x_j(t_{\text{last}}^i)) = R_j(t - t_{\text{last}}^i, x_j(t_{\text{last}}^i)) \subset X_j, \quad (7)$$
and is guaranteed to contain $x_j(t)$ for $t \geq t_{\text{last}}^i$.

We let $X_j^i(t) = X_j^i(t, t_{\text{last}}^i, x_j(t_{\text{last}}^i))$ when the starting state $x_j(t_{\text{last}}^i)$ and time $t_{\text{last}}^i$ do not need to be emphasized. We denote by $X_N^i(t) = (x_i(t), \{X_j^i(t)\}_{j \in N(i)})$ the information available to agent $i$ at time $t$.
Remark III.2 (Computing reachable sets) Finding the guaranteed or reachable sets (7) can in general be computationally expensive. A common approach consists of computing over-approximations of the actual reachable set via convex polytopes or ellipsoids. There exist efficient algorithms to calculate and store these for various classes of systems, see e.g., [30], [31]. Furthermore, agents can deal with situations where they do not have exact knowledge of the dynamics of their neighbors (so that the guaranteed sets cannot be computed exactly) by employing over-approximations of the actual guaranteed sets.
•
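As a minimal illustration of such over-approximations, the following sketch (NumPy/SciPy assumed; the function name is ours) bounds the reachable set by a ball around the unforced evolution: every $z \in R_i(s, y)$ satisfies $\|z - e^{A_i s} y\|_2 \leq u_{\max} \int_0^s \|e^{A_i(s-\tau)} B_i\|_2\, d\tau$ whenever $\|u_i\|_2 \leq u_{\max}$ on $U_i$.

import numpy as np
from scipy.linalg import expm

def reach_ball(A, B, y, s, u_max, steps=200):
    """Over-approximate the reachable set R(s, y) by a ball:
    returns (center, radius) with R(s, y) contained in B(center, radius)."""
    center = expm(A * s) @ y
    # radius = u_max * integral of the induced 2-norm (trapezoidal rule)
    taus = np.linspace(0.0, s, steps)
    norms = [np.linalg.norm(expm(A * (s - tau)) @ B, 2) for tau in taus]
    radius = u_max * np.trapz(norms, taus)
    return center, radius

# Double integrator: the ball radius grows with the horizon s.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
center, radius = reach_ball(A, B, y=np.array([1.0, 0.0]), s=0.5, u_max=1.0)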
With the guaranteed sets in place, we can now provide a test that allows agents to determine when they should update their current information and control signals. At time $t_{\text{last}}^i$, agent $i$ computes the next time $t_{\text{next}}^i \geq t_{\text{last}}^i$ to acquire information via
$$\sup_{y_N \in X_N^i(t_{\text{next}}^i)} \nabla_i V(y_N) \left(A_i x_i(t_{\text{next}}^i) + B_i u_i^{\text{self}}(t_{\text{next}}^i)\right) = 0. \quad (8)$$
By (4a) and the fact that $X_j^i(t_{\text{last}}^i) = \{x_j(t_{\text{last}}^i)\}$, at time $t_{\text{last}}^i$,
$$\sup_{y_N \in X_N^i(t_{\text{last}}^i)} \nabla_i V(y_N) \left(A_i x_i(t_{\text{last}}^i) + B_i u_i^{\text{self}}(t_{\text{last}}^i)\right) = \nabla_i V(x_N^i(t_{\text{last}}^i)) \left(A_i x_i(t_{\text{last}}^i) + B_i u_i^{\text{self}}(t_{\text{last}}^i)\right) \leq 0.$$
If all agents use this triggering criterion for updating information, it is guaranteed that $\frac{d}{dt} V(x(t)) \leq 0$ at all times because, for each $i \in \{1, \dots, N\}$, the true state $x_j(t)$ is guaranteed to be in $X_j^i(t)$ for all $j \in N(i)$ and $t \geq t_{\text{last}}^i$.

The condition (8) is appealing because it can be evaluated by agent $i$ with the information it possesses at time $t_{\text{last}}^i$. Once determined, agent $i$ schedules that, at time $t_{\text{next}}^i$, it will request updated information from its neighbors. We refer to $t_{\text{next}}^i - t_{\text{last}}^i$ as the self-triggered request time for agent $i$. Due to the conservative way in which $t_{\text{next}}^i$ is determined, it is possible that $t_{\text{next}}^i = t_{\text{last}}^i$ for some $i$, which would mean that instantaneous information updates are necessary (note that this cannot happen for all $i \in \{1, \dots, N\}$ unless the network state is already in $D$). This can be dealt with by introducing a dwell time such that a minimum amount of time must pass before an agent can request new information, and using the safe-mode controller while waiting for the new information. We do not enter into details here and defer the discussion to Section IV-C.

The problem with the self-triggered approach is that the resulting times are often conservative because the guaranteed sets can grow large quickly as they capture all possible trajectories of neighboring agents. It is conceivable that improvements can be made by tuning the guaranteed sets based on what neighboring agents plan to do rather than what they can do. This observation is at the core of the team-triggered approach proposed next.
IV. TEAM-TRIGGERED COORDINATION

This section presents the team-triggered approach for the real-time implementation of distributed controllers on networked cyber-physical systems. The team-triggered approach incorporates the reactive nature of event-triggered approaches and, at the same time, endows individual agents with the autonomy characteristic of self-triggered approaches to determine when and what information is needed. Agents make promises to their neighbors about their future states and inform them if these promises are violated later (hence the connection with event-triggered control). With the extra information provided by the availability of the promises, each agent computes the next time that an update is required and requests information from its neighbors accordingly to guarantee the monotonicity of the Lyapunov function $V$ introduced in Section II (hence the connection with self-triggered control).

A. Promises
A promise can be either a time-varying set of states (state
promise) or controls (control promise) that an agent sends to
another agent.
Definition IV.1 (State promises and rules) A state promise that agent $j \in \{1, \dots, N\}$ makes to agent $i$ at time $t$ is a set-valued, continuous (with respect to the Hausdorff distance) function $X_j^i[t] \in C^0([t, \infty); P^{cc}(X_j))$. A state promise rule for agent $j \in \{1, \dots, N\}$ generated at time $t$ is a continuous (with respect to the distance $d_{\text{func}}$ defined in (1)) map of the form $R_j^s : C^0\big([t, \infty); \prod_{i \in N(j) \cup \{j\}} P^{cc}(X_i)\big) \to C^0([t, \infty); P^{cc}(X_j))$.
The notation $X_j^i[t](t')$ conveys the promise $x_j(t') \in X_j^i[t](t')$ that agent $j$ makes at time $t$ to agent $i$ about time $t' \geq t$. A state promise rule is simply a way of generating state promises. This means that if agent $j$ must send information to agent $i$ at time $t$, it sends the state promise $X_j^i[t] = R_j^s(X_N^j[\cdot]|_{[t,\infty)})$. We require that, in the absence of communication delays or noise in the state measurements, the promises generated by a rule have the property that $X_j^i[t](t) = \{x_j(t)\}$. For simplicity, when the time at which a promise is received is not relevant, we use the notation $X_j^i[\cdot]$, or simply $X_j^i$. All promise information available to agent $i \in \{1, \dots, N\}$ at some time $t$ is given by $X_N^i[\cdot]|_{[t,\infty)} = (x_i|_{[t,\infty)}, \{X_j^i[\cdot]|_{[t,\infty)}\}_{j \in N(i)}) \in C^0\big([t, \infty); \prod_{j \in N(i) \cup \{i\}} P^{cc}(X_j)\big)$. To extract information from this about a specific time $t'$, we use $X_N^i[\cdot](t')$ or simply $X_N^i(t') = (x_i(t'), \{X_j^i[\cdot](t')\}_{j \in N(i)}) \in \prod_{j \in N(i) \cup \{i\}} P^{cc}(X_j)$. The generality of the above definitions allows promise sets to be arbitrarily complex, but we restrict ourselves to promise sets that can be described with a finite number of parameters.
Remark IV.2 (Example promise and rule) As an alternative to directly sending state promises, agents can share promises based on their controls rather than their states. The notation $U_j^i[t](t')$ conveys the promise $u_j(t') \in U_j^i[t](t')$ that agent $j$ makes at time $t$ to agent $i$ about time $t' \geq t$. Given the dynamics of agent $j$ and state $x_j(t)$ at time $t$, agent $i$ can compute the state promise for $t' \geq t$,
$$X_j^i[t](t') = \Big\{z \in X_j \;\Big|\; \exists\, u_j : [t, t'] \to U_j \text{ with } u_j(s) \in U_j^i[t](s) \text{ for } s \in [t, t'] \text{ such that } z = e^{A_j(t'-t)} x_j(t) + \int_t^{t'} e^{A_j(t'-\tau)} B_j u_j(\tau)\, d\tau\Big\}. \quad (9)$$
As an example, given $j \in \{1, \dots, N\}$, a continuous control law $u_j : \prod_{i \in N(j) \cup \{j\}} P^{cc}(X_i) \to U_j$, and $\delta_j > 0$, the ball-radius control promise rule for agent $j$ generated at time $t$ is
$$R_j^{cb}(X_N^j[\cdot]|_{[t,\infty)})(t') = B(u_j(X_N^j(t)), \delta_j) \cap U_j, \quad t' \geq t. \quad (10)$$
Note that this promise is a ball of radius $\delta_j$ in the control space $U_j$ centered at the control signal used at time $t$. Depending on whether $\delta_j$ is constant or changes with time, we refer to it as the static or dynamic ball-radius rule, respectively. The promise can be sent with three parameters: the state $x_j(t)$ when the promise was sent, the control action $u_j(X_N^j(t))$ at that time, and the radius $\delta_j$ of the ball. The state promise can then be generated using (9).
•
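For a scalar single integrator ($\dot{x}_j = u_j$), the static ball-radius rule and the induced state promise (9) reduce to intervals, which makes the three-parameter encoding easy to visualize. A minimal sketch under these assumptions (class and method names hypothetical):

from dataclasses import dataclass

@dataclass
class BallPromise:
    """Static ball-radius promise for a scalar single integrator,
    encoded by the three transmitted parameters of Remark IV.2."""
    t0: float     # time at which the promise was made
    y: float      # state x_j(t0) when the promise was sent
    v: float      # control action used at t0
    delta: float  # ball radius in control space

    def state_promise(self, t):
        """State promise (9): an interval, since the promised controls
        satisfy u_j(s) in [v - delta, v + delta] for s in [t0, t]."""
        dt = t - self.t0
        return (self.y + (self.v - self.delta) * dt,
                self.y + (self.v + self.delta) * dt)

    def is_kept(self, t, x_j):
        """Does the true state still satisfy the promise at time t?"""
        lo, hi = self.state_promise(t)
        return lo <= x_j <= hi

p = BallPromise(t0=0.0, y=1.0, v=-0.5, delta=0.1)
print(p.state_promise(1.0))   # (0.4, 0.6)
print(p.is_kept(1.0, 0.55))   # True: the promise has not been broken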
Promises allow agents to predict the evolution of their
neighbors more accurately, which directly affects the network
behavior. In general, tight promises correspond to agents having good information about their neighbors, which at the same
time may result in an increased communication effort (since
the promises cannot be kept for long periods of time). On
the other hand, loose promises correspond to agents having to
use more conservative controls due to the lack of information,
while at the same time potentially being able to operate
for longer periods of time without communicating (because
promises are not violated).
The availability of promises equips agents with set-valued
information models about the state of other agents. This fact
makes it necessary to address the definition of distributed
controllers that operate on sets, rather than points. We discuss
this in Section IV-B. The additional information that promises
represent is beneficial to the agents because it decreases the
amount of uncertainty when making action plans. Section IV-C
discusses this in detail. Finally, these advantages rely on the
assumption that promises hold throughout the evolution. As
the state of the network changes and the level of task completion evolves, agents might decide to break former promises
and make new ones. We examine this in Section IV-D.
B. Controllers on set-valued information models
Here we discuss the type of controllers that the team-triggered approach relies on. The underlying idea is that,
since agents possess set-valued information about the state of
other agents through promises, controllers themselves should
be defined on sets, rather than on points. There are different
ways of designing controllers that operate with set-valued
information depending on the type of system, its dynamics, or
the desired task, see e.g., [32]. For the problem of interest here,
we offer the following possible goals. One may be interested
in simply decreasing the value of a Lyapunov function as fast
as possible, at the cost of more communication or sensing.
Alternatively, one may be interested in choosing the stabilizing
controller such that the amount of required information is
minimal at a cost of slower convergence time. We consider
continuous (with respect to the Hausdorff distance) controllers
of the form $u^{**} : \prod_{j \in \{1,\dots,N\}} P^{cc}(X_j) \to \mathbb{R}^m$ that satisfy
$$\nabla_i V(x) \left(A_i x_i + B_i u_i^{**}(\{x\})\right) \leq 0, \quad i \in \{1, \dots, N\}, \quad (11a)$$
$$\sum_{i=1}^N \nabla_i V(x) \left(A_i x_i + B_i u_i^{**}(\{x\})\right) < 0. \quad (11b)$$
In other words, if exact, singleton-valued information is available to the agents, then the controller $u^{**}$ guarantees the monotonic evolution of the Lyapunov function $V$. We assume that
$u^{**}$ is distributed over the communication graph $G$. As before, this means that, for each $i \in \{1, \dots, N\}$, the $i$th component $u_i^{**}$ can be computed with information in $\prod_{j \in N(i) \cup \{i\}} P^{cc}(X_j)$ rather than in the full space $\prod_{j \in \{1,\dots,N\}} P^{cc}(X_j)$.
Controllers of the above form can be derived from the availability of the controller $u^* : X \to U$ introduced in Section II. Specifically, let $E : \prod_{j=1}^N P^{cc}(X_j) \to X$ be a continuous map that is distributed over $G$ and satisfies, for each $i \in \{1, \dots, N\}$, that $E_i(Y) \in Y_i$ for each $Y \in \prod_{j=1}^N P^{cc}(X_j)$ and $E_i(\{y\}) = y_i$ for each $y \in X$. Essentially, what the map $E$ does for each agent is select a point from the set-valued information that it possesses. Now, define
$$u^{**}(Y) = u^*(E(Y)). \quad (12)$$
Note that this controller satisfies (11a) and (11b) because $u^*$ satisfies (4a) and (4b).
Example IV.3 (Controller definition with the ball-radius promise rules) Here we construct a controller $u^{**}$ using (12) for the case when promises are generated according to the ball-radius control rule described in Remark IV.2. To do so, note that it is sufficient to define the map $E : \prod_{j=1}^N P^{cc}(X_j) \to X$ only for tuples of sets of the form given in (9), where the corresponding control promise is defined by (10). With the notation of Remark IV.2, recall that the promise that an agent $j$ sends at time $t$ is conveyed through three parameters $(y_j, v_j, \delta_j)$: the state $y_j = x_j(t)$ when the promise was sent, the control action $v_j = u_j(X_N^j(t))$ at that time, and the radius $\delta_j$ of the ball. We can then define the $j$th component of the map $E$ as
$$E_j(X_1[t](t'), \dots, X_N[t](t')) = e^{A_j(t'-t)} y_j + \int_t^{t'} e^{A_j(t'-\tau)} B_j v_j\, d\tau,$$
which is guaranteed to be in $X_j[t](t')$ for $t' \geq t$. This specification amounts to each agent $i$ calculating the evolution of its neighbors $j \in N(i)$ as if they were using a zero-order hold control.
•
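A sketch of this construction (NumPy/SciPy assumed; the function names are ours): given the transmitted parameters $(y_j, v_j, \delta_j)$, agent $i$ predicts each neighbor with a zero-order hold and evaluates $u^{**} = u^*(E(Y))$ on the selected points.

import numpy as np
from scipy.linalg import expm

def E_j(A_j, B_j, y_j, v_j, t0, t, steps=200):
    """Point selection E_j: zero-order-hold prediction of neighbor j,
    guaranteed to lie in the promise set generated by (9)-(10)."""
    taus = np.linspace(t0, t, steps)
    # z(t) = e^{A(t-t0)} y + integral of e^{A(t-tau)} B v dtau
    integrand = np.array([expm(A_j * (t - tau)) @ B_j @ v_j for tau in taus])
    return expm(A_j * (t - t0)) @ y_j + np.trapz(integrand, taus, axis=0)

def u_star_star(u_star, promise_params, t):
    """Controller on sets (12): evaluate u* at the points selected by E.
    promise_params holds one tuple (A_j, B_j, y_j, v_j, t0) per agent."""
    points = [E_j(*params, t=t) for params in promise_params]
    return u_star(points)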
C. Self-triggered information updates
Here we discuss how agents use the promises received from other agents to generate self-triggered information requests in the future. Let $t_{\text{last}}^i$ be some time at which agent $i$ receives updated information (i.e., promises) from its neighbors. Until the next time information is obtained, agent $i$ has access to the collection of functions $X_N^i$ describing its neighbors' states and can compute its own evolution under the controller $u^{**}$ via
$$x_i(t) = e^{A_i(t - t_{\text{last}}^i)} x_i(t_{\text{last}}^i) + \int_{t_{\text{last}}^i}^t e^{A_i(t-\tau)} B_i u_i^{**}(X_N^i(\tau))\, d\tau, \quad t \geq t_{\text{last}}^i. \quad (13)$$
Note that this evolution of agent $i$ can be viewed as a promise that it makes to itself, i.e., $X_i^i[\cdot](t) = \{x_i(t)\}$. With this in place, $i$ can schedule the next time $t_{\text{next}}^i$ at which it will need updated information from its neighbors by computing the worst-case time evolution of $V$ along its trajectory among all the possible evolutions of its neighbors given the information contained in their promises. Formally, we define, for $Y_N \in \prod_{j \in N(i) \cup \{i\}} P^{cc}(X_j)$,
$$L_i V^{\sup}(Y_N) = \sup_{y_N \in Y_N} \nabla_i V(y_N) \left(A_i y_i + B_i u_i^{**}(Y_N)\right), \quad (14)$$
where $y_i$ is the element of $y_N$ corresponding to $i$. Then, the trigger for when agent $i$ needs new information from its neighbors is similar to (8), where we now use the promise sets instead of the guaranteed sets. Specifically, the critical time at which information is requested is given by $t_{\text{next}}^i = \max\{t_{\text{last}}^i + T_{d,\text{self}}, t^*\}$, where $T_{d,\text{self}} > 0$ is an a priori chosen parameter that we discuss below and $t^*$ is implicitly defined by
$$t^* = \min\{t \geq t_{\text{last}}^i \mid L_i V^{\sup}(X_N^i(t)) = 0\}. \quad (15)$$
This ensures that, for $t \in [t_{\text{last}}^i, t^*)$, agent $i$ is guaranteed to be contributing positively to the desired task. We refer to $t_{\text{next}}^i - t_{\text{last}}^i$ as the self-triggered request time. The parameter $T_{d,\text{self}} > 0$ is the self-triggered dwell time. We introduce it because, in general, it is possible that $t^* = t_{\text{last}}^i$, implying that instantaneous communication is required. The dwell time is used to prevent this behavior as follows. Note that $L_i V^{\sup}(X_N^i(t')) \leq 0$ is only guaranteed while $t' \in [t_{\text{last}}^i, t^*]$. Therefore, in case $t_{\text{next}}^i = t_{\text{last}}^i + T_{d,\text{self}}$, i.e., if $t^* \leq t_{\text{last}}^i + T_{d,\text{self}}$, agent $i$ uses the safe-mode control during $t' \in (t^*, t_{\text{last}}^i + T_{d,\text{self}}]$ to leave its state fixed. This design ensures the monotonic evolution of $V$ along the network execution. The team-triggered controller is defined by
$$u_i^{\text{team}}(t) = \begin{cases} u_i^{**}(X_N^i(t)), & \text{if } t \leq t^*, \\ u_i^{\text{sf}}(x_i(t)), & \text{if } t > t^*, \end{cases} \quad (16)$$
for $t \in [t_{\text{last}}^i, t_{\text{next}}^i)$, where $t^*$ is given by (15). Note that the self-triggered dwell time $T_{d,\text{self}}$ only limits the frequency at which an agent $i$ can request information from its neighbors and does not provide guarantees on the inter-event times at which its memory is updated or its control is recomputed. If a neighboring agent sends information to agent $i$ before this dwell time has expired (because that agent has broken a promise), this triggers agent $i$ to update its memory and potentially recompute its control law.
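In an implementation, the implicitly defined time $t^*$ in (15) can be approximated by forward time stepping, evaluating the worst-case derivative (14) over sampled points of the promise sets. A schematic sketch (all callables are hypothetical placeholders):

def LiV_sup(grad_Vi, f_i, u_i, samples):
    """Worst-case derivative (14), approximated by maximizing over a
    finite sample of points y_N drawn from the promise sets."""
    return max(grad_Vi(y_N) @ f_i(y_N, u_i) for y_N in samples)

def next_request_time(t_last, Td_self, sample_promises, grad_Vi, f_i,
                      u_ss, dt=1e-2, horizon=10.0):
    """Self-triggered request time t_next = max{t_last + Td_self, t*},
    with t* from (15) found by stepping time forward until the
    worst-case derivative reaches zero."""
    t = t_last
    while t < t_last + horizon:
        samples = sample_promises(t)          # sampled points of X_N^i(t)
        if LiV_sup(grad_Vi, f_i, u_ss(samples), samples) >= 0.0:
            break                             # t* reached
        t += dt
    return max(t_last + Td_self, t)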
D. Event-triggered information updates
Agent promises may need to be broken for a variety of
reasons. For instance, an agent might receive new information
from its neighbors, causing it to change its former plans.
Another example is given by an agent that made a promise that it is not able to keep for as long as it anticipated. Consider an agent $i \in \{1, \dots, N\}$ that has sent a promise $X_i^j[t_{\text{last}}]$ to a neighboring agent $j$ at some time $t_{\text{last}}$. If agent $i$ ends up breaking its promise at time $t^* \geq t_{\text{last}}$, i.e., $x_i(t^*) \notin X_i^j[t_{\text{last}}](t^*)$, then it is responsible for sending a new promise $X_i^j[t_{\text{next}}]$ to agent $j$ at time $t_{\text{next}} = \max\{t_{\text{last}} + T_{d,\text{event}}, t^*\}$, where $T_{d,\text{event}} > 0$ is an a priori chosen parameter that we discuss below. This implies that agent $i$ must keep track of the promises it has made to its neighbors and monitor them in case they are broken. Note that this mechanism is implementable because each agent only needs information about its own state and the promises it has made to determine whether the trigger is satisfied.
The parameter $T_{d,\text{event}} > 0$ is known as the event-triggered dwell time. We introduce it because, in general, the time $t^* - t_{\text{last}}$ between when agent $i$ makes and breaks a promise to an agent $j$ might be arbitrarily small. The issue, however, is that if $t^* < t_{\text{last}} + T_{d,\text{event}}$, agent $j$ operates under incorrect information about agent $i$ for $t \in [t^*, t_{\text{last}} + T_{d,\text{event}})$. We deal with this by introducing a warning message WARN that agent $i$ must send to agent $j$ when it breaks its promise at time $t^* < t_{\text{last}} + T_{d,\text{event}}$. If agent $j$ receives such a warning message, it redefines the promise $X_i^j$ using the guaranteed sets (7) as follows,
$$X_i^j[\cdot](t) = \bigcup_{x_i \in X_i^j[\cdot](t^*)} X_i^j(t, t^*, x_i) = \bigcup_{x_i \in X_i^j[\cdot](t^*)} R_i(t - t^*, x_i) \quad (17)$$
for $t \geq t^*$, until the new message arrives at time $t_{\text{next}} = t_{\text{last}} + T_{d,\text{event}}$. By definition of the reachable set, the promise $X_i^j[\cdot](t)$ is guaranteed to contain $x_i(t)$ for $t \geq t^*$.
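The following sketch shows both sides of this mechanism in the interval setting of the earlier single-integrator promise sketch (all names hypothetical): the sender monitors the promises it has made and issues WARN when one breaks before the dwell time has elapsed, while the receiver falls back on the reachable-set promise (17) until the fresh promise arrives.

def monitor_promises(t, x_i, sent_promises, Td_event, send, new_promise):
    """Event trigger run by agent i over the promises it has made;
    sent_promises[j] = (t_last, promise), send(j, msg) transmits, and
    new_promise(t) builds a fresh promise valid from time t."""
    for j, (t_last, promise) in sent_promises.items():
        if not promise.is_kept(t, x_i):        # promise to j broken
            if t < t_last + Td_event:
                send(j, "WARN")                # warn now; the fresh promise
                # is scheduled for time t_last + Td_event
            else:
                send(j, new_promise(t))        # send a fresh promise at once

def fallback_promise(broken, t_star, reach):
    """Receiver side of WARN, cf. (17): grow the last trusted promise set
    by the reachable sets. Sets are intervals (lo, hi), and reach(s, x)
    returns the interval reachable from x in s seconds."""
    lo0, hi0 = broken.state_promise(t_star)
    def promise(t):
        lo, _ = reach(t - t_star, lo0)
        _, hi = reach(t - t_star, hi0)
        return (lo, hi)          # interval hull of the union in (17)
    return promise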
Remark IV.4 (Promise expiration times) It is also possible to set an expiration time $T_{\text{exp}} > T_{d,\text{event}}$ for the validity of promises. If this is in effect and a promise is made at $t_{\text{last}}$, it is only valid for $t \in [t_{\text{last}}, t_{\text{last}} + T_{\text{exp}}]$. The expiration of the promise triggers the formulation of a new one.
•
The combination of the self- and event-triggered information updates described above together with the team-triggered
controller $u^{\text{team}}$ as defined in (16) gives rise to the TEAM-TRIGGERED LAW, which is formally presented in Algorithm 1.
The self-triggered information request in Algorithm 1 is
executed by an agent anytime new information is received,
whether it was actively requested by the agent, or was received
from some neighbor due to the breaking of a promise.
Algorithm 1: TEAM-TRIGGERED LAW

(Self-trigger information update)
At any time $t$ agent $i \in \{1, \dots, N\}$ receives new promise(s) $X_j^i[t]$ from neighbor(s) $j \in N(i)$, agent $i$ performs:
1: compute own control $u_i^{\text{team}}(t')$ for $t' \geq t$ using (16)
2: compute own state evolution $x_i(t')$ for $t' \geq t$ using (13)
3: compute first time $t^* \geq t$ such that $L_i V^{\sup}(X_N^i(t^*)) = 0$
4: schedule information request to neighbors in $\max\{t^* - t, T_{d,\text{self}}\}$ seconds

(Respond to information request)
At any time $t$ a neighbor $j \in N(i)$ requests information, agent $i$ performs:
1: send new promise $X_i^j[t] = R_i^s(X_N^i[\cdot]|_{[t,\infty)})$ to agent $j$

(Event-trigger information update)
At all times $t$, agent $i$ performs:
1: if there exists $j \in N(i)$ such that $x_i(t) \notin X_i^j[\cdot](t)$ then
2:   if agent $i$ has sent a promise to $j$ at some time $t_{\text{last}} \in (t - T_{d,\text{event}}, t]$ then
3:     send warning message WARN to agent $j$ at time $t$
4:     schedule to send new promise $X_i^j[t_{\text{last}} + T_{d,\text{event}}] = R_i^s(X_N^i[\cdot]|_{[t_{\text{last}} + T_{d,\text{event}},\infty)})$ to agent $j$ in $t_{\text{last}} + T_{d,\text{event}} - t$ seconds
5:   else
6:     send new promise $X_i^j[t] = R_i^s(X_N^i[\cdot]|_{[t,\infty)})$ to agent $j$ at time $t$
7:   end if
8: end if

(Respond to warning message)
At any time $t$ agent $i \in \{1, \dots, N\}$ receives a warning message WARN from agent $j \in N(i)$, agent $i$ performs:
1: redefine promise set $X_j^i[\cdot](t') = \cup_{x_j \in X_j^i[\cdot](t)} R_j(t' - t, x_j)$ for $t' \geq t$

V. CONVERGENCE OF THE TEAM-TRIGGERED LAW

Here we analyze the convergence properties of the TEAM-TRIGGERED LAW. Our first result establishes the monotonic evolution of the Lyapunov function $V$ along the network trajectories.

Proposition V.1 Consider a networked cyber-physical system as described in Section II executing the TEAM-TRIGGERED LAW (cf. Algorithm 1) based on a continuous controller $u^{**} : \prod_{j \in \{1,\dots,N\}} P^{cc}(X_j) \to \mathbb{R}^m$ that satisfies (11) and is distributed over the communication graph $G$. Then, the function $V$ is monotonically nonincreasing along any network trajectory.
Proof: We start by noting that the time evolution of $V$ under Algorithm 1 is continuous and piecewise continuously differentiable. Moreover, at the time instants when the time derivative is well-defined, one has
$$\frac{d}{dt} V(x(t)) = \sum_{i=1}^N \nabla_i V(x_N^i(t)) \left(A_i x_i(t) + B_i u_i^{\text{team}}(t)\right) \quad (18)$$
$$\leq \sum_{i=1}^N \sup_{y_N \in X_N^i(t)} \nabla_i V(y_N) \left(A_i x_i(t) + B_i u_i^{\text{team}}(t)\right) \leq 0.$$
As we justify next, the last inequality follows by design of the TEAM-TRIGGERED LAW. For each $i \in \{1, \dots, N\}$, if $L_i V^{\sup}(X_N^i(t)) \leq 0$, then $u_i^{\text{team}}(t) = u_i^{**}(X_N^i(t))$ (cf. (16)). In this case the corresponding summand of (18) is exactly $L_i V^{\sup}(X_N^i(t))$, as defined in (14). If $L_i V^{\sup}(X_N^i(t)) > 0$, then $u_i^{\text{team}}(t) = u_i^{\text{sf}}(x_i(t))$, for which the corresponding summand of (18) is exactly 0.
The next result characterizes the convergence properties of team-triggered coordination strategies.

Proposition V.2 Consider a networked cyber-physical system as described in Section II executing the TEAM-TRIGGERED LAW (cf. Algorithm 1) with dwell times $T_{d,\text{self}}, T_{d,\text{event}} > 0$ based on a continuous controller $u^{**} : \prod_{j \in \{1,\dots,N\}} P^{cc}(X_j) \to \mathbb{R}^m$ that satisfies (11) and is distributed over the communication graph $G$. Then, any bounded network trajectory with uniformly bounded promises asymptotically approaches the desired set $D$.

The requirement of uniformly bounded promises in Proposition V.2 means that there exists a compact set that contains all promise sets. Note that this is automatically guaranteed if the network state space is compact. Alternatively, if the sets of allowable controls are bounded, a bounded network trajectory with expiration times for promises implemented as outlined in Remark IV.4 would result in uniformly bounded promises. There are two main challenges in proving Proposition V.2, which we discuss next.

The first challenge is that agents operate asynchronously, i.e., agents receive and send information, and update their control laws, possibly at different times. To model asynchronism, we use a procedure called analytic synchronization, see e.g. [33]. Let the time schedule of agent $i$ be given by $T^i = \{t_0^i, t_1^i, \dots\}$, where $t_\ell^i$ corresponds to the $\ell$th time that agent $i$ receives information from one or more of its neighbors (the time schedule $T^i$ is not known a priori by the agent). Note that this information can be received because $i$ requests it itself, or because a neighbor sends it to $i$ when an event is triggered. Analytic synchronization simply consists of merging together the individual time schedules into a global time schedule $T = \{t_0, t_1, \dots\}$ by setting $T = \cup_{i=1}^N T^i$.
Note that more than one agent may receive information at
any given time $t \in T$. This synchronization is done for analysis purposes only. For convenience, we identify $\mathbb{Z}_{\geq 0}$ with $T$ via $\ell \mapsto t_\ell$.
The second challenge is that a strategy resulting from the team-triggered approach has a discontinuous dependence on the network state and the agent promises. More precisely, the information possessed by any given agent consists of trajectories of sets for each of its neighbors, i.e., promises. For convenience, we denote by
$$S = \prod_{i=1}^N S_i, \quad \text{where} \quad S_i = C^0\big(\mathbb{R};\, P^{cc}(X_1) \times \dots \times P^{cc}(X_{i-1}) \times X_i \times P^{cc}(X_{i+1}) \times \dots \times P^{cc}(X_N)\big),$$
the space in which the state of the entire network lives. Note that this set allows us to capture the fact that each agent $i$ has perfect information about itself, as described in Section II. Although agents only have information about their neighbors, the above space considers agents having promise information about all other agents to facilitate the analysis. This is only done to allow for a simpler technical presentation, and does not impact the validity of the arguments made here. The information possessed by all agents of the network at some time $t$ is collected in
$$\big(X^1[\cdot]|_{[t,\infty)}, \dots, X^N[\cdot]|_{[t,\infty)}\big) \in S,$$
where $X^i[\cdot]|_{[t,\infty)} = \big(X_1^i[\cdot]|_{[t,\infty)}, \dots, X_N^i[\cdot]|_{[t,\infty)}\big) \in S_i$. Here, $[\cdot]$ is shorthand notation to denote the fact that promises might have been made at different times, earlier than $t$. The TEAM-TRIGGERED LAW corresponds to a discontinuous map of the form $S \times \mathbb{Z}_{\geq 0} \to S \times \mathbb{Z}_{\geq 0}$. This fact makes
it difficult to use standard stability methods to analyze the
convergence properties of the network. Our approach to this
problem consists of defining a discrete-time set-valued map
$M : S \times \mathbb{Z}_{\geq 0} \rightrightarrows S \times \mathbb{Z}_{\geq 0}$ whose trajectories contain the trajectories of the TEAM-TRIGGERED LAW. Although this
‘overapproximation procedure’ enlarges the set of trajectories
to consider, the gained benefit is that of having a set-valued
map with suitable continuity properties that is amenable to
set-valued stability analysis. We describe this in detail next.
We start by defining the set-valued map $M$. Let $(Z, \ell) \in S \times \mathbb{Z}_{\geq 0}$. We define the $(N+1)$th component of all the elements in $M(Z, \ell)$ to be $\ell + 1$. The $i$th component of the elements in $M(Z, \ell)$ is given by one of the following possibilities. The first possibility models the case when agent $i$ does not receive any information from its neighbors. In this case, the $i$th component of the elements in $M(Z, \ell)$ is simply the $i$th component of $Z$,
$$\big(Z_1^i|_{[t_{\ell+1},\infty)}, \dots, Z_N^i|_{[t_{\ell+1},\infty)}\big). \quad (19)$$
The second possibility models the case when agent $i$ has received information (including a WARN message) from at least one neighbor: the $i$th component of the elements in $M(Z, \ell)$ is
$$\big(Y_1^i|_{[t_{\ell+1},\infty)}, \dots, Y_N^i|_{[t_{\ell+1},\infty)}\big), \quad (20)$$
where, since each agent has access to its own state at all times,
$$Y_i^i(t) = e^{A_i(t - t_{\ell+1})} Z_i^i(t_{\ell+1}) + \int_{t_{\ell+1}}^t e^{A_i(t-\tau)} B_i u_i^{\text{team}}(\tau)\, d\tau, \quad t \geq t_{\ell+1}, \quad (21a)$$
(here, with a slight abuse of notation, we use $u^{\text{team}}$ to denote the controller evaluated at $Y^i$) and, for $j \neq i$,
$$Y_j^i|_{[t_{\ell+1},\infty)} = \begin{cases} Z_j^i|_{[t_{\ell+1},\infty)}, & \text{if } i \text{ does not receive info from } j, \\ W_j^i|_{[t_{\ell+1},\infty)}, & \text{if } i \text{ receives a warning from } j, \\ R_j^s(Z_N^j|_{[t_{\ell+1},\infty)}), & \text{otherwise}, \end{cases} \quad (21b)$$
where $W_j^i(t) = \cup_{z_j \in Z_j^i(t_{\ell+1})} R_j(t - t_{\ell+1}, z_j)$ corresponds to the redefined promise (17) for $t \geq t_{\ell+1}$ as a result of the warning message.
We emphasize two properties of the set-valued map $M$. First, any trajectory of the TEAM-TRIGGERED LAW is also a trajectory of the nondeterministic dynamical system defined by $M$,
$$(Z(t_{\ell+1}), \ell + 1) \in M(Z(t_\ell), \ell).$$
Second, although the map defined by the TEAM-TRIGGERED LAW is discontinuous, the set-valued map $M$ is closed, as we show next (a set-valued map $T : X \rightrightarrows Y$ is closed if $x_k \to x$, $y_k \to y$ and $y_k \in T(x_k)$ imply that $y \in T(x)$).

Lemma V.3 (Set-valued map is closed) The set-valued map $M : S \times \mathbb{Z}_{\geq 0} \rightrightarrows S \times \mathbb{Z}_{\geq 0}$ is closed.
Proof: To show this we appeal to the fact that a set-valued map composed of a finite collection of continuous maps is closed [34, E1.9]. Given $(Z, \ell)$, the set $M(Z, \ell)$ is finitely comprised of all possible combinations of whether or not updates occur for every agent pair $i, j \in \{1, \dots, N\}$. In the case that an agent $i$ does not receive any information from its neighbors, it is trivial to show that (19) is continuous in $(Z, \ell)$ because $Z_j^i|_{[t_{\ell+1},\infty)}$ is simply the restriction of $Z_j^i|_{[t_\ell,\infty)}$ to the interval $[t_{\ell+1}, \infty)$, for each $i \in \{1, \dots, N\}$ and $j \in N(i)$. In the case that an agent $i$ does receive updated information, the above argument still holds for agents $j$ that did not send information to agent $i$. If an agent $j$ sends a warning message to agent $i$, $W_j^i|_{[t_{\ell+1},\infty)}$ is continuous in $(Z, \ell)$ by continuity of the reachable sets on their starting point. If an agent $j$ sends a new promise to agent $i$, $Y_j^i|_{[t_{\ell+1},\infty)}$ is continuous in $(Z, \ell)$ by definition of the function $R_j^s$. Finally, one can see that $Y_i^i|_{[t_{\ell+1},\infty)}$ is continuous in $(Z, \ell)$ from (21a).
We are now ready to prove Proposition V.2.

Proof of Proposition V.2: Here we resort to the LaSalle Invariance Principle for set-valued discrete-time dynamical systems [34, Theorem 1.21]. Let $W = S \times \mathbb{Z}_{\geq 0}$, which is closed and strongly positively invariant with respect to $M$. A similar argument to that in the proof of Proposition V.1 shows that the function $V$ is nonincreasing along $M$. Combining this with the fact that the set-valued map $M$ is closed (cf. Lemma V.3), the application of the LaSalle Invariance Principle implies that the trajectories of $M$ that are bounded in the first $N$ components approach the largest weakly positively invariant set contained in
$$S^* = \{(Z, \ell) \in S \times \mathbb{Z}_{\geq 0} \mid \exists\, (Z', \ell + 1) \in M(Z, \ell) \text{ such that } V(Z') = V(Z)\} = \{(Z, \ell) \in S \times \mathbb{Z}_{\geq 0} \mid L_i V^{\sup}(Z_N^i) \geq 0 \text{ for all } i \in \{1, \dots, N\}\}. \quad (22)$$
We now restrict our attention to those trajectories of $M$ that correspond to the TEAM-TRIGGERED LAW. For convenience, let $\mathrm{loc} : S \times \mathbb{Z}_{\geq 0} \to X$ be the map that extracts the true position information in $(Z, \ell)$, i.e.,
$$\mathrm{loc}(Z, \ell) = \big(Z_1^1(t_\ell), \dots, Z_N^N(t_\ell)\big).$$
Given a trajectory $\gamma$ of the TEAM-TRIGGERED LAW that satisfies all the assumptions of the statement of Proposition V.2, the bounded evolutions and uniformly bounded promises ensure that the trajectory $\gamma$ is bounded. Then, the omega limit set $\Omega(\gamma)$ is weakly positively invariant and hence is contained in $S^*$. Our objective is to show that, for any $(Z, \ell) \in \Omega(\gamma)$, we have $\mathrm{loc}(Z, \ell) \in D$. We reason by contradiction. Let $(Z, \ell) \in \Omega(\gamma)$ but suppose $\mathrm{loc}(Z, \ell) \notin D$. This means that $L_i V^{\sup}(Z_N^i) \geq 0$ for all $i \in \{1, \dots, N\}$. Take any agent $i$; by the self-triggered information updates, agent $i$ will request new information from its neighbors in at most $T_{d,\text{self}}$ seconds. This means there exists a state $(Z', \ell + \ell') \in \Omega(\gamma)$ for which agent $i$ has just received updated information from its neighbors $j \in N(i)$. Since $(Z', \ell + \ell') \in S^*$, we know $L_i V^{\sup}(Z_N^{i\prime}) \geq 0$. We also know, since information was just updated, that $Z_j^{i\prime} = \mathrm{loc}_j(Z', \ell + \ell')$ is exact for all $j \in N(i)$. But, by (11a), $L_i V^{\sup}(Z_N^{i\prime}) \leq 0$. This means that each time any agent $i$ updates its information, we must have $L_i V^{\sup}(Z_N^{i\prime}) = 0$. However, by (11b), there must exist at least one agent $i$ such that $L_i V^{\sup}(Z_N^{i\prime}) < 0$ since $\mathrm{loc}(Z', \ell + \ell') \notin D$, which yields a contradiction. Thus, for the trajectories of the TEAM-TRIGGERED LAW, $(Z, \ell) \in S^*$ implies that $\mathrm{loc}(Z, \ell) \in D$.
Given the convergence result of Proposition V.2, a termination condition for the TEAM-TRIGGERED LAW could be included via the implementation of a distributed algorithm that employs tokens identifying which agents are using safe-mode controllers, see e.g., [35], [36]. Also, according to the proof of Proposition V.2, the actual value of the event-triggered dwell time $T_{d,\text{event}}$ does not affect the convergence properties of the trajectories of the constructed discrete-time set-valued system. However, the dwell time does affect the rate of convergence of the actual continuous-time system (as a larger dwell time corresponds to more time actually elapsing between each step of the constructed discrete-time system).
Remark V.4 (Availability of a safe-mode controller) The
assumption on the availability of the safe-mode controller
plays an important role in the proof of Proposition V.2
because it provides individual agents with a way of avoiding
having a negative impact on the monotonic evolution of the
Lyapunov function. We believe this assumption can be relaxed
for dynamics that allow agents to execute maneuvers that
bring them back to their current state. Under such maneuvers,
the Lyapunov function will not evolve monotonically but, at any given time, is guaranteed to return to or below its current value at some future time. We have not pursued this approach here for simplicity and instead defer it to future work.
•
The next result states that, under the TEAM-TRIGGERED LAW with positive dwell times, the system does not exhibit Zeno behavior.
Lemma V.5 (No Zeno behavior) Under the assumptions of Proposition V.2, the network executions do not exhibit Zeno behavior.

Proof: Due to the self-triggered dwell time $T_{d,\text{self}}$, the self-triggered information request steps in Algorithm 1 guarantee that the minimum time before an agent $i$ asks its neighbors for new information is $T_{d,\text{self}} > 0$. Similarly, due to the event-triggered dwell time $T_{d,\text{event}}$, agent $i$ will never receive more than two messages (one accounts for promise information, the other for the possibility of a WARN message) from a neighbor $j$ in a period of $T_{d,\text{event}} > 0$ seconds. This means that any given agent can never receive an infinite amount of information in finite time. When new information is received, the control law (16) can only switch a maximum of two times until new information is received again. Specifically, if an agent $i$ is using the normal control law when new information is received, it may switch to the safe-mode controller at most one time until new information is received again. If instead an agent $i$ is using the safe-mode controller when new information is received, it may immediately switch to the normal control law, and then switch back to the safe-mode controller some time in the future before new information is received again. The result follows from the fact that $|N(i)|$ is finite for each $i \in \{1, \dots, N\}$.
Remark V.6 (Adaptive self-triggered dwell time) Dwell times play an important role in preventing Zeno behavior. However, a constant self-triggered dwell time throughout the network evolution might result in wasted communication effort, because some agents might reach a state where their effect on the evolution of the Lyapunov function is negligible compared to others. In such a case, the former agents could implement larger dwell times, thus decreasing communication effort, without affecting the overall performance. Next, we give an example of such an adaptive dwell time scheme. Let $t$ be a time at which agent $i \in \{1, \dots, N\}$ has just received new information from its neighbors $N(i)$. Then, the agent sets its dwell time to
$$T_{d,\text{self}}^i(t) = \max\Big\{\frac{\delta_d}{|N(i)|} \sum_{j \in N(i)} \frac{\|u_j^{**}(X_N^j(t)) - u_j^{\text{sf}}(x_j(t))\|_2}{\|u_i^{**}(X_N^i(t)) - u_i^{\text{sf}}(x_i(t))\|_2},\; \Delta_d\Big\}, \quad (23)$$
for some a priori chosen $\delta_d, \Delta_d > 0$. The intuition behind this design is the following. The value $\|u_j^{**}(X_N^j(t)) - u_j^{\text{sf}}(x_j(t))\|_2$ can be interpreted as a measure of how far agent $j$ is from reaching a point where it can no longer contribute positively to the global task. As agents near this point, they are more inclined to use the safe-mode control to stay put and hence do not require fresh information. Therefore, if agent $i$ is close to this point but its neighbors are not, (23) sets a larger self-triggered dwell time to avoid excessive requests for information. Conversely, if agent $i$ is far from this point but its neighbors are not, (23) sets a small dwell time to let the self-triggered request mechanism be the driving factor in determining when new information is needed. For agent $i$ to implement this, in addition to current state information and promises, each neighbor $j \in N(i)$ also needs to send the value of $\|u_j^{**}(X_N^j(t)) - u_j^{\text{sf}}(x_j(t))\|_2$ at time $t$. In the case that information is not received from all neighbors, agent $i$ simply uses the last computed dwell time. Section VII illustrates this adaptive scheme in simulation.
•
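A direct transcription of the adaptive rule (23) (NumPy assumed; the function name is ours, and a small guard against division by zero is added, which (23) leaves implicit):

import numpy as np

def adaptive_dwell_time(u_ss_i, u_sf_i, neighbor_gaps, delta_d, Delta_d):
    """Adaptive self-triggered dwell time (23). neighbor_gaps holds the
    values ||u**_j - u^sf_j||_2 reported by the neighbors j in N(i) at
    time t; u_ss_i and u_sf_i are agent i's own two controls at time t."""
    own_gap = max(np.linalg.norm(u_ss_i - u_sf_i), 1e-12)  # avoid /0
    ratios = [g / own_gap for g in neighbor_gaps]
    return max(delta_d * np.mean(ratios), Delta_d)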
VI. ROBUSTNESS AGAINST UNRELIABLE COMMUNICATION
This section studies the robustness of the team-triggered approach in scenarios with packet drops, delays, and communication noise. We start by introducing the possibility of packet drops in the network. For any given message an agent sends to another agent, assume there is an unknown probability $0 \leq p < 1$ that the packet is dropped and the message is never received. We also consider an unknown (possibly time-varying) communication delay $\Delta(t) \leq \bar{\Delta}$ in the network for all $t$, where $\bar{\Delta} \geq 0$ is known. In other words, if agent $j$ sends agent $i$ a message at time $t$, agent $i$ will not receive it with probability $p$, or will receive it at time $t + \Delta(t)$ with probability $1 - p$. We assume that small messages (i.e., 1-bit messages) can be sent reliably with negligible delay. This assumption is similar to the "acknowledgment" and "permission" messages used in other works, see [28], [37] and references therein. Lastly, we also account for the possibility of communication noise or quantization. We assume that messages among agents are corrupted with an error which is upper bounded by some $\bar{\omega} \geq 0$ known to the agents.
With this model, the TEAM-TRIGGERED LAW as described in Algorithm 1 does not guarantee convergence because the monotonic behavior of the Lyapunov function no longer holds. The problem occurs when an agent $j$ breaks a promise to agent $i$ at some time $t$. If this occurs, agent $i$ will operate with invalid information (due to the sources of error described above) and compute $L_i V^{\sup}(X_N^i(t'))$ (as defined in (14)) incorrectly for $t' \geq t$.
Next, we discuss how the TEAM-TRIGGERED LAW can be modified in scenarios with unreliable communication. To deal with communication noise, when an agent $i$ receives an estimated promise $\hat{X}_j^i$ from another agent $j$, it must be able to create a promise set $X_j^i$ that contains the actual promise that agent $j$ intended to send. We refer to this action as making a promise set valid. The following example shows how it can be done for the promises described in Remark IV.2.
Example VI.1 (Ball-radius promise rule with communication noise) In the scenario with bounded communication noise, agent $j$ sends the control promise conveyed through $x_j(t)$, $u_j(X_N^j(t))$, and $\delta_j$ to agent $i$ at time $t$ as defined in Remark IV.2, but $i$ receives instead $\hat{x}_j(t)$, $\hat{u}_j(X_N^j(t))$, and $\hat{\delta}_j$, where it knows that $\|x_j(t) - \hat{x}_j(t)\|_2 \leq \bar{\omega}$, $\|u_j(X_N^j(t)) - \hat{u}_j(X_N^j(t))\|_2 \leq \bar{\omega}$, and $|\delta_j - \hat{\delta}_j| \leq \bar{\delta}$, given that $\bar{\omega}$ and $\bar{\delta}$ are known a priori. To ensure that the promise agent $i$ operates with about agent $j$ contains the true promise made by $j$, agent $i$ can set
$$U_j^i[t](t') = B(\hat{u}_j(X_N^j(t)), \hat{\delta}_j + \bar{\omega} + \bar{\delta}) \cap U_j, \quad t' \geq t.$$
To create the state promise from this, $i$ would need the true state $x_j(t)$ of $j$ at time $t$. However, since only the estimate $\hat{x}_j^i(t)$ is available, we modify (9) to
$$X_j^i[t](t') = \bigcup_{y_j \in B(\hat{x}_j^i(t), \bar{\omega})} \Big\{z \in X_j \;\Big|\; \exists\, u_j : [t, t'] \to U_j \text{ with } u_j(s) \in U_j^i[t](s) \text{ for } s \in [t, t'] \text{ such that } z = e^{A_j(t'-t)} y_j + \int_t^{t'} e^{A_j(t'-\tau)} B_j u_j(\tau)\, d\tau\Big\}.$$
•
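In the scalar single-integrator setting of the earlier promise sketch, making a received ball-radius promise valid simply inflates the control ball by $\bar{\omega} + \bar{\delta}$ and the initial state by $\bar{\omega}$. A sketch under these assumptions (names hypothetical):

def make_valid(t0, y_hat, v_hat, delta_hat, omega_bar, delta_bar):
    """Inflate a received ball-radius promise so that it is guaranteed to
    contain the promise agent j actually sent: controls within
    delta_hat + omega_bar + delta_bar of v_hat, initial state within
    omega_bar of y_hat."""
    r = delta_hat + omega_bar + delta_bar   # radius of the valid control ball

    def state_promise(t):
        dt = t - t0
        # union over initial states y in [y_hat - omega_bar, y_hat + omega_bar]
        return (y_hat - omega_bar + (v_hat - r) * dt,
                y_hat + omega_bar + (v_hat + r) * dt)
    return state_promise

X = make_valid(t0=0.0, y_hat=1.0, v_hat=-0.5, delta_hat=0.1,
               omega_bar=0.05, delta_bar=0.02)
print(X(1.0))   # interval guaranteed to contain x_j(1.0)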
We deal with packet drops and communication delays through warning messages similar to the ones introduced in Section IV-D. Let an agent $j$ break its promise to agent $i$ at time $t$; then agent $j$ sends $i$ a new promise set $X_j^i[t]$ for $t' \geq t$ and a warning message WARN. Since agent $i$ only receives WARN at time $t$, the promise set $X_j^i[t]$ may not be available to agent $i$ for $t' \geq t$. If the packet is dropped, the message never comes through; if the packet is successfully transmitted, then $X_j^i[t](t')$ is only available for $t' \geq t + \Delta(t)$. In either case, we need a promise set $X_j^i[\cdot](t')$ for $t' \geq t$ that is guaranteed to contain $x_j(t')$. We do this by redefining the promise using the reachable set, similarly to (17). Note that this does not require the agents to have a synchronized global clock, as the times $t'$ and $t$ are both monitored by the receiving agent $i$; in other words, it is not necessary for the message sent by agent $j$ to be timestamped. By definition of the reachable set, the promise $X_j^i[\cdot](t')$ is guaranteed to contain $x_j(t')$ for $t' \geq t$. If at time $t + \bar{\Delta}$ agent $i$ has still not received the promise $X_j^i[t]$ from $j$, it can send agent $j$ a request REQ for a new message, at which point $j$ would send $i$ a new promise $X_j^i[t + \bar{\Delta}]$. Note that WARN is not sent in this case because the message was requested from $j$ by $i$ and was not caused by $j$ breaking a promise to $i$. The ROBUST TEAM-TRIGGERED LAW, formally presented in Algorithm 2, ensures the monotonic evolution of the Lyapunov function $V$ even in the presence of packet drops, communication delays, and communication noise.
Algorithm 2: ROBUST TEAM-TRIGGERED LAW

(Self-trigger information update)
At any time t agent i ∈ {1, ..., N} receives new promise(s) X̂_j^i[t] from neighbor(s) j ∈ N(i), agent i performs:
 1: create valid promise X_j^i[t] with respect to ω̄
 2: compute own control u_i^team(t') for t' ≥ t using (16)
 3: compute own state evolution x_i(t') for t' ≥ t using (13)
 4: compute first time t* ≥ t such that L_i V^sup(X_N^i(t*)) = 0
 5: schedule information request to neighbors in max{t* − t, T_d,self} seconds
 6: while message from j has not been received do
 7:   if current time equals t + max{t* − t, T_d,self} + k∆̄ for k ∈ Z_≥0 then
 8:     send agent j a request REQ for new information
 9:   end if
10: end while

(Respond to information request)
At any time t a neighbor j ∈ N(i) requests information, agent i performs:
 1: send new promise Y_i^j[t] = R_i^s(X_N^i[·]|_[t,∞)) to agent j

(Event-trigger information update)
At all times t, agent i performs:
 1: if there exists j ∈ N(i) such that x_i(t) ∉ Y_i^j[·](t) then
 2:   send warning message WARN to agent j
 3:   if agent i has sent a promise to j at some time t_last ∈ (t − T_d,event, t] then
 4:     schedule to send new promise Y_i^j[t_last + T_d,event] = R_i^s(X_N^i[·]|_[t_last+T_d,event,∞)) to agent j in t_last + T_d,event − t seconds
 5:   else
 6:     send new promise Y_i^j[t] = R_i^s(X_N^i[·]|_[t,∞)) to agent j
 7:   end if
 8: end if

(Respond to warning message)
At any time t agent i ∈ {1, ..., N} receives a warning message WARN from agent j ∈ N(i), agent i performs:
 1: redefine promise set X_j^i[·](t') = ∪_{x_j^0 ∈ X_j^i[·](t)} R_j(t' − t, x_j^0) for t' ≥ t
 2: while message from j has not been received do
 3:   if current time equals t + k∆̄ for k ∈ Z_≥0 then
 4:     send agent j a request REQ for new information
 5:   end if
 6: end while
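To make the event logic concrete, the following Python skeleton sketches how one agent might organize the handlers of Algorithm 2; all helper callables are hypothetical abstractions of the promise and reachability computations, not part of the formal algorithm.

```python
class RobustTeamTriggeredAgent:
    """Skeleton of one agent's event logic in Algorithm 2. The helper
    callables (make_valid, trigger_time, reach_promise) are illustrative
    names standing in for the actual promise/reachability computations."""

    def __init__(self, i, neighbors, Td_self, Td_event, delta_bar,
                 make_valid, trigger_time, reach_promise):
        self.i, self.neighbors = i, neighbors
        self.Td_self, self.Td_event = Td_self, Td_event  # dwell times
        self.delta_bar = delta_bar          # known maximum delay (REQ retry period)
        self.make_valid = make_valid        # inflate a noisy promise by omega_bar
        self.trigger_time = trigger_time    # first t* >= t with L_i V^sup = 0
        self.reach_promise = reach_promise  # reachable-set fallback promise
        self.promises = {}                  # j -> current valid promise X_j^i

    def on_promise(self, t, j, noisy_promise):
        """(Self-trigger information update), steps 1 and 4-5: validate the
        received promise and schedule the next information request."""
        self.promises[j] = self.make_valid(noisy_promise)
        t_star = self.trigger_time(t, self.promises)
        return t + max(t_star - t, self.Td_self)  # when to send the next request

    def on_reply_overdue(self, t, j):
        """Steps 6-9: while j's reply is missing, re-send REQ every delta_bar."""
        self.send_req(j)
        return t + self.delta_bar                 # next retry time

    def on_warn(self, t, j):
        """(Respond to warning message), step 1: replace j's promise by the
        reachable set grown from the last valid one, then start REQ retries."""
        self.promises[j] = self.reach_promise(self.promises[j], t)
        return t + self.delta_bar

    def send_req(self, j):
        pass  # transport-layer stub; a real implementation would transmit REQ
```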
The next result establishes the asymptotic correctness guarantees of the ROBUST TEAM-TRIGGERED LAW. In the presence of communication noise or delays, convergence can be guaranteed only to a set that contains the desired set D.

Corollary VI.2: Consider a networked cyber-physical system as described in Section II with packet drops occurring with some unknown probability 0 ≤ p < 1, messages being delayed by some known maximum delay ∆̄, and communication noise bounded by ω̄, executing the ROBUST TEAM-TRIGGERED LAW (cf. Algorithm 2) with dwell times T_d,self, T_d,event > 0 based on a continuous controller u** : ∏_{j∈{1,...,N}} P_cc(X_j) → R^m that satisfies (11) and is distributed over the communication graph G. Let
\[
D'(\bar{\Delta}, \bar{\omega}) = \Big\{ x \in X \ \Big|\ \inf_{x_N^{i\prime} \in B(x_N^i, \bar{\omega})} L_i V^{\sup}\Big( \{x_i\} \times \prod_{j \in N(i)} \bigcup_{y_j \in B(x_j^{i\prime}, \bar{\omega})} R_j(\bar{\Delta}, y_j) \Big) \ge 0 \ \text{for all } i \in \{1, \dots, N\} \Big\}. \tag{24}
\]
Then, any bounded network trajectory with uniformly bounded promises asymptotically converges to D'(∆̄, ω̄) ⊃ D with probability 1.

Proof: We begin by noting that, by equation (11b), the definition (14), and the continuity of u**, D can be written as
\[
D'(0, 0) = \Big\{ x \in X \ \Big|\ \sum_{i=1}^N \nabla_i V(x)\big( A_i x_i + B_i u_i^{**}(\{x_N^i\}) \big) \ge 0 \Big\}.
\]
One can see that D ⊂ D'(∆̄, ω̄) by noticing that, for any x ∈ D and ω̄, ∆̄ ≥ 0, no matter which point x_N^{i'} ∈ B(x_N^i, ω̄) is taken, one has x_N^i ∈ {x_i} × ∏_{j∈N(i)} ∪_{y_j∈B(x_j^{i'}, ω̄)} R_j(∆̄, y_j). To show that the bounded trajectories of the ROBUST TEAM-TRIGGERED LAW converge to D', we begin by noting that all properties of M used in the proof of Proposition V.2 still hold in the presence of packet drops, delays, and communication noise as long as the time schedule T^i is unbounded for each agent i ∈ {1,...,N}. In order for the time schedule T^i to be unbounded, each agent i must receive an infinite number of messages, and t_ℓ^i → ∞. Since packet drops have probability 0 ≤ p < 1, the probability that there is a finite number of updates for any given agent i over an infinite time horizon is 0. Thus, with probability 1, there are an infinite number of information updates for each agent. Using a similar argument to that of Lemma V.5, one can show that the positive dwell times T_d,self, T_d,event > 0 ensure that Zeno behavior does not occur, meaning that t_ℓ^i → ∞. Then, by the analysis in the proof of Proposition V.2, the bounded trajectories of M still converge to S* as defined in (22).

For a bounded evolution γ of the ROBUST TEAM-TRIGGERED LAW, we have that Ω(γ) ⊂ S* is weakly positively invariant. Note that, since agents may never have exact information about their neighbors, we can no longer leverage properties (11a) and (11b) to precisely characterize Ω(γ). We now show that for any (Z, ℓ) ∈ Ω(γ), we have loc(Z, ℓ) ∈ D'. Let (Z, ℓ) ∈ Ω(γ). This means that L_i V^sup(Z_N^i) ≥ 0 for all i ∈ {1,...,N}. Take any agent i; by the ROBUST TEAM-TRIGGERED LAW, agent i will request new information from its neighbors in at most T_d,self seconds. This means there exists a state (Z', ℓ + ℓ') ∈ Ω(γ) for which agent i has just received updated, possibly delayed, information from its neighbors j ∈ N(i). Since (Z', ℓ + ℓ') ∈ S*, we know that L_i V^sup(Z_N^{i'}) ≥ 0 for all i ∈ {1,...,N}. We also know, since information was just updated, that Z_N^{i'} ⊂ {Z_i^i} × ∏_{j∈N(i)} ∪_{y_j∈B(z_j^{i'}, ω̄)} R_j(∆̄, y_j). This means that loc(Z', ℓ + ℓ') ∈ D', and thus loc(Z, ℓ) ∈ D'.

From the proof of Corollary VI.2, one can see that the modifications made in the ROBUST TEAM-TRIGGERED LAW make the omega-limit sets of its trajectories larger than those of the TEAM-TRIGGERED LAW, resulting in D ⊂ D'. The set D' depends on the Lyapunov function V. However, the difference between D'(∆̄, ω̄) and D vanishes as ω̄ and ∆̄ vanish.

VII. SIMULATIONS

In this section we present simulations of coordination strategies derived from the team- and self-triggered approaches in a planar multi-agent formation control problem. Our starting point is the distributed coordination algorithm based on graph rigidity analyzed in [38], [39], which makes the desired network formation locally (but not globally) asymptotically stable. In this regard, the state space X of Section II corresponds to the domain of attraction of the desired equilibria and, as long as the network trajectories do not leave this set, the convergence results still hold. The local convergence of the team-triggered approach here is only an artifact of the specific example and, in fact, if the assumptions (4) are satisfied globally, then the system is globally asymptotically stabilized. The interested reader is referred to [2] for a similar study in an optimal networked deployment problem where the assumptions hold globally.

Consider 4 agents communicating over a graph which is only missing the edge (1, 3) from the complete graph. The agents seek to attain a rectangular formation with side lengths 1 and 2. Each agent has unicycle dynamics,
\[
\dot{x}_i = u_i \begin{bmatrix} \cos\theta_i \\ \sin\theta_i \end{bmatrix}, \qquad \dot{\theta}_i = v_i,
\]
where 0 ≤ u_i ≤ u_max = 5 and |v_i| ≤ v_max = 3 are the control inputs. The safe-mode controller is then simply u_i^sf ≡ 0. To compute the distributed control law, each agent computes a goal point
\[
p_i^*(x) = x_i + \sum_{j \in N(i)} \big( \|x_j - x_i\|_2 - d_{ij} \big)\, \mathrm{unit}(x_j - x_i),
\]
where d_ij is the pre-specified desired distance between agents i and j, and unit(x_j − x_i) denotes the unit vector in the direction of x_j − x_i. The control law is then given by
\[
u_i^* = \max\big\{ \min\{ k\, [\cos\theta_i\ \ \sin\theta_i]^T \cdot (p_i^*(x) - x_i),\ u_{\max} \},\ 0 \big\}, \qquad
v_i^* = \max\big\{ \min\{ k (\angle(p_i^*(x) - x_i) - \theta_i),\ v_{\max} \},\ -v_{\max} \big\},
\]
where k > 0 is a design parameter. For our simulations we set k = 150. This continuous control law essentially ensures that the position x_i moves towards p_i^*(x) when possible while the unicycle rotates its orientation towards this goal. This control law ensures that V : (R^2)^N → R_≥0 given by
\[
V(x) = \frac{1}{2} \sum_{(i,j) \in E} \big( \|x_j - x_i\|_2^2 - d_{ij}^2 \big)^2
\]
is nonincreasing along the closed-loop system, which establishes the asymptotic convergence to the desired formation.
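A minimal Python transcription of this control law could read as follows; the angle-wrapping step is our own implementation detail, and neighbors/d are assumed adjacency and desired-distance structures.

```python
import numpy as np

def goal_point(i, x, neighbors, d):
    """p_i*(x) = x_i + sum_{j in N(i)} (||x_j - x_i||_2 - d_ij) unit(x_j - x_i)."""
    p = x[i].copy()
    for j in neighbors[i]:
        diff = x[j] - x[i]
        dist = np.linalg.norm(diff)
        if dist > 1e-12:              # unit vector undefined at coincident agents
            p += (dist - d[i, j]) * diff / dist
    return p

def unicycle_control(i, x, theta, neighbors, d, k=150.0, u_max=5.0, v_max=3.0):
    """Saturated forward/angular velocities steering x_i toward p_i*(x)."""
    p = goal_point(i, x, neighbors, d)
    heading = np.array([np.cos(theta[i]), np.sin(theta[i])])
    u = float(np.clip(k * heading @ (p - x[i]), 0.0, u_max))   # 0 <= u_i <= u_max
    ang = np.arctan2(p[1] - x[i][1], p[0] - x[i][0]) - theta[i]
    ang = (ang + np.pi) % (2 * np.pi) - np.pi                  # wrap to (-pi, pi]
    v = float(np.clip(k * ang, -v_max, v_max))                 # |v_i| <= v_max
    return u, v

def lyapunov(x, edges, d):
    """V(x) = 1/2 sum_{(i,j) in E} (||x_j - x_i||_2^2 - d_ij^2)^2."""
    return 0.5 * sum((np.linalg.norm(x[j] - x[i])**2 - d[i, j]**2)**2
                     for (i, j) in edges)
```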
For the team-triggered approach, we use both static and dynamic ball-radius promise rules. The controller u^team is then defined by (16), where controller u** is given by (12) as described in Example IV.3. Note that although the agent has no forward velocity when using the safe controller, it will still rotate in place. The initial conditions are x_1(0) = (6, 10)^T, x_2(0) = (7, 3)^T, x_3(0) = (14, 8)^T, and x_4(0) = (7, 13)^T, and θ_i(0) = π/2 for all i. We begin by simulating the team-triggered approach using fixed dwell times of T_d,self = 0.3 and T_d,event = 0.003 and the static ball-radius promise of Remark IV.2 with the same radius δ = 1 for all agents. Figure 1 shows the trajectories of the TEAM-TRIGGERED LAW.
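Assuming unicycle_control from the previous sketch is in scope, a hypothetical driver for this setup (with full, continuous information; the triggered communication logic is omitted) might look as follows. The assignment of desired distances d_ij to the edges is our own reconstruction of one consistent 1-by-2 rectangle over the stated graph.

```python
import numpy as np

def simulate(T=30.0, dt=1e-3, k=150.0, u_max=5.0, v_max=3.0):
    """Continuous-information baseline for the Figure 1 setup: complete graph
    on 4 agents minus the edge between agents 1 and 3 (indices 0 and 2 here),
    with one consistent 1-by-2 rectangle as the target formation."""
    edges = [(0, 1), (0, 3), (1, 2), (1, 3), (2, 3)]
    dists = [1.0, 2.0, 2.0, np.sqrt(5.0), 1.0]   # sides 1 and 2, one diagonal
    d = np.zeros((4, 4))
    for (i, j), dist in zip(edges, dists):
        d[i, j] = d[j, i] = dist
    neighbors = {i: [j for j in range(4) if d[i, j] > 0] for i in range(4)}
    x = np.array([[6., 10.], [7., 3.], [14., 8.], [7., 13.]])  # x_i(0)
    theta = np.full(4, np.pi / 2)                              # theta_i(0)
    for _ in range(int(T / dt)):
        # compute all controls first, then update, so agents act simultaneously
        controls = [unicycle_control(i, x, theta, neighbors, d, k, u_max, v_max)
                    for i in range(4)]
        for i, (u, v) in enumerate(controls):
            x[i] += dt * u * np.array([np.cos(theta[i]), np.sin(theta[i])])
            theta[i] += dt * v
    return x, theta
```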
Fig. 1. Trajectories of an execution of the TEAM - TRIGGERED LAW with
fixed dwell times and promises. The initial and final condition of each agent
is denoted by an ‘x’ and an ‘o’, respectively.
Next, we illustrate the role that the tightness of promises has
on the network performance. With the notation of Remark IV.2
for the static ball-radius rule, let λ = δ/(2u_max). Note that when
λ = 0, the promise generated by (10) is a singleton, i.e., an
exact promise. On the other hand, when λ = 1, the promise
generated by (10) contains the reachable set, corresponding
to no actual commitment being made (i.e., the self-triggered
approach). Figure 3 compares the value of the Lyapunov
function after a fixed amount of time (30 seconds) and the
total number of messages sent Ncomm between agents by
this time for varying tightness of promises. The dwell times
here are fixed at Td,self = 0.3 and Td,event = 0.003. Note
that a suitable choice of λ helps greatly reduce the amount
of communication compared to the self-triggered approach
(λ = 1) while maintaining a similar convergence rate.
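In code, the correspondence between the tightness parameter and the static promise radius is simply the following (a sketch; the default u_max = 5 matches the simulation setup):

```python
def static_promise_radius(lam, u_max=5.0):
    """Static ball-radius tightness: lambda = delta / (2 * u_max), so
    delta = 2 * lambda * u_max. lambda = 0 yields a singleton (exact)
    promise; lambda = 1 yields a promise containing the whole reachable
    set, i.e., the self-triggered approach."""
    assert 0.0 <= lam <= 1.0
    return 2.0 * lam * u_max
```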
Finally, we demonstrate the added benefits of using adaptive promises and dwell times. Figure 4(a) compares the
total number of messages sent in the self-triggered approach
and the team-triggered approaches with fixed promises and
dwell times (FPFD), fixed promises and adaptive dwell times
(FPAD), adaptive promises and fixed dwell times (APFD), and
adaptive promises and dwell times (APAD). The parameters
of the adaptive dwell time used in (23) are δd = 0.15
and ∆d = 0.3. For agent j ∈ {1, . . . , 4}, the radius δj
of the dynamic ball-radius rule of Remark IV.2 is δ_j(t) = 0.50 ‖u_j^{**}(X_N^j(t)) − u_j^{sf}(x_j(t))‖_2 + 10^{−6}. This plot shows the
advantage of the team-triggered approach in terms of required
communication over the self-triggered one and also shows
the additional benefits of implementing the adaptive promises
and dwell time. This is because by using the adaptive dwell
time, agents decide to wait longer periods for new information
while their neighbors are still moving. By using the adaptive
promises, as agents near convergence, they are able to make
increasingly tighter promises, which allows them to request
information from each other less frequently. As Figure 4(b)
shows, the network performance is not compromised despite
the reduction in communication.
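The dynamic rule is equally simple to state in code (a sketch; the constant names are ours):

```python
import numpy as np

def adaptive_promise_radius(u_star, u_safe, c=0.50, eps=1e-6):
    """Dynamic ball-radius rule used in the APFD/APAD executions:
    delta_j(t) = 0.50 * ||u_j**(X_N^j(t)) - u_j^sf(x_j(t))||_2 + 1e-6.
    Near convergence u** approaches the safe-mode control, so promises
    tighten automatically and information requests become less frequent."""
    return c * np.linalg.norm(np.asarray(u_star) - np.asarray(u_safe)) + eps
```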
To compare the team- and self-triggered approaches, we let N_S^i be the number of times i has requested new information and thus has received a message from each one of its neighbors, and N_E^i be the number of messages i has sent to a neighboring agent because it decided to break its promise. The total number of messages for an execution is N_comm = Σ_{i=1}^{4} (|N(i)| N_S^i + N_E^i). Figure 2 compares the number of
required communications in both approaches. Remarkably, for
this specific example, the team-triggered approach outperforms
the self-triggered approach in terms of required communication without sacrificing any performance in terms of time to
convergence (the latter is depicted through the evolution of
the Lyapunov function in Figure 4(b) below). Less overall
communication has an important impact on reducing network
load. In Figure 2(a), we see that very quickly all agents are requesting information as often as they can (as restricted by the self-triggered dwell time), due to the conservative nature of the self-triggered time computations. In the execution of the TEAM-TRIGGERED LAW in Figure 2(b), we see that the agents are requesting information from one another less frequently. Figure 2(c) shows that agents were required to break a few promises early on in the execution.

Fig. 2. Number of self-triggered requests made by each agent in an execution of the (a) self-triggered approach and (b) team-triggered approach with fixed dwell times and promises. For the latter execution, (c) depicts the number of event-triggered messages sent (broken promises) by each agent.
Fig. 4. Plots of (a) the total number of messages sent and (b) the evolution of the Lyapunov function V for executions of the self-triggered approach and the team-triggered approaches with fixed promises and dwell times (FPFD), fixed promises and adaptive dwell times (FPAD), adaptive promises and fixed dwell times (APFD), and adaptive promises and dwell times (APAD).
Fig. 3. Plots of (a) the value of the Lyapunov function at a fixed time (30
sec) and (b) the total number of messages exchanged in the network by this
time for the team-triggered approach with varying tightness of promises λ.
VIII. CONCLUSIONS
We have proposed a novel approach, termed team-triggered,
that combines ideas from event- and self-triggered control
for the implementation of distributed coordination strategies
for networked cyber-physical systems. Our approach is based
on agents making promises to each other about their future
states. If a promise is broken, this triggers an event where
the corresponding agent provides a new commitment. As a
result, the information available to the agents is set-valued
and can be used to schedule when in the future further
updates are needed. We have provided a formal description
and analysis of team-triggered coordination strategies and
have also established robustness guarantees in scenarios where
communication is unreliable. The proposed approach opens up numerous avenues for future research. Among them, we
highlight the robustness under disturbances and sensor noise,
more general models for individual agents, the design of
team-triggered implementations that guarantee the invariance
of a desired set in distributed scenarios, the relaxation of
the availability of the safe-mode control via controllers that
allow agents to execute maneuvers that bring them back to
their current state, relaxing the requirement on the negative
semidefiniteness of the derivative of the Lyapunov function
along the evolution of each individual agent, methods for the
systematic design of controllers that operate on set-valued
information models, understanding the implementation tradeoffs in the design of promise rules, analytic guarantees on
the performance improvements with respect to self-triggered
strategies, and the impact of evolving topologies on the
generation of promises.
REFERENCES
[1] C. Nowzari and J. Cortés, “Team-triggered coordination of networked systems,” in American Control Conference, (Washington, D.C.),
pp. 3827–3832, June 2013.
[2] C. Nowzari and J. Cortés, “Robust team-triggered coordination of networked cyber-physical systems,” in Control of Cyber-Physical Systems
(D. C. Tarraf, ed.), vol. 449 of Lecture Notes in Control and Information
Sciences, pp. 317–336, New York: Springer, 2013.
[3] K. D. Kim and P. R. Kumar, “Cyberphysical systems: A perspective
at the centennial,” Proceedings of the IEEE, vol. 100, no. Special
Centennial Issue, pp. 1287–1308, 2012.
[4] J. Sztipanovits, X. Koutsoukos, G. Karsai, N. Kottenstette, P. Antsaklis,
V. Gupta, B. Goodwine, J. Baras, and S. Wang, “Toward a science of
cyberphysical system integration,” Proceedings of the IEEE, vol. 100,
no. 1, pp. 29–44, 2012.
[5] D. Hristu and W. Levine, Handbook of Networked and Embedded
Control Systems. Boston, MA: Birkhäuser, 2005.
[6] K. J. Åström and B. Wittenmark, Computer Controlled Systems: Theory
and Design. Englewood Cliffs, NJ: Prentice Hall, 3rd ed., 1996.
[7] P. Wan and M. D. Lemmon, “Event-triggered distributed optimization
in sensor networks,” in Symposium on Information Processing of Sensor
Networks, (San Francisco, CA), pp. 49–60, 2009.
[8] K. J. Åström and B. M. Bernhardsson, “Comparison of Riemann and
Lebesgue sampling for first order stochastic systems,” in IEEE Conf. on
Decision and Control, (Las Vegas, NV), pp. 2011–2016, Dec. 2002.
[9] P. Tabuada, “Event-triggered real-time scheduling of stabilizing control tasks,” IEEE Transactions on Automatic Control, vol. 52, no. 9,
pp. 1680–1685, 2007.
[10] W. P. M. H. Heemels, J. H. Sandee, and P. P. J. van den Bosch, “Analysis
of event-driven controllers for linear systems,” International Journal of
Control, vol. 81, no. 4, pp. 571–590, 2008.
[11] M. Velasco, P. Marti, and J. M. Fuertes, “The self triggered task model for real-time control systems,” in Proceedings of the 24th IEEE Real-Time Systems Symposium, pp. 67–70, 2003.
[12] R. Subramanian and F. Fekri, “Sleep scheduling and lifetime maximization in sensor networks,” in Symposium on Information Processing of
Sensor Networks, (New York, NY), pp. 218–225, 2006.
[13] A. Anta and P. Tabuada, “To sample or not to sample: self-triggered
control for nonlinear systems,” IEEE Transactions on Automatic Control,
vol. 55, no. 9, pp. 2030–2042, 2010.
[14] M. Mazo Jr. and P. Tabuada, “Decentralized event-triggered control over
wireless sensor/actuator networks,” IEEE Transactions on Automatic
Control, vol. 56, no. 10, pp. 2456–2461, 2011.
[15] X. Wang and N. Hovakimyan, “L1 adaptive control of event-triggered
networked systems,” in American Control Conference, (Baltimore, MD),
pp. 2458–2463, 2010.
[16] M. C. F. Donkers and W. P. M. H. Heemels, “Output-based event-triggered control with guaranteed L∞-gain and improved and decentralised event-triggering,” IEEE Transactions on Automatic Control,
vol. 57, no. 6, pp. 1362–1376, 2012.
[17] D. V. Dimarogonas, E. Frazzoli, and K. H. Johansson, “Distributed
event-triggered control for multi-agent systems,” IEEE Transactions on
Automatic Control, vol. 57, no. 5, pp. 1291–1297, 2012.
[18] G. Shi and K. H. Johansson, “Multi-agent robust consensus-part II:
application to event-triggered coordination,” in IEEE Conf. on Decision
and Control, (Orlando, FL), pp. 5738–5743, Dec. 2011.
[19] X. Meng and T. Chen, “Event based agreement protocols for multi-agent
networks,” Automatica, vol. 49, no. 7, pp. 2125–2132, 2013.
[20] M. Mazo Jr. and P. Tabuada, “On event-triggered and self-triggered
control over sensor/actuator networks,” in IEEE Conf. on Decision and
Control, (Cancun, Mexico), pp. 435–440, 2008.
[21] Y. Fan, G. Feng, Y. Wang, and C. Song, “Distributed event-triggered
control of multi-agent systems with combinational measurements,” Automatica, vol. 49, no. 2, pp. 671–675, 2013.
[22] A. Eqtami, D. V. Dimarogonas, and K. J. Kyriakopoulos, “Event-triggered strategies for decentralized model predictive controllers,” in
IFAC World Congress, (Milano, Italy), Aug. 2011.
[23] E. Garcia and P. J. Antsaklis, “Model-based event-triggered control
for systems with quantization and time-varying network delays,” IEEE
Transactions on Automatic Control, vol. 58, no. 2, pp. 422–434, 2013.
[24] W. P. M. H. Heemels and M. C. F. Donkers, “Model-based periodic
event-triggered control for linear systems,” Automatica, vol. 49, no. 3,
pp. 698–711, 2013.
[25] C. Nowzari and J. Cortés, “Self-triggered coordination of robotic networks for optimal deployment,” Automatica, vol. 48, no. 6, pp. 1077–
1087, 2012.
[26] X. Wang and M. D. Lemmon, “Event-triggered broadcasting across distributed networked control systems,” in American Control Conference, (Seattle, WA), pp. 3139–3144, June 2008.
[27] M. Zhong and C. G. Cassandras, “Asynchronous distributed optimization with event-driven communication,” IEEE Transactions on Automatic Control, vol. 55, no. 12, pp. 2735–2750, 2010.
[28] X. Wang and M. D. Lemmon, “Event-triggering in distributed networked control systems,” IEEE Transactions on Automatic Control, vol. 56, no. 3, pp. 586–601, 2011.
[29] G. S. Seyboth, D. V. Dimarogonas, and K. H. Johansson, “Event-based broadcasting for multi-agent average consensus,” Automatica, vol. 49, no. 1, pp. 245–252, 2013.
[30] M. Althoff, C. Le Guernic, and B. H. Krogh, “Reachable set computation for uncertain time-varying linear systems,” in Proceedings of the 14th International Conference on Hybrid Systems: Computation and Control, (Chicago, IL), pp. 93–102, 2011.
[31] G. Frehse, “PHAVer: algorithmic verification of hybrid systems past HyTech,” in Hybrid Systems: Computation and Control (M. Morari and L. Thiele, eds.), vol. 3414 of Lecture Notes in Computer Science, pp. 258–273, Heidelberg, Germany: Springer, 2005.
[32] F. Blanchini and S. Miani, Set-Theoretic Methods in Control. Boston, MA: Birkhäuser, 2008.
[33] J. Lin, A. S. Morse, and B. D. O. Anderson, “The multi-agent rendezvous problem. Part 2: The asynchronous case,” SIAM Journal on Control and Optimization, vol. 46, no. 6, pp. 2120–2147, 2007.
[34] F. Bullo, J. Cortés, and S. Martínez, Distributed Control of Robotic Networks. Applied Mathematics Series, Princeton University Press, 2009. Electronically available at http://coordinationbook.info.
[35] N. A. Lynch, Distributed Algorithms. Morgan Kaufmann, 1997.
[36] D. Peleg, Distributed Computing. A Locality-Sensitive Approach. Monographs on Discrete Mathematics and Applications, SIAM, 2000.
[37] M. Guinaldo, D. Lehmann, J. S. Moreno, S. Dormido, and K. H. Johansson, “Distributed event-triggered control with network delays and packet losses,” in IEEE Conf. on Decision and Control, (Hawaii, USA), pp. 1–6, Dec. 2012.
[38] L. Krick, M. E. Broucke, and B. Francis, “Stabilization of infinitesimally rigid formations of multi-robot networks,” International Journal of Control, vol. 82, no. 3, pp. 423–439, 2009.
[39] F. Dörfler and B. Francis, “Geometric analysis of the formation problem for autonomous robots,” IEEE Transactions on Automatic Control, vol. 55, no. 10, pp. 2379–2384, 2010.